Science.gov

Sample records for accuracy assessments performed

  1. Teacher Compliance and Accuracy in State Assessment of Student Motor Skill Performance

    ERIC Educational Resources Information Center

    Hall, Tina J.; Hicklin, Lori K.; French, Karen E.

    2015-01-01

    Purpose: The purpose of this study was to investigate teacher compliance with state mandated assessment protocols and teacher accuracy in assessing student motor skill performance. Method: Middle school teachers (N = 116) submitted eighth grade student motor skill performance data from 318 physical education classes to a trained monitoring…

  2. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

    The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...

  3. Assessment of the relationship between lesion segmentation accuracy and computer-aided diagnosis scheme performance

    NASA Astrophysics Data System (ADS)

    Zheng, Bin; Pu, Jiantao; Park, Sang Cheol; Zuley, Margarita; Gur, David

    2008-03-01

    In this study we randomly selected 250 malignant and 250 benign mass regions as a training dataset. The boundary contours of these regions were manually identified and marked. Twelve image features were computed for each region. An artificial neural network (ANN) was trained as a classifier. To select a specific testing dataset, we applied a topographic multi-layer region growth algorithm to detect boundary contours of 1,903 mass regions in an initial pool of testing regions. All processed regions were sorted based on a size difference ratio between manual and automated segmentation. We selected a testing dataset involving 250 malignant and 250 benign mass regions with larger size difference ratios. Using the area under the ROC curve (A_Z value) as the performance index we investigated the relationship between the accuracy of mass segmentation and the performance of a computer-aided diagnosis (CAD) scheme. CAD performance degrades as the size difference ratio increases. Then, we developed and tested a hybrid region growth algorithm that combined the topographic region growth with an active contour approach. In this hybrid algorithm, the boundary contour detected by the topographic region growth is used as the initial contour of the active contour algorithm. The algorithm iteratively searches for the optimal region boundaries. A CAD likelihood score of the growth region being a true-positive mass is computed in each iteration. The region growth is automatically terminated once the first maximum CAD score is reached. This hybrid region growth algorithm reduces the size difference ratios between two areas segmented automatically and manually to less than +/-15% for all testing regions and the testing A_Z value increases from 0.63 to 0.90. The results indicate that CAD performance heavily depends on the accuracy of mass segmentation. In order to achieve robust CAD performance, reducing lesion segmentation error is important.
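    As a rough illustration of the performance index used in this record, the sketch below computes an A_Z value (area under the ROC curve) from two groups of classifier scores via the rank-sum identity, together with the size difference ratio between automated and manual segmentation areas. The function names, score distributions and numbers are hypothetical, not taken from the study.

```python
import numpy as np

def auc_az(scores_pos, scores_neg):
    """A_Z (area under the ROC curve) via the rank-sum (Mann-Whitney U) identity."""
    scores = np.concatenate([scores_pos, scores_neg])
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):                 # average ranks over ties
        ranks[scores == s] = ranks[scores == s].mean()
    n_pos, n_neg = len(scores_pos), len(scores_neg)
    u = ranks[:n_pos].sum() - n_pos * (n_pos + 1) / 2.0
    return u / (n_pos * n_neg)

def size_difference_ratio(area_auto, area_manual):
    """Relative difference between automated and manual segmentation areas."""
    return (area_auto - area_manual) / area_manual

# hypothetical CAD likelihood scores for malignant (positive) and benign regions
rng = np.random.default_rng(0)
print(f"A_Z = {auc_az(rng.normal(0.7, 0.15, 250), rng.normal(0.45, 0.15, 250)):.2f}")
print(f"size difference ratio = {size_difference_ratio(118.0, 104.0):.2f}")
```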

  4. Increased Throwing Accuracy Improves Children's Catching Performance in a Ball-Catching Task from the Movement Assessment Battery (MABC-2)

    PubMed Central

    Dirksen, Tim; De Lussanet, Marc H. E.; Zentgraf, Karen; Slupinski, Lena; Wagner, Heiko

    2016-01-01

    The Movement Assessment Battery for Children (MABC-2) is a functional test for identifying deficits in the motor performance of children. The test contains a ball-catching task that requires the children to catch a self-thrown ball with one hand. As the task can be executed with a variety of different catching strategies, it is assumed that the task success can also vary considerably. Even though it is not clear whether the performance merely depends on the catching skills or also to some extent on the throwing skills, the MABC-2 takes into account only the movement outcome. Therefore, the purpose of the current study was to examine (1) to what extent the throwing accuracy has an effect on the children's catching performance and (2) to what extent the throwing accuracy influences their choice of catching strategy. In line with the test manual, the children's catching performance was quantified on the basis of the number of correctly caught balls. The throwing accuracy and the catching strategy were quantified by applying a kinematic analysis to the ball's trajectory and the hand movements. Based on linear regression analyses, we then investigated the relation between throwing accuracy, catching performance and catching strategy. The results show that an increased throwing accuracy is significantly correlated with an increased catching performance. Moreover, a higher throwing accuracy is significantly correlated with a longer duration of the hand on the ball's parabola, which indicates that throwing the ball more accurately could enable the children to effectively reduce the requirements on temporal precision. As the children's catching performance and their choice of catching strategy in the ball-catching task of the MABC-2 are substantially determined by their throwing accuracy, the test evaluation should not be based on the movement outcome alone, but should also take into account the children's throwing performance. Our findings could be of particular value for the

  5. Landsat classification accuracy assessment procedures

    USGS Publications Warehouse

    Mead, R. R.; Szajgin, John

    1982-01-01

    A working conference was held in Sioux Falls, South Dakota, 12-14 November 1980, dealing with Landsat classification accuracy assessment procedures. Thirteen formal presentations were made on three general topics: (1) sampling procedures, (2) statistical analysis techniques, and (3) examples of projects which included accuracy assessment and the associated costs, logistical problems, and value of the accuracy data to the remote sensing specialist and the resource manager. Nearly twenty conference attendees participated in two discussion sessions addressing various issues associated with accuracy assessment. This paper presents an account of the accomplishments of the conference.

  6. An Accuracy--Response Time Capacity Assessment Function that Measures Performance against Standard Parallel Predictions

    ERIC Educational Resources Information Center

    Townsend, James T.; Altieri, Nicholas

    2012-01-01

    Measures of human efficiency under increases in mental workload or attentional limitations are vital in studying human perception, cognition, and action. Assays of efficiency as workload changes have typically been confined to either reaction times (RTs) or accuracy alone. Within the realm of RTs, a nonparametric measure called the "workload…

  7. An accuracy-response time capacity assessment function that measures performance against standard parallel predictions.

    PubMed

    Townsend, James T; Altieri, Nicholas

    2012-07-01

    Measures of human efficiency under increases in mental workload or attentional limitations are vital in studying human perception, cognition, and action. Assays of efficiency as workload changes have typically been confined to either reaction times (RTs) or accuracy alone. Within the realm of RTs, a nonparametric measure called the workload capacity coefficient has been employed in many studies (Townsend & Nozawa, 1995). However, the contribution of correct versus incorrect responses has been unavailable in that context. A nonparametric statistic that is capable of simultaneously taking into account accuracy as well as RTs would be highly useful. This theoretical study develops such a tool for two important decisional stopping rules. Preliminary data from a simple visual identification study illustrate one potential application. PMID:22775497

  8. Future dedicated Venus-SGG flight mission: Accuracy assessment and performance analysis

    NASA Astrophysics Data System (ADS)

    Zheng, Wei; Hsu, Houtse; Zhong, Min; Yun, Meijuan

    2016-01-01

    This study concentrates principally on the systematic requirements analysis for the future dedicated Venus-SGG (spacecraft gravity gradiometry) flight mission in China with respect to the matching measurement accuracies of the spacecraft-based scientific instruments and the orbital parameters of the spacecraft. Firstly, we created and proved the single and combined analytical error models of the cumulative Venusian geoid height influenced by the gravity gradient error of the spacecraft-borne atom-interferometer gravity gradiometer (AIGG) and the orbital position error and orbital velocity error tracked by the deep space network (DSN) on the Earth station. Secondly, the ultra-high-precision spacecraft-borne AIGG is well suited to making a significant contribution to globally mapping the Venusian gravitational field and modeling the geoid with unprecedented accuracy and spatial resolution when weighing the advantages and disadvantages among the electrostatically suspended gravity gradiometer, the superconducting gravity gradiometer and the AIGG. Finally, the future dedicated Venus-SGG spacecraft should adopt the optimal matching accuracy indices consisting of 3 × 10⁻¹³/s² in gravity gradient, 10 m in orbital position and 8 × 10⁻⁴ m/s in orbital velocity, and the preferred orbital parameters comprising an orbital altitude of 300 ± 50 km, an observation time of 60 months and a sampling interval of 1 s.

  9. Awareness of Memory Ability and Change: (In)Accuracy of Memory Self-Assessments in Relation to Performance

    PubMed Central

    Rickenbach, Elizabeth Hahn; Agrigoroaei, Stefan; Lachman, Margie E.

    2015-01-01

    Little is known about subjective assessments of memory abilities and decline among middle-aged adults or their association with objective memory performance in the general population. In this study we examined self-ratings of memory ability and change in relation to episodic memory performance in two national samples of middle-aged and older adults from the Midlife in the United States study (MIDUS II in 2005-06) and the Health and Retirement Study (HRS; every two years from 2002 to 2012). MIDUS (Study 1) participants (N=3,581) rated their memory compared to others their age and to themselves five years ago; HRS (Study 2) participants (N=14,821) rated their current memory and their memory compared to two years ago, with up to six occasions of longitudinal data over ten years. In both studies, episodic memory performance was the total number of words recalled in immediate and delayed conditions. When controlling for demographic and health correlates, self-ratings of memory abilities, but not subjective change, were related to performance. We examined accuracy by comparing subjective and objective memory ability and change. More than one third of the participants across the studies had self-assessments that were inaccurate relative to their actual level of performance and change, and accuracy differed as a function of demographic and health factors. Further understanding of self-awareness of memory abilities and change beginning in midlife may be useful for identifying early warning signs of decline, with implications regarding policies and practice for early detection and treatment of cognitive impairment. PMID:25821529

  10. Accuracy assessment system and operation

    NASA Technical Reports Server (NTRS)

    Pitts, D. E.; Houston, A. G.; Badhwar, G.; Bender, M. J.; Rader, M. L.; Eppler, W. G.; Ahlers, C. W.; White, W. P.; Vela, R. R.; Hsu, E. M. (Principal Investigator)

    1979-01-01

    The accuracy and reliability of LACIE estimates of wheat production, area, and yield are determined at regular intervals throughout the year by the accuracy assessment subsystem, which also investigates the various LACIE error sources, quantifies the errors, and relates them to their causes. Timely feedback of these error evaluations to the LACIE project was the only mechanism by which improvements in the crop estimation system could be made during the short 3-year experiment.

  11. A Comparative Analysis of Diagnostic Accuracy of Focused Assessment With Sonography for Trauma Performed by Emergency Medicine and Radiology Residents

    PubMed Central

    Zamani, Majid; Masoumi, Babak; Esmailian, Mehrdad; Habibi, Amin; Khazaei, Mehdi; Mohammadi Esfahani, Mohammad

    2015-01-01

    Background: Focused assessment with sonography in trauma (FAST) is a method for prompt detection of abdominal free fluid in patients with abdominal trauma. Objectives: This study was conducted to compare the diagnostic accuracy of FAST performed by emergency medicine residents (EMRs) and radiology residents (RRs) in detecting peritoneal free fluids. Patients and Methods: Patients triaged in the emergency department with blunt abdominal trauma, high energy trauma, and multiple traumas underwent a FAST examination by EMRs and RRs with the same techniques to obtain the standard views. Ultrasound findings for free fluid in the peritoneal cavity for each patient (positive/negative) were compared with the results of computed tomography, operative exploration, or observation as the final outcome. Results: A total of 138 patients were included in the final analysis. Good diagnostic agreement was noted between the results of FAST scans performed by EMRs and RRs (κ = 0.701, P < 0.001), also between the results of EMRs-performed FAST and the final outcome (κ = 0.830, P < 0.001), and finally between the results of RRs-performed FAST and final outcome (κ = 0.795, P < 0.001). No significant differences were noted between EMRs- and RRs-performed FASTs regarding sensitivity (84.6% vs 84.6%), specificity (98.4% vs 97.6%), positive predictive value (84.6% vs 84.6%), and negative predictive value (98.4% vs 98.4%). Conclusions: Trained EMRs, like their fellow RRs, have the ability to perform FAST scans with high diagnostic value in patients with blunt abdominal trauma. PMID:26756009
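    The diagnostic-accuracy measures and Cohen's kappa quoted in this record can all be derived from a 2x2 table of test results against the final outcome. The sketch below uses illustrative counts, not the study's raw data; the formulas are the standard ones.

```python
# Hypothetical 2x2 counts comparing a resident-performed FAST result (rows)
# with the final outcome (columns); values are illustrative only.
tp, fp, fn, tn = 11, 2, 2, 123

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)          # positive predictive value
npv = tn / (tn + fn)          # negative predictive value

# Cohen's kappa: observed agreement corrected for chance agreement
n = tp + fp + fn + tn
p_observed = (tp + tn) / n
p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
kappa = (p_observed - p_chance) / (1 - p_chance)

print(sensitivity, specificity, ppv, npv, round(kappa, 3))
```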

  12. Classification Accuracy of Oral Reading Fluency and Maze in Predicting Performance on Large-Scale Reading Assessments

    ERIC Educational Resources Information Center

    Decker, Dawn M.; Hixson, Michael D.; Shaw, Amber; Johnson, Gloria

    2014-01-01

    The purpose of this study was to examine whether using a multiple-measure framework yielded better classification accuracy than oral reading fluency (ORF) or maze alone in predicting pass/fail rates for middle-school students on a large-scale reading assessment. Participants were 178 students in Grades 7 and 8 from a Midwestern school district.…

  13. PLÉIADES Project: Assessment of Georeferencing Accuracy, Image Quality, Pansharpening Performance and DSM/DTM Quality

    NASA Astrophysics Data System (ADS)

    Topan, Hüseyin; Cam, Ali; Özendi, Mustafa; Oruç, Murat; Jacobsen, Karsten; Taşkanat, Talha

    2016-06-01

    Pléiades 1A and 1B are twin optical satellites of the Optical and Radar Federated Earth Observation (ORFEO) program jointly run by France and Italy. They are Europe's first satellites with sub-meter resolution. Airbus DS (formerly Astrium Geo) runs a MyGIC (formerly Pléiades Users Group) program to validate Pléiades images worldwide for various application purposes. The authors conduct three projects: one within this program, the second supported by the BEU Scientific Research Project Program, and the third supported by TÜBİTAK. Georeferencing accuracy, image quality, pansharpening performance and Digital Surface Model/Digital Terrain Model (DSM/DTM) quality are investigated in these projects. For these purposes, triplet panchromatic (50 cm Ground Sampling Distance (GSD)) and VNIR (2 m GSD) Pléiades 1A images were investigated over the Zonguldak test site (Turkey), which is urbanised, mountainous and covered by dense forest. The georeferencing accuracy was estimated with a standard deviation in X and Y (SX, SY) in the range of 0.45 m by bias corrected Rational Polynomial Coefficient (RPC) orientation, using ~170 Ground Control Points (GCPs). 3D standard deviations of ±0.44 m in X, ±0.51 m in Y, and ±1.82 m in Z have been reached in spite of the very narrow angle of convergence by bias corrected RPC orientation. The image quality was also investigated with respect to effective resolution, Signal to Noise Ratio (SNR) and blur coefficient. The effective resolution was estimated with a factor slightly below 1.0, meaning that the image quality corresponds to the nominal resolution of 50 cm. The blur coefficients were between 0.39 and 0.46 for the triplet panchromatic images, indicating a satisfying image quality. SNR is in the range of other comparable spaceborne images, which may be caused by de-noising of Pléiades images. The pansharpened images were generated by various methods, and are validated by most common statistical

  14. Ground Truth Sampling and LANDSAT Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Robinson, J. W.; Gunther, F. J.; Campbell, W. J.

    1982-01-01

    It is noted that the key factor in any accuracy assessment of remote sensing data is the method used for determining the ground truth, independent of the remote sensing data itself. The sampling and accuracy procedures developed for a nuclear power plant siting study are described. The purpose of the sampling procedure was to provide data for developing supervised classifications for two study sites and for assessing the accuracy of that and the other procedures used. The purpose of the accuracy assessment was to allow the comparison of the cost and accuracy of various classification procedures as applied to various data types.

  15. Arizona Vegetation Resource Inventory (AVRI) accuracy assessment

    USGS Publications Warehouse

    Szajgin, John; Pettinger, L.R.; Linden, D.S.; Ohlen, D.O.

    1982-01-01

    A quantitative accuracy assessment was performed for the vegetation classification map produced as part of the Arizona Vegetation Resource Inventory (AVRI) project. This project was a cooperative effort between the Bureau of Land Management (BLM) and the Earth Resources Observation Systems (EROS) Data Center. The objective of the accuracy assessment was to estimate (with a precision of ±10 percent at the 90 percent confidence level) the commission error in each of the eight level II hierarchical vegetation cover types. A stratified two-phase (double) cluster sample was used. Phase I consisted of 160 photointerpreted plots representing clusters of Landsat pixels, and phase II consisted of ground data collection at 80 of the phase I cluster sites. Ground data were used to refine the phase I error estimates by means of a linear regression model. The classified image was stratified by assigning each 15-pixel cluster to the stratum corresponding to the dominant cover type within each cluster. This method is known as stratified plurality sampling. Overall error was estimated to be 36 percent with a standard error of 2 percent. Estimated error for individual vegetation classes ranged from a low of 10 percent ±6 percent for evergreen woodland to 81 percent ±7 percent for cropland and pasture. Total cost of the accuracy assessment was $106,950 for the one-million-hectare study area. The combination of the stratified plurality sampling (SPS) method of sample allocation with double sampling provided the desired estimates within the required precision levels. The overall accuracy results confirmed that highly accurate digital classification of vegetation is difficult to perform in semiarid environments, due largely to the sparse vegetation cover. Nevertheless, these techniques show promise for providing more accurate information than is presently available for many BLM-administered lands.
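    A minimal sketch of the double-sampling (regression) estimator that underlies this kind of two-phase design: phase-I photointerpreted error rates are refined with phase-II ground data. All numbers, variable names and the simulated relationship are hypothetical, not the AVRI data.

```python
import numpy as np

def double_sample_regression_estimate(x_phase1, x_phase2, y_phase2):
    """Regression (double sampling) estimator: adjust the phase-II mean of the
    ground-verified error rate y using the larger phase-I sample of the
    photointerpreted error rate x (phase II is a subsample of phase I)."""
    slope = np.polyfit(x_phase2, y_phase2, 1)[0]            # slope of y on x
    return y_phase2.mean() + slope * (x_phase1.mean() - x_phase2.mean())

# hypothetical per-cluster commission-error proportions
rng = np.random.default_rng(1)
x1 = rng.uniform(0.1, 0.6, 160)                 # 160 photointerpreted clusters
idx = rng.choice(160, 80, replace=False)        # 80 ground-visited clusters
x2 = x1[idx]
y2 = np.clip(x2 + rng.normal(0, 0.05, 80), 0, 1)
print(f"regression-adjusted commission error: "
      f"{double_sample_regression_estimate(x1, x2, y2):.2f}")
```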

  16. Skinfold Assessment: Accuracy and Application

    ERIC Educational Resources Information Center

    Ball, Stephen; Swan, Pamela D.; Altena, Thomas S.

    2006-01-01

    Although not perfect, skinfolds (SK), or the measurement of fat under the skin, remains the most popular and practical method available to assess body composition on a large scale (Kuczmarski, Flegal, Campbell, & Johnson, 1994). Even for practitioners who have been using SK for years and are highly proficient at locating the correct anatomical…

  17. Tracking accuracy assessment for concentrator photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Norton, Matthew S. H.; Anstey, Ben; Bentley, Roger W.; Georghiou, George E.

    2010-10-01

    The accuracy to which a concentrator photovoltaic (CPV) system can track the sun is an important parameter that influences a number of measurements that indicate the performance efficiency of the system. This paper presents work carried out into determining the tracking accuracy of a CPV system, and illustrates the steps involved in gaining an understanding of the tracking accuracy. A Trac-Stat SL1 accuracy monitor has been used in the determination of pointing accuracy and has been integrated into the outdoor CPV module test facility at the Photovoltaic Technology Laboratories in Nicosia, Cyprus. Results from this work are provided to demonstrate how important performance indicators may be presented, and how the reliability of results is improved through the deployment of such accuracy monitors. Finally, recommendations on the use of such sensors are provided as a means to improve the interpretation of real outdoor performance.

  18. Alaska national hydrography dataset positional accuracy assessment study

    USGS Publications Warehouse

    Arundel, Samantha; Yamamoto, Kristina H.; Constance, Eric; Mantey, Kim; Vinyard-Houx, Jeremy

    2013-01-01

    Initial visual assessments showed a wide range in the quality of fit between features in the NHD and these new image sources. No statistical analysis has been performed to actually quantify accuracy. Determining absolute accuracy is cost prohibitive (independent, well-defined test points must be collected), but quantitative analysis of relative positional error is feasible.

  19. Laboratory Performance Assessment.

    ERIC Educational Resources Information Center

    Slater, Timothy F.; Ryan, Joseph M.

    1993-01-01

    Describes a performance assessment protocol that rates six goals: (1) methodology of research, (2) use of equipment, (3) accuracy in measurement, (4) application of concepts and formulas, (5) use of mathematics, and (6) completeness and clarity. Provides an example performance task evaluation sheet. (MVL)

  20. Assessment of the Thematic Accuracy of Land Cover Maps

    NASA Astrophysics Data System (ADS)

    Höhle, J.

    2015-08-01

    Several land cover maps are generated from aerial imagery and assessed by different approaches. The test site is an urban area in Europe for which six classes (`building', `hedge and bush', `grass', `road and parking lot', `tree', `wall and car port') had to be derived. Two classification methods were applied (`Decision Tree' and `Support Vector Machine') using only two attributes (height above ground and normalized difference vegetation index) which both are derived from the images. The assessment of the thematic accuracy applied a stratified design and was based on accuracy measures such as user's and producer's accuracy, and kappa coefficient. In addition, confidence intervals were computed for several accuracy measures. The achieved accuracies and confidence intervals are thoroughly analysed and recommendations are derived from the gained experiences. Reliable reference values are obtained using stereovision, false-colour image pairs, and positioning to the checkpoints with 3D coordinates. The influence of the training areas on the results is studied. Cross validation has been tested with a few reference points in order to derive approximate accuracy measures. The two classification methods perform equally for five classes. Trees are classified with a much better accuracy and a smaller confidence interval by means of the decision tree method. Buildings are classified by both methods with an accuracy of 99% (95% CI: 95%-100%) using independent 3D checkpoints. The average width of the confidence interval of six classes was 14% of the user's accuracy.
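    The accuracy measures named in this record (overall, user's and producer's accuracy, kappa coefficient, and a simple confidence interval) can all be derived from an error matrix. The sketch below uses a hypothetical 3-class confusion matrix, not the study's data.

```python
import numpy as np

# Hypothetical confusion matrix: rows = map class, columns = reference class
cm = np.array([[48,  2,  0],
               [ 3, 40,  7],
               [ 1,  5, 44]], dtype=float)

n = cm.sum()
overall = np.trace(cm) / n
users = np.diag(cm) / cm.sum(axis=1)       # correct / total mapped per class
producers = np.diag(cm) / cm.sum(axis=0)   # correct / total reference per class

# Cohen's kappa from the marginal totals
p_chance = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n**2
kappa = (overall - p_chance) / (1 - p_chance)

# Approximate 95% confidence interval for overall accuracy (normal approximation)
se = np.sqrt(overall * (1 - overall) / n)
ci = (overall - 1.96 * se, overall + 1.96 * se)

print(round(overall, 3), users.round(2), producers.round(2), round(kappa, 3), ci)
```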

  1. Positional Accuracy Assessment of Googleearth in Riyadh

    NASA Astrophysics Data System (ADS)

    Farah, Ashraf; Algarni, Dafer

    2014-06-01

    Google Earth is a virtual globe, map and geographical information program that is operated by Google. It maps the Earth by the superimposition of images obtained from satellite imagery, aerial photography and GIS data onto a 3D globe. With millions of users all around the globe, GoogleEarth® has become the ultimate source of spatial data and information for private and public decision-support systems besides many types and forms of social interactions. Many users, mostly in developing countries, are also using it for surveying applications, which raises questions about the positional accuracy of the Google Earth program. This research presents a small-scale assessment study of the positional accuracy of GoogleEarth® Imagery in Riyadh, the capital of the Kingdom of Saudi Arabia (KSA). The results show that the RMSE of the GoogleEarth imagery is 2.18 m and 1.51 m for the horizontal and height coordinates, respectively.
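    A minimal sketch of how horizontal and height RMSE values of the kind reported here could be computed from surveyed checkpoints and the corresponding imagery-derived coordinates. The function and the sample coordinates are illustrative assumptions, not the study's processing.

```python
import numpy as np

def rmse_horizontal_vertical(e_ref, n_ref, h_ref, e_img, n_img, h_img):
    """Horizontal and height RMSE of imagery-derived coordinates against
    surveyed checkpoints (all values in metres, projected coordinates)."""
    d_h = np.hypot(np.asarray(e_img) - np.asarray(e_ref),
                   np.asarray(n_img) - np.asarray(n_ref))
    rmse_h = np.sqrt(np.mean(d_h**2))
    rmse_v = np.sqrt(np.mean((np.asarray(h_img) - np.asarray(h_ref))**2))
    return rmse_h, rmse_v

# hypothetical checkpoints: (easting, northing, height) reference vs imagery
print(rmse_horizontal_vertical([100.0, 250.0], [400.0, 520.0], [610.0, 612.0],
                               [101.8, 248.9], [401.2, 521.0], [611.4, 610.9]))
```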

  2. Accuracy Assessment of Altimeter Derived Geostrophic Velocities

    NASA Astrophysics Data System (ADS)

    Leben, R. R.; Powell, B. S.; Born, G. H.; Guinasso, N. L.

    2002-12-01

    Along track sea surface height anomaly gradients are proportional to cross track geostrophic velocity anomalies allowing satellite altimetry to provide much needed satellite observations of changes in the geostrophic component of surface ocean currents. Often, surface height gradients are computed from altimeter data archives that have been corrected to give the most accurate absolute sea level, a practice that may unnecessarily increase the error in the cross track velocity anomalies and thereby require excessive smoothing to mitigate noise. Because differentiation along track acts as a high-pass filter, many of the path length corrections applied to altimeter data for absolute height accuracy are unnecessary for the corresponding gradient calculations. We report on a study to investigate appropriate altimetric corrections and processing techniques for improving geostrophic velocity accuracy. Accuracy is assessed by comparing cross track current measurements from two moorings placed along the descending TOPEX/POSEIDON ground track number 52 in the Gulf of Mexico to the corresponding altimeter velocity estimates. The buoys are deployed and maintained by the Texas Automated Buoy System (TABS) under Interagency Contracts with Texas A&M University. The buoys telemeter observations in near real-time via satellite to the TABS station located at the Geochemical and Environmental Research Group (GERG) at Texas A&M. Buoy M is located in shelf waters of 57 m depth with a second, Buoy N, 38 km away on the shelf break at 105 m depth. Buoy N has been operational since the beginning of 2002 and has a current meter at 2m depth providing in situ measurements of surface velocities coincident with Jason and TOPEX/POSEIDON altimeter over flights. This allows one of the first detailed comparisons of shallow water near surface current meter time series to coincident altimetry.
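    The along-track gradient relation described above can be written as v' = (g/f) dη'/ds. The sketch below is a generic implementation of that formula under simple assumptions (constant g, no smoothing, sign convention ignored); it is not the processing chain used in the study, and the variable names are assumptions.

```python
import numpy as np

G = 9.81           # gravitational acceleration, m/s^2
OMEGA = 7.2921e-5  # Earth's rotation rate, rad/s

def cross_track_velocity(ssh_anomaly_m, along_track_dist_m, lat_deg):
    """Cross-track geostrophic velocity anomaly v' = (g / f) * d(eta')/ds
    from along-track sea surface height anomalies (m) sampled at the given
    along-track distances (m) and latitude (degrees)."""
    f = 2.0 * OMEGA * np.sin(np.radians(lat_deg))   # Coriolis parameter
    deta_ds = np.gradient(ssh_anomaly_m, along_track_dist_m)
    return (G / f) * deta_ds

# hypothetical 1 Hz-like along-track samples
dist = np.arange(0.0, 70_000.0, 7_000.0)
ssh = 0.05 * np.sin(dist / 30_000.0)
print(cross_track_velocity(ssh, dist, lat_deg=27.0).round(3))
```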

  3. Accuracy of quantitative visual soil assessment

    NASA Astrophysics Data System (ADS)

    van Leeuwen, Maricke; Heuvelink, Gerard; Stoorvogel, Jetse; Wallinga, Jakob; de Boer, Imke; van Dam, Jos; van Essen, Everhard; Moolenaar, Simon; Verhoeven, Frank; Stoof, Cathelijne

    2016-04-01

    Visual soil assessment (VSA) is a method to assess soil quality visually when standing in the field. VSA is increasingly used by farmers, farm organisations and companies, because it is rapid and cost-effective, and because looking at soil provides understanding about soil functioning. Often VSA is regarded as subjective, so there is a need to verify VSA. Also, many VSAs have not been fine-tuned for contrasting soil types. This could lead to wrong interpretation of soil quality and soil functioning when contrasting sites are compared to each other. We wanted to assess the accuracy of VSA, while taking into account soil type. The first objective was to test whether quantitative visual field observations, which form the basis of many VSAs, could be validated with standardized field or laboratory measurements. The second objective was to assess whether quantitative visual field observations are reproducible when used by observers with contrasting backgrounds. For the validation study, we made quantitative visual observations at 26 cattle farms. Farms were located on sand, clay and peat soils in the North Friesian Woodlands, the Netherlands. Quantitative visual observations evaluated were grass cover, number of biopores, number of roots, soil colour, soil structure, number of earthworms, number of gley mottles and soil compaction. Linear regression analysis showed that four out of eight quantitative visual observations could be well validated with standardized field or laboratory measurements. The following quantitative visual observations correlated well with standardized field or laboratory measurements: grass cover with classified images of surface cover; number of roots with root dry weight; amount of large structure elements with mean weight diameter; and soil colour with soil organic matter content. Correlation coefficients were greater than 0.3, of which half were significant. For the reproducibility study, a group of 9 soil scientists and 7

  4. Data accuracy assessment using enterprise architecture

    NASA Astrophysics Data System (ADS)

    Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias

    2011-02-01

    Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.

  5. Accuracy assessment of GPS satellite orbits

    NASA Technical Reports Server (NTRS)

    Schutz, B. E.; Tapley, B. D.; Abusali, P. A. M.; Ho, C. S.

    1991-01-01

    GPS orbit accuracy is examined using several evaluation procedures. The existence of unmodeled effects which correlate with the eclipsing of the sun is shown. The ability to obtain geodetic results that show an accuracy of 1-2 parts in 10⁸ or better has not diminished.

  6. Assessing the performance of the MM/PBSA and MM/GBSA methods: I. The accuracy of binding free energy calculations based on molecular dynamics simulations

    PubMed Central

    Hou, Tingjun; Wang, Junmei; Li, Youyong; Wang, Wei

    2011-01-01

    The Molecular Mechanics/Poisson Boltzmann Surface Area (MM/PBSA) and the Molecular Mechanics/Generalized Born Surface Area (MM/GBSA) methods calculate binding free energies for macromolecules by combining molecular mechanics calculations and continuum solvation models. To systematically evaluate the performance of these methods, we report here an extensive study of 59 ligands interacting with six different proteins. First, we explored the effects of the length of the molecular dynamics (MD) simulation, ranging from 400 to 4800 ps, and the solute dielectric constant (1, 2 or 4) on the binding free energies predicted by MM/PBSA. The following three important conclusions could be drawn: (1) MD simulation lengths have an obvious impact on the predictions, and longer MD simulations are not always necessary to achieve better predictions; (2) The predictions are quite sensitive to the solute dielectric constant, and this parameter should be carefully determined according to the characteristics of the protein/ligand binding interface; (3) Conformational entropy showed large fluctuations in MD trajectories and a large number of snapshots are necessary to achieve stable predictions. Next, we evaluated the accuracy of the binding free energies calculated by three Generalized Born (GB) models. We found that the GB model developed by Onufriev and Case was the most successful model in ranking the binding affinities of the studied inhibitors. Finally, we evaluated the performance of MM/GBSA and MM/PBSA in predicting binding free energies. Our results showed that MM/PBSA performed better in calculating absolute, but not necessarily relative, binding free energies than MM/GBSA. Considering its computational efficiency, MM/GBSA can serve as a powerful tool in drug design, where correct ranking of inhibitors is often emphasized. PMID:21117705
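    A minimal sketch of the single-trajectory MM/PBSA-style bookkeeping described above: per-snapshot free energies are averaged for complex, receptor and ligand and combined into a binding free energy. The function name, energy values and entropy term are hypothetical, not the study's data.

```python
import numpy as np

def mmpbsa_binding_free_energy(g_complex, g_receptor, g_ligand, t_delta_s=0.0):
    """Single-trajectory MM/PBSA-style estimate: per-snapshot totals
    G = E_MM + G_polar_solv + G_nonpolar_solv (kcal/mol) are averaged over the
    MD snapshots, then dG_bind = <G_com> - <G_rec> - <G_lig> - T*dS."""
    return (np.mean(g_complex) - np.mean(g_receptor)
            - np.mean(g_ligand) - t_delta_s)

# hypothetical per-snapshot totals for 200 snapshots of each species
rng = np.random.default_rng(3)
dg = mmpbsa_binding_free_energy(rng.normal(-5120.0, 6.0, 200),
                                rng.normal(-4870.0, 5.0, 200),
                                rng.normal(-215.0, 2.0, 200),
                                t_delta_s=18.0)
print(f"dG_bind ~ {dg:.1f} kcal/mol")
```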

  7. 20 CFR 404.1643 - Performance accuracy standard.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 2 2014-04-01 2014-04-01 false Performance accuracy standard. 404.1643 Section 404.1643 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Determinations of Disability Performance Standards § 404.1643 Performance accuracy standard. (a) General. Performance...

  8. A Framework for the Objective Assessment of Registration Accuracy

    PubMed Central

    Simonetti, Flavio; Foroni, Roberto Israel

    2014-01-01

    Validation and accuracy assessment are the main bottlenecks preventing the adoption of image processing algorithms in the clinical practice. In the classical approach, a posteriori analysis is performed through objective metrics. In this work, a different approach based on Petri nets is proposed. The basic idea consists in predicting the accuracy of a given pipeline based on the identification and characterization of the sources of inaccuracy. The concept is demonstrated on a case study: intrasubject rigid and affine registration of magnetic resonance images. Both synthetic and real data are considered. While synthetic data allow the benchmarking of the performance with respect to the ground truth, real data enable to assess the robustness of the methodology in real contexts as well as to determine the suitability of the use of synthetic data in the training phase. Results revealed a higher correlation and a lower dispersion among the metrics for simulated data, while the opposite trend was observed for pathologic ones. Results show that the proposed model not only provides a good prediction performance but also leads to the optimization of the end-to-end chain in terms of accuracy and robustness, setting the ground for its generalization to different and more complex scenarios. PMID:24659997

  9. Performance and accuracy benchmarks for a next generation geodynamo simulation

    NASA Astrophysics Data System (ADS)

    Matsui, H.

    2015-12-01

    A number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field in the last twenty years. However, parameters in current dynamo models are far from realistic for the Earth's core. To approach realistic parameters for the Earth's core in geodynamo simulations, extremely large spatial resolutions are required to resolve convective turbulence and small-scale magnetic fields. To assess the next generation of dynamo models on a massively parallel computer, we performed performance and accuracy benchmarks of 15 dynamo codes which employ a diverse range of discretization (spectral, finite difference, finite element, and hybrid methods) and parallelization methods. In the performance benchmark, we compare elapsed time and parallelization capability on the TACC Stampede platform, using up to 16384 processor cores. In the accuracy benchmark, we compare the resolutions required to obtain less than 1% error from the suggested solutions. The results of the performance benchmark show that codes using 2-D or 3-D parallelization models are capable of running with 16384 processor cores. The elapsed time for Calypso and Rayleigh, two parallelized codes that use the spectral method, scales with a smaller exponent than the ideal scaling. The elapsed time of SFEMaNS, which uses finite element and Fourier transform, shows the smallest growth of elapsed time with resolution and parallelization. However, the accuracy benchmark results show that SFEMaNS requires three times more degrees of freedom in each direction compared with a spherical harmonics expansion. Consequently, SFEMaNS needs more than 200 times the elapsed time of Calypso and Rayleigh with 10000 cores to obtain the same accuracy. These benchmark results indicate that the spectral method with 2-D or 3-D domain decomposition is the most promising methodology for advancing numerical dynamo simulations in the immediate future.
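    The strong-scaling comparison described above can be summarised by fitting a scaling exponent to elapsed time versus core count on a log-log scale, with the ideal exponent being 1. The sketch below uses made-up timings, not the benchmark results.

```python
import numpy as np

def scaling_exponent(cores, elapsed):
    """Fit elapsed ~ cores**(-p) on a log-log scale; ideal strong scaling gives p = 1."""
    slope = np.polyfit(np.log(cores), np.log(elapsed), 1)[0]
    return -slope

# hypothetical strong-scaling measurements (cores, seconds per step)
cores = np.array([1024, 2048, 4096, 8192, 16384])
elapsed = np.array([40.0, 21.5, 11.8, 6.9, 4.2])
print(f"scaling exponent p = {scaling_exponent(cores, elapsed):.2f}")
```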

  10. Accuracy Assessment for AG500, Electromagnetic Articulograph

    ERIC Educational Resources Information Center

    Yunusova, Yana; Green, Jordan R.; Mefferd, Antje

    2009-01-01

    Purpose: The goal of this article was to evaluate the accuracy and reliability of the AG500 (Carstens Medizinelectronik, Lenglern, Germany), an electromagnetic device developed recently to register articulatory movements in three dimensions. This technology seems to have unprecedented capabilities to provide rich information about time-varying…

  11. DESIGN AND ANALYSIS FOR THEMATIC MAP ACCURACY ASSESSMENT: FUNDAMENTAL PRINCIPLES

    EPA Science Inventory

    Before being used in scientific investigations and policy decisions, thematic maps constructed from remotely sensed data should be subjected to a statistically rigorous accuracy assessment. The three basic components of an accuracy assessment are: 1) the sampling design used to s...

  12. Accuracy assessment of landslide prediction models

    NASA Astrophysics Data System (ADS)

    Othman, A. N.; Mohd, W. M. N. W.; Noraini, S.

    2014-02-01

    The increasing population and expansion of settlements over hilly areas have greatly increased the impact of natural disasters such as landslides. Therefore, it is important to develop models which could accurately predict landslide hazard zones. Over the years, various techniques and models have been developed to predict landslide hazard zones. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved the selection of study area, data acquisition, data processing and model development and also data analysis. The development of these models is based on nine different landslide-inducing parameters, i.e. slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters used. Four (4) different models which consider different parameter combinations are developed by the authors. Results obtained are compared to landslide history, and accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9%, respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques to predict landslide hazard zones.
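    One of the weighting techniques named above, pairwise comparison (AHP), derives parameter weights from the principal eigenvector of a reciprocal comparison matrix. The sketch below is a generic illustration with a hypothetical 3-factor matrix; it is not the authors' model or their parameter set.

```python
import numpy as np

def ahp_weights(pairwise):
    """Parameter weights from a reciprocal pairwise-comparison matrix:
    the normalised principal eigenvector (standard AHP procedure)."""
    vals, vecs = np.linalg.eig(pairwise)
    w = np.abs(vecs[:, np.argmax(vals.real)].real)
    return w / w.sum()

# hypothetical 3-factor comparison (e.g. slope vs lithology vs land use)
m = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print(ahp_weights(m).round(3))
```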

  13. Accuracy of commercial geocoding: assessment and implications

    PubMed Central

    Whitsel, Eric A; Quibrera, P Miguel; Smith, Richard L; Catellier, Diane J; Liao, Duanping; Henley, Amanda C; Heiss, Gerardo

    2006-01-01

    Background: Published studies of geocoding accuracy often focus on a single geographic area, address source or vendor, do not adjust accuracy measures for address characteristics, and do not examine effects of inaccuracy on exposure measures. We addressed these issues in a Women's Health Initiative ancillary study, the Environmental Epidemiology of Arrhythmogenesis in WHI. Results: Addresses in 49 U.S. states (n = 3,615) with established coordinates were geocoded by four vendors (A-D). There were important differences among vendors in address match rate (98%; 82%; 81%; 30%), concordance between established and vendor-assigned census tracts (85%; 88%; 87%; 98%) and distance between established and vendor-assigned coordinates (mean ρ [meters]: 1809; 748; 704; 228). Mean ρ was lowest among street-matched, complete, zip-coded, unedited and urban addresses, and addresses with North American Datum of 1983 or World Geodetic System of 1984 coordinates. In mixed models restricted to vendors with minimally acceptable match rates (A-C) and adjusted for address characteristics, within-address correlation, and among-vendor heteroscedasticity of ρ, differences in mean ρ were small for street-type matches (280; 268; 275), i.e. likely to bias results relying on them about equally for most applications. In contrast, differences between centroid-type matches were substantial in some vendor contrasts, but not others (5497; 4303; 4210), p_interaction < 10⁻⁴, i.e. more likely to bias results differently in many applications. The adjusted odds of an address match was higher for vendor A versus C (odds ratio = 66, 95% confidence interval: 47, 93), but not B versus C (OR = 1.1, 95% CI: 0.9, 1.3). That of census tract concordance was no higher for vendor A versus C (OR = 1.0, 95% CI: 0.9, 1.2) or B versus C (OR = 1.1, 95% CI: 0.9, 1.3). Misclassification of a related exposure measure – distance to the nearest highway – increased with mean ρ and in the absence of confounding, non
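    A sketch of one way to compute the positional error ρ between established and vendor-assigned coordinates, here as a great-circle (haversine) distance on a spherical Earth; the study's actual distance computation may differ, and the coordinates are illustrative.

```python
import numpy as np

def positional_error_m(lat1, lon1, lat2, lon2, r_earth=6_371_000.0):
    """Great-circle (haversine) distance in metres between an established
    coordinate and a vendor-assigned coordinate, both in decimal degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp, dl = p2 - p1, np.radians(np.subtract(lon2, lon1))
    a = np.sin(dp / 2)**2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2)**2
    return 2 * r_earth * np.arcsin(np.sqrt(a))

# hypothetical pair: established vs vendor-assigned location
print(round(positional_error_m(35.7806, -78.6389, 35.7842, -78.6421), 1), "m")
```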

  14. Accuracy assessment of NLCD 2006 land cover and impervious surface

    USGS Publications Warehouse

    Wickham, James D.; Stehman, Stephen V.; Gass, Leila; Dewitz, Jon; Fry, Joyce A.; Wade, Timothy G.

    2013-01-01

    Release of NLCD 2006 provides the first wall-to-wall land-cover change database for the conterminous United States from Landsat Thematic Mapper (TM) data. Accuracy assessment of NLCD 2006 focused on four primary products: 2001 land cover, 2006 land cover, land-cover change between 2001 and 2006, and impervious surface change between 2001 and 2006. The accuracy assessment was conducted by selecting a stratified random sample of pixels with the reference classification interpreted from multi-temporal high resolution digital imagery. The NLCD Level II (16 classes) overall accuracies for the 2001 and 2006 land cover were 79% and 78%, respectively, with Level II user's accuracies exceeding 80% for water, high density urban, all upland forest classes, shrubland, and cropland for both dates. Level I (8 classes) accuracies were 85% for NLCD 2001 and 84% for NLCD 2006. The high overall and user's accuracies for the individual dates translated into high user's accuracies for the 2001–2006 change reporting themes water gain and loss, forest loss, urban gain, and the no-change reporting themes for water, urban, forest, and agriculture. The main factor limiting higher accuracies for the change reporting themes appeared to be difficulty in distinguishing the context of grass. We discuss the need for more research on land-cover change accuracy assessment.

  15. Evaluating the Effect of Learning Style and Student Background on Self-Assessment Accuracy

    ERIC Educational Resources Information Center

    Alaoutinen, Satu

    2012-01-01

    This study evaluates a new taxonomy-based self-assessment scale and examines factors that affect assessment accuracy and course performance. The scale is based on Bloom's Revised Taxonomy and is evaluated by comparing students' self-assessment results with course performance in a programming course. Correlation has been used to reveal possible…

  16. Assessing the Accuracy of the Precise Point Positioning Technique

    NASA Astrophysics Data System (ADS)

    Bisnath, S. B.; Collins, P.; Seepersad, G.

    2012-12-01

    The Precise Point Positioning (PPP) GPS data processing technique has developed over the past 15 years to become a standard method for growing categories of positioning and navigation applications. The technique relies on single receiver point positioning combined with the use of precise satellite orbit and clock information and high-fidelity error modelling. The research presented here uniquely addresses the current accuracy of the technique, explains the limits of performance, and defines paths to improvements. For geodetic purposes, performance refers to daily static position accuracy. PPP processing of over 80 IGS stations over one week results in a few millimetres of positioning rms error in the north and east components and a few centimetres in the vertical (all one-sigma values). Larger error statistics for real-time and kinematic processing are also given. GPS PPP with ambiguity resolution processing is also carried out, producing slight improvements over the float solution results. These results are categorised into quality classes in order to analyse the root error causes of the resultant accuracies: "best", "worst", multipath, site displacement effects, satellite availability and geometry, etc. Also of interest in PPP performance is the solution convergence period. Static, conventional solutions are slow to converge, with approximately 35 minutes required for 95% of solutions to reach 20 cm or better horizontal accuracy. Ambiguity resolution can significantly reduce this period without biasing solutions. The definition of a PPP error budget is a complex task even with the resulting numerical assessment, as, unlike the epoch-by-epoch processing in the Standard Positioning Service, PPP processing involves filtering. An attempt is made here to 1) define the magnitude of each error source in terms of range, 2) transform ranging error to position error via Dilution Of Precision (DOP), and 3) scale the DOP through the filtering process. The result is a deeper
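    A first-order version of the error-budget idea sketched above: independent ranging errors are combined in quadrature into a user equivalent range error (UERE) and scaled by DOP. This deliberately ignores the filtering step the abstract emphasises; all error magnitudes are illustrative assumptions.

```python
import numpy as np

def position_error(range_errors_m, dop):
    """First-order budget: combine independent ranging error sources in
    quadrature into a UERE, then scale by the dilution of precision."""
    uere = np.sqrt(np.sum(np.square(range_errors_m)))
    return dop * uere

# hypothetical one-sigma ranging errors (metres): orbit, clock, residual
# troposphere, multipath, receiver noise
print(f"{position_error([0.03, 0.05, 0.05, 0.10, 0.05], dop=1.8):.3f} m")
```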

  17. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Carter, Delano R.; Todirita, Monica; Kronenwetter, Jeffrey; Chu, Donald

    2016-01-01

    The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. Error comes both from outside the magnetometers, e.g. spacecraft fields and misalignments, as well as inside, e.g. zero offset and scale factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. Both behave as expected, and their accuracy predictions agree within 30%. With the proposed calibration regimen, both suggest that the GOES-R magnetometer subsystem will meet its accuracy requirements.
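    The accuracy definition used in this record (absolute mean plus k sigma, with k = 3 for quiet times and k = 2 for storms) is straightforward to compute from simulated residuals. A minimal sketch with made-up Monte Carlo residuals, not GOES-R analysis output:

```python
import numpy as np

def accuracy_metric(errors_nt, k_sigma):
    """Accuracy as defined in the abstract: absolute mean error plus
    k standard deviations, in nanoteslas."""
    return abs(np.mean(errors_nt)) + k_sigma * np.std(errors_nt)

# hypothetical per-axis residuals from a Monte Carlo run
rng = np.random.default_rng(2)
residuals = rng.normal(0.2, 0.4, 10_000)
print(f"{accuracy_metric(residuals, k_sigma=3):.2f} nT vs 1.7 nT requirement")
```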

  18. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Kronenwetter, Jeffrey; Carter, Delano R.; Todirita, Monica; Chu, Donald

    2016-01-01

    The GOES-R magnetometer accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. To achieve this, the sensor itself has better than 1 nT accuracy. Because zero offset and scale factor drift over time, it is also necessary to perform annual calibration maneuvers. To predict performance, we used covariance analysis and attempted to corroborate it with simulations. Although not perfect, the two generally agree and show the expected behaviors. With the annual calibration regimen, these predictions suggest that the magnetometers will meet their accuracy requirements.

  19. 20 CFR 416.1043 - Performance accuracy standard.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Performance accuracy standard. 416.1043 Section 416.1043 Employees' Benefits SOCIAL SECURITY ADMINISTRATION SUPPLEMENTAL SECURITY INCOME FOR THE... stepping stones to progress towards our targeted level of performance. (d) Threshold levels. The...

  20. 20 CFR 416.1043 - Performance accuracy standard.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Performance accuracy standard. 416.1043 Section 416.1043 Employees' Benefits SOCIAL SECURITY ADMINISTRATION SUPPLEMENTAL SECURITY INCOME FOR THE... stepping stones to progress towards our targeted level of performance. (d) Threshold levels. The...

  1. 20 CFR 404.1643 - Performance accuracy standard.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 2 2013-04-01 2013-04-01 false Performance accuracy standard. 404.1643 Section 404.1643 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Determinations of Disability Performance Standards § 404.1643...

  2. 20 CFR 404.1643 - Performance accuracy standard.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 2 2012-04-01 2012-04-01 false Performance accuracy standard. 404.1643 Section 404.1643 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Determinations of Disability Performance Standards § 404.1643...

  3. 20 CFR 404.1643 - Performance accuracy standard.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Performance accuracy standard. 404.1643 Section 404.1643 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Determinations of Disability Performance Standards § 404.1643...

  4. 20 CFR 404.1643 - Performance accuracy standard.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Performance accuracy standard. 404.1643 Section 404.1643 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Determinations of Disability Performance Standards § 404.1643...

  5. Update and review of accuracy assessment techniques for remotely sensed data

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Heinen, J. T.; Oderwald, R. G.

    1983-01-01

    Research performed in the accuracy assessment of remotely sensed data is updated and reviewed. The use of discrete multivariate analysis techniques for the assessment of error matrices, the use of computer simulation for assessing various sampling strategies, and an investigation of spatial autocorrelation techniques are examined.

  6. Accuracy of telepsychiatric assessment of new routine outpatient referrals

    PubMed Central

    Singh, Surendra P; Arya, Dinesh; Peters, Trish

    2007-01-01

    Background: Studies on the feasibility of telepsychiatry tend to concentrate only on a subset of clinical parameters. In contrast, this study utilises data from a comprehensive assessment. The main objective of this study is to compare the accuracy of findings from telepsychiatry with those from face to face interviews. Method: This is a primary, cross-sectional, single-cluster, balanced crossover, blind study involving new routine psychiatric referrals. Thirty-seven out of forty cases fulfilling the selection criteria went through a complete set of independent face to face and video assessments by the researchers, who were blind to each other's findings. Results: The accuracy ratios of the pooled results for DSM-IV diagnoses, risk assessment, non-drug and drug interventions were all above 0.76, and the combined overall accuracy ratio was 0.81. There were substantial intermethod agreements for Cohen's kappa on all the major components of evaluation except on the Risk Assessment Scale, where there was only weak agreement. Conclusion: Telepsychiatric assessment is a dependable method of assessment with a high degree of accuracy and substantial overall intermethod agreement when compared with standard face to face interview for new routine outpatient psychiatric referrals. PMID:17919329

  7. Assessing and ensuring GOES-R magnetometer accuracy

    NASA Astrophysics Data System (ADS)

    Carter, Delano; Todirita, Monica; Kronenwetter, Jeffrey; Dahya, Melissa; Chu, Donald

    2016-05-01

    The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma error per axis. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma error per axis. Error comes both from outside the magnetometers, e.g. spacecraft fields and misalignments, as well as inside, e.g. zero offset and scale factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. With the proposed calibration regimen, both suggest that the magnetometer subsystem will meet its accuracy requirements.

  8. Performance Assessment: Lessons from Performers

    ERIC Educational Resources Information Center

    Parkes, Kelly A.

    2010-01-01

    The performing arts studio is a highly complex learning setting, and assessing student outcomes relative to reliable and valid standards has presented challenges to this teaching and learning method. Building from the general international higher education literature, this article illustrates details, processes, and solutions, drawing on…

  9. Modelling Second Language Performance: Integrating Complexity, Accuracy, Fluency, and Lexis

    ERIC Educational Resources Information Center

    Skehan, Peter

    2009-01-01

    Complexity, accuracy, and fluency have proved useful measures of second language performance. The present article will re-examine these measures themselves, arguing that fluency needs to be rethought if it is to be measured effectively, and that the three general measures need to be supplemented by measures of lexical use. Building upon this…

  10. Estimating Classification Consistency and Accuracy for Cognitive Diagnostic Assessment

    ERIC Educational Resources Information Center

    Cui, Ying; Gierl, Mark J.; Chang, Hua-Hua

    2012-01-01

    This article introduces procedures for the computation and asymptotic statistical inference for classification consistency and accuracy indices specifically designed for cognitive diagnostic assessments. The new classification indices can be used as important indicators of the reliability and validity of classification results produced by…

  11. ASSESSING ACCURACY OF NET CHANGE DERIVED FROM LAND COVER MAPS

    EPA Science Inventory

    Net change derived from land-cover maps provides important descriptive information for environmental monitoring and is often used as an input or explanatory variable in environmental models. The sampling design and analysis for assessing net change accuracy differ from traditio...

  12. Accuracy of a semiquantitative method for Dermal Exposure Assessment (DREAM)

    PubMed Central

    van Wendel, de Joo... B; Vermeulen, R; van Hemmen, J J; Fransman, W; Kromhout, H

    2005-01-01

    Background: The authors recently developed a Dermal Exposure Assessment Method (DREAM), an observational semiquantitative method to assess dermal exposures by systematically evaluating exposure determinants using pre-assigned default values. Aim: To explore the accuracy of the DREAM method by comparing its estimates with quantitative dermal exposure measurements in several occupational settings. Methods: Occupational hygienists observed workers performing a certain task, whose exposure to chemical agents on skin or clothing was measured quantitatively simultaneously, and filled in the DREAM questionnaire. DREAM estimates were compared with measurement data by estimating Spearman correlation coefficients for each task and for individual observations. In addition, mixed linear regression models were used to study the effect of DREAM estimates on the variability in measured exposures between tasks, between workers, and from day to day. Results: For skin exposures, spearman correlation coefficients for individual observations ranged from 0.19 to 0.82. DREAM estimates for exposure levels on hands and forearms showed a fixed effect between and within surveys, explaining mainly between-task variance. In general, exposure levels on clothing layer were only predicted in a meaningful way by detailed DREAM estimates, which comprised detailed information on the concentration of the agent in the formulation to which exposure occurred. Conclusions: The authors expect that the DREAM method can be successfully applied for semiquantitative dermal exposure assessment in epidemiological and occupational hygiene surveys of groups of workers with considerable contrast in dermal exposure levels (variability between groups >1.0). For surveys with less contrasting exposure levels, quantitative dermal exposure measurements are preferable. PMID:16109819
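    The task-level comparison described above rests on Spearman rank correlations between DREAM estimates and quantitative exposure measurements. A minimal sketch with hypothetical paired data, using scipy's spearmanr for the rank correlation:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical paired data for one task: DREAM semiquantitative estimates and
# quantitative dermal exposure measurements for the same observations.
dream_scores = np.array([1.2, 0.8, 2.5, 3.1, 1.9, 0.5, 2.2])
measured_exposure = np.array([0.9, 0.4, 2.1, 3.5, 1.1, 0.3, 2.8])

rho, p_value = spearmanr(dream_scores, measured_exposure)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```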

  13. [Navigation in implantology: Accuracy assessment regarding the literature].

    PubMed

    Barrak, Ibrahim Ádám; Varga, Endre; Piffko, József

    2016-06-01

    Our objective was to assess the literature regarding the accuracy of the different static guided systems. After applying an electronic literature search, we found 661 articles. After reviewing 139 articles, the authors chose 52 articles for full-text evaluation. Twenty-four studies involved accuracy measurements. Fourteen of our selected references were clinical and ten of them were in vitro (model or cadaver). Variance-analysis (Tukey's post-hoc test; p < 0.05) was conducted to summarize the selected publications. Across the 2819 results, the average mean error at the entry point was 0.98 mm. At the level of the apex the average deviation was 1.29 mm while the mean of the angular deviation was 3.96 degrees. A significant difference could be observed between the two methods of implant placement (partially and fully guided sequence) in terms of deviation at the entry point, apex and angular deviation. Different levels of quality and quantity of evidence were available for assessing the accuracy of the different methods of computer-assisted implant placement. The rapidly evolving field of digital dentistry and the new developments will further improve the accuracy of guided implant placement. In the interest of being able to draw dependable conclusions and for the further evaluation of the parameters used for accuracy measurements, randomized, controlled single or multi-centered clinical trials are necessary. PMID:27544966

  14. Effects of a rater training on rating accuracy in a physical examination skills assessment

    PubMed Central

    Weitz, Gunther; Vinzentius, Christian; Twesten, Christoph; Lehnert, Hendrik; Bonnemeier, Hendrik; König, Inke R.

    2014-01-01

    Background: The accuracy and reproducibility of medical skills assessment is generally low. Rater training has little or no effect. Our knowledge in this field, however, relies on studies involving video ratings of overall clinical performances. We hypothesised that a rater training focussing on the frame of reference could improve accuracy in grading the curricular assessment of a highly standardised physical head-to-toe examination. Methods: Twenty-one raters assessed the performance of 242 third-year medical students. Eleven raters had been randomly assigned to undergo a brief frame-of-reference training a few days before the assessment. 218 encounters were successfully recorded on video and re-assessed independently by three additional observers. Accuracy was defined as the concordance between the raters' grade and the median of the observers' grade. After the assessment, both students and raters filled in a questionnaire about their views on the assessment. Results: Rater training did not have a measurable influence on accuracy. However, trained raters rated significantly more stringently than untrained raters, and their overall stringency was closer to the stringency of the observers. The questionnaire indicated a higher awareness of the halo effect in the trained raters group. Although the self-assessment of the students mirrored the assessment of the raters in both groups, the students assessed by trained raters felt more discontent with their grade. Conclusions: While training had some marginal effects, it failed to have an impact on the individual accuracy. These results in real-life encounters are consistent with previous studies on rater training using video assessments of clinical performances. The high degree of standardisation in this study was not suitable to harmonize the trained raters’ grading. The data support the notion that the process of appraising medical performance is highly individual. A frame-of-reference training as applied does not

  15. Accuracy assessment in the Large Area Crop Inventory Experiment

    NASA Technical Reports Server (NTRS)

    Houston, A. G.; Pitts, D. E.; Feiveson, A. H.; Badhwar, G.; Ferguson, M.; Hsu, E.; Potter, J.; Chhikara, R.; Rader, M.; Ahlers, C.

    1979-01-01

    The Accuracy Assessment System (AAS) of the Large Area Crop Inventory Experiment (LACIE) was responsible for determining the accuracy and reliability of LACIE estimates of wheat production, area, and yield, made at regular intervals throughout the crop season, and for investigating the various LACIE error sources, quantifying these errors, and relating them to their causes. Some results of using the AAS during the three years of LACIE are reviewed. As the program culminated, AAS was able to meet not only the goal of obtaining accurate statistical estimates of sampling and classification accuracy, but also the goal of evaluating component labeling errors. Furthermore, the ground-truth data processing matured from collecting data for one crop (small grains) to collecting, quality-checking, and archiving data for all crops in a LACIE small segment.

  16. Examining rating quality in writing assessment: rater agreement, error, and accuracy.

    PubMed

    Wind, Stefanie A; Engelhard, George

    2012-01-01

    The use of performance assessments in which human raters evaluate student achievement has become increasingly prevalent in high-stakes assessment systems such as those associated with recent policy initiatives (e.g., Race to the Top). In this study, indices of rating quality are compared between two measurement perspectives. Within the context of a large-scale writing assessment, this study focuses on the alignment between indices of rater agreement, error, and accuracy based on traditional and Rasch measurement theory perspectives. Major empirical findings suggest that Rasch-based indices of model-data fit for ratings provide information about raters that is comparable to direct measures of accuracy. The use of easily obtained approximations of direct accuracy measures holds significant implications for monitoring rating quality in large-scale rater-mediated performance assessments. PMID:23270978
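
    For illustration only, the sketch below computes the kind of traditional rating-quality indices this study contrasts with Rasch-based fit statistics: exact agreement, adjacent agreement, mean signed error, and RMSE of hypothetical operational ratings against criterion ("true") scores.

    import numpy as np

    # Hypothetical ratings on a 1-4 rubric: one operational rater versus a
    # criterion ("true") score established by an expert panel.
    rater = np.array([3, 2, 4, 3, 1, 2, 3, 4, 2, 3])
    criterion = np.array([3, 3, 4, 2, 1, 2, 3, 3, 2, 4])

    exact_agreement = np.mean(rater == criterion)                 # identical score
    adjacent_agreement = np.mean(np.abs(rater - criterion) <= 1)  # within one point
    mean_signed_error = np.mean(rater - criterion)                # severity/leniency
    rmse = np.sqrt(np.mean((rater - criterion) ** 2))             # accuracy vs criterion

    print(f"exact agreement {exact_agreement:.2f}, adjacent {adjacent_agreement:.2f}, "
          f"signed error {mean_signed_error:+.2f}, RMSE {rmse:.2f}")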

  17. An assessment of the accuracy of orthotropic photoelasticity - Abbreviated report

    NASA Technical Reports Server (NTRS)

    Hyer, M. W.; Liu, D.

    1984-01-01

    A brief overview is presented of a comprehensive study whose aim was to assess the accuracy of orthotropic photoelasticity. Particular attention is given to calibration of the material, forward testing for global and local behavior, and backward testing for stress determination. The experimentally determined stresses were found to agree with the elasticity solution. It is concluded that orthotropic photoelasticity does not appear to have the resolution of its isotropic counterpart, this being a consequence of the inherent inhomogeneity of the material.

  18. Survey methods for assessing land cover map accuracy

    USGS Publications Warehouse

    Nusser, S.M.; Klaas, E.E.

    2003-01-01

    The increasing availability of digital photographic materials has fueled efforts by agencies and organizations to generate land cover maps for states, regions, and the United States as a whole. Regardless of the information sources and classification methods used, land cover maps are subject to numerous sources of error. In order to understand the quality of the information contained in these maps, it is desirable to generate statistically valid estimates of accuracy rates describing misclassification errors. We explored a full sample survey framework for creating accuracy assessment study designs that balance statistical and operational considerations in relation to study objectives for a regional assessment of GAP land cover maps. We focused not only on appropriate sample designs and estimation approaches, but on aspects of the data collection process, such as gaining cooperation of land owners and using pixel clusters as an observation unit. The approach was tested in a pilot study to assess the accuracy of Iowa GAP land cover maps. A stratified two-stage cluster sampling design addressed sample size requirements for land covers and the need for geographic spread while minimizing operational effort. Recruitment methods used for private land owners yielded high response rates, minimizing a source of nonresponse error. Collecting data for a 9-pixel cluster centered on the sampled pixel was simple to implement, and provided better information on rarer vegetation classes as well as substantial gains in precision relative to observing data at a single-pixel.
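
    A minimal sketch, assuming hypothetical stratum weights, per-stratum sample accuracies, and sample sizes (not the Iowa GAP estimator itself), of how a stratified design combines into an overall map accuracy estimate with an approximate standard error.

    import numpy as np

    # Hypothetical strata (map classes) with area weights W, per-stratum sample
    # accuracies p, and per-stratum sample sizes n.
    W = np.array([0.50, 0.30, 0.15, 0.05])   # area weights, sum to 1
    p = np.array([0.92, 0.85, 0.70, 0.55])   # proportion of correct reference labels
    n = np.array([120, 90, 60, 30])          # reference units sampled per stratum

    overall = np.sum(W * p)
    se = np.sqrt(np.sum(W**2 * p * (1 - p) / (n - 1)))   # approximate standard error
    print(f"overall accuracy {overall:.3f} +/- {1.96 * se:.3f} (approx. 95% CI)")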

  19. Rigorous A-Posteriori Assessment of Accuracy in EMG Decomposition

    PubMed Central

    McGill, Kevin C.; Marateb, Hamid R.

    2010-01-01

    If EMG decomposition is to be a useful tool for scientific investigation, it is essential to know that the results are accurate. Because of background noise, waveform variability, motor-unit action potential (MUAP) indistinguishability, and perplexing superpositions, accuracy assessment is not straightforward. This paper presents a rigorous statistical method for assessing decomposition accuracy based only on evidence from the signal itself. The method uses statistical decision theory in a Bayesian framework to integrate all the shape- and firing-time-related information in the signal to compute an objective a-posteriori measure of confidence in the accuracy of each discharge in the decomposition. The assessment is based on the estimated statistical properties of the MUAPs and noise and takes into account the relative likelihood of every other possible decomposition. The method was tested on 3 pairs of real EMG signals containing 4–7 active MUAP trains per signal that had been decomposed by a human expert. It rated 97% of the identified MUAP discharges as accurate to within ±0.5 ms with a confidence level of 99%, and detected 6 decomposition errors. Cross-checking between signal pairs verified all but 2 of these assertions. These results demonstrate that the approach is reliable and practical for real EMG signals. PMID:20639182

  20. Assessing Team Performance.

    ERIC Educational Resources Information Center

    Trimble, Susan; Rottier, Jerry

    Interdisciplinary middle school level teams capitalize on the idea that the whole is greater than the sum of its parts. Administrators and team members can maximize the advantages of teamwork using team assessments to increase the benefits for students, teachers, and the school environment. Assessing team performance can lead to high performing…

  1. Assessing accuracy of an electronic provincial medication repository

    PubMed Central

    2012-01-01

    Background Jurisdictional drug information systems are being implemented in many regions around the world. British Columbia, Canada has had a provincial medication dispensing record, PharmaNet, system since 1995. Little is known about how accurately PharmaNet reflects actual medication usage. Methods This prospective, multi-centre study compared pharmacist collected Best Possible Medication Histories (BPMH) to PharmaNet profiles to assess accuracy of the PharmaNet profiles for patients receiving a BPMH as part of clinical care. A review panel examined the anonymized BPMHs and discrepancies to estimate clinical significance of discrepancies. Results 16% of medication profiles were accurate, with 48% of the discrepant profiles considered potentially clinically significant by the clinical review panel. Cardiac medications tended to be more accurate (e.g. ramipril was accurate >90% of the time), while insulin, warfarin, salbutamol and pain relief medications were often inaccurate (80–85% of the time). 1215 sequential BPMHs were collected and reviewed for this study. Conclusions The PharmaNet medication repository has a low accuracy and should be used in conjunction with other sources for medication histories for clinical or research purposes. This finding is consistent with other, smaller medication repository accuracy studies in other jurisdictions. Our study highlights specific medications that tend to be lower in accuracy. PMID:22621690

  2. Standardized accuracy assessment of the calypso wireless transponder tracking system

    NASA Astrophysics Data System (ADS)

    Franz, A. M.; Schmitt, D.; Seitel, A.; Chatrasingh, M.; Echner, G.; Oelfke, U.; Nill, S.; Birkfellner, W.; Maier-Hein, L.

    2014-11-01

    Electromagnetic (EM) tracking allows localization of small EM sensors in a magnetic field of known geometry without line-of-sight. However, this technique requires a cable connection to the tracked object. A wireless alternative based on magnetic fields, referred to as transponder tracking, has been proposed by several authors. Although most of the transponder tracking systems are still in an early stage of development and not ready for clinical use yet, Varian Medical Systems Inc. (Palo Alto, California, USA) presented the Calypso system for tumor tracking in radiation therapy which includes transponder technology. However, it has not yet been used for computer-assisted interventions (CAI) in general, nor assessed for accuracy in a standardized manner. In this study, we apply a standardized assessment protocol presented by Hummel et al (2005 Med. Phys. 32 2371-9) to the Calypso system for the first time. The results show that transponder tracking with the Calypso system provides precision and accuracy below 1 mm in ideal clinical environments, which is comparable with other EM tracking systems. Similar to other systems, the tracking accuracy was affected by metallic distortion, which led to errors of up to 3.2 mm. The potential of the wireless transponder tracking technology for use in many future CAI applications can be regarded as extremely high.

  3. A rotating torus phantom for assessing color Doppler accuracy.

    PubMed

    Stewart, S F

    1999-10-01

    A rotating torus phantom was designed to assess the accuracy of color Doppler ultrasound. A thin rubber tube was filled with blood analog fluid and joined at the ends to form a torus, then mounted on a disk submerged in water and rotated at constant speeds by a motor. Flow visualization experiments and finite element analyses demonstrated that the fluid accelerates quickly to the speed of the torus and spins as a solid body. The actual fluid velocity was found to be dependent only on the motor speed and location of the sample volume. The phantom was used to assess the accuracy of Doppler-derived velocities during two-dimensional (2-D) color imaging using a commercial ultrasound system. The Doppler-derived velocities averaged 0.81 +/- 0.11 of the imposed velocity, with the variations significantly dependent on velocity, pulse-repetition frequency and wall filter frequency (p < 0.001). The torus phantom was found to have certain advantages over currently available Doppler accuracy phantoms: 1. It has a high maximum velocity; 2. it has low velocity gradients, simplifying the calibration of 2-D color Doppler; and 3. it uses a real moving fluid that gives a realistic backscatter signal. PMID:10576268
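
    The phantom's reference velocity follows from solid-body rotation, v = ωr; the hypothetical numbers in the sketch below simply illustrate how Doppler-derived readings can be expressed as a fraction of the imposed velocity.

    import numpy as np

    # Solid-body rotation: the true fluid speed at the sample volume is v = omega * r.
    rpm = 60.0                                   # hypothetical motor speed
    omega = 2.0 * np.pi * rpm / 60.0             # angular velocity (rad/s)
    radii = np.array([0.04, 0.05, 0.06])         # sample-volume radii (m), hypothetical
    v_true = omega * radii                       # imposed velocities (m/s)

    v_doppler = np.array([0.20, 0.26, 0.31])     # hypothetical color Doppler readings (m/s)
    ratio = v_doppler / v_true
    print("Doppler/true ratios:", np.round(ratio, 2), "mean:", round(float(ratio.mean()), 2))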

  4. Accuracy assessment of fluoroscopy-transesophageal echocardiography registration

    NASA Astrophysics Data System (ADS)

    Lang, Pencilla; Seslija, Petar; Bainbridge, Daniel; Guiraudon, Gerard M.; Jones, Doug L.; Chu, Michael W.; Holdsworth, David W.; Peters, Terry M.

    2011-03-01

    This study assesses the accuracy of a new transesophageal (TEE) ultrasound (US) fluoroscopy registration technique designed to guide percutaneous aortic valve replacement. In this minimally invasive procedure, a valve is inserted into the aortic annulus via a catheter. Navigation and positioning of the valve is guided primarily by intra-operative fluoroscopy. Poor anatomical visualization of the aortic root region can result in incorrect positioning, leading to heart valve embolization, obstruction of the coronary ostia and acute kidney injury. The use of TEE US images to augment intra-operative fluoroscopy provides significant improvements to image-guidance. Registration is achieved using an image-based TEE probe tracking technique and US calibration. TEE probe tracking is accomplished using a single-perspective pose estimation algorithm. Pose estimation from a single image allows registration to be achieved using only images collected in standard OR workflow. Accuracy of this registration technique is assessed using three models: a point target phantom, a cadaveric porcine heart with implanted fiducials, and in-vivo porcine images. Results demonstrate that registration can be achieved with an RMS error of less than 1.5mm, which is within the clinical accuracy requirements of 5mm. US-fluoroscopy registration based on single-perspective pose estimation demonstrates promise as a method for providing guidance to percutaneous aortic valve replacement procedures. Future work will focus on real-time implementation and a visualization system that can be used in the operating room.
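
    A minimal sketch of the kind of error summary reported here: the RMS distance between registration-predicted and ground-truth fiducial positions, compared against the 5 mm clinical requirement. The coordinates are hypothetical.

    import numpy as np

    # Hypothetical fiducial positions (mm): registration-predicted vs ground truth.
    predicted = np.array([[10.2, 4.9, 33.1], [22.0, 8.8, 30.5], [15.4, 12.1, 28.0]])
    truth = np.array([[10.0, 5.0, 32.5], [21.4, 9.0, 31.0], [15.0, 11.5, 28.6]])

    errors = np.linalg.norm(predicted - truth, axis=1)   # per-fiducial error (mm)
    rms = np.sqrt(np.mean(errors ** 2))
    print(f"per-fiducial errors (mm): {np.round(errors, 2)}")
    print(f"RMS error: {rms:.2f} mm (clinical requirement: < 5 mm)")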

  5. Performance and Accuracy of LAPACK's Symmetric Tridiagonal Eigensolvers

    SciTech Connect

    Demmel, Jim W.; Marques, Osni A.; Parlett, Beresford N.; Vomel, Christof

    2007-04-19

    We compare four algorithms from the latest LAPACK 3.1 release for computing eigenpairs of a symmetric tridiagonal matrix. These include QR iteration, bisection and inverse iteration (BI), the Divide-and-Conquer method (DC), and the method of Multiple Relatively Robust Representations (MR). Our evaluation considers speed and accuracy when computing all eigenpairs, and additionally subset computations. Using a variety of carefully selected test problems, our study includes a variety of today's computer architectures. Our conclusions can be summarized as follows. (1) DC and MR are generally much faster than QR and BI on large matrices. (2) MR almost always does the fewest floating point operations, but at a lower MFlop rate than all the other algorithms. (3) The exact performance of MR and DC strongly depends on the matrix at hand. (4) DC and QR are the most accurate algorithms with observed accuracy O(√n ε). The accuracy of BI and MR is generally O(n ε). (5) MR is preferable to BI for subset computations.
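
    As a hedged sketch, assuming SciPy's scipy.linalg.eigh_tridiagonal wrapper and its lapack_driver option, the snippet below reproduces the usual accuracy metrics for such comparisons, the scaled residual ||T Z - Z Lambda|| and the scaled loss of orthogonality ||Z^T Z - I||, for the QR-based ('stev') and MRRR ('stemr') drivers on a random tridiagonal matrix.

    import numpy as np
    from scipy.linalg import eigh_tridiagonal

    rng = np.random.default_rng(0)
    n = 500
    d = rng.standard_normal(n)          # diagonal of T
    e = rng.standard_normal(n - 1)      # off-diagonal of T
    T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
    eps = np.finfo(float).eps

    for driver in ("stev", "stemr"):    # 'stev' = QR iteration, 'stemr' = MRRR
        w, Z = eigh_tridiagonal(d, e, lapack_driver=driver)
        residual = np.linalg.norm(T @ Z - Z * w) / (np.linalg.norm(T) * n * eps)
        orthogonality = np.linalg.norm(Z.T @ Z - np.eye(n)) / (n * eps)
        print(f"{driver}: scaled residual {residual:.2f}, "
              f"scaled orthogonality loss {orthogonality:.2f}")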

  6. An Assessment of Citizen Contributed Ground Reference Data for Land Cover Map Accuracy Assessment

    NASA Astrophysics Data System (ADS)

    Foody, G. M.

    2015-08-01

    It is now widely accepted that an accuracy assessment should be part of a thematic mapping programme. Authoritative good or best practices for accuracy assessment have been defined but are often impractical to implement. Key reasons for this situation are linked to the ground reference data used in the accuracy assessment. Typically, it is a challenge to acquire a large sample of high-quality reference cases in accordance with the sampling designs specified as good practice, and the data collected are normally imperfect to some degree, limiting their value to an accuracy assessment that implicitly assumes a gold-standard reference. Citizen sensors have great potential to aid aspects of accuracy assessment. In particular, they may be able to act as a source of ground reference data that may, for example, reduce sample-size problems, but concerns about data quality remain. The relative strengths and limitations of citizen contributed data for accuracy assessment are reviewed in the context of the authoritative good practices defined for studies of land cover by remote sensing. The article will highlight some of the ways that citizen contributed data have been used in accuracy assessment as well as some of the problems that require further attention, and indicate some of the potential ways forward in the future.

  7. Assessing the accuracy of prediction algorithms for classification: an overview.

    PubMed

    Baldi, P; Brunak, S; Chauvin, Y; Andersen, C A; Nielsen, H

    2000-05-01

    We provide a unified overview of methods that currently are widely used to assess the accuracy of prediction algorithms, from raw percentages, through quadratic error measures, other distances, and correlation coefficients, to information-theoretic measures such as relative entropy and mutual information. We briefly discuss the advantages and disadvantages of each approach. For classification tasks, we derive new learning algorithms for the design of prediction systems by directly optimising the correlation coefficient. We observe and prove several results relating sensitivity and specificity of optimal systems. While the principles are general, we illustrate the applicability on specific problems such as protein secondary structure and signal peptide prediction. PMID:10871264
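
    As a worked example of the simplest measures surveyed (the counts are hypothetical), sensitivity, specificity, and the Matthews correlation coefficient can be computed directly from a binary confusion matrix:

    import math

    # Hypothetical binary confusion-matrix counts.
    TP, FP, TN, FN = 85, 10, 90, 15

    sensitivity = TP / (TP + FN)
    specificity = TN / (TN + FP)
    mcc = (TP * TN - FP * FN) / math.sqrt(
        (TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))
    print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}, MCC {mcc:.2f}")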

  8. APPLICATION OF A "VIRTUAL FIELD REFERENCE DATABASE" TO ASSESS LAND-COVER MAP ACCURACIES

    EPA Science Inventory

    An accuracy assessment was performed for the Neuse River Basin, NC land-cover/use
    (LCLU) mapping results using a "Virtual Field Reference Database (VFRDB)". The VFRDB was developed using field measurement and digital imagery (camera) data collected at 1,409 sites over a perio...

  9. The Social Accuracy Model of Interpersonal Perception: Assessing Individual Differences in Perceptive and Expressive Accuracy

    ERIC Educational Resources Information Center

    Biesanz, Jeremy C.

    2010-01-01

    The social accuracy model of interpersonal perception (SAM) is a componential model that estimates perceiver and target effects of different components of accuracy across traits simultaneously. For instance, Jane may be generally accurate in her perceptions of others and thus high in "perceptive accuracy"--the extent to which a particular…

  10. Assessing the accuracy of different simplified frictional rolling contact algorithms

    NASA Astrophysics Data System (ADS)

    Vollebregt, E. A. H.; Iwnicki, S. D.; Xie, G.; Shackleton, P.

    2012-01-01

    This paper presents an approach for assessing the accuracy of different frictional rolling contact theories. The main characteristic of the approach is that it takes a statistically oriented view. This yields a better insight into the behaviour of the methods in diverse circumstances (varying contact patch ellipticities, mixed longitudinal, lateral and spin creepages) than is obtained when only a small number of (basic) circumstances are used in the comparison. The range of contact parameters that occur for realistic vehicles and tracks is assessed using simulations with the Vampire vehicle system dynamics (VSD) package. This shows that larger values for the spin creepage occur rather frequently. Based on this, our approach is applied to typical cases for which railway VSD packages are used. The results show that the USETAB approach in particular, but also FASTSIM, gives considerably better results than the linear theory, Vermeulen-Johnson, Shen-Hedrick-Elkins and Polach methods when compared with the 'complete theory' of the CONTACT program.

  11. Accuracy Assessment of a Uav-Based Landslide Monitoring System

    NASA Astrophysics Data System (ADS)

    Peppa, M. V.; Mills, J. P.; Moore, P.; Miller, P. E.; Chambers, J. E.

    2016-06-01

    Landslides are hazardous events with often disastrous consequences. Monitoring landslides with observations of high spatio-temporal resolution can help mitigate such hazards. Mini unmanned aerial vehicles (UAVs) complemented by structure-from-motion (SfM) photogrammetry and modern per-pixel image matching algorithms can deliver a time-series of landslide elevation models in an automated and inexpensive way. This research investigates the potential of a mini UAV, equipped with a Panasonic Lumix DMC-LX5 compact camera, to provide surface deformations at acceptable levels of accuracy for landslide assessment. The study adopts a self-calibrating bundle adjustment-SfM pipeline using ground control points (GCPs). It evaluates misalignment biases and unresolved systematic errors that are transferred through the SfM process into the derived elevation models. To cross-validate the research outputs, results are compared to benchmark observations obtained by standard surveying techniques. The data is collected with 6 cm ground sample distance (GSD) and is shown to achieve planimetric and vertical accuracy of a few centimetres at independent check points (ICPs). The co-registration error of the generated elevation models is also examined in areas of stable terrain. Through this error assessment, the study estimates that the vertical sensitivity to real terrain change of the tested landslide is equal to 9 cm.

  12. Assessing Scientific Performance.

    ERIC Educational Resources Information Center

    Weiner, John M.; And Others

    1984-01-01

    A method for assessing scientific performance based on relationships displayed numerically in published documents is proposed and illustrated using published documents in pediatric oncology for the period 1979-1982. Contributions of a major clinical investigations group, the Childrens Cancer Study Group, are analyzed. Twenty-nine references are…

  13. Accuracy assessment of gridded precipitation datasets in the Himalayas

    NASA Astrophysics Data System (ADS)

    Khan, A.

    2015-12-01

    Accurate precipitation data are vital for hydro-climatic modelling and water resources assessments. Based on mass balance calculations and Turc-Budyko analysis, this study investigates the accuracy of twelve widely used precipitation gridded datasets for sub-basins in the Upper Indus Basin (UIB) in the Himalayas-Karakoram-Hindukush (HKH) region. These datasets are: 1) Global Precipitation Climatology Project (GPCP), 2) Climate Prediction Centre (CPC) Merged Analysis of Precipitation (CMAP), 3) NCEP / NCAR, 4) Global Precipitation Climatology Centre (GPCC), 5) Climatic Research Unit (CRU), 6) Asian Precipitation Highly Resolved Observational Data Integration Towards Evaluation of Water Resources (APHRODITE), 7) Tropical Rainfall Measuring Mission (TRMM), 8) European Reanalysis (ERA) interim data, 9) PRINCETON, 10) European Reanalysis-40 (ERA-40), 11) Willmott and Matsuura, and 12) WATCH Forcing Data based on ERA interim (WFDEI). Precipitation accuracy and consistency were assessed by physical mass balance involving the sum of annual measured flow, estimated actual evapotranspiration (average of 4 datasets), estimated glacier mass balance melt contribution (average of 4 datasets), and ground water recharge (average of 3 datasets), during 1999-2010. Mass balance assessment was complemented by Turc-Budyko non-dimensional analysis, where annual precipitation, measured flow and potential evapotranspiration (average of 5 datasets) data were used for the same period. Both analyses suggest that all tested precipitation datasets significantly underestimate precipitation in the Karakoram sub-basins. For the Hindukush and Himalayan sub-basins most datasets underestimate precipitation, except ERA-interim and ERA-40. The analysis indicates that for this large region with complicated terrain features and stark spatial precipitation gradients the reanalysis datasets have better consistency with flow measurements than datasets derived from records of only sparsely distributed climatic

  14. Accuracy assessment of contextual classification results for vegetation mapping

    NASA Astrophysics Data System (ADS)

    Thoonen, Guy; Hufkens, Koen; Borre, Jeroen Vanden; Spanhove, Toon; Scheunders, Paul

    2012-04-01

    A new procedure for quantitatively assessing the geometric accuracy of thematic maps, obtained from classifying hyperspectral remote sensing data, is presented. More specifically, the methodology is aimed at the comparison between results from any of the currently popular contextual classification strategies. The proposed procedure characterises the shapes of all objects in a classified image by defining an appropriate reference and a new quality measure. The results from the proposed procedure are represented in an intuitive way, by means of an error matrix, analogous to the confusion matrix used in traditional thematic accuracy representation. A suitable application for the methodology is vegetation mapping, where lots of closely related and spatially connected land cover types are to be distinguished. Consequently, the procedure is tested on a heathland vegetation mapping problem, related to Natura 2000 habitat monitoring. Object-based mapping and Markov Random Field classification results are compared, showing that the selected Markov Random Fields approach is more suitable for the fine-scale problem at hand, which is confirmed by the proposed procedure.

  15. Evaluating the effect of learning style and student background on self-assessment accuracy

    NASA Astrophysics Data System (ADS)

    Alaoutinen, Satu

    2012-06-01

    This study evaluates a new taxonomy-based self-assessment scale and examines factors that affect assessment accuracy and course performance. The scale is based on Bloom's Revised Taxonomy and is evaluated by comparing students' self-assessment results with course performance in a programming course. Correlation has been used to reveal possible connections between student information and both self-assessment and course performance. The results show that students can place their knowledge along the taxonomy-based scale quite well and the scale seems to fit engineering students' learning style. Advanced students assess themselves more accurately than novices. The results also show that reflective students performed better in programming than active students. The scale used in this study gives a more objective picture of students' knowledge than general scales and with modifications it can be used in classes other than programming.

  16. Large Area Crop Inventory Experiment (LACIE). Accuracy assessment report phase 1A, November - December 1974. [Kansas

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The author has identified the following significant results. Results of the accuracy assessment activity for Phase IA of LACIE indicated that (1) the 90/90 criterion could be reached if the degree of accuracy of the LACIE performance in Kansas could be equaled in other areas. (2) The classification of both wheat and nonwheat fields was significantly accurate for the three ITS segments analyzed. The wheat field classification accuracy varied for the segments. However, this was not so with respect to nonwheat fields. (3) Biophase as well as its interaction with segment location turned out to be an important factor for the classification performance. Analyst interpretation of segments for training the classifier was a significant error-contributing factor in the estimation of wheat acreage at both the field and the segment levels.

  17. Examination of standardized patient performance: Accuracy and consistency of six standardized patients over time

    PubMed Central

    Erby, Lori A.H.; Roter, Debra L.; Biesecker, Barbara B.

    2011-01-01

    Objective To explore the accuracy and consistency of standardized patient (SP) performance in the context of routine genetic counseling, focusing on elements beyond scripted case items including general communication style and affective demeanor. Methods One hundred seventy-seven genetic counselors were randomly assigned to counsel one of six SPs. Videotapes and transcripts of the sessions were analyzed to assess consistency of performance across four dimensions. Results Accuracy of script item presentation was high; 91% and 89% in the prenatal and cancer cases. However, there were statistically significant differences among SPs in the accuracy of presentation, general communication style, and some aspects of affective presentation. All SPs were rated as presenting with similarly high levels of realism. SP performance over time was generally consistent, with some small but statistically significant differences. Conclusion and practice implications These findings demonstrate that well-trained SPs can not only perform the factual elements of a case with high degrees of accuracy and realism; but they can also maintain sufficient levels of uniformity in general communication style and affective demeanor over time to support their use in even the demanding context of genetic counseling. Results indicate a need for an additional focus in training on consistency between different SPs. PMID:21094590

  18. Bayesian reclassification statistics for assessing improvements in diagnostic accuracy.

    PubMed

    Huang, Zhipeng; Li, Jialiang; Cheng, Ching-Yu; Cheung, Carol; Wong, Tien-Yin

    2016-07-10

    We propose a Bayesian approach to the estimation of the net reclassification improvement (NRI) and three versions of the integrated discrimination improvement (IDI) under the logistic regression model. Both NRI and IDI were proposed as numerical characterizations of accuracy improvement for diagnostic tests and were shown to retain certain practical advantages over analysis based on ROC curves and offer complementary information to the changes in area under the curve. Our development is a new contribution towards a Bayesian solution for the estimation of NRI and IDI, which eases the computational burden and increases flexibility. Our simulation results indicate that Bayesian estimation enjoys satisfactory performance comparable with frequentist estimation and achieves point estimation and credible interval construction simultaneously. We apply the methodology to real data from the Singapore Malay Eye Study. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26875442
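
    For orientation, a minimal sketch of the categorical NRI definition that such estimation targets: the net proportion of events reclassified upward plus the net proportion of non-events reclassified downward. The categories below are hypothetical, and the Bayesian machinery of the paper is not reproduced.

    import numpy as np

    # Hypothetical risk categories (0 = low, 1 = intermediate, 2 = high) assigned
    # by an old and a new model, split by observed outcome.
    old_ev = np.array([0, 1, 1, 2, 0, 1, 2, 1])
    new_ev = np.array([1, 2, 1, 2, 1, 1, 2, 2])
    old_ne = np.array([1, 2, 1, 0, 2, 1, 0, 1, 1, 2])
    new_ne = np.array([0, 1, 1, 0, 1, 1, 0, 0, 1, 1])

    nri_events = np.mean(new_ev > old_ev) - np.mean(new_ev < old_ev)
    nri_nonevents = np.mean(new_ne < old_ne) - np.mean(new_ne > old_ne)
    print(f"NRI = {nri_events + nri_nonevents:.2f} "
          f"(events {nri_events:+.2f}, non-events {nri_nonevents:+.2f})")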

  19. Accuracy assessment of a surface electromyogram decomposition system in human first dorsal interosseus muscle

    NASA Astrophysics Data System (ADS)

    Hu, Xiaogang; Rymer, William Z.; Suresh, Nina L.

    2014-04-01

    Objective. The aim of this study is to assess the accuracy of a surface electromyogram (sEMG) motor unit (MU) decomposition algorithm during low levels of muscle contraction. Approach. A two-source method was used to verify the accuracy of the sEMG decomposition system, by utilizing simultaneous intramuscular and surface EMG recordings from the human first dorsal interosseous muscle recorded during isometric trapezoidal force contractions. Spike trains from each recording type were decomposed independently utilizing two different algorithms, EMGlab and dEMG decomposition algorithms. The degree of agreement of the decomposed spike timings was assessed for three different segments of the EMG signals, corresponding to specified regions in the force task. A regression analysis was performed to examine whether certain properties of the sEMG and force signal can predict the decomposition accuracy. Main results. The average accuracy of successful decomposition among the 119 MUs that were common to both intramuscular and surface records was approximately 95%, and the accuracy was comparable between the different segments of the sEMG signals (i.e., force ramp-up versus steady state force versus combined). The regression function between the accuracy and properties of sEMG and force signals revealed that the signal-to-noise ratio of the action potential and stability in the action potential records were significant predictors of the surface decomposition accuracy. Significance. The outcomes of our study confirm the accuracy of the sEMG decomposition algorithm during low muscle contraction levels and provide confidence in the overall validity of the surface dEMG decomposition algorithm.
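
    A hedged sketch of the two-source comparison idea (not the EMGlab or dEMG code): discharge times from two independent decompositions are matched within a +/-0.5 ms tolerance and the agreement rate is reported. The spike times are hypothetical.

    import numpy as np

    def agreement_rate(times_a, times_b, tol=0.0005):
        """Fraction of discharges in times_a with a match in times_b within
        +/- tol seconds (simple nearest-neighbour check)."""
        times_b = np.sort(np.asarray(times_b))
        matched = 0
        for t in times_a:
            idx = np.searchsorted(times_b, t)
            neighbours = times_b[max(idx - 1, 0):idx + 1]
            if neighbours.size and np.min(np.abs(neighbours - t)) <= tol:
                matched += 1
        return matched / len(times_a)

    # Hypothetical discharge times (s) of one motor unit from the intramuscular
    # and surface decompositions.
    intramuscular = [0.102, 0.210, 0.321, 0.430, 0.543, 0.655]
    surface = [0.1022, 0.2104, 0.3207, 0.4303, 0.5428, 0.7000]
    print(f"agreement: {agreement_rate(intramuscular, surface):.0%}")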

  20. Accuracy of referrals for visual assessment in a stroke population

    PubMed Central

    Rowe, F J

    2011-01-01

    Purpose To evaluate accuracy of referrals from multidisciplinary stroke teams requesting visual assessments. Patients and methods Multicentre prospective study undertaken in 20 acute Trust hospitals. Stroke survivors referred with suspected visual difficulty were recruited. Standardised screening/referral and investigation forms were used to document data on referral signs and symptoms, plus type and extent of visual impairment. Results Referrals for 799 patients were reviewed: 60% men, 40% women. Mean age at onset of stroke was 69 years (SD 14: range 1–94 years). Signs recorded by referring staff were nil in 58% and positive in the remainder. Symptoms were recorded in 87%. Diagnosis of visual impairment was nil in 8% and positive in the remainder. Sensitivity of referrals (on the basis of signs detected) was calculated as 0.42 with specificity of 0.52. Kappa statistical evaluation of agreement between referral and diagnosis of visual impairment was 0.428 (SE 0.017: 95% confidence interval of −0.048, 0.019). Conclusion More than half of patient referrals were made despite no signs of visual difficulty being recorded by the referring staff. Visual impairment of varying severity was diagnosed in 92% of stroke survivors referred for visual assessment. Referrals were made based predominantly on visual symptoms and because of formal orthoptic liaison in Trusts involved. PMID:21127506
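
    For illustration, the sketch below computes sensitivity, specificity, and Cohen's kappa from a 2x2 referral-versus-diagnosis table. The counts are chosen only to roughly reproduce the reported sensitivity and specificity, so the resulting kappa will not match the published value.

    import numpy as np

    # Illustrative 2x2 table (not the study's raw data).
    #                   impaired  not impaired
    table = np.array([[309,       31],    # signs recorded by referrer
                      [426,       33]])   # no signs recorded

    tp, fp = table[0]
    fn, tn = table[1]
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)

    n = table.sum()
    p_obs = (tp + tn) / n                                          # observed agreement
    p_exp = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2  # chance agreement
    kappa = (p_obs - p_exp) / (1 - p_exp)
    print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}, kappa {kappa:.3f}")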

  1. Assessing Uncertainties in Accuracy of Landuse Classification Using Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Hsiao, L.-H.; Cheng, K.-S.

    2013-05-01

    Multispectral remote sensing images are widely used for landuse/landcover (LULC) classification. Performance of such classification practices is normally evaluated through the confusion matrix which summarizes the producer's and user's accuracies and the overall accuracy. However, the confusion matrix is based on the classification results of a set of multi-class training data. As a result, the classification accuracies are heavily dependent on the representativeness of the training data set, and it is imperative for practitioners to assess the uncertainties of LULC classification in order to fully understand the classification results. In addition, the Gaussian-based maximum likelihood classifier (GMLC) is widely applied in many practices of LULC classification. The GMLC assumes the classification features jointly form a multivariate normal distribution, whereas, in reality, many features of individual landcover classes have been found to be non-Gaussian. Direct application of GMLC will certainly affect the classification results. In a pilot study conducted in Taipei and its vicinity, we tackled these two problems by firstly transforming the original training data set to a corresponding data set which forms a multivariate normal distribution before conducting LULC classification using GMLC. We then applied the bootstrap resampling technique to generate a large set of multi-class resampled training data from the multivariate normal training data set. LULC classification was then implemented for each resampled training data set using the GMLC. Finally, the uncertainties of LULC classification accuracies were assessed by evaluating the means and standard deviations of the producer's and user's accuracies of individual LULC classes which were derived from a set of confusion matrices. Results of this study demonstrate that Gaussian-transformation of the original training data achieved better classification accuracies and the bootstrap resampling technique is
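
    A hedged sketch of the bootstrap idea on synthetic data (not the Taipei study data or code): resample the training set, refit a simple Gaussian maximum-likelihood classifier, and summarise the spread of the resulting overall accuracies; the same loop could collect producer's and user's accuracies per class.

    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(1)

    def make_data(n_per_class):
        # Three synthetic classes in a 2-band feature space.
        means = [np.array([0.0, 0.0]), np.array([2.5, 2.5]), np.array([0.0, 3.0])]
        X = np.vstack([rng.multivariate_normal(m, np.eye(2), n_per_class) for m in means])
        y = np.repeat(np.arange(3), n_per_class)
        return X, y

    def gmlc_fit_predict(X_train, y_train, X_test):
        # Gaussian maximum-likelihood classification with per-class mean/covariance.
        classes = np.unique(y_train)
        scores = [multivariate_normal(X_train[y_train == c].mean(axis=0),
                                      np.cov(X_train[y_train == c].T),
                                      allow_singular=True).logpdf(X_test)
                  for c in classes]
        return classes[np.argmax(scores, axis=0)]

    X_train, y_train = make_data(100)
    X_test, y_test = make_data(200)

    accuracies = []
    for _ in range(200):  # bootstrap replicates of the training set
        idx = rng.integers(0, len(y_train), len(y_train))
        y_pred = gmlc_fit_predict(X_train[idx], y_train[idx], X_test)
        accuracies.append(np.mean(y_pred == y_test))

    print(f"overall accuracy: mean {np.mean(accuracies):.3f}, "
          f"std {np.std(accuracies):.3f} over 200 bootstrap samples")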

  2. Inertial Measures of Motion for Clinical Biomechanics: Comparative Assessment of Accuracy under Controlled Conditions – Changes in Accuracy over Time

    PubMed Central

    Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian

    2015-01-01

    Background Interest in 3D inertial motion tracking devices (AHRS) has been growing rapidly among the biomechanical community. Although the convenience of such tracking devices seems to open a whole new world of possibilities for evaluation in clinical biomechanics, its limitations haven’t been extensively documented. The objectives of this study are: 1) to assess the change in absolute and relative accuracy of multiple units of 3 commercially available AHRS over time; and 2) to identify different sources of errors affecting AHRS accuracy and to document how they may affect the measurements over time. Methods This study used an instrumented Gimbal table on which AHRS modules were carefully attached and put through a series of velocity-controlled sustained motions including 2 minutes motion trials (2MT) and 12 minutes multiple dynamic phases motion trials (12MDP). Absolute accuracy was assessed by comparison of the AHRS orientation measurements to those of an optical gold standard. Relative accuracy was evaluated using the variation in relative orientation between modules during the trials. Findings Both absolute and relative accuracy decreased over time during 2MT. 12MDP trials showed a significant decrease in accuracy over multiple phases, but accuracy could be enhanced significantly by resetting the reference point and/or compensating for initial Inertial frame estimation reference for each phase. Interpretation The variation in AHRS accuracy observed between the different systems and with time can be attributed in part to the dynamic estimation error, but also and foremost, to the ability of AHRS units to locate the same Inertial frame. Conclusions Mean accuracies obtained under the Gimbal table sustained conditions of motion suggest that AHRS are promising tools for clinical mobility assessment under constrained conditions of use. However, improvement in magnetic compensation and alignment between AHRS modules are desirable in order for AHRS to reach their

  3. Empathic accuracy for happiness in the daily lives of older couples: Fluid cognitive performance predicts pattern accuracy among men.

    PubMed

    Hülür, Gizem; Hoppmann, Christiane A; Rauers, Antje; Schade, Hannah; Ram, Nilam; Gerstorf, Denis

    2016-08-01

    Correctly identifying others' emotional states is a central cognitive component of empathy. We examined the role of fluid cognitive performance for empathic accuracy for happiness in the daily lives of 86 older couples (mean relationship length = 45 years; mean age = 75 years) on up to 42 occasions over 7 consecutive days. Men performing better on the Digit Symbol test were more accurate in identifying ups and downs of their partner's happiness. A similar association was not found for women. We discuss the potential role of fluid cognitive performance and other individual, partner, and situation characteristics for empathic accuracy. PMID:27362351

  4. Performance Testing using Silicon Devices - Analysis of Accuracy: Preprint

    SciTech Connect

    Sengupta, M.; Gotseff, P.; Myers, D.; Stoffel, T.

    2012-06-01

    Accurately determining PV module performance in the field requires accurate measurements of solar irradiance reaching the PV panel (i.e., Plane-of-Array - POA Irradiance) with known measurement uncertainty. Pyranometers are commonly based on thermopile or silicon photodiode detectors. Silicon detectors, including PV reference cells, are an attractive choice for reasons that include faster time response (10 us) than thermopile detectors (1 s to 5 s), lower cost and maintenance. The main drawback of silicon detectors is their limited spectral response. Therefore, to determine broadband POA solar irradiance, a pyranometer calibration factor that converts the narrowband response to broadband is required. Normally this calibration factor is a single number determined under clear-sky conditions with respect to a broadband reference radiometer. The pyranometer is then used for various scenarios including varying airmass, panel orientation and atmospheric conditions. This would not be an issue if all irradiance wavelengths that form the broadband spectrum responded uniformly to atmospheric constituents. Unfortunately, the scattering and absorption signature varies widely with wavelength and the calibration factor for the silicon photodiode pyranometer is not appropriate for other conditions. This paper reviews the issues that will arise from the use of silicon detectors for PV performance measurement in the field based on measurements from a group of pyranometers mounted on a 1-axis solar tracker. Also we will present a comparison of simultaneous spectral and broadband measurements from silicon and thermopile detectors and estimated measurement errors when using silicon devices for both array performance and resource assessment.

  5. Assessing the Accuracy of Landscape-Scale Phenology Products

    NASA Astrophysics Data System (ADS)

    Morisette, Jeffrey T.; Nightingale, Joanne; Nickeson, Jaime

    2010-11-01

    An International Workshop on the Validation of Satellite-Based Phenology Products; Dublin, Ireland, 18 June 2010; A 1-day international workshop on the accuracy assessment of phenology products derived from satellite observations of the land surface was held at Trinity College Dublin. This was in conjunction with the larger 4-day Phenology 2010 conference. Phenology is the study of recurring plant and animal life cycle stages (such as leafing and flowering, maturation of agricultural plants, emergence of insects, and migration of birds). The workshop brought together producers of continental- to global-scale phenology products based on satellite data, as well as providers of field observations and tower-mounted near-surface imaging sensors whose data are useful for evaluating the satellite products. The meeting was held under the auspices of the Committee on Earth Observing Satellites (CEOS) Land Product Validation (LPV) subgroup. The mission of LPV is to foster quantitative validation of high-level global land products derived from remotely sensed data and relay results that are relevant to users.

  6. Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.

    2012-01-01

    Description of NASA Airborne Topographic Mapper (ATM) lidar calibration procedures including analysis of the accuracy and consistency of various ATM instrument parameters and the resulting influence on topographic elevation measurements. The ATM elevation measurements from a nominal operating altitude of 500 to 750 m above the ice surface were found to have: horizontal accuracy 74 cm, horizontal precision 14 cm, vertical accuracy 6.6 cm, vertical precision 3 cm.

  7. Accuracy assessment of high-rate GPS measurements for seismology

    NASA Astrophysics Data System (ADS)

    Elosegui, P.; Davis, J. L.; Ekström, G.

    2007-12-01

    Analysis of GPS measurements with a controlled laboratory system, built to simulate the ground motions caused by tectonic earthquakes and other transient geophysical signals such as glacial earthquakes, enables us to assess the technique of high-rate GPS. The root-mean-square (rms) position error of this system when undergoing realistic simulated seismic motions is 0.05 mm, with maximum position errors of 0.1 mm, thus providing "ground truth" GPS displacements. We have acquired an extensive set of high-rate GPS measurements while inducing seismic motions on a GPS antenna mounted on this system with a temporal spectrum similar to real seismic events. We found that, for a particular 15-min-long test event, the rms error of the 1-Hz GPS position estimates was 2.5 mm, with maximum position errors of 10 mm, and the error spectrum of the GPS estimates was approximately flicker noise. These results may however represent a best-case scenario since they were obtained over a short (~10 m) baseline, thereby greatly mitigating baseline-dependent errors, and when the number and distribution of satellites on the sky was good. For example, we have determined that the rms error can increase by a factor of 2–3 as the GPS constellation changes throughout the day, with an average value of 3.5 mm for eight identical, hourly-spaced, consecutive test events. The rms error also increases with increasing baseline, as one would expect, with an average rms error for a ~1400 km baseline of 9 mm. We will present an assessment of the accuracy of high-rate GPS based on these measurements, discuss the implications of this study for seismology, and describe new applications in glaciology.

  8. Mapping with Small UAS: A Point Cloud Accuracy Assessment

    NASA Astrophysics Data System (ADS)

    Toth, Charles; Jozkow, Grzegorz; Grejner-Brzezinska, Dorota

    2015-12-01

    Interest in using inexpensive Unmanned Aerial System (UAS) technology for topographic mapping has recently significantly increased. Small UAS platforms equipped with consumer grade cameras can easily acquire high-resolution aerial imagery allowing for dense point cloud generation, followed by surface model creation and orthophoto production. In contrast to conventional airborne mapping systems, UAS has limited ground coverage due to low flying height and limited flying time, yet it offers an attractive alternative to high performance airborne systems, as the cost of the sensors and platform, and the flight logistics, is relatively low. In addition, UAS is better suited for small area data acquisitions and to acquire data in difficult to access areas, such as urban canyons or densely built-up environments. The main question with respect to the use of UAS is whether the inexpensive consumer sensors installed in UAS platforms can provide the geospatial data quality comparable to that provided by conventional systems. This study aims at the performance evaluation of the current practice of UAS-based topographic mapping by reviewing the practical aspects of sensor configuration, georeferencing and point cloud generation, including comparisons between sensor types and processing tools. The main objective is to provide accuracy characterization and practical information for selecting and using UAS solutions in general mapping applications. The analysis is based on statistical evaluation as well as visual examination of experimental data acquired by a Bergen octocopter with three different image sensor configurations, including a GoPro HERO3+ Black Edition, a Nikon D800 DSLR and a Velodyne HDL-32. In addition, georeferencing data of varying quality were acquired and evaluated. The optical imagery was processed by using three commercial point cloud generation tools. Comparing point clouds created by active and passive sensors by using different quality sensors, and finally

  9. An assessment of reservoir storage change accuracy from SWOT

    NASA Astrophysics Data System (ADS)

    Clark, Elizabeth; Moller, Delwyn; Lettenmaier, Dennis

    2013-04-01

    The anticipated Surface Water and Ocean Topography (SWOT) satellite mission will provide water surface height and areal extent measurements for terrestrial water bodies at an unprecedented accuracy with essentially global coverage with a 22-day repeat cycle. These measurements will provide a unique opportunity to observe storage changes in naturally occurring lakes, as well as manmade reservoirs. Given political constraints on the sharing of water information, international data bases of reservoir characteristics, such as the Global Reservoir and Dam Database, are limited to the largest reservoirs for which countries have voluntarily provided information. Impressive efforts have been made to combine currently available altimetry data with satellite-based imagery of water surface extent; however, these data sets are limited to large reservoirs located on an altimeter's flight track. SWOT's global coverage and simultaneous measurement of height and water surface extent remove, in large part, the constraint of location relative to flight path. Previous studies based on Arctic lakes suggest that SWOT will be able to provide a noisy, but meaningful, storage change signal for lakes as small as 250 m x 250 m. Here, we assess the accuracy of monthly storage change estimates over 10 reservoirs in the U.S. and consider the plausibility of estimating total storage change. Published maps of reservoir bathymetry were combined with a historical time series of daily storage to produce daily time series of maps of water surface elevation. Next, these time series were then sampled based on realistic SWOT orbital parameters and noise characteristics to create a time series of synthetic SWOT observations of water surface elevation and extent for each reservoir. We then plotted area versus elevation for the true values and for the synthetic SWOT observations. For each reservoir, a curve was fit to the synthetic SWOT observations, and its integral was used to estimate total storage
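
    A minimal sketch, with hypothetical bathymetry and noise, of the storage-change step described: fit an area-elevation relationship to height/extent observations and integrate it between two water levels, so that the storage change is approximately the integral of A(h) dh.

    import numpy as np

    # Hypothetical "SWOT-like" observations: water surface elevation h (m) and
    # water surface area A (km^2), with noise added to the areal extents.
    rng = np.random.default_rng(2)
    h_obs = np.linspace(95.0, 110.0, 12)
    a_obs = 0.8 * (h_obs - 90.0) ** 1.5 + rng.normal(0.0, 1.0, h_obs.size)

    # Fit a smooth area-elevation curve (quadratic, for illustration only).
    area = np.poly1d(np.polyfit(h_obs, a_obs, deg=2))

    # Storage change between two observed levels: dV = integral of A(h) dh.
    h1, h2 = 98.0, 104.0
    h_grid = np.linspace(h1, h2, 201)
    a_grid = area(h_grid)
    delta_v = np.sum(0.5 * (a_grid[1:] + a_grid[:-1]) * np.diff(h_grid))  # km^2 * m
    print(f"storage change between {h1} m and {h2} m: {delta_v / 1000.0:.3f} km^3")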

  10. Assessment Of Accuracies Of Remote-Sensing Maps

    NASA Technical Reports Server (NTRS)

    Card, Don H.; Strong, Laurence L.

    1992-01-01

    Report describes study of accuracies of classifications of picture elements in map derived by digital processing of Landsat-multispectral-scanner imagery of coastal plain of Arctic National Wildlife Refuge. Accuracies of portions of map analyzed with help of statistical sampling procedure called "stratified plurality sampling", in which all picture elements in given cluster classified in stratum to which plurality of them belong.

  11. PRECISION AND ACCURACY ASSESSMENTS FOR STATE AND LOCAL AIR MONITORING NETWORKS--1988

    EPA Science Inventory

    Precision and accuracy data obtained from state and local agencies (SLAMS) during 1988 are analyzed. Pooled site variances and average biases, which are relevant quantities to both precision and accuracy determinations, are statistically compared within and between states to assess ...

  12. Assessing the accuracy of self-reported self-talk

    PubMed Central

    Brinthaupt, Thomas M.; Benson, Scott A.; Kang, Minsoo; Moore, Zaver D.

    2015-01-01

    As with most kinds of inner experience, it is difficult to assess actual self-talk frequency beyond self-reports, given the often hidden and subjective nature of the phenomenon. The Self-Talk Scale (STS; Brinthaupt et al., 2009) is a self-report measure of self-talk frequency that has been shown to possess acceptable reliability and validity. However, no research using the STS has examined the accuracy of respondents’ self-reports. In the present paper, we report a series of studies directly examining the measurement of self-talk frequency and functions using the STS. The studies examine ways to validate self-reported self-talk by (1) comparing STS responses from 6 weeks earlier to recent experiences that might precipitate self-talk, (2) using experience sampling methods to determine whether STS scores are related to recent reports of self-talk over a period of a week, and (3) comparing self-reported STS scores to those provided by a significant other who rated the target on the STS. Results showed that (1) overall self-talk scores, particularly self-critical and self-reinforcing self-talk, were significantly related to reports of context-specific self-talk; (2) high STS scorers reported talking to themselves significantly more often during recent events compared to low STS scorers, and, contrary to expectations, (3) friends reported less agreement than strangers in their self-other self-talk ratings. Implications of the results for the validity of the STS and for measuring self-talk are presented. PMID:25999887

  13. Accuracy Assessment of Coastal Topography Derived from Uav Images

    NASA Astrophysics Data System (ADS)

    Long, N.; Millescamps, B.; Pouget, F.; Dumon, A.; Lachaussée, N.; Bertin, X.

    2016-06-01

    To monitor coastal environments, an Unmanned Aerial Vehicle (UAV) is a low-cost and easy-to-use solution to enable data acquisition with high temporal frequency and spatial resolution. Compared to Light Detection And Ranging (LiDAR) or Terrestrial Laser Scanning (TLS), this solution produces a Digital Surface Model (DSM) with similar accuracy. To evaluate the DSM accuracy on a coastal environment, a campaign was carried out with a flying wing (eBee) combined with a digital camera. Using the Photoscan software and the photogrammetry process (Structure From Motion algorithm), a DSM and an orthomosaic were produced. The DSM accuracy is estimated by comparison with GNSS surveys. Two parameters are tested: the influence of the methodology (number and distribution of Ground Control Points, GCPs) and the influence of spatial image resolution (4.6 cm vs 2 cm). The results show that this solution is able to reproduce the topography of a coastal area with a high vertical accuracy (< 10 cm). The georeferencing of the DSM requires a homogeneous distribution and a large number of GCPs. The accuracy is correlated with the number of GCPs (using 19 GCPs instead of 10 reduces the difference by 4 cm); the required accuracy should depend on the research question. Last, in this particular environment, the presence of very small water surfaces on the sand bank does not allow the accuracy to be improved when the ground sample distance of the images is decreased.

  14. Classification method, spectral diversity, band combination and accuracy assessment evaluation for urban feature detection

    NASA Astrophysics Data System (ADS)

    Erener, A.

    2013-04-01

    Automatic extraction of urban features from high resolution satellite images is one of the main applications in remote sensing. It is useful for wide scale applications, namely: urban planning, urban mapping, disaster management, GIS (geographic information systems) updating, and military target detection. One common approach to detecting urban features from high resolution images is to use automatic classification methods. This paper has four main objectives with respect to detecting buildings. The first objective is to compare the performance of the most notable supervised classification algorithms, including the maximum likelihood classifier (MLC) and the support vector machine (SVM). In this experiment the primary consideration is the impact of kernel configuration on the performance of the SVM. The second objective of the study is to explore the suitability of integrating additional bands, namely the first principal component (1st PC) and the intensity image, into the original data for the classification approaches. The performance evaluation of the classification results is done using two different accuracy assessment methods: pixel-based and object-based approaches, which reflects the third aim of the study. The objective here is to demonstrate the differences in the evaluation of the accuracies of the classification methods. For consistency, the same set of ground truth data, which was produced by labeling the building boundaries in the GIS environment, is used for accuracy assessment. Lastly, the fourth aim is to experimentally evaluate variation in the accuracy of the classifiers for six different real situations in order to identify the impact of spatial and spectral diversity on the results. The method is applied to Quickbird images for various urban complexity levels, extending from simple to complex urban patterns. The simple surface type includes a regular urban area with low density and systematic buildings with brick rooftops. The complex surface type involves almost all
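
    To make the kernel-configuration comparison concrete, the sketch below trains SVMs with different kernels on simulated "pixels" and reports their test accuracy. The four-band feature vectors and building labels are invented, and scikit-learn stands in for whatever software the study actually used.

    ```python
    # Illustrative only: comparing SVM kernel configurations on synthetic pixels.
    # The spectral bands and labels are simulated, not QuickBird data.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 4))                        # 4 spectral bands per pixel
    y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.5).astype(int)  # building / non-building

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    for kernel in ("linear", "rbf", "poly"):
        clf = SVC(kernel=kernel, gamma="scale").fit(X_tr, y_tr)
        print(kernel, round(accuracy_score(y_te, clf.predict(X_te)), 3))
    ```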

  15. Changes in Memory Prediction Accuracy: Age and Performance Effects

    ERIC Educational Resources Information Center

    Pearman, Ann; Trujillo, Amanda

    2013-01-01

    Memory performance predictions are subjective estimates of possible memory task performance. The purpose of this study was to examine possible factors related to changes in word list performance predictions made by younger and older adults. Factors included memory self-efficacy, actual performance, and perceptions of performance. The current study…

  16. Bootstrap performance profiles in stochastic algorithms assessment

    SciTech Connect

    Costa, Lino; Espírito Santo, Isabel A.C.P.; Oliveira, Pedro

    2015-03-10

    Optimization with stochastic algorithms has become a relevant research field. Due to its stochastic nature, its assessment is not straightforward and involves integrating accuracy and precision. Performance profiles for the mean do not show the trade-off between accuracy and precision, and parametric stochastic profiles require strong distributional assumptions and are limited to the mean performance for a large number of runs. In this work, bootstrap performance profiles are used to compare stochastic algorithms for different statistics. This technique allows the estimation of the sampling distribution of almost any statistic even with small samples. Multiple comparison profiles are presented for more than two algorithms. The advantages and drawbacks of each assessment methodology are discussed.
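
    The bootstrap step at the heart of such profiles can be sketched as follows; the statistic, the number of resamples, and the two lists of run results are all assumptions chosen for illustration, not the authors' data or code.

    ```python
    # Minimal bootstrap sketch: estimate the sampling distribution of a statistic
    # (here the median best objective value) from a small number of solver runs.
    import numpy as np

    def bootstrap_stat(run_results, stat=np.median, n_boot=2000, seed=0):
        rng = np.random.default_rng(seed)
        runs = np.asarray(run_results, float)
        reps = [stat(rng.choice(runs, size=runs.size, replace=True))
                for _ in range(n_boot)]
        return np.percentile(reps, [2.5, 50, 97.5])   # 95% interval and median

    # Hypothetical best objective values from 10 runs of two stochastic algorithms:
    alg_a = [0.42, 0.39, 0.47, 0.40, 0.44, 0.38, 0.41, 0.45, 0.43, 0.40]
    alg_b = [0.35, 0.52, 0.33, 0.58, 0.36, 0.49, 0.34, 0.55, 0.37, 0.51]
    print("A:", bootstrap_stat(alg_a))
    print("B:", bootstrap_stat(alg_b))   # similar median, much wider interval
    ```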

  17. Georgia's Teacher Performance Assessment

    ERIC Educational Resources Information Center

    Fenton, Anne Marie; Wetherington, Pamela

    2016-01-01

    Like most states, Georgia until recently depended on an assessment of content knowledge to award teaching licenses, along with a licensure recommendation from candidates' educator preparation programs. While the content assessment reflected candidates' grasp of subject matter, licensure decisions did not hinge on direct, statewide assessment of…

  18. Pixels, Blocks of Pixels, and Polygons: Choosing a Spatial Unit for Thematic Accuracy Assessment

    EPA Science Inventory

    Pixels, polygons, and blocks of pixels are all potentially viable spatial assessment units for conducting an accuracy assessment. We develop a statistical population-based framework to examine how the spatial unit chosen affects the outcome of an accuracy assessment. The populati...

  19. Does it Make a Difference? Investigating the Assessment Accuracy of Teacher Tutors and Student Tutors

    ERIC Educational Resources Information Center

    Herppich, Stephanie; Wittwer, Jorg; Nuckles, Matthias; Renkl, Alexander

    2013-01-01

    Tutors often have difficulty with accurately assessing a tutee's understanding. However, little is known about whether the professional expertise of tutors influences their assessment accuracy. In this study, the authors examined the accuracy with which 21 teacher tutors and 25 student tutors assessed a tutee's understanding of the human…

  20. Accuracy of virtual models in the assessment of maxillary defects

    PubMed Central

    Kurşun, Şebnem; Kılıç, Cenk; Özen, Tuncer

    2015-01-01

    Purpose This study aimed to assess the reliability of measurements performed on three-dimensional (3D) virtual models of maxillary defects obtained using cone-beam computed tomography (CBCT) and 3D optical scanning. Materials and Methods Mechanical cavities simulating maxillary defects were prepared on the hard palate of nine cadavers. Images were obtained using a CBCT unit at three different fields-of-views (FOVs) and voxel sizes: 1) 60×60 mm FOV, 0.125 mm3 (FOV60); 2) 80×80 mm FOV, 0.160 mm3 (FOV80); and 3) 100×100 mm FOV, 0.250 mm3 (FOV100). Superimposition of the images was performed using software called VRMesh Design. Automated volume measurements were conducted, and differences between surfaces were demonstrated. Silicon impressions obtained from the defects were also scanned with a 3D optical scanner. Virtual models obtained using VRMesh Design were compared with impressions obtained by scanning silicon models. Gold standard volumes of the impression models were then compared with CBCT and 3D scanner measurements. Further, the general linear model was used, and the significance was set to p=0.05. Results A comparison of the results obtained by the observers and methods revealed the p values to be smaller than 0.05, suggesting that the measurement variations were caused by both methods and observers along with the different cadaver specimens used. Further, the 3D scanner measurements were closer to the gold standard measurements when compared to the CBCT measurements. Conclusion In the assessment of artificially created maxillary defects, the 3D scanner measurements were more accurate than the CBCT measurements. PMID:25793180

  1. Evaluating the performance versus accuracy tradeoff for abstract models

    NASA Astrophysics Data System (ADS)

    McGraw, Robert M.; Clark, Joseph E.

    2001-09-01

    While the military and commercial communities are increasingly reliant on simulation to reduce cost, developing simulations of their complex systems can itself be costly. To reduce simulation costs, simulation developers have turned toward collaborative simulation, reuse of existing simulation models, and model abstraction techniques that shorten both simulation development time and simulation execution time. This paper focuses on model abstraction techniques that can be applied to reduce simulation execution and development time, and on the effects those techniques have on simulation accuracy.

  2. Assessment of Relative Accuracy of AHN-2 Laser Scanning Data Using Planar Features

    PubMed Central

    van der Sande, Corné; Soudarissanane, Sylvie; Khoshelham, Kourosh

    2010-01-01

    AHN-2 is the second part of the Actueel Hoogtebestand Nederland project, which concerns the acquisition of high-resolution altimetry data over the entire Netherlands using airborne laser scanning. The accuracy assessment of laser altimetry data usually relies on comparing corresponding tie elements, often points or lines, in the overlapping strips. This paper proposes a new approach to strip adjustment and accuracy assessment of AHN-2 data by using planar features. In the proposed approach a transformation is estimated between two overlapping strips by minimizing the distances between points in one strip and their corresponding planes in the other. The planes and the corresponding points are extracted in an automated segmentation process. The point-to-plane distances are used as observables in an estimation model, whereby the parameters of a transformation between the two strips and their associated quality measures are estimated. We demonstrate the performance of the method for the accuracy assessment of the AHN-2 dataset over Zeeland province of The Netherlands. The results show vertical offsets of up to 4 cm between the overlapping strips, and horizontal offsets ranging from 2 cm to 34 cm. PMID:22163650
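
    A least-squares version of the point-to-plane idea, reduced to estimating a rigid translation between two strips, might look like the sketch below. The plane normals, plane points and strip points are synthetic, and the full AHN-2 adjustment naturally estimates a richer transformation together with quality measures.

    ```python
    # Hedged sketch (not the AHN-2 production code): estimate a 3-D translation
    # between two overlapping strips by least squares, using unit plane normals
    # n_i and plane points q_i from strip 1 and corresponding points p_i from
    # strip 2. All arrays below are synthetic.
    import numpy as np

    def estimate_translation(normals, plane_pts, strip_pts):
        """Solve min_t sum_i (n_i . (p_i + t - q_i))^2 for the offset t."""
        n = np.asarray(normals, float)
        b = np.einsum("ij,ij->i", n, np.asarray(plane_pts) - np.asarray(strip_pts))
        t, *_ = np.linalg.lstsq(n, b, rcond=None)
        return t

    # Synthetic check: recover an imposed offset of (0.10, -0.05, 0.04) m.
    rng = np.random.default_rng(2)
    normals = rng.normal(size=(500, 3))
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    q = rng.uniform(-50, 50, size=(500, 3))
    true_t = np.array([0.10, -0.05, 0.04])
    p = q - true_t + 0.01 * rng.normal(size=(500, 3))   # strip-2 points with noise
    print(estimate_translation(normals, q, p))           # approximately true_t
    ```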

  3. Assessment of relative accuracy of AHN-2 laser scanning data using planar features.

    PubMed

    van der Sande, Corné; Soudarissanane, Sylvie; Khoshelham, Kourosh

    2010-01-01

    AHN-2 is the second part of the Actueel Hoogtebestand Nederland project, which concerns the acquisition of high-resolution altimetry data over the entire Netherlands using airborne laser scanning. The accuracy assessment of laser altimetry data usually relies on comparing corresponding tie elements, often points or lines, in the overlapping strips. This paper proposes a new approach to strip adjustment and accuracy assessment of AHN-2 data by using planar features. In the proposed approach a transformation is estimated between two overlapping strips by minimizing the distances between points in one strip and their corresponding planes in the other. The planes and the corresponding points are extracted in an automated segmentation process. The point-to-plane distances are used as observables in an estimation model, whereby the parameters of a transformation between the two strips and their associated quality measures are estimated. We demonstrate the performance of the method for the accuracy assessment of the AHN-2 dataset over Zeeland province of The Netherlands. The results show vertical offsets of up to 4 cm between the overlapping strips, and horizontal offsets ranging from 2 cm to 34 cm. PMID:22163650

  4. Constraint on Absolute Accuracy of Metacomprehension Assessments: The Anchoring and Adjustment Model vs. the Standards Model

    ERIC Educational Resources Information Center

    Kwon, Heekyung

    2011-01-01

    The objective of this study is to provide a systematic account of three typical phenomena surrounding absolute accuracy of metacomprehension assessments: (1) the absolute accuracy of predictions is typically quite low; (2) there exist individual differences in absolute accuracy of predictions as a function of reading skill; and (3) postdictions…

  5. ASSESSING THE ACCURACY OF NATIONAL LAND COVER DATASET AREA ESTIMATES AT MULTIPLE SPATIAL EXTENTS

    EPA Science Inventory

    Site specific accuracy assessments provide fine-scale evaluation of the thematic accuracy of land use/land cover (LULC) datasets; however, they provide little insight into LULC accuracy across varying spatial extents. Additionally, LULC data are typically used to describe lands...

  6. Assessing the relative accuracies of two screening tests in the presence of verification bias.

    PubMed

    Zhou, X H; Higgs, R E

    Epidemiological studies of dementia often use two-stage designs because of the relatively low prevalence of the disease and the high cost of ascertaining a diagnosis. The first stage of a two-stage design assesses a large sample with a screening instrument. Then, the subjects are grouped according to their performance on the screening instrument, such as poor, intermediate and good performers. The second stage involves a more extensive diagnostic procedure, such as a clinical assessment, for a particular subset of the study sample selected from each of these groups. However, not all selected subjects have the clinical diagnosis because some subjects may refuse and others are unable to be clinically assessed. Thus, some subjects screened do not have a clinical diagnosis. Furthermore, whether a subject has a clinical diagnosis depends not only on the screening test result but also on other factors, and the sampling fractions for the diagnosis are unknown and have to be estimated. One of the goals in these studies is to assess the relative accuracies of two screening tests. Any analysis using only verified cases may result in verification bias. In this paper, we propose the use of two bootstrap methods to construct confidence intervals for the difference in the accuracies of two screening tests in the presence of verification bias. We illustrate the application of the proposed methods to a simulated data set from a real two-stage study of dementia that has motivated this research. PMID:10844728

  7. Comparative Accuracy Assessment of Global Land Cover Datasets Using Existing Reference Data

    NASA Astrophysics Data System (ADS)

    Tsendbazar, N. E.; de Bruin, S.; Mora, B.; Herold, M.

    2014-12-01

    Land cover is a key variable to monitor the impact of human and natural processes on the biosphere. As one of the Essential Climate Variables, land cover observations are used for climate models and several other applications. Remote sensing technologies have enabled the generation of several global land cover (GLC) products that are based on different data sources and methods (e.g. legends). Moreover, the reported map accuracies result from varying validation strategies. Such differences make the comparison of the GLC products challenging and create confusion on selecting suitable datasets for different applications. This study aims to conduct comparative accuracy assessment of GLC datasets (LC-CCI 2005, MODIS 2005, and Globcover 2005) using the Globcover 2005 reference data which can represent the thematic differences of these GLC maps. This GLC reference dataset provides LCCS classifier information for 3 main land cover types for each sample plot. The LCCS classifier information was translated according to the legends of the GLC maps analysed. The preliminary analysis showed some challenges in LCCS classifier translation arising from missing important classifier information, differences in class definition between the legends and absence of class proportion of main land cover types. To overcome these issues, we consolidated the entire reference data (i.e. 3857 samples distributed at global scale). Then the GLC maps and the reference dataset were harmonized into 13 general classes to perform the comparative accuracy assessments. To help users on selecting suitable GLC dataset(s) for their application, we conducted the map accuracy assessments considering different users' perspectives: climate modelling, bio-diversity assessments, agriculture monitoring, and map producers. This communication will present the method and the results of this study and provide a set of recommendations to the GLC map producers and users with the aim to facilitate the use of GLC maps.

  8. Accuracy of Nurse-Performed Lung Ultrasound in Patients With Acute Dyspnea

    PubMed Central

    Mumoli, Nicola; Vitale, Josè; Giorgi-Pierfranceschi, Matteo; Cresci, Alessandra; Cei, Marco; Basile, Valentina; Brondi, Barbara; Russo, Elisa; Giuntini, Lucia; Masi, Lorenzo; Cocciolo, Massimo; Dentali, Francesco

    2016-01-01

    In clinical practice lung ultrasound (LUS) is becoming an easy and reliable noninvasive tool for the evaluation of dyspnea. The aim of this study was to assess the accuracy of nurse-performed LUS, in particular, in the diagnosis of acute cardiogenic pulmonary congestion. We prospectively evaluated all the consecutive patients admitted for dyspnea in our Medicine Department between April and July 2014. At admission, serum brain natriuretic peptide (BNP) levels were measured and LUS was performed by trained nurses blinded to clinical and laboratory data. The accuracy of nurse-performed LUS alone and combined with BNP for the diagnosis of acute cardiogenic dyspnea was calculated. Two hundred twenty-six patients (41.6% men, mean age 78.7 ± 12.7 years) were included in the study. Nurse-performed LUS alone had a sensitivity of 95.3% (95% CI: 92.6–98.1%), a specificity of 88.2% (95% CI: 84.0–92.4%), a positive predictive value of 87.9% (95% CI: 83.7–92.2%) and a negative predictive value of 95.5% (95% CI: 92.7–98.2%). The combination of nurse-performed LUS with BNP level (cut-off 400 pg/mL) resulted in a higher sensitivity (98.9%, 95% CI: 97.4–100%), negative predictive value (98.8%, 95% CI: 97.2–100%), and corresponding negative likelihood ratio (0.01, 95% CI: 0.0, 0.07). Nurse-performed LUS had a good accuracy in the diagnosis of acute cardiogenic dyspnea. Use of this technique in combination with BNP seems to be useful in ruling out cardiogenic dyspnea. Other studies are warranted to confirm our preliminary findings and to establish the role of this tool in other settings. PMID:26945396
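
    The indices reported above follow directly from a 2x2 diagnostic table, as the short sketch below shows. The true/false positive and negative counts are invented for illustration, and the confidence intervals use a simple normal approximation rather than whatever exact method the study applied.

    ```python
    # Illustration of sensitivity, specificity, PPV and NPV from a 2x2 table;
    # the counts are made up, not reconstructed from the study.
    import math

    def diagnostic_indices(tp, fp, fn, tn):
        def ci(p, n):  # point estimate with a normal-approximation 95% interval
            half = 1.96 * math.sqrt(p * (1 - p) / n) if n else float("nan")
            return p, max(0.0, p - half), min(1.0, p + half)
        return {
            "sensitivity": ci(tp / (tp + fn), tp + fn),
            "specificity": ci(tn / (tn + fp), tn + fp),
            "PPV": ci(tp / (tp + fp), tp + fp),
            "NPV": ci(tn / (tn + fn), tn + fn),
        }

    for name, value in diagnostic_indices(tp=101, fp=14, fn=5, tn=106).items():
        print(name, ["%.3f" % v for v in value])
    ```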

  9. Accuracy of Nurse-Performed Lung Ultrasound in Patients With Acute Dyspnea: A Prospective Observational Study.

    PubMed

    Mumoli, Nicola; Vitale, Josè; Giorgi-Pierfranceschi, Matteo; Cresci, Alessandra; Cei, Marco; Basile, Valentina; Brondi, Barbara; Russo, Elisa; Giuntini, Lucia; Masi, Lorenzo; Cocciolo, Massimo; Dentali, Francesco

    2016-03-01

    In clinical practice lung ultrasound (LUS) is becoming an easy and reliable noninvasive tool for the evaluation of dyspnea. The aim of this study was to assess the accuracy of nurse-performed LUS, in particular, in the diagnosis of acute cardiogenic pulmonary congestion. We prospectively evaluated all the consecutive patients admitted for dyspnea in our Medicine Department between April and July 2014. At admission, serum brain natriuretic peptide (BNP) levels were measured and LUS was performed by trained nurses blinded to clinical and laboratory data. The accuracy of nurse-performed LUS alone and combined with BNP for the diagnosis of acute cardiogenic dyspnea was calculated. Two hundred twenty-six patients (41.6% men, mean age 78.7 ± 12.7 years) were included in the study. Nurse-performed LUS alone had a sensitivity of 95.3% (95% CI: 92.6-98.1%), a specificity of 88.2% (95% CI: 84.0-92.4%), a positive predictive value of 87.9% (95% CI: 83.7-92.2%) and a negative predictive value of 95.5% (95% CI: 92.7-98.2%). The combination of nurse-performed LUS with BNP level (cut-off 400 pg/mL) resulted in a higher sensitivity (98.9%, 95% CI: 97.4-100%), negative predictive value (98.8%, 95% CI: 97.2-100%), and corresponding negative likelihood ratio (0.01, 95% CI: 0.0, 0.07). Nurse-performed LUS had a good accuracy in the diagnosis of acute cardiogenic dyspnea. Use of this technique in combination with BNP seems to be useful in ruling out cardiogenic dyspnea. Other studies are warranted to confirm our preliminary findings and to establish the role of this tool in other settings. PMID:26945396

  10. Sleep restriction and serving accuracy in performance tennis players, and effects of caffeine.

    PubMed

    Reyner, L A; Horne, J A

    2013-08-15

    Athletes often lose sleep on the night before a competition. Whilst it is unlikely that sleep loss will impair sports mostly relying on strength and endurance, little is known about potential effects on sports involving psychomotor performance necessitating judgement and accuracy, rather than speed, as in tennis for example, and where caffeine is 'permitted'. Two studies of serving accuracy in semi-professional tennis players were undertaken, comparing 5 h (33%) sleep restriction with normal sleep. Testing (14:00 h-16:00 h) comprised 40 serves hit diagonally over the net into a 1.8 m × 1.1 m 'service box'. Study 1 (8 m; 8 f) was within-Ss, counterbalanced (normal versus sleep restriction). Study 2 (6 m; 6 f; different Ss) comprised three conditions (Latin square), identical to Study 1 except for an extra sleep restriction condition with 80 mg caffeine versus placebo in a sugar-free drink, given (double blind) 30 min before testing. Both studies showed significant impairments to serving accuracy after sleep restriction. Caffeine at this dose had no beneficial effect. Study 1 also assessed gender differences, with women significantly poorer under all conditions, and non-significant indications that women were more impaired by sleep restriction (also seen in Study 2). We conclude that adequate sleep is essential for best performance of this type of skill in tennis players and that caffeine is no substitute for 'lost sleep'. PMID:23916998

  11. Assessing accuracy in citizen science-based plant phenology monitoring

    NASA Astrophysics Data System (ADS)

    Fuccillo, Kerissa K.; Crimmins, Theresa M.; de Rivera, Catherine E.; Elder, Timothy S.

    2015-07-01

    In the USA, thousands of volunteers are engaged in tracking plant and animal phenology through a variety of citizen science programs for the purpose of amassing spatially and temporally comprehensive datasets useful to scientists and resource managers. The quality of these observations and their suitability for scientific analysis, however, remains largely unevaluated. We aimed to evaluate the accuracy of plant phenology observations collected by citizen scientist volunteers following protocols designed by the USA National Phenology Network (USA-NPN). Phenology observations made by volunteers receiving several hours of formal training were compared to those collected independently by a professional ecologist. Approximately 11,000 observations were recorded by 28 volunteers over the course of one field season. Volunteers consistently identified phenophases correctly (91 % overall) for the 19 species observed. Volunteers demonstrated greatest overall accuracy identifying unfolded leaves, ripe fruits, and open flowers. Transitional accuracy decreased for some species/phenophase combinations (70 % average), and accuracy varied significantly by phenophase and species ( p < 0.0001). Volunteers who submitted fewer observations over the period of study did not exhibit a higher error rate than those who submitted more total observations. Overall, these results suggest that volunteers with limited training can provide reliable observations when following explicit, standardized protocols. Future studies should investigate different observation models (i.e., group/individual, online/in-person training) over subsequent seasons with multiple expert comparisons to further substantiate the ability of these monitoring programs to supply accurate broadscale datasets capable of answering pressing ecological questions about global change.

  12. Assessing genomic selection prediction accuracy in a dynamic barley breeding

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection is a method to improve quantitative traits in crops and livestock by estimating breeding values of selection candidates using phenotype and genome-wide marker data sets. Prediction accuracy has been evaluated through simulation and cross-validation, however validation based on prog...

  13. Bilingual Language Assessment: A Meta-Analysis of Diagnostic Accuracy

    ERIC Educational Resources Information Center

    Dollaghan, Christine A.; Horner, Elizabeth A.

    2011-01-01

    Purpose: To describe quality indicators for appraising studies of diagnostic accuracy and to report a meta-analysis of measures for diagnosing language impairment (LI) in bilingual Spanish-English U.S. children. Method: The authors searched electronically and by hand to locate peer-reviewed English-language publications meeting inclusion criteria;…

  14. 360-degree physician performance assessment.

    PubMed

    Dubinsky, Isser; Jennings, Kelly; Greengarten, Moshe; Brans, Amy

    2010-01-01

    Few jurisdictions have a robust common approach to assessing the quantitative and qualitative dimensions of physician performance. In this article, we examine the need for 360-degree physician performance assessment and review the literature supporting comprehensive physician assessment. An evidence-based, "best practice" approach to the development of a 360-degree physician performance assessment framework is presented, including an overview of a tool kit to support implementation. The focus of the framework is to support physician career planning and to enhance the quality of patient care. Finally, the legal considerations related to implementing 360-degree physician performance assessment are explored. PMID:20357549

  15. Performances. Assessment Resource Kit (ARK).

    ERIC Educational Resources Information Center

    Forster, Margaret; Masters, Geoff

    Performance assessment is the assessment of students engaged in an activity. It is the on-the-spot evaluation of a performance, behavior, or interaction. Ordinarily, there is no concrete product that can be judged at a later date. In Developmental Assessment, teachers monitor student progress against a preconstructed map of developing skills,…

  16. Positioning accuracy assessment for the 4GEO/5IGSO/2MEO constellation of COMPASS

    NASA Astrophysics Data System (ADS)

    Zhou, ShanShi; Cao, YueLing; Zhou, JianHua; Hu, XiaoGong; Tang, ChengPan; Liu, Li; Guo, Rui; He, Feng; Chen, JunPing; Wu, Bin

    2012-12-01

    Determined to become a new member of the well-established GNSS family, COMPASS (or BeiDou-2) is developing its capabilities to provide high accuracy positioning services. Two positioning modes are investigated in this study to assess the positioning accuracy of COMPASS' 4GEO/5IGSO/2MEO constellation. Precise Point Positioning (PPP) for geodetic users and real-time positioning for common navigation users are utilized. To evaluate PPP accuracy, coordinate time series repeatability and discrepancies with GPS' precise positioning are computed. Experiments show that COMPASS PPP repeatability for the east, north and up components of a receiver within mainland China is better than 2 cm, 2 cm and 5 cm, respectively. Apparent systematic offsets of several centimeters exist between COMPASS precise positioning and GPS precise positioning, indicating errors remaining in the treatments of COMPASS measurement and dynamic models and reference frame differences existing between two systems. For common positioning users, COMPASS provides both open and authorized services with rapid differential corrections and integrity information available to authorized users. Our assessment shows that in open service positioning accuracy of dual-frequency and single-frequency users is about 5 m and 6 m (RMS), respectively, which may be improved to about 3 m and 4 m (RMS) with the addition of differential corrections. Less accurate Signal In Space User Ranging Error (SIS URE) and Geometric Dilution of Precision (GDOP) contribute to the relatively inferior accuracy of COMPASS as compared to GPS. Since the deployment of the remaining 1 GEO and 2 MEO is not able to significantly improve GDOP, the performance gap could only be overcome either by the use of differential corrections or improvement of the SIS URE, or both.

  17. The analysis accuracy assessment of CORINE land cover in the Iberian coast

    NASA Astrophysics Data System (ADS)

    Grullón, Yraida R.; Alhaddad, Bahaaeddin; Cladera, Josep R.

    2009-09-01

    Corine land cover 2000 (CLC2000) is a project jointly managed by the Joint Research Centre (JRC) and the European Environment Agency (EEA). Its aim is to update the Corine land cover database in Europe for the year 2000. Landsat-7 Enhanced Thematic Mapper (ETM) satellite images were used for the update and were acquired within the framework of the Image2000 project. Knowledge of land status through CORINE Land Cover mapping is of great importance for studying the interaction of land cover and land use categories at the European scale. This paper presents the accuracy assessment methodology designed and implemented to validate the Iberian Coast CORINE Land Cover 2000 cartography. It presents an implementation of a new methodological concept for land cover data production, object-based classification with automatic generalization, to assess the thematic accuracy of CLC2000 by means of an independent data source: the land cover database is compared with reference data derived from visual interpretation of high-resolution satellite imagery for sample areas. In our case study, the existing object-based classifications are supported with digital maps and attribute databases. From the quality tests performed, we computed the overall accuracy and the Kappa coefficient. We focus on the development of a methodology based on classification and generalization analysis for built-up areas. The study can be divided into these fundamental steps: extract artificial areas from land use classifications based on Landsat and SPOT images; manually interpret high-resolution multispectral images; determine the homogeneity of artificial areas by a generalization process; and apply the overall accuracy, Kappa coefficient and special grid (fishnet) tests for quality control. Finally, the paper illustrates the accuracy of the CORINE dataset based on the above steps.
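
    The two quality measures named in the abstract, overall accuracy and the Kappa coefficient, can be computed from an error (confusion) matrix as in the sketch below; the 3x3 matrix is a made-up example, not CLC2000 validation data.

    ```python
    # Overall accuracy and Cohen's kappa from a confusion matrix (rows = map
    # classes, columns = reference classes). The matrix is a synthetic example.
    import numpy as np

    def overall_accuracy_and_kappa(cm):
        cm = np.asarray(cm, float)
        n = cm.sum()
        po = np.trace(cm) / n                       # observed (overall) accuracy
        pe = (cm.sum(0) * cm.sum(1)).sum() / n**2   # chance agreement
        return po, (po - pe) / (1 - pe)

    cm = [[120, 10,  5],
          [  8, 95, 12],
          [  4,  9, 87]]
    print(overall_accuracy_and_kappa(cm))
    ```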

  18. [Assessment of precision and accuracy of digital surface photogrammetry with the DSP 400 system].

    PubMed

    Krimmel, M; Kluba, S; Dietz, K; Reinert, S

    2005-03-01

    The objective of the present study was to evaluate the precision and accuracy of facial anthropometric measurements obtained through digital 3-D surface photogrammetry with the DSP 400 system in comparison to traditional 2-D photogrammetry. Fifty plaster casts of cleft infants were imaged and 21 standard anthropometric measurements were obtained. For precision assessment the measurements were performed twice in a subsample. Accuracy was determined by comparison of direct measurements and indirect 2-D and 3-D image measurements. Precision of digital surface photogrammetry was almost as good as direct anthropometry and clearly better than 2-D photogrammetry. Measurements derived from 3-D images showed better congruence to direct measurements than from 2-D photos. Digital surface photogrammetry with the DSP 400 system is sufficiently precise and accurate for craniofacial anthropometric examinations. PMID:15832575

  19. Rectal cancer staging: Multidetector-row computed tomography diagnostic accuracy in assessment of mesorectal fascia invasion

    PubMed Central

    Ippolito, Davide; Drago, Silvia Girolama; Franzesi, Cammillo Talei; Fior, Davide; Sironi, Sandro

    2016-01-01

    AIM: To assess the diagnostic accuracy of multidetector-row computed tomography (MDCT), as compared with conventional magnetic resonance imaging (MRI), in identifying mesorectal fascia (MRF) invasion in rectal cancer patients. METHODS: Ninety-one patients with biopsy-proven rectal adenocarcinoma referred for thoracic and abdominal CT staging were enrolled in this study. The contrast-enhanced MDCT scans were performed on a 256-row scanner (ICT, Philips) with the following acquisition parameters: tube voltage 120 kV, tube current 150-300 mAs. Imaging data were reviewed as axial images and as multiplanar reconstructions (MPRs) along the rectal tumor axis. The MRI study, performed at 1.5 T with a dedicated phased-array multicoil, included multiplanar T2 and axial T1 sequences and diffusion-weighted images (DWI). Axial and MPR CT images were independently compared to MRI, and MRF involvement was determined. The diagnostic accuracy of both modalities was compared and statistically analyzed. RESULTS: According to MRI, the MRF was involved in 51 patients and not involved in 40 patients. DWI allowed the tumor to be recognized as a focal mass with high signal intensity on high b-value images, compared with the signal of the normal adjacent rectal wall or with the lower signal intensity of the tissue background. The number of patients correctly staged by the native axial CT images was 71 out of 91 (41 with involved MRF; 30 with uninvolved MRF), while with the MPRs 80 patients were correctly staged (45 with involved MRF; 35 with uninvolved MRF). Local tumor staging by MDCT agreed with that by MRI: axial CT images obtained a sensitivity and specificity of 80.4% and 75%, a positive predictive value (PPV) of 80.4%, a negative predictive value (NPV) of 75% and an accuracy of 78%, while with the MPRs the sensitivity and specificity increased to 88% and 87.5%, the PPV was 90%, the NPV 85.36% and the accuracy 88%. MPR images showed higher diagnostic accuracy, in terms of MRF involvement, than native axial images

  20. Accuracy assessment of modeling architectural structures and details using terrestrial laser scanning

    NASA Astrophysics Data System (ADS)

    Kedzierski, M.; Walczykowski, P.; Orych, A.; Czarnecka, P.

    2015-08-01

    One of the most important aspects when performing architectural documentation of cultural heritage structures is the accuracy of both the data and the products which are generated from these data: documentation in the form of 3D models or vector drawings. The paper describes an assessment of the accuracy of modelling data acquired using a terrestrial phase scanner in relation to the density of a point cloud representing the surface of different types of construction materials typical for cultural heritage structures. This analysis includes the impact of the scanning geometry: the incidence angle of the laser beam and the scanning distance. For the purposes of this research, a test field consisting of samples of different types of construction materials (brick, wood, plastic, plaster, a ceramic tile, sheet metal) was built. The study involved conducting measurements at different angles and from a range of distances for chosen scanning densities. Data, acquired in the form of point clouds, were then filtered and modelled. An accuracy assessment of the 3D model was conducted by fitting it with the point cloud. The reflection intensity of each type of material was also analyzed to determine which construction materials have the highest and which the lowest reflection coefficients, and in turn how this variable changes for different scanning parameters. Additionally, measurements were taken of a fragment of a building in order to compare the results obtained in laboratory conditions with those taken in field conditions.

  1. Simply Performance Assessment

    ERIC Educational Resources Information Center

    McLaughlin, Cheryl A.; McLaughlin, Felecia C.; Pringle, Rose M.

    2013-01-01

    This article presents the experiences of Miss Felecia McLaughlin, a fourth-grade teacher from the island of Jamaica who used the model proposed by Bass et al. (2009) to assess conceptual understanding of four of the six types of simple machines while encouraging collaboration through the creation of learning teams. Students had an opportunity to…

  2. Combining accuracy assessment of land-cover maps with environmental monitoring programs

    USGS Publications Warehouse

    Stehman, S.V.; Czaplewski, R.L.; Nusser, S.M.; Yang, L.; Zhu, Z.

    2000-01-01

    A scientifically valid accuracy assessment of a large-area, land-cover map is expensive. Environmental monitoring programs offer a potential source of data to partially defray the cost of accuracy assessment while still maintaining the statistical validity. In this article, three general strategies for combining accuracy assessment and environmental monitoring protocols are described. These strategies range from a fully integrated accuracy assessment and environmental monitoring protocol, to one in which the protocols operate nearly independently. For all three strategies, features critical to using monitoring data for accuracy assessment include compatibility of the land-cover classification schemes, precisely co-registered sample data, and spatial and temporal compatibility of the map and reference data. Two monitoring programs, the National Resources Inventory (NRI) and the Forest Inventory and Monitoring (FIM), are used to illustrate important features for implementing a combined protocol.

  3. Beyond Assessment: Performance Assessments in Teacher Education

    ERIC Educational Resources Information Center

    Chung, Ruth R.

    2008-01-01

    Over the last decade, teacher performance assessments (TPAs) have begun to find appeal in the context of teacher education programs and teacher licensing for their innovative ways of assessing teacher knowledge and skills but primarily for their potential to promote teacher learning and reflective teaching. Studies of preservice teachers who have…

  4. Supporting Reform through Performance Assessment.

    ERIC Educational Resources Information Center

    Kitchen, Richard; Cherrington, April; Gates, Joanne; Hitchings, Judith; Majka, Maria; Merk, Michael; Trubow, George

    2002-01-01

    Describes the impact of a performance assessment project on six teachers' teaching at Borel Middle School in the San Mateo/Foster City School District in California. Reports positive gains in student performance on the tasks over three years. (YDS)

  5. Tailoring Inlet Flow to Enable High Accuracy Compressor Performance Measurements

    NASA Astrophysics Data System (ADS)

    Brossman, John R.; Smith, Natalie R.; Talalayev, Anton; Key, Nicole L.

    2011-12-01

    To accomplish the research goals of capturing the effects of blade row interactions on compressor performance, small changes in performance must be measurable. This also requires axi-symmetric flow so that measuring one passage accurately captures the phenomena occurring in all passages. Thus, uniform inlet flow is a necessity. The original front-driven compressor had non-uniform temperature at the inlet. Additional challenges in controlling shaft speed to within tight tolerances were associated with the use of a viscous fluid coupling. Thus, a new electric motor, with variable frequency drive speed control was implemented. To address the issues with the inlet flow, the compressor is now driven from the rear resulting in improved inlet flow uniformity. This paper presents the design choices of the new layout in addition to the preliminary performance data of the compressor and an uncertainty analysis.

  6. Assessing the Accuracy of Alaska National Hydrography Data for Mapping and Science

    NASA Astrophysics Data System (ADS)

    Arundel, S. T.; Yamamoto, K. H.; Mantey, K.; Vinyard-Houx, J.; Miller-Corbett, C. D.

    2012-12-01

    In July, 2011, the National Geospatial Program embarked on a large-scale Alaska Topographic Mapping Initiative. Maps will be published through the USGS US Topo program. Mapping of the state requires an understanding of the spatial quality of the National Hydrography Dataset (NHD), which is the hydrographic source for the US Topo. The NHD in Alaska was originally produced from topographic maps at 1:63,360 scale. It is critical to determine whether the NHD is accurate enough to be represented at the targeted map scale of the US Topo (1:25,000). Concerns are the spatial accuracy of data and the density of the stream network. Unsuitably low accuracy can be a result of the lower positional accuracy standards required for the original 1:63,360 scale mapping, temporal changes in water features, or any combination of these factors. Insufficient positional accuracy results in poor vertical integration with data layers of higher positional accuracy. Poor integration is readily apparent on the US Topo, particularly relative to current imagery and elevation data. In Alaska, current IFSAR-derived digital terrain models meet positional accuracy requirements for 1:24,000-scale mapping. Initial visual assessments indicate a wide range in the quality of fit between features in NHD and the IFSAR. However, no statistical analysis had been performed to quantify NHD feature accuracy. Determining the absolute accuracy is cost prohibitive, because of the need to collect independent, well-defined test points for such analysis; however, quantitative analysis of relative positional error is a feasible alternative. The purpose of this study is to determine the baseline accuracy of Alaska NHD pertinent to US Topo production, and to recommend reasonable guidelines and costs for NHD improvement and updates. A second goal is to detect error trends that might help identify areas or features where data improvements are most needed. There are four primary objectives of the study: 1. Choose study

  7. Assessing expected accuracy of probe vehicle travel time reports

    SciTech Connect

    Hellinga, B.; Fu, L.

    1999-12-01

    The use of probe vehicles to provide estimates of link travel times has been suggested as a means of obtaining travel times within signalized networks for use in advanced travel information systems. Past research in the literature has reached contradictory conclusions regarding the expected accuracy of these probe-based estimates, and consequently has estimated different levels of market penetration of probe vehicles required to sustain accurate data within an advanced traveler information system. This paper examines the effect of sampling bias on the accuracy of the probe estimates. An analytical expression is derived on the basis of queuing theory to prove that bias in arrival time distributions and/or in the proportion of probes associated with each link departure turning movement will lead to a systematic bias in the sample estimate of the mean delay. Subsequently, the potential for and impact of sampling bias on a signalized link is examined by simulating an arterial corridor. The analytical derivation and the simulation analysis show that the reliability of probe-based average link travel times is highly affected by sampling bias. Furthermore, this analysis shows that the contradictory conclusions of previous research are directly related to the presence or absence of sample bias.
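
    A toy simulation, under assumptions chosen only for illustration (fixed cycle, deterministic delay during red, no queue discharge), shows the mechanism: probes whose arrival times over-represent part of the cycle yield a biased estimate of the mean link delay.

    ```python
    # Toy sampling-bias demonstration, not the paper's queuing model: delay at a
    # signal depends on arrival time within the cycle, so probes that
    # over-represent early arrivals overestimate the mean delay.
    import numpy as np

    rng = np.random.default_rng(3)
    cycle, red = 90.0, 45.0                       # cycle and red times (s)

    def delay(arrival):                           # simplistic delay during red only
        return np.where(arrival < red, red - arrival, 0.0)

    arrivals = rng.uniform(0, cycle, 100_000)     # uniform arrivals over the cycle
    true_mean = delay(arrivals).mean()

    # Biased probe sample: probes drawn mostly from the first half of the cycle.
    biased = rng.uniform(0, cycle / 2, 500)
    print(true_mean, delay(biased).mean())        # probe estimate is biased high
    ```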

  8. Accuracy assessment of a mobile terrestrial lidar survey at Padre Island National Seashore

    USGS Publications Warehouse

    Lim, Samsung; Thatcher, Cindy A.; Brock, John C.; Kimbrow, Dustin R.; Danielson, Jeffrey J.; Reynolds, B.J.

    2013-01-01

    The higher point density and mobility of terrestrial laser scanning (light detection and ranging (lidar)) is desired when extremely detailed elevation data are needed for mapping vertically orientated complex features such as levees, dunes, and cliffs, or when highly accurate data are needed for monitoring geomorphic changes. Mobile terrestrial lidar scanners have the capability for rapid data collection on a larger spatial scale compared with tripod-based terrestrial lidar, but few studies have examined the accuracy of this relatively new mapping technology. For this reason, we conducted a field test at Padre Island National Seashore of a mobile lidar scanner mounted on a sport utility vehicle and integrated with a position and orientation system. The purpose of the study was to assess the vertical and horizontal accuracy of data collected by the mobile terrestrial lidar system, which is georeferenced to the Universal Transverse Mercator coordinate system and the North American Vertical Datum of 1988. To accomplish the study objectives, independent elevation data were collected by conducting a high-accuracy global positioning system survey to establish the coordinates and elevations of 12 targets spaced throughout the 12 km transect. These independent ground control data were compared to the lidar scanner-derived elevations to quantify the accuracy of the mobile lidar system. The performance of the mobile lidar system was also tested at various vehicle speeds and scan density settings (e.g. field of view and linear point spacing) to estimate the optimal parameters for desired point density. After adjustment of the lever arm parameters, the final point cloud accuracy was 0.060 m (east), 0.095 m (north), and 0.053 m (height). The very high density of the resulting point cloud was sufficient to map fine-scale topographic features, such as the complex shape of the sand dunes.

  9. A Multilaboratory Comparison of Calibration Accuracy and the Performance of External References in Analytical Ultracentrifugation

    PubMed Central

    Zhao, Huaying; Ghirlando, Rodolfo; Alfonso, Carlos; Arisaka, Fumio; Attali, Ilan; Bain, David L.; Bakhtina, Marina M.; Becker, Donald F.; Bedwell, Gregory J.; Bekdemir, Ahmet; Besong, Tabot M. D.; Birck, Catherine; Brautigam, Chad A.; Brennerman, William; Byron, Olwyn; Bzowska, Agnieszka; Chaires, Jonathan B.; Chaton, Catherine T.; Cölfen, Helmut; Connaghan, Keith D.; Crowley, Kimberly A.; Curth, Ute; Daviter, Tina; Dean, William L.; Díez, Ana I.; Ebel, Christine; Eckert, Debra M.; Eisele, Leslie E.; Eisenstein, Edward; England, Patrick; Escalante, Carlos; Fagan, Jeffrey A.; Fairman, Robert; Finn, Ron M.; Fischle, Wolfgang; de la Torre, José García; Gor, Jayesh; Gustafsson, Henning; Hall, Damien; Harding, Stephen E.; Cifre, José G. Hernández; Herr, Andrew B.; Howell, Elizabeth E.; Isaac, Richard S.; Jao, Shu-Chuan; Jose, Davis; Kim, Soon-Jong; Kokona, Bashkim; Kornblatt, Jack A.; Kosek, Dalibor; Krayukhina, Elena; Krzizike, Daniel; Kusznir, Eric A.; Kwon, Hyewon; Larson, Adam; Laue, Thomas M.; Le Roy, Aline; Leech, Andrew P.; Lilie, Hauke; Luger, Karolin; Luque-Ortega, Juan R.; Ma, Jia; May, Carrie A.; Maynard, Ernest L.; Modrak-Wojcik, Anna; Mok, Yee-Foong; Mücke, Norbert; Nagel-Steger, Luitgard; Narlikar, Geeta J.; Noda, Masanori; Nourse, Amanda; Obsil, Tomas; Park, Chad K.; Park, Jin-Ku; Pawelek, Peter D.; Perdue, Erby E.; Perkins, Stephen J.; Perugini, Matthew A.; Peterson, Craig L.; Peverelli, Martin G.; Piszczek, Grzegorz; Prag, Gali; Prevelige, Peter E.; Raynal, Bertrand D. E.; Rezabkova, Lenka; Richter, Klaus; Ringel, Alison E.; Rosenberg, Rose; Rowe, Arthur J.; Rufer, Arne C.; Scott, David J.; Seravalli, Javier G.; Solovyova, Alexandra S.; Song, Renjie; Staunton, David; Stoddard, Caitlin; Stott, Katherine; Strauss, Holger M.; Streicher, Werner W.; Sumida, John P.; Swygert, Sarah G.; Szczepanowski, Roman H.; Tessmer, Ingrid; Toth, Ronald T.; Tripathy, Ashutosh; Uchiyama, Susumu; Uebel, Stephan F. W.; Unzai, Satoru; Gruber, Anna Vitlin; von Hippel, Peter H.; Wandrey, Christine; Wang, Szu-Huan; Weitzel, Steven E.; Wielgus-Kutrowska, Beata; Wolberger, Cynthia; Wolff, Martin; Wright, Edward; Wu, Yu-Sung; Wubben, Jacinta M.; Schuck, Peter

    2015-01-01

    Analytical ultracentrifugation (AUC) is a first principles based method to determine absolute sedimentation coefficients and buoyant molar masses of macromolecules and their complexes, reporting on their size and shape in free solution. The purpose of this multi-laboratory study was to establish the precision and accuracy of basic data dimensions in AUC and validate previously proposed calibration techniques. Three kits of AUC cell assemblies containing radial and temperature calibration tools and a bovine serum albumin (BSA) reference sample were shared among 67 laboratories, generating 129 comprehensive data sets. These allowed for an assessment of many parameters of instrument performance, including accuracy of the reported scan time after the start of centrifugation, the accuracy of the temperature calibration, and the accuracy of the radial magnification. The range of sedimentation coefficients obtained for BSA monomer in different instruments and using different optical systems was from 3.655 S to 4.949 S, with a mean and standard deviation of (4.304 ± 0.188) S (4.4%). After the combined application of correction factors derived from the external calibration references for elapsed time, scan velocity, temperature, and radial magnification, the range of s-values was reduced 7-fold with a mean of 4.325 S and a 6-fold reduced standard deviation of ± 0.030 S (0.7%). In addition, the large data set provided an opportunity to determine the instrument-to-instrument variation of the absolute radial positions reported in the scan files, the precision of photometric or refractometric signal magnitudes, and the precision of the calculated apparent molar mass of BSA monomer and the fraction of BSA dimers. These results highlight the necessity and effectiveness of independent calibration of basic AUC data dimensions for reliable quantitative studies. PMID:25997164

  10. A multilaboratory comparison of calibration accuracy and the performance of external references in analytical ultracentrifugation.

    PubMed

    Zhao, Huaying; Ghirlando, Rodolfo; Alfonso, Carlos; Arisaka, Fumio; Attali, Ilan; Bain, David L; Bakhtina, Marina M; Becker, Donald F; Bedwell, Gregory J; Bekdemir, Ahmet; Besong, Tabot M D; Birck, Catherine; Brautigam, Chad A; Brennerman, William; Byron, Olwyn; Bzowska, Agnieszka; Chaires, Jonathan B; Chaton, Catherine T; Cölfen, Helmut; Connaghan, Keith D; Crowley, Kimberly A; Curth, Ute; Daviter, Tina; Dean, William L; Díez, Ana I; Ebel, Christine; Eckert, Debra M; Eisele, Leslie E; Eisenstein, Edward; England, Patrick; Escalante, Carlos; Fagan, Jeffrey A; Fairman, Robert; Finn, Ron M; Fischle, Wolfgang; de la Torre, José García; Gor, Jayesh; Gustafsson, Henning; Hall, Damien; Harding, Stephen E; Cifre, José G Hernández; Herr, Andrew B; Howell, Elizabeth E; Isaac, Richard S; Jao, Shu-Chuan; Jose, Davis; Kim, Soon-Jong; Kokona, Bashkim; Kornblatt, Jack A; Kosek, Dalibor; Krayukhina, Elena; Krzizike, Daniel; Kusznir, Eric A; Kwon, Hyewon; Larson, Adam; Laue, Thomas M; Le Roy, Aline; Leech, Andrew P; Lilie, Hauke; Luger, Karolin; Luque-Ortega, Juan R; Ma, Jia; May, Carrie A; Maynard, Ernest L; Modrak-Wojcik, Anna; Mok, Yee-Foong; Mücke, Norbert; Nagel-Steger, Luitgard; Narlikar, Geeta J; Noda, Masanori; Nourse, Amanda; Obsil, Tomas; Park, Chad K; Park, Jin-Ku; Pawelek, Peter D; Perdue, Erby E; Perkins, Stephen J; Perugini, Matthew A; Peterson, Craig L; Peverelli, Martin G; Piszczek, Grzegorz; Prag, Gali; Prevelige, Peter E; Raynal, Bertrand D E; Rezabkova, Lenka; Richter, Klaus; Ringel, Alison E; Rosenberg, Rose; Rowe, Arthur J; Rufer, Arne C; Scott, David J; Seravalli, Javier G; Solovyova, Alexandra S; Song, Renjie; Staunton, David; Stoddard, Caitlin; Stott, Katherine; Strauss, Holger M; Streicher, Werner W; Sumida, John P; Swygert, Sarah G; Szczepanowski, Roman H; Tessmer, Ingrid; Toth, Ronald T; Tripathy, Ashutosh; Uchiyama, Susumu; Uebel, Stephan F W; Unzai, Satoru; Gruber, Anna Vitlin; von Hippel, Peter H; Wandrey, Christine; Wang, Szu-Huan; Weitzel, Steven E; Wielgus-Kutrowska, Beata; Wolberger, Cynthia; Wolff, Martin; Wright, Edward; Wu, Yu-Sung; Wubben, Jacinta M; Schuck, Peter

    2015-01-01

    Analytical ultracentrifugation (AUC) is a first principles based method to determine absolute sedimentation coefficients and buoyant molar masses of macromolecules and their complexes, reporting on their size and shape in free solution. The purpose of this multi-laboratory study was to establish the precision and accuracy of basic data dimensions in AUC and validate previously proposed calibration techniques. Three kits of AUC cell assemblies containing radial and temperature calibration tools and a bovine serum albumin (BSA) reference sample were shared among 67 laboratories, generating 129 comprehensive data sets. These allowed for an assessment of many parameters of instrument performance, including accuracy of the reported scan time after the start of centrifugation, the accuracy of the temperature calibration, and the accuracy of the radial magnification. The range of sedimentation coefficients obtained for BSA monomer in different instruments and using different optical systems was from 3.655 S to 4.949 S, with a mean and standard deviation of (4.304 ± 0.188) S (4.4%). After the combined application of correction factors derived from the external calibration references for elapsed time, scan velocity, temperature, and radial magnification, the range of s-values was reduced 7-fold with a mean of 4.325 S and a 6-fold reduced standard deviation of ± 0.030 S (0.7%). In addition, the large data set provided an opportunity to determine the instrument-to-instrument variation of the absolute radial positions reported in the scan files, the precision of photometric or refractometric signal magnitudes, and the precision of the calculated apparent molar mass of BSA monomer and the fraction of BSA dimers. These results highlight the necessity and effectiveness of independent calibration of basic AUC data dimensions for reliable quantitative studies. PMID:25997164

  11. LANDSAT Scene-to-scene Registration Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Anderson, J. E.

    1984-01-01

    Initial results obtained from the registration of LANDSAT-4 data to LANDSAT-2 MSS data are documented and compared with results obtained from a LANDSAT-2 MSS-to-LANDSAT-2 scene-to-scene registration (using the same LANDSAT-2 MSS data as the base data set in both procedures). RMS errors calculated on the control points used in the establishment of scene-to-scene mapping equations are compared to error computed from independently chosen verification points. Models developed to estimate actual scene-to-scene registration accuracy based on the use of electrostatic plots are also presented. Analysis of results indicates a statistically significant difference in the RMS errors for the element contribution. Scan line errors were not significantly different. It appears that a modification to the LANDSAT-4 MSS scan mirror coefficients is required to correct the situation.
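
    The control-point/verification-point workflow behind such registration accuracy figures can be sketched as follows: fit an affine scene-to-scene mapping to control points, then report the RMS error at independently chosen verification points. The coordinates and transformation below are synthetic, not LANDSAT data.

    ```python
    # Sketch: least-squares affine scene-to-scene mapping from control points,
    # with RMS error checked at independent verification points (synthetic data).
    import numpy as np

    def fit_affine(src, dst):
        """Fit dst ~ [x, y, 1] @ coeffs for 2-D control points (coeffs is 3x2)."""
        A = np.hstack([src, np.ones((len(src), 1))])
        coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)
        return coeffs

    def rms_error(coeffs, src, dst):
        pred = np.hstack([src, np.ones((len(src), 1))]) @ coeffs
        return np.sqrt(np.mean(np.sum((pred - dst) ** 2, axis=1)))

    rng = np.random.default_rng(5)
    true = np.array([[1.0002, 0.0010, 12.3], [-0.0008, 0.9997, -7.9]]).T
    noisy = lambda p: np.hstack([p, np.ones((len(p), 1))]) @ true \
                      + rng.normal(0, 0.4, (len(p), 2))
    ctrl = rng.uniform(0, 3000, size=(30, 2))      # control points (pixels)
    veri = rng.uniform(0, 3000, size=(20, 2))      # verification points (pixels)

    coeffs = fit_affine(ctrl, noisy(ctrl))
    print("verification RMS (pixels):", round(rms_error(coeffs, veri, noisy(veri)), 2))
    ```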

  12. Assessment of the Geodetic and Color Accuracy of Multi-Pass Airborne/Mobile Lidar Data

    NASA Astrophysics Data System (ADS)

    Pack, R. T.; Petersen, B.; Sunderland, D.; Blonquist, K.; Israelsen, P.; Crum, G.; Fowles, A.; Neale, C.

    2008-12-01

    The ability to merge lidar and color image data acquired by multiple passes of an aircraft or van is largely dependent on the accuracy of the navigation system that estimates the dynamic position and orientation of the sensor. We report an assessment of the performance of a Riegl Q560 lidar transceiver combined with a Litton LN-200 inertial measurement unit (IMU)-based NovAtel SPAN GPS/IMU system and a Panasonic HD Video Camera system. Several techniques are reported that were used to maximize the performance of the GPS/IMU system in generating precisely merged point clouds. The airborne data used included eight flight lines all overflying the same building on the campus at Utah State University. These lines were flown at the FAA minimum altitude of 1000 feet for fixed-wing aircraft. The mobile data were then acquired several months later with the same system mounted to look sideways out of a van. The van was driven around the same building at variable speed in order to avoid pedestrians. An absolute accuracy of about 6 cm and a relative accuracy of less than 2.5 cm one-sigma are documented for the merged data. Several techniques are also reported for merging the color video data stream with the lidar point cloud. A technique for back-projecting and burning lidar points within the video stream enables the verification of co-boresighting accuracy. The resulting pixel-level alignment is accurate to within the size of a lidar footprint. The techniques described in this paper enable the display of high-resolution colored points with high detail and color clarity.

  13. Vestibular and Oculomotor Assessments May Increase Accuracy of Subacute Concussion Assessment.

    PubMed

    McDevitt, J; Appiah-Kubi, K O; Tierney, R; Wright, W G

    2016-08-01

    In this study, we collected and analyzed preliminary data for the internal consistency of a new condensed model to assess vestibular and oculomotor impairments following a concussion. We also examined this model's ability to discriminate concussed athletes from healthy controls. Each participant was tested in a concussion assessment protocol that consisted of Neurocom's Sensory Organization Test (SOT), the Balance Error Scoring System exam, and a series of 8 vestibular and oculomotor assessments. Of these 10 assessments, only the SOT, near point convergence, and the signs and symptoms (S/S) scores collected following optokinetic stimulation, the horizontal eye saccades test, and the gaze stabilization test were significantly correlated with health status, and were used in further analyses. Multivariate logistic regression for binary outcomes was employed, and the resulting beta weights were used to calculate the area under the receiver operating characteristic curve (AUC). The best model supported by our findings suggests that an exam consisting of the 4 SOT sensory ratios, near point convergence, and the optokinetic stimulation signs and symptoms score is sensitive in discriminating concussed athletes from healthy controls (accuracy=98.6%, AUC=0.983). However, an even more parsimonious model consisting of only the optokinetic stimulation and gaze stabilization test S/S scores and near point convergence was found to be a sensitive model for discriminating concussed athletes from healthy controls (accuracy=94.4%, AUC=0.951) without the need for expensive equipment. Although more investigation is needed, these findings will be helpful to health professionals, potentially providing them with a sensitive and specific battery of simple vestibular and oculomotor assessments for concussion management. PMID:27176886
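
    The modelling step described above, binary logistic regression scored by the area under the ROC curve, can be sketched generically as follows; the six features and the concussed/healthy labels are simulated stand-ins, not the study's SOT ratios or symptom scores.

    ```python
    # Generic sketch of binary logistic regression evaluated by ROC AUC and
    # accuracy; all data are simulated, not the study's measurements.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score, accuracy_score

    rng = np.random.default_rng(4)
    n = 144
    X = rng.normal(size=(n, 6))                 # e.g. SOT ratios, NPC, S/S scores
    y = (X @ np.array([1.2, 0.8, 0.5, 0.3, 1.0, 0.7])
         + rng.normal(0, 1, n) > 0).astype(int)  # 1 = concussed, 0 = healthy

    model = LogisticRegression(max_iter=1000).fit(X, y)
    scores = model.predict_proba(X)[:, 1]
    print("AUC:", roc_auc_score(y, scores),
          "accuracy:", accuracy_score(y, model.predict(X)))
    ```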

  14. Precision and accuracy of visual foliar injury assessments

    SciTech Connect

    Gumpertz, M.L.; Tingey, D.T.; Hogsett, W.E.

    1982-07-01

    The study compared three measures of foliar injury: (i) mean percent leaf area injured of all leaves on the plant, (ii) mean percent leaf area injured of the three most injured leaves, and (iii) the proportion of injured leaves to total number of leaves. For the first measure, the variation caused by reader biases and day-to-day variations was compared with the innate plant-to-plant variation. Bean (Phaseolus vulgaris 'Pinto'), pea (Pisum sativum 'Little Marvel'), radish (Raphanus sativus 'Cherry Belle'), and spinach (Spinacia oleracea 'Northland') plants were exposed to either 3 µL L⁻¹ SO₂ or 0.3 µL L⁻¹ ozone for 2 h. Three leaf readers visually assessed the percent injury on every leaf of each plant while a fourth reader used a transparent grid to make an unbiased assessment for each plant. The mean leaf area injured of the three most injured leaves was highly correlated with that of all leaves on the plant only if the three most injured leaves were <100% injured. The proportion of leaves injured was not highly correlated with percent leaf area injured of all leaves on the plant for any species in this study. The largest source of variation in visual assessments was plant-to-plant variation, which ranged from 44 to 97% of the total variance, followed by variation among readers (0-32% of the variance). Except for radish exposed to ozone, the day-to-day variation accounted for <18% of the total. Reader bias in assessment of ozone injury was significant but could be adjusted for each reader by a simple linear regression (R² = 0.89-0.91) of the visual assessments against the grid assessments.
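
    The following sketch illustrates the reader-bias adjustment mentioned above: regressing one reader's visual scores against the unbiased grid assessments and inverting the fit to correct the visual readings. The paired values are invented for illustration only.

    ```python
    # Minimal sketch (assumption, not the study's code): adjusting a reader's visual
    # percent-injury scores against unbiased grid assessments with simple linear regression.
    import numpy as np

    # Hypothetical paired observations for one reader (percent leaf area injured)
    grid_pct   = np.array([5.0, 12.0, 20.0, 35.0, 50.0, 70.0, 85.0])
    visual_pct = np.array([8.0, 16.0, 27.0, 41.0, 58.0, 79.0, 90.0])

    # Fit visual = a*grid + b, then invert the fit to correct new visual readings
    a, b = np.polyfit(grid_pct, visual_pct, 1)
    corrected = (visual_pct - b) / a  # bias-adjusted estimates on the grid scale

    r2 = np.corrcoef(grid_pct, visual_pct)[0, 1] ** 2
    print(f"slope={a:.2f}, intercept={b:.2f}, R^2={r2:.2f}")
    print("corrected:", np.round(corrected, 1))
    ```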

  15. Multipolar Ewald Methods, 1: Theory, Accuracy, and Performance

    PubMed Central

    2015-01-01

    The Ewald, Particle Mesh Ewald (PME), and Fast Fourier–Poisson (FFP) methods are developed for systems composed of spherical multipole moment expansions. A unified set of equations is derived that takes advantage of a spherical tensor gradient operator formalism in both real space and reciprocal space to allow extension to arbitrary multipole order. The implementation of these methods into a novel linear-scaling modified “divide-and-conquer” (mDC) quantum mechanical force field is discussed. The evaluation times and relative force errors are compared between the three methods, as a function of multipole expansion order. Timings and errors are also compared within the context of the quantum mechanical force field, which encounters primary errors related to the quality of reproducing electrostatic forces for a given density matrix and secondary errors resulting from the propagation of the approximate electrostatics into the self-consistent field procedure, which yields a converged, variational, but nonetheless approximate density matrix. Condensed-phase simulations of an mDC water model are performed with the multipolar PME method and compared to an electrostatic cutoff method, which is shown to artificially increase the density of water and heat of vaporization relative to full electrostatic treatment. PMID:25691829

  16. Accuracy assessment of CKC high-density surface EMG decomposition in biceps femoris muscle

    NASA Astrophysics Data System (ADS)

    Marateb, H. R.; McGill, K. C.; Holobar, A.; Lateva, Z. C.; Mansourian, M.; Merletti, R.

    2011-10-01

    The aim of this study was to assess the accuracy of the convolution kernel compensation (CKC) method in decomposing high-density surface EMG (HDsEMG) signals from the pennate biceps femoris long-head muscle. Although the CKC method has already been thoroughly assessed in parallel-fibered muscles, there are several factors that could hinder its performance in pennate muscles. Namely, HDsEMG signals from pennate and parallel-fibered muscles differ considerably in terms of the number of detectable motor units (MUs) and the spatial distribution of the motor-unit action potentials (MUAPs). In this study, monopolar surface EMG signals were recorded from five normal subjects during low-force voluntary isometric contractions using a 92-channel electrode grid with 8 mm inter-electrode distances. Intramuscular EMG (iEMG) signals were recorded concurrently using monopolar needles. The HDsEMG and iEMG signals were independently decomposed into MUAP trains, and the iEMG results were verified using a rigorous a posteriori statistical analysis. HDsEMG decomposition identified from 2 to 30 MUAP trains per contraction. 3 ± 2 of these trains were also reliably detected by iEMG decomposition. The measured CKC decomposition accuracy of these common trains over a selected 10 s interval was 91.5 ± 5.8%. The other trains were not assessed. The significant factors that affected CKC decomposition accuracy were the number of HDsEMG channels that were free of technical artifact and the distinguishability of the MUAPs in the HDsEMG signal (P < 0.05). These results show that the CKC method reliably identifies at least a subset of MUAP trains in HDsEMG signals from low force contractions in pennate muscles.

  17. An assessment of the accuracy of orthotropic photoelasticity

    NASA Technical Reports Server (NTRS)

    Hyer, M. W.; Liu, D. H.

    1984-01-01

    The accuracy of orthotropic photoelasticity was studied. The study consisted of both theoretical and experimental phases. In the theoretical phase a stress-optic law was developed. The stress-optic law included the effects of residual birefringence in the relation between applied stress and the material's optical response. The experimental phase had several portions. First, it was shown that four-point bending tests and the concept of an optical neutral axis could be conveniently used to calibrate the stress-optic behavior of the material. Second, the actual optical response of an orthotropic disk in diametral compression was compared with theoretical predictions. Third, the stresses in the disk were determined from the observed optical response, the stress-optic law, and a finite-difference form of the plane stress equilibrium equations. It was concluded that orthotropic photoelasticity is not as accurate as isotropic photoelasticity. This is believed to be due to the lack of good fringe resolution and the low sensitivity of most orthotropic photoelastic materials.

  18. Laboratory assessment of impression accuracy by clinical simulation.

    PubMed

    Wassell, R W; Abuasi, H A

    1992-04-01

    Some laboratory tests of impression material accuracy mimic the clinical situation (simulatory) while others attempt to quantify a material's individual properties. This review concentrates on simulatory testing and aims to give a classification of the numerous tests available. Measurements can be made of the impression itself or the resulting cast. Cast measurements are divided into those made of individual dies and those made of interdie relations. Contact measurement techniques have the advantage of simplicity but are potentially inaccurate because of die abrasion. Non-contact techniques can overcome the abrasion problem but the measurements, especially those made in three dimensions, may be difficult to interpret. Nevertheless, provided that care is taken to avoid parallax error, non-contact methods are preferable as experimental variables are easier to control. Where measurements are made of individual dies, these should include the die width across the finishing line, as occlusal width measurements provide only limited information. A new concept of 'differential die distortion' (the dimensional difference from the master model in one plane minus the dimensional difference in the perpendicular plane) provides a clinically relevant method of interpreting dimensional changes. Where measurements are made between dies, movement of the individual dies within the master model must be prevented. Many of the test methods can be criticized as providing clinically unrealistic master models/dies or impression trays. Phantom head typodonts form a useful basis for the morphology of master models, provided that undercuts are standardized and the master model temperature is adequately controlled. PMID:1564180

  19. Accuracy assessment of high resolution satellite imagery orientation by leave-one-out method

    NASA Astrophysics Data System (ADS)

    Brovelli, Maria Antonia; Crespi, Mattia; Fratarcangeli, Francesca; Giannone, Francesca; Realini, Eugenio

    Interest in high-resolution satellite imagery (HRSI) is spreading in several application fields, at both scientific and commercial levels. Fundamental and critical goals for the geometric use of this kind of imagery are their orientation and orthorectification, processes able to georeference the imagery and correct the geometric deformations they undergo during acquisition. In order to exploit the actual potential of orthorectified imagery in Geomatics applications, the definition of a methodology to assess the spatial accuracy achievable from oriented imagery is a crucial topic. In this paper we propose a new method for accuracy assessment based on Leave-One-Out Cross-Validation (LOOCV), a model validation method already applied in fields such as machine learning, bioinformatics and, generally, any other field requiring an evaluation of the performance of a learning algorithm (e.g. geostatistics), but never applied to HRSI orientation accuracy assessment. The proposed method exhibits interesting features which are able to overcome the most remarkable drawbacks of the commonly used method (Hold-Out Validation, HOV), based on partitioning the known ground points into two sets: the first is used in the orientation-orthorectification model (GCPs, Ground Control Points) and the second is used to validate the model itself (CPs, Check Points). In fact, the HOV is generally not reliable and is not applicable when a low number of ground points is available. To test the proposed method we implemented a new routine that performs the LOOCV in the software SISAR, developed by the Geodesy and Geomatics Team at the Sapienza University of Rome to perform the rigorous orientation of HRSI; this routine was tested on some EROS-A and QuickBird images. Moreover, these images were also oriented using the widely recognized commercial software OrthoEngine v. 10 (included in the Geomatica suite by PCI), manually performing the LOOCV
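
    A minimal sketch of the LOOCV idea applied to orientation accuracy follows. A simple 2D affine transform stands in for the rigorous orientation model implemented in SISAR, and the ground points are synthetic; each point is withheld in turn, the model is fitted to the remaining points, and the residual at the withheld point contributes to the accuracy estimate.

    ```python
    # Minimal sketch of LOOCV for orientation accuracy, under stated assumptions:
    # a 2D affine transform replaces the rigorous sensor model; data are synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 12
    img = rng.uniform(0, 1000, size=(n, 2))                      # image coordinates [px]
    A_true = np.array([[0.5, 0.02, 100.0], [-0.01, 0.5, 200.0]])
    ground = (np.c_[img, np.ones(n)] @ A_true.T) + rng.normal(0, 0.3, (n, 2))

    def fit_affine(src, dst):
        """Least-squares 2D affine transform mapping src -> dst."""
        X = np.c_[src, np.ones(len(src))]
        coeffs, *_ = np.linalg.lstsq(X, dst, rcond=None)
        return coeffs  # shape (3, 2)

    residuals = []
    for i in range(n):
        keep = np.arange(n) != i                  # leave point i out
        coeffs = fit_affine(img[keep], ground[keep])
        pred = np.r_[img[i], 1.0] @ coeffs        # predict the withheld point
        residuals.append(np.linalg.norm(pred - ground[i]))

    print("LOOCV RMSE:", np.sqrt(np.mean(np.square(residuals))))
    ```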

  20. The Attribute Accuracy Assessment of Land Cover Data in the National Geographic Conditions Survey

    NASA Astrophysics Data System (ADS)

    Ji, X.; Niu, X.

    2014-04-01

    With the widespread national survey of geographic conditions, object-based data have become the most common data organization pattern in land cover research. Assessing the accuracy of object-based land cover data is related to many stages of data production, such as the efficiency of in-house production and the quality of the final land cover data. There is therefore a strong requirement for accuracy assessment of object-based classification maps. Traditional approaches to accuracy assessment in surveying and mapping are not aimed at land cover data, so it is necessary to employ the accuracy assessment methods used in image classification. However, traditional pixel-based accuracy assessment methods are inadequate for these requirements. The measures we improved are based on the error matrix, using objects as sample units, because pixel sample units are not suitable for assessing the accuracy of object-based classification results. Compared to pixel samples, the uniformity of object samples changes. In order to make the indices derived from the error matrix reliable, we use the areas of the object samples as weights to establish the error matrix of the object-based image classification map. We compare the results of two error matrices, one set up from the number of object samples and the other from the sum of the object sample areas. The error matrix using the sum of object sample areas proves to be an intuitive, useful technique for reflecting the actual accuracy of object-based image classification results.
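
    The sketch below shows one way to construct the area-weighted error matrix described above, alongside the conventional count-based matrix; the class names, object samples and areas are hypothetical.

    ```python
    # Minimal sketch (an assumption about the approach, not the paper's code):
    # an error matrix for object-based classification where each sample object
    # contributes its area rather than a count of one.
    import numpy as np

    classes = ["cropland", "forest", "water"]
    # Hypothetical object samples: (reference class, mapped class, area in m^2)
    samples = [
        ("cropland", "cropland", 1200.0),
        ("cropland", "forest",    300.0),
        ("forest",   "forest",   2500.0),
        ("water",    "water",     800.0),
        ("forest",   "cropland",  400.0),
    ]

    idx = {c: i for i, c in enumerate(classes)}
    counts = np.zeros((len(classes), len(classes)))   # classic matrix: one object = one unit
    areas  = np.zeros((len(classes), len(classes)))   # area-weighted matrix

    for ref, mapped, area in samples:
        counts[idx[ref], idx[mapped]] += 1
        areas[idx[ref], idx[mapped]]  += area

    print("count-based overall accuracy:", np.trace(counts) / counts.sum())
    print("area-weighted overall accuracy:", np.trace(areas) / areas.sum())
    ```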

  1. Classification Consistency and Accuracy for Complex Assessments under the Compound Multinomial Model

    ERIC Educational Resources Information Center

    Lee, Won-Chan; Brennan, Robert L.; Wan, Lei

    2009-01-01

    For a test that consists of dichotomously scored items, several approaches have been reported in the literature for estimating classification consistency and accuracy indices based on a single administration of a test. Classification consistency and accuracy have not been studied much, however, for "complex" assessments--for example, those that…

  2. Attribute-Level and Pattern-Level Classification Consistency and Accuracy Indices for Cognitive Diagnostic Assessment

    ERIC Educational Resources Information Center

    Wang, Wenyi; Song, Lihong; Chen, Ping; Meng, Yaru; Ding, Shuliang

    2015-01-01

    Classification consistency and accuracy are viewed as important indicators for evaluating the reliability and validity of classification results in cognitive diagnostic assessment (CDA). Pattern-level classification consistency and accuracy indices were introduced by Cui, Gierl, and Chang. However, the indices at the attribute level have not yet…

  3. Assessing the Accuracy of Quantitative Molecular Microbial Profiling

    PubMed Central

    O’Sullivan, Denise M.; Laver, Thomas; Temisak, Sasithon; Redshaw, Nicholas; Harris, Kathryn A.; Foy, Carole A.; Studholme, David J.; Huggett, Jim F.

    2014-01-01

    The application of high-throughput sequencing in profiling microbial communities is providing an unprecedented ability to investigate microbiomes. Such studies typically apply one of two methods: amplicon sequencing using PCR to target a conserved orthologous sequence (typically the 16S ribosomal RNA gene) or whole (meta)genome sequencing (WGS). Both methods have been used to catalog the microbial taxa present in a sample and quantify their respective abundances. However, a comparison of the inherent precision or bias of the different sequencing approaches has not been performed. We previously developed a metagenomic control material (MCM) to investigate error when performing different sequencing strategies. Amplicon sequencing using four different primer strategies and two 16S rRNA regions was examined (Roche 454 Junior) and compared to WGS (Illumina HiSeq). All sequencing methods generally performed comparably and in good agreement with organism specific digital PCR (dPCR); WGS notably demonstrated very high precision. Where discrepancies between relative abundances occurred they tended to differ by less than twofold. Our findings suggest that when alternative sequencing approaches are used for microbial molecular profiling they can perform with good reproducibility, but care should be taken when comparing small differences between distinct methods. This work provides a foundation for future work comparing relative differences between samples and the impact of extraction methods. We also highlight the value of control materials when conducting microbial profiling studies to benchmark methods and set appropriate thresholds. PMID:25421243

  4. Accuracy Assessment Study of UNB3m Neutral Atmosphere Model for Global Tropospheric Delay Mitigation

    NASA Astrophysics Data System (ADS)

    Farah, Ashraf

    2015-12-01

    Tropospheric delay is the second major source of error, after ionospheric delay, for satellite navigation systems. The transmitted signal can experience a tropospheric delay of over 2 m at zenith and 20 m at satellite elevation angles of 10 degrees and below. Positioning errors of 10 m or greater can result from inaccurate mitigation of the tropospheric delay. Many techniques are available for tropospheric delay mitigation, comprising surface meteorological models and global empirical models; surface meteorological models require surface meteorological data to achieve high-accuracy mitigation, while global empirical models do not. Several hybrid neutral atmosphere delay models have been developed by researchers at the University of New Brunswick (UNB), Canada, over the past decade or so. The most widely applicable current version is UNB3m, which uses the Saastamoinen zenith delays, Niell mapping functions, and a look-up table with annual mean and amplitude for temperature, pressure, and water vapour pressure, varying with latitude and height. This paper presents an assessment of the behaviour of the UNB3m model compared with highly accurate IGS tropospheric estimates for three IGS stations at different latitudes and heights. The study was performed over four nonconsecutive weeks in different seasons over one year (October 2014 to July 2015). It can be concluded that the UNB3m model gives an average tropospheric delay correction accuracy of about 0.050 m for low-latitude regions in all seasons. The model's accuracy is about 0.075 m for mid-latitude regions, while its highest accuracy, about 0.014 m, is obtained for high-latitude regions.
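
    For reference, one commonly quoted form of the Saastamoinen zenith hydrostatic delay used in UNB-family models is sketched below; the constants follow the standard textbook formulation and have not been checked against the UNB3m implementation itself, so treat them as indicative.

    ```latex
    % Zenith hydrostatic delay d_hz [m] from surface pressure P [hPa],
    % latitude \varphi and orthometric height H [km] (standard Saastamoinen form,
    % quoted here as a reference sketch rather than the exact UNB3m code):
    \[
      d_{\mathrm{hz}} \;=\; \frac{0.0022768\,P}{1 - 0.00266\,\cos(2\varphi) - 0.00028\,H}
    \]
    ```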

  5. 12 CFR 630.5 - Accuracy of reports and assessment of internal control over financial reporting.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... CREDIT SYSTEM General § 630.5 Accuracy of reports and assessment of internal control over financial... assessment of internal control over financial reporting. (1) Annual reports must include a report by the Funding Corporation's management assessing the effectiveness of the internal control over...

  6. Online Medical Device Use Prediction: Assessment of Accuracy.

    PubMed

    Maktabi, Marianne; Neumuth, Thomas

    2016-01-01

    Cost-intensive units in the hospital, such as the operating room, require effective resource management to improve surgical workflow and patient care. To maximize efficiency, online management systems should accurately forecast the use of technical resources (medical instruments and devices). We compare several surgical activities, such as use of the coagulator, based on spectral analysis and the application of a linear time-variant system, to forecast future technical resource usage. In our study we examine the influence of the duration of usage and the total usage rate of the technical equipment on the prediction performance over several time intervals. A cross-validation was conducted with sixty-two neck dissections to evaluate the prediction performance. The performance of a use-state forecast does not change whether or not duration is considered, but it decreases with lower total usage rates of the observed instruments. A minimum number of surgical workflow recordings (here: 62) and time intervals of more than 5 minutes are required to apply the described method in surgical practice. The work presented here might support the reduction of resource conflicts when resources are shared among different operating rooms. PMID:27577445

  7. Shuttle radar topography mission accuracy assessment and evaluation for hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Mercuri, Pablo Alberto

    Digital Elevation Models (DEMs) are increasingly used even in low relief landscapes for multiple mapping applications and modeling approaches such as surface hydrology, flood risk mapping, agricultural suitability, and generation of topographic attributes. The National Aeronautics and Space Administration (NASA) has produced a nearly global database of highly accurate elevation data, the Shuttle Radar Topography Mission (SRTM) DEM. The main goals of this thesis were to investigate quality issues of SRTM, provide measures of vertical accuracy with emphasis on low relief areas, and to analyze the performance for the generation of physical boundaries and streams for watershed modeling and characterization. The absolute and relative accuracy of the two SRTM resolutions, at 1 and 3 arc-seconds, were investigated to generate information that can be used as a reference in areas with similar characteristics in other regions of the world. The absolute accuracy was obtained from accurate point estimates using the best available federal geodetic network in Indiana. The SRTM root mean square error for this area of the Midwest US surpassed data specifications. It was on the order of 2 meters for the 1 arc-second resolution in flat areas of the Midwest US. Estimates of error were smaller for the global coverage 3 arc-second data with very similar results obtained in the flat plains in Argentina. In addition to calculating the vertical accuracy, the impacts of physiography and terrain attributes, like slope, on the error magnitude were studied. The assessment also included analysis of the effects of land cover on vertical accuracy. Measures of local variability were described to identify the adjacency effects produced by surface features in the SRTM DEM, like forests and manmade features near the geodetic point. Spatial relationships among the bare-earth National Elevation Data and SRTM were also analyzed to assess the relative accuracy that was 2.33 meters in terms of the total

  8. Assessment of RFID Read Accuracy for ISS Water Kit

    NASA Technical Reports Server (NTRS)

    Chu, Andrew

    2011-01-01

    The Space Life Sciences Directorate/Medical Informatics and Health Care Systems Branch (SD4) is assessing the benefits of Radio Frequency Identification (RFID) technology for tracking items flown onboard the International Space Station (ISS). As an initial study, the Avionic Systems Division Electromagnetic Systems Branch (EV4) is collaborating with SD4 to affix RFID tags to a water kit supplied by SD4 and to study the read success rate of the tagged items. The tagged water kit inside a Cargo Transfer Bag (CTB) was inventoried using three different RFID technologies: the Johnson Space Center Building 14 Wireless Habitat Test Bed RFID portal, an RFID hand-held reader being targeted for use on board the ISS, and an RFID enclosure designed and prototyped by EV4.

  9. 3D Modelling and Accuracy Assessment of Granite Quarry Using Unmanned Aerial Vehicle

    NASA Astrophysics Data System (ADS)

    González-Aguilera, D.; Fernández-Hernández, J.; Mancera-Taboada, J.; Rodríguez-Gonzálvez, P.; Hernández-López, D.; Felipe-García, B.; Gozalo-Sanz, I.; Arias-Perez, B.

    2012-07-01

    Unmanned aerial vehicles (UAVs) are automated systems whose main characteristic is that they can be remotely piloted. This property is especially interesting for civil engineering works in which the accuracy of the model is not reachable by common aerial or satellite systems, access to the infrastructure is difficult due to location and geometry, and economic resources are limited. This paper aims to show the research, development and application of a UAV that generates georeferenced spatial information at low cost, high quality and high availability. In particular, a 3D modelling and accuracy assessment of a granite quarry using a UAV is presented. With regard to the image-based modelling pipeline, an automatic approach supported by open-source tools is performed. The process encloses the well-known image-based modelling steps: calibration, extraction and matching of features, relative and absolute orientation of images, and point cloud and surface generation. Besides this, an assessment of the final model accuracy is carried out by means of a terrestrial laser scanner (TLS), an imaging total station (ITS) and a global navigation satellite system (GNSS) in order to ensure its validity. This step follows a twofold approach: (i) using singular check points to provide dimensional control of the model and (ii) analyzing the level of agreement between the reality-based 3D model obtained from the UAV and the one generated with the TLS. The main goal is to establish and validate an image-based modelling workflow using UAV technology which can be applied in the surveying and monitoring of different quarries.

  10. Psychometric characteristics of simulation-based assessment in anaesthesia and accuracy of self-assessed scores.

    PubMed

    Weller, J M; Robinson, B J; Jolly, B; Watterson, L M; Joseph, M; Bajenov, S; Haughton, A J; Larsen, P D

    2005-03-01

    The purpose of this study was to define the psychometric properties of a simulation-based assessment of anaesthetists. Twenty-one anaesthetic trainees took part in three highly standardised simulations of anaesthetic emergencies. Scenarios were videotaped and rated independently by four judges. Trainees also assessed their own performance in the simulations. Results were analysed using generalisability theory to determine the influence of subject, case and judge on the variance in judges' scores and to determine the number of cases and judges required to produce a reliable result. Self-assessed scores were compared to the mean score of the judges. The results suggest that 12-15 cases are required to rank trainees reliably on their ability to manage simulated crises. Greater reliability is gained by increasing the number of cases than by increasing the number of judges. There was modest but significant correlation between self-assessed scores and external assessors' scores (rho = 0.321; p = 0.01). At the lower levels of performance, trainees consistently overrated their performance compared to those performing at higher levels (p = 0.0001). PMID:15710009

  11. Assessing the accuracy of Landsat Thematic Mapper classification using double sampling

    USGS Publications Warehouse

    Kalkhan, M.A.; Reich, R.M.; Stohlgren, T.J.

    1998-01-01

    Double sampling was used to provide a cost-efficient estimate of the accuracy of a Landsat Thematic Mapper (TM) classification map of a scene located in Rocky Mountain National Park, Colorado. In the first phase, 200 sample points were randomly selected to assess the agreement between Landsat TM data and aerial photography. The overall accuracy and Kappa statistic were 49.5% and 32.5%, respectively. In the second phase, 25 sample points identified in the first phase were selected using stratified random sampling and located in the field. This information was used to correct for misclassification errors associated with the first-phase samples. The overall accuracy and Kappa statistic increased to 59.6% and 45.6%, respectively.
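
    A short sketch of how the two quantities reported here, overall accuracy and the Kappa statistic, are derived from an error matrix is given below; the 3-class matrix is a hypothetical example, not the study's data.

    ```python
    # Minimal sketch (not the authors' computation): overall accuracy and Cohen's
    # Kappa from a classification error matrix.
    import numpy as np

    error_matrix = np.array([
        [50, 10,  5],
        [ 8, 40, 12],
        [ 4,  9, 62],
    ], dtype=float)  # rows = reference class, columns = mapped class (hypothetical)

    n = error_matrix.sum()
    p_observed = np.trace(error_matrix) / n
    p_chance = (error_matrix.sum(axis=0) * error_matrix.sum(axis=1)).sum() / n**2
    kappa = (p_observed - p_chance) / (1.0 - p_chance)

    print(f"overall accuracy = {p_observed:.3f}, kappa = {kappa:.3f}")
    ```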

  12. Diagnostic accuracy of the vegetative and minimally conscious state: Clinical consensus versus standardized neurobehavioral assessment

    PubMed Central

    Schnakers, Caroline; Vanhaudenhuyse, Audrey; Giacino, Joseph; Ventura, Manfredi; Boly, Melanie; Majerus, Steve; Moonen, Gustave; Laureys, Steven

    2009-01-01

    Background Previously published studies have reported that up to 43% of patients with disorders of consciousness are erroneously assigned a diagnosis of vegetative state (VS). However, no recent studies have investigated the accuracy of this grave clinical diagnosis. In this study, we compared consensus-based diagnoses of VS and MCS to those based on a well-established standardized neurobehavioral rating scale, the JFK Coma Recovery Scale-Revised (CRS-R). Methods We prospectively followed 103 patients (55 ± 19 years) with mixed etiologies and compared the clinical consensus diagnosis provided by the physician on the basis of the medical staff's daily observations to diagnoses derived from CRS-R assessments performed by research staff. All patients were assigned a diagnosis of 'VS', 'MCS' or 'uncertain diagnosis.' Results Of the 44 patients diagnosed with VS based on the clinical consensus of the medical team, 18 (41%) were found to be in MCS following standardized assessment with the CRS-R. In the 41 patients with a consensus diagnosis of MCS, 4 (10%) had emerged from MCS, according to the CRS-R. We also found that the majority of patients assigned an uncertain diagnosis by clinical consensus (89%) were in MCS based on CRS-R findings. Conclusion Despite the importance of diagnostic accuracy, the rate of misdiagnosis of VS has not substantially changed in the past 15 years. Standardized neurobehavioral assessment is a more sensitive means of establishing differential diagnosis in patients with disorders of consciousness when compared to diagnoses determined by clinical consensus. PMID:19622138

  13. 20 CFR 404.1645 - How and when we determine whether the performance accuracy standard is met.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... performance accuracy standard is met. 404.1645 Section 404.1645 Employees' Benefits SOCIAL SECURITY... Performance Standards § 404.1645 How and when we determine whether the performance accuracy standard is met... quarterly basis. The determinations as to whether the performance accuracy threshold has been met is made...

  14. 20 CFR 404.1645 - How and when we determine whether the performance accuracy standard is met.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... performance accuracy standard is met. 404.1645 Section 404.1645 Employees' Benefits SOCIAL SECURITY... Performance Standards § 404.1645 How and when we determine whether the performance accuracy standard is met... quarterly basis. The determinations as to whether the performance accuracy threshold has been met is made...

  15. 20 CFR 416.1045 - How and when we determine whether the performance accuracy standard is met.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... performance accuracy standard is met. 416.1045 Section 416.1045 Employees' Benefits SOCIAL SECURITY... Performance Standards § 416.1045 How and when we determine whether the performance accuracy standard is met... quarterly basis. The determinations as to whether the performance accuracy threshold has been met is made...

  16. 20 CFR 416.1045 - How and when we determine whether the performance accuracy standard is met.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... performance accuracy standard is met. 416.1045 Section 416.1045 Employees' Benefits SOCIAL SECURITY... Performance Standards § 416.1045 How and when we determine whether the performance accuracy standard is met... quarterly basis. The determinations as to whether the performance accuracy threshold has been met is made...

  17. 20 CFR 416.1045 - How and when we determine whether the performance accuracy standard is met.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... performance accuracy standard is met. 416.1045 Section 416.1045 Employees' Benefits SOCIAL SECURITY... Performance Standards § 416.1045 How and when we determine whether the performance accuracy standard is met... quarterly basis. The determinations as to whether the performance accuracy threshold has been met is made...

  18. 20 CFR 416.1045 - How and when we determine whether the performance accuracy standard is met.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... performance accuracy standard is met. 416.1045 Section 416.1045 Employees' Benefits SOCIAL SECURITY... Performance Standards § 416.1045 How and when we determine whether the performance accuracy standard is met... quarterly basis. The determinations as to whether the performance accuracy threshold has been met is made...

  19. 20 CFR 404.1645 - How and when we determine whether the performance accuracy standard is met.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... performance accuracy standard is met. 404.1645 Section 404.1645 Employees' Benefits SOCIAL SECURITY... Performance Standards § 404.1645 How and when we determine whether the performance accuracy standard is met... quarterly basis. The determinations as to whether the performance accuracy threshold has been met is made...

  20. 20 CFR 416.1045 - How and when we determine whether the performance accuracy standard is met.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... performance accuracy standard is met. 416.1045 Section 416.1045 Employees' Benefits SOCIAL SECURITY... Performance Standards § 416.1045 How and when we determine whether the performance accuracy standard is met... quarterly basis. The determinations as to whether the performance accuracy threshold has been met is made...

  1. 20 CFR 404.1645 - How and when we determine whether the performance accuracy standard is met.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... performance accuracy standard is met. 404.1645 Section 404.1645 Employees' Benefits SOCIAL SECURITY... Performance Standards § 404.1645 How and when we determine whether the performance accuracy standard is met... quarterly basis. The determinations as to whether the performance accuracy threshold has been met is made...

  2. 20 CFR 404.1645 - How and when we determine whether the performance accuracy standard is met.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... performance accuracy standard is met. 404.1645 Section 404.1645 Employees' Benefits SOCIAL SECURITY... Performance Standards § 404.1645 How and when we determine whether the performance accuracy standard is met... quarterly basis. The determinations as to whether the performance accuracy threshold has been met is made...

  3. Influence of LCD color reproduction accuracy on observer performance using virtual pathology slides

    NASA Astrophysics Data System (ADS)

    Krupinski, Elizabeth A.; Silverstein, Louis D.; Hashmi, Syed F.; Graham, Anna R.; Weinstein, Ronald S.; Roehrig, Hans

    2012-02-01

    The use of color LCDs in medical imaging is growing as more clinical specialties use digital images as a resource in diagnosis and treatment decisions. Telemedicine applications such as telepathology, teledermatology and teleophthalmology rely heavily on color images. However, standard methods for calibrating, characterizing and profiling color displays do not exist, resulting in inconsistent presentation. To address this, we developed a calibration, characterization and profiling protocol for color-critical medical imaging applications. Physical characterization of displays calibrated with and without the protocol revealed high color reproduction accuracy with the protocol. The present study assessed the impact of this protocol on observer performance. A set of 250 breast biopsy virtual slide regions of interest (half malignant, half benign) was shown to 6 pathologists, once using the calibration protocol and once using the same display in its "native" off-the-shelf uncalibrated state. Diagnostic accuracy and time to render a decision were measured. In terms of ROC performance, the area under the curve (Az) was 0.8640 calibrated and 0.8558 uncalibrated; no statistically significant difference (p = 0.2719) was observed. In terms of interpretation speed, the mean was 4.895 s calibrated and 6.304 s uncalibrated, which is statistically significant (p = 0.0460). Early results suggest a slight diagnostic advantage for a properly calibrated and color-managed display and a significant potential advantage in terms of improved workflow. Future work should be conducted using different types of color images that may be more dependent on accurate color rendering and a wider range of LCDs with varying characteristics.

  4. Comparative assessment of thematic accuracy of GLC maps for specific applications using existing reference data

    NASA Astrophysics Data System (ADS)

    Tsendbazar, N. E.; de Bruin, S.; Mora, B.; Schouten, L.; Herold, M.

    2016-02-01

    Current global land cover (GLC) maps, which serve as inputs to various applications and models, are based on different data sources and methods. Therefore, comparing GLC maps is challenging. Statistical comparison of GLC maps is further complicated by the lack of a reference dataset that is suitable for validating multiple maps. This study utilizes the existing Globcover-2005 reference dataset to compare thematic accuracies of three GLC maps for the year 2005 (Globcover, LC-CCI and MODIS). We translated and reinterpreted the LCCS (land cover classification system) classifier information of the reference dataset into the different map legends. The three maps were evaluated for a variety of applications, i.e., general circulation models, dynamic global vegetation models, agriculture assessments, carbon estimation and biodiversity assessments, using weighted accuracy assessment. Based on the impact of land cover confusions on the overall weighted accuracy of the GLC maps, we identified map improvement priorities. Overall accuracies were 70.8 ± 1.4%, 71.4 ± 1.3%, and 61.3 ± 1.5% for LC-CCI, MODIS, and Globcover, respectively. Weighted accuracy assessments produced increased overall accuracies (80-93%) since not all class confusion errors are important for specific applications. As a common denominator for all applications, the classes mixed trees, shrubs, grasses, and cropland were identified as improvement priorities. The results demonstrate the necessity of accounting for dissimilarities in the importance of map classification errors for different user applications. To determine the fitness of use of GLC maps, the accuracy of GLC maps should be assessed per application; there is no single-figure accuracy estimate expressing map fitness for all purposes.
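
    The sketch below shows one possible formalization of a weighted accuracy assessment: each cell of the error matrix is weighted by the importance of that particular confusion for a given application, with full weight on the diagonal. The matrix and weights are hypothetical and are not taken from the paper.

    ```python
    # Minimal sketch of a weighted accuracy assessment (one possible formalization,
    # not necessarily the exact scheme used in the paper).
    import numpy as np

    # Hypothetical 3-class error matrix (proportions)
    p = np.array([
        [0.30, 0.05, 0.02],
        [0.04, 0.35, 0.03],
        [0.01, 0.02, 0.18],
    ])
    p = p / p.sum()

    # Hypothetical weights for, say, a vegetation-modelling application:
    # confusing the two woody-vegetation classes is tolerated (0.8), other errors are not.
    w = np.array([
        [1.0, 0.8, 0.0],
        [0.8, 1.0, 0.0],
        [0.0, 0.0, 1.0],
    ])

    overall = np.trace(p)                 # conventional overall accuracy
    weighted = float((w * p).sum())       # application-weighted overall accuracy
    print(f"overall = {overall:.3f}, weighted = {weighted:.3f}")
    ```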

  5. Reconstruction Accuracy Assessment of Surface and Underwater 3D Motion Analysis: A New Approach

    PubMed Central

    de Jesus, Kelly; de Jesus, Karla; Figueiredo, Pedro; Vilas-Boas, João Paulo; Fernandes, Ricardo Jorge; Machado, Leandro José

    2015-01-01

    This study assessed the accuracy of surface and underwater 3D reconstruction of a calibration volume with and without homography. A calibration volume (6000 × 2000 × 2500 mm) with 236 markers (64 above-water and 88 underwater control points, with 8 common points at the water surface, and 92 validation points) was positioned in a 25 m swimming pool and recorded with two surface and four underwater cameras. Planar homography estimation for each calibration plane was computed to perform image rectification. The direct linear transformation algorithm for 3D reconstruction was applied, using 1600000 different combinations of 32 and 44 points out of the 64 and 88 control points for surface and underwater markers (resp.). Root Mean Square (RMS) error with homography of control and validation points was lower than without it for surface and underwater cameras (P ≤ 0.03). With homography, RMS errors of control and validation points were similar between surface and underwater cameras (P ≥ 0.47). Without homography, RMS error of control points was greater for underwater than surface cameras (P ≤ 0.04) and the opposite was observed for validation points (P ≤ 0.04). It is recommended that future studies using 3D reconstruction include homography to improve the accuracy of swimming movement analysis. PMID:26175796
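
    As an illustration of the planar homography estimation used for image rectification in this kind of workflow, the sketch below implements a basic direct linear transformation (DLT) for a homography and checks it on synthetic correspondences; it is not the study's implementation.

    ```python
    # Minimal sketch (not the study's code): homography estimation by DLT and a
    # synthetic check that a known homography is recovered from correspondences.
    import numpy as np

    def homography_dlt(src, dst):
        """Estimate H (3x3) such that dst ~ H @ src for planar points (N >= 4)."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
            rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
        _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
        H = vt[-1].reshape(3, 3)
        return H / H[2, 2]

    def apply_h(H, pts):
        pts_h = np.c_[pts, np.ones(len(pts))] @ H.T
        return pts_h[:, :2] / pts_h[:, 2:3]

    # Synthetic check with a known homography and noise-free correspondences
    H_true = np.array([[1.1, 0.05, 10.0], [-0.03, 0.95, 5.0], [1e-4, 2e-4, 1.0]])
    src = np.array([[0, 0], [100, 0], [100, 80], [0, 80], [50, 40]], dtype=float)
    dst = apply_h(H_true, src)

    H_est = homography_dlt(src, dst)
    print("max reprojection error:", np.abs(apply_h(H_est, src) - dst).max())
    ```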

  6. Dynamic Assessment of School-Age Children's Narrative Ability: An Experimental Investigation of Classification Accuracy

    ERIC Educational Resources Information Center

    Pena, Elizabeth D.; Gillam, Ronald B.; Malek, Melynn; Ruiz-Felter, Roxanna; Resendiz, Maria; Fiestas, Christine; Sabel, Tracy

    2006-01-01

    Two experiments examined reliability and classification accuracy of a narration-based dynamic assessment task. Purpose: The first experiment evaluated whether parallel results were obtained from stories created in response to 2 different wordless picture books. If so, the tasks and measures would be appropriate for assessing pretest and posttest…

  7. Expansion and dissemination of a standardized accuracy and precision assessment technique

    NASA Astrophysics Data System (ADS)

    Kwartowitz, David M.; Riti, Rachel E.; Holmes, David R., III

    2011-03-01

    The advent and development of new imaging techniques and image guidance have had a major impact on surgical practice. These techniques attempt to allow the clinician to visualize not only what is currently visible, but also what lies beneath the surface or relates to function. These systems are often based on tracking systems coupled with registration and visualization technologies. The accuracy and precision of the tracking system are thus critical to the overall accuracy and precision of the image-guidance system. In this work the accuracy and precision of an Aurora tracking system are assessed, using the technique specified in "A novel technique for analysis of accuracy of magnetic tracking systems used in image guided surgery." This analysis demonstrated that accuracy depends on distance from the tracker's field generator, with an RMS value of 1.48 mm. The error has characteristics and values similar to those of the previous work, thus validating this method for tracker analysis.

  8. Accuracy of sequence alignment and fold assessment using reduced amino acid alphabets.

    PubMed

    Melo, Francisco; Marti-Renom, Marc A

    2006-06-01

    Reduced or simplified amino acid alphabets group the 20 naturally occurring amino acids into a smaller number of representative protein residues. To date, several reduced amino acid alphabets have been proposed, derived and optimized by a variety of methods. The resulting reduced amino acid alphabets have been applied to pattern recognition, generation of consensus sequences from multiple alignments, protein folding, and protein structure prediction. In this work, amino acid substitution matrices and statistical potentials were derived based on several reduced amino acid alphabets, and their performance was assessed in a large benchmark for the tasks of sequence alignment and fold assessment of protein structure models, using the standard 20-amino-acid alphabet as a reference frame. The results showed that a large reduction in the total number of residue types does not necessarily translate into a significant loss of discriminative power for sequence alignment and fold assessment. Therefore, some definitions of a few residue types are able to encode most of the relevant sequence/structure information that is present in the 20 standard amino acids. Based on these results, we suggest that the use of reduced amino acid alphabets may help increase the accuracy of current substitution matrices and statistical potentials for predicting the structure of remote homologs. PMID:16506243

  9. Accuracy assessment of the GPS-based slant total electron content

    NASA Astrophysics Data System (ADS)

    Brunini, Claudio; Azpilicueta, Francisco Javier

    2009-08-01

    The main scope of this research is to assess the ultimate accuracy that can be achieved for the slant total electron content (sTEC) estimated from dual-frequency global positioning system (GPS) observations which depends, primarily, on the calibration of the inter-frequency biases (IFB). Two different calibration approaches are analyzed: the so-called satellite-by-satellite one, which involves levelling the carrier-phase to the code-delay GPS observations and then the IFB estimation; and the so-called arc-by-arc one, which avoids the use of code-delay observations but requires the estimation of arc-dependent biases. Two strategies are used for the analysis: the first one compares calibrated sTEC from two co-located GPS receivers that serve to assess the levelling errors; and the second one, assesses the model error using synthetic data free of calibration error, produced with a specially developed technique. The results show that the arc-by-arc calibration technique performs better than the satellite-by-satellite one for mid-latitudes, while the opposite happens for low-latitudes.

  10. Accuracy Assessment of the Integration of GNSS and a MEMS IMU in a Terrestrial Platform

    PubMed Central

    Madeira, Sergio; Yan, Wenlin; Bastos, Luísa; Gonçalves, José A.

    2014-01-01

    MEMS Inertial Measurement Units are available at low cost and can replace expensive units in mobile mapping platforms that need direct georeferencing. This is done through integration with GNSS measurements in order to achieve a continuous positioning solution and to obtain orientation angles. This paper presents the results of an assessment of the accuracy of a system that integrates GNSS and a MEMS IMU in a terrestrial platform. We describe the methodology used and the tests performed, in which the accuracy of the position and orientation parameters was assessed using an independent photogrammetric technique employing cameras integrated in the mobile mapping system developed by the authors. Results for the accuracy of attitude angles and coordinates show that accuracies better than a decimeter in position, and under a degree in angles, can be achieved even considering that the terrestrial platform operates in less than favorable environments. PMID:25375757

  11. 3D combinational curves for accuracy and performance analysis of positive biometrics identification

    NASA Astrophysics Data System (ADS)

    Du, Yingzi; Chang, Chein-I.

    2008-06-01

    The receiver operating characteristic (ROC) curve has been widely used as an evaluation criterion to measure the accuracy of biometrics system. Unfortunately, such an ROC curve provides no indication of the optimum threshold and cost function. In this paper, two kinds of 3D combinational curves are proposed: the 3D combinational accuracy curve and the 3D combinational performance curve. The 3D combinational accuracy curve gives a balanced view of the relationships among FAR (false alarm rate), FRR (false rejection rate), threshold t, and Cost. Six 2D curves can be derived from the 3D combinational accuracy curve: the conventional 2D ROC curve, 2D curve of (FRR, t), 2D curve of (FAR, t), 2D curve of (FRR, Cost), 2D curve of (FAR, Cost), and 2D curve of ( t, Cost). The 3D combinational performance curve can be derived from the 3D combinational accuracy curve which can give a balanced view among Security, Convenience, threshold t, and Cost. The advantages of using the proposed 3D combinational curves are demonstrated by iris recognition systems where the experimental results show that the proposed 3D combinational curves can provide more comprehensive information of the system accuracy and performance.

  12. The Relationship Between Level of Training and Accuracy of Violence Risk Assessment

    PubMed Central

    Teo, Alan R.; Holley, Sarah R.; Leary, Mark; McNiel, Dale E.

    2016-01-01

    Objective Although clinical training programs aspire to develop competency in violence risk assessment, little research has examined whether level of training is associated with the accuracy of clinicians’ evaluations of violence potential. This is the first study to compare the accuracy of risk assessments by experienced psychiatrists to those of psychiatric residents. It also examined the potential of a structured decision support tool to improve residents’ violence risk assessments. Methods Using a retrospective case control design, medical records were reviewed for 151 patients who assaulted staff at a county hospital and 150 comparison patients. At admission, violence risk assessments had been completed by psychiatric residents (N= 38) for 52 patients, and by attending psychiatrists (N = 41) for 249 patients. Trained, blinded research clinicians coded information available at hospital admission with a structured risk assessment tool, the HCR-20 Clinical (HCR-20-C) scale. Results Receiver operating characteristic analyses showed that clinical estimates of violence risk by attending psychiatrists had significantly higher predictive validity than those of psychiatric residents. Risk assessments by attending psychiatrists were moderately accurate (AUC = .70), whereas risk assessments by residents were no better than chance (AUC = .52). Incremental validity analyses showed that addition of information from the HCR-20-C had the potential to improve the accuracy of risk assessments by residents to a level (AUC = .67) close to that of attending psychiatrists. Conclusions Less training and experience is associated with inaccurate violence risk assessment. Structured methods hold promise for improving training in risk assessment for violence. PMID:22948947

  13. Making Better Performance Easier with Multisource Assessment.

    ERIC Educational Resources Information Center

    Edwards, Mark R.; Ewen, Ann J.

    1995-01-01

    Multisource assessment is a tool for obtaining quality performance feedback by collecting performance information from multiple work associates. Topics include performance facilitation; performance feedback; assessing training effectiveness; continuous learning; and development or performance management. (AEF)

  14. Challenges in Assessing PPP Performance

    NASA Astrophysics Data System (ADS)

    Seepersad, Garrett; Bisnath, Sunil

    2014-09-01

    The Precise Point Positioning (PPP) GPS data processing technique has developed over the past 15 years into a standard method for a growing range of positioning and navigation applications. The technique relies on single-receiver point positioning combined with precise satellite orbit and clock information, pseudorange and carrier-phase observable filtering, and additional error modelling. This paper uniquely addresses the current accuracy of the technique and explains its performance limits, which are used to define paths for future improvements of the technology. PPP processing of over 300 International GNSS Service (IGS) stations over one week results in positioning rms errors of a few millimetres in the north and east components and centimetre level in the vertical (all one-sigma values). These results are categorised into quality classes in order to analyse the root causes of the resultant errors: "best", "worst", multipath, antenna displacement effects, satellite availability and geometry, etc. Also of interest in PPP performance is the solution convergence period. Static, conventional solutions are slow to converge, with approximately 20 minutes required for 95% of solutions to reach a horizontal accuracy of 20 cm or better. From the above analysis, the limitations of PPP and their sources are isolated, including site displacement modelling, geometric measurement strength, pseudorange multipath and noise, etc. It is argued that new ambiguity resolution and multi-GNSS PPP processing will only partially address these limitations. Improved modelling is required for site displacement effects, pseudorange noise and multipath, and pseudorange and carrier-phase biases. As well, more robust undifferenced carrier-phase ambiguity validation and improved stochastic modelling of the pseudorange and carrier-phase observables are required to allow for more realistic position uncertainties.

  15. Accuracy assessment of the GPS-TEC calibration constants by means of a simulation technique

    NASA Astrophysics Data System (ADS)

    Conte, Juan Federico; Azpilicueta, Francisco; Brunini, Claudio

    2011-10-01

    During the last two decades, Global Positioning System (GPS) measurements have become a very important data source for ionospheric studies. However, it is not a direct and easy task to obtain accurate ionospheric information from these measurements, because it is necessary to perform a careful estimation of the calibration constants affecting the GPS observations, the so-called differential code biases (DCBs). In this paper, the most common approximations used in several GPS calibration methods, e.g. the La Plata Ionospheric Model (LPIM), are applied to a set of specially computed synthetic slant Total Electron Content datasets to assess the accuracy of the DCB estimation in a global-scale scenario. These synthetic datasets were generated using a modified version of the NeQuick model, and have two important features: they show realistic temporal and spatial behavior, and all a-priori DCBs are set to zero by construction. Then, after application of the calibration method, the deviations from zero of the estimated DCBs are direct indicators of the accuracy of the method. To evaluate the effect of the solar activity level, the analysis was performed for years 2001 (high solar activity) and 2006 (low solar activity). To take into account seasonal changes in ionospheric behavior, the analysis was repeated for three consecutive days close to each equinox and solstice of every year. Then, a data package comprising 24 days from approximately 200 IGS permanent stations was processed. In order to avoid unwanted geomagnetic storm effects, the selected days correspond to periods of quiet geomagnetic conditions. The most important results of this work are: i) the estimated DCBs can be affected by errors around ±8 TECu for high solar activity and ±3 TECu for low solar activity; and ii) DCB errors present a systematic behavior depending on the modip coordinate, which is more evident in the positive modip region.

  16. Thermal effects on human performance in office environment measured by integrating task speed and accuracy.

    PubMed

    Lan, Li; Wargocki, Pawel; Lian, Zhiwei

    2014-05-01

    We have proposed a method in which speed and accuracy can be integrated into one metric of human performance. This was achieved by designing a performance task in which the subjects receive feedback on their performance: they are informed whether they have committed errors and, if they did, they can proceed only once the errors are corrected. Traditionally, tasks are presented without giving this feedback and thus speed and accuracy are treated separately. The method was examined in a subjective experiment with the thermal environment as the prototypical example. During exposure in an office, 12 subjects repeatedly performed tasks under two thermal conditions (neutral and warm). The tasks were presented with and without feedback on errors committed, as outlined above. The results indicate that there was a greater decrease in task performance due to thermal discomfort when feedback was given, compared to the performance of tasks presented without feedback. PMID:23871091

  17. Assessment of the Accuracy of Pharmacy Students’ Compounded Solutions Using Vapor Pressure Osmometry

    PubMed Central

    McPherson, Timothy B.

    2013-01-01

    Objective. To assess the effectiveness of using a vapor pressure osmometer to measure the accuracy of pharmacy students’ compounding skills. Design. Students calculated the theoretical osmotic pressure (mmol/kg) of a solution as a pre-laboratory exercise, compared their calculations with actual values, and then attempted to determine the cause of any errors found. Assessment. After the introduction of the vapor pressure osmometer, the first-time pass rate for solution compounding has varied from 85% to 100%. Approximately 85% of students surveyed reported that the instrument was valuable as a teaching tool because it objectively assessed their work and provided immediate formative assessment. Conclusions. This simple technique of measuring compounding accuracy using a vapor pressure osmometer allowed students to see the importance of quality control and assessment in practice for both pharmacists and technicians. PMID:23610476

  18. A priori evaluation of two-stage cluster sampling for accuracy assessment of large-area land-cover maps

    USGS Publications Warehouse

    Wickham, J.D.; Stehman, S.V.; Smith, J.H.; Wade, T.G.; Yang, L.

    2004-01-01

    Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, within-cluster correlation may reduce the precision of the accuracy estimates. The detailed population information to quantify a priori the effect of within-cluster correlation on precision is typically unavailable. Consequently, a convenient, practical approach to evaluate the likely performance of a two-stage cluster sample is needed. We describe such an a priori evaluation protocol focusing on the spatial distribution of the sample by land-cover class across different cluster sizes and costs of different sampling options, including options not imposing clustering. This protocol also assesses the two-stage design's adequacy for estimating the precision of accuracy estimates for rare land-cover classes. We illustrate the approach using two large-area, regional accuracy assessments from the National Land-Cover Data (NLCD), and describe how the a priori evaluation was used as a decision-making tool when implementing the NLCD design.

  19. Development of a Haptic Elbow Spasticity Simulator (HESS) for Improving Accuracy and Reliability of Clinical Assessment of Spasticity

    PubMed Central

    Park, Hyung-Soon; Kim, Jonghyun; Damiano, Diane L.

    2013-01-01

    This paper presents the framework for developing a robotic system to improve accuracy and reliability of clinical assessment. Clinical assessment of spasticity tends to have poor reliability because of the nature of the in-person assessment. To improve accuracy and reliability of spasticity assessment, a haptic device, named the HESS (Haptic Elbow Spasticity Simulator) has been designed and constructed to recreate the clinical “feel” of elbow spasticity based on quantitative measurements. A mathematical model representing the spastic elbow joint was proposed based on clinical assessment using the Modified Ashworth Scale (MAS) and quantitative data (position, velocity, and torque) collected on subjects with elbow spasticity. Four haptic models (HMs) were created to represent the haptic feel of MAS 1, 1+, 2, and 3. The four HMs were assessed by experienced clinicians; three clinicians performed both in-person and haptic assessments, and had 100% agreement in MAS scores; and eight clinicians who were experienced with MAS assessed the four HMs without receiving any training prior to the test. Inter-rater reliability among the eight clinicians had substantial agreement (κ = 0.626). The eight clinicians also rated the level of realism (7.63 ± 0.92 out of 10) as compared to their experience with real patients. PMID:22562769
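
    The inter-rater agreement quoted above can be computed with a multi-rater kappa. The sketch below implements Fleiss' kappa on hypothetical ratings (four haptic models rated by eight clinicians); the paper's exact statistic and rating data are not reproduced here.

        import numpy as np

        def fleiss_kappa(counts: np.ndarray) -> float:
            """Fleiss' kappa. counts[i, j] = number of raters assigning subject i
            to category j; each subject must be rated by the same number of raters."""
            counts = np.asarray(counts, dtype=float)
            n_subjects, _ = counts.shape
            n_raters = counts[0].sum()
            p_j = counts.sum(axis=0) / (n_subjects * n_raters)   # category proportions
            P_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
            P_bar, P_e = P_i.mean(), np.square(p_j).sum()
            return (P_bar - P_e) / (1.0 - P_e)

        # Hypothetical ratings: rows are the four haptic models, columns the
        # MAS categories (1, 1+, 2, 3); each row sums to the 8 clinicians.
        ratings = np.array([[6, 2, 0, 0],
                            [1, 5, 2, 0],
                            [0, 2, 6, 0],
                            [0, 0, 1, 7]])
        print(f"Fleiss' kappa = {fleiss_kappa(ratings):.3f}")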

  20. Diagnostic accuracy of refractometry for assessing bovine colostrum quality: A systematic review and meta-analysis.

    PubMed

    Buczinski, S; Vandeweerd, J M

    2016-09-01

    Provision of good quality colostrum [i.e., immunoglobulin G (IgG) concentration ≥50 g/L] is the first step toward ensuring proper passive transfer of immunity for young calves. Precise quantification of colostrum IgG levels cannot be easily performed on the farm. Assessment of the refractive index using a Brix scale with a refractometer has been described as being highly correlated with IgG concentration in colostrum. The aim of this study was to perform a systematic review of the diagnostic accuracy of Brix refractometry to diagnose good quality colostrum. From 101 references initially obtained, 11 were included in the systematic review and meta-analysis, representing 4,251 colostrum samples. The prevalence of good colostrum samples with IgG ≥50 g/L varied from 67.3 to 92.3% (median 77.9%). Specific estimates of accuracy [sensitivity (Se) and specificity (Sp)] were obtained for different reported cut-points using a hierarchical summary receiver operating characteristic curve model. For the cut-point of 22% (n=8 studies), Se=80.2% (95% CI: 71.1-87.0%) and Sp=82.6% (71.4-90.0%). Decreasing the cut-point to 18% increased Se [96.1% (91.8-98.2%)] and decreased Sp [54.5% (26.9-79.6%)]. Modeling the effect of these Brix accuracy estimates using a stochastic simulation and Bayes' theorem showed that a positive result at the 22% Brix cut-point can be used to diagnose good quality colostrum [posttest probability of good colostrum: 94.3% (90.7-96.9%)]. The posttest probability of good colostrum with a Brix value <18% was only 22.7% (12.3-39.2%). Based on this study, the 2 cut-points could be used together to select good quality colostrum (samples with Brix ≥22%) or to discard poor quality colostrum (samples with Brix <18%). When sample results fall between these 2 values, colostrum supplementation should be considered. PMID:27423958
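
    The post-test probabilities reported above follow from Bayes' theorem applied to the cut-point accuracies and the median prevalence. The sketch below reproduces the point estimates only; the review propagated uncertainty with a stochastic simulation.

        def posttest_probability(prevalence: float, sensitivity: float,
                                 specificity: float, test_positive: bool) -> float:
            """Probability of good colostrum given a positive or negative Brix result."""
            if test_positive:
                tp = sensitivity * prevalence
                fp = (1 - specificity) * (1 - prevalence)
                return tp / (tp + fp)
            fn = (1 - sensitivity) * prevalence
            tn = specificity * (1 - prevalence)
            return fn / (fn + tn)

        prev = 0.779  # median prevalence of good colostrum (IgG >= 50 g/L)
        print(posttest_probability(prev, 0.802, 0.826, True))    # Brix >= 22%: ~0.94
        print(posttest_probability(prev, 0.961, 0.545, False))   # Brix < 18%:  ~0.20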

  1. An action-incongruent secondary task modulates prediction accuracy in experienced performers: evidence for motor simulation.

    PubMed

    Mulligan, Desmond; Lohse, Keith R; Hodges, Nicola J

    2016-07-01

    We provide behavioral evidence that the human motor system is involved in the perceptual decision processes of skilled performers, directly linking prediction accuracy to the (in)ability of the motor system to activate in a response-specific way. Experienced and non-experienced dart players were asked to predict, from temporally occluded video sequences, the landing position of a dart thrown previously by themselves (self) or another (other). This prediction task was performed while additionally performing (a) an action-incongruent secondary motor task (right arm force production), (b) a congruent secondary motor task (mimicking) or (c) an attention-matched task (tone-monitoring). Non-experienced dart players were not affected by any of the secondary task manipulations, relative to control conditions, yet prediction accuracy decreased for the experienced players when additionally performing the force-production motor task. This interference effect was present for 'self' as well as 'other' decisions, reducing the accuracy of experienced participants to a novice level. The mimicking (congruent) secondary task condition did not interfere with (or facilitate) prediction accuracy for either group. We conclude that visual-motor experience moderates the process of decision making, such that a seemingly visual-cognitive prediction task relies on activation of the motor system for experienced performers. This fits with a motor simulation account of action prediction in sports and other tasks, and points to the specificity of these simulative processes. PMID:26021748

  2. Accuracy assessment of novel two-axes rotating and single-axis translating calibration equipment

    NASA Astrophysics Data System (ADS)

    Liu, Bo; Ye, Dong; Che, Rensheng

    2009-11-01

    A new method measures the 3D motion of a rocket nozzle with a motion tracking system based on passive optical markers. An important issue remains, however: how to assess the accuracy of the rocket nozzle motion test. Therefore, calibration equipment was designed and manufactured to generate ground-truth nozzle model motion (translation, angle, velocity, angular velocity, etc.). It consists of a base, a lifting platform, a rotary table and a rocket nozzle model with precise geometry; the nozzle model with its markers is installed on the rotary table, which can translate or rotate at a known velocity. The overall accuracy of the rocket nozzle motion test is evaluated by comparing the truth values with the static and dynamic test data. This paper puts emphasis on the accuracy assessment of the novel two-axes rotating and single-axis translating calibration equipment. By substituting measured values of the error sources into the error model, the pointing error is shown to be less than 0.005°, the rotation-center position error 0.08 mm, and the rate stability better than 10^-3. The accuracy of the calibration equipment is much higher than that of the nozzle motion test system, so the former can be used to assess and calibrate the latter.

  3. Preliminary melter performance assessment report

    SciTech Connect

    Elliott, M.L.; Eyler, L.L.; Mahoney, L.A.; Cooper, M.F.; Whitney, L.D.; Shafer, P.J.

    1994-08-01

    The Melter Performance Assessment activity, a component of the Pacific Northwest Laboratory's (PNL) Vitrification Technology Development (PVTD) effort, was designed to determine the impact of noble metals on the operational life of the reference Hanford Waste Vitrification Plant (HWVP) melter. The melter performance assessment consisted of several activities, including a literature review of all work done with noble metals in glass, gradient furnace testing to study the behavior of noble metals during the melting process, research-scale and engineering-scale melter testing to evaluate effects of noble metals on melter operation, and computer modeling that used the experimental data to predict effects of noble metals on the full-scale melter. Feed used in these tests simulated neutralized current acid waste (NCAW) feed. This report summarizes the results of the melter performance assessment and predicts the lifetime of the HWVP melter. It should be noted that this work was conducted before the recent Tri-Party Agreement changes, so the reference melter referred to here is the Defense Waste Processing Facility (DWPF) melter design.

  4. Techniques for accuracy assessment of tree locations extracted from remotely sensed imagery.

    PubMed

    Nelson, Trisalyn; Boots, Barry; Wulder, Michael A

    2005-02-01

    Remotely sensed imagery is becoming a common source of environmental data. Consequently, there is an increasing need for tools to assess the accuracy and information content of such data. Particularly when the spatial resolution of imagery is fine, the accuracy of image processing is determined by comparisons with field data. However, the nature of error is more difficult to assess. In this paper we describe a set of tools intended for such an assessment when tree objects are extracted and field data are available for comparison. These techniques are demonstrated on individual tree locations extracted from an IKONOS image via local maximum filtering. The locations of the extracted trees are compared with field data to determine the number of found and missed trees. Aspatial and spatial (Voronoi) analysis methods are used to examine the nature of errors by searching for trends in characteristics of found and missed trees. As well, analysis is conducted to assess the information content of found trees. PMID:15644266
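
    The found/missed bookkeeping described above amounts to matching extracted tree locations to field-surveyed stems. The sketch below uses a greedy nearest-neighbour match within a tolerance radius; the coordinates and tolerance are made up, and the paper's own matching rules may differ.

        import math

        def match_trees(extracted, field, tol=2.0):
            """Return (found, missed, commission) counts for point lists in metres."""
            unmatched = list(range(len(field)))
            found = 0
            for ex, ey in extracted:
                best, best_d = None, tol
                for idx in unmatched:
                    d = math.hypot(ex - field[idx][0], ey - field[idx][1])
                    if d <= best_d:
                        best, best_d = idx, d
                if best is not None:
                    unmatched.remove(best)
                    found += 1
            return found, len(unmatched), len(extracted) - found

        extracted = [(0.5, 0.4), (10.2, 3.1), (25.0, 7.9)]   # e.g. local-maximum detections
        field     = [(0.0, 0.0), (10.0, 3.0), (18.0, 5.0)]   # field-surveyed stems
        print(match_trees(extracted, field))                  # (2 found, 1 missed, 1 commission)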

  5. Performance Assessment Institute-NV

    SciTech Connect

    Lombardo, Joesph

    2012-12-31

    The National Supercomputing Center for Energy and the Environment’s intention is to purchase a multi-purpose computer cluster in support of the Performance Assessment Institute (PA Institute). The PA Institute will serve as a research consortium located in Las Vegas, Nevada, with membership that includes national laboratories, universities, industry partners, and domestic and international governments. This center will provide a one-of-a-kind centralized facility for the accumulation of information for use by institutions of higher learning, the U.S. Government, regulatory agencies and approved users. This initiative will enhance and extend High Performance Computing (HPC) resources in Nevada to support critical national and international needs in "scientific confirmation". The PA Institute will be promoted as the leading Modeling, Learning and Research Center worldwide. The program proposes to utilize the existing supercomputing capabilities and alliances of the University of Nevada Las Vegas as a base, and to extend these resources and capabilities through a collaborative relationship with its membership. The PA Institute will provide an academic setting for interactive sharing, learning, mentoring and monitoring of multi-disciplinary performance assessment and performance confirmation information. The role of the PA Institute is to facilitate research, knowledge creation, and knowledge-sharing among users.

  6. The Influence of Overt Practice, Achievement Level, and Explanatory Style on Calibration Accuracy and Performance

    ERIC Educational Resources Information Center

    Bol, Linda; Hacker, Douglas J.; O'Shea, Patrick; Allen, Dwight

    2005-01-01

    The authors measured the influence of overt calibration practice, achievement level, and explanatory style on calibration accuracy and exam performance. Students (N = 356) were randomly assigned to either an overt practice or no-practice condition. Students in the overt practice condition made predictions and postdictions about their performance…

  7. Assessing the GPS-based sTEC accuracy by using experimental and synthetic dataset

    NASA Astrophysics Data System (ADS)

    Brunini, Claudio

    The main scope of this contribution is to assess the accuracy that can be achieved in the slant total electron content (sTEC) estimated from dual-frequency GPS observations, which depends primarily on the calibration of the inter-frequency biases (IFB). Two different calibration approaches are analysed: the so-called satellite-by-satellite approach, which involves reducing the carrier-phase ambiguity effects by levelling the carrier-phase to the code-delay GPS observations and then estimating satellite-dependent IFB; and the so-called arc-by-arc approach, which avoids the use of code-delay observations but requires the estimation of arc-dependent IFB. In principle, the first approach should produce more reliable results because it requires the estimation of fewer parameters than the second one, but the second approach has the benefit of not being affected by the levelling errors caused by the presence of code-delay multipath. This contribution discusses two different experiments specifically designed to assess the GPS-based sTEC accuracy: the so-called co-location and synthetic data experiments. The first one is based on the comparison of the calibrated sTEC estimated from the data collected by two nearby GPS receivers, while the second one is based on the use of a synthetic dataset free of calibration errors generated with an empirical ionospheric model. While the co-location experiment is sensitive to the levelling but not to the model error effects, the synthetic data experiment provides a way to assess the calibration bias errors caused by inconsistencies of the ionospheric model involved in the estimation process. Both experiments used in a complementary way allowed the estimation of calibration errors of several TECu (total electron content units) depending on the station location (low, mid or high latitude); the ionospheric conditions (solar and geomagnetic activity, season); characteristics of the GPS instruments (receivers
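
    The sTEC quantity discussed above comes from the standard geometry-free combination of dual-frequency observations. The sketch below shows the simplest code-delay form; inter-frequency biases and multipath, the very quantities the calibration schemes address, are ignored here, so this is only the uncalibrated starting point.

        F1, F2 = 1575.42e6, 1227.60e6   # GPS L1/L2 carrier frequencies [Hz]
        K = 40.3                         # ionospheric constant [m^3 s^-2]

        def slant_tec(p1: float, p2: float) -> float:
            """Slant TEC in TEC units (1 TECu = 1e16 el/m^2) from code delays p1, p2 [m]."""
            tec_el_m2 = (F1**2 * F2**2) / (K * (F1**2 - F2**2)) * (p2 - p1)
            return tec_el_m2 / 1e16

        # e.g. a 5.2 m L2-minus-L1 code-delay difference corresponds to ~50 TECu
        print(f"{slant_tec(p1=0.0, p2=5.2):.1f} TECu")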

  8. Gender Differences in Structured Risk Assessment: Comparing the Accuracy of Five Instruments

    ERIC Educational Resources Information Center

    Coid, Jeremy; Yang, Min; Ullrich, Simone; Zhang, Tianqiang; Sizmur, Steve; Roberts, Colin; Farrington, David P.; Rogers, Robert D.

    2009-01-01

    Structured risk assessment should guide clinical risk management, but it is uncertain which instrument has the highest predictive accuracy among men and women. In the present study, the authors compared the Psychopathy Checklist-Revised (PCL-R; R. D. Hare, 1991, 2003); the Historical, Clinical, Risk Management-20 (HCR-20; C. D. Webster, K. S.…

  9. Gender Differences in the Self-Assessment of Accuracy on Cognitive Tasks.

    ERIC Educational Resources Information Center

    Pallier, Gerry

    2003-01-01

    Examined the effects of gender on the self-assessment of accuracy of visual perceptual judgments. College students completed a test of general knowledge and a visual perceptual task. When results were analyzed by sex, men were more confident than women. Next, people age 17-80 completed tests of cognitive ability. The tendency for men to express…

  10. Assessing the Accuracy of MODIS-NDVI Derived Land-Cover Across the Great Lakes Basin

    EPA Science Inventory

    This research describes the accuracy assessment process for a land-cover dataset developed for the Great Lakes Basin (GLB). This land-cover dataset was developed from the 2007 MODIS Normalized Difference Vegetation Index (NDVI) 16-day composite (MOD13Q) 250 m time-series data. Tr...

  11. Classification Consistency and Accuracy for Complex Assessments Using Item Response Theory

    ERIC Educational Resources Information Center

    Lee, Won-Chan

    2010-01-01

    In this article, procedures are described for estimating single-administration classification consistency and accuracy indices for complex assessments using item response theory (IRT). This IRT approach was applied to real test data comprising dichotomous and polytomous items. Several different IRT model combinations were considered. Comparisons…

  12. Accuracy of Revised and Traditional Parallel Analyses for Assessing Dimensionality with Binary Data

    ERIC Educational Resources Information Center

    Green, Samuel B.; Redell, Nickalus; Thompson, Marilyn S.; Levy, Roy

    2016-01-01

    Parallel analysis (PA) is a useful empirical tool for assessing the number of factors in exploratory factor analysis. On conceptual and empirical grounds, we argue for a revision to PA that makes it more consistent with hypothesis testing. Using Monte Carlo methods, we evaluated the relative accuracy of the revised PA (R-PA) and traditional PA…

  13. A PIXEL COMPOSITION-BASED REFERENCE DATA SET FOR THEMATIC ACCURACY ASSESSMENT

    EPA Science Inventory

    Developing reference data sets for accuracy assessment of land-cover classifications derived from coarse spatial resolution sensors such as MODIS can be difficult due to the large resolution differences between the image data and available reference data sources. Ideally, the spa...

  14. Assessing the Accuracy of Classwide Direct Observation Methods: Two Analyses Using Simulated and Naturalistic Data

    ERIC Educational Resources Information Center

    Dart, Evan H.; Radley, Keith C.; Briesch, Amy M.; Furlow, Christopher M.; Cavell, Hannah J.; Briesch, Amy M.

    2016-01-01

    Two studies investigated the accuracy of eight different interval-based group observation methods that are commonly used to assess the effects of classwide interventions. In Study 1, a Microsoft Visual Basic program was created to simulate a large set of observational data. Binary data were randomly generated at the student level to represent…

  15. Accuracy of scanography using storage phosphor plate systems and film for assessment of mandibular third molars

    PubMed Central

    Matzen, LH; Christensen, J; Wenzel, A

    2011-01-01

    Objectives The aim of this study was to compare the diagnostic accuracy of two digital photostimulable storage phosphor (PSP) systems and film for assessment of mandibular third molars before surgery. Methods 110 patients were referred to have both their mandibular third molars removed. Each patient underwent a radiographic examination with scanography using either Digora (Soredex, Helsinki, Finland) and film or VistaScan (Dürr Dental, Beitigheim-Bissingen, Germany) and film in a randomized paired design. Two observers examined the following variables on the scanograms: bone coverage, angulation of the tooth in the bone, number of roots, root morphology and the relationship to the mandibular canal. In 75 of the pairs (Digora/film pair = 38 and Vista/film pair = 37) both third molars were eventually removed. During and after surgery the same variables were assessed, which served as reference standard for the radiographic assessments. The Wilcoxon signed-rank test tested differences in accuracy (radiographic compared with surgical findings) between Digora/film and between Vista/film. Results There was no statistically significant difference between the diagnostic accuracy of film and either of the two digital receptors for assessment of mandibular third molars before surgery (P > 0.05), although Digora obtained a higher accuracy than film. Conclusions Scanography is a valuable method for examination of mandibular third molars before removal and the PSP digital receptors in this study were equal to film for this purpose. PMID:21697156

  16. Using Attribute Sampling to Assess the Accuracy of a Library Circulation System.

    ERIC Educational Resources Information Center

    Kiger, Jack E.; Wise, Kenneth

    1995-01-01

    Discusses how to use attribute sampling to assess the accuracy of a library circulation system. Describes the nature of sampling, sampling risk, and nonsampling error. Presents nine steps for using attribute sampling to determine the maximum percentage of incorrect records in a circulation system. (AEF)
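
    The kind of statement attribute sampling supports can be sketched numerically: given a sample of records and the number of errors found, compute a one-sided upper confidence bound on the error rate. The article's nine-step procedure is not reproduced here, and the sample size and error count below are made up.

        import math

        def binom_cdf(k: int, n: int, p: float) -> float:
            return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

        def upper_error_rate_bound(n: int, errors: int, confidence=0.95, step=1e-4) -> float:
            """Smallest error rate p at which observing <= `errors` mistakes in a
            sample of n becomes unlikely (probability <= 1 - confidence)."""
            p = errors / n
            while p < 1.0 and binom_cdf(errors, n, p) > 1.0 - confidence:
                p += step
            return p

        # e.g. 3 incorrect records found in a random sample of 200 circulation records
        print(f"upper bound on error rate: {upper_error_rate_bound(200, 3):.2%}")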

  17. The Word Writing CAFE: Assessing Student Writing for Complexity, Accuracy, and Fluency

    ERIC Educational Resources Information Center

    Leal, Dorothy J.

    2005-01-01

    The Word Writing CAFE is a new assessment tool designed for teachers to evaluate objectively students' word-writing ability for fluency, accuracy, and complexity. It is designed to be given to the whole class at one time. This article describes the development of the CAFE and provides directions for administering and scoring it. The author also…

  18. Parallel Reaction Monitoring: A Targeted Experiment Performed Using High Resolution and High Mass Accuracy Mass Spectrometry

    PubMed Central

    Rauniyar, Navin

    2015-01-01

    The parallel reaction monitoring (PRM) assay has emerged as an alternative method of targeted quantification. The PRM assay is performed in a high resolution and high mass accuracy mode on a mass spectrometer. This review presents the features that make PRM a highly specific and selective method for targeted quantification using quadrupole-Orbitrap hybrid instruments. In addition, this review discusses the label-based and label-free methods of quantification that can be performed with the targeted approach. PMID:26633379

  19. Understanding and Application of Performance Assessment.

    ERIC Educational Resources Information Center

    Im, Byung-Bin

    2000-01-01

    Points to weaknesses of traditional tests and comments on the theoretical background and necessity of performance assessment. Presents specific information on performance assessment and provides assessment examples. (Author/VWL)

  20. Salt site performance assessment activities

    SciTech Connect

    Kircher, J.F.; Gupta, S.K.

    1983-01-01

    During this year the first set of tools (codes) for performance assessments of potential salt sites was tentatively selected and documented; the emphasis has shifted from code development to applications. During this period, prior to detailed characterization of a salt site, the focus is on bounding calculations and sensitivity analyses with the data available. The development and application of improved methods for sensitivity and uncertainty analysis is a focus of the coming year's activities and the subject of a following paper in these proceedings. Although the assessments to date are preliminary and based on admittedly scant data, the results indicate that suitable salt sites can be identified and repository subsystems designed which will meet the established criteria for protecting the health and safety of the public. 36 references, 5 figures, 2 tables.

  1. Accuracy Assessment and Correction of Vaisala RS92 Radiosonde Water Vapor Measurements

    NASA Technical Reports Server (NTRS)

    Whiteman, David N.; Miloshevich, Larry M.; Vomel, Holger; Leblanc, Thierry

    2008-01-01

    Relative humidity (RH) measurements from Vaisala RS92 radiosondes are widely used in both research and operational applications, although the measurement accuracy is not well characterized as a function of its known dependences on height, RH, and time of day (or solar altitude angle). This study characterizes RS92 mean bias error as a function of its dependences by comparing simultaneous measurements from RS92 radiosondes and from three reference instruments of known accuracy. The cryogenic frostpoint hygrometer (CFH) gives the RS92 accuracy above the 700 mb level; the ARM microwave radiometer gives the RS92 accuracy in the lower troposphere; and the ARM SurTHref system gives the RS92 accuracy at the surface using 6 RH probes with NIST-traceable calibrations. These RS92 assessments are combined using the principle of Consensus Referencing to yield a detailed estimate of RS92 accuracy from the surface to the lowermost stratosphere. An empirical bias correction is derived to remove the mean bias error, yielding corrected RS92 measurements whose mean accuracy is estimated to be +/-3% of the measured RH value for nighttime soundings and +/-4% for daytime soundings, plus an RH offset uncertainty of +/-0.5%RH that is significant for dry conditions. The accuracy of individual RS92 soundings is further characterized by the 1-sigma "production variability," estimated to be +/-1.5% of the measured RH value. The daytime bias correction should not be applied to cloudy daytime soundings, because clouds affect the solar radiation error in a complicated and uncharacterized way.

  2. Self-Confidence and Performance Goal Orientation Interactively Predict Performance in a Reasoning Test with Accuracy Feedback

    ERIC Educational Resources Information Center

    Beckmann, Nadin; Beckmann, Jens F.; Elliott, Julian G.

    2009-01-01

    This study takes an individual differences' perspective on performance feedback effects in psychometric testing. A total of 105 students in a mainstream secondary school in North East England undertook a cognitive ability test on two occasions. In one condition, students received item-specific accuracy feedback while in the other (standard…

  3. Accuracy assessment of airborne photogrammetrically derived high-resolution digital elevation models in a high mountain environment

    NASA Astrophysics Data System (ADS)

    Müller, Johann; Gärtner-Roer, Isabelle; Thee, Patrick; Ginzler, Christian

    2014-12-01

    High-resolution digital elevation models (DEMs) generated by airborne remote sensing are frequently used to analyze landform structures (monotemporal) and geomorphological processes (multitemporal) in remote areas or areas of extreme terrain. In order to assess and quantify such structures and processes it is necessary to know the absolute accuracy of the available DEMs. This study assesses the absolute vertical accuracy of DEMs generated by the High Resolution Stereo Camera-Airborne (HRSC-A), the Leica Airborne Digital Sensors 40/80 (ADS40 and ADS80) and the analogue camera system RC30. The study area is located in the Turtmann valley, Valais, Switzerland, a glacially and periglacially formed hanging valley stretching from 2400 m to 3300 m a.s.l. The photogrammetrically derived DEMs are evaluated against geodetic field measurements and an airborne laser scan (ALS). Traditional and robust global and local accuracy measurements are used to describe the vertical quality of the DEMs, which show a non-Gaussian distribution of errors. The results show that all four sensor systems produce DEMs with similar accuracy despite their different setups and generations. The ADS40 and ADS80 (both with a ground sampling distance of 0.50 m) generate the most accurate DEMs in complex high mountain areas, with an RMSE of 0.8 m and an NMAD of 0.6 m. They also show the highest accuracy relative to flying height (0.14‰). The pushbroom scanning system HRSC-A produces an RMSE of 1.03 m and an NMAD of 0.83 m (0.21‰ of the flying height and 10 times the ground sampling distance). The analogue camera system RC30 produces DEMs with a vertical accuracy of 1.30 m RMSE and 0.83 m NMAD (0.17‰ of the flying height and two times the ground sampling distance). It is also shown that the performance of the DEMs strongly depends on the inclination of the terrain. The RMSE of areas up to an inclination <40° is better than 1 m. In more inclined areas the error and outlier occurrence
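
    The two vertical-accuracy measures quoted above can be computed directly from DEM-minus-reference elevation residuals, as in the sketch below; the residual array is synthetic.

        import numpy as np

        def rmse(residuals: np.ndarray) -> float:
            return float(np.sqrt(np.mean(np.square(residuals))))

        def nmad(residuals: np.ndarray) -> float:
            """Normalized median absolute deviation: a robust spread estimate that
            equals the standard deviation for normally distributed errors."""
            med = np.median(residuals)
            return float(1.4826 * np.median(np.abs(residuals - med)))

        rng = np.random.default_rng(0)
        dz = rng.normal(0.1, 0.7, 5000)           # synthetic DEM-minus-GNSS differences [m]
        dz[rng.integers(0, dz.size, 50)] += 5.0   # a few gross outliers
        print(f"RMSE = {rmse(dz):.2f} m, NMAD = {nmad(dz):.2f} m")   # RMSE is inflated by outliers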

  4. Increasing accuracy in the assessment of motion sickness: A construct methodology

    NASA Technical Reports Server (NTRS)

    Stout, Cynthia S.; Cowings, Patricia S.

    1993-01-01

    The purpose is to introduce a new methodology that should improve the accuracy of the assessment of motion sickness. This construct methodology utilizes both subjective reports of motion sickness and objective measures of physiological correlates to assess motion sickness. Current techniques and methods used in the framework of a construct methodology are inadequate. Current assessment techniques for diagnosing motion sickness and space motion sickness are reviewed, and attention is called to the problems with the current methods. Further, principles of psychophysiology that when applied will probably resolve some of these problems are described in detail.

  5. A novel method for assessing the 3-D orientation accuracy of inertial/magnetic sensors.

    PubMed

    Faber, Gert S; Chang, Chien-Chi; Rizun, Peter; Dennerlein, Jack T

    2013-10-18

    A novel method for assessing the accuracy of inertial/magnetic sensors is presented. The method, referred to as the "residual matrix" method, is advantageous because it decouples the sensor's error with respect to Earth's gravity vector (attitude residual error: pitch and roll) from the sensor's error with respect to magnetic north (heading residual error), while remaining insensitive to singularity problems when the second Euler rotation is close to ±90°. As a demonstration, the accuracy of an inertial/magnetic sensor mounted to a participant's forearm was evaluated during a reaching task in a laboratory. Sensor orientation was measured internally (by the inertial/magnetic sensor) and externally using an optoelectronic measurement system with a marker cluster rigidly attached to the sensor's enclosure. Roll, pitch and heading residuals were calculated using the proposed novel method, as well as using a common orientation assessment method where the residuals are defined as the difference between the Euler angles measured by the inertial sensor and those measured by the optoelectronic system. Using the proposed residual matrix method, the roll and pitch residuals remained less than 1° and, as expected, no statistically significant difference between these two measures of attitude accuracy was found; the heading residuals were significantly larger than the attitude residuals but remained below 2°. Using the direct Euler angle comparison method, the residuals were in general larger due to singularity issues, and the expected significant difference between inertial/magnetic sensor attitude and heading accuracy was not present. PMID:24016678
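
    A minimal sketch of the decoupling idea is given below, assuming the residual rotation between the reference and sensor orientations is decomposed into a tilt with respect to the vertical (attitude residual) and a rotation about the vertical (heading residual); the exact decomposition used by the authors may differ.

        import numpy as np

        def residual_errors(R_ref: np.ndarray, R_imu: np.ndarray):
            """Both inputs are 3x3 sensor-to-world rotation matrices; returns degrees."""
            R_res = R_ref.T @ R_imu
            attitude_err = np.degrees(np.arccos(np.clip(R_res[2, 2], -1.0, 1.0)))
            heading_err = np.degrees(np.arctan2(R_res[1, 0], R_res[0, 0]))
            return attitude_err, heading_err

        def rot_z(deg):   # helper for a quick self-test
            a = np.radians(deg)
            return np.array([[np.cos(a), -np.sin(a), 0],
                             [np.sin(a),  np.cos(a), 0],
                             [0, 0, 1]])

        R_ref, R_imu = np.eye(3), rot_z(1.5)   # pure 1.5 deg heading error
        print(residual_errors(R_ref, R_imu))   # -> (~0.0, ~1.5)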

  6. Radiative accuracy assessment of CrIS upper level channels using COSMIC RO data

    NASA Astrophysics Data System (ADS)

    Qi, C.; Weng, F.; Han, Y.; Lin, L.; Chen, Y.; Wang, L.

    2012-12-01

    The Cross-track Infrared Sounder (CrIS) onboard the Suomi National Polar-orbiting Partnership (NPP) satellite is designed to provide high vertical resolution information on the atmosphere's three-dimensional structure of temperature and water vapor. Much work has been done to verify the observation accuracy of CrIS since its launch on Oct. 28, 2011, such as SNO cross-comparisons with other hyperspectral infrared instruments and forward-simulation comparisons using a radiative transfer model based on numerical prediction background profiles. The radio occultation technique can provide profiles of the Earth's ionosphere and neutral atmosphere with high accuracy, high vertical resolution and global coverage. It has the advantages of all-weather capability, low expense, long-term stability, etc. CrIS radiative calibration accuracy was assessed by comparing observations with line-by-line simulations based on COSMIC RO data. The main processing steps include: (a) COSMIC RO data downloading and collocation with CrIS measurements through a weighting-function (WF) peak-altitude-dependent collocation method; (b) high-spectral-resolution line-by-line radiance simulation using collocated COSMIC RO profiles; (c) generation of CrIS channel radiances by an FFT transform method; and (d) bias analysis. This absolute calibration accuracy assessment indicated a bias error of approximately 0.3 K in CrIS measurements.

  7. Assessing the impact of measurement frequency on accuracy and uncertainty of water quality data

    NASA Astrophysics Data System (ADS)

    Helm, Björn; Schiffner, Stefanie; Krebs, Peter

    2014-05-01

    Physico-chemical water quality is a major criterion for evaluating the ecological state of a river water body. Physical and chemical water properties are measured to assess the river state, identify prevalent pressures and develop mitigating measures. Water quality is regularly assessed based on weekly to quarterly grab samples. The increasing availability of online-sensor data measured at high frequency allows for an enhanced understanding of emission and transport dynamics, as well as the identification of typical and critical states. In this study we present a systematic approach to assess the impact of measurement frequency on the accuracy and uncertainty of derived aggregate indicators of environmental quality. High-frequency data (measured at 10- and 15-min intervals) on water temperature, pH, turbidity, electric conductivity and concentrations of dissolved oxygen, nitrate, ammonia and phosphate are assessed in resampling experiments. The data are collected at 14 sites in eastern and northern Germany, representing catchments of varying properties between 40 km2 and 140,000 km2. Resampling is performed to create series of hourly to quarterly frequency, including special restrictions like sampling at working hours or discharge compensation. Statistical properties and their confidence intervals are determined in a bootstrapping procedure and evaluated along a gradient of sampling frequency. For all variables, the range of the aggregate indicators in the bootstrapping realizations increases markedly with decreasing sampling frequency. Mean values of electric conductivity, pH and water temperature obtained with monthly frequency differ on average by less than five percent from the original data. Mean dissolved oxygen, nitrate and phosphate showed less than 15% bias at most stations. Ammonia and turbidity are most sensitive to the reduction in sampling frequency, with up to 30% average and 250% maximum bias at monthly sampling frequency. A systematic bias is recognized
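
    A stripped-down version of such a resampling experiment is sketched below: thin a high-frequency series to coarser intervals, bootstrap the thinned series, and inspect the spread of the aggregate indicator (here, the annual mean). The synthetic series and interval choices are illustrative only.

        import numpy as np

        rng = np.random.default_rng(42)
        t = np.arange(0, 365 * 24 * 4)   # 15-min steps over one year
        series = 8 + 2 * np.sin(2 * np.pi * t / (24 * 4)) + rng.normal(0, 0.8, t.size)

        def bootstrap_mean_ci(x, n_boot=1000, alpha=0.05):
            means = np.array([rng.choice(x, size=x.size, replace=True).mean()
                              for _ in range(n_boot)])
            lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
            return x.mean(), lo, hi

        for step, label in [(1, "15 min"), (4, "hourly"), (4 * 24 * 7, "weekly")]:
            thinned = series[::step]
            m, lo, hi = bootstrap_mean_ci(thinned)
            print(f"{label:>7}: mean {m:.2f}, 95% CI [{lo:.2f}, {hi:.2f}] (n={thinned.size})")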

  8. Communicating Performance Assessments Results - 13609

    SciTech Connect

    Layton, Mark

    2013-07-01

    The F-Area Tank Farms (FTF) and H-Area Tank Farm (HTF) are owned by the U.S. Department of Energy (DOE) and operated by Savannah River Remediation LLC (SRR), Liquid Waste Operations contractor at DOE's Savannah River Site (SRS). The FTF and HTF are active radioactive waste storage and treatment facilities consisting of 51 carbon steel waste tanks and ancillary equipment such as transfer lines, evaporators and pump tanks. Performance Assessments (PAs) for each Tank Farm have been prepared to support the eventual closure of the underground radioactive waste tanks and ancillary equipment. PAs provide the technical bases and results to be used in subsequent documents to demonstrate compliance with the pertinent requirements for final closure of the Tank Farms. The Tank Farms are subject to a number of regulatory requirements. The State regulates Tank Farm operations through an industrial waste water permit and through a Federal Facility Agreement approved by the State, DOE and the Environmental Protection Agency (EPA). Closure documentation will include State-approved Tank Farm Closure Plans and tank-specific closure modules utilizing information from the PAs. For this reason, the State of South Carolina and the EPA must be involved in the performance assessment review process. The residual material remaining after tank cleaning is also subject to reclassification prior to closure via a waste determination pursuant to Section 3116 of the Ronald W. Reagan National Defense Authorization Act of Fiscal Year 2005. PAs are performance-based, risk-informed analyses of the fate and transport of FTF and HTF residual wastes following final closure of the Tank Farms. Since the PAs serve as the primary risk assessment tools in evaluating readiness for closure, it is vital that PA conclusions be communicated effectively. In the course of developing the FTF and HTF PAs, several lessons learned have emerged regarding communicating PA results. When communicating PA results it is

  9. Calibration of ground-based microwave radiometers - Accuracy assessment and recommendations for network users

    NASA Astrophysics Data System (ADS)

    Pospichal, Bernhard; Küchler, Nils; Löhnert, Ulrich; Crewell, Susanne; Czekala, Harald; Güldner, Jürgen

    2016-04-01

    Ground-based microwave radiometers (MWR) are becoming widely used in atmospheric remote sensing and are starting to be routinely operated by national weather services and other institutions. However, common standards for the calibration of these radiometers and detailed knowledge of their error characteristics are needed in order to assimilate the data into models. Intercomparisons of calibrations by different MWRs have rarely been done. Therefore, two calibration experiments, in Lindenberg (2014) and Meckenheim (2015), were performed in the frame of TOPROF (COST Action ES1303) in order to assess uncertainties and differences between various instruments. In addition, a series of experiments was conducted in Oklahoma in autumn 2014. The focus lay on the performance of the two main instrument types currently used operationally: the MP-Profiler series by Radiometrics Corporation and the HATPRO series by Radiometer Physics GmbH (RPG). Both instrument types operate in two frequency bands, one along the 22 GHz water vapour line, the other at the lower wing of the 60 GHz oxygen absorption complex. The goal was to establish protocols for providing quality-controlled (QC) MWR data and their uncertainties. To this end, standardized calibration procedures for MWR were developed and recommendations for radiometer users were compiled. We focus here mainly on data types, integration times and optimal settings for calibration intervals, both for absolute (liquid nitrogen, tipping curve) and relative (hot load, noise diode) calibrations. Besides the recommendations for ground-based MWR operators, we will present methods to determine the accuracy of the calibration as well as means for automatic data quality control. In addition, some results from the intercomparison of different radiometers will be discussed.

  10. Performance Assessment and Geometric Calibration of RESOURCESAT-2

    NASA Astrophysics Data System (ADS)

    Radhadevi, P. V.; Solanki, S. S.; Akilan, A.; Jyothi, M. V.; Nagasubramanian, V.

    2016-06-01

    Resourcesat-2 (RS-2) has successfully completed five years of operations in its orbit. This satellite has multi-resolution and multi-spectral capabilities in a single platform. A continuous and autonomous co-registration, geo-location and radiometric calibration of image data from different sensors with widely varying view angles and resolution was one of the challenges of RS-2 data processing. On-orbit geometric performance of RS-2 sensors has been widely assessed and calibrated during the initial phase operations. Since then, as an ongoing activity, various geometric performance data are being generated periodically. This is performed with sites of dense ground control points (GCPs). These parameters are correlated to the direct geo-location accuracy of the RS-2 sensors and are monitored and validated to maintain the performance. This paper brings out the geometric accuracy assessment, calibration and validation done for about 500 datasets of RS-2. The objectives of this study are to ensure the best absolute and relative location accuracy of different cameras, location performance with payload steering and co-registration of multiple bands. This is done using a viewing geometry model, given ephemeris and attitude data, precise camera geometry and datum transformation. In the model, the forward and reverse transformations between the coordinate systems associated with the focal plane, payload, body, orbit and ground are rigorously and explicitly defined. System level tests using comparisons to ground check points have validated the operational geo-location accuracy performance and the stability of the calibration parameters.

  11. Initial Performance Assessment of CALIOP

    NASA Technical Reports Server (NTRS)

    Winker, David; Hunt, Bill; McGill, Matthew

    2007-01-01

    The Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP, pronounced the same as "calliope") is a spaceborne two-wavelength polarization lidar that has been acquiring global data since June 2006. CALIOP provides high resolution vertical profiles of clouds and aerosols, and has been designed with a very large linear dynamic range to encompass the full range of signal returns from aerosols and clouds. CALIOP is the primary instrument carried by the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) satellite, which was launched on April 28, 2006. CALIPSO was developed within the framework of a collaboration between NASA and the French space agency, CNES. Initial data analysis and validation intercomparisons indicate the quality of data from CALIOP meets or exceeds expectations. This paper presents a description of the CALIPSO mission, the CALIOP instrument, and an initial assessment of on-orbit measurement performance.

  12. Analytic Guided-Search Model of Human Performance Accuracy in Target-Localization Search Tasks

    NASA Technical Reports Server (NTRS)

    Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.

    2000-01-01

    Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.

  13. Accuracy assessment of single and double difference models for the single epoch GPS compass

    NASA Astrophysics Data System (ADS)

    Chen, Wantong; Qin, Honglei; Zhang, Yanzhong; Jin, Tian

    2012-02-01

    The single epoch GPS compass is an important field of study, since it is a valuable technique for the orientation estimation of vehicles and it can guarantee total independence from carrier phase slips in practical applications. To achieve highly accurate angular estimates, the unknown integer ambiguities of the carrier phase observables need to be resolved. Past research has focused on ambiguity resolution for a single epoch; however, accuracy is another significant problem for many challenging applications. In this contribution, the accuracy is evaluated for the non-common clock scheme and the common clock scheme of the receivers, respectively. We focus on three scenarios for either scheme: single difference model vs. double difference model, single frequency model vs. multiple frequency model, and optimal linear combinations vs. traditional triple-frequency least squares. We deduce the short baseline precision for a number of different available models and analyze the difference in accuracy for those models. Compared with the single or double difference model of the non-common clock scheme, the single difference model of the common clock scheme can greatly reduce the vertical component error of the baseline vector, which results in higher elevation accuracy. The least squares estimator can also reduce the error of the fixed baseline vector with the aid of multi-frequency observations, thereby improving the attitude accuracy. In essence, the "accuracy improvement" is attributed to the difference in accuracy for different models, not a real improvement for any specific model. If all noise levels of GPS triple-frequency carrier phase are assumed to be the same in units of cycles, it can be proved that the optimal linear combination approach is equivalent to the traditional triple-frequency least squares, no matter which scheme is utilized. Both simulations and actual experiments have been performed to verify the correctness of the theoretical analysis.

  14. Standardizing the Protocol for Hemispherical Photographs: Accuracy Assessment of Binarization Algorithms

    PubMed Central

    Glatthorn, Jonas; Beckschäfer, Philip

    2014-01-01

    Hemispherical photography is a well-established method to optically assess ecological parameters related to plant canopies, e.g. ground-level light regimes and the distribution of foliage within the crown space. Interpreting hemispherical photographs involves classifying pixels as either sky or vegetation. A wide range of automatic thresholding or binarization algorithms exists to classify the photographs. The variety in methodology hampers the ability to compare results across studies. To identify an optimal threshold selection method, this study assessed the accuracy of seven binarization methods implemented in software currently available for the processing of hemispherical photographs. Binarizations obtained by the algorithms were compared to reference data generated through a manual binarization of a stratified random selection of pixels. This approach was adopted from the accuracy assessment of map classifications known from remote sensing studies. Percentage correct (PC) and kappa statistics (κ) were calculated. The accuracy of the algorithms was assessed for photographs taken with automatic exposure settings (auto-exposure) and photographs taken with settings which avoid overexposure (histogram-exposure). In addition, gap fraction values derived from hemispherical photographs were compared with estimates derived from the manually classified reference pixels. All tested algorithms were shown to be sensitive to overexposure. Three of the algorithms showed an accuracy high enough to be recommended for the processing of histogram-exposed hemispherical photographs: “Minimum” (PC 98.8%; κ 0.952), “Edge Detection” (PC 98.1%; κ 0.950), and “Minimum Histogram” (PC 98.1%; κ 0.947). The Minimum algorithm overestimated gap fraction least of all (11%). The overestimation by the algorithms Edge Detection (63%) and Minimum Histogram (67%) was considerably larger. For the remaining four evaluated algorithms (IsoData, Maximum Entropy, MinError, and Otsu) an
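
    The gap-fraction comparison described above reduces to counting sky pixels after binarization. The sketch below uses a plain global threshold on a synthetic image in place of the dedicated algorithms evaluated in the study.

        import numpy as np

        def gap_fraction(grey: np.ndarray, threshold: int) -> float:
            """Classify pixels >= threshold as sky and return the sky-pixel share."""
            return float((grey >= threshold).mean())

        rng = np.random.default_rng(1)
        image = rng.integers(10, 80, size=(600, 600))            # dark vegetation pixels
        image[:, :90] = rng.integers(220, 255, size=(600, 90))   # a bright sky sector (15%)
        print(f"gap fraction = {gap_fraction(image, threshold=150):.3f}")   # ~0.15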

  15. Physician performance assessment: prevention of cardiovascular disease.

    PubMed

    Lipner, Rebecca S; Weng, Weifeng; Caverzagie, Kelly J; Hess, Brian J

    2013-12-01

    Given the rising burden of healthcare costs, both patients and healthcare purchasers are interested in discerning which physicians deliver quality care. We proposed a methodology to assess physician clinical performance in preventive cardiology care, and determined a benchmark for minimally acceptable performance. We used data on eight evidence-based clinical measures from 811 physicians that completed the American Board of Internal Medicine's Preventive Cardiology Practice Improvement Module(SM) to form an overall composite score for preventive cardiology care. An expert panel of nine internists/cardiologists skilled in preventive care for cardiovascular disease used an adaptation of the Angoff standard-setting method and the Dunn-Rankin method to create the composite and establish a standard. Physician characteristics were used to examine the validity of the inferences made from the composite scores. The mean composite score was 73.88 % (SD = 11.88 %). Reliability of the composite was high at 0.87. Specialized cardiologists had significantly lower composite scores (P = 0.04), while physicians who reported spending more time in primary, longitudinal, and preventive consultative care had significantly higher scores (P = 0.01), providing some evidence of score validity. The panel established a standard of 47.38 % on the composite measure with high classification accuracy (0.98). Only 2.7 % of the physicians performed below the standard for minimally acceptable preventive cardiovascular disease care. Of those, 64 % (N = 14) were not general cardiologists. Our study presents a psychometrically defensible methodology for assessing physician performance in preventive cardiology while also providing relative feedback with the hope of heightening physician awareness about deficits and improving patient care. PMID:23417594

  16. Accuracy assessment of topographic mapping using UAV image integrated with satellite images

    NASA Astrophysics Data System (ADS)

    Azmi, S. M.; Ahmad, Baharin; Ahmad, Anuar

    2014-02-01

    Unmanned Aerial Vehicles (UAVs) are extensively applied in various fields such as military applications, archaeology, agriculture and scientific research. This study focuses on topographic mapping and map updating. UAVs are an alternative way to ease the process of acquiring data, with low manufacturing and operational costs, and they are easy to operate. Furthermore, UAV images will be integrated with QuickBird images that are used as base maps. The objective of this study is to make an accuracy assessment and comparison between topographic mapping using UAV images integrated with aerial photographs and satellite images. The main purpose of using UAV images is to replace cloud-covered areas, which normally exist in aerial photographs and satellite images, and to update topographic maps. Meanwhile, spatial resolution, pixel size, scale, geometric accuracy and correction, image quality and information content are important requirements for the generation of topographic maps from these kinds of data. In this study, ground control points (GCPs) and check points (CPs) were established using the real-time kinematic Global Positioning System (RTK-GPS) technique. Two types of analysis were carried out in this study: quantitative and qualitative assessment. The quantitative assessment was carried out by calculating the root mean square error (RMSE). The outputs of this study include a topographic map and an orthophoto. From this study, the accuracy of the UAV image is ±0.460 m. In conclusion, UAV images have the potential to be used for updating topographic maps.

  17. A Method for Assessing the Accuracy of a Photogrammetry System for Precision Deployable Structures

    NASA Technical Reports Server (NTRS)

    Moore, Ashley

    2005-01-01

    The measurement techniques used to validate analytical models of large deployable structures are an integral part of the technology development process and must be precise and accurate. Photogrammetry and videogrammetry are viable, accurate, and unobtrusive methods for measuring such large structures. Photogrammetry uses software to determine the three-dimensional position of a target using camera images. Videogrammetry is based on the same principle, except that a series of timed images is analyzed. This work addresses the accuracy of a digital photogrammetry system used for measurement of large, deployable space structures at JPL. First, photogrammetry tests are performed on a precision space truss test article, and the images are processed using PhotoModeler software. The accuracy of the PhotoModeler results is determined through comparison with measurements of the test article taken by an external testing group using the VSTARS photogrammetry system. These two measurements are then compared with Australis photogrammetry software that simulates a measurement test to predict its accuracy. The software is then used to study how particular factors, such as camera resolution and placement, affect the system accuracy, to help design the setup for the videogrammetry system that will offer the highest level of accuracy for measurement of deploying structures.

  18. Accuracy of the orthopantomogram in assessment of tooth length in orthodontic patients.

    PubMed

    Lien, L C; Soh, G

    2000-12-01

    The orthopantomogram (OPG) provides an assessment of root length and characteristics before orthodontic tooth movement. This study determined the accuracy of the OPG in assessing tooth length. Investigators compared the radiographic and actual tooth lengths in permanent first premolars indicated for orthodontic extraction. Results showed that the mean lengths measured from the OPG were consistently higher than the actual lengths, by 22% (p < 0.001) for maxillary teeth and by 1% for mandibular teeth. This study found that there is elongation of root images in the OPG. PMID:11699368

  19. A method to assess the accuracy of sonotubometry for detecting Eustachian tube openings.

    PubMed

    Swarts, J Douglas; Teixeira, Miriam S; Banks, Juliane; El-Wagaa, Jenna; Doyle, William J

    2015-09-01

    Sonotubometry is a simple test for Eustachian tube (ET) opening during a maneuver. Different sonotubometry configurations have been suggested to maximize test accuracy, but no method has been described for comparing sonotubometry test results with those for a definitive measure of ET opening. Here, we present such a method and exemplify its use with an accuracy assessment of a simple sonotubometry configuration. A total of 502 data sequences from 168 test sessions in 103 adult subjects were analyzed. For each session, subjects were seated in a pressure chamber and relative middle ear over- and under-pressures were created by changing chamber pressure. At each pressure, the test sequence of bilateral tympanometry, bilateral sonotubometry while the subject swallowed twice, and bilateral tympanometry was done. Tympanometric data were expressed as the fractional gradient equilibrated (FGE) by swallowing, and sonotubometric signals were analyzed to record the shape of detected sound signals. Tympanometric and sonotubometric tubal opening assignments were analyzed by cross-correlation. For the data sequences with FGE = 0 (n = 32), evidencing no tubal opening, and FGE = 1 (n = 249), evidencing definitive tubal opening, detection of a sonotubometry sound signal during a swallow had a sensitivity and specificity of 74.2 and 65.6% for identifying ET openings and an accuracy of 73.3% for assigning ET opening/non-opening by swallowing. Measures of sound signal shape were significantly different between those groups. This protocol allows a sonotubometry accuracy assessment for detecting ET openings. For the test configuration used, accuracy was moderate, but this should improve as more sophisticated sonotubometry test configurations are evaluated. PMID:24710849

  20. Assessing accuracy and precision for field and laboratory data: a perspective in ecosystem restoration

    USGS Publications Warehouse

    Stapanian, Martin A.; Lewis, Timothy E; Palmer, Craig J.; Middlebrook Amos, Molly

    2016-01-01

    Unlike most laboratory studies, rigorous quality assurance/quality control (QA/QC) procedures may be lacking in ecosystem restoration (“ecorestoration”) projects, despite legislative mandates in the United States. This is due, in part, to ecorestoration specialists making the false assumption that some types of data (e.g. discrete variables such as species identification and abundance classes) are not subject to evaluations of data quality. Moreover, emergent behavior manifested by complex, adapting, and nonlinear organizations responsible for monitoring the success of ecorestoration projects tend to unconsciously minimize disorder, QA/QC being an activity perceived as creating disorder. We discuss similarities and differences in assessing precision and accuracy for field and laboratory data. Although the concepts for assessing precision and accuracy of ecorestoration field data are conceptually the same as laboratory data, the manner in which these data quality attributes are assessed is different. From a sample analysis perspective, a field crew is comparable to a laboratory instrument that requires regular “recalibration,” with results obtained by experts at the same plot treated as laboratory calibration standards. Unlike laboratory standards and reference materials, the “true” value for many field variables is commonly unknown. In the laboratory, specific QA/QC samples assess error for each aspect of the measurement process, whereas field revisits assess precision and accuracy of the entire data collection process following initial calibration. Rigorous QA/QC data in an ecorestoration project are essential for evaluating the success of a project, and they provide the only objective “legacy” of the dataset for potential legal challenges and future uses.

  1. 3D Surface Reconstruction of Plant Seeds by Volume Carving: Performance and Accuracies

    PubMed Central

    Roussel, Johanna; Geiger, Felix; Fischbach, Andreas; Jahnke, Siegfried; Scharr, Hanno

    2016-01-01

    We describe a method for 3D reconstruction of plant seed surfaces, focusing on small seeds with diameters as small as 200 μm. The method considers robotized systems allowing single seed handling in order to rotate a single seed in front of a camera. Even though such systems feature high position repeatability, at sub-millimeter object scales, camera pose variations have to be compensated. We do this by robustly estimating the tool center point from each acquired image. 3D reconstruction can then be performed by a simple shape-from-silhouette approach. In experiments we investigate runtimes, theoretically achievable accuracy, experimentally achieved accuracy, and show as a proof of principle that the proposed method is well sufficient for 3D seed phenotyping purposes. PMID:27375628

  2. Accuracy assessment of minimum control points for UAV photography and georeferencing

    NASA Astrophysics Data System (ADS)

    Skarlatos, D.; Procopiou, E.; Stavrou, G.; Gregoriou, M.

    2013-08-01

    In recent years, Autonomous Unmanned Aerial Vehicles (AUAV) have become popular among researchers across disciplines because they combine many advantages. One major application is monitoring and mapping. Their ability to fly beyond eye sight autonomously, collecting data over large areas whenever and wherever needed, makes them an excellent platform for monitoring hazardous areas or disasters. In both cases rapid mapping is needed while human access isn't always a given. Indeed, current automatic processing of aerial photos using photogrammetry and computer vision algorithms allows for rapid orthophotomap production and Digital Surface Model (DSM) generation, as tools for monitoring and damage assessment. In such cases, control point measurement using GPS is either impossible, time consuming, or costly. This work investigates the accuracies that can be attained using few or no control points over areas of one square kilometer, in two test sites: a typical block and a corridor survey. On-board GPS data logged during the AUAV's flight are used for direct georeferencing, while ground check points are used for evaluation. In addition, various control point layouts are tested using bundle adjustment for accuracy evaluation. Results indicate that it is possible to use on-board single frequency GPS for direct georeferencing in cases of disaster management or areas without easy access, or even over featureless areas. Due to the large number of tie points in the bundle adjustment, horizontal accuracy can be achieved with a rather small number of control points, but vertical accuracy may not.

  3. Mapping soil texture classes and optimization of the result by accuracy assessment

    NASA Astrophysics Data System (ADS)

    Laborczi, Annamária; Takács, Katalin; Bakacsi, Zsófia; Szabó, József; Pásztor, László

    2014-05-01

    There are increasing demands nowadays on spatial soil information in order to support environment-related and land use management decisions. The GlobalSoilMap.net (GSM) project aims to make a new digital soil map of the world using state-of-the-art and emerging technologies for soil mapping and predicting soil properties at fine resolution. Sand, silt and clay are among the mandatory GSM soil properties. Furthermore, soil texture class information is input data for significant agro-meteorological and hydrological models. Our present work aims to compare and evaluate different digital soil mapping methods and variables for producing the most accurate spatial prediction of texture classes in Hungary. In addition to the Hungarian Soil Information and Monitoring System as our basic data, a digital elevation model and its derived components, a geological database, and physical property maps of the Digital Kreybig Soil Information System have been applied as auxiliary elements. Two approaches have been applied for the mapping process. At first the sand, silt and clay rasters were computed independently using regression kriging (RK). From these rasters, according to the USDA categories, we compiled the texture class map. Different combinations of reference and training soil data and auxiliary covariables resulted in several different maps. However, these results necessarily include the uncertainty of the three kriged rasters. Therefore we applied data mining methods as the other approach to digital soil mapping. By building classification trees and random forests we obtained the texture class maps directly. In this way the various results can be compared to the RK maps. The performance of the different methods and data has been examined by testing the accuracy of the geostatistically computed and the directly classified results. We have used the GSM methodology to assess the most predictive and accurate way for getting the best among the
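
    As a hedged illustration of the classification step described above (deriving a USDA texture class from sand/silt/clay fractions), the Python sketch below implements only a few corner classes with approximate thresholds; it is not the full 12-class USDA triangle nor the paper's workflow.

    ```python
    def usda_texture_class(sand, silt, clay):
        """Very simplified USDA-style texture lookup from sand/silt/clay
        percentages (summing to ~100); only a few classes are covered."""
        if silt + 1.5 * clay < 15:
            return "sand"
        if clay >= 40 and sand <= 45 and silt < 40:
            return "clay"
        if silt >= 80 and clay < 12:
            return "silt"
        return "other (resolve with the full USDA triangle)"

    print(usda_texture_class(sand=92, silt=5, clay=3))   # 'sand'
    ```

    Applied pixel-wise to three kriged sand/silt/clay rasters, a rule set of this kind yields a categorical texture map that can then be compared with directly classified (tree- or forest-based) maps.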

  4. Accuracy Assessment of Crown Delineation Methods for the Individual Trees Using LIDAR Data

    NASA Astrophysics Data System (ADS)

    Chang, K. T.; Lin, C.; Lin, Y. C.; Liu, J. K.

    2016-06-01

    Forest canopy density and height are used as variables in a number of environmental applications, including the estimation of biomass, forest extent and condition, and biodiversity. Airborne Light Detection and Ranging (LiDAR) is very useful for estimating forest canopy parameters from the generated canopy height models (CHMs). The purpose of this work is to introduce an algorithm to delineate crown parameters, e.g. tree height and crown radii, based on the generated rasterized CHMs. An accuracy assessment for the extraction of volumetric parameters of a single tree is also performed against manual measurements using corresponding aerial photo pairs. A LiDAR dataset of a golf course acquired by a Leica ALS70-HP is used in this study. Two algorithms, i.e. a traditional one with the subtraction of a digital elevation model (DEM) from a digital surface model (DSM), and a pit-free approach, are first used to generate the CHMs. Then two algorithms, a multilevel morphological active-contour (MMAC) and a variable window filter (VWF), are implemented and used in this study for individual tree delineation. Finally, the experimental results of the two automatic estimation methods for individual trees are evaluated against manually measured stand-level parameters, i.e. tree height and crown diameter. The CHM generated by a simple subtraction is full of empty pixels (called "pits") that adversely affect the subsequent analysis for individual tree delineation. The experimental results indicated that after the pit-free process more individual trees can be extracted and tree crown shapes become more complete in the CHM data.
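
    A minimal sketch of the simple CHM-by-subtraction step mentioned above, assuming `dsm` and `dem` are co-registered NumPy rasters; the optional median filter is only a crude stand-in for pit mitigation, not the pit-free algorithm evaluated in the paper.

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def canopy_height_model(dsm, dem, smooth=True):
        """CHM as DSM minus DEM, with negative heights clipped to zero.
        The 3x3 median filter roughly suppresses isolated 'pit' pixels."""
        chm = np.clip(dsm - dem, 0.0, None)
        return median_filter(chm, size=3) if smooth else chm
    ```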

  5. Toward a Science Performance Assessment Technology.

    ERIC Educational Resources Information Center

    Shavelson, Richard J.; Solano-Flores, Guillermo; Ruiz-Primo, Maria Araceli

    1998-01-01

    Research on developing technology for large-scale performance assessments in science is reported briefly, and a conceptual framework is presented for defining, generating, and evaluating science performance assessments. Types of tasks are discussed, and the technical qualities of performance assessments are discussed in the context of…

  6. Strategies used in post-operative pain assessment and their clinical accuracy.

    PubMed

    Sjöström, B; Dahlgren, L O; Haljamäe, H

    2000-01-01

    Our knowledge about the content of strategies used by staff members in a surgical recovery unit for assessment of post-operative pain is fairly limited. The aim of the present study was to describe variations in the content of strategies used by nurses and physicians in practical clinical pain assessments and to evaluate the clinical accuracy of the strategies used. Critical care nurses (n = 30), physicians (n = 30) and postsurgical patients (n = 180) comprised the respondents. Applying a phenomenographical approach, interview data were tape-recorded during 180 clinical pain assessments. The pain assessments were related to comparative bedside pain ratings (Visual Analogue Scale, VAS) by both staff members and post-operative patients. The recorded interviews were analysed to describe variations in ways of assessing pain. Pain assessment strategies were established by combining categories describing the impact of experience and categories of assessment criteria. The present observations, if included in the education of clinical staff members, could increase the understanding and thereby the quality of the pain assessment process. PMID:11022499

  7. Behavior model for performance assessment.

    SciTech Connect

    Brown-VanHoozer, S. A.

    1999-07-23

    Every individual channels information differently, based on their preference for the sensory modality or representational system (visual, auditory, or kinesthetic) they tend to favor most (their primary representational system, or PRS). Therefore, some of us access and store our information primarily visually first, some auditorily, and others kinesthetically (through feel and touch), which in turn establishes our information processing patterns and strategies and our external-to-internal (and subsequently vice versa) experiential language representation. Because of the different ways we channel our information, each of us will respond differently to a task--the way we gather and process the external information (input), our response time (process), and the outcome (behavior). Traditional human models of decision making and response time focus on perception, cognitive and motor systems stimulated and influenced by the three sensory modalities: visual, auditory and kinesthetic. For us, these are the building blocks to knowing how someone is thinking. Being aware of what is taking place and how to ask questions is essential in assessing performance toward reducing human errors. Existing models give predictions based on time values or response times for a particular event, and these may be summed and averaged for a generalization of behavior(s). However, without establishing a basic understanding of how the behavior was predicated through a decision-making strategy process, predictive models are overall inefficient in their analysis of the means by which behavior was generated. What is seen is the end result.

  8. Immediate Feedback on Accuracy and Performance: The Effects of Wireless Technology on Food Safety Tracking at a Distribution Center

    ERIC Educational Resources Information Center

    Goomas, David T.

    2012-01-01

    The effects of wireless ring scanners, which provided immediate auditory and visual feedback, were evaluated to increase the performance and accuracy of order selectors at a meat distribution center. The scanners not only increased performance and accuracy compared to paper pick sheets, but were also instrumental in immediate and accurate data…

  9. An accuracy assessment of Cartesian-mesh approaches for the Euler equations

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1995-01-01

    A critical assessment of the accuracy of Cartesian-mesh approaches for steady, transonic solutions of the Euler equations of gas dynamics is made. An exact solution of the Euler equations (Ringleb's flow) is used not only to infer the order of the truncation error of the Cartesian-mesh approaches, but also to compare the magnitude of the discrete error directly to that obtained with a structured mesh approach. Uniformly and adaptively refined solutions using a Cartesian-mesh approach are obtained and compared to each other and to uniformly refined structured mesh results. The effect of cell merging is investigated as well as the use of two different K-exact reconstruction procedures. The solution methodology of the schemes is explained and tabulated results are presented to compare the solution accuracies.
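
    When an exact solution such as Ringleb's flow is available, the observed order of accuracy can be inferred from the error on two uniformly refined meshes; a generic sketch (not the paper's code), assuming the L2 errors and the refinement ratio are known:

    ```python
    import math

    def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
        """Observed order p from errors on two meshes refined by a fixed ratio:
        p = log(e_coarse / e_fine) / log(r)."""
        return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

    # Hypothetical L2 errors on meshes refined by a factor of 2 -> p ~ 1.98.
    print(observed_order(3.2e-3, 8.1e-4))
    ```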

  10. Theory and methods for accuracy assessment of thematic maps using fuzzy sets

    SciTech Connect

    Gopal, S.; Woodcock, C. )

    1994-02-01

    The use of fuzzy sets in map accuracy assessment expands the amount of information that can be provided regarding the nature, frequency, magnitude, and source of errors in a thematic map. The need for using fuzzy sets arises from the observation that all map locations do not fit unambiguously in a single map category. Fuzzy sets allow for varying levels of set membership for multiple map categories. A linguistic measurement scale allows the kinds of comments commonly made during map evaluations to be used to quantify map accuracy. Four tables result from the use of fuzzy functions, and when taken together they provide more information than traditional confusion matrices. The use of a hypothetical dataset helps illustrate the benefits of the new methods. It is hoped that the enhanced ability to evaluate maps resulting from the use of fuzzy sets will improve our understanding of uncertainty in maps and facilitate improved error modeling. 40 refs.

  11. Design and performance of a new high accuracy combined small sample neutron/gamma detector

    SciTech Connect

    Menlove, H.; Davidson, D.; Verplancke, J.; Vermeulen, P.; Wagner, H.G.; Wellum, R.; Brandelise, B.; Mayer, K.

    1993-08-01

    This paper describes the design of an optimized combined neutron and gamma detector installed around a measurement well protruding from the floor of a glove box. The objective of this design was to achieve an overall accuracy for the plutonium element concentration in gram-sized samples of plutonium oxide powder approaching the approximately 0.1-0.2% accuracies routinely achieved by inspectors' chemical analysis. The efficiency of the clam-shell neutron detector was increased and the flat response zone extended in axial and radial directions. The sample holder introduced from within the glove box was designed to form the upper reflector, while two graphite half-shells fitted around the thin neck of the high-resolution LEGe detector replaced the lower plug. The Institute for Reference Materials and Measurements (IRMM) in Geel prepared special plutonium oxide test samples whose plutonium concentration was determined to better than 0.05%. During a three week initial performance test in July 1992 at ITU Karlsruhe and in long term tests, it was established that the target accuracy can be achieved provided sufficient care is taken to assure the reproducibility of sample bottling and sample positioning. The paper presents and discusses the results of all test measurements.

  12. Design and performance of a new high accuracy combined small sample neutron/gamma detector

    SciTech Connect

    Menlove, H.; Davidson, D.; Verplancke, J.; Vermeulen, P.; Wagner, H.G.; Wellum, R.; Brandelise, B.; Mayer, K.

    1993-12-31

    This paper describes the design of an optimized combined neutron and gamma detector installed around a measurement well protruding from the floor of a glove box. The objective of this design was to achieve an overall accuracy for the plutonium element concentration in gram-sized samples of plutonium oxide powder approaching the approximately 0.1-0.2% accuracies routinely achieved by inspectors' chemical analysis. The efficiency of the clam-shell neutron detector was increased and the flat response zone extended in axial and radial directions. The sample holder introduced from within the glove box was designed to form the upper reflector, while two graphite half-shells fitted around the thin neck of the high-resolution LEGe detector replaced the lower plug. The Institute for Reference Materials and Measurements (IRMM) in Geel prepared special plutonium oxide test samples whose plutonium concentration was determined to better than 0.05%. During a three week initial performance test in July 1992 at ITU Karlsruhe and in long term tests, it was established that the target accuracy can be achieved provided sufficient care is taken to assure the reproducibility of sample bottling and sample positioning. The paper presents and discusses the results of all test measurements.

  13. Exploring Writing Accuracy and Writing Complexity as Predictors of High-Stakes State Assessments

    ERIC Educational Resources Information Center

    Edman, Ellie Whitner

    2012-01-01

    The advent of No Child Left Behind led to increased teacher accountability for student performance and placed strict sanctions in place for failure to meet a certain level of performance each year. With instructional time at a premium, it is imperative that educators have brief academic assessments that accurately predict performance on…

  14. Digital temperature sensor performance assessment report. [in simulated shuttle environments

    NASA Technical Reports Server (NTRS)

    Canniff, J. H.

    1974-01-01

    Performance assessment data accumulated during exposure of the digital temperature sensor to simulated shuttle flight type environments are presented. The test parameters were specifically designed to check the sensor for its: (1) ability to resolve temperature relative to the design specifications; (2) ability to maintain accuracy after interchanging the temperature probes with each electronics interface assembly; (3) stability (i.e., satisfactory operation and accuracy during and after exposure to flight environments); and (4) repeatability, or its ability to produce the same output on subsequent exposures to the identical stimulus. Equipment list, test descriptions, data summary, and conclusions are included.

  15. Computational Performance and Statistical Accuracy of *BEAST and Comparisons with Other Methods.

    PubMed

    Ogilvie, Huw A; Heled, Joseph; Xie, Dong; Drummond, Alexei J

    2016-05-01

    Under the multispecies coalescent model of molecular evolution, gene trees have independent evolutionary histories within a shared species tree. In comparison, supermatrix concatenation methods assume that gene trees share a single common genealogical history, thereby equating gene coalescence with species divergence. The multispecies coalescent is supported by previous studies which found that its predicted distributions fit empirical data, and that concatenation is not a consistent estimator of the species tree. *BEAST, a fully Bayesian implementation of the multispecies coalescent, is popular but computationally intensive, so the increasing size of phylogenetic data sets is both a computational challenge and an opportunity for better systematics. Using simulation studies, we characterize the scaling behavior of *BEAST, and enable quantitative prediction of the impact increasing the number of loci has on both computational performance and statistical accuracy. Follow-up simulations over a wide range of parameters show that the statistical performance of *BEAST relative to concatenation improves both as branch length is reduced and as the number of loci is increased. Finally, using simulations based on estimated parameters from two phylogenomic data sets, we compare the performance of a range of species tree and concatenation methods to show that using *BEAST with tens of loci can be preferable to using concatenation with thousands of loci. Our results provide insight into the practicalities of Bayesian species tree estimation, the number of loci required to obtain a given level of accuracy and the situations in which supermatrix or summary methods will be outperformed by the fully Bayesian multispecies coalescent. PMID:26821913

  16. Computational Performance and Statistical Accuracy of *BEAST and Comparisons with Other Methods

    PubMed Central

    Ogilvie, Huw A.; Heled, Joseph; Xie, Dong; Drummond, Alexei J.

    2016-01-01

    Under the multispecies coalescent model of molecular evolution, gene trees have independent evolutionary histories within a shared species tree. In comparison, supermatrix concatenation methods assume that gene trees share a single common genealogical history, thereby equating gene coalescence with species divergence. The multispecies coalescent is supported by previous studies which found that its predicted distributions fit empirical data, and that concatenation is not a consistent estimator of the species tree. *BEAST, a fully Bayesian implementation of the multispecies coalescent, is popular but computationally intensive, so the increasing size of phylogenetic data sets is both a computational challenge and an opportunity for better systematics. Using simulation studies, we characterize the scaling behavior of *BEAST, and enable quantitative prediction of the impact increasing the number of loci has on both computational performance and statistical accuracy. Follow-up simulations over a wide range of parameters show that the statistical performance of *BEAST relative to concatenation improves both as branch length is reduced and as the number of loci is increased. Finally, using simulations based on estimated parameters from two phylogenomic data sets, we compare the performance of a range of species tree and concatenation methods to show that using *BEAST with tens of loci can be preferable to using concatenation with thousands of loci. Our results provide insight into the practicalities of Bayesian species tree estimation, the number of loci required to obtain a given level of accuracy and the situations in which supermatrix or summary methods will be outperformed by the fully Bayesian multispecies coalescent. PMID:26821913

  17. Diffraction based overlay metrology: accuracy and performance on front end stack

    NASA Astrophysics Data System (ADS)

    Leray, Philippe; Cheng, Shaunee; Kandel, Daniel; Adel, Michael; Marchelli, Anat; Vakshtein, Irina; Vasconi, Mauro; Salski, Bartlomiej

    2008-03-01

    The overlay metrology budget is typically 1/10 of the overlay control budget resulting in overlay metrology total measurement uncertainty requirements of 0.57 nm for the most challenging use cases of the 32nm technology generation. Theoretical considerations show that overlay technology based on differential signal scatterometry (SCOL™) has inherent advantages, which will allow it to achieve the 32nm technology generation requirements and go beyond it. In this work we present results of an experimental and theoretical study of SCOL. We present experimental results, comparing this technology with the standard imaging overlay metrology. In particular, we present performance results, such as precision and tool induced shift, for different target designs. The response to a large range of induced misalignment is also shown. SCOL performance on these targets for a real stack is reported. We also show results of simulations of the expected accuracy and performance associated with a variety of scatterometry overlay target designs. The simulations were carried out on several stacks including FEOL and BEOL materials. The inherent limitations and possible improvements of the SCOL technology are discussed. We show that with the appropriate target design and algorithms, scatterometry overlay achieves the accuracy required for future technology generations.

  18. A survey of the parallel performance and accuracy of Poisson solvers for electronic structure calculations.

    PubMed

    García-Risueño, Pablo; Alberdi-Rodriguez, Joseba; Oliveira, Micael J T; Andrade, Xavier; Pippig, Michael; Muguerza, Javier; Arruabarrena, Agustin; Rubio, Angel

    2014-03-01

    We present an analysis of different methods to calculate the classical electrostatic Hartree potential created by charge distributions. Our goal is to provide the reader with an estimation of the performance, in terms of both numerical complexity and accuracy, of popular Poisson solvers, and to give an intuitive idea of the way these solvers operate. Highly parallelizable routines have been implemented in a first-principles simulation code (Octopus) to be used in our tests, so that reliable conclusions about the capability of the methods to tackle large systems in cluster computing can be obtained from our work. PMID:24249048
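
    For intuition about how one family of Poisson solvers operates, the sketch below solves nabla^2 V = -4*pi*rho for a periodic charge density with FFTs (atomic/Gaussian units); it is a generic spectral solver, not one of the parallel implementations benchmarked in the paper, and the function and variable names are placeholders.

    ```python
    import numpy as np

    def hartree_potential_periodic(rho, box_length):
        """Hartree potential on a periodic cubic grid via the reciprocal-space
        relation V(k) = 4*pi*rho(k)/k^2, with the k=0 (average) term dropped."""
        n = rho.shape[0]
        k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_length / n)   # angular wavenumbers
        kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
        k2 = kx**2 + ky**2 + kz**2
        rho_k = np.fft.fftn(rho)
        v_k = np.zeros_like(rho_k)
        mask = k2 > 0
        v_k[mask] = 4.0 * np.pi * rho_k[mask] / k2[mask]
        return np.real(np.fft.ifftn(v_k))
    ```

    Free-boundary systems or very large parallel runs call for other solver families, which is where the complexity and accuracy trade-offs surveyed above become relevant.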

  19. Accuracy of Subjective Performance Appraisal is Not Modulated by the Method Used by the Learner During Motor Skill Acquisition.

    PubMed

    Patterson, Jae T; McRae, Matthew; Lai, Sharon

    2016-04-01

    The present experiment examined whether the method of subjectively appraising motor performance during skill acquisition would differentially strengthen performance appraisal capabilities and subsequent motor learning. Thirty-six participants (18 men and 18 women; M age = 20.8 years, SD = 1.0) learned to execute a serial key-pressing task at a particular overall movement time (2550 ms). Participants were randomly separated into three groups: the Generate group estimated their overall movement time then received knowledge of results of their actual movement time; the Choice group selected their perceived movement time from a list of three alternatives; the third group, the Control group, did not self-report their perceived movement time and received knowledge of results of their actual movement time on every trial. All groups practiced 90 acquisition trials and 30 no knowledge of results trials in a delayed retention test. Results from the delayed retention test showed that both methods of performance appraisal (Generate and Choice) facilitated superior motor performance and greater accuracy in assessing their actual motor performance compared with the control condition. Therefore, the processing required for accurate appraisal of performance was strengthened, independent of performance appraisal method. PMID:27166340

  20. Proposed Testing to Assess the Accuracy of Glass-To-Metal Seal Stress Analyses.

    SciTech Connect

    Chambers, Robert S.; Emery, John M; Tandon, Rajan; Antoun, Bonnie R.; Stavig, Mark E.; Newton, Clay S.; Gibson, Cory S; Bencoe, Denise N.

    2014-09-01

    The material characterization tests conducted on 304L VAR stainless steel and Schott 8061 glass have provided higher fidelity data for calibration of material models used in Glass-to-Metal (GTM) seal analyses. Specifically, a Thermo-Multi-Linear Elastic Plastic (thermo-MLEP) material model has been defined for SS304L and the Simplified Potential Energy Clock nonlinear viscoelastic model has been calibrated for the S8061 glass. To assess the accuracy of finite element stress analyses of GTM seals, a suite of tests is proposed to provide data for comparison to model predictions.

  1. Accuracy assessment of the global ionospheric model over the Southern Ocean based on dynamic observation

    NASA Astrophysics Data System (ADS)

    Luo, Xiaowen

    2016-04-01

    The global ionospheric model based on the reference stations of the Global Navigation Satellite System (GNSS) of the International GNSS Service is presently the most commonly used product of the global ionosphere. It is very important to comprehensively analyze and evaluate the accuracy and reliability of the model for the reasonable use of this kind of ionospheric product. This work differs from the traditional performance evaluation of the global ionospheric model based on observation data of ground-based static reference stations. A preliminary evaluation and analysis of the global ionospheric model was conducted with dynamic observation data collected across different latitudes over the southern oceans. The validation results showed that the accuracy of the global ionospheric model over the southern oceans is about 5 TECu, deviating from the measured ionospheric TEC by about -0.6 TECu.

  2. Accuracy and Reliability of Haptic Spasticity Assessment Using HESS (Haptic Elbow Spasticity Simulator)

    PubMed Central

    Kim, Jonghyun; Park, Hyung-Soon; Damiano, Diane L.

    2013-01-01

    Clinical assessment of spasticity tends to be subjective because of the nature of the in-person assessment; severity of spasticity is judged based on the muscle tone felt by a clinician during manual manipulation of a patient’s limb. As an attempt to standardize the clinical assessment of spasticity, we developed HESS (Haptic Elbow Spasticity Simulator), a programmable robotic system that can provide accurate and consistent haptic responses of spasticity and thus can be used as a training tool for clinicians. The aim of this study is to evaluate the accuracy and reliability of the recreated haptic responses. Based on clinical data collected from children with cerebral palsy, four levels of elbow spasticity (1, 1+, 2, and 3 in the Modified Ashworth Scale [MAS]) were recreated by HESS. Seven experienced clinicians manipulated HESS to score the recreated haptic responses. The accuracy of the recreation was assessed by the percent agreement between intended and determined MAS scores. The inter-rater reliability among the clinicians was analyzed by using Fleiss’s kappa. In addition, the level of realism with the recreation was evaluated by a questionnaire on “how realistic” this felt in a qualitative way. The percent agreement was high (85.7±11.7%), and for inter-rater reliability, there was substantial agreement (κ=0.646) among the seven clinicians. The level of realism was 7.71±0.95 out of 10. These results show that the haptic recreation of spasticity by HESS has the potential to be used as a training tool for standardizing and enhancing reliability of clinical assessment. PMID:22256328

  3. Application of a Monte Carlo accuracy assessment tool to TDRS and GPS

    NASA Technical Reports Server (NTRS)

    Pavloff, Michael S.

    1994-01-01

    In support of a NASA study on the application of radio interferometry to satellite orbit determination, MITRE developed a simulation tool for assessing interferometric tracking accuracy. Initially, the tool was applied to the problem of determining optimal interferometric station siting for orbit determination of the Tracking and Data Relay Satellite (TDRS). Subsequently, the Orbit Determination Accuracy Estimator (ODAE) was expanded to model the general batch maximum likelihood orbit determination algorithms of the Goddard Trajectory Determination System (GTDS) with measurement types including not only group and phase delay from radio interferometry, but also range, range rate, angular measurements, and satellite-to-satellite measurements. The user of ODAE specifies the statistical properties of error sources, including inherent observable imprecision, atmospheric delays, station location uncertainty, and measurement biases. Upon Monte Carlo simulation of the orbit determination process, ODAE calculates the statistical properties of the error in the satellite state vector and any other parameters for which a solution was obtained in the orbit determination. This paper presents results from ODAE application to two different problems: (1) determination of optimal geometry for interferometric tracking of TDRS, and (2) expected orbit determination accuracy for Global Positioning System (GPS) tracking of low-earth orbit (LEO) satellites. Conclusions about optimal ground station locations for TDRS orbit determination by radio interferometry are presented, and the feasibility of GPS-based tracking for IRIDIUM, a LEO mobile satellite communications (MOBILSATCOM) system, is demonstrated.

  4. Developing a Framework for Science Performance Assessment.

    ERIC Educational Resources Information Center

    Kim, Eunjin; Park, Hyun-Ju; Kang, Ho-Kam; Noh, Suk-Goo

    The purpose of this study was to develop a Framework for Performance Assessment in Science Education (F-PASE). Science educators in the past have paid more attention to science curriculum and teaching strategies than assessment. In recent years attention has turned toward performance assessment which addresses the concerns of science curriculum…

  5. The Tech Prep Handbook: Performance Assessment.

    ERIC Educational Resources Information Center

    Hensley, Oliver D., Ed.; And Others

    This handbook for tech prep practitioners in Texas consists of loose-leaf documents from the performance assessment areas currently available to tech prep practitioners. The first part of the handbook consists of 10 sample assessment documents that were selected from over 900 performance assessments based on a quantitative rating system. The…

  6. Estimating Orientation Using Magnetic and Inertial Sensors and Different Sensor Fusion Approaches: Accuracy Assessment in Manual and Locomotion Tasks

    PubMed Central

    Bergamini, Elena; Ligorio, Gabriele; Summa, Aurora; Vannozzi, Giuseppe; Cappozzo, Aurelio; Sabatini, Angelo Maria

    2014-01-01

    Magnetic and inertial measurement units are an emerging technology to obtain 3D orientation of body segments in human movement analysis. In this respect, sensor fusion is used to limit the drift errors resulting from the gyroscope data integration by exploiting accelerometer and magnetic aiding sensors. The present study aims at investigating the effectiveness of sensor fusion methods under different experimental conditions. Manual and locomotion tasks, differing in time duration, measurement volume, presence/absence of static phases, and out-of-plane movements, were performed by six subjects, and recorded by one unit located on the forearm or the lower trunk, respectively. Two sensor fusion methods, representative of the stochastic (Extended Kalman Filter) and complementary (Non-linear observer) filtering, were selected, and their accuracy was assessed in terms of attitude (pitch and roll angles) and heading (yaw angle) errors using stereophotogrammetric data as a reference. The sensor fusion approaches provided significantly more accurate results than gyroscope data integration. Accuracy improved mostly for heading and when the movement exhibited stationary phases, evenly distributed 3D rotations, it occurred in a small volume, and its duration was greater than approximately 20 s. These results were independent from the specific sensor fusion method used. Practice guidelines for improving the outcome accuracy are provided. PMID:25302810
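
    To make the idea of fusing gyroscope integration with an aiding sensor concrete, here is a toy one-axis complementary filter in Python; it is not the Extended Kalman Filter or the nonlinear observer evaluated in the study, and the axis convention and gain are illustrative assumptions.

    ```python
    import numpy as np

    def complementary_pitch(gyro_rate, acc_x, acc_z, dt, alpha=0.98):
        """Blend short-term gyro integration with the long-term inclination
        implied by gravity in the accelerometer signal (toy 1-axis filter)."""
        pitch, out = 0.0, []
        for w, ax, az in zip(gyro_rate, acc_x, acc_z):
            pitch_acc = np.arctan2(-ax, az)          # assumed axis convention
            pitch = alpha * (pitch + w * dt) + (1.0 - alpha) * pitch_acc
            out.append(pitch)
        return np.array(out)
    ```

    Pure integration of the gyro rate would drift without bound; the small (1 - alpha) correction toward the accelerometer estimate limits that drift, which is the effect the accuracy assessment above quantifies for full 3D filters.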

  7. Estimating orientation using magnetic and inertial sensors and different sensor fusion approaches: accuracy assessment in manual and locomotion tasks.

    PubMed

    Bergamini, Elena; Ligorio, Gabriele; Summa, Aurora; Vannozzi, Giuseppe; Cappozzo, Aurelio; Sabatini, Angelo Maria

    2014-01-01

    Magnetic and inertial measurement units are an emerging technology to obtain 3D orientation of body segments in human movement analysis. In this respect, sensor fusion is used to limit the drift errors resulting from the gyroscope data integration by exploiting accelerometer and magnetic aiding sensors. The present study aims at investigating the effectiveness of sensor fusion methods under different experimental conditions. Manual and locomotion tasks, differing in time duration, measurement volume, presence/absence of static phases, and out-of-plane movements, were performed by six subjects, and recorded by one unit located on the forearm or the lower trunk, respectively. Two sensor fusion methods, representative of the stochastic (Extended Kalman Filter) and complementary (Non-linear observer) filtering, were selected, and their accuracy was assessed in terms of attitude (pitch and roll angles) and heading (yaw angle) errors using stereophotogrammetric data as a reference. The sensor fusion approaches provided significantly more accurate results than gyroscope data integration. Accuracy improved mostly for heading and when the movement exhibited stationary phases, evenly distributed 3D rotations, it occurred in a small volume, and its duration was greater than approximately 20 s. These results were independent from the specific sensor fusion method used. Practice guidelines for improving the outcome accuracy are provided. PMID:25302810

  8. The influence of sampling interval on the accuracy of trail impact assessment

    USGS Publications Warehouse

    Leung, Y.-F.; Marion, J.L.

    1999-01-01

    Trail impact assessment and monitoring (IA&M) programs have been growing in importance and application in recreation resource management at protected areas. Census-based and sampling-based approaches have been developed in such programs, with systematic point sampling being the most common survey design. This paper examines the influence of sampling interval on the accuracy of estimates for selected trail impact problems. A complete census of four impact types on 70 trails in Great Smoky Mountains National Park was utilized as the base data set for the analyses. The census data were resampled at increasing intervals to create a series of simulated point data sets. Estimates of frequency of occurrence and lineal extent for the four impact types were compared with the census data set. The responses of accuracy loss on lineal extent estimates to increasing sampling intervals varied across different impact types, while the responses on frequency of occurrence estimates were consistent, approximating an inverse asymptotic curve. These findings suggest that systematic point sampling may be an appropriate method for estimating the lineal extent but not the frequency of trail impacts. Sample intervals of less than 100 m appear to yield an excellent level of accuracy for the four impact types evaluated. Multiple regression analysis results suggest that appropriate sampling intervals are more likely to be determined by the type of impact in question rather than the length of trail. The census-based trail survey and the resampling-simulation method developed in this study can be a valuable first step in establishing long-term trail IA&M programs, in which an optimal sampling interval range with acceptable accuracy is determined before investing efforts in data collection.
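
    A small simulation in the spirit of the resampling approach described above (hypothetical data, not the Great Smoky Mountains census): a per-metre census of one impact type is subsampled at increasing intervals and the lineal-extent estimate compared with the census value.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    census = rng.random(70_000) < 0.05    # hypothetical: 5% of trail metres impacted

    def lineal_extent_estimate(census, interval_m):
        """Share of trail length affected, estimated by systematic point sampling."""
        return census[::interval_m].mean()

    print("census:", census.mean())
    for interval in (20, 100, 500):
        print(interval, "m:", round(lineal_extent_estimate(census, interval), 4))
    ```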

  9. Assessing Sensor Accuracy for Non-Adjunct Use of Continuous Glucose Monitoring

    PubMed Central

    Patek, Stephen D.; Ortiz, Edward Andrew; Breton, Marc D.

    2015-01-01

    Abstract Background: The level of continuous glucose monitoring (CGM) accuracy needed for insulin dosing using sensor values (i.e., the level of accuracy permitting non-adjunct CGM use) is a topic of ongoing debate. Assessment of this level in clinical experiments is virtually impossible because the magnitude of CGM errors cannot be manipulated and related prospectively to clinical outcomes. Materials and Methods: A combination of archival data (parallel CGM, insulin pump, self-monitoring of blood glucose [SMBG] records, and meals for 56 pump users with type 1 diabetes) and in silico experiments was used to “replay” real-life treatment scenarios and relate sensor error to glycemic outcomes. Nominal blood glucose (BG) traces were extracted using a mathematical model, yielding 2,082 BG segments each initiated by insulin bolus and confirmed by SMBG. These segments were replayed at seven sensor accuracy levels (mean absolute relative differences [MARDs] of 3–22%) testing six scenarios: insulin dosing using sensor values, threshold, and predictive alarms, each without or with considering CGM trend arrows. Results: In all six scenarios, the occurrence of hypoglycemia (frequency of BG levels ≤50 mg/dL and BG levels ≤39 mg/dL) increased with sensor error, displaying an abrupt slope change at MARD =10%. Similarly, hyperglycemia (frequency of BG levels ≥250 mg/dL and BG levels ≥400 mg/dL) increased and displayed an abrupt slope change at MARD=10%. When added to insulin dosing decisions, information from CGM trend arrows, threshold, and predictive alarms resulted in improvement in average glycemia by 1.86, 8.17, and 8.88 mg/dL, respectively. Conclusions: Using CGM for insulin dosing decisions is feasible below a certain level of sensor error, estimated in silico at MARD=10%. In our experiments, further accuracy improvement did not contribute substantively to better glycemic outcomes. PMID:25436913
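
    The accuracy level discussed above is expressed as MARD; a minimal sketch of the metric (generic formula, not the study's analysis pipeline):

    ```python
    import numpy as np

    def mard(cgm_values, reference_bg):
        """Mean absolute relative difference (%) between paired CGM and
        reference blood glucose readings."""
        cgm = np.asarray(cgm_values, dtype=float)
        ref = np.asarray(reference_bg, dtype=float)
        return float(np.mean(np.abs(cgm - ref) / ref) * 100.0)

    print(mard([110, 95, 160], [100, 100, 150]))   # ~7.2% for these toy values
    ```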

  10. Influence of spatial accuracy constraints on reaction time and maximum speed of performance of unilateral movements.

    PubMed

    Gutnik, B; Skurvydas, A; Zuoza, A; Zuoziene, I; Mickevičienė, D; Alekrinskis, B A; Pukenas, K; Nash, D

    2015-04-01

    The goal was to study reaction time and maximal velocity of upper limbs of healthy young adults of both sexes during transition from a simple to a more involved task. Performance of dominant and non-dominant arms was recorded. Participants were 43 healthy, right-handed, untrained men (n=22) and women (n=21), 18-22 years old. The simple task required a single jerk-like movement. The involved task required both speed and accuracy where necessity for high speed of performance was emphasized. The effectiveness of transition between tasks was calculated for both reaction time and maximal velocity. No lateral differences were found. Men usually had a shorter reaction time on both tasks and a higher maximal velocity in the simple task. Women were more effective at modifying velocity. PMID:25799027

  11. The objective assessment of cough frequency: accuracy of the LR102 device

    PubMed Central

    2011-01-01

    Background The measurement of cough frequency is problematic and most often based on subjective assessment. The aim of the study was to assess the accuracy of the automatic identification of cough episodes by LR102, a cough frequency meter based on electromyography and audio sensors. Methods Ten adult patients complaining of cough were recruited in primary care and hospital settings. Participants were asked to wear LR102 for 4 consecutive hours during which they were also filmed. Results Measures of cough frequency by LR102 and manual counting were closely correlated (r = 0.87 for number of cough episodes per hour; r = 0.89 for number of single coughs per hour) but LR102 overestimated cough frequency. Bland-Altman plots indicate that differences between the two measurements were not influenced by cough frequency. Conclusions LR102 offers a useful estimate of cough frequency in adults in their own environment, while significantly reducing the time required for analysis. PMID:22132691
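
    The agreement analysis mentioned above relies on Bland-Altman statistics; a minimal generic sketch of the bias and 95% limits of agreement for two paired measurement methods (not the study's code):

    ```python
    import numpy as np

    def bland_altman_stats(method_a, method_b):
        """Bias (mean difference) and 95% limits of agreement for paired measures."""
        a = np.asarray(method_a, dtype=float)
        b = np.asarray(method_b, dtype=float)
        diff = a - b
        bias = diff.mean()
        sd = diff.std(ddof=1)
        return bias, bias - 1.96 * sd, bias + 1.96 * sd
    ```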

  12. Accuracy of teacher assessments of second-language students at risk for reading disability.

    PubMed

    Limbos, M M; Geva, E

    2001-01-01

    This study examined the accuracy of teacher assessments in screening for reading disabilities among students of English as a second language (ESL) and as a first language (L1). Academic and oral language tests were administered to 369 children (249 ESL, 120 L1) at the beginning of Grade 1 and at the end of Grade 2. Concurrently, 51 teachers nominated children at risk for reading failure and completed rating scales assessing academic and oral language skills. Scholastic records were reviewed for notation of concern or referral. The criterion measure was a standardized reading score based on phonological awareness, rapid naming, and word recognition. Results indicated that teacher rating scales and nominations had low sensitivity in identifying ESL and L1 students at risk for reading disability at the 1-year mark. Relative to other forms of screening, teacher-expressed concern had lower sensitivity. Finally, oral language proficiency contributed to misclassifications in the ESL group. PMID:15497265

  13. Five-year accuracy of assessments of high risk for sexual recidivism of adolescents.

    PubMed

    Hagan, Michael P; Anderson, Debra L; Caldwell, Melissa S; Kemper, Therese S

    2010-02-01

    This study looked at 12 juveniles in Wisconsin who were recommended by experts for commitment under Chapter 980, known as the Sexually Violent Person Commitments Act, but who ultimately were not committed. The purpose was to determine the accuracy of these assessments and risk for sexual reoffending for juvenile sexual offenders. The results found a rate of 42% sexual recidivism among these individuals, with a 5-year at-risk period. This figure is in contrast to the low rates of sexual recidivism reported in the general juvenile sexual research. This provides evidence that the capability to assess the risk in juvenile sexual re-offending may at times be higher than previously estimated. Implications of these unusual results are discussed. PMID:18957553

  14. Accuracy of subjective assessment of fever by Nigerian mothers in under-5 children

    PubMed Central

    Odinaka, Kelechi Kenneth; Edelu, Benedict O.; Nwolisa, Emeka Charles; Amamilo, Ifeyinwa B.; Okolo, Seline N.

    2014-01-01

    Background: Many mothers still rely on palpation to determine if their children have fever at home before deciding to seek medical attention or administer self-medications. This study was carried out to determine the accuracy of subjective assessment of fever by Nigerian mothers in under-5 children. Patients and Methods: Each eligible child had a tactile assessment of fever by the mother, after which the axillary temperature was measured. Statistical analysis was done using SPSS version 19 (IBM Inc. Chicago Illinois, USA, 2010). Result: A total of 113 mother/child pairs participated in the study. Palpation overestimated fever by 24.6%. Irrespective of the surface of the hand used for palpation, the sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of tactile assessment were 82.4%, 37.1%, 51.9% and 71.9%, respectively. The use of the palmar surface of the hand had a better sensitivity (95.2%) than the dorsum of the hand (69.2%). The use of multiple sites had better sensitivity (86.7%) than the use of a single site (76.2%). Conclusion: Tactile assessment of childhood fevers by mothers is still a relevant screening tool for the presence or absence of fever. Palpation with the palmar surface of the hand using multiple sites improves the reliability of tactile assessment of fever. PMID:25114371

  15. Geometric calibration and accuracy assessment of a multispectral imager on UAVs

    NASA Astrophysics Data System (ADS)

    Zheng, Fengjie; Yu, Tao; Chen, Xingfeng; Chen, Jiping; Yuan, Guoti

    2012-11-01

    The increasing developments in Unmanned Aerial Vehicle (UAV) platforms and associated sensing technologies have widely promoted UAV remote sensing applications. UAVs, especially low-cost UAVs, limit the sensor payload in weight and dimension. Mostly, cameras on UAVs are panoramic or fisheye-lens, small-format CCD planar array cameras; unknown intrinsic parameters and lens optical distortion will cause serious image aberrations, even leading to errors of a few meters or tens of meters on the ground per pixel. However, the characteristic of high spatial resolution makes accurate geolocation all the more critical to UAV quantitative remote sensing research. A method for the MCC4-12F Multispectral Imager, designed to be carried on UAVs, has been developed and implemented. A multi-image space resection algorithm is used to assess geometric calibration parameters at random positions and different photogrammetric altitudes in a 3D test field, which is suitable for multispectral cameras. Both theoretical and practical accuracy assessments were performed. The results of the theoretical strategy, resolving object space and image point coordinate differences by space intersection, showed that object space RMSE were 0.2 and 0.14 pixels in the X direction and the Y direction, and image space RMSE were better than 0.5 pixels. In order to verify the accuracy and reliability of the calibration parameters, a practical study was carried out with UAV flight experiments in Tianjin; the corrected accuracy validated by ground checkpoints was less than 0.3 m. Typical surface reflectance retrieved on the basis of the geo-rectified data was compared with ground ASD measurements, resulting in a 4% discrepancy. Hence, the approach presented here is suitable for UAV multispectral imagers.

  16. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions

    PubMed Central

    Noirhomme, Quentin; Lesenfants, Damien; Gomez, Francisco; Soddu, Andrea; Schrouff, Jessica; Garraux, Gaëtan; Luxen, André; Phillips, Christophe; Laureys, Steven

    2014-01-01

    Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Their results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not adapted. On the contrary, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real-data from a brain–computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation. PMID:24936420
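
    A hedged sketch of the label-permutation test recommended above for cross-validated accuracies; `cv_accuracy` stands for any user-supplied function returning a cross-validated accuracy for a feature matrix and label vector (a placeholder, not an API from the paper).

    ```python
    import numpy as np

    def permutation_p_value(cv_accuracy, X, y, observed, n_perm=1000, seed=0):
        """P-value of an observed cross-validated accuracy under label permutation."""
        rng = np.random.default_rng(seed)
        null = np.array([cv_accuracy(X, rng.permutation(y)) for _ in range(n_perm)])
        # Add-one correction so the p-value is never exactly zero.
        return (np.sum(null >= observed) + 1) / (n_perm + 1)
    ```

    Because the full cross-validation is repeated inside each permutation, the null distribution inherits whatever dependence the cross-validation scheme introduces, which is why this test remains valid where the binomial test does not.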

  17. Improvement Accuracy of Assessment of Total Equivalent Dose Rate during Air Travel

    NASA Astrophysics Data System (ADS)

    Dorenskiy, Sergey; Minligareev, Vladimir

    For radiation safety on the classic flight altitudes 8-11 km is necessary to develop a methodology for calculating the total equivalent dose rate (EDR) to prevent excess exposure of passengers and crews of airliners. During development it became necessary to assess all components affecting the calculation of EDR Comprehensive analysis of the solution to this problem, based on the developed program basis, allowing to automate calculations , as well as on the assessment of the statistical data is introduced. The results have shown that: 1) Limiting accuracy of error of geomagnetic cutoff rigidity (GCR) in the period from 2005 to 2010 was 5% This error is not significant within the considered problems. 2) It is necessary to take into account seasonal variations of atmospheric parameters in the calculation of the EDR. The difference in the determination of dose rate can reach 31% Diurnal variations of atmospheric parameters are offered to consider to improve reliability of EDR estimates. 3) Introduction in the GCR calculations of additional parameters is necessary for reliability improvement and estimation accuracy of EDR on flight routs (Kp index of geomagnetic activity , etc.).

  18. Accuracy assessment of 3D bone reconstructions using CT: an intro comparison.

    PubMed

    Lalone, Emily A; Willing, Ryan T; Shannon, Hannah L; King, Graham J W; Johnson, James A

    2015-08-01

    Computed tomography provides high contrast imaging of the joint anatomy and is used routinely to reconstruct 3D models of the osseous and cartilage geometry (CT arthrography) for use in the design of orthopedic implants, for computer assisted surgeries and for computational dynamic and structural analysis. The objective of this study was to assess the accuracy of bone and cartilage surface model reconstructions by comparing reconstructed geometries with bone digitizations obtained using an optical tracking system. Bone surface digitizations obtained in this study determined the ground truth measure for the underlying geometry. We evaluated the use of a commercially available reconstruction technique with clinical CT scanning protocols, using the elbow joint as an example of a surface with complex geometry. To assess the accuracies of the reconstructed models (8 fresh frozen cadaveric specimens) against the ground truth bony digitization, as defined by this study, proximity mapping was used to calculate residual error. The overall mean error was less than 0.4 mm in the cortical region and 0.3 mm in the subchondral region of the bone. Similarly, creating 3D cartilage surface models from CT scans using air contrast had a mean error of less than 0.3 mm. Results from this study indicate that clinical CT scanning protocols and commonly used, commercially available reconstruction algorithms can create models which accurately represent the true geometry. PMID:26037323
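
    A simple point-to-nearest-point stand-in for the proximity mapping used above to compute residual error; it assumes the reconstructed surface points and the digitized points are already expressed in the same coordinate frame, and it is not the commercial tool used in the study.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def proximity_errors(reconstructed_pts, digitized_pts):
        """Distance from each digitized point to the nearest reconstructed-model
        point (N x 3 arrays, same units); summarize with mean/max as needed."""
        tree = cKDTree(np.asarray(reconstructed_pts, dtype=float))
        distances, _ = tree.query(np.asarray(digitized_pts, dtype=float))
        return distances
    ```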

  19. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions.

    PubMed

    Noirhomme, Quentin; Lesenfants, Damien; Gomez, Francisco; Soddu, Andrea; Schrouff, Jessica; Garraux, Gaëtan; Luxen, André; Phillips, Christophe; Laureys, Steven

    2014-01-01

    Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Their results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not adapted. On the contrary, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real-data from a brain-computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation. PMID:24936420

  20. Dimensions of L2 Performance and Proficiency: Complexity, Accuracy and Fluency in SLA. Language Learning & Language Teaching. Volume 32

    ERIC Educational Resources Information Center

    Housen, Alex, Ed.; Kuiken, Folkert, Ed.; Vedder, Ineke, Ed.

    2012-01-01

    Research into complexity, accuracy and fluency (CAF) as basic dimensions of second language performance, proficiency and development has received increased attention in SLA. However, the larger picture in this field of research is often obscured by the breadth of scope, multiple objectives and lack of clarity as to how complexity, accuracy and…

  1. Psychometric Properties of Within-Person Across-Session Variability in Accuracy of Cognitive Performance

    ERIC Educational Resources Information Center

    Salthouse, Timothy A.

    2012-01-01

    Although most psychological assessments are based on measures related to an individual's average level of performance, it has been proposed that measures of variability around one's average may provide unique individual difference information and have clinical significance. The current study investigated properties of within-person variability in…

  2. Performance Characteristics and Accuracy in Perceptual Discrimination of Leather and Synthetic Basketballs.

    ERIC Educational Resources Information Center

    Mathes, Sharon; Flatten, Kay

    1982-01-01

    To assess the performance characteristics of synthetic and leather basketballs, individuals were asked to discriminate perceptually between the leather and synthetic basketballs under four treatment conditions. Rebound characteristics on five playing surfaces were measured. Leather basketballs rebounded significantly higher; no significant…

  3. A PRIOR EVALUATION OF TWO-STAGE CLUSTER SAMPLING FOR ACCURACY ASSESSMENT OF LARGE-AREA LAND-COVER MAPS

    EPA Science Inventory

    Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, withi...

  4. Assessment of the accuracy of ABC/2 variations in traumatic epidural hematoma volume estimation: a retrospective study

    PubMed Central

    Hu, Tingting; Zhang, Zhen

    2016-01-01

    Background. The traumatic epidural hematoma (tEDH) volume is often used to assist in tEDH treatment planning and outcome prediction. ABC/2 is a well-accepted volume estimation method that can be used for tEDH volume estimation. Previous studies have proposed different variations of ABC/2; however, it is unclear which variation will provide a higher accuracy. Given the promising clinical contribution of accurate tEDH volume estimations, we sought to assess the accuracy of several ABC/2 variations in tEDH volume estimation. Methods. The study group comprised 53 patients with tEDH who had undergone non-contrast head computed tomography scans. For each patient, the tEDH volume was automatically estimated by eight ABC/2 variations (four traditional and four newly derived) with an in-house program, and results were compared to those from manual planimetry. Linear regression, the closest value, percentage deviation, and Bland-Altman plot were adopted to comprehensively assess accuracy. Results. Among all ABC/2 variations assessed, the traditional variations y = 0.5 × A1B1C1 (or A2B2C1) and the newly derived variations y = 0.65 × A1B1C1 (or A2B2C1) achieved higher accuracy than the other variations. No significant differences were observed between the estimated volume values generated by these variations and those of planimetry (p > 0.05). Comparatively, the former performed better than the latter in general, with smaller mean percentage deviations (7.28 ± 5.90% and 6.42 ± 5.74% versus 19.12 ± 6.33% and 21.28 ± 6.80%, respectively) and more values closest to planimetry (18/53 and 18/53 versus 2/53 and 0/53, respectively). Moreover, deviations of most cases in the former fell within the range of <10% (71.70% and 84.91%, respectively), whereas deviations of most cases in the latter were in the range of 10–20% and >20% (90.57% and 96.23%, respectively). Discussion. In the current study, we adopted an automatic approach to assess the accuracy of several ABC/2 variations
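
    As a worked illustration of the estimators named above (A, B and C are the maximum hematoma diameters in cm along three orthogonal directions; the coefficients 0.5 and 0.65 come from the abstract, while the function name and input values are made-up examples, not study data):

        def abc_volume(a_cm, b_cm, c_cm, coefficient=0.5):
            """Ellipsoid-style volume estimate in mL (1 cm^3 = 1 mL)."""
            return coefficient * a_cm * b_cm * c_cm

        a, b, c = 5.0, 3.0, 2.0                       # hypothetical diameters
        print(abc_volume(a, b, c, coefficient=0.5))   # traditional variation: 15.0 mL
        print(abc_volume(a, b, c, coefficient=0.65))  # newly derived variation: 19.5 mL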

  5. 24 CFR 115.206 - Performance assessments; Performance standards.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... should continue to be interim certified or certified. In conducting the performance assessment, the FHEO... continue receiving funding under the FHAP. (d) At a minimum, the performance assessment will consider the... a charge has been issued, the agency will, to the extent feasible, continue to attempt...

  6. Reliability and accuracy of anthropometry performed by community health workers among infants under 6 months in rural Kenya

    PubMed Central

    Mwangome, Martha K; Fegan, Greg; Mbunya, Ronald; Prentice, Andrew M; Berkley, James A

    2012-01-01

    Objective To assess the inter-observer variability and accuracy of Mid Upper Arm Circumference (MUAC) and weight-for-length Z score (WFLz) among infants aged <6 months performed by community health workers (CHWs) in Kilifi District, Kenya. Methods A cross-sectional repeatability study estimated inter-observer variation and accuracy of measurements initially undertaken by an expert anthropometrist, nurses and public health technicians. Then, after training, 18 CHWs (three at each of six sites) repeatedly measured MUAC, weight and length of infants aged <6 months. Intra-class correlations (ICCs) and the Pitman’s statistic were calculated. Results Among CHWs, ICCs pooled across the six sites (924 infants) were 0.96 (95% CI 0.95–0.96) for MUAC and 0.71 (95% CI 0.68–0.74) for WFLz. MUAC measures by CHWs differed little from their trainers: the mean difference in MUAC was 0.65 mm (95% CI 0.023–1.07), with no significant difference in variance (P = 0.075). Conclusion Mid Upper Arm Circumference is more reliably measured by CHWs than WFLz among infants aged <6 months. Further work is needed to define cut-off values based on MUAC’s ability to predict mortality among younger infants. PMID:22364555

  7. Accuracy and performance of three water quality models for simulating nitrate nitrogen losses under corn.

    PubMed

    Jabro, J D; Jabro, A D; Fox, R H

    2006-01-01

    Simulation models can be used to predict N dynamics in a soil-water-plant system. The simulation accuracy and performance of three models: LEACHM (Leaching Estimation And CHemistry Model), NCSWAP (Nitrogen and Carbon cycling in Soil, Water And Plant), and SOILN to predict NO3-N leaching were evaluated and compared to field data from a 5-yr experiment conducted on a Hagerstown silt loam (fine, mixed, mesic Typic Hapludalf). Nitrate N losses past 1.2 m from N-fertilized and manured corn (Zea mays L.) were measured with zero-tension pan lysimeters for 5 yr. The models were calibrated using 1989-1990 data and validated using 1988-1989, 1990-1991, 1991-1992, and 1992-1993 NO3-N leaching data. Statistical analyses indicated that LEACHM, NCSWAP, and SOILN models were able to provide accurate simulations of annual NO3-N leaching losses below the 1.2-m depth for 8, 9, and 7 of 10 cases, respectively, in the validation years. The inaccuracy in the models' annual simulations for the control and manure treatments seems to be related to inadequate description of processes of N and C transformations in the models' code. The overall performance and accuracy of the SOILN model were worse than those of LEACHM and NCSWAP. The root mean square error (RMSE) and modeling efficiency (ME) were 10.7 and 0.9, 9.5 and 0.93, and 20.7 and 0.63 for LEACHM, NCSWAP, and SOILN, respectively. Overall, the three models have the potential to predict NO3-N losses below 1.2-m depth from fertilizer and manure nitrogen applied to corn without recalibration of models from year to year. PMID:16825442
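
    For reference, the two summary statistics quoted above can be computed as in the sketch below; the modeling efficiency is assumed to take the usual Nash-Sutcliffe form, which the abstract does not spell out, and the input values are hypothetical.

        import numpy as np

        def rmse(observed, simulated):
            observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
            return float(np.sqrt(np.mean((observed - simulated) ** 2)))

        def modeling_efficiency(observed, simulated):
            """1.0 is a perfect match; values <= 0 mean the model does no better than the mean."""
            observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
            return float(1.0 - np.sum((observed - simulated) ** 2)
                         / np.sum((observed - observed.mean()) ** 2))

        obs = [35.0, 52.0, 18.0, 40.0, 61.0]   # hypothetical measured NO3-N losses
        sim = [30.0, 55.0, 22.0, 43.0, 58.0]   # hypothetical simulated losses
        print(rmse(obs, sim), modeling_efficiency(obs, sim))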

  8. A view into performance assessment in science

    NASA Astrophysics Data System (ADS)

    Wozny, Paul David

    1998-12-01

    This collaborative research project involved the design, implementation, and analysis of five sets of performance assessment activities with a group of eight science teachers at an urban composite high school in Alberta. These science teachers shared their initial impressions of the performance assessment process and the feasibility of this mode of assessment in their classrooms. The teachers were somewhat reserved about the feasibility of enacting performance assessment tasks, largely due to the time constraints associated with larger classes. The five sets of performance assessment tasks designed and implemented by the research group included: (1) basic electronics (science nine), (2) density problems (science nine), (3) microscope skills (science ten), (4) uniform motion (science ten), (5) acid/base identification and neutralization (science ten). Analysis of the performance assessment results included standard deviation, Pearson's Product Moment Correlation Coefficient, and face validity evaluation. Inter-rater reliability varied from 0.83 to 0.91 (Pearson's Product Moment Correlation Coefficient) over the entire group of performance assessment tasks, which indicates very strong inter-rater reliability. These results reinforce the research of Gipps (1994), which found that with "clear rubrics and training for markers, and exemplars of performance at each point or grade, levels of IRR (inter-judge reliabilities) can be high" (Gipps, 1994, p. 104). The face validity of the performance assessment tasks was also seen as very strong due to the close fit with suggested activities in the science curriculum. The participating teachers shared a strong appreciation and approval of the performance assessment process in the science classroom after designing and implementing the five sets of performance assessments, but had some reservations about the time involved in the set-up and implementation. In the appendix of this dissertation, I included a teacher
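
    As a minimal illustration of the reliability statistic used above, inter-rater reliability can be computed as the Pearson product-moment correlation between two raters' scores on the same performances (the scores below are hypothetical):

        from scipy.stats import pearsonr

        rater_a = [4, 3, 5, 2, 4, 5, 3, 1, 4, 2]   # hypothetical rubric scores, rater A
        rater_b = [4, 3, 4, 2, 5, 5, 3, 2, 4, 2]   # hypothetical rubric scores, rater B
        r, p_value = pearsonr(rater_a, rater_b)
        print(f"inter-rater reliability r = {r:.2f} (p = {p_value:.3f})")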

  9. Assessing effects of the e-Chasqui laboratory information system on accuracy and timeliness of bacteriology results in the Peruvian tuberculosis program.

    PubMed

    Blaya, Joaquin A; Shin, Sonya S; Yagui, Martin J A; Yale, Gloria; Suarez, Carmen; Asencios, Luis; Fraser, Hamish

    2007-01-01

    We created a web-based laboratory information system, e-Chasqui, to connect public laboratories to health centers in order to improve communication and analysis. After one year, we performed a pre- and post-assessment of communication delays and found that e-Chasqui maintained the average delay but eliminated delays of over 60 days. Adding digital verification maintained the average delay but should increase accuracy. We are currently performing a randomized evaluation of the impacts of e-Chasqui. PMID:18693974

  10. An accuracy assessment of realtime GNSS time series toward semi- real time seafloor geodetic observation

    NASA Astrophysics Data System (ADS)

    Osada, Y.; Ohta, Y.; Demachi, T.; Kido, M.; Fujimoto, H.; Azuma, R.; Hino, R.

    2013-12-01

    Large interplate earthquakes have repeatedly occurred in the Japan Trench. Recently, detailed crustal deformation has been revealed by the nationwide inland GPS network GEONET, operated by GSI. However, the region of maximum displacement for interplate earthquakes is mainly located offshore. GPS/Acoustic seafloor geodetic observation (hereafter GPS/A) is therefore important and useful for understanding the shallower part of the interplate coupling between the subducting and overriding plates. GPS/A is typically conducted in a specific ocean area in repeated campaign style using a research vessel or buoy, so the temporal variation of seafloor crustal deformation cannot be monitored in real time. One of the technical issues for real-time observation is kinematic GPS analysis, because kinematic analysis relies on both reference and rover data. If precise kinematic GPS analysis becomes possible in the offshore region, it will be a promising method for real-time GPS/A with a USV (Unmanned Surface Vehicle) or a moored buoy. We assessed the stability, precision and accuracy of the StarFire™ global satellite-based augmentation system, first under static conditions. To assess coordinate precision and accuracy, we compared 1 Hz StarFire time series with post-processed precise point positioning (PPP) 1 Hz time series computed with the GIPSY-OASIS II processing software Ver. 6.1.2 using three different product types (ultra-rapid, rapid, and final orbits). We also used different clock-interval information (30 and 300 seconds) for the post-processed PPP solutions. The standard deviation of the real-time StarFire time series is less than 30 mm (horizontal components) and 60 mm (vertical component) based on 1 month of continuous processing. We also assessed the noise spectrum of the time series estimated by StarFire and by post-processed GIPSY PPP. We found that the noise spectrum of the StarFire time series shows a similar pattern to the GIPSY-OASIS II result based on the JPL rapid orbit

  11. Verification of the performance accuracy of a real-time skin-dose tracking system for interventional fluoroscopic procedures

    PubMed Central

    Bednarek, Daniel R.; Barbarits, Jeffery; Rana, Vijay K.; Nagaraja, Srikanta P.; Josan, Madhur S.; Rudin, Stephen

    2011-01-01

    A tracking system has been developed to provide real-time feedback of skin dose and dose rate during interventional fluoroscopic procedures. The dose tracking system (DTS) calculates the radiation dose rate to the patient’s skin using the exposure technique parameters and exposure geometry obtained from the x-ray imaging system digital network (Toshiba Infinix) and presents the cumulative results in a color mapping on a 3D graphic of the patient. We performed a number of tests to verify the accuracy of the dose representation of this system. These tests included comparison of system–calculated dose-rate values with ionization-chamber (6 cc PTW) measured values with change in kVp, beam filter, field size, source-to-skin distance and beam angulation. To simulate a cardiac catheterization procedure, the ionization chamber was also placed at various positions on an Alderson Rando torso phantom and the dose agreement compared for a range of projection angles with the heart at isocenter. To assess the accuracy of the dose distribution representation, Gafchromic film (XR-RV3, ISP) was exposed with the beam at different locations. The DTS and film distributions were compared and excellent visual agreement was obtained within the cm-sized surface elements used for the patient graphic. The dose (rate) values agreed within about 10% for the range of variables tested. Correction factors could be applied to obtain even closer agreement since the variable values are known in real-time. The DTS provides skin-dose values and dose mapping with sufficient accuracy for use in monitoring diagnostic and interventional x-ray procedures. PMID:21731400

  12. Verification of the performance accuracy of a real-time skin-dose tracking system for interventional fluoroscopic procedures

    NASA Astrophysics Data System (ADS)

    Bednarek, Daniel R.; Barbarits, Jeffery; Rana, Vijay K.; Nagaraja, Srikanta P.; Josan, Madhur S.; Rudin, Stephen

    2011-03-01

    A tracking system has been developed to provide real-time feedback of skin dose and dose rate during interventional fluoroscopic procedures. The dose tracking system (DTS) calculates the radiation dose rate to the patient's skin using the exposure technique parameters and exposure geometry obtained from the x-ray imaging system digital network (Toshiba Infinix) and presents the cumulative results in a color mapping on a 3D graphic of the patient. We performed a number of tests to verify the accuracy of the dose representation of this system. These tests included comparison of system-calculated dose-rate values with ionization-chamber (6 cc PTW) measured values with change in kVp, beam filter, field size, source-to-skin distance and beam angulation. To simulate a cardiac catheterization procedure, the ionization chamber was also placed at various positions on an Alderson Rando torso phantom and the dose agreement compared for a range of projection angles with the heart at isocenter. To assess the accuracy of the dose distribution representation, Gafchromic film (XR-RV3, ISP) was exposed with the beam at different locations. The DTS and film distributions were compared and excellent visual agreement was obtained within the cm-sized surface elements used for the patient graphic. The dose (rate) values agreed within about 10% for the range of variables tested. Correction factors could be applied to obtain even closer agreement since the variable values are known in real-time. The DTS provides skin-dose values and dose mapping with sufficient accuracy for use in monitoring diagnostic and interventional x-ray procedures.

  13. Verification of the performance accuracy of a real-time skin-dose tracking system for interventional fluoroscopic procedures.

    PubMed

    Bednarek, Daniel R; Barbarits, Jeffery; Rana, Vijay K; Nagaraja, Srikanta P; Josan, Madhur S; Rudin, Stephen

    2011-02-13

    A tracking system has been developed to provide real-time feedback of skin dose and dose rate during interventional fluoroscopic procedures. The dose tracking system (DTS) calculates the radiation dose rate to the patient's skin using the exposure technique parameters and exposure geometry obtained from the x-ray imaging system digital network (Toshiba Infinix) and presents the cumulative results in a color mapping on a 3D graphic of the patient. We performed a number of tests to verify the accuracy of the dose representation of this system. These tests included comparison of system-calculated dose-rate values with ionization-chamber (6 cc PTW) measured values with change in kVp, beam filter, field size, source-to-skin distance and beam angulation. To simulate a cardiac catheterization procedure, the ionization chamber was also placed at various positions on an Alderson Rando torso phantom and the dose agreement compared for a range of projection angles with the heart at isocenter. To assess the accuracy of the dose distribution representation, Gafchromic film (XR-RV3, ISP) was exposed with the beam at different locations. The DTS and film distributions were compared and excellent visual agreement was obtained within the cm-sized surface elements used for the patient graphic. The dose (rate) values agreed within about 10% for the range of variables tested. Correction factors could be applied to obtain even closer agreement since the variable values are known in real-time. The DTS provides skin-dose values and dose mapping with sufficient accuracy for use in monitoring diagnostic and interventional x-ray procedures. PMID:21731400

  14. Assessment of the sources of error affecting the quantitative accuracy of SPECT imaging in small animals

    PubMed Central

    Hwang, Andrew B; Franc, Benjamin L; Gullberg, Grant T; Hasegawa, Bruce H

    2009-01-01

    Small animal SPECT imaging systems have multiple potential applications in biomedical research. Whereas SPECT data are commonly interpreted qualitatively in a clinical setting, the ability to accurately quantify measurements will increase the utility of the SPECT data for laboratory measurements involving small animals. In this work, we assess the effect of photon attenuation, scatter and partial volume errors on the quantitative accuracy of small animal SPECT measurements, first with Monte Carlo simulation and then confirmed with experimental measurements. The simulations modeled the imaging geometry of a commercially available small animal SPECT system. We simulated the imaging of a radioactive source within a cylinder of water, and reconstructed the projection data using iterative reconstruction algorithms. The size of the source and the size of the surrounding cylinder were varied to evaluate the effects of photon attenuation and scatter on quantitative accuracy. We found that photon attenuation can reduce the measured concentration of radioactivity in a volume of interest in the center of a rat-sized cylinder of water by up to 50% when imaging with iodine-125, and up to 25% when imaging with technetium-99m. When imaging with iodine-125, the scatter-to-primary ratio can reach up to approximately 30%, and can cause overestimation of the radioactivity concentration when reconstructing data with attenuation correction. We varied the size of the source to evaluate partial volume errors, which we found to be a strong function of the size of the volume of interest and the spatial resolution. These errors can result in large (>50%) changes in the measured amount of radioactivity. The simulation results were compared with and found to agree with experimental measurements. The inclusion of attenuation correction in the reconstruction algorithm improved quantitative accuracy. We also found that an improvement of the spatial resolution through the use of resolution
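
    The size of the attenuation effect described above can be sketched with the narrow-beam exponential law I/I0 = exp(-mu*d); the linear attenuation coefficients below are approximate textbook values for water, not figures taken from the study.

        import math

        mu_water = {"Tc-99m (140 keV)": 0.15, "I-125 (~30 keV)": 0.38}   # 1/cm, approximate
        depth_cm = 2.0   # roughly the distance from the centre of a rat-sized cylinder

        for isotope, mu in mu_water.items():
            lost = 1.0 - math.exp(-mu * depth_cm)
            print(f"{isotope}: about {100 * lost:.0f}% of photons attenuated over {depth_cm} cm of water")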

  15. Assessment of the sources of error affecting the quantitative accuracy of SPECT imaging in small animals

    SciTech Connect

    Hwang, Andrew B.; Franc, Benjamin L.; Gullberg, Grant T.; Hasegawa, Bruce H.

    2008-02-15

    Small animal SPECT imaging systems have multiple potential applications in biomedical research. Whereas SPECT data are commonly interpreted qualitatively in a clinical setting, the ability to accurately quantify measurements will increase the utility of the SPECT data for laboratory measurements involving small animals. In this work, we assess the effect of photon attenuation, scatter and partial volume errors on the quantitative accuracy of small animal SPECT measurements, first with Monte Carlo simulation and then confirmed with experimental measurements. The simulations modeled the imaging geometry of a commercially available small animal SPECT system. We simulated the imaging of a radioactive source within a cylinder of water, and reconstructed the projection data using iterative reconstruction algorithms. The size of the source and the size of the surrounding cylinder were varied to evaluate the effects of photon attenuation and scatter on quantitative accuracy. We found that photon attenuation can reduce the measured concentration of radioactivity in a volume of interest in the center of a rat-sized cylinder of water by up to 50% when imaging with iodine-125, and up to 25% when imaging with technetium-99m. When imaging with iodine-125, the scatter-to-primary ratio can reach up to approximately 30%, and can cause overestimation of the radioactivity concentration when reconstructing data with attenuation correction. We varied the size of the source to evaluate partial volume errors, which we found to be a strong function of the size of the volume of interest and the spatial resolution. These errors can result in large (>50%) changes in the measured amount of radioactivity. The simulation results were compared with and found to agree with experimental measurements. The inclusion of attenuation correction in the reconstruction algorithm improved quantitative accuracy. We also found that an improvement of the spatial resolution through the

  16. Quantitative Assessment of Shockwave Lithotripsy Accuracy and the Effect of Respiratory Motion*

    PubMed Central

    Bailey, Michael R.; Shah, Anup R.; Hsi, Ryan S.; Paun, Marla; Harper, Jonathan D.

    2012-01-01

    Abstract Background and Purpose Effective stone comminution during shockwave lithotripsy (SWL) is dependent on precise three-dimensional targeting of the shockwave. Respiratory motion, imprecise targeting or shockwave alignment, and stone movement may compromise treatment efficacy. The purpose of this study was to evaluate the accuracy of shockwave targeting during SWL treatment and the effect of motion from respiration. Patients and Methods Ten patients underwent SWL for the treatment of 13 renal stones. Stones were targeted fluoroscopically using a Healthtronics Lithotron (five cases) or Dornier Compact Delta II (five cases) shockwave lithotripter. Shocks were delivered at a rate of 1 to 2 Hz with ramping shockwave energy settings of 14 to 26 kV or level 1 to 5. After the low energy pretreatment and protective pause, a commercial diagnostic ultrasound (US) imaging system was used to record images of the stone during active SWL treatment. Shockwave accuracy, defined as the proportion of shockwaves that resulted in stone motion with shockwave delivery, and respiratory stone motion were determined by two independent observers who reviewed the ultrasonographic videos. Results Mean age was 51±15 years with 60% men, and mean stone size was 10.5±3.7 mm (range 5–18 mm). A mean of 2675±303 shocks was delivered. Shockwave-induced stone motion was observed with every stone. Accurate targeting of the stone occurred in 60%±15% of shockwaves. Conclusions US imaging during SWL revealed that 40% of shockwaves miss the stone and contribute solely to tissue injury, primarily from movement with respiration. These data support the need for a device to deliver shockwaves only when the stone is in target. US imaging provides real-time assessment of stone targeting and accuracy of shockwave delivery. PMID:22471349

  17. Accuracy assessment of land cover/land use classifiers in dry and humid areas of Iran.

    PubMed

    Yousefi, Saleh; Khatami, Reza; Mountrakis, Giorgos; Mirzaee, Somayeh; Pourghasemi, Hamid Reza; Tazeh, Mehdi

    2015-10-01

    Land cover/land use (LCLU) maps are essential inputs for environmental analysis. Remote sensing provides an opportunity to construct LCLU maps of large geographic areas in a timely fashion. Knowing the most accurate classification method for producing LCLU maps under given site characteristics is necessary for environmental managers. The aim of this research is to examine the performance of various classification algorithms for LCLU mapping in dry and humid climates (from June to August). Testing is performed in three case studies from each of the two climates in Iran. The reference dataset for each image was randomly selected from the entire image and randomly divided into a training and a validation set. Training sets included 400 pixels, and validation sets 200 pixels, of each LCLU class. Results indicate that the support vector machine (SVM) and neural network methods achieve higher overall accuracy (86.7% and 86.6%) than the other examined algorithms, with a slight advantage for the SVM. Dry areas exhibit higher classification difficulty because man-made features often have spectral responses that overlap with soil. A further observation is that spatial segregation and lower mixing of LCLU classes can increase overall classification accuracy. PMID:26403704
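
    A hedged sketch of this kind of comparison (not the authors' code; the simulated pixel spectra, class count and classifier settings are assumptions) is shown below.

        import numpy as np
        from sklearn.metrics import accuracy_score
        from sklearn.neural_network import MLPClassifier
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)
        n_classes, n_bands = 5, 6
        means = rng.uniform(0, 1, size=(n_classes, n_bands))   # class-dependent mean spectra

        def sample_pixels(pixels_per_class):
            X = np.vstack([m + rng.normal(0, 0.2, size=(pixels_per_class, n_bands)) for m in means])
            y = np.repeat(np.arange(n_classes), pixels_per_class)
            return X, y

        X_train, y_train = sample_pixels(400)   # 400 training pixels per LCLU class
        X_val, y_val = sample_pixels(200)       # 200 validation pixels per class

        for name, clf in [("SVM", SVC(kernel="rbf", gamma="scale")),
                          ("Neural network", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000))]:
            clf.fit(X_train, y_train)
            print(name, "overall accuracy:", accuracy_score(y_val, clf.predict(X_val)))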

  18. Georeferencing Accuracy Assessment of Pléiades 1A Images Using Rational Function Model

    NASA Astrophysics Data System (ADS)

    Topan, H.; Taşkanat, T.; Cam, A.

    2013-10-01

    This paper presents a first georeferencing accuracy analysis of Pléiades 1A mono images. The Pléiades constellation, consisting of Pléiades 1A and 1B, was established by CNES (Centre National d'Etudes Spatiales - National Centre for Space Studies), following the five previous satellites of the SPOT series. CNES also organized a Pléiades Users Group following a worldwide invitation, and the images investigated in this research were received as a member of this group. A stereo pair was evaluated on the Zonguldak test field, where the topography is mountainous and undulating and covers urban, rural and forest landscapes. As a first experiment, a total of 22 existing ground control points (GCPs) were marked on the images, and the bias-compensated Rational Function Model (RFM) was applied, reaching ±0.8 pixel at the GCPs. The overall georeferencing accuracy was assessed by figure condition analysis (FCA), a new concept successfully applied to IKONOS, OrbView-3 and QuickBird images of the same test field. The figure condition ranges between ±0.3 and ±2.7 pixels. These results were compared with those of the three sensors mentioned above. Although a dedicated GCP survey by GNSS has not yet been performed, these first results are satisfactory for highly accurate georeferencing of the Pléiades 1A images.

  19. ACSB: A minimum performance assessment

    NASA Technical Reports Server (NTRS)

    Jones, Lloyd Thomas; Kissick, William A.

    1988-01-01

    Amplitude companded sideband (ACSB) is a new modulation technique which uses a much smaller channel width than does conventional frequency modulation (FM). Among the requirements of a mobile communications system is adequate speech intelligibility. This paper explores this aspect of minimum required performance. First, the basic principles of ACSB are described, with emphasis on those features that affect speech quality. Second, the appropriate performance measures for ACSB are reviewed. Third, a subjective voice quality scoring method is used to determine the values of the performance measures that equate to the minimum level of intelligibility. It is assumed that the intelligibility of an FM system operating at 12 dB SINAD represents that minimum. It was determined that ACSB operating at 12 dB SINAD with an audio-to-pilot ratio of 10 dB provides approximately the same intelligibility as FM operating at 12 dB SINAD.

  20. The Effects of Performance-Based Assessment Criteria on Student Performance and Self-Assessment Skills

    ERIC Educational Resources Information Center

    Fastre, Greet Mia Jos; van der Klink, Marcel R.; van Merrienboer, Jeroen J. G.

    2010-01-01

    This study investigated the effect of performance-based versus competence-based assessment criteria on task performance and self-assessment skills among 39 novice secondary vocational education students in the domain of nursing and care. In a performance-based assessment group students are provided with a preset list of performance-based…

  1. Monte-Carlo Simulation for Accuracy Assessment of a Single Camera Navigation System

    NASA Astrophysics Data System (ADS)

    Bethmann, F.; Luhmann, T.

    2012-07-01

    The paper describes a simulation-based optimization of an optical tracking system that is used as a 6DOF navigation system for neurosurgery. Compared to classical systems used in clinical navigation, the presented system has two unique properties: firstly, the system will be miniaturized and integrated into an operating microscope for neurosurgery; secondly, due to this miniaturization, a single-camera approach has been designed. Single-camera techniques for 6DOF measurements show particular sensitivity to weak geometric configurations between camera and object. In addition, the achievable accuracy depends significantly on the geometric properties of the tracked objects (locators). Besides the quality and stability of the targets used on the locator, their geometric configuration is of major importance. In the following, the development and investigation of a simulation program is presented that allows for the assessment and optimization of the system with respect to accuracy. Different system parameters can be altered, as well as different scenarios representing the operational use of the system. Measurement deviations are estimated with the Monte-Carlo method. Practical measurements validate the correctness of the numerical simulation results.
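
    The Monte-Carlo idea can be sketched as follows: perturb simulated image observations of a locator's targets with measurement noise many times and inspect the spread of the estimated pose. A 2D rigid fit stands in here for the full single-camera 6DOF model, and all geometry and noise values are assumptions.

        import numpy as np

        rng = np.random.default_rng(3)
        targets = np.array([[0.0, 0.0], [30.0, 0.0], [30.0, 20.0], [0.0, 20.0]])  # locator geometry (mm)
        true_angle = np.deg2rad(10.0)
        R = np.array([[np.cos(true_angle), -np.sin(true_angle)],
                      [np.sin(true_angle),  np.cos(true_angle)]])
        observed_clean = targets @ R.T + np.array([100.0, 50.0])

        def estimate_angle(obs):
            """Least-squares rigid (Procrustes) fit of the locator geometry to the observations."""
            A = targets - targets.mean(axis=0)
            B = obs - obs.mean(axis=0)
            U, _, Vt = np.linalg.svd(A.T @ B)
            R_est = (U @ Vt).T
            return np.arctan2(R_est[1, 0], R_est[0, 0])

        sigma = 0.05   # assumed measurement noise, same units as the target coordinates
        angles = [estimate_angle(observed_clean + rng.normal(0, sigma, observed_clean.shape))
                  for _ in range(5000)]
        print("std of estimated rotation [deg]:", np.rad2deg(np.std(angles)))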

  2. Assessing the accuracy of satellite derived global and national urban maps in Kenya.

    PubMed

    Tatem, A J; Noor, A M; Hay, S I

    2005-05-15

    Ninety percent of projected global urbanization will be concentrated in low-income countries (United Nations, 2004). This will have considerable environmental, economic and public health implications for those populations. Objective and efficient methods of delineating urban extent are a cross-sectoral need, complicated by the diversity of urban definition rubrics worldwide. Large-area maps of urban extent are becoming increasingly available in the public domain, as is a wide range of medium-spatial-resolution satellite imagery. Here we describe the extension of a methodology based on Landsat ETM and Radarsat imagery to the production of a human settlement map of Kenya. This map and five global, satellite imagery-derived maps of urban extent were then compared at the Kenyan national level against an expert-opinion coverage for accuracy assessment. The results showed that the map produced using medium-spatial-resolution satellite imagery was of comparable accuracy to the expert-opinion coverage. The five global urban maps exhibited a range of inaccuracies, emphasising that care should be taken when using these maps at national and sub-national scales. PMID:22581985

  3. Putting Performance Assessment to the Test.

    ERIC Educational Resources Information Center

    O'Neil, John

    1992-01-01

    The desire for students to graduate with more than basic skills has fueled interest in performance assessment methods such as essay writing, group science experiments, or portfolio preparation. Officials in Vermont, California, Kentucky, Maryland, and other states are betting that performance assessments may prove as powerful a classroom influence…

  4. Diagnostic accuracy of refractometer and Brix refractometer to assess failure of passive transfer in calves: protocol for a systematic review and meta-analysis.

    PubMed

    Buczinski, S; Fecteau, G; Chigerwe, M; Vandeweerd, J M

    2016-06-01

    Calves are highly dependent on colostrum (and antibody) intake because they are born agammaglobulinemic. The transfer of passive immunity in calves can be assessed directly by measuring immunoglobulin G (IgG) or indirectly by refractometry or Brix refractometry, which are easier to perform routinely in the field. This paper presents a protocol for a systematic review and meta-analysis to assess the diagnostic accuracy of refractometry or Brix refractometry versus IgG measurement as the reference standard test. With this review protocol we aim to report refractometer and Brix refractometer accuracy in terms of sensitivity and specificity, as well as to quantify the impact of any study characteristic on test accuracy. PMID:27427188
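
    As a minimal illustration of the planned accuracy measures, sensitivity and specificity follow directly from a 2x2 table of refractometer results against the IgG reference standard (the counts below are hypothetical):

        def sensitivity_specificity(tp, fp, fn, tn):
            """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
            return tp / (tp + fn), tn / (tn + fp)

        # Hypothetical counts: test positive/negative for failure of passive transfer
        # cross-tabulated against the IgG reference standard.
        sens, spec = sensitivity_specificity(tp=42, fp=8, fn=6, tn=94)
        print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")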

  5. Accuracy assessment of Kinect for Xbox One in point-based tracking applications

    NASA Astrophysics Data System (ADS)

    Goral, Adrian; Skalski, Andrzej

    2015-12-01

    We present the accuracy assessment of a point-based tracking system built on Kinect v2. In our approach, color, IR and depth data were used to determine the positions of spherical markers. To accomplish this task, we calibrated the depth/infrared and color cameras using a custom method. As a reference tool we used Polaris Spectra optical tracking system. The mean error obtained within the range from 0.9 to 2.9 m was 61.6 mm. Although the depth component of the error turned out to be the largest, the random error of depth estimation was only 1.24 mm on average. Our Kinect-based system also allowed for reliable angular measurements within the range of ±20° from the sensor's optical axis.
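
    One building block of such a pipeline is lifting a depth pixel to a 3D point by pinhole back-projection, sketched below; the intrinsic parameters are placeholder values of roughly Kinect-like magnitude, not calibration results from the study.

        def backproject(u, v, z_mm, fx, fy, cx, cy):
            """Convert a pixel (u, v) with measured depth z into camera coordinates (mm)."""
            x = (u - cx) * z_mm / fx
            y = (v - cy) * z_mm / fy
            return x, y, z_mm

        # A marker centre found in the depth/IR image, converted to 3D (assumed intrinsics)
        print(backproject(u=300.5, v=190.0, z_mm=1500.0, fx=365.0, fy=365.0, cx=256.0, cy=212.0))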

  6. A MODEL TO ASSESS THE ACCURACY OF DETECTING ARBOVIRUSES IN MOSQUITO POOLS

    PubMed Central

    VITEK, CHRISTOPHER J.; RICHARDS, STEPHANIE L.; ROBINSON, HEATHER L.; SMARTT, CHELSEA T.

    2009-01-01

    Vigilant surveillance of virus prevalence in mosquitoes is essential for risk assessment and outbreak prediction. Accurate virus detection methods are essential for arbovirus surveillance. We have developed a model to estimate the probability of accurately detecting a virus-positive mosquito from pooled field collections using standard molecular techniques. We discuss several factors influencing the probability of virus detection, including the number of virions in the sample, the total sample volume, and the portion of the sample volume that is being tested. Our model determines the probability of obtaining at least 1 virion in the sample that is tested. The model also determines the optimal sample volume that is required in any test to ensure a desired probability of virus detection is achieved, and can be used to support the accuracy of current tests or to optimize existing techniques. PMID:19852231
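
    A hedged reconstruction of the kind of model described (the paper's exact formulation may differ): if a pool homogenate of total volume V contains N virions and a sub-volume v is tested, each virion falls into the tested aliquot independently with probability v/V, so the probability of capturing at least one virion is 1 - (1 - v/V)^N.

        def detection_probability(n_virions, total_volume_ul, tested_volume_ul):
            """Probability that the tested aliquot contains at least one virion."""
            p_single = tested_volume_ul / total_volume_ul
            return 1.0 - (1.0 - p_single) ** n_virions

        def required_test_volume(n_virions, total_volume_ul, target_probability=0.95):
            """Smallest tested volume reaching the target detection probability."""
            return total_volume_ul * (1.0 - (1.0 - target_probability) ** (1.0 / n_virions))

        print(detection_probability(10, 1000.0, 50.0))    # ~0.40 when testing 5% of the pool
        print(required_test_volume(10, 1000.0, 0.95))     # volume (same units) for 95% detection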

  7. Accuracy of actuarial procedures for assessment of sexual offender recidivism risk may vary across ethnicity.

    PubMed

    Långström, Niklas

    2004-04-01

    Little is known about whether the accuracy of tools for assessment of sexual offender recidivism risk holds across ethnic minority offenders. I investigated the predictive validity across ethnicity for the RRASOR and the Static-99 actuarial risk assessment procedures in a national cohort of all adult male sex offenders released from prison in Sweden during 1993-1997. Subjects ordered out of Sweden upon release from prison were excluded, and the remaining subjects (N = 1303) were divided into three subgroups based on citizenship. Eighty-three percent of the subjects were of Nordic ethnicity, and non-Nordic citizens were either of non-Nordic European (n = 49, hereafter called European) or African Asian descent (n = 128). The two tools were equally accurate among Nordic and European sexual offenders for the prediction of any sexual and any violent nonsexual recidivism. In contrast, neither measure could differentiate African Asian sexual or violent recidivists from nonrecidivists. Compared to European offenders, African Asian offenders had more often sexually victimized a nonrelative or stranger, had higher Static-99 scores, were younger, more often single, and more often homeless. The results require replication, but suggest that the promising predictive validity seen with some risk assessment tools may not generalize across offender ethnicity or migration status. More speculatively, different risk factors or causal chains might be involved in the development or persistence of offending among minority or immigrant sexual abusers. PMID:15208896

  8. Accuracy Assessment of Direct Georeferencing for Photogrammetric Applications on Small Unmanned Aerial Platforms

    NASA Astrophysics Data System (ADS)

    Mian, O.; Lutes, J.; Lipa, G.; Hutton, J. J.; Gavelle, E.; Borghini, S.

    2016-03-01

    Microdrones md4-1000 quad-rotor VTOL UAV. The Sony A7R and each lens combination were focused and calibrated terrestrially using the Applanix camera calibration facility, and then integrated with the APX-15 GNSS-Inertial system using a custom mount specifically designed for UAV applications. The mount is constructed in such a way as to maintain the stability of both the interior orientation and IMU boresight calibration over shock and vibration, thus turning the Sony A7R into a metric imaging solution. In July and August 2015, Applanix and Avyon carried out a series of test flights of this system. The goal of these test flights was to assess the performance of DMS APX-15 direct georeferencing system under various scenarios. Furthermore, an examination of how DMS APX-15 can be used to produce accurate map products without the use of ground control points and with reduced sidelap was also carried out. Reducing the side lap for survey missions performed by small UAVs can significantly increase the mapping productivity of these platforms. The area mapped during the first flight campaign was a 250m x 300m block and a 775m long railway corridor in a rural setting in Ontario, Canada. The second area mapped was a 450m long corridor over a dam known as Fryer Dam (over Richelieu River in Quebec, Canada). Several ground control points were distributed within both test areas. The flight over the block area included 8 North-South lines and 1 cross strip flown at 80m AGL, resulting in a ~1cm GSD. The flight over the railway corridor included 2 North-South lines also flown at 80m AGL. Similarly, the flight over the dam corridor included 2 North-South lines flown at 50m AGL. The focus of this paper was to analyse the results obtained from the two corridors. Test results from both areas were processed using Direct Georeferencing techniques, and then compared for accuracy against the known positions of ground control points in each test area. The GNSS-Inertial data collected by the APX-15 was

  10. An accuracy assessment of different rigid body image registration methods and robotic couch positional corrections using a novel phantom

    SciTech Connect

    Arumugam, Sankar; Xing Aitang; Jameson, Michael G.; Holloway, Lois

    2013-03-15

    Purpose: Image guided radiotherapy (IGRT) using cone beam computed tomography (CBCT) images greatly reduces interfractional patient positional uncertainties. An understanding of uncertainties in the IGRT process itself is essential to ensure appropriate use of this technology. The purpose of this study was to develop a phantom capable of assessing the accuracy of IGRT hardware and software including a 6 degrees of freedom patient positioning system and to investigate the accuracy of the Elekta XVI system in combination with the HexaPOD robotic treatment couch top. Methods: The constructed phantom enabled verification of the three automatic rigid body registrations (gray value, bone, seed) available in the Elekta XVI software and includes an adjustable mount that introduces known rotational offsets to the phantom from its reference position. Repeated positioning of the phantom was undertaken to assess phantom rotational accuracy. Using this phantom the accuracy of the XVI registration algorithms was assessed considering CBCT hardware factors and image resolution together with the residual error in the overall image guidance process when positional corrections were performed through the HexaPOD couch system. Results: The phantom positioning was found to be within 0.04° (σ = 0.12°), 0.02° (σ = 0.13°), and -0.03° (σ = 0.06°) in X, Y, and Z directions, respectively, enabling assessment of IGRT with a 6 degrees of freedom patient positioning system. The gray value registration algorithm showed the least error in calculated offsets with maximum mean difference of -0.2 (σ = 0.4) mm in translational and -0.1° (σ = 0.1°) in rotational directions for all image resolutions. Bone and seed registration were found to be sensitive to CBCT image resolution. Seed registration was found to be most sensitive demonstrating a maximum mean error of -0.3 (σ = 0.9) mm and -1.4° (σ = 1.7°) in translational

  11. Inertial Measures of Motion for Clinical Biomechanics: Comparative Assessment of Accuracy under Controlled Conditions - Effect of Velocity

    PubMed Central

    Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian

    2013-01-01

    Background Inertial measurement of motion with Attitude and Heading Reference Systems (AHRS) is emerging as an alternative to 3D motion capture systems in biomechanics. The objectives of this study are: 1) to describe the absolute and relative accuracy of multiple units of commercially available AHRS under various types of motion; and 2) to evaluate the effect of motion velocity on the accuracy of these measurements. Methods The criterion validity of accuracy was established under controlled conditions using an instrumented Gimbal table. AHRS modules were carefully attached to the center plate of the Gimbal table and put through experimental static and dynamic conditions. Static and absolute accuracy was assessed by comparing the AHRS orientation measurement to those obtained using an optical gold standard. Relative accuracy was assessed by measuring the variation in relative orientation between modules during trials. Findings Evaluated AHRS systems demonstrated good absolute static accuracy (mean error < 0.5°) and clinically acceptable absolute accuracy under condition of slow motions (mean error between 0.5° and 3.1°). In slow motions, relative accuracy varied from 2° to 7° depending on the type of AHRS and the type of rotation. Absolute and relative accuracy were significantly affected (p<0.05) by velocity during sustained motions. The extent of that effect varied across AHRS. Interpretation Absolute and relative accuracy of AHRS are affected by environmental magnetic perturbations and conditions of motions. Relative accuracy of AHRS is mostly affected by the ability of all modules to locate the same global reference coordinate system at all time. Conclusions Existing AHRS systems can be considered for use in clinical biomechanics under constrained conditions of use. While their individual capacity to track absolute motion is relatively consistent, the use of multiple AHRS modules to compute relative motion between rigid bodies needs to be optimized according to

  12. A simple device for high-precision head image registration: Preliminary performance and accuracy tests

    SciTech Connect

    Pallotta, Stefania

    2007-05-15

    The purpose of this paper is to present a new device for multimodal head study registration and to examine its performance in preliminary tests. The device consists of a system of eight markers fixed to mobile carbon pipes and bars which can be easily mounted on the patient's head using the ear canals and the nasal bridge. Four graduated scales fixed to the rigid support allow examiners to find the same device position on the patient's head during different acquisitions. The markers can be filled with appropriate substances for visualisation in computed tomography (CT), magnetic resonance, single photon emission computed tomography (SPECT) and positron emission tomography images. The device's rigidity and its position reproducibility were measured in 15 repeated CT acquisitions of the Alderson Rando anthropomorphic phantom and in two SPECT studies of a patient. The proposed system displays good rigidity and reproducibility characteristics. A relocation accuracy of less than 1.5 mm was found in more than 90% of the results. The registration parameters obtained using such a device were compared to those obtained using fiducial markers fixed on phantom and patient heads, resulting in differences of less than 1 deg. and 1 mm for rotation and translation parameters, respectively. Residual differences between fiducial marker coordinates in reference and in registered studies were less than 1 mm in more than 90% of the results, proving that the device performed as accurately as noninvasive stereotactic devices. Finally, an example of multimodal employment of the proposed device is reported.

  13. Assessment of Accuracy and Reliability in Acetabular Cup Placement Using an iPhone/iPad System.

    PubMed

    Kurosaka, Kenji; Fukunishi, Shigeo; Fukui, Tomokazu; Nishio, Shoji; Fujihara, Yuki; Okahisa, Shohei; Takeda, Yu; Daimon, Takashi; Yoshiya, Shinichi

    2016-07-01

    Implant positioning is one of the critical factors that influences postoperative outcome of total hip arthroplasty (THA). Malpositioning of the implant may lead to an increased risk of postoperative complications such as prosthetic impingement, dislocation, restricted range of motion, polyethylene wear, and loosening. In 2012, the intraoperative use of smartphone technology in THA for improved accuracy of acetabular cup placement was reported. The purpose of this study was to examine the accuracy of an iPhone/iPad-guided technique in positioning the acetabular cup in THA compared with the reference values obtained from the image-free navigation system in a cadaveric experiment. Five hips of 5 embalmed whole-body cadavers were used in the study. Seven orthopedic surgeons (4 residents and 3 senior hip surgeons) participated in the study. All of the surgeons examined each of the 5 hips 3 times. The target angle was 38°/19° for operative inclination/anteversion angles, which corresponded to radiographic inclination/anteversion angles of 40°/15°. The simultaneous assessment using the navigation system showed mean±SD radiographic alignment angles of 39.4°±2.6° and 16.4°±2.6° for inclination and anteversion, respectively. Assessment of cup positioning based on Lewinnek's safe zone criteria showed all of the procedures (n=105) achieved acceptable alignment within the safe zone. A comparison of the performances by resident and senior hip surgeons showed no significant difference between the groups (P=.74 for inclination and P=.81 for anteversion). The iPhone/iPad technique examined in this study could achieve acceptable performance in determining cup alignment in THA regardless of the surgeon's expertise. [Orthopedics. 2016; 39(4):e621-e626.]. PMID:27322169

  14. Pareto-based evolutionary algorithms for the calculation of transformation parameters and accuracy assessment of historical maps

    NASA Astrophysics Data System (ADS)

    Manzano-Agugliaro, F.; San-Antonio-Gómez, C.; López, S.; Montoya, F. G.; Gil, C.

    2013-08-01

    When historical map data are compared with modern cartography, the old map coordinates must be transformed to the current system. However, historical data often exhibit heterogeneous quality. In calculating the transformation parameters between the historical and modern maps, it is often necessary to discard highly uncertain data. An optimal balance between the objectives of minimising the transformation error and eliminating as few points as possible can be achieved by generating a Pareto front of solutions using evolutionary genetic algorithms. The aim of this paper is to assess the performance of evolutionary algorithms in determining the accuracy of historical maps in regard to modern cartography. When applied to the 1787 Tomas Lopez map, the use of evolutionary algorithms reduces the linear error by 40% while eliminating only 2% of the data points. The main conclusion of this paper is that evolutionary algorithms provide a promising alternative for the transformation of historical map coordinates and determining the accuracy of historical maps in regard to modern cartography, particularly when the positional quality of the data points used cannot be assured.
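
    The trade-off being optimised can be sketched as follows (illustrative only: a greedy sweep stands in for the evolutionary search, and a 2D similarity transform on synthetic points stands in for the historical-to-modern map transformation); each printed pair traces the error-versus-discarded-points front.

        import numpy as np

        def fit_similarity(src, dst):
            """Least-squares 2D similarity transform dst ~ s*R*src + t; returns per-point residuals."""
            n = len(src)
            A = np.zeros((2 * n, 4))
            A[0::2, 0], A[0::2, 1], A[0::2, 2] = src[:, 0], -src[:, 1], 1
            A[1::2, 0], A[1::2, 1], A[1::2, 3] = src[:, 1],  src[:, 0], 1
            b = dst.reshape(-1)
            params, *_ = np.linalg.lstsq(A, b, rcond=None)
            return np.linalg.norm((A @ params - b).reshape(-1, 2), axis=1)

        rng = np.random.default_rng(2)
        old = rng.uniform(0, 100, size=(50, 2))                     # historical-map coordinates
        new = 1.02 * old + 5 + rng.normal(0, 0.5, size=(50, 2))     # modern coordinates + noise
        new[:5] += rng.normal(0, 10, size=(5, 2))                   # a few grossly uncertain points

        keep = np.ones(len(old), dtype=bool)
        for discarded in range(15):
            res = fit_similarity(old[keep], new[keep])
            print(f"discarded={discarded:2d}  rmse={np.sqrt(np.mean(res ** 2)):.2f}")
            keep[np.flatnonzero(keep)[np.argmax(res)]] = False      # drop the worst-fitting point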

  15. Screening Accuracy for Risk of Autism Spectrum Disorder Using the Brief Infant-Toddler Social and Emotional Assessment (BITSEA)

    ERIC Educational Resources Information Center

    Gardner, Lauren M.; Murphy, Laura; Campbell, Jonathan M.; Tylavsky, Frances; Palmer, Frederick B.; Graff, J. Carolyn

    2013-01-01

    Early identification of autism spectrum disorders (ASDs) is facilitated by the use of standardized screening scales that assess the social emotional behaviors associated with ASD. Authors examined accuracy of Brief Infant-Toddler Social and Emotional Assessment (BITSEA) subscales in detecting Modified Checklist for Autism in Toddlers (M-CHAT) risk…

  16. Hardware performance assessment recommendations and tools for baropodometric sensor systems.

    PubMed

    Giacomozzi, Claudia

    2010-01-01

    Accurate plantar pressure measurements are mandatory in both clinical and research contexts. Differences in the accuracy, precision and reliability of pressure measurement devices (PMDs) have so far prevented the establishment of standardization processes and of reliable reference datasets. The Italian National Institute of Health (ISS) approved and conducted a scientific project aimed at designing, validating and implementing dedicated testing methods for both in-factory and on-the-field PMD assessment. A general-purpose experimental set-up was built, complete and suitable for the assessment of PMDs based on different sensor technologies, electronic conditioning and mechanical solutions. Preliminary assessments were conducted on 5 commercial PMDs. The study led to the definition of: i) an appropriate set of instruments and procedures for PMD technical assessment; ii) a minimum set of significant parameters for the technical characterization of PMD performance; and iii) some recommendations to both manufacturers and end users for appropriate use in clinical and research contexts. PMID:20567067

  17. Accuracy assessment of satellite altimetry over central East Antarctica by kinematic GNSS and crossover analysis

    NASA Astrophysics Data System (ADS)

    Schröder, Ludwig; Richter, Andreas; Fedorov, Denis; Knöfel, Christoph; Ewert, Heiko; Dietrich, Reinhard; Matveev, Aleksey Yu.; Scheinert, Mirko; Lukin, Valery

    2014-05-01

    Satellite altimetry is a unique technique to observe the contribution of the Antarctic ice sheet to global sea-level change. To fulfill the high quality requirements for its application, the respective products need to be validated against independent data like ground-based measurements. Kinematic GNSS provides a powerful method to acquire precise height information along the track of a vehicle. Within a collaboration of TU Dresden and Russian partners during the Russian Antarctic Expeditions in the seasons from 2001 to 2013 we recorded several such profiles in the region of the subglacial Lake Vostok, East Antarctica. After 2006 these datasets also include observations along seven continental traverses with a length of about 1600km each between the Antarctic coast and the Russian research station Vostok (78° 28' S, 106° 50' E). After discussing some special issues concerning the processing of the kinematic GNSS profiles under the very special conditions of the interior of the Antarctic ice sheet, we will show their application for the validation of NASA's laser altimeter satellite mission ICESat and of ESA's ice mission CryoSat-2. Analysing the height differences at crossover points, we can get clear insights into the height regime at the subglacial Lake Vostok. Thus, these profiles as well as the remarkably flat lake surface itself can be used to investigate the accuracy and possible error influences of these missions. We will show how the transmit-pulse reference selection correction (Gaussian vs. centroid, G-C) released in January 2013 helped to further improve the release R633 ICESat data and discuss the height offsets and other effects of the CryoSat-2 radar data. In conclusion we show that only a combination of laser and radar altimetry can provide both, a high precision and a good spatial coverage. An independent validation with ground-based observations is crucial for a thorough accuracy assessment.

  18. Accuracy Assessment of GO Pro Hero 3 (black) Camera in Underwater Environment

    NASA Astrophysics Data System (ADS)

    Helmholz, , P.; Long, J.; Munsie, T.; Belton, D.

    2016-06-01

    Modern digital cameras are increasing in quality whilst decreasing in size. In the last decade, a number of waterproof consumer digital cameras (action cameras) have become available, which often cost less than 500. A possible application of such action cameras is in the field of underwater photogrammetry, especially since the change of medium to water can in turn counteract the distortions present. The goal of this paper is to investigate the suitability of such action cameras for underwater photogrammetric applications, focusing on the stability of the camera and the accuracy of the derived coordinates. For this paper a series of image sequences was captured in a water tank. A calibration frame was placed in the water tank, allowing the calibration of the camera and the validation of the measurements using check points. The accuracy assessment covered three test sets operating three GoPro sports cameras of the same model (Hero 3 black). The test sets included handling of the camera in a controlled manner, where the camera was only dunked into the water tank, using 7MP and 12MP resolution, and rough handling, where the camera was shaken as well as removed from the waterproof case, using 12MP resolution. The tests showed that camera stability was given, with a maximum standard deviation of the camera constant σc of 0.0031 mm for 7MP (for an average c of 2.720 mm) and 0.0072 mm for 12MP (for an average c of 3.642 mm). The residual test of the check points gave, for the 7MP test series, a largest rms value of only 0.450 mm and a largest maximal residual of only 2.5 mm. For the 12MP test series the maximum rms value is 0.653 mm.

  19. Accuracy assessment of the axial images obtained from cone beam computed tomography

    PubMed Central

    Panzarella, FK; Junqueira, JLC; Oliveira, LB; de Araújo, NS; Costa, C

    2011-01-01

    Objective The aim of this study was to evaluate the accuracy of linear measurements assessed from axial tomograms and the influence of different acquisition protocols in two cone beam CT (CBCT) units. Methods A cylindrical Nylon® object (Day Brazil, Sao Paulo, Brazil) with radiopaque markers was radiographically examined applying different protocols from the NewTom 3G™ (Quantitative Radiology s.r.l, Verona, Veneto, Italy) and i-CAT™ (Imaging Sciences International, Hatfield, PA) units. Horizontal (A–B) and vertical (C–D) distances were assessed from axial tomograms and also measured using a digital calliper, which provided the gold standard for actual values. Results There were differences between acquisition protocols for each CBCT unit. For all analysed protocols from the i-CAT™ and NewTom 3G™, both A–B and C–D distances presented underestimated values. Measurements of the axial images obtained from the NewTom 3G™ (6 inch 0.16 mm and 9 inch 0.25 mm) were similar to the ones obtained from the i-CAT™ (13 cm 20 s 0.3 mm, 13 cm 20 s 0.4 mm and 13 cm 40 s 0.25 mm). Conclusion The use of different protocols in CBCT machines influences linear measurements assessed from axial images. Linear distances were underestimated with both units. Our findings suggest that the best protocol for the i-CAT™ is 13 cm 20 s 0.3 mm and that, for the NewTom 3G™, the use of 6 inch or 9 inch is recommended. PMID:21831977

  20. The effects of performance-based assessment criteria on student performance and self-assessment skills.

    PubMed

    Fastré, Greet Mia Jos; van der Klink, Marcel R; van Merriënboer, Jeroen J G

    2010-10-01

    This study investigated the effect of performance-based versus competence-based assessment criteria on task performance and self-assessment skills among 39 novice secondary vocational education students in the domain of nursing and care. In a performance-based assessment group students are provided with a preset list of performance-based assessment criteria, describing what students should do, for the task at hand. The performance-based group is compared to a competence-based assessment group in which students receive a preset list of competence-based assessment criteria, describing what students should be able to do. The test phase revealed that the performance-based group outperformed the competence-based group on test task performance. In addition, higher performance of the performance-based group was reached with lower reported mental effort during training, indicating a higher instructional efficiency for novice students. PMID:20054648

  1. Finite-volume versus streaming-based lattice Boltzmann algorithm for fluid-dynamics simulations: A one-to-one accuracy and performance study.

    PubMed

    Shrestha, Kalyan; Mompean, Gilmar; Calzavarini, Enrico

    2016-02-01

    A finite-volume (FV) discretization method for the lattice Boltzmann (LB) equation, which combines high accuracy with limited computational cost, is presented. In order to assess the performance of the FV method we carry out a systematic comparison, focused on accuracy and computational performance, with the standard streaming lattice Boltzmann equation algorithm. In particular we aim at clarifying whether and under which conditions the proposed algorithm, and more generally any FV algorithm, can be taken as the method of choice in fluid-dynamics LB simulations. For this reason the comparative analysis is further extended to the case of realistic flows, in particular thermally driven flows in turbulent conditions. We report the successful simulation of high-Rayleigh-number convective flow performed by a lattice Boltzmann FV-based algorithm with wall grid refinement. PMID:26986438

  2. Finite-volume versus streaming-based lattice Boltzmann algorithm for fluid-dynamics simulations: A one-to-one accuracy and performance study

    NASA Astrophysics Data System (ADS)

    Shrestha, Kalyan; Mompean, Gilmar; Calzavarini, Enrico

    2016-02-01

    A finite-volume (FV) discretization method for the lattice Boltzmann (LB) equation, which combines high accuracy with limited computational cost, is presented. In order to assess the performance of the FV method we carry out a systematic comparison, focused on accuracy and computational performance, with the standard streaming lattice Boltzmann equation algorithm. In particular we aim at clarifying whether and under which conditions the proposed algorithm, and more generally any FV algorithm, can be taken as the method of choice in fluid-dynamics LB simulations. For this reason the comparative analysis is further extended to the case of realistic flows, in particular thermally driven flows in turbulent conditions. We report the successful simulation of high-Rayleigh-number convective flow performed by a lattice Boltzmann FV-based algorithm with wall grid refinement.

  3. Assessing BMP Performance Using Microtox Toxicity Analysis

    EPA Science Inventory

    Best Management Practices (BMPs) have been shown to be effective in reducing runoff and pollutants from urban areas and thus provide a mechanism to improve downstream water quality. Currently, BMP performance regarding water quality improvement is assessed through measuring each...

  4. The Impact of Self-Evaluation Instruction on Student Self-Evaluation, Music Performance, and Self-Evaluation Accuracy

    ERIC Educational Resources Information Center

    Hewitt, Michael P.

    2011-01-01

    The author sought to determine whether self-evaluation instruction had an impact on student self-evaluation, music performance, and self-evaluation accuracy of music performance among middle school instrumentalists. Participants (N = 211) were students at a private middle school located in a metropolitan area of a mid-Atlantic state. Students in…

  5. Assessing Vocal Performances Using Analytical Assessment: A Case Study

    ERIC Educational Resources Information Center

    Gynnild, Vidar

    2016-01-01

    This study investigated ways to improve the appraisal of vocal performances within a national academy of music. Since a criterion-based assessment framework had already been adopted, the conceptual foundation of an assessment rubric was used as a guide in an action research project. The group of teachers involved wanted to explore thinking…

  6. Performance Assessment at the College Level.

    ERIC Educational Resources Information Center

    Steele, Joe M.

    The College Outcome Measures Program (COMP) and its role in an era when standardized testing is being questioned and authentic assessment is championed are discussed. Authentic assessment should not mean discarding measurement expertise and existing technology. It is an approach to measuring the quality and level of performance that uses real…

  7. Personality, Assessment Methods and Academic Performance

    ERIC Educational Resources Information Center

    Furnham, Adrian; Nuygards, Sarah; Chamorro-Premuzic, Tomas

    2013-01-01

    This study examines the relationship between personality and two different academic performance (AP) assessment methods, namely exams and coursework. It aimed to examine whether the relationship between traits and AP was consistent across self-reported versus documented exam results, two different assessment techniques and across different…

  8. The Assessment of Performance in Science Project.

    ERIC Educational Resources Information Center

    Driver, Rosalind; Worsley, Christopher

    1979-01-01

    Described are national methods of assessing and monitoring the science achievement of students aged 11, 13, and 16 in England and Wales. The tasks of the Assessment of Performance Unit (APU), a unit within the Department of Education and Science, are also described. (HM)

  9. QuickBird and OrbView-3 Geopositional Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Helder, Dennis; Ross, Kenton

    2006-01-01

    Objective: Compare vendor-provided image coordinates with known references visible in the imagery. Approach: Use multiple, well-characterized sites with >40 ground control points (GCPs); sites that are (a) well distributed, (b) accurately surveyed, and (c) easily found in imagery. Independent assessments were performed by two teams, NASA Stennis Space Center and South Dakota State University, each with slightly different measurement techniques and data processing methods.
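    A minimal sketch of the kind of summary statistics such a GCP comparison produces (per-axis bias, RMSE, and a 90th-percentile circular error). Variable names are assumptions; the teams' actual procedures differed in detail.

```python
# Illustrative sketch of a geopositional accuracy check against ground control
# points (GCPs).  Inputs are assumed to be projected coordinates in metres.
import numpy as np

def geopositional_stats(measured_en, reference_en):
    """measured_en, reference_en: (N, 2) arrays of easting/northing [m]."""
    measured_en = np.asarray(measured_en, dtype=float)
    reference_en = np.asarray(reference_en, dtype=float)
    err = measured_en - reference_en
    bias = err.mean(axis=0)                    # systematic offset per axis
    rmse = np.sqrt((err ** 2).mean(axis=0))    # per-axis RMSE
    radial = np.hypot(err[:, 0], err[:, 1])    # horizontal radial error
    ce90 = np.percentile(radial, 90)           # 90th-percentile circular error
    return bias, rmse, ce90
```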

  10. Prostate Localization on Daily Cone-Beam Computed Tomography Images: Accuracy Assessment of Similarity Metrics

    SciTech Connect

    Kim, Jinkoo; Hammoud, Rabih; Pradhan, Deepak; Zhong Hualiang; Jin, Ryan Y.; Movsas, Benjamin; Chetty, Indrin J.

    2010-07-15

    Purpose: To evaluate different similarity metrics (SM) using natural calcifications and observation-based measures to determine the most accurate prostate and seminal vesicle localization on daily cone-beam CT (CBCT) images. Methods and Materials: CBCT images of 29 patients were retrospectively analyzed; 14 patients with prostate calcifications (calcification data set) and 15 patients without calcifications (no-calcification data set). Three groups of test registrations were performed. Test 1: 70 CT/CBCT pairs from the calcification data set were registered using 17 SMs (6,580 registrations) and compared using the calcification mismatch error as an endpoint. Test 2: Using the four best SMs from Test 1, 75 CT/CBCT pairs in the no-calcification data set were registered (300 registrations). Accuracy of contour overlays was ranked visually. Test 3: For the best SM from Tests 1 and 2, accuracy was estimated using 356 CT/CBCT registrations. Additionally, target expansion margins were investigated for generating registration regions of interest. Results: Test 1: Incremental sign correlation (ISC), gradient correlation (GC), gradient difference (GD), and normalized cross correlation (NCC) showed the smallest errors (μ ± σ: 1.6 ± 0.9 to 2.9 ± 2.1 mm). Test 2: Two of the three reviewers ranked GC higher. Test 3: Using GC, 96% of registrations showed <3 mm error when calcifications were filtered. Errors were left/right: 0.1 ± 0.5 mm, anterior/posterior: 0.8 ± 1.0 mm, and superior/inferior: 0.5 ± 1.1 mm. The existence of calcifications increased the success rate to 97%. Expansion margins of 4-10 mm were equally successful. Conclusion: Gradient-based SMs were most accurate. Estimated error was found to be <3 mm (1.1 mm SD) in 96% of the registrations. Results suggest that the contour expansion margin should be no less than 4 mm.
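    As a generic illustration, normalized cross correlation (NCC), one of the better-performing metrics above, can be written as follows. This is the textbook formulation, not the authors' implementation.

```python
# Minimal sketch of normalized cross correlation (NCC) between two image
# regions of identical shape.
import numpy as np

def ncc(a, b):
    """Normalized cross correlation of two equally sized arrays."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

# During registration, the moving image would be transformed repeatedly and
# the transform maximizing ncc(reference_roi, moving_roi) retained.
```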

  11. Accuracy of task recall for epidemiological exposure assessment to construction noise

    PubMed Central

    Reeb-Whitaker, C; Seixas, N; Sheppard, L; Neitzel, R

    2004-01-01

    Aims: To validate the accuracy of construction worker recall of task and environment based information; and to evaluate the effect of task recall on estimates of noise exposure. Methods: A cohort of 25 construction workers recorded tasks daily and had dosimetry measurements weekly for six weeks. Worker recall of tasks reported on the daily activity cards was validated with research observations and compared directly to task recall at a six month interview. Results: The mean LEQ noise exposure level (dBA) from dosimeter measurements was 89.9 (n = 61) and 83.3 (n = 47) for carpenters and electricians, respectively. The percentage time at tasks reported during the interview was compared to that calculated from daily activity cards; only 2/22 tasks were different at the nominal 5% significance level. The accuracy, based on bias and precision, of percentage time reported for tasks from the interview was 53–100% (median 91%). For carpenters, the difference in noise estimates derived from activity cards (mean 91.9 dBA) was not different from those derived from the questionnaire (mean 91.7 dBA). This trend held for electricians as well. For all subjects, noise estimates derived from the activity card and the questionnaire were strongly correlated with dosimetry measurements. The average difference between the noise estimate derived from the questionnaire and dosimetry measurements was 2.0 dBA, and was independent of the actual exposure level. Conclusions: Six months after tasks were performed, construction workers were able to accurately recall the percentage time they spent at various tasks. Estimates of noise exposure based on long term recall (questionnaire) were no different from estimates derived from daily activity cards and were strongly correlated with dosimetry measurements, overestimating the level on average by 2.0 dBA. PMID:14739379
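    For readers unfamiliar with the LEQ metric used above, the sketch below shows how task-based sound levels can be combined on an energy basis, weighted by the fraction of time reported for each task. The task names and levels are invented for illustration and are not the study data.

```python
# Sketch of a task-based noise exposure estimate: task sound levels (dBA) are
# energy-averaged, weighted by the fraction of time spent at each task.
import numpy as np

def leq_from_tasks(levels_dba, time_fractions):
    """Energy-average (LEQ) of task levels weighted by time fractions summing to 1."""
    levels = np.asarray(levels_dba, dtype=float)
    w = np.asarray(time_fractions, dtype=float)
    return 10.0 * np.log10(np.sum(w * 10.0 ** (levels / 10.0)))

# e.g. 60% of the shift framing at 95 dBA, 40% layout at 82 dBA
print(round(leq_from_tasks([95.0, 82.0], [0.6, 0.4]), 1))  # ~92.9 dBA
```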

  12. Performance assessment to enhance training effectiveness.

    SciTech Connect

    Stevens-Adams, Susan Marie; Gieseler, Charles J.; Basilico, Justin Derrick; Abbott, Robert G.; Forsythe, James Chris

    2010-09-01

    Training simulators have become increasingly popular tools for instructing humans on performance in complex environments. However, the question of how to provide individualized and scenario-specific assessment and feedback to students remains largely an open question. To maximize training efficiency, new technologies are required that assist instructors in providing individually relevant instruction. Sandia National Laboratories has shown the feasibility of automated performance assessment tools, such as the Sandia-developed Automated Expert Modeling and Student Evaluation (AEMASE) software, through proof-of-concept demonstrations, a pilot study, and an experiment. In the pilot study, the AEMASE system, which automatically assesses student performance based on observed examples of good and bad performance in a given domain, achieved a high degree of agreement with a human grader (89%) in assessing tactical air engagement scenarios. In more recent work, we found that AEMASE achieved a high degree of agreement with human graders (83-99%) for three Navy E-2 domain-relevant performance metrics. The current study provides a rigorous empirical evaluation of the enhanced training effectiveness achievable with this technology. In particular, we assessed whether giving students feedback based on automated metrics would enhance training effectiveness and improve student performance. We trained two groups of employees (differentiated by type of feedback) on a Navy E-2 simulator and assessed their performance on three domain-specific performance metrics. We found that students given feedback via the AEMASE-based debrief tool performed significantly better than students given only instructor feedback on two out of three metrics. Future work will focus on extending these developments for automated assessment of teamwork.

  13. Accuracy of Cameriere's cut-off value for third molar in assessing 18 years of age.

    PubMed

    De Luca, S; Biagi, R; Begnoni, G; Farronato, G; Cingolani, M; Merelli, V; Ferrante, L; Cameriere, R

    2014-02-01

    Due to increasingly numerous international migrations, estimating the age of unaccompanied minors is becoming of enormous significance for forensic professionals who are required to deliver expert opinions. The third molar tooth is one of the few anatomical sites available for estimating the age of individuals in late adolescence. This study verifies the accuracy of Cameriere's cut-off value of the third molar index (I3M) in assessing 18 years of age. For this purpose, a sample of orthopantomographs (OPTs) of 397 living subjects aged between 13 and 22 years (192 female and 205 male) was analyzed. Age distribution gradually decreases as I3M increases in both males and females. The results show that the sensitivity of the test was 86.6%, with a 95% confidence interval of (80.8%, 91.1%), and its specificity was 95.7%, with a 95% confidence interval of (92.1%, 98%). The proportion of correctly classified individuals was 91.4%. The estimated post-test probability was 95.6%, with a 95% confidence interval of (92%, 98%). Hence, the probability that a subject testing positive (i.e., I3M < 0.08) was 18 years of age or older was 95.6%. PMID:24365729
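    The reported test statistics follow from a standard 2x2 table; the sketch below is a generic illustration with placeholder counts, not the study data.

```python
# Hedged sketch of the diagnostic-test statistics reported above, computed
# from a 2x2 table.  The argument names describe the illustrative convention:
# "positive" means I3M below the cut-off, "condition" means age >= 18 years.
def test_statistics(tp, fp, fn, tn):
    """tp: adults testing positive, fp: minors testing positive,
    fn: adults testing negative, tn: minors testing negative."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    post_test_p = tp / (tp + fp)      # P(age >= 18 | test positive) in this sample
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, post_test_p, accuracy
```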

  14. The accuracy of histological assessments of dental development and age at death

    PubMed Central

    Smith, T M; Reid, D J; Sirianni, J E

    2006-01-01

    Histological analyses of dental development have been conducted for several decades despite few studies assessing the accuracy of such methods. Using known-period incremental features, the crown formation time and age at death of five pig-tailed macaques (Macaca nemestrina) were estimated with standard histological techniques and compared with known ages. Estimates of age at death ranged from 8.6% underestimations to 15.0% overestimations, with an average 3.5% overestimate and a 7.2% average absolute difference. Several sources of error were identified relating to preparation quality and section obliquity. These results demonstrate that histological analyses of dental development involving counts and measurements of short- and long-period incremental features may yield accurate estimates, particularly in well-prepared material. Values from oblique sections (or most naturally fractured teeth) should be regarded with caution, as obliquity leads to inflated cuspal enamel formation time and underestimated imbricational formation time. Additionally, Shellis's formula for extension rate and crown formation time estimation was tested; it significantly overestimated crown formation time due to underestimated extension rate. It is suggested that Shellis's method should not be applied to teeth with short, rapid periods of development, and further study is necessary to validate this application in other material. PMID:16420385

  15. Assessment of accuracy of suicide mortality surveillance data in South Africa: investigation in an urban setting.

    PubMed

    Burrows, Stephanie; Laflamme, Lucie

    2007-01-01

    Although it is not a legal requirement in South Africa, medical practitioners determine the manner of injury death for a surveillance system that is currently the only source of epidemiological data on suicide. This study assessed the accuracy of suicide data as recorded in the system, using the docket produced from standard medico-legal investigation procedures as the gold standard. It was conducted in one of three cities where the surveillance system had full coverage for the year 2000. In the medico-legal system, one-third of cases could not be tracked, had not been finalized, or had unclear outcomes. For the remaining cases, the sensitivity, specificity, and positive and negative predictive values were generally high, varying somewhat across sex and race groups. Poisoning, jumping, and railway suicides were more likely than other methods to be misclassified, and were more common among females and Whites. The study provides encouraging results regarding the use of medical practitioner expertise for the accurate determination of suicide deaths. However, suicides may still be underestimated in this process, given the challenge of tracing disguised suicides and the need for careful examination of potential misclassifications of true suicides as unintentional deaths. PMID:17722688

  16. Performance Assessment for Environmental Decision Making

    SciTech Connect

    Anderson, D.R.; Fewell, M.E.; Gomez, L.S.; Marietta, M.G.; Swift, P.N.; Trauth, K.M.; Vaughn, P.; MacKinnon, R.J.

    1997-12-01

    The Waste Isolation Pilot Plant (WIPP) Performance Assessment Departments at Sandia National Laboratories have, over the last twenty (20) years, developed unique, internationally-recognized performance and risk assessment methods to assess options for the safe disposal and remediation of radioactive and non-radioactive hazardous waste/contamination in geohydrologic systems. While these methods were originally developed for the disposal of nuclear waste, ongoing improvements and extensions make them equally applicable to a variety of environmental problems such as those associated with the remediation of EPA designated Superfund sites and the more generic Brownfield sites (industrial sites whose future use is restricted because of real or perceived contamination).

  17. Cluster Detection Tests in Spatial Epidemiology: A Global Indicator for Performance Assessment.

    PubMed

    Guttmann, Aline; Li, Xinran; Feschet, Fabien; Gaudart, Jean; Demongeot, Jacques; Boire, Jean-Yves; Ouchchane, Lemlih

    2015-01-01

    In disease cluster detection, the use of local cluster detection tests (CDTs) is common practice. These methods aim both at locating likely clusters and at testing their statistical significance. New or improved CDTs are regularly proposed to epidemiologists and must be subjected to performance assessment. Because location accuracy has to be considered, performance assessment goes beyond the raw estimation of type I or II errors. As no consensus exists for performance evaluations, heterogeneous methods are used, and therefore studies are rarely comparable. A global indicator of performance, which assesses both spatial accuracy and usual power, would facilitate the exploration of CDT behaviour and help between-study comparisons. The Tanimoto coefficient (TC) is a well-known measure of similarity that can assess location accuracy, but only for one detected cluster. In a simulation study, performance is measured for many tests. From the TC, we here propose two statistics, the averaged TC and the cumulated TC, as indicators able to provide a global overview of CDT performance for both usual power and location accuracy. We demonstrate the properties of these two indicators and the superiority of the cumulated TC for assessing performance. We tested these indicators to conduct a systematic spatial assessment displayed through performance maps. PMID:26086911
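    As a generic illustration, the Tanimoto coefficient between the detected cluster and the simulated true cluster, and its average over simulation runs, can be computed as below. The exact definitions of the averaged and cumulated indicators proposed in the paper are not reproduced here.

```python
# Minimal sketch: Tanimoto coefficient (TC) between the set of spatial units in
# the detected cluster and the set in the simulated true cluster, plus a simple
# average over simulation runs.  All names are illustrative assumptions.
def tanimoto(detected, true_cluster):
    """Both arguments are iterables of spatial-unit identifiers."""
    detected, true_cluster = set(detected), set(true_cluster)
    union = detected | true_cluster
    return len(detected & true_cluster) / len(union) if union else 0.0

def averaged_tc(runs, true_cluster):
    """runs: list of detected clusters, one per simulated data set
    (an empty collection if no significant cluster was detected)."""
    return sum(tanimoto(r, true_cluster) for r in runs) / len(runs)
```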

  18. Cluster Detection Tests in Spatial Epidemiology: A Global Indicator for Performance Assessment

    PubMed Central

    Guttmann, Aline; Li, Xinran; Feschet, Fabien; Gaudart, Jean; Demongeot, Jacques; Boire, Jean-Yves; Ouchchane, Lemlih

    2015-01-01

    In disease cluster detection, the use of local cluster detection tests (CDTs) is common practice. These methods aim both at locating likely clusters and at testing their statistical significance. New or improved CDTs are regularly proposed to epidemiologists and must be subjected to performance assessment. Because location accuracy has to be considered, performance assessment goes beyond the raw estimation of type I or II errors. As no consensus exists for performance evaluations, heterogeneous methods are used, and therefore studies are rarely comparable. A global indicator of performance, which assesses both spatial accuracy and usual power, would facilitate the exploration of CDT behaviour and help between-study comparisons. The Tanimoto coefficient (TC) is a well-known measure of similarity that can assess location accuracy, but only for one detected cluster. In a simulation study, performance is measured for many tests. From the TC, we here propose two statistics, the averaged TC and the cumulated TC, as indicators able to provide a global overview of CDT performance for both usual power and location accuracy. We demonstrate the properties of these two indicators and the superiority of the cumulated TC for assessing performance. We tested these indicators to conduct a systematic spatial assessment displayed through performance maps. PMID:26086911

  19. Accuracy assessment, using stratified plurality sampling, of portions of a LANDSAT classification of the Arctic National Wildlife Refuge Coastal Plain

    NASA Technical Reports Server (NTRS)

    Card, Don H.; Strong, Laurence L.

    1989-01-01

    An application of a classification accuracy assessment procedure is described for a vegetation and land cover map prepared by digital image processing of LANDSAT multispectral scanner data. A statistical sampling procedure called Stratified Plurality Sampling was used to assess the accuracy of portions of a map of the Arctic National Wildlife Refuge coastal plain. Results are tabulated as percent correct classification overall as well as per category with associated confidence intervals. Although values of percent correct were disappointingly low for most categories, the study was useful in highlighting sources of classification error and demonstrating shortcomings of the plurality sampling method.

  20. Construct Validity of Three Clerkship Performance Assessments

    ERIC Educational Resources Information Center

    Lee, Ming; Wimmers, Paul F.

    2010-01-01

    This study examined construct validity of three commonly used clerkship performance assessments: preceptors' evaluations, OSCE-type clinical performance measures, and the NBME [National Board of Medical Examiners] medicine subject examination. Six hundred and eighty-six students taking the inpatient medicine clerkship from 2003 to 2007…

  1. Physician Performance Assessment: Prevention of Cardiovascular Disease

    ERIC Educational Resources Information Center

    Lipner, Rebecca S.; Weng, Weifeng; Caverzagie, Kelly J.; Hess, Brian J.

    2013-01-01

    Given the rising burden of healthcare costs, both patients and healthcare purchasers are interested in discerning which physicians deliver quality care. We proposed a methodology to assess physician clinical performance in preventive cardiology care, and determined a benchmark for minimally acceptable performance. We used data on eight…

  2. Integrating Landsat and California pesticide exposure estimation at aggregated analysis scales: Accuracy assessment of rurality

    NASA Astrophysics Data System (ADS)

    Vopham, Trang Minh

    Pesticide exposure estimation in epidemiologic studies can be constrained to analysis scales commonly available for cancer data, namely census tracts and ZIP codes. Research goals included (1) demonstrating the feasibility of modifying an existing geographic information system (GIS) pesticide exposure method using California Pesticide Use Reports (PURs) and land use surveys to incorporate Landsat remote sensing and to accommodate aggregated analysis scales, and (2) assessing the accuracy of two rurality metrics (the degree to which a geographic area is rural), Rural-Urban Commuting Area (RUCA) codes and the U.S. Census Bureau urban-rural system, as surrogates for pesticide exposure when compared to the GIS gold standard. Segments, derived from 1985 Landsat NDVI images, were classified using a crop signature library (CSL) created from 1990 Landsat NDVI images via a sum of squared differences (SSD) measure. Organochlorine, organophosphate, and carbamate Kern County PUR applications (1974-1990) were matched to crop fields using a modified three-tier approach. Annual pesticide application rates (lb/ac), and the sensitivity and specificity of each rurality metric, were calculated. The CSL (75 land use classes) classified 19,752 segments [median SSD 0.06 NDVI]. Of the 148,671 PUR records included in the analysis, Landsat contributed 3,750 (2.5%) additional tier matches. ZIP Code Tabulation Area (ZCTA) rates ranged between 0 and 1.36 lb/ac and census tract rates between 0 and 1.57 lb/ac. Rurality was a mediocre pesticide exposure surrogate; higher rates were observed among urban areal units. ZCTA-level RUCA codes offered greater specificity (39.1-60%) and sensitivity (25-42.9%). The U.S. Census Bureau metric offered greater specificity (92.9-97.5%) at the census tract level; sensitivity was low (≤6%). The feasibility of incorporating Landsat into a modified three-tier GIS approach was demonstrated. Rurality accuracy is affected by rurality metric, areal aggregation, pesticide chemical
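    A minimal sketch of the SSD-based matching of a segment's NDVI profile to a crop signature library; the array shapes, names, and dictionary layout are assumptions for illustration, not the study's implementation.

```python
# Illustrative sketch: classify a segment's NDVI profile against a crop
# signature library using a sum of squared differences (SSD).
import numpy as np

def classify_segment(segment_ndvi, signature_library):
    """segment_ndvi: (T,) NDVI values for the segment;
    signature_library: dict mapping class name -> (T,) reference signature."""
    segment_ndvi = np.asarray(segment_ndvi, dtype=float)
    best_class, best_ssd = None, np.inf
    for crop_class, signature in signature_library.items():
        ssd = float(np.sum((segment_ndvi - np.asarray(signature, dtype=float)) ** 2))
        if ssd < best_ssd:                 # keep the closest-matching signature
            best_class, best_ssd = crop_class, ssd
    return best_class, best_ssd
```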

  3. Assessing the Accuracy of Sentinel-3 SLSTR Sea-Surface Temperature Retrievals Using High Accuracy Infrared Radiometers on Ships of Opportunity

    NASA Astrophysics Data System (ADS)

    Minnett, P. J.; Izaguirre, M. A.; Szcszodrak, M.; Williams, E.; Reynolds, R. M.

    2015-12-01

    The assessment of errors and uncertainties in satellite-derived SSTs can be achieved by comparisons with independent measurements of skin SST of high accuracy. Such validation measurements are provided by well-calibrated infrared radiometers mounted on ships. The second generation of Marine-Atmospheric Emitted Radiance Interferometers (M-AERIs) have recently been developed and two are now deployed on cruise ships of Royal Caribbean Cruise Lines that operate in the Caribbean Sea, North Atlantic and Mediterranean Sea. In addition, two Infrared SST Autonomous Radiometers (ISARs) are mounted alternately on a vehicle transporter of NYK Lines that crosses the Pacific Ocean between Japan and the USA. Both M-AERIs and ISARs are self-calibrating radiometers having two internal blackbody cavities to provide at-sea calibration of the measured radiances, and the accuracy of the internal calibration is periodically determined by measurements of a NIST-traceable blackbody cavity in the laboratory. This provides SI-traceability for the at-sea measurements. It is anticipated that these sensors will be deployed during the next several years and will be available for the validation of the SLSTRs on Sentinel-3a and -3b.

  4. A bootstrap method for assessing classification accuracy and confidence for agricultural land use mapping in Canada

    NASA Astrophysics Data System (ADS)

    Champagne, Catherine; McNairn, Heather; Daneshfar, Bahram; Shang, Jiali

    2014-06-01

    Land cover and land use classifications from remote sensing are increasingly becoming institutionalized framework data sets for monitoring environmental change. As such, the need for robust statements of classification accuracy is critical. This paper describes a method to estimate confidence in classification model accuracy using a bootstrap approach. Using this method, it was found that classification accuracy and confidence, while closely related, can be used in complementary ways to provide additional information on map accuracy, to define groups of classes, and to inform future reference sampling strategies. Overall classification accuracy increases with the number of fields surveyed, while the width of the classification confidence bounds decreases. Individual class accuracies and confidence were non-linearly related to the number of fields surveyed. Results indicate that some classes can be estimated accurately and confidently with fewer samples, whereas others require larger reference data sets to achieve satisfactory results. This approach is an improvement over other approaches for estimating class accuracy and confidence, as it uses repetitive sampling to produce a more realistic estimate of the range in classification accuracy and confidence that can be obtained with different reference data inputs.
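    The sketch below illustrates only the general bootstrap idea (resampling the reference fields with replacement to obtain percentile confidence bounds on overall accuracy); the paper's exact protocol may differ.

```python
# Sketch of a bootstrap estimate of overall classification accuracy and its
# confidence bounds by resampling the reference fields with replacement.
import numpy as np

def bootstrap_accuracy(mapped, reference, n_boot=1000, alpha=0.05, seed=0):
    """mapped, reference: 1-D arrays of class labels for the surveyed fields."""
    mapped = np.asarray(mapped)
    reference = np.asarray(reference)
    rng = np.random.default_rng(seed)
    n = len(reference)
    accs = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                    # resample fields with replacement
        accs[i] = np.mean(mapped[idx] == reference[idx])
    lower, upper = np.percentile(accs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return accs.mean(), (lower, upper)
```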

  5. Using Transcutaneous Laryngeal Ultrasonography (TLUSG) to Assess Post-thyroidectomy Patients' Vocal Cords: Which Maneuver Best Optimizes Visualization and Assessment Accuracy?

    PubMed

    Wong, Kai-Pun; Woo, Jung-Woo; Li, Jason Yu-Yin; Lee, Kyu Eun; Youn, Yeo Kyu; Lang, Brian Hung-Hin

    2016-03-01

    To assess vocal cord (VC) movement with transcutaneous laryngeal ultrasound (TLUSG), three maneuvers, namely passive (quiet respiration), active (phonation), and Valsalva maneuvers, have been described. It remains unclear which maneuver, or whether using more maneuvers, provides better visualization and assessment accuracy. We prospectively evaluated 342 post-thyroidectomy patients from two centers. They underwent TLUSG with direct laryngoscopic (DL) validation afterwards. During TLUSG, patients were instructed to perform all three maneuvers (passive, active, and Valsalva). VC visualization rate and accuracy of the three maneuvers were compared. The visualization rate tended to be higher with the Valsalva maneuver than with the other two maneuvers (92.1 % vs. passive: 91.5 %; active: 89.8 %). While 19 patients had post-operative VC palsy, the passive maneuver had lower test specificity than the active (94.3 vs. 97.6 %, p = 0.01) and Valsalva maneuvers (94.3 vs. 97.4 %, p = 0.02). In assessable VCs, the passive maneuver had a higher ability to differentiate between mobile VCs and VC palsy (area under the ROC curve: passive 0.942, active 0.863, Valsalva 0.893). TLUSG with more maneuvers did not improve sensitivity or specificity. On applying TLUSG as a screening tool (i.e., only selected patients with "unassessable" VCs or VCP on TLUSG for DL), the Valsalva maneuver (85.96 %) saved more patients from DL than the passive (81.87 %) or active (84.81 %) maneuver. The passive maneuver has a higher ability to differentiate VC palsy from normal. Using TLUSG as a screening tool, Valsalva was the preferred maneuver as it was more specific, had a high visualization rate, and saved more patients from DL. PMID:26552909

  6. Accuracy Assessments of Cloud Droplet Size Retrievals from Polarized Reflectance Measurements by the Research Scanning Polarimeter

    NASA Technical Reports Server (NTRS)

    Alexandrov, Mikhail Dmitrievic; Cairns, Brian; Emde, Claudia; Ackerman, Andrew S.; van Diedenhoven, Bastiaan

    2012-01-01

    We present an algorithm for the retrieval of cloud droplet size distribution parameters (effective radius and variance) from the Research Scanning Polarimeter (RSP) measurements. The RSP is an airborne prototype for the Aerosol Polarimetry Sensor (APS), which was on board the NASA Glory satellite. This instrument measures both polarized and total reflectance in 9 spectral channels with central wavelengths ranging from 410 to 2260 nm. The cloud droplet size retrievals use the polarized reflectance in the scattering angle range between 135° and 165°, where they exhibit the sharply defined structure known as the rain- or cloud-bow. The shape of the rainbow is determined mainly by the single scattering properties of cloud particles. This significantly simplifies both forward modeling and inversions, while also substantially reducing uncertainties caused by the aerosol loading and possible presence of undetected clouds nearby. In this study we present the accuracy evaluation of our algorithm based on the results of sensitivity tests performed using realistic simulated cloud radiation fields.

  7. Accuracy Assessment of Mobile Mapping Point Clouds Using the Existing Environment as Terrestrial Reference

    NASA Astrophysics Data System (ADS)

    Hofmann, S.; Brenner, C.

    2016-06-01

    Mobile mapping data is widely used in various applications, which makes it especially important for data users to get a statistically verified quality statement on the geometric accuracy of the acquired point clouds or their processed products. The accuracy of point clouds can be divided into an absolute and a relative quality, where the absolute quality describes the position of the point cloud in a world coordinate system such as WGS84 or UTM, whereas the relative accuracy describes the accuracy within the point cloud itself. Furthermore, the quality of processed products such as segmented features depends on the global accuracy of the point cloud but mainly on the quality of the processing steps. Several data sources with different characteristics and quality can be considered as potential reference data, such as cadastral maps, orthophotos, artificial control objects or terrestrial surveys using a total station. In this work a test field in a selected residential area was acquired as reference data in a terrestrial survey using a total station. In order to reach high accuracy, the stationing of the total station was based on a newly established geodetic network with a local accuracy of less than 3 mm. The global position of the network was determined using a long-term GNSS survey reaching an accuracy of 8 mm. Based on this geodetic network, a 3D test field with facades and street profiles was measured with a total station, each point with a two-dimensional position and altitude. In addition, the surface of poles of street lights, traffic signs and trees was acquired using the scanning mode of the total station. Comparing this reference data to the acquired mobile mapping point clouds of several measurement campaigns, a detailed quality statement on the accuracy of the point cloud data is made. Additionally, the advantages and disadvantages of the described reference data source concerning availability, cost, accuracy and applicability are discussed.

  8. Designing a Multi-Objective Multi-Support Accuracy Assessment of the 2001 National Land Cover Data (NLCD 2001) of the Conterminous United States

    EPA Science Inventory

    The database design and diverse application of NLCD 2001 pose significant challenges for accuracy assessment because numerous objectives are of interest, including accuracy of land cover, percent urban imperviousness, percent tree canopy, land-cover composition, and net change. ...

  9. Accuracy of a Low-Cost Novel Computer-Vision Dynamic Movement Assessment: Potential Limitations and Future Directions

    NASA Astrophysics Data System (ADS)

    McGroarty, M.; Giblin, S.; Meldrum, D.; Wetterling, F.

    2016-04-01

    The aim of the study was to perform a preliminary validation of a low-cost markerless motion capture system (CAPTURE) against an industry gold standard (Vicon). Measurements of knee valgus and flexion during the performance of a countermovement jump (CMJ) between CAPTURE and Vicon were compared. After correction algorithms were applied to the raw CAPTURE data, acceptable levels of accuracy and precision were achieved. The knee flexion angle measured for three trials using CAPTURE deviated by -3.8° ± 3° (left) and 1.7° ± 2.8° (right) compared to Vicon. The findings suggest that low-cost markerless motion capture has potential to provide an objective method for assessing lower limb jump and landing mechanics in an applied sports setting. Furthermore, the outcome of the study warrants future research to examine more fully the potential implications of the use of low-cost markerless motion capture in the evaluation of dynamic movement for injury prevention.

  10. Accuracy assessment of noninvasive hematocrit measurement based on partial least squares and NIR reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Zhang, Songbiao; Soller, Babs R.; Perras, Kristen; Khan, Tania; Favreau, Janice

    1999-07-01

    Hematocrit (Hct) is one of the most important parameters to monitor when a patient has large blood loss or blood dilution. The current standard method for measuring hematocrit is off-line and invasive. An accurate, continuous, and noninvasive method of measuring hematocrit is highly desired so that physicians can respond rapidly in life-threatening situations. A set of instrumental characterization experiments was performed to assess the effects of spectrometer drift and probe placement on the patient's forearm. Several factors were investigated in order to minimize the patient-dependent offset encountered in a previous study.

  11. Assessing the Accuracy of the Tracer Dilution Method with Atmospheric Dispersion Modeling

    NASA Astrophysics Data System (ADS)

    Taylor, D.; Delkash, M.; Chow, F. K.; Imhoff, P. T.

    2015-12-01

    Landfill methane emissions are difficult to estimate due to limited observations and data uncertainty. The mobile tracer dilution method is a widely used and cost-effective approach for predicting landfill methane emissions. The method uses a tracer gas released on the surface of the landfill and measures the concentrations of both methane and the tracer gas downwind. Mobile measurements are conducted with a gas analyzer mounted on a vehicle to capture transects of both gas plumes. The idea behind the method is that if the measurements are performed far enough downwind, the methane plume from the large area source of the landfill and the tracer plume from a small number of point sources will be sufficiently well-mixed to behave similarly, and the ratio between the concentrations will be a good estimate of the ratio between the two emissions rates. The mobile tracer dilution method is sensitive to different factors of the setup such as placement of the tracer release locations and distance from the landfill to the downwind measurements, which have not been thoroughly examined. In this study, numerical modeling is used as an alternative to field measurements to study the sensitivity of the tracer dilution method and provide estimates of measurement accuracy. Using topography and wind conditions for an actual landfill, a landfill emissions rate is prescribed in the model and compared against the emissions rate predicted by application of the tracer dilution method. Two different methane emissions scenarios are simulated: homogeneous emissions over the entire surface of the landfill, and heterogeneous emissions with a hot spot containing 80% of the total emissions where the daily cover area is located. Numerical modeling of the tracer dilution method is a useful tool for evaluating the method without having the expense and labor commitment of multiple field campaigns. Factors tested include number of tracers, distance between tracers, distance from landfill to transect
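    A minimal sketch of the tracer dilution calculation itself, assuming above-background mixing ratios sampled along a downwind transect and a known tracer release rate; the default molar mass assumes an acetylene tracer, and all names are illustrative rather than the study's implementation.

```python
# Sketch of the tracer dilution calculation: the methane emission rate is the
# known tracer release rate scaled by the ratio of the plume-integrated methane
# enhancement to the plume-integrated tracer enhancement (molar masses convert
# mixing ratios to mass).
import numpy as np

def tracer_dilution_emission(ch4_ppb, tracer_ppb, distance_m, q_tracer_kg_h,
                             m_ch4=16.04, m_tracer=26.04):
    """Estimate the methane emission rate [kg/h].

    ch4_ppb, tracer_ppb : above-background mixing ratios along the transect
    distance_m          : along-transect position of each sample [m]
    q_tracer_kg_h       : known tracer release rate [kg/h]
    m_tracer default assumes an acetylene tracer (illustrative only).
    """
    area_ch4 = np.trapz(ch4_ppb, distance_m)      # plume-integrated CH4 enhancement
    area_trc = np.trapz(tracer_ppb, distance_m)   # plume-integrated tracer enhancement
    return q_tracer_kg_h * (area_ch4 / area_trc) * (m_ch4 / m_tracer)
```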

  12. Do students know what they know? Exploring the accuracy of students' self-assessments

    NASA Astrophysics Data System (ADS)

    Lindsey, Beth A.; Nagel, Megan L.

    2015-12-01

    We have conducted an investigation into how well students in introductory science classes (both physics and chemistry) are able to predict which questions they will or will not be able to answer correctly on an upcoming assessment. An examination of the data at the level of students' overall scores reveals results consistent with the Dunning-Kruger effect, in which low-performing students tend to overestimate their abilities, while high-performing students estimate their abilities more accurately. Similar results have been widely reported in the science education literature. Breaking results out by students' responses to individual questions, however, reveals that students of all ability levels have difficulty distinguishing questions which they are able to answer correctly from those that they are not able to answer correctly. These results have implications for the future study and reporting of students' metacognitive abilities.

  13. Accuracy of Panoramic Radiograph in Assessment of the Relationship Between Mandibular Canal and Impacted Third Molars

    PubMed Central

    Tantanapornkul, Weeraya; Mavin, Darika; Prapaiphittayakun, Jaruthai; Phipatboonyarat, Natnicha; Julphantong, Wanchanok

    2016-01-01

    Background: The relationship between impacted mandibular third molar and mandibular canal is important for removal of this tooth. Panoramic radiography is one of the commonly used diagnostic tools for evaluating the relationship of these two structures. Objectives: To evaluate the accuracy of panoramic radiographic findings in predicting direct contact between mandibular canal and impacted third molars on 3D digital images, and to define panoramic criterion in predicting direct contact between the two structures. Methods: Two observers examined panoramic radiographs of 178 patients (256 impacted mandibular third molars). Panoramic findings of interruption of mandibular canal wall, isolated or with darkening of third molar root, diversion of mandibular canal and narrowing of third molar root were evaluated for 3D digital radiography. Direct contact between mandibular canal and impacted third molars on 3D digital images was then correlated with panoramic findings. Panoramic criterion was also defined in predicting direct contact between the two structures. Results: Panoramic findings of interruption of mandibular canal wall, isolated or with darkening of third molar root were statistically significantly correlated with direct contact between mandibular canal and impacted third molars on 3D digital images (p < 0.005), and were defined as panoramic criteria in predicting direct contact between the two structures. Conclusion: Interruption of mandibular canal wall, isolated or with darkening of third molar root observed on panoramic radiographs were effective in predicting direct contact between mandibular canal and impacted third molars on 3D digital images. Panoramic radiography is one of the efficient diagnostic tools for pre-operative assessment of impacted mandibular third molars. PMID:27398105

  14. Assessment of the accuracy of coupled cluster perturbation theory for open-shell systems. II. Quadruples expansions

    NASA Astrophysics Data System (ADS)

    Eriksen, Janus J.; Matthews, Devin A.; Jørgensen, Poul; Gauss, Jürgen

    2016-05-01

    We extend our assessment of the potential of perturbative coupled cluster (CC) expansions for a test set of open-shell atoms and organic radicals to the description of quadruple excitations. Namely, the second- through sixth-order models of the recently proposed CCSDT(Q-n) quadruples series [J. J. Eriksen et al., J. Chem. Phys. 140, 064108 (2014)] are compared to the prominent CCSDT(Q) and ΛCCSDT(Q) models. From a comparison of the models in terms of their recovery of total CC singles, doubles, triples, and quadruples (CCSDTQ) energies, we find that the performance of the CCSDT(Q-n) models is independent of the reference used (unrestricted or restricted (open-shell) Hartree-Fock), in contrast to the CCSDT(Q) and ΛCCSDT(Q) models, for which the accuracy is strongly dependent on the spin of the molecular ground state. By further comparing the ability of the models to recover relative CCSDTQ total atomization energies, the discrepancy between them is found to be even more pronounced, stressing how a balanced description of both closed- and open-shell species—as found in the CCSDT(Q-n) models—is indeed of paramount importance if any perturbative CC model is to be of chemical relevance for high-accuracy applications. In particular, the third-order CCSDT(Q-3) model is found to offer an encouraging alternative to the existing choices of quadruples models used in modern computational thermochemistry, since the model is still only of moderate cost, albeit markedly more costly than, e.g., the CCSDT(Q) and ΛCCSDT(Q) models.

  15. Assessment of the accuracy of coupled cluster perturbation theory for open-shell systems. II. Quadruples expansions.

    PubMed

    Eriksen, Janus J; Matthews, Devin A; Jørgensen, Poul; Gauss, Jürgen

    2016-05-21

    We extend our assessment of the potential of perturbative coupled cluster (CC) expansions for a test set of open-shell atoms and organic radicals to the description of quadruple excitations. Namely, the second- through sixth-order models of the recently proposed CCSDT(Q-n) quadruples series [J. J. Eriksen et al., J. Chem. Phys. 140, 064108 (2014)] are compared to the prominent CCSDT(Q) and ΛCCSDT(Q) models. From a comparison of the models in terms of their recovery of total CC singles, doubles, triples, and quadruples (CCSDTQ) energies, we find that the performance of the CCSDT(Q-n) models is independent of the reference used (unrestricted or restricted (open-shell) Hartree-Fock), in contrast to the CCSDT(Q) and ΛCCSDT(Q) models, for which the accuracy is strongly dependent on the spin of the molecular ground state. By further comparing the ability of the models to recover relative CCSDTQ total atomization energies, the discrepancy between them is found to be even more pronounced, stressing how a balanced description of both closed- and open-shell species-as found in the CCSDT(Q-n) models-is indeed of paramount importance if any perturbative CC model is to be of chemical relevance for high-accuracy applications. In particular, the third-order CCSDT(Q-3) model is found to offer an encouraging alternative to the existing choices of quadruples models used in modern computational thermochemistry, since the model is still only of moderate cost, albeit markedly more costly than, e.g., the CCSDT(Q) and ΛCCSDT(Q) models. PMID:27208932

  16. How Nonrecidivism Affects Predictive Accuracy: Evidence from a Cross-Validation of the Ontario Domestic Assault Risk Assessment (ODARA)

    ERIC Educational Resources Information Center

    Hilton, N. Zoe; Harris, Grant T.

    2009-01-01

    Prediction effect sizes such as ROC area are important for demonstrating a risk assessment's generalizability and utility. How a study defines recidivism might affect predictive accuracy. Nonrecidivism is problematic when predicting specialized violence (e.g., domestic violence). The present study cross-validates the ability of the Ontario…

  17. A TECHNIQUE FOR ASSESSING THE ACCURACY OF SUB-PIXEL IMPERVIOUS SURFACE ESTIMATES DERIVED FROM LANDSAT TM IMAGERY

    EPA Science Inventory

    We developed a technique for assessing the accuracy of sub-pixel derived estimates of impervious surface extracted from LANDSAT TM imagery. We utilized spatially coincident sub-pixel derived impervious surface estimates, high-resolution planimetric GIS data, vector-to-r...

  18. Diagnostic Accuracy of Computer-Aided Assessment of Intranodal Vascularity in Distinguishing Different Causes of Cervical Lymphadenopathy.

    PubMed

    Ying, Michael; Cheng, Sammy C H; Ahuja, Anil T

    2016-08-01

    Ultrasound is useful in assessing cervical lymphadenopathy. Advancement of computer science technology allows accurate and reliable assessment of medical images. The aim of the study described here was to evaluate the diagnostic accuracy of computer-aided assessment of the intranodal vascularity index (VI) in differentiating the various common causes of cervical lymphadenopathy. Power Doppler sonograms of 347 patients (155 with metastasis, 23 with lymphoma, 44 with tuberculous lymphadenitis, 125 reactive) with palpable cervical lymph nodes were reviewed. Ultrasound images of cervical nodes were evaluated, and the intranodal VI was quantified using a customized computer program. The diagnostic accuracy of using the intranodal VI to distinguish different disease groups was evaluated and compared. Metastatic and lymphomatous lymph nodes tend to be more vascular than tuberculous and reactive lymph nodes. The intranodal VI had the highest diagnostic accuracy in distinguishing metastatic and tuberculous nodes with a sensitivity of 80%, specificity of 73%, positive predictive value of 91%, negative predictive value of 51% and overall accuracy of 68% when a cutoff VI of 22% was used. Computer-aided assessment provides an objective and quantitative way to evaluate intranodal vascularity. The intranodal VI is a useful parameter in distinguishing certain causes of cervical lymphadenopathy and is particularly useful in differentiating metastatic and tuberculous lymph nodes. However, it has limited value in distinguishing lymphomatous nodes from metastatic and reactive nodes. PMID:27131839

  19. Documenting Student Performance through Effective Performance Assessments: Workshop Summary. Horticulture.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus. Agricultural Education Curriculum Materials Service.

    This document contains materials about and from a workshop that was conducted to help Ohio horticulture teachers learn to document student competence through effective performance assessments. The document begins with background information about the workshop and a list of workshop objectives. Presented next is a key to the 40 performance…

  20. Accuracy VS Performance: Finding the Sweet Spot in the Geospatial Resolution of Satellite Metadata

    NASA Astrophysics Data System (ADS)

    Baskin, W. E.; Mangosing, D. C.; Rinsland, P. L.

    2010-12-01

    NASA’s Atmospheric Science Data Center (ASDC) and the Cloud-Aerosol LIDAR and Infrared Pathfinder Satellite Observation (CALIPSO) team at the NASA Langley Research Center recently collaborated in the development of a new CALIPSO Search and Subset web application. The web application comprises three elements: (1) a PostGIS-enabled PostgreSQL database system, which is used to store temporal and geospatial metadata from CALIPSO’s LIDAR, Infrared, and Wide Field Camera datasets; (2) the SciFlo engine, a data flow engine that enables semantic, scientific data flow executions in a grid or clustered network computational environment; and (3) a PHP-based web application that incorporates Web 2.0/AJAX technologies in its interface. The search portion of the web application leverages geodetic indexing and search capabilities that became available in the February 2010 release of PostGIS version 1.5. This presentation highlights the lessons learned in experimenting with various geospatial resolutions of CALIPSO’s LIDAR sensor ground track metadata. Details of the various spatial resolutions, spatial database schema designs, spatial indexing strategies, and performance results will be discussed. The focus will be on illustrating our findings on the spatial resolutions for ground track metadata that optimized search time and search accuracy in the CALIPSO Search and Subset Application. The CALIPSO satellite provides new insight into the role that clouds and atmospheric aerosols (airborne particles) play in regulating Earth's weather, climate, and air quality. CALIPSO combines an active LIDAR instrument with passive infrared and visible imagers to probe the vertical structure and properties of thin clouds and aerosols over the globe. The CALIPSO satellite was launched on April 28, 2006 and is part of the A-train satellite constellation. The ASDC in Langley’s Science Directorate leads NASA’s program for the processing, archival and
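    A hypothetical sketch of the kind of geodetic proximity query that PostGIS 1.5 enables, here issued from Python. The table and column names are invented for illustration and are not the ASDC schema.

```python
# Hypothetical sketch: find ground-track metadata segments within a given
# radius of a point of interest using the PostGIS geography type.
import psycopg2

def find_segments_near(conn, lon, lat, radius_m):
    sql = """
        SELECT granule_id, start_time, end_time
        FROM calipso_ground_track            -- illustrative table name
        WHERE ST_DWithin(track_geog,
                         ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography,
                         %s)                 -- radius in metres on the geography type
    """
    with conn.cursor() as cur:
        cur.execute(sql, (lon, lat, radius_m))
        return cur.fetchall()

# conn = psycopg2.connect("dbname=metadata")
# rows = find_segments_near(conn, -76.3, 36.9, 50000)  # 50 km search radius
```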

  1. Enabling performance skills: Assessment in engineering education

    NASA Astrophysics Data System (ADS)

    Ferrone, Jenny Kristina

    Current reform in engineering education is part of a national trend emphasizing student learning as well as accountability in instruction. Assessing student performance to demonstrate accountability has become a necessity in academia. Under the newly adopted criteria proposed by the Accreditation Board for Engineering and Technology (ABET), undergraduates are expected to demonstrate proficiency in outcomes considered essential for graduating engineers. The case study was designed as a formative evaluation of freshman engineering students to assess the perceived effectiveness of performance skills in a design laboratory environment. The mixed methodology used both quantitative and qualitative approaches to assess students' performance skills and congruency among the respondents, based on individual, team, and faculty perceptions of team effectiveness in three ABET areas: Communication Skills, Design Skills, and Teamwork. The findings of the research were used to address future use of the assessment tool and process. The results of the study found statistically significant differences in perceptions of Teamwork Skills (p < .05). When groups composed of students and professors were compared, professors were less likely to perceive students' teaming skills as effective. The study indicated the need to: (1) improve non-technical performance skills, such as teamwork, among freshman engineering students; (2) incorporate feedback into the learning process; (3) strengthen the assessment process with a follow-up plan that specifically targets performance skill deficiencies; and (4) integrate the assessment instrument and practice with ongoing curriculum development. The findings generated by this study provide engineering departments engaged in assessment activity an opportunity to reflect, refine, and develop their programs as it continues. It also extends research on ABET competencies of engineering students in an under-investigated topic of factors correlated with team

  2. An improved multivariate analytical method to assess the accuracy of acoustic sediment classification maps.

    NASA Astrophysics Data System (ADS)

    Biondo, M.; Bartholomä, A.

    2014-12-01

    High resolution hydro-acoustic methods have been successfully employed for the detailed classification of sedimentary habitats. The fine-scale mapping of very heterogeneous, patchy sedimentary facies, and the compound effect of multiple non-linear physical processes on the acoustic signal, cause the classification of backscatter images to be subject to a great level of uncertainty. Standard procedures for assessing the accuracy of acoustic classification maps are not yet established. This study applies different statistical techniques to automatically classified acoustic images with the aim of (i) quantifying the ability of backscatter to resolve grain size distributions, (ii) understanding complex patterns influenced by factors other than grain size variations, and (iii) designing innovative, repeatable statistical procedures to spatially assess classification uncertainties. A high-frequency (450 kHz) sidescan sonar survey, carried out in 2012 in the shallow upper-mesotidal inlet of the Jade Bay (German North Sea), allowed mapping of 100 km2 of surficial sediment with a resolution and coverage never acquired before in the area. The backscatter mosaic was ground-truthed using a large dataset of sediment grab sample information (2009-2011). Multivariate procedures were employed to model the relationship between acoustic descriptors and granulometric variables in order to evaluate the correctness of acoustic class allocation and sediment group separation. Complex patterns in the acoustic signal appeared to be controlled by the combined effect of surface roughness, sorting and mean grain size variations. The area is dominated by silt and fine sand in very mixed compositions; in this fine-grained matrix, the percentage of gravel proved to be the prevailing factor affecting backscatter variability. In the absence of coarse material, sorting mostly affected the ability to detect gradual but significant changes in seabed types. Misclassification due to temporal discrepancies

  3. Accuracy assessment of planimetric large-scale map data for decision-making

    NASA Astrophysics Data System (ADS)

    Doskocz, Adam

    2016-06-01

    This paper presents decision-making risk estimation based on planimetric large-scale map data, i.e. data sets or databases used to create planimetric maps at scales of 1:5,000 or larger. The studies were conducted on four sets of large-scale map data. Errors in the map data were used for a risk assessment of decisions about the localization of objects, e.g. in land-use planning for the realization of investments. The analysis was performed on a large statistical sample of shift vectors of control points, which were identified with the position errors of these points (errors of the map data). Empirical cumulative distribution function models for decision-making risk assessment were established; the models of the empirical cumulative distribution functions of the shift vectors of control points take the form of polynomial equations. The degree of agreement between each polynomial and the empirical data was evaluated using the convergence coefficient and the indicator of mean relative model compatibility. Applying an empirical cumulative distribution function allows an estimation of the probability of occurrence of position errors of points in a database. The estimated decision-making risk is thus expressed as the probability of errors of the points stored in the database.
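
    The workflow described above (build an empirical CDF of control-point shift vectors, approximate it with a polynomial, and read off error probabilities) can be sketched in a few lines. The paper's data and polynomial coefficients are not reproduced here, so the error values, polynomial degree, and tolerance below are purely illustrative assumptions.

    ```python
    import numpy as np

    # Hypothetical control-point shift-vector lengths (position errors), in metres.
    errors = np.array([0.05, 0.08, 0.11, 0.12, 0.15, 0.18, 0.22, 0.27, 0.33, 0.41])

    # Empirical cumulative distribution function of the position errors.
    x = np.sort(errors)
    ecdf = np.arange(1, len(x) + 1) / len(x)

    # Approximate the ECDF with a low-order polynomial, as the paper's models do.
    model = np.poly1d(np.polyfit(x, ecdf, deg=3))

    # Decision-making risk: probability that a point's error exceeds a tolerance.
    tolerance = 0.30  # metres (illustrative)
    risk = 1.0 - float(np.clip(model(tolerance), 0.0, 1.0))
    print(f"Estimated probability of a position error > {tolerance} m: {risk:.2f}")
    ```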

  4. Performance evaluation and accuracy of passive capillary samplers (PCAPs) for estimating real-time drainage water fluxes

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Successful monitoring of pollutant transport through the soil profile requires accurate, reliable, and appropriate instrumentation to measure the amount of drainage water, or flux, within the vadose zone. We evaluated the performance and accuracy of automated passive capillary wick samplers (PCAPs) for ...

  5. Accuracy assessment of TRMM 3B42 V6 and V7 products over regions with varying hydro-climatology and topography using station data over Turkey

    NASA Astrophysics Data System (ADS)

    Amjad, Muhammad; Tugrul Yilmaz, M.

    2015-04-01

    From a water resources point of view, precipitation is arguably the most important water cycle element over watersheds with complex topography and/or arid/semi-arid climatology, where the groundwater contribution to runoff is limited. Remote sensing-based precipitation estimates offer a tremendous advantage for estimating this parameter over remote locations. However, it is crucial that these estimates be validated and their error structure assessed before further use in hydrological studies. In this study, Tropical Rainfall Measuring Mission (TRMM) 3B42 V6 and V7 products are assessed for their accuracy using ground observations obtained from a network of rain gauges in Western Turkey. Both station-to-satellite (V6 and V7 separately) and satellite-to-satellite (V6 versus V7) comparisons were performed to assess the uncertainty in the satellite products. Accuracy assessments include statistics such as the false alarm ratio, probability of detection, bias, and monthly differences. Overall, the results showed that TRMM 3B42 V7 has more favorable (i.e. more accurate) statistics than V6 over most of the study area, although there are regions where V6 has higher accuracy.
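
    For readers unfamiliar with the categorical verification statistics mentioned above, the sketch below shows how probability of detection, false alarm ratio, and frequency bias are computed from a rain/no-rain contingency table. The counts are invented for illustration; they are not the study's data.

    ```python
    # Contingency counts from comparing satellite rain detection against gauges.
    # hits: both report rain; misses: gauge rain only; false_alarms: satellite rain only.
    hits, misses, false_alarms = 120, 40, 30  # illustrative values

    pod = hits / (hits + misses)                              # probability of detection
    far = false_alarms / (hits + false_alarms)                # false alarm ratio
    frequency_bias = (hits + false_alarms) / (hits + misses)  # >1 means over-detection

    print(f"POD = {pod:.2f}, FAR = {far:.2f}, bias = {frequency_bias:.2f}")
    ```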

  6. Accuracy of forced oscillation technique to assess lung function in geriatric COPD population

    PubMed Central

    Tse, Hoi Nam; Tseng, Cee Zhung Steven; Wong, King Ying; Yee, Kwok Sang; Ng, Lai Yun

    2016-01-01

    Introduction Performing lung function tests in geriatric patients has never been an easy task. With well-established evidence indicating impaired small airway function and air trapping in geriatric patients with COPD, utilizing the forced oscillation technique (FOT) as a supplementary tool may aid in the assessment of lung function in this population. Aims To study the use of FOT in the assessment of airflow limitation and air trapping in geriatric COPD patients. Study design A cross-sectional study in a public hospital in Hong Kong. ClinicalTrials.gov ID: NCT01553812. Methods Geriatric patients with spirometry-diagnosed COPD were recruited, and both FOT and plethysmography were performed. "Resistance" and "reactance" FOT parameters were compared with plethysmography for the assessment of air trapping and airflow limitation. Results In total, 158 COPD subjects with a mean age of 71.9±0.7 years and a percentage of predicted forced expiratory volume in 1 second of 53.4±1.7% were recruited. FOT values correlated well (r=0.4–0.7) with spirometric data. In general, X values (reactance) were better than R values (resistance), showing a higher correlation with spirometric data for airflow limitation (r=0.07–0.49 for R vs 0.61–0.67 for X), small airways (0.05–0.48 vs 0.56–0.65), and lung volume (0.12–0.29 vs 0.43–0.49). In addition, resonance frequency (Fres) and frequency dependence (FDep) could identify the severe type of COPD (percentage of predicted forced expiratory volume in 1 second <50%) with high sensitivity (0.76, 0.71) and specificity (0.72, 0.64) (area under the curve: 0.80 and 0.77, respectively). Moreover, X values could stratify different severities of air trapping, while R values could not. Conclusion FOT may act as a simple and accurate tool for assessing the severity of airflow limitation, small and central airway function, and air trapping in geriatric COPD patients who have difficulty performing conventional lung function tests. Moreover, reactance

  7. Evaluation of Application Accuracy and Performance of a Hydraulically Operated Variable-Rate Aerial Application System

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An aerial variable-rate application system consisting of a DGPS-based guidance system, automatic flow controller, and hydraulically controlled pump/valve was evaluated for response time to rapidly changing flow requirements and accuracy of application. Spray deposition position error was evaluated ...

  8. Judgment of Learning, Monitoring Accuracy, and Student Performance in the Classroom Context

    ERIC Educational Resources Information Center

    Cao, Li; Nietfeld, John L.

    2005-01-01

    As a key component in self-regulated learning, the ability to accurately judge the status of learning enables students to become strategic and effective in the learning process. Weekly monitoring exercises were used to improve college students' (N = 94) accuracy of judgment of learning over a 14-week educational psychology course. A time series…

  9. Performance-Based Cognitive Screening Instruments: An Extended Analysis of the Time versus Accuracy Trade-off

    PubMed Central

    Larner, Andrew J.

    2015-01-01

    Early and accurate diagnosis of dementia is key to appropriate treatment and management. Clinical assessment, including the use of cognitive screening instruments, remains integral to the diagnostic process. Many cognitive screening instruments have been described, varying in length and hence administration time, but it is not known whether longer tests offer greater diagnostic accuracy than shorter tests. Data from several pragmatic diagnostic test accuracy studies examining various cognitive screening instruments in a secondary care setting were analysed to correlate measures of test diagnostic accuracy and test duration, building on the findings of a preliminary study. High, statistically significant correlations were found between one measure of diagnostic accuracy, the area under the receiver operating characteristic curve, and surrogate measures of test duration, namely total test score and total number of test items/questions. Longer cognitive screening instruments may offer greater accuracy for the diagnosis of dementia, an observation with possible implications for the optimal organisation of dedicated cognitive disorders clinics. PMID:26854168

  10. Development, preliminary usability and accuracy testing of the EBMT 'eGVHD App' to support GvHD assessment according to NIH criteria-a proof of concept.

    PubMed

    Schoemans, H; Goris, K; Durm, R V; Vanhoof, J; Wolff, D; Greinix, H; Pavletic, S; Lee, S J; Maertens, J; Geest, S D; Dobbels, F; Duarte, R F

    2016-08-01

    The EBMT Complications and Quality of Life Working Party has developed a computer-based algorithm, the 'eGVHD App', using a user-centered design process. Accuracy was tested using a quasi-experimental crossover design with four expert-reviewed case vignettes in a convenience sample of 28 clinical professionals. Perceived usefulness was evaluated with the technology acceptance model (TAM) and user satisfaction with the Post-Study System Usability Questionnaire (PSSUQ). User experience was positive, with a median of 6 TAM points (interquartile range: 1) and favourable median total and subscale PSSUQ scores. The initial standard-practice assessment of the vignettes yielded 65% correct results for diagnosis and 45% for scoring. The 'eGVHD App' significantly increased diagnostic and scoring accuracy to 93% (+28%) and 88% (+43%), respectively (both P<0.05). The same trend was observed in the repeated analysis of case 2: accuracy improved when the App was used (+31% for diagnosis and +39% for scoring), whereas performance tended to decrease once the App was taken away. The 'eGVHD App' could dramatically improve the quality of care and research, as it increased the performance of the whole user group by about 30% at the first assessment and showed a trend towards improvement of individual performance on repeated case evaluation. PMID:27042834

  11. Assessment of the accuracy of global geodetic satellite laser ranging observations 1993-2013

    NASA Astrophysics Data System (ADS)

    Appleby, Graham; Rodriguez, Jose

    2014-05-01

    We continue efforts to estimate the intrinsic accuracy of range measurements made by the major satellite laser ranging stations of the ILRS Network using normal point observations of the primary geodetic satellites LAGEOS and LAGEOS-II. In a novel, but risky, approach we carry out weekly, loosely constrained, reference frame solutions for satellite initial state vectors, station coordinates and daily EOPs (X-pole, Y-pole and LoD), as well as estimating range bias for all the stations. We apply known range errors a priori from the table developed and maintained through the efforts of the ILRS Analysis Working Group and apply station- and time-specific satellite centre-of-mass corrections (Appleby and Otsubo, 2014), both of which are currently implemented in the standard ILRS reference frame products. Our approach of solving simultaneously for station coordinates and possible range bias for all the stations has the strength that any bias results are independent of the coordinates taken, for example, from ITRF2008; thus the approach has the potential to discover bias that may have become absorbed primarily into station height had the coordinates been determined on the assumption of zero bias. A serious complication of the approach is that correlations will inevitably exist between station height and range bias. However, for the major stations of the Network, and using LAGEOS and LAGEOS-II observations simultaneously in our weekly solutions, we are developing techniques, and testing their sensitivity, for performing a partial separation between these parameters at the expense of an increase in the variance of the stations' height time series. In this paper we discuss the results in terms of their potential impact on coordinate solutions, including the reference frame scale, and in the context of preparations for ITRF2013.

  12. Dynamic Accuracy of GPS Receivers for Use in Health Research: A Novel Method to Assess GPS Accuracy in Real-World Settings.

    PubMed

    Schipperijn, Jasper; Kerr, Jacqueline; Duncan, Scott; Madsen, Thomas; Klinker, Charlotte Demant; Troelsen, Jens

    2014-01-01

    The emergence of portable global positioning system (GPS) receivers over the last 10 years has provided researchers with a means to objectively assess spatial position in free-living conditions. However, the use of GPS in free-living conditions is not without challenges, and the aim of this study was to test the dynamic accuracy of a portable GPS device under real-world environmental conditions, for four modes of transport, and using three data collection intervals. We selected four routes on different bearings, passing through a variety of environmental conditions in the City of Copenhagen, Denmark, to test the dynamic accuracy of the Qstarz BT-Q1000XT GPS device. Each route consisted of a walking, bicycle, and vehicle lane in each direction. The actual width of each walking, cycling, and vehicle lane was digitized as accurately as possible using ultra-high-resolution aerial photographs as background. For each trip, we calculated the percentage of GPS points that fell within the lane polygon, and within 2.5, 5, and 10 m buffers respectively, as well as the mean and median error in meters. Our results showed that 49.6% of all ≈68,000 GPS points fell within 2.5 m of the expected location, 78.7% fell within 10 m, and the median error was 2.9 m. The median error was 3.9 m for walking trips, 2.0 m for bicycle trips, 1.5 m for bus trips, and 0.5 m for car trips. The different area types showed considerable variation in the median error: 0.7 m in open areas, 2.6 m in half-open areas, and 5.2 m in urban canyons. The dynamic spatial accuracy of the tested device is not perfect, but we feel that it is within acceptable limits for larger population studies. Longer recording periods for a larger population are likely to reduce the potentially negative effects of measurement inaccuracy. Furthermore, special care should be taken when the environment in which the study takes place could compromise the GPS signal. PMID:24653984
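
    The buffer-based accuracy measure used in this study (share of GPS fixes falling inside the digitized lane and inside 2.5, 5 and 10 m buffers around it) is easy to reproduce with standard GIS tooling. The sketch below assumes the shapely package and projected (metric) coordinates; the lane geometry and fixes are invented for illustration.

    ```python
    from shapely.geometry import Point, Polygon

    # Digitized lane polygon and recorded GPS fixes (illustrative metric coordinates).
    lane = Polygon([(0, 0), (100, 0), (100, 3), (0, 3)])
    fixes = [Point(10, 1.5), Point(20, 4.0), Point(30, 9.0), Point(40, 2.0)]

    def share_within(points, polygon, buffer_m=0.0):
        """Fraction of points inside the polygon expanded by buffer_m metres."""
        zone = polygon.buffer(buffer_m) if buffer_m > 0 else polygon
        return sum(zone.contains(p) for p in points) / len(points)

    for b in (0.0, 2.5, 5.0, 10.0):
        print(f"within {b:>4.1f} m buffer: {share_within(fixes, lane, b):.0%}")

    # Positional error of each fix: distance to the lane polygon (0 if inside it).
    errors = [p.distance(lane) for p in fixes]
    ```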

  13. Self-Assessed Intelligence and Academic Performance

    ERIC Educational Resources Information Center

    Chamorro-Premuzic, Tomas; Furnham, Adrian

    2006-01-01

    This paper reports the results of a two-year longitudinal study of the relationship between self-assessed intelligence (SAI) and academic performance (AP) in a sample of 184 British undergraduate students. Results showed significant correlations between SAI (both before and after taking an IQ test) and academic exam marks obtained two years later,…

  14. Assessing Performance When the Stakes are High.

    ERIC Educational Resources Information Center

    Crawford, William R.

    This paper is concerned with measuring achievement levels of medical students. Precise tools are needed to assess the readiness of an individual to practice. The basic question then becomes, what can this candidate do, at a given time, under given circumstances. Given the definition of the circumstances, and the candidate's performance, the…

  15. 10 Steps to District Performance Assessment.

    ERIC Educational Resources Information Center

    Driscoll, Lydia Abell

    In the 1995-96 school year, the Memphis (Tennessee) City Schools released standards for student performance in seven content areas and began laying the foundation for a standards-based curriculum and assessment system. The steps taken to develop and implement this project are outlined as follows: (1) defining the objectives and the project scope;…

  16. A Litmus Test for Performance Assessment.

    ERIC Educational Resources Information Center

    Finson, Kevin D.; Beaver, John B.

    1992-01-01

    Presents 10 guidelines for developing performance-based assessment items. Presents a sample activity developed from the guidelines. The activity tests students' ability to observe, classify, and infer, using red and blue litmus paper, a pH-range finder, vinegar, ammonia, an unknown solution, distilled water, and paper towels. (PR)

  17. Accuracy Assessment of Geometrical Elements for Setting-Out in Horizontal Plane of Conveying Chambers at the Bauxite Mine "KOSTURI" Srebrenica

    NASA Astrophysics Data System (ADS)

    Milutinović, Aleksandar; Ganić, Aleksandar; Tokalić, Rade

    2014-03-01

    Setting-out of objects on the exploitation field of a mine, both in surface mining and in underground mines, is governed by the specified setting-out accuracy of the reference points that best define the spatial position of the designed object. To achieve the specified accuracy, it is necessary to perform an a priori accuracy assessment of the parameters that are to be used when performing the setting-out. The a priori accuracy assessment serves to verify the quality of the geometrical setting-out elements specified in the layout, to define the accuracy required for setting-out of geometrical elements, and to select the setting-out method and the type and class of instruments and tools to be applied in order to achieve the predefined accuracy. The paper presents the accuracy assessment of geometrical elements for setting-out of the main haul gallery, haul downcast and helical conveying downcasts in the shape of an inclined helix in the horizontal plane, using the example of the underground bauxite mine »Kosturi«, Srebrenica. (Polish abstract, translated:) Setting-out of objects in a mine's extraction field, in both underground and open-pit mines, depends largely on the specified setting-out accuracy of the reference points by means of which the spatial positions of the remaining objects are subsequently determined. To achieve the assumed accuracy, a preliminary analysis of the accuracy of the parameters that will later be used in the setting-out process must be carried out. Based on the results of this preliminary accuracy analysis, the quality of the geometrical setting-out elements marked in the layout is verified; taking these results into account, an appropriate setting-out method and the type and class of the tools and instruments used should be selected so as to achieve the assumed level of accuracy. The paper presents an accuracy assessment of the setting-out of geometrical elements for the main haul gallery

  18. An objective spinal motion imaging assessment (OSMIA): reliability, accuracy and exposure data

    PubMed Central

    Breen, Alan C; Muggleton, Jennifer M; Mellor, Fiona E

    2006-01-01

    Background Minimally-invasive measurement of continuous inter-vertebral motion in clinical settings is difficult to achieve. This paper describes the reliability, validity and radiation exposure levels in a new Objective Spinal Motion Imaging Assessment system (OSMIA) based on low-dose fluoroscopy and image processing. Methods Fluoroscopic sequences in coronal and sagittal planes were obtained from 2 calibration models using dry lumbar vertebrae, plus the lumbar spines of 30 asymptomatic volunteers. Calibration model 1 (mobile) was screened upright, in 7 inter-vertebral positions. The volunteers and calibration model 2 (fixed) were screened on a motorised table comprising 2 horizontal sections, one of which moved through 80 degrees. Model 2 was screened during motion 5 times and the L2-S1 levels of the volunteers twice. Images were digitised at 5 fps. Inter-vertebral motion from model 1 was compared to its pre-settings to investigate accuracy. For volunteers and model 2, the first digitised image in each sequence was marked with templates. Vertebrae were tracked throughout the motion using automated frame-to-frame registration. For each frame, vertebral angles were subtracted giving inter-vertebral motion graphs. Volunteer data were acquired twice on the same day and analysed by two blinded observers. The root-mean-square (RMS) differences between paired data were used as the measure of reliability. Results RMS difference between reference and computed inter-vertebral angles in model 1 was 0.32 degrees for side-bending and 0.52 degrees for flexion-extension. For model 2, X-ray positioning contributed more to the variance of range measurement than did automated registration. For volunteer image sequences, RMS inter-observer variation in intervertebral motion range in the coronal plane was 1.86 degrees and intra-subject biological variation was between 2.75 degrees and 2.91 degrees. RMS inter-observer variation in the sagittal plane was 1.94 degrees. Radiation dosages

  19. Diagnostic accuracy of Magnetic Resonance Imaging in assessment of Meniscal and ACL tear: Correlation with arthroscopy

    PubMed Central

    Yaqoob, Jamal; Alam, Muhammad Shahbaz; Khalid, Nadeem

    2015-01-01

    Objective: To determine the diagnostic accuracy of magnetic resonance imaging (MRI) in injuries related to the anterior cruciate ligament and menisci and compare its effectiveness with that of arthroscopy. Methods: This retrospective cross-sectional study was conducted in the department of Radiology & Medical Imaging of Dallah Hospital, Riyadh, Kingdom of Saudi Arabia from September 2012 to March 2014. Fifty-four patients (30 men and 24 women) with internal derangement of the knee referred from the orthopedic consulting clinics underwent MR imaging followed by arthroscopic evaluation. The presence of meniscal and ligamentous abnormalities on imaging was documented by two trained radiologists. Findings were later compared with arthroscopic findings. Results: The sensitivity, specificity and accuracy of MR imaging for menisci and ACL injury were calculated: 100% sensitivity, 88.4% specificity, 90% positive predictive value, 100% negative predictive value, and 94.4% accuracy were noted for medial meniscal injury. Similarly, MR had sensitivity of 85.7%, specificity of 95%, positive predictive value of 85.7%, negative predictive value of 95%, and accuracy of 92.5% for lateral meniscal injuries. Likewise, the anterior cruciate ligament had 91.6% sensitivity, 95.2% specificity, 84.6% positive predictive value, 97.5% negative predictive value, and 94.4% accuracy. Conclusion: MRI is extremely helpful in identifying meniscal and anterior cruciate ligament tears. MR imaging has a high negative predictive value, making it a better choice as a screening tool compared with diagnostic arthroscopic evaluation in most patients with soft tissue trauma to the knee. PMID:26101472
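
    The measures reported above all follow from the 2×2 table of imaging findings versus the arthroscopic reference standard. A minimal sketch of those calculations is given below; the counts are hypothetical, since the abstract reports only the derived percentages.

    ```python
    def diagnostic_metrics(tp, fp, fn, tn):
        """Sensitivity, specificity, PPV, NPV and accuracy from a 2x2 table."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
            "accuracy": (tp + tn) / (tp + fp + fn + tn),
        }

    # Illustrative counts only (MRI vs arthroscopy for one lesion type).
    print(diagnostic_metrics(tp=27, fp=3, fn=0, tn=24))
    ```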

  20. Accuracy Assessment of Underwater Photogrammetric Three Dimensional Modelling for Coral Reefs

    NASA Astrophysics Data System (ADS)

    Guo, T.; Capra, A.; Troyer, M.; Gruen, A.; Brooks, A. J.; Hench, J. L.; Schmitt, R. J.; Holbrook, S. J.; Dubbini, M.

    2016-06-01

    Recent advances in automation of photogrammetric 3D modelling software packages have stimulated interest in reconstructing highly accurate 3D object geometry in unconventional environments such as underwater utilizing simple and low-cost camera systems. The accuracy of underwater 3D modelling is affected by more parameters than in single media cases. This study is part of a larger project on 3D measurements of temporal change of coral cover in tropical waters. It compares the accuracies of 3D point clouds generated by using images acquired from a system camera mounted in an underwater housing and the popular GoPro cameras respectively. A precisely measured calibration frame was placed in the target scene in order to provide accurate control information and also quantify the errors of the modelling procedure. In addition, several objects (cinder blocks) with various shapes were arranged in the air and underwater and 3D point clouds were generated by automated image matching. These were further used to examine the relative accuracy of the point cloud generation by comparing the point clouds of the individual objects with the objects measured by the system camera in air (the best possible values). Given a working distance of about 1.5 m, the GoPro camera can achieve a relative accuracy of 1.3 mm in air and 2.0 mm in water. The system camera achieved an accuracy of 1.8 mm in water, which meets our requirements for coral measurement in this system.

  1. Geometric Accuracy Assessment of LANDSAT-4 Multispectral Scanner (MSS). [Washington, D.C.

    NASA Technical Reports Server (NTRS)

    Imhoff, M. L.; Alford, W. L.

    1984-01-01

    Standard LANDSAT-4 MSS digital image data were analyzed for geometric accuracy using two P-format (UTM projection) images of the Washington, D.C. area, scene day 109 (ID number 4010915140) and scene day 125 (ID number 4012515144). Both scenes were tested for geodetic registration accuracy (scene-to-map), temporal registration accuracy (scene-to-scene), and band-to-band registration accuracy (within a scene). The combined RMS error for geodetic registration accuracy was 0.43 pixel (25.51 meters), well within specifications. The comparison between the two scenes was made on a band-by-band basis. The 90 percent error figure for temporal registration was 0.68 pixel (57 m x 57 m pixels), or 38.8 meters. Although this figure is larger than the specification, it can be considered excellent with respect to user applications. The best-case registration errors, between bands 1 and 2 and bands 3 and 4, were 14.2 m and 13.7 m respectively, both within specifications. The worst-case registration error was 38.0 m, between bands 2 and 3.
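
    Combined RMS registration error of the kind quoted here is simply the root-mean-square length of the residual vectors at the checked control points, converted from pixels to metres with the nominal pixel size. The residuals below are invented, since the report gives only the summary figures.

    ```python
    import numpy as np

    # Residuals (image minus map position) at ground control points, in pixels.
    dx = np.array([0.2, -0.3, 0.4, -0.1, 0.3])  # illustrative values
    dy = np.array([-0.2, 0.1, -0.4, 0.3, 0.2])

    rms_pixels = np.sqrt(np.mean(dx**2 + dy**2))
    pixel_size_m = 57.0  # nominal MSS P-format pixel size assumed here
    print(f"RMS error: {rms_pixels:.2f} px = {rms_pixels * pixel_size_m:.1f} m")
    ```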

  2. Radioactive Waste Management Complex performance assessment: Draft

    SciTech Connect

    Case, M.J.; Maheras, S.J.; McKenzie-Carter, M.A.; Sussman, M.E.; Voilleque, P.

    1990-06-01

    A radiological performance assessment of the Radioactive Waste Management Complex at the Idaho National Engineering Laboratory was conducted to demonstrate compliance with appropriate radiological criteria of the US Department of Energy and the US Environmental Protection Agency for protection of the general public. The calculations involved modeling the transport of radionuclides from buried waste, to surface soil and subsurface media, and eventually to members of the general public via air, ground water, and food chain pathways. Projections of doses were made for both offsite receptors and individuals intruding onto the site after closure. In addition, uncertainty analyses were performed. Results of calculations made using nominal data indicate that the radiological doses will be below appropriate radiological criteria throughout operations and after closure of the facility. Recommendations were made for future performance assessment calculations.

  3. Accuracy of audio computer-assisted self-interviewing (ACASI) and self-administered questionnaires for the assessment of sexual behavior.

    PubMed

    Morrison-Beedy, Dianne; Carey, Michael P; Tu, Xin

    2006-09-01

    This study examined the accuracy of two retrospective methods and assessment intervals for recall of sexual behavior and assessed predictors of recall accuracy. Using a 2 (mode: audio computer-assisted self-interview [ACASI] vs. self-administered questionnaire [SAQ]) x 2 (frequency: monthly vs. quarterly) design, young women (N = 102) were randomly assigned to one of four conditions. Participants completed baseline measures, monitored their behavior with a daily diary, and returned monthly (or quarterly) for assessments. A mixed pattern of accuracy between the four assessment methods was identified. Monthly assessments yielded more accurate recall for protected and unprotected vaginal sex, but quarterly assessments yielded more accurate recall for unprotected oral sex. Mode differences were not strong, and hypothesized predictors of accuracy tended not to be associated with recall accuracy. Choice of assessment mode and frequency should be based upon the research question(s), population, resources, and context in which data collection will occur. PMID:16721506

  4. Accuracy of Assessment of Eligibility for Early Medical Abortion by Community Health Workers in Ethiopia, India and South Africa

    PubMed Central

    Nguyen, My Huong; Habib, Ndema; Afework, Mesganaw Fantahun; Harries, Jane; Iyengar, Kirti; Moodley, Jennifer; Constant, Deborah; Sen, Swapnaleen

    2016-01-01

    Objective To assess the accuracy of assessment of eligibility for early medical abortion by community health workers using a simple checklist toolkit. Design Diagnostic accuracy study. Setting Ethiopia, India and South Africa. Methods Two hundred seventeen women in Ethiopia, 258 in India and 236 in South Africa were enrolled into the study. A checklist toolkit to determine eligibility for early medical abortion was validated by comparing the results of clinician and community health worker assessments of eligibility using the checklist toolkit with a reference standard exam. Results Accuracy was over 90% and the negative likelihood ratio <0.1 at all three sites when the toolkit was used by clinician assessors. Positive likelihood ratios were 4.3 in Ethiopia, 5.8 in India and 6.3 in South Africa. When used by community health workers, the overall accuracy of the toolkit was 92% in Ethiopia, 80% in India and 77% in South Africa; negative likelihood ratios were 0.08 in Ethiopia, 0.25 in India and 0.22 in South Africa; and positive likelihood ratios were 5.9 in Ethiopia and 2.0 in India and South Africa. Conclusion The checklist toolkit, as used by clinicians, was excellent at ruling out participants who were not eligible, and moderately effective at ruling in participants who were eligible for medical abortion. Results were promising when used by community health workers, particularly in Ethiopia, where they had more prior experience with the use of diagnostic aids and longer professional training. The checklist toolkit assessments resulted in some participants being wrongly assessed as eligible for medical abortion, which is an area of concern. Further research is needed to streamline the components of the tool, explore the optimal duration and content of training for community health workers, and test feasibility and acceptability. PMID:26731176
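
    The likelihood ratios reported here are derived directly from sensitivity and specificity, which is worth making explicit when comparing sites. The sensitivity and specificity values in the snippet are placeholders, not figures from the study.

    ```python
    def likelihood_ratios(sensitivity, specificity):
        """Positive and negative likelihood ratios of a screening/eligibility test."""
        lr_positive = sensitivity / (1 - specificity)
        lr_negative = (1 - sensitivity) / specificity
        return lr_positive, lr_negative

    # Placeholder values chosen only to illustrate the calculation.
    lr_pos, lr_neg = likelihood_ratios(sensitivity=0.95, specificity=0.84)
    print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}")
    ```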

  5. Assessing the Accuracy of a Child's Account of Sexual Abuse: A Case Study.

    ERIC Educational Resources Information Center

    Orbach, Yael; Lamb, Michael E.

    1999-01-01

    This study examined the accuracy of a 13-year-old girl's account of a sexually abusive incident. Information given by the victim was compared with an audiotaped record. Over 50% of information reported by the victim was corroborated by the audio record and 64% was confirmed by more than one source. (Author/CR)

  6. Comparative analysis of Worldview-2 and Landsat 8 for coastal saltmarsh mapping accuracy assessment

    NASA Astrophysics Data System (ADS)

    Rasel, Sikdar M. M.; Chang, Hsing-Chung; Diti, Israt Jahan; Ralph, Tim; Saintilan, Neil

    2016-05-01

    Coastal saltmarshes and their constituent components and processes are of scientific interest due to their ecological functions and services. However, the heterogeneity and seasonal dynamics of coastal wetland systems make it challenging to map saltmarshes with remotely sensed data. This study selected four important saltmarsh species, Phragmites australis, Sporobolus virginicus, Ficinia nodosa and Schoenoplectus sp., as well as a mangrove and a pine tree species, Avicennia and Casuarina sp. respectively. High spatial resolution Worldview-2 data and coarse spatial resolution Landsat 8 imagery were selected for this study. Among the selected vegetation types, some patches were fragmented and close to the spatial resolution of the Worldview-2 data, while some patches were larger than the 30 meter resolution of the Landsat 8 data. This study aims to test the effectiveness of different classifiers on imagery with various spatial and spectral resolutions. Three classification algorithms, Maximum Likelihood Classifier (MLC), Support Vector Machine (SVM) and Artificial Neural Network (ANN), were tested and compared in terms of the mapping accuracy of the results derived from both satellite images. For the Worldview-2 data, SVM gave the highest overall accuracy (92.12%, kappa = 0.90), followed by ANN (90.82%, kappa = 0.89) and MLC (90.55%, kappa = 0.88). For the Landsat 8 data, MLC (82.04%) showed the highest classification accuracy compared to SVM (77.31%) and ANN (75.23%). The producer's accuracy of the classification results is also presented in the paper.
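
    Overall accuracy, kappa and producer's accuracy, the three figures quoted above, all come from the classification's confusion matrix. The sketch below uses a made-up three-class matrix to show the arithmetic; it is not the study's matrix.

    ```python
    import numpy as np

    # Illustrative confusion matrix (rows: reference classes, columns: mapped classes).
    cm = np.array([[50,  3,  2],
                   [ 4, 45,  6],
                   [ 1,  5, 40]])

    n = cm.sum()
    overall_accuracy = np.trace(cm) / n

    # Cohen's kappa: agreement beyond the chance level implied by the marginals.
    chance_agreement = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    kappa = (overall_accuracy - chance_agreement) / (1 - chance_agreement)

    # Producer's accuracy per class: correctly mapped / reference total.
    producers_accuracy = np.diag(cm) / cm.sum(axis=1)

    print(overall_accuracy, kappa, producers_accuracy)
    ```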

  7. PRECISION AND ACCURACY ASSESSMENTS FOR STATE AND LOCAL AIR MONITORING NETWORKS, 1984

    EPA Science Inventory

    Precision and accuracy data obtained from state and local agencies during 1984 are summarized and compared to data reported earlier for the period 1981-1983. A continual improvement in the completeness of the data is evident. Improvement is also evident in the size of the precisi...

  8. PRECISION AND ACCURACY ASSESSMENTS FOR STATE AND LOCAL AIR MONITORING NETWORKS, 1983

    EPA Science Inventory

    Precision and accuracy data obtained from State and local agencies during 1983 are summarized and evaluated. Some comparisons are made with the results previously reported for 1981 and 1982 to determine the indication of any trends. Some trends indicated improvement in the comple...

  9. PRECISION AND ACCURACY ASSESSMENTS FOR STATE AND LOCAL AIR MONITORING NETWORKS, 1985

    EPA Science Inventory

    Precision and accuracy data obtained from State and local agencies during 1985 are summarized and evaluated. Some comparisons are made with the results reported for prior years to determine any trends. Some trends indicated continued improvement in the completeness of reporting o...

  10. Accuracy, Confidence, and Calibration: How Young Children and Adults Assess Credibility

    ERIC Educational Resources Information Center

    Tenney, Elizabeth R.; Small, Jenna E.; Kondrad, Robyn L.; Jaswal, Vikram K.; Spellman, Barbara A.

    2011-01-01

    Do children and adults use the same cues to judge whether someone is a reliable source of information? In 4 experiments, we investigated whether children (ages 5 and 6) and adults used information regarding accuracy, confidence, and calibration (i.e., how well an informant's confidence predicts the likelihood of being correct) to judge informants'…

  11. Interrater Reliability Estimators Commonly Used in Scoring Language Assessments: A Monte Carlo Investigation of Estimator Accuracy

    ERIC Educational Resources Information Center

    Morgan, Grant B.; Zhu, Min; Johnson, Robert L.; Hodge, Kari J.

    2014-01-01

    Common estimators of interrater reliability include Pearson product-moment correlation coefficients, Spearman rank-order correlations, and the generalizability coefficient. The purpose of this study was to examine the accuracy of estimators of interrater reliability when varying the true reliability, number of scale categories, and number of…

  12. Portable device to assess dynamic accuracy of global positioning systems (GPS) receivers used in agricultural aircraft

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A device was designed to test the dynamic accuracy of Global Positioning System (GPS) receivers used in aerial vehicles. The system works by directing a sun-reflected light beam from the ground to the aircraft using mirrors. A photodetector is placed pointing downward from the aircraft and circuitry...

  13. Assessing the Accuracy and Consistency of Language Proficiency Classification under Competing Measurement Models

    ERIC Educational Resources Information Center

    Zhang, Bo

    2010-01-01

    This article investigates how measurement models and statistical procedures can be applied to estimate the accuracy of proficiency classification in language testing. The paper starts with a concise introduction of four measurement models: the classical test theory (CTT) model, the dichotomous item response theory (IRT) model, the testlet response…

  14. ASSESSMENT OF THE PRECISION AND ACCURACY OF SAM AND MFC MICROCOSMS EXPOSED TO TOXICANTS

    EPA Science Inventory

    The results of 30 mixed flask culture (MFC) and four standardized aquatic microcosm (SAM) experiments were used to describe the precision and accuracy of these two protocols. Coefficients of variation (CV) for chemical measurements (DO, pH) were generally less than 7%, f...

  15. Applying Signal-Detection Theory to the Study of Observer Accuracy and Bias in Behavioral Assessment

    ERIC Educational Resources Information Center

    Lerman, Dorothea C.; Tetreault, Allison; Hovanetz, Alyson; Bellaci, Emily; Miller, Jonathan; Karp, Hilary; Mahmood, Angela; Strobel, Maggie; Mullen, Shelley; Keyl, Alice; Toupard, Alexis

    2010-01-01

    We evaluated the feasibility and utility of a laboratory model for examining observer accuracy within the framework of signal-detection theory (SDT). Sixty-one individuals collected data on aggression while viewing videotaped segments of simulated teacher-child interactions. The purpose of Experiment 1 was to determine if brief feedback and…

  16. Accuracy of Unenhanced MR Imaging in the Detection of Acute Appendicitis: Single-Institution Clinical Performance Review.

    PubMed

    Petkovska, Iva; Martin, Diego R; Covington, Matthew F; Urbina, Shannon; Duke, Eugene; Daye, Z John; Stolz, Lori A; Keim, Samuel M; Costello, James R; Chundru, Surya; Arif-Tiwari, Hina; Gilbertson-Dahdal, Dorothy; Gries, Lynn; Kalb, Bobby

    2016-05-01

    Purpose To determine the accuracy of unenhanced magnetic resonance (MR) imaging in the detection of acute appendicitis in patients younger than 50 years who present to the emergency department with right lower quadrant (RLQ) pain. Materials and Methods The institutional review board approved this retrospective study of 403 patients from August 1, 2012, to July 30, 2014, and waived the informed consent requirement. A cross-department strategy was instituted to use MR imaging as the primary diagnostic modality in patients aged 3-49 years who presented to the emergency department with RLQ pain. All MR examinations were performed with a 1.5- or 3.0-T system. Images were acquired without breath holding, using multiplanar half-Fourier single-shot T2-weighted imaging both with and without spectral adiabatic inversion recovery fat suppression, and without oral or intravenous contrast material. MR imaging room time was measured for each patient. Prospective image interpretations from clinical records were reviewed to document acute appendicitis or other causes of abdominal pain. Final clinical outcomes were determined by using (a) surgical results (n = 77), (b) telephone follow-up combined with review of the patient's medical records (n = 291), or (c) consensus expert panel assessment if no follow-up data were available (n = 35). Logistic regression analysis was performed to evaluate the sensitivity and specificity of MR imaging in the detection of acute appendicitis, and corresponding 95% confidence intervals were determined. Results Of the 403 patients, 67 had MR imaging findings that were positive for acute appendicitis, and 336 had negative findings. MR imaging had a sensitivity of 97.0% (65 of 67) and a specificity of 99.4% (334 of 336). The mean total room time was 14 minutes (range, 8-62 minutes). An alternate diagnosis was offered in 173 (51.5%) of 336 patients. Conclusion MR imaging is a highly sensitive and specific test in the evaluation of patients younger than 50 years

  17. Assessing the accuracy of the van der Waals density functionals for rare-gas and small molecular systems

    NASA Astrophysics Data System (ADS)

    Callsen, Martin; Hamada, Ikutaro

    2015-05-01

    The precise description of chemical bonds of different natures is a prerequisite for an accurate electronic structure method. The van der Waals density functional is a promising approach that meets such a requirement. Nevertheless, its accuracy should be assessed for a variety of materials to test the robustness of the method. We present benchmark calculations for weakly interacting molecular complexes and rare-gas systems as well as covalently bound molecular systems, in order to assess the accuracy and applicability of rev-vdW-DF2, a recently proposed variant [I. Hamada, Phys. Rev. B 89, 121103 (2014), 10.1103/PhysRevB.89.121103] of the van der Waals density functional. It is shown that although the calculated atomization energies for small molecules are less accurate, rev-vdW-DF2 describes the interaction energy curves for the weakly interacting molecules and rare-gas complexes, as well as the bond lengths of diatomic molecules, reasonably well.

  18. Improving performance through self-assessment.

    PubMed

    Pitt, D J

    1999-01-01

    Wakefield and Pontefract Community Health NHS Trust uses the European Business Excellence Model self-assessment for continuous improvement. An outline of the key aspects of the model, an approach to TQM, is presented. This article sets out the context that led to the adoption of the model in the Trust and describes the approach that has been taken to completing self-assessments. Use of the model to secure continuous improvement is reviewed against Bhopal and Thomson's Audit Cycle and consideration is given to lessons learned. The article concludes with a discussion on applicability of the model to health care organisations. It is concluded that, after an initial learning curve, the model has facilitated integration of a range of quality initiatives, and progress with continuous improvement. Critical to this was the linking of self-assessment to business planning and performance management systems. PMID:10537856

  19. Computational Tools to Assess Turbine Biological Performance

    SciTech Connect

    Richmond, Marshall C.; Serkowski, John A.; Rakowski, Cynthia L.; Strickler, Brad; Weisbeck, Molly; Dotson, Curtis L.

    2014-07-24

    Public Utility District No. 2 of Grant County (GCPUD) operates the Priest Rapids Dam (PRD), a hydroelectric facility on the Columbia River in Washington State. The dam contains 10 Kaplan-type turbine units that are now more than 50 years old. Plans are underway to refit these aging turbines with new runners. The Columbia River at PRD is a migratory pathway for several species of juvenile and adult salmonids, so passage of fish through the dam is a major consideration when upgrading the turbines. In this paper, a method for turbine biological performance assessment (BioPA) is demonstrated. Using this method, a suite of biological performance indicators is computed based on simulated data from a CFD model of a proposed turbine design. Each performance indicator is a measure of the probability of exposure to a certain dose of an injury mechanism. Using known relationships between the dose of an injury mechanism and frequency of injury (dose–response) from laboratory or field studies, the likelihood of fish injury for a turbine design can be computed from the performance indicator. By comparing the values of the indicators from proposed designs, the engineer can identify the more-promising alternatives. We present an application of the BioPA method for baseline risk assessment calculations for the existing Kaplan turbines at PRD that will be used as the minimum biological performance that a proposed new design must achieve.
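
    The core of the BioPA calculation described above is an exposure-weighted dose-response sum: the probability of experiencing each dose of an injury mechanism (derived from the CFD simulation) is combined with the laboratory-derived injury frequency at that dose. The sketch below is a simplified, illustrative reading of that idea; the dose bins, exposure fractions and injury rates are invented, not values from the PRD study.

    ```python
    import numpy as np

    # Dose bins for one injury mechanism (e.g. shear exposure levels) - illustrative.
    dose_bins = np.array([50.0, 150.0, 300.0, 600.0])

    # Fraction of simulated fish passage paths exposed to each dose bin (from CFD).
    exposure_prob = np.array([0.70, 0.20, 0.08, 0.02])

    # Injury frequency observed at each dose in laboratory dose-response studies.
    injury_prob = np.array([0.00, 0.02, 0.10, 0.40])

    # Likelihood of injury for this turbine design: exposure-weighted dose-response.
    expected_injury = float(np.dot(exposure_prob, injury_prob))
    print(f"Expected injury probability: {expected_injury:.3f}")
    ```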

  20. Assessing the accuracy of the Second Military Survey for the Doren Landslide (Vorarlberg, Austria)

    NASA Astrophysics Data System (ADS)

    Zámolyi, András.; Székely, Balázs; Biszak, Sándor

    2010-05-01

    Reconstruction of the early and long-term evolution of landslide areas is especially important for determining the proportion of anthropogenic influence on the evolution of a region affected by mass movements. The recent geologic and geomorphological setting of the prominent Doren landslide in Vorarlberg (Western Austria) has been studied extensively by various research groups and civil engineering companies. Civil aerial imaging of the area dates back to the 1950s. Modern monitoring techniques include aerial imaging as well as airborne and terrestrial laser scanning (LiDAR), providing an almost yearly assessment of the changing geomorphology of the area. However, initiation of the landslide most probably occurred earlier than the application of these methods, since there is evidence that the landslide was already active in the 1930s. For studying the initial phase of landslide formation, one possibility is to draw on information recorded in historic photographs or historic maps. In this case study we integrated topographic information from the map sheets of the Second Military Survey of the Habsburg Empire, conducted in Vorarlberg during the years 1816-1821 (Kretschmer et al., 2004), into a comprehensive GIS. The region of interest around the Doren landslide was georeferenced using the method of Timár et al. (2006), refined by Molnár (2009), thus providing geodetically correct positioning and the possibility of matching the topographic features from the historic map with features recognized in the LiDAR DTM. The landslide of Doren is clearly visible in the historic map. Additionally, prominent geomorphological features such as morphological scarps, rills and gullies, mass movement lobes and the course of the Weißach rivulet can be matched. Not only can the shape and character of these elements be recognized and matched, but the positional accuracy is also adequate for geomorphological studies. Since the settlement structure is very stable in the

  1. Performance and accuracy investigations of two Doppler global velocimetry systems applied in parallel

    NASA Astrophysics Data System (ADS)

    Willert, Christian; Stockhausen, Guido; Klinner, Joachim; Lempereur, Christine; Barricau, Philippe; Loiret, Philippe; Raynal, Jean Claude

    2007-08-01

    Two Doppler global velocimetry systems were applied in parallel to assess their performance in wind tunnel environments. Both DGV systems were mounted on a common traverse surrounding the glass-walled 1.4 × 1.8 m2 test section of the wind tunnel. The traverse normally supports a three-component forward-scatter laser Doppler velocimetry system. The reproducible tip-vortex flow field generated by the blunt tip of an airfoil was chosen for this investigation and was precisely surveyed by LDA just prior to the DGV measurements. Both DGV systems shared the same continuous wave laser light source, laser frequency monitoring and fibre optic light sheet delivery system. The principal differences between the DGV implementations are with regard to the imaging configuration. One configuration relied on a single camera view that observed three successively operated light sheets. In the second configuration, three camera views simultaneously observed a single light sheet using a four-branch fibre imaging bundle. The imaging bundle system had all three viewpoints in a forward scattering arrangement which increased the scattering efficiency but reduced the frequency shift sensitivity. Since all three light sheet observation components were acquired onto the same image frame, acquisition times could be reduced to a minimum. On the other hand, the triple light sheet-single camera system observed two light sheets in forward scatter and one light sheet in backscatter. Although three separate images had to be recorded in succession, the image quality, spatial resolution and signal-to-noise ratio were superior to the imaging bundle system. Comparison of the DGV data with LDV measurements shows very good agreement to within 1-2 m s-1. The remaining discrepancy has a variety of causes, some are related to the reduced resolving power of the fibre imaging bundle system (graininess, smoothing), exact localization of the receiver head with respect to the scene, laser frequency drift or

  2. Assessing the accuracy of software predictions of mammalian and microbial metabolites

    EPA Science Inventory

    New chemical development and hazard assessments benefit from accurate predictions of mammalian and microbial metabolites. Fourteen biotransformation libraries encoded in eight software packages that predict metabolite structures were assessed for their sensitivity (proportion of ...

  3. Accuracy in Student Self-Assessment: Directions and Cautions for Research

    ERIC Educational Resources Information Center

    Brown, Gavin T. L.; Andrade, Heidi L.; Chen, Fei

    2015-01-01

    Student self-assessment is a central component of current conceptions of formative and classroom assessment. The research on self-assessment has focused on its efficacy in promoting both academic achievement and self-regulated learning, with little concern for issues of validity. Because reliability of testing is considered a sine qua non for the…

  4. 12 CFR 620.3 - Accuracy of reports and assessment of internal control over financial reporting.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... assessment of internal control over financial reporting. Annual reports of those institutions with over $1... assessing the effectiveness of the institution's internal control over financial reporting. The assessment... the prior fiscal year) must disclose any material change(s) in the internal control over...

  5. Speech variability effects on recognition accuracy associated with concurrent task performance by pilots

    NASA Technical Reports Server (NTRS)

    Simpson, C. A.

    1985-01-01

    In the present study of the responses of pairs of pilots to aircraft warning classification tasks using an isolated word, speaker-dependent speech recognition system, the induced stress was manipulated by means of different scoring procedures for the classification task and by the inclusion of a competitive manual control task. Both speech patterns and recognition accuracy were analyzed, and recognition errors were recorded by type for an isolated word speaker-dependent system and by an offline technique for a connected word speaker-dependent system. While errors increased with task loading for the isolated word system, there was no such effect for task loading in the case of the connected word system.

  6. An assessment of accuracy, error, and conflict with support values from genome-scale phylogenetic data.

    PubMed

    Taylor, Derek J; Piel, William H

    2004-08-01

    Despite the importance of molecular phylogenetics, few of its assumptions have been tested with real data. It is commonly assumed that nonparametric bootstrap values are an underestimate of the actual support, Bayesian posterior probabilities are an overestimate of the actual support, and among-gene phylogenetic conflict is low. We directly tested these assumptions by using a well-supported yeast reference tree. We found that bootstrap values were not significantly different from accuracy. Bayesian support values were, however, significant overestimates of accuracy but still had low false-positive error rates (0% to 2.8%) at the highest values (>99%). Although we found evidence for a branch-length bias contributing to conflict, there was little evidence for widespread, strongly supported among-gene conflict from bootstraps. The results demonstrate that caution is warranted concerning conclusions of conflict based on the assumption of underestimation for support values in real data. PMID:15140947

  7. Accuracy assessment of high frequency 3D ultrasound for digital impression-taking of prepared teeth

    NASA Astrophysics Data System (ADS)

    Heger, Stefan; Vollborn, Thorsten; Tinschert, Joachim; Wolfart, Stefan; Radermacher, Klaus

    2013-03-01

    Silicone-based impression-taking of prepared teeth followed by plaster casting is well established but potentially less reliable, error-prone and inefficient, particularly in combination with emerging techniques like computer-aided design and manufacturing (CAD/CAM) of dental prostheses. Intra-oral optical scanners for digital impression-taking have been introduced, but some drawbacks still exist. Because optical waves can hardly penetrate liquids or soft tissues, sub-gingival preparations still need to be uncovered invasively prior to scanning. High frequency ultrasound (HFUS) based micro-scanning has recently been investigated as an alternative to optical intra-oral scanning. Ultrasound is less sensitive to oral fluids and in principle able to penetrate gingiva without invasively exposing sub-gingival preparations. Nevertheless, the spatial resolution as well as the digitization accuracy of an ultrasound-based micro-scanning system remains a critical parameter, because the ultrasound wavelength in water-like media such as gingiva is typically larger than that of optical waves. In this contribution, the in-vitro accuracy of ultrasound-based micro-scanning for tooth geometry reconstruction is investigated and compared to its extra-oral optical counterpart. In order to increase the spatial resolution of the system, 2nd harmonic frequencies from a mechanically driven focused single-element transducer were separated, and corresponding 3D surface models were calculated for both the fundamentals and the 2nd harmonics. Measurements on phantoms, model teeth and human teeth were carried out to evaluate spatial resolution and surface detection accuracy. Comparison of optical and ultrasound digital impression-taking indicates that, in terms of accuracy, ultrasound-based tooth digitization can be an alternative to optical impression-taking.

  8. Performance viewing and editing in ASSESS Outsider

    SciTech Connect

    Snell, M.K.; Key, B.; Bingham, B.

    1993-07-01

    The Analytic System and Software for Evaluation of Safeguards and Security (ASSESS) Facility module records site information in the path elements and areas of an Adversary Sequence Diagram. The ASSESS Outsider evaluation module takes this information and first calculates performance values describing how much detection and delay is assigned at each path element and then uses the performance values to determine most-vulnerable paths. This paper discusses new Outsider capabilities that allow the user to view how elements are being defeated and to modify some of these values in Outsider. Outsider now displays how different path element segments are defeated and contrasts the probability of detection for alternate methods of defeating a door (e.g., the lock or the door face itself). The user can also override element segment delays and detection probabilities directly during analysis in Outsider. These capabilities allow users to compare element performance and to verify correct path element performance for all elements, not just those on the most-vulnerable path as is the case currently. Improvements or reductions in protection can be easily checked without creating a set of new facility files to accomplish it.

  9. Complexity, Accuracy, Fluency and Lexis in Task-Based Performance: A Synthesis of the Ealing Research

    ERIC Educational Resources Information Center

    Skehan, Peter; Foster, Pauline

    2012-01-01

    This chapter will present a research synthesis of a series of studies, termed here the Ealing research. The studies use the same general framework to conceptualise tasks and task performance, enabling easier comparability. The different studies, although each is self-contained, build into a wider picture of task performance. The major point of…

  10. Reproducibility and accuracy of body composition assessments in mice by dual energy x-ray absorptiometry and time domain nuclear magnetic resonance

    PubMed Central

    Halldorsdottir, Solveig; Carmody, Jill; Boozer, Carol N.; Leduc, Charles A.; Leibel, Rudolph L.

    2011-01-01

    Objective To assess the accuracy and reproducibility of dual-energy X-ray absorptiometry (DXA; PIXImus™) and time domain nuclear magnetic resonance (TD-NMR; Bruker Optics) for the measurement of body composition of lean and obese mice. Subjects and measurements Thirty lean and obese mice (body weight range 19–67 g) were studied. Coefficients of variation for repeated (x 4) DXA and NMR scans of mice were calculated to assess reproducibility. Accuracy was assessed by comparing DXA and NMR results of ten mice to chemical carcass analyses. Accuracy of the respective techniques was also assessed by comparing DXA and NMR results obtained with ground meat samples to chemical analyses. Repeated scans of 10–25 gram samples were performed to test the sensitivity of the DXA and NMR methods to variation in sample mass. Results In mice, DXA and NMR reproducibility measures were similar for fat tissue mass (FTM) (DXA coefficient of variation [CV]=2.3%; NMR CV=2.8%) (P=0.47), while reproducibility of lean tissue mass (LTM) estimates was better for DXA (1.0%) than NMR (2.2%) (

    In terms of accuracy, in mice, DXA overestimated (vs chemical composition) LTM (+1.7 ± 1.3 g [SD], ~8%, P <0.001) as well as FTM (+2.0 ± 1.2 g, ~46%, P <0.001). NMR estimated LTM and FTM virtually identically to chemical composition analysis (LTM: −0.05 ± 0.5 g, ~0.2%, P =0.79) (FTM: +0.02 ± 0.7 g, ~15%, P =0.93). DXA- and NMR-determined LTM and FTM measurements were highly correlated with the corresponding chemical analyses (r2=0.92 and r2=0.99 for DXA LTM and FTM, respectively; r2=0.99 and r2=0.99 for NMR LTM and FTM, respectively). Sample mass did not affect accuracy in assessing the chemical composition of small ground meat samples by either DXA or NMR. Conclusion DXA and NMR provide comparable levels of reproducibility in measurements of body composition in lean and obese mice. While DXA and NMR measures are highly correlated with chemical analysis measures, DXA consistently overestimates LTM
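
    The reproducibility figures in this record are coefficients of variation over repeated scans of the same animal. A minimal sketch of that calculation is shown below, assuming NumPy; the four repeat values are invented for illustration.

    ```python
    import numpy as np

    # Four repeated FTM scans of one mouse on the same instrument (illustrative, grams).
    repeats = np.array([10.2, 10.5, 10.1, 10.4])

    # Coefficient of variation (%) used as the reproducibility measure.
    cv_percent = repeats.std(ddof=1) / repeats.mean() * 100
    print(f"CV = {cv_percent:.1f}%")
    ```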

  11. Guidance for performing preliminary assessments under CERCLA

    SciTech Connect

    1991-09-01

    EPA headquarters and a national site assessment workgroup produced this guidance for Regional, State, and contractor staff who manage or perform preliminary assessments (PAs). EPA has focused this guidance on the types of sites and site conditions most commonly encountered. The PA approach described in this guidance is generally applicable to a wide variety of sites. However, because of the variability among sites, the amount of information available, and the level of investigative effort required, it is not possible to provide guidance that is equally applicable to all sites. PA investigators should recognize this and be aware that variation from this guidance may be necessary for some sites, particularly for PAs performed at Federal facilities, PAs conducted under EPA's Environmental Priorities Initiative (EPI), and PAs at sites that have previously been extensively investigated by EPA or others. The purpose of this guidance is to provide instructions for conducting a PA and reporting results. This guidance discusses the information required to evaluate a site and how to obtain it, how to score a site, and reporting requirements. This document also provides guidelines and instruction on PA evaluation, scoring, and the use of standard PA scoresheets. The overall goal of this guidance is to assist PA investigators in conducting high-quality assessments that result in correct site screening or further action recommendations on a nationally consistent basis.

  12. Performance characterization of precision micro robot using a machine vision system over the Internet for guaranteed positioning accuracy

    NASA Astrophysics Data System (ADS)

    Kwon, Yongjin; Chiou, Richard; Rauniar, Shreepud; Sosa, Horacio

    2005-11-01

    There is a missing link between a virtual development environment (e.g., a CAD/CAM-driven offline robotic programming system) and the production requirements of the actual robotic workcell. Simulated robot path planning and generation of pick-and-place coordinate points will not exactly coincide with the robot performance because variations in individual robot repeatability and thermal expansion of robot linkages are not taken into account. This is especially important when robots are controlled and programmed remotely (e.g., through the Internet or Ethernet), since remote users have no physical contact with the robotic systems. Current Internet-based manufacturing technology is limited to a web camera for live image transfer, which poses a significant challenge for robot task performance. Consequently, the calibration and accuracy quantification of robots critical to precision assembly have to be performed on-site, and the verification of robot positioning accuracy cannot be ascertained remotely. In the worst case, remote users have to assume the robot performance envelope provided by the manufacturers, which may cause a potentially serious hazard of system crashes and damage to the parts and robot arms. Currently, there is no reliable methodology for remotely calibrating robot performance. The objective of this research is, therefore, to advance the current state of the art in Internet-based control and monitoring technology, with a specific aim at the accuracy calibration of a micro precision robotic system, by developing a novel methodology utilizing Ethernet-based smart image sensors and other advanced precision sensory control networks.

  13. Rigorous Training of Dogs Leads to High Accuracy in Human Scent Matching-To-Sample Performance

    PubMed Central

    Marchal, Sophie; Bregeras, Olivier; Puaux, Didier; Gervais, Rémi; Ferry, Barbara

    2016-01-01

    Human scent identification is based on a matching-to-sample task in which trained dogs are required to compare a scent sample collected from an object found at a crime scene to that of a suspect. Based on dogs’ greater olfactory ability to detect and process odours, this method has been used in forensic investigations to identify the odour of a suspect at a crime scene. The excellent reliability and reproducibility of the method largely depend on rigor in dog training. The present study describes the various steps of training that lead to high sensitivity scores, with dogs matching samples with 90% efficiency when the complexity of the scents presented in the sample during the task is similar to that presented in the lineups, and specificity reaching a ceiling, with no false alarms in human scent matching-to-sample tasks. This high level of accuracy ensures reliable results in judicial human scent identification tests. Also, our data should convince law enforcement authorities to use these results as official forensic evidence when dogs are trained appropriately. PMID:26863620

  14. Rigorous Training of Dogs Leads to High Accuracy in Human Scent Matching-To-Sample Performance.

    PubMed

    Marchal, Sophie; Bregeras, Olivier; Puaux, Didier; Gervais, Rémi; Ferry, Barbara

    2016-01-01

    Human scent identification is based on a matching-to-sample task in which trained dogs are required to compare a scent sample collected from an object found at a crime scene to that of a suspect. Based on dogs' greater olfactory ability to detect and process odours, this method has been used in forensic investigations to identify the odour of a suspect at a crime scene. The excellent reliability and reproducibility of the method largely depend on rigor in dog training. The present study describes the various steps of training that lead to high sensitivity scores, with dogs matching samples with 90% efficiency when the complexity of the scents presented in the sample during the task is similar to that presented in the lineups, and specificity reaching a ceiling, with no false alarms in human scent matching-to-sample tasks. This high level of accuracy ensures reliable results in judicial human scent identification tests. Also, our data should convince law enforcement authorities to use these results as official forensic evidence when dogs are trained appropriately. PMID:26863620

  15. Assessing the impacts of precipitation bias on distributed hydrologic model calibration and prediction accuracy

    NASA Astrophysics Data System (ADS)

    Looper, Jonathan P.; Vieux, Baxter E.; Moreno, Maria A.

    2012-02-01

    Physics-based distributed (PBD) hydrologic models predict runoff throughout a basin using the laws of conservation of mass and momentum, and benefit from more accurate and representative precipitation input. V flo™ is a gridded distributed hydrologic model that predicts runoff and continuously updates soil moisture. As a participating model in the second Distributed Model Intercomparison Project (DMIP2), V flo™ is applied to the Illinois and Blue River basins in Oklahoma. Model parameters are derived from geospatial data for initial setup, and then adjusted to reproduce the observed flow under continuous time-series simulations and on an event basis. Simulation results demonstrate that certain runoff events are governed by saturation excess processes, while in others, infiltration-rate excess processes dominate. Streamflow prediction accuracy is enhanced when multi-sensor precipitation estimates (MPE) are bias corrected through re-analysis of the MPE provided in the DMIP2 experiment, resulting in gauge-corrected precipitation estimates (GCPE). Model calibration identified a set of parameters that minimized objective functions for errors in runoff volume and instantaneous discharge. Simulated streamflow for the Blue and Illinois River basins has Nash-Sutcliffe efficiency coefficients of 0.61 and 0.68, respectively, for the 1996-2002 period using GCPE. The streamflow prediction accuracy improves by 74% in terms of Nash-Sutcliffe efficiency when GCPE is used during the calibration period. Without model calibration, excellent agreement between hourly simulated and observed discharge is obtained for the Illinois, whereas in the Blue River, adjustment of parameters affecting both saturation and infiltration-rate excess processes was necessary. During the 1996-2002 period, GCPE input was more important than model calibration for the Blue River, while model calibration proved more important for the Illinois River. During the verification period (2002
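
    The Nash-Sutcliffe efficiency quoted above is a standard goodness-of-fit index for streamflow. A minimal implementation of its usual definition is sketched below with hypothetical discharges; it is not tied to V flo™ or to the DMIP2 data.

      import numpy as np

      def nash_sutcliffe(observed, simulated):
          """Standard Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
          observed = np.asarray(observed, dtype=float)
          simulated = np.asarray(simulated, dtype=float)
          return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

      # Hypothetical hourly discharges (m^3/s); an NSE of 1 is a perfect fit and
      # 0 means the model is no better than the mean of the observations.
      obs = [12.0, 15.5, 30.2, 55.0, 41.3, 25.7]
      sim = [11.0, 17.0, 28.5, 50.1, 44.0, 27.0]
      print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")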

  16. Assessment of Photogrammetric Mapping Accuracy Based on Variation Flying Altitude Using Unmanned Aerial Vehicle

    NASA Astrophysics Data System (ADS)

    Udin, W. S.; Ahmad, A.

    2014-02-01

    Photogrammetry is the earliest technique used to collect data for topographic mapping. A recent development in aerial photogrammetry is the use of large-format digital aerial cameras for producing topographic maps. The aerial photographs can be in the form of metric or non-metric imagery. Mapping using conventional aerial photogrammetry is very expensive, and in certain applications there is a need to map a small area with a limited budget. Due to the development of technology, small-format aerial photogrammetry has been introduced and offers many advantages. Currently, digital maps can be extracted from digital aerial imagery of a small-format camera mounted on a lightweight platform such as an unmanned aerial vehicle (UAV). This study utilizes a UAV system for large-scale stream mapping. The first objective of this study is to investigate the use of a lightweight rotary-wing UAV for stream mapping based on different flying heights. Aerial photographs were acquired with 60% forward lap and 30% sidelap. Ground control points and check points were established using a total station. The digital camera attached to the UAV was calibrated, and the recovered camera calibration parameters were then used in the digital image processing. The second objective is to determine the accuracy of the photogrammetric output. In this study, the photogrammetric outputs, such as the three-dimensional (3D) stereomodel, contour lines, digital elevation model (DEM) and orthophoto, were produced for a small stream 200 m long and 10 m wide. The research output is evaluated for planimetric and vertical accuracy using the root mean square error (RMSE). Based on the findings, sub-meter accuracy is achieved, and the RMSE value decreases as the flying height increases, although the difference is relatively small. Finally, this study shows that the UAV is a very useful platform for obtaining aerial photographs that can subsequently be used for photogrammetric mapping and other applications.
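
    The RMSE against independent check points used above is computed in the usual way; the sketch below illustrates it with hypothetical coordinate differences between the photogrammetric model and total station check points.

      import numpy as np

      def rmse(errors):
          errors = np.asarray(errors, dtype=float)
          return np.sqrt(np.mean(errors ** 2))

      # Hypothetical differences (m) at four check points for one flying height.
      dx = [0.12, -0.08, 0.20, -0.15]
      dy = [0.05, 0.18, -0.11, 0.09]
      dz = [0.25, -0.30, 0.22, -0.18]

      print(f"planimetric RMSE = {np.hypot(rmse(dx), rmse(dy)):.2f} m")
      print(f"vertical RMSE    = {rmse(dz):.2f} m")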

  17. Hidden Markov model and nuisance attribute projection based bearing performance degradation assessment

    NASA Astrophysics Data System (ADS)

    Jiang, Huiming; Chen, Jin; Dong, Guangming

    2016-05-01

    The hidden Markov model (HMM) has been widely applied in bearing performance degradation assessment. As a machine learning-based model, its accuracy depends on the sensitivity of the features used to estimate the degradation performance of bearings. It is challenging to extract effective features that are not influenced by qualities or attributes uncorrelated with the bearing degradation condition. In this paper, a bearing performance degradation assessment method based on HMM and nuisance attribute projection (NAP) is proposed. NAP can filter out the effect of nuisance attributes in the feature space through projection. The new feature space projected by NAP is more sensitive to bearing health changes and barely influenced by other interferences occurring in the operating conditions. To verify the effectiveness of the proposed method, two different experimental databases are utilized. The results show that the combination of HMM and NAP can effectively improve the accuracy and robustness of the bearing performance degradation assessment system.

  18. Using Covariance Analysis to Assess Pointing Performance

    NASA Technical Reports Server (NTRS)

    Bayard, David; Kang, Bryan

    2009-01-01

    A Pointing Covariance Analysis Tool (PCAT) has been developed for evaluating the expected performance of the pointing control system for NASA's Space Interferometry Mission (SIM). The SIM pointing control system is very complex, consisting of multiple feedback and feedforward loops, and operating with multiple latencies and data rates. The SIM pointing problem is particularly challenging due to the effects of thermomechanical drifts in concert with the long camera exposures needed to image dim stars. Other pointing error sources include sensor noises, mechanical vibrations, and errors in the feedforward signals. PCAT models the effects of finite camera exposures and all other error sources using linear system elements. This allows the pointing analysis to be performed using linear covariance analysis. PCAT propagates the error covariance using a Lyapunov equation associated with time-varying discrete and continuous-time system matrices. Unlike Monte Carlo analysis, which could involve thousands of computational runs for a single assessment, the PCAT analysis performs the same assessment in a single run. This capability facilitates the analysis of parametric studies, design trades, and "what-if" scenarios for quickly evaluating and optimizing the control system architecture and design.
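
    PCAT itself is not publicly documented in this abstract, so only the generic idea can be illustrated: propagating an error covariance through a discrete-time model, P(k+1) = A P A^T + Q, in a single deterministic pass rather than by Monte Carlo sampling. The two-state model and all matrices below are hypothetical and do not represent the SIM pointing loops.

      import numpy as np

      # Toy two-state pointing model (angle, rate); matrices are invented.
      dt = 0.1
      A = np.array([[1.0, dt],
                    [0.0, 1.0]])     # state transition
      Q = np.diag([1e-8, 1e-6])      # process noise (e.g., drift, vibration)
      P = np.diag([1e-4, 1e-4])      # initial pointing error covariance

      # Discrete Lyapunov-style propagation over 100 s in a single run.
      for _ in range(1000):
          P = A @ P @ A.T + Q

      print("1-sigma pointing error [rad]:", float(np.sqrt(P[0, 0])))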

  19. Methods in Use for Sensitivity Analysis, Uncertainty Evaluation, and Target Accuracy Assessment

    SciTech Connect

    G. Palmiotti; M. Salvatores; G. Aliberti

    2007-10-01

    Sensitivity coefficients can be used for different objectives such as uncertainty estimates, design optimization, determination of target accuracy requirements, adjustment of input parameters, and evaluations of the representativity of an experiment with respect to a reference design configuration. In this paper the theory, based on the adjoint approach, that is implemented in the ERANOS fast reactor code system is presented along with some unique tools and features related to specific types of problems as is the case for nuclide transmutation, reactivity loss during the cycle, decay heat, neutron source associated with fuel fabrication, and experiment representativity.

  20. Accuracy assessment of the ERP prediction method based on analysis of 100-year ERP series

    NASA Astrophysics Data System (ADS)

    Malkin, Z.; Tissen, V. M.

    2012-12-01

    A new method has been developed at the Siberian Research Institute of Metrology (SNIIM) for highly accurate prediction of UT1 and Pole motion (PM). In this study, a detailed comparison was made of real-time UT1 predictions made in 2006-2011 and PM predictions made in 2009-2011 using the SNIIM method with simultaneous predictions computed at the International Earth Rotation and Reference Systems Service (IERS), USNO. The results show that the proposed method provides better accuracy at different prediction lengths.

  1. An Automated Grass-Based Procedure to Assess the Geometrical Accuracy of the Openstreetmap Paris Road Network

    NASA Astrophysics Data System (ADS)

    Brovelli, M. A.; Minghini, M.; Molinari, M. E.

    2016-06-01

    OpenStreetMap (OSM) is the largest spatial database of the world. One of the most frequently occurring geospatial elements within this database is the road network, whose quality is crucial for applications such as routing and navigation. Several methods have been proposed for the assessment of OSM road network quality; however, they are often tightly coupled to the characteristics of the authoritative dataset involved in the comparison. This makes it hard to replicate and extend these methods. This study relies on an automated procedure which was recently developed for comparing OSM with any road network dataset. It is based on three Python modules for the open source GRASS GIS software and provides measures of OSM road network spatial accuracy and completeness. Provided that the user is familiar with the authoritative dataset used, the values of the parameters involved can be adjusted thanks to the flexibility of the procedure. The method is applied to assess the quality of the Paris OSM road network dataset through a comparison against the French official dataset provided by the French National Institute of Geographic and Forest Information (IGN). The results show that the Paris OSM road network has both high completeness and high spatial accuracy. It has a greater length than the IGN road network, and is found to be suitable for applications requiring spatial accuracies up to 5-6 m. Also, the results confirm the flexibility of the procedure for supporting users in carrying out their own comparisons between OSM and reference road datasets.
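
    The GRASS GIS modules themselves are not reproduced here; as a simplified illustration of the buffer-based idea behind such completeness checks, the sketch below measures how much of a hypothetical OSM network falls within a tolerance buffer around a hypothetical reference network using the shapely library.

      # Simplified illustration of a buffer-based completeness check; the geometries are invented
      # and this is not the procedure or the GRASS modules used in the paper.
      from shapely.geometry import LineString
      from shapely.ops import unary_union

      osm_roads = [LineString([(0, 0), (100, 0)]), LineString([(0, 5), (0, 80)])]
      ref_roads = [LineString([(0, 1), (100, 1)])]        # authoritative network

      tolerance = 6.0                                      # metres, cf. the 5-6 m figure above
      ref_buffer = unary_union([r.buffer(tolerance) for r in ref_roads])

      matched = sum(r.intersection(ref_buffer).length for r in osm_roads)
      total = sum(r.length for r in osm_roads)
      print(f"share of OSM length within {tolerance} m of the reference: {matched / total:.1%}")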

  2. Phase segmentation of X-ray computer tomography rock images using machine learning techniques: an accuracy and performance study

    NASA Astrophysics Data System (ADS)

    Chauhan, Swarup; Rühaak, Wolfram; Anbergen, Hauke; Kabdenov, Alen; Freise, Marcus; Wille, Thorsten; Sass, Ingo

    2016-07-01

    Performance and accuracy of machine learning techniques to segment rock grains, matrix and pore voxels from a 3-D volume of X-ray tomographic (XCT) grayscale rock images were evaluated. The segmentation and classification capability of unsupervised (k-means, fuzzy c-means, self-organized maps), supervised (artificial neural networks, least-squares support vector machines) and ensemble classifiers (bragging and boosting) were tested using XCT images of andesite volcanic rock, Berea sandstone, Rotliegend sandstone and a synthetic sample. The averaged porosity obtained for andesite (15.8 ± 2.5 %), Berea sandstone (16.3 ± 2.6 %), Rotliegend sandstone (13.4 ± 7.4 %) and the synthetic sample (48.3 ± 13.3 %) is in very good agreement with the respective laboratory measurement data and varies by a factor of 0.2. The k-means algorithm is the fastest of the machine learning algorithms tested, whereas a least-squares support vector machine is the most computationally expensive. The metrics entropy, purity, mean square root error and the receiver operating characteristic curve, together with 10 K-fold cross-validation, were used to determine the accuracy of the unsupervised, supervised and ensemble classifier techniques. In general, the accuracy was found to be largely affected by the feature vector selection scheme. As it is always a trade-off between performance and accuracy, it is difficult to isolate one particular machine learning algorithm which is best suited for the complex phase segmentation problem. Therefore, our investigation provides parameters that can help in selecting the appropriate machine learning techniques for phase segmentation.
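
    One of the unsupervised methods listed above is k-means. The sketch below applies scikit-learn's KMeans to a synthetic grayscale volume and derives a porosity estimate from the lowest-intensity cluster; it only illustrates the general workflow and is not the paper's implementation or data.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      volume = rng.normal(loc=0.5, scale=0.15, size=(40, 40, 40))   # fake grayscale voxels

      # Cluster voxel intensities into three phases (pores, matrix, grains).
      labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(volume.reshape(-1, 1))

      # Assume the cluster with the lowest mean intensity corresponds to pore space.
      cluster_means = [volume.reshape(-1)[labels == k].mean() for k in range(3)]
      porosity = np.mean(labels == np.argmin(cluster_means))
      print(f"estimated porosity: {porosity:.1%}")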

  3. Pulsed Lidar Performance/Technical Maturity Assessment

    NASA Technical Reports Server (NTRS)

    Gimmestad, Gary G.; West, Leanne L.; Wood, Jack W.; Frehlich, Rod

    2004-01-01

    This report describes the results of investigations performed by the Georgia Tech Research Institute (GTRI) and the National Center for Atmospheric Research (NCAR) under a task entitled 'Pulsed Lidar Performance/Technical Maturity Assessment' funded by the Crew Systems Branch of the Airborne Systems Competency at the NASA Langley Research Center. The investigations included two tasks, 1.1(a) and 1.1(b). The tasks discussed in this report are in support of the NASA Virtual Airspace Modeling and Simulation (VAMS) program and are designed to evaluate a pulsed lidar that will be required for active wake vortex avoidance solutions. The Coherent Technologies, Inc. (CTI) WindTracer LIDAR is an eye-safe, 2-micron, coherent, pulsed Doppler lidar with wake tracking capability. The actual performance of the WindTracer system was to be quantified. In addition, the sensor performance has been assessed and modeled, and the models have been included in simulation efforts. The WindTracer LIDAR was purchased by the Federal Aviation Administration (FAA) for use in near-term field data collection efforts as part of a joint NASA/FAA wake vortex research program. In the joint research program, a minimum common wake and weather data collection platform will be defined. NASA Langley will use the field data to support wake model development and operational concept investigation in support of the VAMS project, where the ultimate goal is to improve airport capacity and safety. Task 1.1(a) was performed by NCAR in Boulder, Colorado, to analyze the lidar system and determine its performance and capabilities based on results from simulated lidar data with analytic wake vortex models provided by NASA; these results were then compared to the vendor's claims for the operational specifications of the lidar. Task 1.1(a) is described in Section 3, including the vortex model, lidar parameters and simulations, and results for both detection and tracking of wake vortices generated by Boeing 737s and 747s. Task 1

  4. Measurement issues in assessing employee performance: A generalizability theory approach

    SciTech Connect

    Stephenson, B.O.

    1996-08-01

    Increasingly, organizations are assessing employee performance through the use of rating instruments employed in the context of varied data collection strategies. For example, the focus may be on obtaining multiple perspectives regarding employee performance (360° evaluation). From the standpoint of evaluating managers, upward assessments and "peer to peer" evaluations are perhaps two of the more common examples of such a multiple perspective approach. Unfortunately, it is probably fair to say that the increased interest and use of such data collection strategies have not been accompanied by a corresponding interest in addressing both validity and reliability concerns that have traditionally been associated with other forms of employee assessment (e.g., testing, assessment centers, structured interviews). As a consequence, many organizations may be basing decisions upon information collected under less than ideal measurement conditions. To the extent that such conditions produce unreliable measurements, the process may be both dysfunctional to the organization and/or unfair to the individual(s) being evaluated. Conversely, the establishment of reliable and valid measurement processes may in itself support the utilization of results in pursuit of organizational goals and enhance the credibility of the measurement process (see McEvoy (1990), who found the acceptance of subordinate ratings to be related to perceived accuracy and fairness of the measurement process). The present paper discusses a recent "peer to peer" evaluation conducted in our organization. The intent is to focus on the design of the study and present a Generalizability Theory (GT) approach to assessing the overall quality of the data collection strategy, along with suggestions for improving future designs. 9 refs., 3 tabs.

  5. An assessment of the direction-finding accuracy of bat biosonar beampatterns.

    PubMed

    Gilani, Uzair S; Müller, Rolf

    2016-02-01

    In the biosonar systems of bats, emitted acoustic energy and receiver sensitivity are distributed over direction and frequency through beampattern functions that have diverse and often complicated geometries. This complexity could be used by the animals to determine the direction of incoming sounds based on spectral signatures. The present study has investigated how well bat biosonar beampatterns are suited for direction finding using a measure of the smallest estimator variance that is possible for a given direction [Cramér-Rao lower bound (CRLB)]. CRLB values were estimated for numerical beampattern estimates derived from 330 individual shape samples, 157 noseleaves (used for emission), and 173 outer ears (pinnae). At an assumed 60 dB signal-to-noise ratio, the average value of the CRLB was 3.9°, which is similar to previous behavioral findings. The distribution of the CRLBs in individual beampatterns had a positive skew, indicating the existence of regions where a given beampattern does not support high accuracy. The highest supported accuracies were for direction finding in elevation (with the exception of phyllostomid emission patterns). No large, obvious differences in the CRLB (greater than 2° in the mean) were found between the investigated major taxonomic groups, suggesting that different bat species have access to similar direction-finding information. PMID:26936541
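
    The study's CRLB computation runs over measured beampatterns across many directions and frequencies; only the one-dimensional, single-measurement idea can be sketched briefly. Assuming an observation y = b(theta) + n with Gaussian noise, the Fisher information is (db/dtheta)^2 / sigma^2 and the CRLB is its inverse. The Gaussian beampattern and all numbers below are hypothetical.

      import numpy as np

      def beam_gain(theta_deg, width_deg=30.0):
          """Hypothetical Gaussian beampattern (linear gain) centred on 0 degrees."""
          return np.exp(-0.5 * (theta_deg / width_deg) ** 2)

      theta = 15.0                                       # true direction (degrees)
      snr_db = 60.0
      sigma = beam_gain(theta) / 10 ** (snr_db / 20)     # noise std for the assumed SNR

      # CRLB for y = b(theta) + n, n ~ N(0, sigma^2): var(theta_hat) >= sigma^2 / (db/dtheta)^2
      d = 1e-3
      slope = (beam_gain(theta + d) - beam_gain(theta - d)) / (2 * d)
      crlb_deg = np.sqrt(sigma ** 2 / slope ** 2)
      print(f"smallest achievable 1-sigma direction error: {crlb_deg:.3f} deg")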

  6. Mapping from ASTER stereo image data: DEM validation and accuracy assessment

    NASA Astrophysics Data System (ADS)

    Hirano, Akira; Welch, Roy; Lang, Harold

    The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) on-board the National Aeronautics and Space Administration's (NASA's) Terra spacecraft provides along-track digital stereo image data at 15-m resolution. As part of ASTER digital elevation model (DEM) accuracy evaluation efforts by the US/Japan ASTER Science Team, stereo image data for four study sites around the world have been employed to validate prelaunch estimates of heighting accuracy. Automated stereocorrelation procedures were implemented using the Desktop Mapping System (DMS) software on a personal computer to derive DEMs with 30- to 150-m postings. Results indicate that a root-mean-square error (RMSE) in elevation between ±7 and ±15 m can be achieved with ASTER stereo image data of good quality. An evaluation of an ASTER DEM data product produced at the US Geological Survey (USGS) EROS Data Center (EDC) yielded an RMSE of ±8.6 m. Overall, the ability to extract elevations from ASTER stereopairs using stereocorrelation techniques meets expectations.

  7. Accuracy Assessment of Geostationary-Earth-Orbit with Simplified Perturbations Models

    NASA Astrophysics Data System (ADS)

    Ma, Lihua; Xu, Xiaojun; Pang, Feng

    2016-06-01

    A two-line element set (TLE) is a data format encoding orbital elements of an Earth-orbiting object for a given epoch. Using a suitable prediction formula, the motion state of the object can be obtained at any time. The TLE data representation is specific to the simplified perturbations models, so any algorithm using a TLE as a data source must implement one of these models to correctly compute the state at a specific time. Accurate adjustment of the antenna direction at the earth station is key to satellite communications. With the TLE set, topocentric elevation and azimuth angles can be calculated. The accuracy of the perturbations models directly affects communication signal quality. Therefore, quantifying the error variations of the satellite orbits is meaningful. In this paper, the authors investigate the accuracy of the Geostationary-Earth-Orbit (GEO) with simplified perturbations models. The coordinate residuals of the simplified perturbations models reported in this paper can serve as a reference for engineers predicting satellite orbits with TLEs.
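
    The antenna-pointing step mentioned above, computing topocentric azimuth and elevation at a ground station, can be sketched once the satellite position has been propagated from the TLE (e.g., with SGP4) and rotated into Earth-fixed coordinates; that propagation is not shown here. Both the station and the GEO satellite position below are hypothetical.

      import numpy as np

      def geodetic_to_ecef(lat_deg, lon_deg, h_m):
          """WGS-84 geodetic coordinates to ECEF (metres)."""
          a, f = 6378137.0, 1 / 298.257223563
          e2 = f * (2 - f)
          lat, lon = np.radians(lat_deg), np.radians(lon_deg)
          n = a / np.sqrt(1 - e2 * np.sin(lat) ** 2)
          return np.array([(n + h_m) * np.cos(lat) * np.cos(lon),
                           (n + h_m) * np.cos(lat) * np.sin(lon),
                           (n * (1 - e2) + h_m) * np.sin(lat)])

      def azimuth_elevation(station_llh, sat_ecef_m):
          lat, lon = np.radians(station_llh[0]), np.radians(station_llh[1])
          rho = sat_ecef_m - geodetic_to_ecef(*station_llh)        # line-of-sight vector
          east = np.array([-np.sin(lon), np.cos(lon), 0.0])
          north = np.array([-np.sin(lat) * np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat)])
          up = np.array([np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)])
          e, n, u = rho @ east, rho @ north, rho @ up              # rotate into east-north-up
          az = np.degrees(np.arctan2(e, n)) % 360
          el = np.degrees(np.arctan2(u, np.hypot(e, n)))
          return az, el

      station = (39.9, 116.4, 50.0)            # hypothetical station (lat, lon, height in m)
      sat = np.array([-1.46e7, 3.96e7, 0.0])   # hypothetical GEO position in ECEF (m)
      az, el = azimuth_elevation(station, sat)
      print(f"azimuth = {az:.2f} deg, elevation = {el:.2f} deg")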

  8. Assessment of Classification Accuracies of SENTINEL-2 and LANDSAT-8 Data for Land Cover / Use Mapping

    NASA Astrophysics Data System (ADS)

    Hale Topaloğlu, Raziye; Sertel, Elif; Musaoğlu, Nebiye

    2016-06-01

    This study aims to compare classification accuracies of land cover/use maps created from Sentinel-2 and Landsat-8 data. The Istanbul metropolitan area of Turkey, with a population of around 14 million and diverse landscape characteristics, was selected as the study area. Water, forest, agricultural areas, grasslands, transport network, urban, airport-industrial units and barren land-mine land cover/use classes adapted from the CORINE nomenclature were used as the main land cover/use classes to identify. To fulfil the aims of this research, recently acquired Sentinel-2 (08/02/2016) and Landsat-8 (22/02/2016) images of Istanbul were obtained, and image pre-processing steps such as atmospheric and geometric correction were applied. Both Sentinel-2 and Landsat-8 images were resampled to a 30 m pixel size after geometric correction, and similar spectral bands for both satellites were selected to create a similar base for these multi-sensor data. Maximum Likelihood (MLC) and Support Vector Machine (SVM) supervised classification methods were applied to both data sets to accurately identify eight different land cover/use classes. Error matrices were created using the same reference points for the Sentinel-2 and Landsat-8 classifications. After the classification, accuracy results were compared to find the best approach for creating a current land cover/use map of the region. The results of the MLC and SVM classification methods were compared for both images.
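
    The error-matrix comparison described above reduces to computing overall accuracy (and usually a kappa coefficient) from confusion matrices built on the same reference points. The sketch below uses invented three-class matrices, not the study's results.

      import numpy as np

      def overall_accuracy(confusion):
          confusion = np.asarray(confusion, dtype=float)
          return np.trace(confusion) / confusion.sum()

      def kappa(confusion):
          """Cohen's kappa, a common companion to overall accuracy for error matrices."""
          c = np.asarray(confusion, dtype=float)
          n = c.sum()
          po = np.trace(c) / n
          pe = (c.sum(axis=0) * c.sum(axis=1)).sum() / n ** 2
          return (po - pe) / (1 - pe)

      # rows = reference class, columns = mapped class (e.g., water, forest, urban); counts invented
      sentinel2_svm = [[48, 1, 1], [2, 45, 3], [0, 4, 46]]
      landsat8_svm = [[47, 2, 1], [3, 42, 5], [1, 6, 43]]

      for name, cm in [("Sentinel-2 SVM", sentinel2_svm), ("Landsat-8 SVM", landsat8_svm)]:
          print(f"{name}: OA = {overall_accuracy(cm):.2%}, kappa = {kappa(cm):.2f}")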

  9. Methodology issues concerning the accuracy of kinematic data collection and analysis using the ariel performance analysis system

    NASA Technical Reports Server (NTRS)

    Wilmington, R. P.; Klute, Glenn K. (Editor); Carroll, Amy E. (Editor); Stuart, Mark A. (Editor); Poliner, Jeff (Editor); Rajulu, Sudhakar (Editor); Stanush, Julie (Editor)

    1992-01-01

    Kinematics, the study of motion exclusive of the influences of mass and force, is one of the primary methods used for the analysis of human biomechanical systems as well as other types of mechanical systems. The Anthropometry and Biomechanics Laboratory (ABL) in the Crew Interface Analysis section of the Man-Systems Division performs both human body kinematics as well as mechanical system kinematics using the Ariel Performance Analysis System (APAS). The APAS supports both analysis of analog signals (e.g. force plate data collection) as well as digitization and analysis of video data. The current evaluations address several methodology issues concerning the accuracy of the kinematic data collection and analysis used in the ABL. This document describes a series of evaluations performed to gain quantitative data pertaining to position and constant angular velocity movements under several operating conditions. Two-dimensional as well as three-dimensional data collection and analyses were completed in a controlled laboratory environment using typical hardware setups. In addition, an evaluation was performed to determine the accuracy impact due to a single-axis camera offset. Segment length and positional data exhibited errors within 3 percent when using three-dimensional analysis and yielded errors within 8 percent through two-dimensional analysis (Direct Linear Software). Peak angular velocities displayed errors within 6 percent through three-dimensional analyses and exhibited errors of 12 percent when using two-dimensional analysis (Direct Linear Software). The specific results from this series of evaluations and their impacts on the methodology issues of kinematic data collection and analyses are presented in detail. The accuracy levels observed in these evaluations are also presented.

  10. Measurement accuracy and Cerenkov removal for high performance, high spatial resolution scintillation dosimetry

    SciTech Connect

    Archambault, Louis; Beddar, A. Sam; Gingras, Luc

    2006-01-15

    With highly conformal radiation therapy techniques such as intensity-modulated radiation therapy, radiosurgery, and tomotherapy becoming more common in clinical practice, the use of these narrow beams requires a higher level of precision in quality assurance and dosimetry. Plastic scintillators with their water equivalence, energy independence, and dose rate linearity have been shown to possess excellent qualities that suit the most complex and demanding radiation therapy treatment plans. The primary disadvantage of plastic scintillators is the presence of Cerenkov radiation generated in the light guide, which results in an undesired stem effect. Several techniques have been proposed to minimize this effect. In this study, we compared three such techniques--background subtraction, simple filtering, and chromatic removal--in terms of reproducibility and dose accuracy as gauges of their ability to remove the Cerenkov stem effect from the dose signal. The dosimeter used in this study comprised a 6-mm³ plastic scintillating fiber probe, an optical fiber, and a color charge-coupled device camera. The whole system was shown to be linear and the total light collected by the camera was reproducible to within 0.31% for 5-s integration time. Background subtraction and chromatic removal were both found to be suitable for precise dose evaluation, with average absolute dose discrepancies of 0.52% and 0.67%, respectively, from ion chamber values. Background subtraction required two optical fibers, but chromatic removal used only one, thereby preventing possible measurement artifacts when a strong dose gradient was perpendicular to the optical fiber. Our findings showed that a plastic scintillation dosimeter could be made free of the effect of Cerenkov radiation.

  11. Accuracy of ultrasound and oral cholecystography in assessing the number and size of gallstones: implications for non-surgical therapy.

    PubMed

    Brakel, K; Laméris, J S; Nijs, H G; Ginai, A Z; Terpstra, O T

    1992-09-01

    Prior to non-surgical therapy of gallstones it is important to assess their number and size. In order to evaluate the accuracy of ultrasound (US) and oral cholecystography (OCG) in counting and measuring gallstones, a prospective blind study was conducted to compare the results of US (n = 99) and OCG (n = 36), either alone or in combination (n = 34), with the number and size of gallstones retrieved after cholecystectomy. The number of gallstones was accurately estimated by US and OCG in 74% and 69% of the cases, respectively. In assessing the presence of up to three, five or 10 gallstones both US and OCG proved reliable. In measuring the size of gallstones, there was 19% accuracy with US compared with only 3% with OCG. With an accepted measurement error of 3 mm these values increased to 80% for US and 44% for OCG. US proved more reliable than OCG in discriminating gallstones smaller or larger than 10 mm and smaller or larger than 20 mm, but with US, detection of gallstones larger than 30 mm was problematic. Both US and OCG underestimated gallstone size. The combination of both techniques did not significantly improve the assessment of either number or size of gallstones compared with the results obtained with US or OCG alone. It is concluded that (1) both US and OCG have some limitations in assessing the number and size of gallstones, (2) the combination of both examinations does not improve accuracy, and (3) patient selection for non-surgical treatment of gallstones can be started by US alone. PMID:1393414

  12. Exploring Proficiency-Based vs. Performance-Based Items with Elicited Imitation Assessment

    ERIC Educational Resources Information Center

    Cox, Troy L.; Bown, Jennifer; Burdis, Jacob

    2015-01-01

    This study investigates the effect of proficiency- vs. performance-based elicited imitation (EI) assessment. EI requires test-takers to repeat sentences in the target language. The accuracy at which test-takers are able to repeat sentences highly correlates with test-takers' language proficiency. However, in EI, the factors that render an item…

  13. Building Confidence in LLW Performance Assessments - 13386

    SciTech Connect

    Rustick, Joseph H.; Kosson, David S.; Krahn, Steven L.; Clarke, James H.

    2013-07-01

    The performance assessment process and incorporated input assumptions for four active and one planned DOE disposal sites were analyzed using a systems approach. The sites selected were the Savannah River E-Area Slit and Engineered Trenches, Hanford Integrated Disposal Facility, Idaho Radioactive Waste Management Complex, Oak Ridge Environmental Management Waste Management Facility, and Nevada National Security Site Area 5. Each disposal facility evaluation incorporated three overall system components: (1) site characteristics (climate, geology, geochemistry, etc.), (2) waste properties (waste form and package), and (3) engineered barrier designs (cover system, liner system). Site conceptual models were also analyzed to identify the main risk drivers and risk insights controlling performance for each disposal facility. (authors)

  14. Assessing the accuracy and repeatability of automated photogrammetrically generated digital surface models from unmanned aerial system imagery

    NASA Astrophysics Data System (ADS)

    Chavis, Christopher

    Using commercial digital cameras in conjunction with Unmanned Aerial Systems (UAS) to generate 3-D Digital Surface Models (DSMs) and orthomosaics is emerging as a cost-effective alternative to Light Detection and Ranging (LiDAR). Powerful software applications such as Pix4D and APS can automate the generation of DSM and orthomosaic products from a handful of inputs. However, the accuracy of these models is relatively untested. The objectives of this study were to generate multiple DSM and orthomosaic pairs of the same area using Pix4D and APS from flights of imagery collected with a lightweight UAS. The accuracy of each individual DSM was assessed in addition to the consistency of the method to model one location over a period of time. Finally, this study determined if the DSMs automatically generated using lightweight UAS and commercial digital cameras could be used for detecting changes in elevation and at what scale. Accuracy was determined by comparing DSMs to a series of reference points collected with survey grade GPS. Other GPS points were also used as control points to georeference the products within Pix4D and APS. The effectiveness of the products for change detection was assessed through image differencing and observance of artificially induced, known elevation changes. The vertical accuracy with the optimal data and model is ≈ 25 cm and the highest consistency over repeat flights is a standard deviation of ≈ 5 cm. Elevation change detection based on such UAS imagery and DSM models should be viable for detecting infrastructure change in urban or suburban environments with little dense canopy vegetation.
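
    Change detection by differencing two DSMs, as evaluated above, is commonly thresholded against the combined vertical uncertainty of the two surfaces. The sketch below illustrates that idea with synthetic grids, an artificially induced change, and an assumed 25 cm per-DSM RMSE (the figure quoted above); it does not use the study's data.

      import numpy as np

      rng = np.random.default_rng(1)
      base = rng.normal(100.0, 2.0, size=(50, 50))                 # synthetic "true" terrain
      dsm_before = base + rng.normal(0.0, 0.05, size=base.shape)   # repeat-flight noise
      dsm_after = base + rng.normal(0.0, 0.05, size=base.shape)
      dsm_after[20:25, 20:25] += 0.8                               # induced elevation change (m)

      diff = dsm_after - dsm_before
      sigma = 0.25                                                 # assumed vertical RMSE of each DSM (m)
      threshold = 1.96 * np.hypot(sigma, sigma)                    # 95% level of detection

      changed = np.abs(diff) > threshold
      print(f"detection threshold = {threshold:.2f} m, cells flagged as changed = {int(changed.sum())}")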

  15. Monitoring Rater Performance over Time: A Framework for Detecting Differential Accuracy and Differential Scale Category Use

    ERIC Educational Resources Information Center

    Myford, Carol M.; Wolfe, Edward W.

    2009-01-01

    In this study, we describe a framework for monitoring rater performance over time. We present several statistical indices to identify raters whose standards drift and explain how to use those indices operationally. To illustrate the use of the framework, we analyzed rating data from the 2002 Advanced Placement English Literature and Composition…

  16. Effects of Familiarity with a Melody Prior to Instruction on Children's Piano Performance Accuracy

    ERIC Educational Resources Information Center

    Frewen, Katherine Goins

    2010-01-01

    The main purpose of this study was to examine the effects of familiarity with the sound of a melody on children's performance of the melody. Children in kindergarten through fourth grade (N = 97) with no previous formal instrumental instruction were taught to play a four-measure melody on a keyboard during an individual instruction session. Before…

  17. Assessment of accuracy of adopted centre of mass corrections for the Etalon geodetic satellites

    NASA Astrophysics Data System (ADS)

    Appleby, Graham; Dunn, Peter; Otsubo, Toshimichi; Rodriguez, Jose

    2016-04-01

    Accurate centre-of-mass corrections are key parameters in the analysis of satellite laser ranging observations. In order to meet current accuracy requirements, the vector from the reflection point of a laser retroreflector array to the centre of mass of the orbiting spacecraft must be known with mm-level accuracy. In general, the centre-of-mass correction will be dependent on the characteristics of the target (geometry, construction materials, type of retroreflectors), the hardware employed by the tracking station (laser system, detector type), the intensity of the returned laser pulses, and the post-processing strategy employed to reduce the observations [1]. For the geodetic targets used by the ILRS to produce the SLR contribution to the ITRF, the LAGEOS and Etalon satellite pairs, there are centre-of-mass correction tables available for each tracking station [2]. These values are based on theoretical considerations, empirical determination of the optical response functions of each satellite, and knowledge of the tracking technology and return intensity employed [1]. Here we present results that put into question the accuracy of some of the current values for the centre-of-mass corrections of the Etalon satellites. We have computed weekly reference frame solutions using LAGEOS and Etalon observations for the period 1996-2014, estimating range bias parameters for each satellite type along with station coordinates. Analysis of the range bias time series reveals an unexplained, cm-level positive bias for the Etalon satellites in the case of most stations operating at high energy return levels. The time series of tracking stations that have undergone a transition from different modes of operation provide the evidence pointing to an inadequate centre-of-mass modelling. [1] Otsubo, T., and G.M. Appleby, System-dependent centre-of-mass correction for spherical geodetic satellites, J Geophys. Res., 108(B4), 2201, 2003 [2] Appleby, G.M., and T. Otsubo, Centre of Mass

  18. Accuracy assessment of photogrammetric digital elevation models generated for the Schultz Fire burn area

    NASA Astrophysics Data System (ADS)

    Muise, Danna K.

    This paper evaluates the accuracy of two digital photogrammetric software programs (ERDAS Imagine LPS and PCI Geomatica OrthoEngine) with respect to high-resolution terrain modeling in a complex topographic setting affected by fire and flooding. The site investigated is the 2010 Schultz Fire burn area, situated on the eastern edge of the San Francisco Peaks approximately 10 km northeast of Flagstaff, Arizona. Here, the fire coupled with monsoon rains typical of northern Arizona drastically altered the terrain of the steep mountainous slopes and residential areas below the burn area. To quantify these changes, high resolution (1 m and 3 m) digital elevation models (DEMs) were generated of the burn area using color stereoscopic aerial photographs taken at a scale of approximately 1:12000. Using a combination of pre-marked and post-marked ground control points (GCPs), I first used ERDAS Imagine LPS to generate a 3 m DEM covering 8365 ha of the affected area. This data was then compared to a reference DEM (USGS 10 m) to evaluate the accuracy of the resultant DEM. Findings were then divided into blunders (errors) and bias (slight differences) and further analyzed to determine if different factors (elevation, slope, aspect and burn severity) affected the accuracy of the DEM. Results indicated that both blunders and bias increased with an increase in slope, elevation and burn severity. It was also found that southern facing slopes contained the highest amount of bias while northern facing slopes contained the highest proportion of blunders. Further investigations compared a 1 m DEM generated using ERDAS Imagine LPS with a 1 m DEM generated using PCI Geomatica OrthoEngine for a specific region of the burn area. This area was limited to the overlap of two images due to OrthoEngine requiring at least three GCPs to be located in the overlap of the imagery. Results indicated that although LPS produced a less accurate DEM, it was much more flexible than OrthoEngine. It was also

  19. A SUB-PIXEL ACCURACY ASSESSMENT FRAMEWORK FOR DETERMINING LANDSAT TM DERIVED IMPERVIOUS SURFACE ESTIMATES.

    EPA Science Inventory

    The amount of impervious surface in a watershed is a landscape indicator integrating a number of concurrent interactions that influence a watershed's hydrology. Remote sensing data and techniques are viable tools to assess anthropogenic impervious surfaces. However a fundamental ...

  20. Consideration of environmental change in performance assessments.

    PubMed

    Pinedo, P; Thorne, M; Egan, M; Calvez, M; Kautsky, U

    2005-01-01

    Depending on the particular circumstances in which a post-closure performance assessment of a radioactive waste repository is made, it may be appropriate to follow simple or more complex approaches in characterising the biosphere. Several different Example Reference Biospheres were explored in BIOMASS Theme 1 to address a range of issues that arise. Here, consideration is given to Example Reference Biospheres relevant to representing the implications of changes that may occur within the biosphere system during the period over which releases of radionuclides from a disposal facility might take place. Mechanisms of change considered include those extrinsic and intrinsic to the system of interest. An overall methodology for incorporating environmental change into assessments is proposed. This includes screening of primary mechanisms of change; identification of possible time sequences of change; development of a coherent description of the regional landscape response for each time sequence; integration of source term and geosphere-biosphere interface information; identification and description of one or more time series of assessment biospheres; and evaluation of the advantages and disadvantages of simulating the effects of sequences of biosphere systems and the transitions between them, or of defining a set of biosphere systems to be represented individually in a non-sequential analysis. The usefulness of the methodology is explored in two site-specific examples and one generic example. PMID:16198459

  1. Performance assessment task team progress report

    SciTech Connect

    Wood, D.E.; Curl, R.U.; Armstrong, D.R.; Cook, J.R.; Dolenc, M.R.; Kocher, D.C.; Owens, K.W.; Regnier, E.P.; Roles, G.W.; Seitz, R.R.

    1994-05-01

    The U.S. Department of Energy (DOE) Headquarters EM-35 established a Performance Assessment Task Team (referred to as the Team) to integrate the activities of the sites that are preparing performance assessments (PAs) for disposal of new low-level waste, as required by Chapter III of DOE Order 5820.2A, "Low-Level Waste Management". The intent of the Team is to achieve a degree of consistency among these PAs as the analyses proceed at the disposal sites. The Team's purpose is to recommend policy and guidance to the DOE on issues that impact the PAs, including release scenarios and parameters, so that the approaches are as consistent as possible across the DOE complex. The Team has identified issues requiring attention and developed discussion papers for those issues. Some issues have been completed, and the recommendations are provided in this document. Other issues are still being discussed, and the status summaries are provided in this document. A major initiative was to establish a subteam to develop a set of test scenarios and parameters for benchmarking codes in use at the various sites. The activities of the Team are reported here through December 1993.

  2. Mathematical accuracy of Aztec land surveys assessed from records in the Codex Vergara.

    PubMed

    Jorge, María del Carmen; Williams, Barbara J; Garza-Hume, C E; Olvera, Arturo

    2011-09-13

    Land surveying in ancient states is documented not only for Eurasia but also for the Americas, amply attested by two Acolhua-Aztec pictorial manuscripts from the Valley of Mexico. The Codex Vergara and the Códice de Santa María Asunción consist of hundreds of drawings of agricultural fields that uniquely record surface areas as well as perimeter measurements. A previous study of the Codex Vergara examines how Acolhua-Aztecs determined field area by reconstructing their calculation procedures. Here we evaluate the accuracy of their area values using modern mathematics. The findings verify the overall mathematical validity of the codex records. Three-quarters of the areas are within 5% of the maximum possible value, and 85% are within 10%, which compares well with reported errors by Western surveyors that postdate Aztec-Acolhua work by several centuries. PMID:21876138

  3. Mathematical accuracy of Aztec land surveys assessed from records in the Codex Vergara

    PubMed Central

    Williams, Barbara J.; Garza-Hume, C. E.; Olvera, Arturo

    2011-01-01

    Land surveying in ancient states is documented not only for Eurasia but also for the Americas, amply attested by two Acolhua–Aztec pictorial manuscripts from the Valley of Mexico. The Codex Vergara and the Códice de Santa María Asunción consist of hundreds of drawings of agricultural fields that uniquely record surface areas as well as perimeter measurements. A previous study of the Codex Vergara examines how Acolhua–Aztecs determined field area by reconstructing their calculation procedures. Here we evaluate the accuracy of their area values using modern mathematics. The findings verify the overall mathematical validity of the codex records. Three-quarters of the areas are within 5% of the maximum possible value, and 85% are within 10%, which compares well with reported errors by Western surveyors that postdate Aztec–Acolhua work by several centuries. PMID:21876138
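
    The "maximum possible value" used above as a yardstick can be illustrated for a four-sided field: for given side lengths, the largest enclosable area is that of the cyclic quadrilateral, given by Brahmagupta's formula. The sketch below uses invented side lengths and a hypothetical recorded area; it does not reproduce the codex data or the paper's exact procedure.

      from math import sqrt

      def max_quadrilateral_area(a, b, c, d):
          """Brahmagupta's formula: area of the cyclic quadrilateral with sides a, b, c, d."""
          s = (a + b + c + d) / 2.0
          return sqrt((s - a) * (s - b) * (s - c) * (s - d))

      sides = (20.0, 30.0, 22.0, 28.0)     # hypothetical perimeter measurements (arbitrary units)
      recorded_area = 590.0                # hypothetical recorded area (same units squared)

      max_area = max_quadrilateral_area(*sides)
      print(f"maximum possible area = {max_area:.1f}")
      print(f"recorded area is {100 * recorded_area / max_area:.1f}% of the maximum")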

  4. Assessment of Required Accuracy of Digital Elevation Data for Hydrologic Modeling

    NASA Technical Reports Server (NTRS)

    Kenward, T.; Lettenmaier, D. P.

    1997-01-01

    The effect of vertical accuracy of Digital Elevation Models (DEMs) on hydrologic models is evaluated by comparing three DEMs and resulting hydrologic model predictions applied to a 7.2 sq km USDA-ARS watershed at Mahantango Creek, PA. The high-resolution (5 m) DEM was resampled to a 30 m resolution using a method that constrained the spatial structure of the elevations to be comparable with the USGS and SIR-C DEMs. The resulting 30 m DEM was used as the reference product for subsequent comparisons. Spatial fields of directly derived quantities, such as elevation differences, slope, and contributing area, were compared to the reference product, as were hydrologic model output fields derived using each of the three DEMs at the common 30 m spatial resolution.

  5. Geodetic and geophysical results from a Taiwan airborne gravity survey: Data reduction and accuracy assessment

    NASA Astrophysics Data System (ADS)

    Hwang, Cheinway; Hsiao, Yu-Shen; Shih, Hsuan-Chang; Yang, Ming; Chen, Kwo-Hwa; Forsberg, Rene; Olesen, Arne V.

    2007-04-01

    An airborne gravity survey was conducted over Taiwan using a LaCoste and Romberg (LCR) System II air-sea gravimeter with gravity and global positioning system (GPS) data sampled at 1 Hz. The aircraft trajectories were determined using a GPS network kinematic adjustment relative to eight GPS tracking stations. Long-wavelength errors in position are reduced when performing the numerical differentiation for velocity and acceleration. A procedure for computing the resolvable wavelength of error-free airborne gravimetry is derived. The accuracy requirements on position, velocity, and acceleration for a 1-mgal accuracy in gravity anomaly are derived. GPS will fulfill these requirements except for vertical acceleration. An iterative Gaussian filter is used to reduce errors in vertical acceleration. A compromise filter width balancing noise reduction and gravity detail is 150 s. The airborne gravity anomalies are compared with surface values, and large differences are found over high mountains where the gravity field is rough and surface data density is low. The root mean square (RMS) crossover differences before and after a bias-only adjustment are 4.92 and 2.88 mgal, the latter corresponding to a 2-mgal standard error in gravity anomaly. Repeatability analyses at two survey lines suggest that GPS is the dominating factor affecting the repeatability. Fourier transform and least-squares collocation are used for downward continuation, and the latter produces a better result. Two geoid models are computed, one using airborne and surface gravity data and the other using surface data only, and the former yields a better agreement with the GPS-derived geoidal heights. Bouguer anomalies derived from airborne gravity by a rigorous numerical integration reveal important tectonic features.
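
    The differentiation and smoothing steps described above can be sketched generically: differentiate the 1 Hz GPS heights twice and apply a Gaussian filter. The height series below is synthetic, the paper's iterative scheme is not reproduced, and treating the 150 s width as the Gaussian sigma in samples is an assumption about the paper's definition.

      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      dt = 1.0                                            # 1 Hz GPS sampling
      t = np.arange(0, 3600, dt)
      rng = np.random.default_rng(2)
      height = 4000 + 5 * np.sin(2 * np.pi * t / 600) + rng.normal(0, 0.05, t.size)

      velocity = np.gradient(height, dt)                  # first numerical derivative
      acceleration = np.gradient(velocity, dt)            # second numerical derivative

      smoothed_accel = gaussian_filter1d(acceleration, sigma=150.0 / dt)
      print("raw vs smoothed vertical-acceleration std (m/s^2):",
            float(acceleration.std()), float(smoothed_accel.std()))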

  6. Performance advantages of dynamically tuned gyroscopes in high accuracy spacecraft pointing and stabilization applications

    NASA Technical Reports Server (NTRS)

    Irvine, R.; Van Alstine, R.

    1979-01-01

    The paper compares and describes the advantages of dry tuned gyros over floated gyros for space applications. Attention is given to describing the Teledyne SDG-5 gyro and the second-generation NASA Standard Dry Rotor Inertial Reference Unit (DRIRU II). Certain tests which were conducted to evaluate the SDG-5 and DRIRU II for specific mission requirements are outlined, and their results are compared with published test results on other gyro types. Performance advantages are highlighted.

  7. Envisat Ocean Altimetry Performance Assessment and Cross-calibration

    PubMed Central

    Faugere, Yannice; Dorandeu, Joël; Lefevre, Fabien; Picot, Nicolas; Femenias, Pierre

    2006-01-01

    Nearly three years of Envisat altimetric observations over the ocean are available in Geophysical Data Record (GDR) products. The quality assessment of these data is routinely performed at the CLS Space Oceanography Division in the frame of the CNES Segment Sol Altimétrie et Orbitographie (SSALTO) and ESA French Processing and Archiving Center (F-PAC) activities. This paper presents the main results in terms of Envisat data quality: verification of data availability and validity, monitoring of the most relevant altimeter (ocean1 retracking) and radiometer parameters, assessment of the Envisat altimeter system performances. This includes a cross-calibration analysis of Envisat data with Jason-1, ERS-2 and T/P. Envisat data show good general quality. A good orbit quality and a low level of noise allow Envisat to reach the high level of accuracy of other precise missions such as T/P and Jason-1. Some issues raised in this paper, such as the gravity-induced orbit errors, will be solved in the next version of the GDR products. Others, such as the Envisat Mean Sea Level in the first year, still need further investigation.

  8. 43 CFR 3836.10 - Performing assessment work.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 43 Public Lands: Interior 2 2011-10-01 2011-10-01 false Performing assessment work. 3836.10... MANAGEMENT, DEPARTMENT OF THE INTERIOR MINERALS MANAGEMENT (3000) ANNUAL ASSESSMENT WORK REQUIREMENTS FOR MINING CLAIMS Performing Assessment Work § 3836.10 Performing assessment work....

  9. 43 CFR 3836.10 - Performing assessment work.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 43 Public Lands: Interior 2 2013-10-01 2013-10-01 false Performing assessment work. 3836.10... MANAGEMENT, DEPARTMENT OF THE INTERIOR MINERALS MANAGEMENT (3000) ANNUAL ASSESSMENT WORK REQUIREMENTS FOR MINING CLAIMS Performing Assessment Work § 3836.10 Performing assessment work....

  10. 43 CFR 3836.10 - Performing assessment work.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 43 Public Lands: Interior 2 2012-10-01 2012-10-01 false Performing assessment work. 3836.10... MANAGEMENT, DEPARTMENT OF THE INTERIOR MINERALS MANAGEMENT (3000) ANNUAL ASSESSMENT WORK REQUIREMENTS FOR MINING CLAIMS Performing Assessment Work § 3836.10 Performing assessment work....

  11. 43 CFR 3836.10 - Performing assessment work.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 43 Public Lands: Interior 2 2014-10-01 2014-10-01 false Performing assessment work. 3836.10... MANAGEMENT, DEPARTMENT OF THE INTERIOR MINERALS MANAGEMENT (3000) ANNUAL ASSESSMENT WORK REQUIREMENTS FOR MINING CLAIMS Performing Assessment Work § 3836.10 Performing assessment work....

  12. Assessment of Geometrical Accuracy of Multimodal Images Used for Treatment Planning in Stereotactic Radiotherapy and Radiosurgery: CT, MRI and PET

    SciTech Connect

    Garcia-Garduno, O. A.; Larraga-Gutierrez, J. M.; Celis, M. A.; Suarez-Campos, J. J.; Rodriguez-Villafuerte, M.; Martinez-Davalos, A.

    2006-09-08

    An acrylic phantom was designed and constructed to assess the geometrical accuracy of CT, MRI and PET images for stereotactic radiotherapy (SRT) and radiosurgery (SRS) applications. The phantom was adapted to each imaging modality with a specific tracer, and the resulting images were compared with CT images to measure the radial deviation between the reference marks in the phantom. It was found that for MRI the maximum mean deviation is 1.9 ± 0.2 mm compared to 2.4 ± 0.3 mm reported for PET. These results will be used for margin outlining in SRS and SRT treatment planning.

  13. Performance-based assessment of reconstructed images

    SciTech Connect

    Hanson, Kenneth

    2009-01-01

    During the early 90s, I engaged in a productive and enjoyable collaboration with Robert Wagner and his colleague, Kyle Myers. We explored the ramifications of the principle that the quality of an image should be assessed on the basis of how well it facilitates the performance of appropriate visual tasks. We applied this principle to algorithms used to reconstruct scenes from incomplete and/or noisy projection data. For binary visual tasks, we used both the conventional disk detection and a new, challenging task, inspired by the Rayleigh resolution criterion, of deciding whether an object was a blurred version of two dots or a bar. The results of human and machine observer tests were summarized with the detectability index based on the area under the ROC curve. We investigated a variety of reconstruction algorithms, including ART, with and without a nonnegativity constraint, and the MEMSYS3 algorithm. We concluded that performance of the Rayleigh task was optimized when the strength of the prior was near MEMSYS's default 'classic' value for both human and machine observers. A notable result was that the most-often-used metric of rms error in the reconstruction was not necessarily indicative of the value of a reconstructed image for the purpose of performing visual tasks.
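
    As an illustration (not taken from the paper), the detectability index summarizing such observer studies can be derived from the area under the ROC curve via the equal-variance Gaussian relation AUC = Φ(d_A/√2); the Python sketch below uses hypothetical rating data.

        import numpy as np
        from scipy.stats import norm

        def auc_from_ratings(signal_scores, noise_scores):
            # Nonparametric ROC area (Mann-Whitney statistic); ties count one half.
            s = np.asarray(signal_scores, float)[:, None]
            n = np.asarray(noise_scores, float)[None, :]
            return ((s > n).sum() + 0.5 * (s == n).sum()) / (s.size * n.size)

        def detectability_index(auc):
            # d_A from the equal-variance Gaussian relation AUC = Phi(d_A / sqrt(2)).
            return np.sqrt(2.0) * norm.ppf(auc)

        # Hypothetical observer ratings for signal-present vs. signal-absent trials.
        auc = auc_from_ratings([4, 5, 3, 5, 4], [2, 3, 1, 4, 2])
        print(round(auc, 3), round(detectability_index(auc), 3))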

  14. The Impact of Performance Level Misclassification on the Accuracy and Precision of Percent at Performance Level Measures

    ERIC Educational Resources Information Center

    Betebenner, Damian W.; Shang, Yi; Xiang, Yun; Zhao, Yan; Yue, Xiaohui

    2008-01-01

    No Child Left Behind (NCLB) performance mandates, embedded within state accountability systems, focus school AYP (adequate yearly progress) compliance squarely on the percentage of students at or above proficient. The singular importance of this quantity for decision-making purposes has initiated extensive research into percent proficient as a…

  15. Assessing liner performance using on-farm milk meters.

    PubMed

    Penry, J F; Leonardi, S; Upton, J; Thompson, P D; Reinemann, D J

    2016-08-01

    The primary objective of this study was to quantify and compare the interactive effects of liner compression, milking vacuum level, and pulsation settings on average milk flow rates for liners representing the range of liner compression of commercial liners. A secondary objective was to evaluate a methodology for assessing liner performance that can be applied on commercial dairy farms. Eight different liner types were assessed using 9 different combinations of milking system vacuum and pulsation settings applied to a herd of 80 cows, with vacuum and pulsation conditions changed daily for 36 d using a central composite experimental design. Liner response surfaces were created for the explanatory variables milking system vacuum (Vsystem) and pulsator ratio (PR) and the response variable average milk flow rate (AMF = total yield/total cups-on time) expressed as a fraction of the within-cow average flow rate for all treatments (average milk flow rate fraction, AMFf). Response surfaces were also created for between-liner comparisons at standardized conditions of claw vacuum and milk ratio (fraction of the pulsation cycle during which milk is flowing). The highest AMFf was observed at the highest levels of Vsystem, PR, and overpressure. All liners showed an increase in AMF as milking conditions were changed from low to high standardized conditions of claw vacuum and milk ratio. Differences in AMF between liners were smallest at the gentlest milking conditions (low Vsystem and low milk ratio), and these between-liner differences in AMF increased as liner overpressure increased. Differences in the vacuum drop between Vsystem and claw vacuum were noted depending on the liner venting system, with short milk tube vented liners having a greater vacuum drop than mouthpiece chamber vented liners. The accuracy of liner performance assessment in commercial parlors fitted with milk meters can be improved by using a central composite experimental design with a repeated center point treatment
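
    As a rough illustration of the response-surface idea described above, the following sketch fits a second-order surface of AMFf against system vacuum and pulsator ratio by least squares; the design points and responses are hypothetical placeholders, not the study's data.

        import numpy as np

        # Hypothetical design points (system vacuum in kPa, pulsator ratio in %) and
        # made-up AMFf responses; not the study's data.
        V   = np.array([40.0, 40.0, 48.0, 48.0, 44.0, 44.0, 44.0, 38.0, 50.0])
        PR  = np.array([55.0, 65.0, 55.0, 65.0, 60.0, 60.0, 60.0, 60.0, 60.0])
        amf = np.array([0.91, 0.96, 0.99, 1.06, 1.00, 1.01, 0.99, 0.94, 1.05])

        # Second-order response-surface model: b0 + b1*V + b2*PR + b3*V^2 + b4*PR^2 + b5*V*PR
        X = np.column_stack([np.ones_like(V), V, PR, V**2, PR**2, V * PR])
        coef, *_ = np.linalg.lstsq(X, amf, rcond=None)

        def predict_amf(v, pr):
            return float(np.dot([1.0, v, pr, v**2, pr**2, v * pr], coef))

        print(predict_amf(46.0, 62.0))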

  16. Assessing the accuracy of auralizations computed using a hybrid geometrical-acoustics and wave-acoustics method

    NASA Astrophysics Data System (ADS)

    Summers, Jason E.; Takahashi, Kengo; Shimizu, Yasushi; Yamakawa, Takashi

    2001-05-01

    When based on geometrical acoustics, computational models used for auralization of auditorium sound fields are physically inaccurate at low frequencies. To increase accuracy while keeping computation tractable, hybrid methods using computational wave acoustics at low frequencies have been proposed and implemented in small enclosures such as simplified models of car cabins [Granier et al., J. Audio Eng. Soc. 44, 835-849 (1996)]. The present work extends such an approach to an actual 2400-m³ auditorium using the boundary-element method for frequencies below 100 Hz. The effect of including wave-acoustics at low frequencies is assessed by comparing the predictions of the hybrid model with those of the geometrical-acoustics model and comparing both with measurements. Conventional room-acoustical metrics are used together with new methods based on two-dimensional distance measures applied to time-frequency representations of impulse responses. Despite in situ measurements of boundary impedance, uncertainties in input parameters limit the accuracy of the computed results at low frequencies. However, aural perception ultimately defines the required accuracy of computational models. An algorithmic method for making such evaluations is proposed based on correlating listening-test results with distance measures between time-frequency representations derived from auditory models of the ear-brain system. Preliminary results are presented.

  17. Assessment of the accuracy of plasma shape reconstruction by the Cauchy condition surface method in JT-60SA.

    PubMed

    Miyata, Y; Suzuki, T; Takechi, M; Urano, H; Ide, S

    2015-07-01

    For the purpose of stable plasma equilibrium control and detailed analysis, it is essential to reconstruct an accurate plasma boundary on the poloidal cross section in tokamak devices. The Cauchy condition surface (CCS) method is a numerical approach for calculating the spatial distribution of the magnetic flux outside a hypothetical surface and reconstructing the plasma boundary from the magnetic measurements located outside the plasma. The accuracy of the plasma shape reconstruction has been assessed by comparing the CCS method and an equilibrium calculation in JT-60SA with a high elongation and triangularity of plasma shape. The CCS, on which both Dirichlet and Neumann conditions are unknown, is defined as a hypothetical surface located inside the real plasma region. The accuracy of the plasma shape reconstruction is sensitive to the CCS free parameters such as the number of unknown parameters and the shape in JT-60SA. It is found that the optimum number of unknown parameters and the size of the CCS that minimizes errors in the reconstructed plasma shape are in proportion to the plasma size. Furthermore, it is shown that the accuracy of the plasma shape reconstruction is greatly improved using the optimum number of unknown parameters and shape of the CCS, and the reachable reconstruction errors in plasma shape and locations of strike points are within the target ranges in JT-60SA. PMID:26233387

  18. Assessment of the accuracy of plasma shape reconstruction by the Cauchy condition surface method in JT-60SA

    SciTech Connect

    Miyata, Y.; Suzuki, T.; Takechi, M.; Urano, H.; Ide, S.

    2015-07-15

    For the purpose of stable plasma equilibrium control and detailed analysis, it is essential to reconstruct an accurate plasma boundary on the poloidal cross section in tokamak devices. The Cauchy condition surface (CCS) method is a numerical approach for calculating the spatial distribution of the magnetic flux outside a hypothetical surface and reconstructing the plasma boundary from the magnetic measurements located outside the plasma. The accuracy of the plasma shape reconstruction has been assessed by comparing the CCS method and an equilibrium calculation in JT-60SA with a high elongation and triangularity of plasma shape. The CCS, on which both Dirichlet and Neumann conditions are unknown, is defined as a hypothetical surface located inside the real plasma region. The accuracy of the plasma shape reconstruction is sensitive to the CCS free parameters such as the number of unknown parameters and the shape in JT-60SA. It is found that the optimum number of unknown parameters and the size of the CCS that minimizes errors in the reconstructed plasma shape are in proportion to the plasma size. Furthermore, it is shown that the accuracy of the plasma shape reconstruction is greatly improved using the optimum number of unknown parameters and shape of the CCS, and the reachable reconstruction errors in plasma shape and locations of strike points are within the target ranges in JT-60SA.

  19. Do Students Know What They Know? Exploring the Accuracy of Students' Self-Assessments

    ERIC Educational Resources Information Center

    Lindsey, Beth A.; Nagel, Megan L.

    2015-01-01

    We have conducted an investigation into how well students in introductory science classes (both physics and chemistry) are able to predict which questions they will or will not be able to answer correctly on an upcoming assessment. An examination of the data at the level of students' overall scores reveals results consistent with the…

  20. Assessing posttraumatic stress in military service members: improving efficiency and accuracy.

    PubMed

    Fissette, Caitlin L; Snyder, Douglas K; Balderrama-Durbin, Christina; Balsis, Steve; Cigrang, Jeffrey; Talcott, G Wayne; Tatum, JoLyn; Baker, Monty; Cassidy, Daniel; Sonnek, Scott; Heyman, Richard E; Smith Slep, Amy M

    2014-03-01

    Posttraumatic stress disorder (PTSD) is assessed across many different populations and assessment contexts. However, measures of PTSD symptomatology often are not tailored to meet the needs and demands of these different populations and settings. In order to develop population- and context-specific measures of PTSD it is useful first to examine the item-level functioning of existing assessment methods. One such assessment measure is the 17-item PTSD Checklist-Military version (PCL-M; Weathers, Litz, Herman, Huska, & Keane, 1993). Although the PCL-M is widely used in both military and veteran health-care settings, it is limited by interpretations based on aggregate scores that ignore variability in item endorsement rates and relatedness to PTSD. Based on item response theory, this study conducted 2-parameter logistic analyses of the PCL-M in a sample of 196 service members returning from a yearlong, high-risk deployment to Iraq. Results confirmed substantial variability across items both in terms of their relatedness to PTSD and their likelihood of endorsement at any given level of PTSD. The test information curve for the full 17-item PCL-M peaked sharply at a value of θ = 0.71, reflecting greatest information at approximately the 76th percentile level of underlying PTSD symptom levels in this sample. Implications of findings are discussed as they relate to identifying more efficient, accurate subsets of items tailored to military service members as well as other specific populations and evaluation contexts. PMID:24015857
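
    For readers unfamiliar with the item response theory machinery used above, a minimal sketch of the 2-parameter logistic model follows: the item information function I(θ) = a²P(θ)(1 − P(θ)) summed over items gives the test information curve whose peak is reported in the abstract. The item parameters below are random placeholders, not the PCL-M estimates.

        import numpy as np

        def item_information_2pl(theta, a, b):
            # Fisher information of a 2PL item: I(theta) = a^2 * P(theta) * (1 - P(theta)).
            p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
            return a**2 * p * (1.0 - p)

        theta = np.linspace(-3.0, 3.0, 601)

        # Placeholder discrimination (a) and difficulty (b) values for 17 items;
        # these are NOT the PCL-M parameter estimates from the study.
        rng = np.random.default_rng(0)
        a_params = rng.uniform(0.8, 2.5, 17)
        b_params = rng.uniform(-0.5, 1.5, 17)

        test_info = sum(item_information_2pl(theta, a, b) for a, b in zip(a_params, b_params))
        print("Test information peaks at theta =", round(float(theta[np.argmax(test_info)]), 2))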

  1. Assessing the Accuracy of Psychology Undergraduates' Perceptions of Graduate Admission Criteria.

    ERIC Educational Resources Information Center

    Nauta, Margaret M.

    2000-01-01

    Assesses how accurately psychology undergraduates perceive: (1) the importance of various graduate admission criteria, including minimum grade point averages needed for consideration by graduate programs; (2) the length of time required to complete graduate degrees; and (3) starting salaries at various educational levels. Presents and discusses…

  2. Accuracy of Cameriere's third molar maturity index in assessing legal adulthood on Serbian population.

    PubMed

    Zelic, Ksenija; Galic, Ivan; Nedeljkovic, Nenad; Jakovljevic, Aleksandar; Milosevic, Olga; Djuric, Marija; Cameriere, Roberto

    2016-02-01

    At the moment, a large number of asylum seekers from the Middle East are passing through Serbia. Most of them do not have identification documents. Also, the past wars in the Balkan region have left many unidentified victims and missing persons. From a legal point of view, it is crucial to determine whether a person is a minor or an adult (≥18 years of age). In recent years, methods based on third molar development have been used for this purpose. The present article aims to verify the third molar maturity index (I3M) based on the correlation between the chronological age and normalized measures of the open apices and height of the third mandibular molar. The sample consisted of 598 panoramic radiographs (290 males and 299 females) from 13 to 24 years of age. The cut-off value of I3M=0.08 was used to discriminate adults and minors. The results demonstrated high sensitivity (0.96, 0.86) and specificity (0.94, 0.98) in males and females, respectively. The proportion of correctly classified individuals was 0.95 in males and 0.91 in females. In conclusion, the suggested value of I3M=0.08 can be used on the Serbian population with high accuracy. PMID:26773223
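
    The sensitivity, specificity, and proportion correctly classified reported above follow directly from the I3M < 0.08 decision rule; the sketch below shows the computation with hypothetical readings (function and data are illustrative, not from the paper).

        import numpy as np

        def i3m_classification(i3m_values, ages, cutoff=0.08, adult_age=18.0):
            # An individual is called 'adult' when the third molar maturity index
            # falls below the cut-off (more mature apices give a smaller I3M).
            i3m = np.asarray(i3m_values, float)
            age = np.asarray(ages, float)
            called_adult = i3m < cutoff
            is_adult = age >= adult_age
            tp = np.sum(called_adult & is_adult)
            tn = np.sum(~called_adult & ~is_adult)
            fp = np.sum(called_adult & ~is_adult)
            fn = np.sum(~called_adult & is_adult)
            sensitivity = tp / (tp + fn)
            specificity = tn / (tn + fp)
            accuracy = (tp + tn) / i3m.size
            return sensitivity, specificity, accuracy

        # Hypothetical radiograph readings (I3M) and chronological ages.
        print(i3m_classification([0.05, 0.21, 0.07, 0.62, 0.03], [19.2, 16.4, 22.0, 15.1, 18.3]))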

  3. Accuracy Assessment of the Precise Point Positioning for Different Troposphere Models

    NASA Astrophysics Data System (ADS)

    Oguz Selbesoglu, Mahmut; Gurturk, Mert; Soycan, Metin

    2016-04-01

    This study investigates the accuracy and repeatability of the PPP technique at different latitudes by using different troposphere delay models. Nine IGS stations were selected between 0° and 80° latitude in the northern and southern hemispheres. Coordinates were obtained for 7 days at 1-hour intervals in summer and winter. First, the coordinates were estimated using the Niell troposphere delay model with and without north and east gradients, in order to investigate the contribution of troposphere delay gradients to the positioning. Second, the Saastamoinen model was used to eliminate troposphere path delays, using standard atmosphere parameters extrapolated to all station levels. Finally, coordinates were estimated using the RTCA-MOPS empirical troposphere delay model. Results demonstrate that the Niell troposphere delay model with horizontal gradients yields mean RMS errors 0.09% and 65% better than those of the Niell model without horizontal gradients and the RTCA-MOPS model, respectively. Mean RMS errors for the Saastamoinen model were approximately four times larger than those for the Niell troposphere delay model with horizontal gradients.
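
    For context, a commonly quoted form of the Saastamoinen zenith delay driven by surface meteorological values is sketched below; this is an illustrative approximation using the standard constants, not the processing software actually used in the study.

        import numpy as np

        def saastamoinen_ztd(p_hpa, temp_k, rel_humidity, lat_rad, height_m):
            # Zenith hydrostatic + wet delay (metres) from surface meteorological values.
            es = 6.11 * 10.0 ** (7.5 * (temp_k - 273.15) / (temp_k - 35.85))  # saturation pressure, hPa
            e = rel_humidity * es                                             # water vapour pressure, hPa
            f = 1.0 - 0.00266 * np.cos(2.0 * lat_rad) - 0.00028 * (height_m / 1000.0)
            zhd = 0.0022768 * p_hpa / f                    # hydrostatic component
            zwd = 0.002277 * (1255.0 / temp_k + 0.05) * e  # wet component
            return zhd + zwd

        # Standard-atmosphere-like surface values at 45 deg latitude, sea level.
        print(round(saastamoinen_ztd(1013.25, 288.15, 0.5, np.radians(45.0), 0.0), 3))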

  4. Accuracy Assessments of ATMS Upper-Level Temperature Sounding Channels Using COSMIC RO Data

    NASA Astrophysics Data System (ADS)

    Lin, L.; Weng, F.; Zou, X.

    2012-12-01

    The Advanced Technology Microwave Sounder (ATMS) on board the Suomi National Polar-orbiting Partnership (NPP) satellite is a 22-channel passive microwave radiometer that can provide high-spatial-resolution data for generating temperature and moisture soundings in cloudy conditions. Global Positioning System (GPS) radio occultation (RO) data have high vertical resolution, are not affected by clouds, and are most accurate from 8 to 30 km, making them ideally suited for estimating the precision of ATMS measurements for upper-level temperature sounding channels. In this study, Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) RO data are collocated with ATMS observations from December 10, 2011 to June 30, 2012. Compared with GPS simulations using the U.S. Joint Center for Satellite Data Assimilation (JCSDA) Community Radiative Transfer Model (CRTM), the global biases of brightness temperatures from ATMS measurements are within 0.5 K for channels 6 to 13 for clear-sky data over ocean. This value is well within the pre-launch specification, indicating that the ATMS upper-level temperature sounding channels have high accuracy. The monthly variation and angular dependence of the ATMS bias are also examined.

  5. Accuracy assessment of building point clouds automatically generated from iphone images

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R.

    2014-06-01

    Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as an input. We register such an automatically generated point cloud onto a TLS point cloud of the same object to discuss accuracy, advantages and limitations of the iPhone-generated point clouds. For the chosen example showcase, we have classified 1.23% of the iPhone point cloud points as outliers, and calculated the mean of the point-to-point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. Mean (μ) and standard deviation (σ) of the roughness histograms are calculated as (μ1 = 0.44 m, σ1 = 0.071 m) and (μ2 = 0.025 m, σ2 = 0.037 m) for the iPhone and TLS point clouds, respectively. Our experimental results indicate possible usage of the proposed automatic 3D model generation framework for 3D urban map updating, fusion and detail enhancing, and quick and real-time change detection purposes. However, further insights should be obtained first on the circumstances that are needed to guarantee a successful point cloud generation from smartphone images.
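
    The outlier fraction, mean point-to-point distance, and local-noise statistics quoted above can be reproduced in spirit with a nearest-neighbour query; the sketch below (SciPy k-d tree) is illustrative only, and its roughness definition, the spread of neighbour distances within a fixed radius, may differ from the authors' definition.

        import numpy as np
        from scipy.spatial import cKDTree

        def cloud_to_cloud(query_pts, reference_pts, outlier_dist=1.0):
            # Nearest-neighbour (point-to-point) distances of one cloud against a reference.
            d, _ = cKDTree(reference_pts).query(query_pts, k=1)
            inliers = d[d <= outlier_dist]
            return 1.0 - inliers.size / d.size, float(inliers.mean())  # outlier fraction, mean distance

        def local_roughness(points, radius=0.2):
            # A simple local-noise proxy: spread of neighbour distances within a radius.
            points = np.asarray(points, float)
            tree = cKDTree(points)
            rough = np.zeros(len(points))
            for i, p in enumerate(points):
                idx = tree.query_ball_point(p, radius)
                if len(idx) > 1:
                    rough[i] = np.linalg.norm(points[idx] - p, axis=1).std()
            return rough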

  6. Assessment of Completeness and Positional Accuracy of Linear Features in Volunteered Geographic Information (vgi)

    NASA Astrophysics Data System (ADS)

    Eshghi, M.; Alesheikh, A. A.

    2015-12-01

    Recent advances in spatial data collection technologies and online services have dramatically increased the contribution of ordinary people to producing, sharing, and using geographic information. The collection of spatial data by citizens, as well as its dissemination on the internet, has led to a huge source of spatial data termed Volunteered Geographic Information (VGI) by Mike Goodchild. Although VGI has produced previously unavailable data assets and enriched existing ones, its quality can be highly variable and open to challenge. This presents several challenges to potential end users who are concerned about the validation and quality assurance of the collected data. Most existing research on assessing VGI quality either (a) compares the VGI data with accurate official data, or (b) where no access to correct data exists, looks for an alternative way to determine the quality of the VGI data. In this paper we attempt to develop a useful method toward this goal. In this process, the positional accuracy of linear features in the OSM data for Tehran, Iran has been analyzed.
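
    The abstract does not spell out its positional-accuracy measure, but a widely used approach for linear features is the buffer-overlay method; the sketch below (Shapely, made-up coordinates) illustrates that generic computation, not necessarily the paper's procedure.

        from shapely.geometry import LineString

        def buffer_overlap(vgi_line, reference_line, buffer_width=5.0):
            # Fraction of the volunteered line that falls inside a buffer of the
            # reference line (coordinates assumed to be in metres).
            zone = reference_line.buffer(buffer_width)
            return vgi_line.intersection(zone).length / vgi_line.length

        # Made-up coordinates standing in for an OSM street and its reference counterpart.
        osm_street = LineString([(0, 0), (100, 2), (200, -1)])
        reference  = LineString([(0, 1), (100, 0), (200, 0)])
        print(buffer_overlap(osm_street, reference, buffer_width=3.0))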

  7. Image intensifier distortion correction for fluoroscopic RSA: the need for independent accuracy assessment.

    PubMed

    Kedgley, Angela E; Fox, Anne-Marie V; Jenkyn, Thomas R

    2012-01-01

    Fluoroscopic images suffer from multiple modes of image distortion. Therefore, the purpose of this study was to compare the effects of correction using a range of two-dimensional polynomials and a global approach. The primary measure of interest was the average error in the distances between four beads of an accuracy phantom, as measured using RSA. Secondary measures of interest were the root mean squared errors of the fit of the chosen polynomial to the grid of beads used for correction, and the errors in the corrected distances between the points of the grid in a second position. Based upon the two-dimensional measures, a polynomial of order three in the axis of correction and two in the perpendicular axis was preferred. However, based upon the RSA reconstruction, a polynomial of order three in the axis of correction and one in the perpendicular axis was preferred. The use of a calibration frame for these three-dimensional applications most likely tempers the effects of distortion. This study suggests that distortion correction should be validated for each of its applications with an independent "gold standard" phantom. PMID:22231207
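
    A minimal sketch of the kind of two-dimensional polynomial correction compared in the study: a least-squares mapping from distorted to true bead-grid positions, with independent polynomial orders per axis (e.g., order three along the correction axis and one in the perpendicular axis). Function names and the workflow are hypothetical.

        import numpy as np

        def poly2d_design(x, y, order_x, order_y):
            # Full 2-D polynomial basis with independent orders per image axis.
            return np.column_stack([x**i * y**j
                                    for i in range(order_x + 1)
                                    for j in range(order_y + 1)])

        def fit_distortion_correction(distorted_xy, true_xy, order_x=3, order_y=1):
            # Least-squares mapping from distorted grid-bead positions to true positions.
            A = poly2d_design(distorted_xy[:, 0], distorted_xy[:, 1], order_x, order_y)
            coef, *_ = np.linalg.lstsq(A, true_xy, rcond=None)  # one coefficient column per axis
            return coef

        def apply_correction(coef, xy, order_x=3, order_y=1):
            return poly2d_design(xy[:, 0], xy[:, 1], order_x, order_y) @ coef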

  8. Performing Probabilistic Risk Assessment Through RAVEN

    SciTech Connect

    A. Alfonsi; C. Rabiti; D. Mandelli; J. Cogliati; R. Kinoshita

    2013-06-01

    The Reactor Analysis and Virtual control ENvironment (RAVEN) code is a software tool that acts as the control logic driver and post-processing engine for the newly developed thermal-hydraulic code RELAP-7. RAVEN is now a multi-purpose Probabilistic Risk Assessment (PRA) software framework that provides several capabilities: (1) deriving and actuating the control logic required to simulate the plant control system and operator actions (guided procedures), allowing on-line monitoring/controlling in the phase space; (2) performing both Monte Carlo sampling of randomly distributed events and Dynamic Event Tree based analysis; and (3) facilitating input/output handling through a Graphical User Interface (GUI) and a post-processing data mining module.

  9. Kristallin-I performance assessment: First results

    SciTech Connect

    Zuidema, P.; McKinley, I.G.; Smith, P.A.; Curti, E.; Klos, R.; Hugi, M.; Niemeyer, M.

    1993-12-31

    The Kristallin-I performance assessment indicates that the Swiss concept for disposal of vitrified HLW deep in the crystalline basement of Northern Switzerland will offer sufficient safety. This conclusion is based on a scenario analysis and an associated consequence analysis using an extensive model chain. The planned system of engineered barriers is shown to be particularly robust and ensures that most of the radionuclide inventory decays to insignificance in the near-field--both in the base case and in most altered evolution scenarios. The geosphere barrier can also be very effective, but conclusive demonstration of this places strong requirements on characterization of the geosphere. The radiological impact on hypothetical individuals inhabiting the groundwater discharge area is estimated by calculating doses via a variety of exposure pathways, following dilution, transport and accumulation in the biosphere. Detailed evaluation of perturbations of the base case scenario and of altered evolution scenarios is currently ongoing.

  10. Gene expression profiling in mitochondrial disease: assessment of microarray accuracy by high-throughput Q-PCR.

    PubMed

    Beckman, Kenneth B; Lee, Kathleen Y; Golden, Tamara; Melov, Simon

    2004-09-01

    Mitochondrial diseases are a heterogeneous array of disorders with a complex etiology. Use of microarrays as a tool to investigate complex human disease is increasingly common; however, a principal drawback of microarrays is their limited dynamic range, due to the poor quantification of weak signals. Although it is generally understood that low-intensity microarray 'spots' may be unreliable, there exists little documentation of their accuracy. Quantitative PCR (Q-PCR) is frequently used to validate microarray data, yet few Q-PCR validation studies have focused on the accuracy of low-intensity microarray signals. Hence, we have used Q-PCR to systematically assess microarray accuracy as a function of signal strength in a mouse model of mitochondrial disease, the superoxide dismutase 2 (SOD2) nullizygous mouse. We have focused on a unique category of data--spots with only one weak signal in a two-dye comparative hybridization--and show that such 'high-low' signal intensities are common for differentially expressed genes. This category of differential expression may be more important in mitochondrial disease, in which there are often mosaic expression patterns due to the idiosyncratic distribution of mutant mtDNA in heteroplasmic individuals. Using RNA from the SOD2 mouse, we found that when spotted cDNA microarray data are filtered for quality (low variance between many technical replicates) and spot intensity (above a negative control threshold in both channels), there is an excellent quantitative concordance with Q-PCR (R2 = 0.94). The accuracy of gene expression ratios from low-intensity spots (R2 = 0.27) and 'high-low' spots (R2 = 0.32) is considerably lower. Our results should serve as guidelines for microarray interpretation and the selection of genes for validation in mitochondrial disorders. PMID:16120406

  11. Assessment of VIIRS radiometric performance using vicarious calibration sites

    NASA Astrophysics Data System (ADS)

    Uprety, Sirish; Cao, Changyong; Blonski, Slawomir; Wang, Wenhui

    2014-09-01

    The radiometric performance of satellite instruments needs to be monitored regularly to determine whether there is any drift in instrument response over time, despite best-effort calibration. If a drift occurs, it needs to be characterized in order to keep the radiometric accuracy and stability well within specification. Instrument gain change over time can be validated independently using many techniques, such as stable earth targets (desert, ocean, snow sites, etc.), inter-comparison with other well-calibrated radiometers (using SNO, SNO-x), deep convective clouds (DCC), lunar observations, or other methods. This study focuses on using vicarious calibration sites for the assessment of the radiometric performance of the Suomi National Polar-Orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) reflective solar bands. The calibration stability is primarily analyzed by developing the top-of-atmosphere (TOA) reflectance time series over these sites. In addition, the radiometric bias relative to AQUA MODIS is estimated over these calibration sites and analyzed. The radiometric bias is quantified in terms of observed and spectral bias. The spectral characterization and bias analysis will be performed using hyperspectral measurements and radiative transfer models such as MODTRAN.

  12. In vitro assessment of the accuracy of extraoral periapical radiography in root length determination

    PubMed Central

    Nazeer, Muhammad Rizwan; Khan, Farhan Raza; Rahman, Munawwar

    2016-01-01

    Objective: To determine the accuracy of extraoral periapical radiography in obtaining root length by comparing it with radiographs obtained using the standard intraoral approach and an extended-distance intraoral approach. Materials and Methods: It was an in vitro, comparative study conducted at the dental clinics of Aga Khan University Hospital. ERC exemption was obtained for this work, ref number 3407Sur-ERC-14. We included premolars and molars of a standard phantom head mounted with metal and radiopaque teeth. Radiographs were acquired using three approaches: standard intraoral, extended-length intraoral and extraoral. Since the unit of analysis was the individual root, we had a total of 24 images. The images were stored in VixWin software. The length of the roots was determined using the scale function of the measuring tool built into the software. Data were analyzed using SPSS version 19.0 and GraphPad software. The Pearson correlation coefficient and the Bland–Altman test were applied to determine whether the tooth length readings obtained from the three approaches were correlated. P = 0.05 was taken as the threshold for statistical significance. Results: The correlation between standard intraoral and extended intraoral was 0.97; the correlation between standard intraoral and the extraoral method was 0.82, while the correlation between extended intraoral and extraoral was 0.76. The results of the Bland–Altman test showed that the average discrepancy between these methods is not large enough to be considered significant. Conclusions: It appears that the extraoral radiographic method can be used for root length determination in subjects where intraoral radiography is not possible. PMID:27011737

  13. Assessing Inter-Sensor Variability and Sensible Heat Flux Derivation Accuracy for a Large Aperture Scintillometer

    PubMed Central

    Rambikur, Evan H.; Chávez, José L.

    2014-01-01

    The accuracy of three Kipp and Zonen large aperture scintillometers (LAS) in determining sensible heat flux (H) was evaluated with reference to an eddy covariance (EC) system over relatively flat and uniform grassland near Timpas (CO, USA). Other tests have revealed inherent variability between Kipp and Zonen LAS units and a bias to overestimate H. Average H fluxes were compared between LAS units and between LAS and EC. Despite good correlation, inter-LAS biases in H were found between 6% and 13% in terms of the linear regression slope. Physical misalignment was observed to result in increased scatter and bias between the H solutions of a well-aligned and a poorly aligned LAS unit. Comparison of LAS and EC H showed little bias for one LAS unit, while the other two units overestimated EC H by more than 10%. A detector alignment issue may have caused the inter-LAS variability, supported by the observation in this study of differing power requirements between LAS units. It is possible that the LAS physical misalignment may have caused edge-of-beam signal noise as well as vulnerability to signal noise from wind-induced vibrations, both having an impact on the solution of H. In addition, there were some uncertainties in the solutions of H from the LAS and EC instruments, including lack of energy balance closure with the EC unit. However, the results obtained do not show clear evidence of inherent bias for the Kipp and Zonen LAS to overestimate H as found in other studies. PMID:24473285

  14. Validation of selected analytical methods using accuracy profiles to assess the impact of a Tobacco Heating System on indoor air quality.

    PubMed

    Mottier, Nicolas; Tharin, Manuel; Cluse, Camille; Crudo, Jean-René; Lueso, María Gómez; Goujon-Ginglinger, Catherine G; Jaquier, Anne; Mitova, Maya I; Rouget, Emmanuel G R; Schaller, Mathieu; Solioz, Jennifer

    2016-09-01

    Studies in environmentally controlled rooms have been used over the years to assess the impact of environmental tobacco smoke on indoor air quality. As new tobacco products are developed, it is important to determine their impact on air quality when used indoors. Before such an assessment can take place it is essential that the analytical methods used to assess indoor air quality are validated and shown to be fit for their intended purpose. Consequently, for this assessment, an environmentally controlled room was built and seven analytical methods, representing eighteen analytes, were validated. The validations were carried out with smoking machines using a matrix-based approach applying the accuracy profile procedure. The performances of the methods were compared for all three matrices under investigation: background air samples, the environmental aerosol of Tobacco Heating System THS 2.2, a heat-not-burn tobacco product developed by Philip Morris International, and the environmental tobacco smoke of a cigarette. The environmental aerosol generated by the THS 2.2 device did not have any appreciable impact on the performances of the methods. The comparison between the background and THS 2.2 environmental aerosol samples generated by smoking machines showed that only five compounds were higher when THS 2.2 was used in the environmentally controlled room. Regarding environmental tobacco smoke from cigarettes, the yields of all analytes were clearly above those obtained with the other two air sample types. PMID:27343591

  15. Accuracy of dual energy X-ray absorptiometry (DXA) in assessing carcass composition from different pig populations.

    PubMed

    Soladoye, O P; López Campos, Ó; Aalhus, J L; Gariépy, C; Shand, P; Juárez, M

    2016-11-01

    The accuracy of dual energy X-ray absorptiometry (DXA) in assessing carcass composition from pigs with diverse characteristics was examined in the present study. A total of 648 pigs from three different sire breeds, two sexes, two slaughter weights and three different diets were employed. DXA estimations were used to predict the dissected/chemical yield for lean and fat of carcass sides and primal cuts. The accuracy of the predictions was assessed based on the coefficient of determination (R²) and residual standard deviation (RSD). The linear relationships for dissected fat and lean for all the primal cuts and carcass sides were high (R² > 0.94, P < 0.01), with low RSD (<1.9%). Relationships between DXA and chemical fat and lean of pork bellies were also high (R² > 0.94, P < 0.01), with RSD <2.9%. These linear relationships remained high over the full range of variation in the pig population, except for sire breed, where the coefficient of determination decreased when carcasses were classified based on this variable. PMID:27395824

  16. Assessing weight perception accuracy to promote weight loss among U.S. female adolescents: A secondary analysis

    PubMed Central

    2010-01-01

    Background Overweight and obesity have become a global epidemic. The prevalence of overweight and obesity among U.S. adolescents has almost tripled in the last 30 years. Results from recent systematic reviews demonstrate that no single, particular intervention or strategy successfully assists overweight or obese adolescents in losing weight. An understanding of factors that influence healthy weight-loss behaviors among overweight and obese female adolescents promotes effective, multi-component weight-loss interventions. There is limited evidence demonstrating associations between demographic variables, body-mass index, and weight perception among female adolescents trying to lose weight. There is also a lack of previous studies examining the association of the accuracy of female adolescents' weight perception with their efforts to lose weight. This study, therefore, examined the associations of body-mass index, weight perception, and weight-perception accuracy with trying to lose weight and engaging in exercise as a weight-loss method among a representative sample of U.S. female adolescents. Methods A nonexperimental, descriptive, comparative secondary analysis design was conducted using data from Wave II (1996) of the National Longitudinal Study of Adolescent Health (Add Health). Data representative of U.S. female adolescents (N = 2216) were analyzed using STATA statistical software. Descriptive statistics and survey weight logistic regression were performed to determine if demographic and independent (body-mass index, weight perception, and weight perception accuracy) variables were associated with trying to lose weight and engaging in exercise as a weight-loss method. Results Age, Black or African American race, body-mass index, weight perception, and weight perceptions accuracy were consistently associated with the likeliness of trying to lose weight among U.S. female adolescents. Age, body-mass index, weight perception, and weight-perception accuracy were

  17. E-Area Performance Assessment Interim Measures Assessment FY2005

    SciTech Connect

    Stallings, M

    2006-01-31

    After major changes to the limits for various disposal units of the E-Area Low Level Waste Facility (ELLWF) last year, no major changes have been made during FY2005. A Special Analysis was completed which removes the air pathway ¹⁴C limit from the Intermediate Level Vault (ILV). This analysis will allow the disposal of reactor moderator deionizers which previously had no pathway to disposal. Several studies have also been completed providing groundwater transport input for future special analyses. During the past year, since Slit Trenches No.1 and No.2 were nearing volumetric capacity, they were operationally closed under a preliminary closure analysis. This analysis was performed using as-disposed conditions and data and showed that concrete rubble from the demolition of 232-F was acceptable for disposal in the STs even though the latest special analysis for the STs had reduced the tritium limits so that the inventory in the rubble exceeded limits. A number of special studies are planned during the next years; perhaps the largest of these will be revision of the Performance Assessment (PA) for the ELLWF. The revision will be accomplished by incorporating special analyses performed since the last PA revision as well as revising analyses to include new data. Projected impacts on disposal limits of more recent studies have been estimated. No interim measures will be applied during this year. However, it is being recommended that tritium disposals to the Components-in-Grout (CIG) Trenches be suspended until a limited Special Analysis (SA) currently in progress is completed. This SA will give recommendations for optimum placement of tritiated D-Area tower waste. Further recommendations for tritiated waste placement in the CIG Trenches will be given in the upcoming PA revision.

  18. Effect of training, education, professional experience, and need for cognition on accuracy of exposure assessment decision-making.

    PubMed

    Vadali, Monika; Ramachandran, Gurumurthy; Banerjee, Sudipto

    2012-04-01

    Results are presented from a study that investigated the effect of characteristics of occupational hygienists relating to educational and professional experience and task-specific experience on the accuracy of occupational exposure judgments. A total of 49 occupational hygienists from six companies participated in the study and 22 tasks were evaluated. Participating companies provided monitoring data on specific tasks. Information on nine educational and professional experience determinants (e.g. educational background, years of occupational hygiene and exposure assessment experience, professional certifications, statistical training and experience, and the 'need for cognition (NFC)', which is a measure of an individual's motivation for thinking) and four task-specific determinants was also collected from each occupational hygienist. Hygienists had a wide range of educational and professional backgrounds for tasks across a range of industries with different workplace and task characteristics. The American Industrial Hygiene Association exposure assessment strategy was used to make exposure judgments on the probability of the 95th percentile of the underlying exposure distribution being located in one of four exposure categories relative to the occupational exposure limit. After reviewing all available job/task/chemical information, hygienists were asked to provide their judgment in probabilistic terms. Both qualitative (judgments without monitoring data) and quantitative judgments (judgments with monitoring data) were recorded. Ninety-three qualitative judgments and 2142 quantitative judgments were obtained. Data interpretation training, with simple rules of thumb for estimating the 95th percentiles of lognormal distributions, was provided to all hygienists. A data interpretation test (DIT) was also administered and judgments were elicited before and after training. General linear models and cumulative logit models were used to analyze the relationship between

  19. The comparison index: A tool for assessing the accuracy of image segmentation

    NASA Astrophysics Data System (ADS)

    Möller, M.; Lymburner, L.; Volk, M.

    2007-08-01

    Segmentation algorithms applied to remote sensing data provide valuable information about the size, distribution and context of landscape objects at a range of scales. However, there is a need for well-defined and robust validation tools for assessing the reliability of segmentation results. Such tools are required to assess whether image segments are based on 'real' objects, such as field boundaries, or on artefacts of the image segmentation algorithm. These tools can be used to improve the reliability of any land-use/land-cover classification or landscape analysis that is based on the image segments. The validation algorithm developed in this paper aims to: (a) localize and quantify segmentation inaccuracies; and (b) allow the assessment of segmentation results as a whole. The first aim is achieved using object metrics that enable the quantification of topological and geometric object differences. The second aim is achieved by combining these object metrics into a 'Comparison Index', which allows a relative comparison of different segmentation results. The approach demonstrates how the Comparison Index CI can be used to guide trial-and-error techniques, enabling the identification of a segmentation scale H that is close to optimal. Once this scale has been identified, a more detailed examination of the CI–H diagrams can be used to identify precisely which H value and associated parameter settings will yield the most accurate image segmentation results. The procedure is applied to segmented Landsat scenes in an agricultural area in Saxony-Anhalt, Germany. The segmentations were generated using the 'Fractal Net Evolution Approach', which is implemented in the eCognition software.
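
    The paper defines its own object metrics and their combination into the Comparison Index; as a generic stand-in, the sketch below aggregates an overlap measure (IoU) and an area ratio per segment-object pair into a single relative score, simply to illustrate how such metrics can be compared across segmentation scales.

        import numpy as np

        def object_metrics(segment, reference):
            # Geometric agreement between one image segment and one reference object,
            # both given as boolean rasters of the same shape (assumed non-empty).
            inter = np.logical_and(segment, reference).sum()
            union = np.logical_or(segment, reference).sum()
            return inter / union, segment.sum() / reference.sum()  # IoU, area ratio

        def comparison_index(metric_pairs):
            # Aggregate per-object metrics into one relative score in [0, 1]; higher is better.
            return float(np.mean([0.5 * iou + 0.5 * min(r, 1.0 / r) for iou, r in metric_pairs]))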

  20. Performance Assessment of Two GPS Receivers on Space Shuttle

    NASA Technical Reports Server (NTRS)

    Schroeder, Christine A.; Schutz, Bob E.

    1996-01-01

    Space Shuttle STS-69 was launched on September 7, 1995, carrying the Wake Shield Facility (WSF-02) among its payloads. The mission included two GPS receivers: a Collins 3M receiver onboard the Endeavour and an Osborne flight TurboRogue, known as the TurboStar, onboard the WSF-02. Two of the WSF-02 GPS Experiment objectives were to: (1) assess the ability to use GPS in a relative satellite positioning mode using the receivers on Endeavour and WSF-02; and (2) assess the performance of the receivers to support high precision orbit determination at the 400 km altitude. Three ground tests of the receivers were conducted in order to characterize the respective receivers. The analysis of the tests utilized the Double Differencing technique. A similar test in orbit was conducted during STS-69 while the WSF-02 was held by the Endeavour robot arm for a one hour period. In these tests, biases were observed in the double difference pseudorange measurements, implying that biases up to 140 m exist which do not cancel in double differencing. These biases appear to exist in the Collins receiver, but their effect can be mitigated by including measurement bias parameters to accommodate them in an estimation process. An additional test was conducted in which the orbit of the combined Endeavour/WSF-02 was determined independently with each receiver. These one hour arcs were based on forming double differences with 13 TurboRogue receivers in the global IGS network and estimating pseudorange biases for the Collins. Various analyses suggest the TurboStar overall orbit accuracy is about one to two meters for this period, based on double differenced phase residuals of 34 cm. These residuals indicate the level of unmodeled forces on Endeavour produced by gravitational and nongravitational effects. The rms differences between the two independently determined orbits are better than 10 meters, thereby demonstrating the accuracy of the Collins-determined orbit at this level as well as the
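
    For reference, forming a double difference from the two receivers' observations is a simple combination in which the receiver and satellite clock errors cancel; the sketch below uses hypothetical one-epoch pseudoranges, not flight data.

        def double_difference(prange_a, prange_b, sat_i, sat_j):
            # Double-differenced observable between receivers A, B and satellites i, j.
            sd_i = prange_a[sat_i] - prange_b[sat_i]  # single difference on satellite i
            sd_j = prange_a[sat_j] - prange_b[sat_j]  # single difference on satellite j
            return sd_i - sd_j

        # Hypothetical one-epoch pseudoranges (metres) for the two receivers.
        collins   = {"G05": 21456789.12, "G12": 22001234.56}
        turbostar = {"G05": 21456792.40, "G12": 22001237.90}
        print(double_difference(collins, turbostar, "G05", "G12"))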

  1. Validating the Accuracy of Reaction Time Assessment on Computer-Based Tablet Devices.

    PubMed

    Schatz, Philip; Ybarra, Vincent; Leitner, Donald

    2015-08-01

    Computer-based assessment has evolved to tablet-based devices. Despite the availability of tablets and "apps," there is limited research validating their use. We documented timing delays between stimulus presentation and (simulated) touch response on iOS devices (3rd- and 4th-generation Apple iPads) and Android devices (Kindle Fire, Google Nexus, Samsung Galaxy) at response intervals of 100, 250, 500, and 1,000 milliseconds (ms). Results showed significantly greater timing error on Google Nexus and Samsung tablets (81-97 ms), than Kindle Fire and Apple iPads (27-33 ms). Within Apple devices, iOS 7 obtained significantly lower timing error than iOS 6. Simple reaction time (RT) trials (250 ms) on tablet devices represent 12% to 40% error (30-100 ms), depending on the device, which decreases considerably for choice RT trials (3-5% error at 1,000 ms). Results raise implications for using the same device for serial clinical assessment of RT using tablets, as well as the need for calibration of software and hardware. PMID:25612627

  2. Tumour tracking with scanned proton beams: assessing the accuracy and practicalities

    NASA Astrophysics Data System (ADS)

    van de Water, S.; Kreuger, R.; Zenklusen, S.; Hug, E.; Lomax, A. J.

    2009-11-01

    The potential of tumour tracking for active spot-scanned proton therapy was assessed. Using a 4D-dose calculation and simulated target motion, a tumour tracking algorithm has been implemented and applied to a simple target volume in both homogenous and heterogeneous in silico phantoms. For tracking and retracking (a hybrid solution combining tumour tracking and rescanning), three tracking modes were analysed: 'no tracking' (uncorrected irradiation of a moving target), 'perfect tracking' (no time delays and exact knowledge of target position) and 'imperfect tracking' (simulated time delays or position prediction errors). For all plans, dose homogeneity in the target volume was assessed as the difference between D5 and D95 in the CTV. For the homogeneous phantom, perfect tracking could retrieve nominal dose homogeneity for all motion phases and amplitudes while severe deterioration of treatment outcomes was found for imperfect tracking. The use of retracking reduced the sensitivity to position errors significantly in the homogeneous phantom. In the heterogeneous phantoms (simulated rib proximal to target), the nominal dose homogeneity could not be obtained with perfect tracking. Adjustments in pencil beam positions could cause pencil beams to deform under the influence of the bone, resulting in loss of dose homogeneity. As retracking was not capable of reducing these effects, rescanning provided the best treatment outcomes for moving heterogeneous targets in this study.
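
    The homogeneity metric used above, the difference between D5 and D95 in the CTV, corresponds to a simple percentile difference over the voxel dose distribution; a minimal illustration with synthetic doses:

        import numpy as np

        def d5_minus_d95(ctv_doses):
            # D5 - D95: dose exceeded by only 5% of CTV voxels minus the dose covering
            # 95% of the volume; smaller values indicate a more homogeneous dose.
            return np.percentile(ctv_doses, 95.0) - np.percentile(ctv_doses, 5.0)

        rng = np.random.default_rng(1)
        synthetic_doses = rng.normal(2.0, 0.05, 10000)  # hypothetical per-voxel doses (Gy)
        print(round(float(d5_minus_d95(synthetic_doses)), 3))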

  3. Policy and Validity Prospects for Performance-Based Assessment.

    ERIC Educational Resources Information Center

    Baker, Eva L.; And Others

    1994-01-01

    This article describes performance-based assessment as expounded by its proponents, comments on these conceptions, reviews evidence regarding the technical quality of performance-based assessment, and considers its validity under various policy options. (JDD)

  4. Accuracy Assessment of Digital Surface Models Based on WorldView-2 and ADS80 Stereo Remote Sensing Data

    PubMed Central

    Hobi, Martina L.; Ginzler, Christian

    2012-01-01

    Digital surface models (DSMs) are widely used in forest science to model the forest canopy. Stereo pairs of very high resolution satellite and digital aerial images are relatively new, and their absolute accuracy for DSM generation is largely unknown. For an assessment of these input data, two DSMs based on a WorldView-2 stereo pair and an ADS80 DSM were generated with photogrammetric instruments. Rational polynomial coefficients (RPCs) define the orientation of the WorldView-2 satellite images and can be enhanced with ground control points (GCPs). Thus two WorldView-2 DSMs were distinguished: a WorldView-2 RPCs-only DSM and a WorldView-2 GCP-enhanced RPCs DSM. The accuracy of the three DSMs was estimated with GPS measurements, manual stereo-measurements, and airborne laser scanning (ALS) data. With GCP-enhanced RPCs, the WorldView-2 image orientation could be optimised to a root mean square error (RMSE) of 0.56 m in planimetry and 0.32 m in height. This improvement in orientation allowed a vertical median error of −0.24 m for the WorldView-2 GCP-enhanced RPCs DSM in flat terrain. Overall, the DSM based on ADS80 images showed the highest accuracy of the three models, with a median error of 0.08 m over bare ground. As the accuracy of a DSM varies with land cover, three classes were distinguished: herb and grass, forests, and artificial areas. The study suggested that the ADS80 DSM best models actual surface height in all three land cover classes, with median errors <1.1 m. The WorldView-2 GCP-enhanced RPCs model achieved good accuracy, too, with median errors of −0.43 m for the herb and grass vegetation and −0.26 m for artificial areas. Forested areas emerged as the most difficult land cover type for height modelling; still, with median errors of −1.85 m for the WorldView-2 GCP-enhanced RPCs model and −1.12 m for the ADS80 model, the input data sets evaluated here are quite promising for forest canopy modelling. PMID:22778645
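
    A small sketch of the per-land-cover error statistics quoted above (median error, plus RMSE) computed from a DSM and a reference height model on the same grid; array and mask names are placeholders.

        import numpy as np

        def dsm_error_stats(dsm, reference, mask=None):
            # Median error and RMSE of a DSM against a reference height model on the same grid.
            diff = np.asarray(dsm, float) - np.asarray(reference, float)
            if mask is not None:
                diff = diff[mask]
            return float(np.median(diff)), float(np.sqrt(np.mean(diff**2)))

        # Per-land-cover evaluation with a hypothetical class raster
        # (0 = herb/grass, 1 = forest, 2 = artificial):
        # median_forest, rmse_forest = dsm_error_stats(dsm, als_reference, mask=(land_cover == 1))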

  5. The accuracy and precision of DXA for assessing body composition in team sport athletes.

    PubMed

    Bilsborough, Johann Christopher; Greenway, Kate; Opar, David; Livingstone, Steuart; Cordy, Justin; Coutts, Aaron James

    2014-01-01

    This study determined the precision of pencil and fan beam dual-energy X-ray absorptiometry (DXA) devices for assessing body composition in professional Australian Football players. Thirty-six professional Australian Football players, in two groups (fan DXA, N = 22; pencil DXA, N = 25), underwent two consecutive DXA scans. A whole body phantom with known values for fat mass, bone mineral content and fat-free soft tissue mass was also used to validate each DXA device. Additionally, the criterion phantom was scanned 20 times by each DXA to assess reliability. Test-retest reliability of DXA anthropometric measures was derived from repeated fan and pencil DXA scans. Fat-free soft tissue mass and bone mineral content from both DXA units showed strong correlations with, and trivial differences to, the criterion phantom values. Fat mass from both DXA showed moderate correlations with criterion measures (pencil: r = 0.64; fan: r = 0.67) and moderate differences with the criterion value. The limits of agreement were similar for both fan beam DXA and pencil beam DXA (fan: fat-free soft tissue mass = -1650 ± 179 g, fat mass = -357 ± 316 g, bone mineral content = 289 ± 122 g; pencil: fat-free soft tissue mass = -1701 ± 257 g, fat mass = -359 ± 326 g, bone mineral content = 177 ± 117 g). DXA also showed excellent precision for bone mineral content (coefficient of variation (%CV) fan = 0.6%; pencil = 1.5%) and fat-free soft tissue mass (%CV fan = 0.3%; pencil = 0.5%) and acceptable reliability for fat measures (%CV fan: fat mass = 2.5%, percent body fat = 2.5%; pencil: fat mass = 5.9%, percent body fat = 5.7%). Both DXA devices provide precise measures of fat-free soft tissue mass and bone mineral content in lean Australian Football players. DXA-derived fat-free soft tissue mass and bone mineral content are suitable for assessing body composition in lean team sport athletes. PMID:24914773
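
    An illustrative computation of the two kinds of figures reported above: precision as an RMS coefficient of variation from duplicate scans, and Bland-Altman-style bias and limits of agreement against criterion values. This is a generic sketch with made-up numbers, not the study's analysis script.

        import numpy as np

        def rms_cv_percent(scan1, scan2):
            # RMS coefficient of variation (%) from paired repeat scans of the same subjects.
            a = np.asarray(scan1, float)
            b = np.asarray(scan2, float)
            sd_within = np.abs(a - b) / np.sqrt(2.0)   # within-subject SD for duplicates
            mean_within = (a + b) / 2.0
            return 100.0 * np.sqrt(np.mean((sd_within / mean_within) ** 2))

        def bland_altman(device, criterion):
            # Mean bias and 95% limits of agreement of device values against criterion values.
            d = np.asarray(device, float) - np.asarray(criterion, float)
            bias, sd = d.mean(), d.std(ddof=1)
            return bias, bias - 1.96 * sd, bias + 1.96 * sd

        # Hypothetical duplicate fat-mass measurements (g) for five athletes.
        print(round(rms_cv_percent([9100, 8750, 10200, 7900, 11050],
                                   [9000, 8900, 10050, 8050, 11200]), 2))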

  6. Accuracy Assessment of Three-dimensional Surface Reconstructions of In vivo Teeth from Cone-beam Computed Tomography

    PubMed Central

    Sang, Yan-Hui; Hu, Hong-Cheng; Lu, Song-He; Wu, Yu-Wei; Li, Wei-Ran; Tang, Zhi-Hui

    2016-01-01

    Background: The accuracy of three-dimensional (3D) reconstructions from cone-beam computed tomography (CBCT) has been particularly important in dentistry, which will affect the effectiveness of diagnosis, treatment plan, and outcome in clinical practice. The aims of this study were to assess the linear, volumetric, and geometric accuracy of 3D reconstructions from CBCT and to investigate the influence of voxel size and CBCT system on the reconstructions results. Methods: Fifty teeth from 18 orthodontic patients were assigned to three groups as NewTom VG 0.15 mm group (NewTom VG; voxel size: 0.15 mm; n = 17), NewTom VG 0.30 mm group (NewTom VG; voxel size: 0.30 mm; n = 16), and VATECH DCTPRO 0.30 mm group (VATECH DCTPRO; voxel size: 0.30 mm; n = 17). The 3D reconstruction models of the teeth were segmented from CBCT data manually using Mimics 18.0 (Materialise Dental, Leuven, Belgium), and the extracted teeth were scanned by 3Shape optical scanner (3Shape A/S, Denmark). Linear and volumetric deviations were separately assessed by comparing the length and volume of the 3D reconstruction model with physical measurement by paired t-test. Geometric deviations were assessed by the root mean square value of the imposed 3D reconstruction and optical models by one-sample t-test. To assess the influence of voxel size and CBCT system on 3D reconstruction, analysis of variance (ANOVA) was used (α = 0.05). Results: The linear, volumetric, and geometric deviations were −0.03 ± 0.48 mm, −5.4 ± 2.8%, and 0.117 ± 0.018 mm for NewTom VG 0.15 mm group; −0.45 ± 0.42 mm, −4.5 ± 3.4%, and 0.116 ± 0.014 mm for NewTom VG 0.30 mm group; and −0.93 ± 0.40 mm, −4.8 ± 5.1%, and 0.194 ± 0.117 mm for VATECH DCTPRO 0.30 mm group, respectively. There were statistically significant differences between groups in terms of linear measurement (P < 0.001), but no significant difference in terms of volumetric measurement (P = 0.774). No statistically significant difference were

  7. Assessment of relative accuracy in the determination of organic matter concentrations in aquatic systems

    USGS Publications Warehouse

    Aiken, G.; Kaplan, L.A.; Weishaar, J.

    2002-01-01

    Accurate determinations of total (TOC), dissolved (DOC) and particulate (POC) organic carbon concentrations are critical for understanding the geochemical, environmental, and ecological roles of aquatic organic matter. Of particular significance for the drinking water industry, TOC measurements are the basis for compliance with US EPA regulations. The results of an interlaboratory comparison designed to identify problems associated with the determination of organic matter concentrations in drinking water supplies are presented. The study involved 31 laboratories and a variety of commercially available analytical instruments. All participating laboratories performed well on samples of potassium hydrogen phthalate (KHP), a compound commonly used as a standard in carbon analysis. However, problems associated with the oxidation of difficult to oxidize compounds, such as dodecylbenzene sulfonic acid and caffeine, were noted. Humic substances posed fewer problems for analysts. Particulate organic matter (POM) in the form of polystyrene beads, freeze-dried bacteria and pulverized leaf material were the most difficult for all analysts, with a wide range of performances reported. The POM results indicate that the methods surveyed in this study are inappropriate for the accurate determination of POC and TOC concentration. Finally, several analysts had difficulty in efficiently separating inorganic carbon from KHP solutions, thereby biasing DOC results.

  8. Assessment of accuracy of acetabular cup orientation in CT-free navigated total hip arthroplasty.

    PubMed

    Fukunishi, Shigeo; Fukui, Tomokazu; Imamura, Fumiaki; Nishio, Shoji

    2008-10-01

    We have used the OrthoPilot (Aesculap AG, Tuttlingen, Germany) computed tomography (CT)-free navigation system to ensure accurate and reproducible acetabular cup orientation. In this system, cup orientation is assessed with respect to the bony configuration as determined by palpation of the anatomical landmarks (the bilateral anterosuperior iliac spines and the upper margin of the pubic symphysis). In this study, intraoperative cup orientation as presented by the OrthoPilot navigation system was compared with the value obtained through postoperative radiological assessment using CT Digital Imaging and Communications in Medicine (DICOM) data and Medical Image Processing, Analysis, and Visualization (MIPAV; National Institutes of Health, US Department of Health and Human Services, Bethesda, Maryland). Intra- and postoperative results obtained from 27 consecutive navigated total hip arthroplasties (THAs) were analyzed. For cup positioning, the desired inclination and anteversion angles were set within the "safe zone" proposed by Lewinnek. In the intraoperative evaluation, the mean inclination angle as determined by the navigation system was 43.5° ± 2.17° (range, 39.9° to 46.6°) after the final implantation. In contrast, the mean inclination angle determined by postoperative calculation using MIPAV was 44.9° ± 3.3° (range, 38.1° to 55.0°). A discrepancy of >5° was observed in only 1 hip. For the anteversion, the mean intra- and postoperative values were 11.1° ± 5.6° (range, 0° to 17.8°) and 13.5° ± 5.9° (range, 5.1° to 21.6°), respectively. Again, a discrepancy of >5° was observed in 1 case. Mean differences between the intra- and postoperative values were 1.9° ± 1.9° and 2.6° ± 1.6° for inclination and anteversion, respectively. A good agreement between the intraoperative values presented by the navigation system

  9. Approaches to assess biocover performance on landfills.

    PubMed

    Huber-Humer, M; Röder, S; Lechner, P

    2009-07-01

    Methane emissions from active or closed landfills can be reduced by means of methane oxidation enhanced in properly designed landfill covers, known as "biocovers". Biocovers usually consist of a coarse gas distribution layer, to balance gas fluxes, placed beneath an appropriate substrate layer. The application of such covers requires measurement methods and evaluation approaches, both during the planning stage and throughout operation, in order to demonstrate their efficiency. In principle, various techniques commonly used to monitor landfill surface emissions can also be applied to monitor biocovers. However, particularly when engineered materials such as compost substrates are used, biocovers often feature altered, specific properties compared with conventional covers, e.g., with respect to gas permeability, physical parameters including water retention capacity and texture, and methane oxidation activity. Therefore, existing measuring methods should be carefully evaluated or even modified prior to application on biocovers. This paper discusses possible strategies for monitoring biocover functionality. On the basis of experience derived from investigations and large-scale field trials with compost biocovers in Austria, an assessment approach has been developed. A conceptual draft for monitoring biocover performance and recommendations for practical application are presented. PMID:19282167

  10. Assessment of ultrasound monitor image display performance.

    PubMed

    Moore, Sally C; Munnings, Craig R; Brettle, David S; Evans, J Anthony

    2011-06-01

    The display monitor on an ultrasound scanner is used to make primary diagnoses. In this study, 31 ultrasound systems were assessed against current American Association of Physicists in Medicine (AAPM) display standards. Maximum and minimum luminance (Lmax and Lmin) were measured, along with ambient luminance, Lamb (cd/m²), and room illuminance (lux). The luminance ratio was calculated as LR' = (Lmax + Lamb)/(Lmin + Lamb). Initially, only 8/31 systems (26%) passed all the criteria. After adjustment, a further 7/31 (23%) passed, making a total of 15/31 passes (48%). A total of 16/31 (52%) were considered overall fails: three due to poor room lighting and 14 due to poor monitor performance. Allowing for measurement uncertainty, the number of failures could be as low as 6/31 (19%). Although further work is required to confirm the applicability of these results, it is of concern that three-quarters of ultrasound scanners could be suboptimally adjusted, with 19%-55% unable to pass the AAPM criteria. The impact of this on clinical practice is unknown, but there is clearly a need to review display quality assurance on ultrasound scanners. PMID:21601138
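    The ambient-corrected luminance ratio described above is a simple arithmetic check. The minimal sketch below illustrates it; the pass threshold and the example luminance readings are assumptions for illustration only and should be replaced with the acceptance criteria of the AAPM report actually in use.

```python
# Minimal sketch of the luminance-ratio check described above. The threshold
# and the readings are illustrative assumptions, not values from the paper.

def luminance_ratio(l_max, l_min, l_amb):
    """Ambient-corrected luminance ratio LR' = (Lmax + Lamb) / (Lmin + Lamb).

    All inputs in cd/m^2.
    """
    return (l_max + l_amb) / (l_min + l_amb)

if __name__ == "__main__":
    lr = luminance_ratio(l_max=220.0, l_min=0.6, l_amb=1.5)  # hypothetical readings
    ASSUMED_MIN_LR = 250.0  # illustrative acceptance threshold, not from the paper
    print(f"LR' = {lr:.1f} -> {'pass' if lr >= ASSUMED_MIN_LR else 'fail'}")
```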

  11. Performance enhancement of low-cost, high-accuracy, state estimation for vehicle collision prevention system using ANFIS

    NASA Astrophysics Data System (ADS)

    Saadeddin, Kamal; Abdel-Hafez, Mamoun F.; Jaradat, Mohammad A.; Jarrah, Mohammad Amin

    2013-12-01

    In this paper, a low-cost navigation system that fuses the measurements of the inertial navigation system (INS) and the global positioning system (GPS) receiver is developed. First, the system's dynamics are obtained based on a vehicle's kinematic model. Second, the INS and GPS measurements are fused using an extended Kalman filter (EKF) approach. Subsequently, an artificial intelligence based approach for the fusion of INS/GPS measurements is developed based on an Input-Delayed Adaptive Neuro-Fuzzy Inference System (IDANFIS). Experimental tests are conducted to demonstrate the performance of the two sensor fusion approaches. It is found that the use of the proposed IDANFIS approach achieves a reduction in the integration development time and an improvement in the estimation accuracy of the vehicle's position and velocity compared to the EKF based approach.
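    The study's EKF fuses INS and GPS measurements through a vehicle kinematic model. The sketch below shows only the generic predict/update structure such a filter follows, using a simplified 1-D linear constant-velocity model; the state, noise values, and simulated measurements are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

# Illustrative 1-D linear Kalman filter showing the predict/update structure
# that an INS/GPS fusion filter follows; all values below are made up.

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity state transition
B = np.array([[0.5 * dt**2], [dt]])     # acceleration (INS) input matrix
H = np.array([[1.0, 0.0]])              # GPS measures position only
Q = 0.05 * np.eye(2)                    # process noise (assumed)
R = np.array([[4.0]])                   # GPS noise variance (assumed)

x = np.zeros((2, 1))                    # state estimate [position, velocity]
P = np.eye(2)                           # estimate covariance

def predict(x, P, accel):
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z_gps):
    y = np.array([[z_gps]]) - H @ x     # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
for k in range(100):
    x, P = predict(x, P, accel=1.0 + 0.1 * rng.standard_normal())
    if k % 10 == 0:                     # GPS fixes arrive at a slower rate than INS samples
        true_pos = 0.5 * 1.0 * (k * dt) ** 2
        x, P = update(x, P, z_gps=true_pos + 2.0 * rng.standard_normal())
print("estimated position/velocity:", x.ravel())
```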

  12. Accuracy and Utility of Deformable Image Registration in ⁶⁸Ga 4D PET/CT Assessment of Pulmonary Perfusion Changes During and After Lung Radiation Therapy

    SciTech Connect

    Hardcastle, Nicholas; Hofman, Michael S.; Hicks, Rodney J.; Callahan, Jason; Kron, Tomas; MacManus, Michael P.; Ball, David L.; Jackson, Price; Siva, Shankar

    2015-09-01

    Purpose: Measuring changes in lung perfusion resulting from radiation therapy dose requires registration of the functional imaging to the radiation therapy treatment planning scan. This study investigates registration accuracy and utility for positron emission tomography (PET)/computed tomography (CT) perfusion imaging in radiation therapy for non–small cell lung cancer. Methods: ⁶⁸Ga 4-dimensional PET/CT ventilation-perfusion imaging was performed before, during, and after radiation therapy for 5 patients. Rigid registration and deformable image registration (DIR) using B-splines and Demons algorithms were performed with the CT data to obtain a deformation map between the functional images and the planning CT. Contour propagation accuracy and correspondence of anatomic features were used to assess registration accuracy. The Wilcoxon signed-rank test was used to determine statistical significance. Changes in lung perfusion resulting from radiation therapy dose were calculated for each registration method for each patient and averaged over all patients. Results: With B-splines/Demons DIR, the median distance to agreement between lung contours was modestly reduced by 0.9/1.1 mm, 1.3/1.6 mm, and 1.3/1.6 mm for pretreatment, midtreatment, and posttreatment (P<.01 for all), and the median Dice score between lung contours improved by 0.04/0.04, 0.05/0.05, and 0.05/0.05 for pretreatment, midtreatment, and posttreatment (P<.001 for all). The distance between anatomic features was reduced with DIR by a median of 2.5 mm and 2.8 mm for the pretreatment and midtreatment time points, respectively (P=.001), and by 1.4 mm for posttreatment (P>.2). Poorer posttreatment results were likely caused by posttreatment pneumonitis and tumor regression. Up to 80% standardized uptake value loss in perfusion scans was observed. There was limited change in the loss of lung perfusion between registration methods; however, Demons resulted in larger interpatient variation compared with rigid and B-splines registration.
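    The two contour-agreement metrics quoted above (Dice score and distance to agreement) can be computed from binary masks. The sketch below is a generic illustration using SciPy, with toy sphere masks standing in for lung contours; it is not the registration pipeline used in the study.

```python
import numpy as np
from scipy import ndimage

# Generic computation of a Dice score and a symmetric mean surface distance
# between two binary masks; the spheres below are toy stand-ins for lungs.

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def mean_surface_distance(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean distance (same units as spacing) between mask surfaces."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a ^ ndimage.binary_erosion(a)
    surf_b = b ^ ndimage.binary_erosion(b)
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    return 0.5 * (dist_to_b[surf_a].mean() + dist_to_a[surf_b].mean())

# Toy example: two slightly offset spheres.
z, y, x = np.ogrid[:40, :40, :40]
m1 = (z - 20) ** 2 + (y - 20) ** 2 + (x - 20) ** 2 < 15 ** 2
m2 = (z - 20) ** 2 + (y - 22) ** 2 + (x - 20) ** 2 < 15 ** 2
print(f"Dice = {dice(m1, m2):.3f}, mean surface distance = {mean_surface_distance(m1, m2):.2f}")
```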

  13. Ground-based differential absorption lidar for water-vapor profiling: assessment of accuracy, resolution, and meteorological applications.

    PubMed

    Wulfmeyer, V; Bösenberg, J

    1998-06-20

    The accuracy and the resolution of water-vapor measurements made with the ground-based differential absorption lidar (DIAL) system of the Max-Planck-Institute (MPI) are determined. A theoretical analysis, intercomparisons with radiosondes, and measurements in high-altitude clouds lead to the conclusion that, with the MPI DIAL system, water-vapor measurements with a systematic error of <5% can be performed throughout the troposphere. Special emphasis is placed on the outstanding daytime and nighttime performance of the DIAL system in the lower troposphere. With a time resolution of 1 min, the statistical error varies between 0.05 g/m³ in the near range using 75-m vertical resolution and, depending on the meteorological conditions, approximately 0.25 g/m³ at 2 km using 150-m vertical resolution. When the eddy correlation method is applied, this accuracy and resolution are sufficient to determine water-vapor flux profiles in the convective boundary layer, with a statistical error of <10% in each data point, up to approximately 1700 m. These results have contributed to the recognition of the DIAL method as an excellent tool for tropospheric research, in particular for boundary layer research, and as a calibration standard for radiosondes and satellites. PMID:18273352
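    The eddy correlation idea referenced above estimates the water-vapor flux as the covariance of vertical-wind and humidity fluctuations. The sketch below illustrates that calculation on synthetic time series; the series lengths, noise levels, and units are assumptions for demonstration only.

```python
import numpy as np

# Minimal sketch of the eddy-correlation estimate <w'q'>: the covariance of
# vertical-wind and humidity fluctuations. All series below are synthetic.

rng = np.random.default_rng(1)
n = 6000                                  # e.g., 10 minutes at 10 Hz (assumed)
w = 0.3 * rng.standard_normal(n)          # vertical wind (m/s)
q = 5.0 + 0.2 * w + 0.05 * rng.standard_normal(n)  # humidity (g/m^3), correlated with w

w_fluct = w - w.mean()
q_fluct = q - q.mean()
flux = np.mean(w_fluct * q_fluct)         # <w'q'>, in g m^-2 s^-1
print(f"water-vapor flux ~ {flux:.3f} g m^-2 s^-1")
```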

  14. Neoadjuvant chemotherapy-related histologic changes in radical cystectomy: assessment accuracy and prediction of response.

    PubMed

    Wang, Hui Jun; Solanki, Shraddha; Traboulsi, Samer; Kassouf, Wassim; Brimo, Fadi

    2016-07-01

    We evaluated the spectrum of histologic changes associated with neoadjuvant chemotherapy (NAC) and compared them with those resulting from transurethral resection (TUR). Twenty-five patients who received NAC were divided into NAC responders and possible NAC responders based on both their preoperative clinical/radiographic findings (clinical stage, hydronephrosis, palpable mass) and the radical cystectomy (RC) findings. The following histologic features were assessed: fibrosis/myofibroblastic reaction, hyalinization in the bladder wall, inflammatory reaction, calcification, foreign-body giant cells, necrosis, sheets of foamy macrophages, and fibrosis/hyalinization/necrosis in the lymph nodes (LNs). Overall, there was a significant histologic overlap between all groups. However, patients who received NAC had a significantly higher likelihood of showing hyalinization, and fewer giant cells and less inflammatory reaction, than those who received TUR only. Moreover, the only significantly different histologic features in NAC responders versus TUR responders were hyalinization and LN changes, with those 2 features present in 25% and 0% of the possible NAC responder group, respectively. Lastly, there was no significant difference between the possible NAC responder group and the TUR-only arm. It appears that TUR and NAC result in overlapping histologic changes. In cases with no/minimal residual disease at RC, it is difficult to attribute the changes to the NAC effect alone, except if (1) hyalinization of the bladder wall or LN changes are present, or (2) the preoperative clinical stage was beyond what could be resected by TUR. PMID:27321168

  15. Constraining OCT with Knowledge of Device Design Enables High Accuracy Hemodynamic Assessment of Endovascular Implants

    PubMed Central

    Brown, Jonathan; Lopes, Augusto C.; Kunio, Mie; Kolachalama, Vijaya B.; Edelman, Elazer R.

    2016-01-01

    Background Stacking cross-sectional intravascular images permits three-dimensional rendering of endovascular implants, yet introduces between-frame uncertainties that limit characterization of device placement and the hemodynamic microenvironment. In a porcine coronary stent model, we demonstrate enhanced OCT reconstruction with preservation of between-frame features through fusion with angiography and a priori knowledge of stent design. Methods and Results Strut positions were extracted from sequential OCT frames. Reconstruction with standard interpolation generated discontinuous stent structures. By computationally constraining interpolation to known stent skeletons fitted to 3D ‘clouds’ of OCT-Angio-derived struts, implant anatomy was resolved, accurately rendering features from implant diameter and curvature (n = 1 vessel, r² = 0.91, 0.90, respectively) to individual strut-wall configurations (average displacement error ~15 μm). This framework facilitated hemodynamic simulation (n = 1 vessel), showing the critical importance of accurate anatomic rendering in characterizing both quantitative and basic qualitative flow patterns. Discontinuities with standard approaches systematically introduced noise and bias, poorly capturing regional flow effects. In contrast, the enhanced method preserved multi-scale (local strut to regional stent) flow interactions, demonstrating the impact of regional contexts in defining the hemodynamic consequence of local deployment errors. Conclusion Fusion of planar angiography and knowledge of device design permits enhanced OCT image analysis of in situ tissue-device interactions. Given emerging interests in simulation-derived hemodynamic assessment as surrogate measures of biological risk, such fused modalities offer a new window into patient-specific implant environments. PMID:26906566

  16. Assessment of the DNS Data Accuracy Using RANS-DNS Simulations

    NASA Astrophysics Data System (ADS)

    Colmenares F., Juan D.; Poroseva, Svetlana V.; Murman, Scott M.

    2015-11-01

    Direct numerical simulations (DNS) provide the most accurate computational description of a turbulent flow field and its statistical characteristics. Therefore, results of simulations with Reynolds-Averaged Navier-Stokes (RANS) turbulence models are often evaluated against DNS data. The goal of our study is to determine a limit of RANS model performance in relation to existing DNS data. Since no model can outperform DNS, this limit can be determined by solving RANS equations with all unknown terms being represented by their DNS data (RANS-DNS simulations). In the presentation, results of RANS-DNS simulations conducted using transport equations for velocity moments of second, third, and fourth orders in incompressible planar wall-bounded flows are discussed. The results were obtained with two solvers: OpenFOAM and in-house code for fully-developed flows at different Reynolds numbers using different DNS databases. The material is in part based upon work supported by NASA under award NNX12AJ61A.

  17. Assessing the accuracy of the isotropic periodic sum method through Madelung energy computation.

    PubMed

    Ojeda-May, Pedro; Pu, Jingzhi

    2014-04-28

    We tested the isotropic periodic sum (IPS) method for computing Madelung energies of ionic crystals. The performance of the method, both in its nonpolar (IPSn) and polar (IPSp) forms, was compared with that of the zero-charge and Wolf potentials [D. Wolf, P. Keblinski, S. R. Phillpot, and J. Eggebrecht, J. Chem. Phys. 110, 8254 (1999)]. The results show that the IPSn and IPSp methods converge the Madelung energy to its reference value with an average deviation of ∼10⁻⁴ and ∼10⁻⁷ energy units, respectively, for a cutoff range of 18-24a (a/2 being the nearest-neighbor ion separation). However, minor oscillations were detected for the IPS methods when deviations of the computed Madelung energies were plotted on a logarithmic scale as a function of the cutoff distance. To remove such oscillations, we introduced a modified IPSn potential in which both the local-region and long-range electrostatic terms are damped, in analogy to the Wolf potential. With the damped-IPSn potential, a smoother convergence was achieved. In addition, we observed a better agreement between the damped-IPSn and IPSp methods, which suggests that damping the IPSn potential is in effect similar to adding a screening potential in IPSp. PMID:24784252
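    For context, the reference quantity being converged above is the Madelung constant of the crystal. The sketch below computes the rock-salt value by a plain Evjen-weighted direct lattice sum; it is only a sanity-check baseline for comparison and is not the IPS or Wolf scheme discussed in the paper.

```python
import numpy as np

# Baseline direct lattice sum for the NaCl Madelung constant using Evjen
# weights (ions on the faces/edges/corners of the truncation cube count with
# weight 1/2 per bounding axis). Not the IPS or Wolf method from the paper.

def madelung_nacl(n_shells=8):
    total = 0.0
    for i in range(-n_shells, n_shells + 1):
        for j in range(-n_shells, n_shells + 1):
            for k in range(-n_shells, n_shells + 1):
                if i == j == k == 0:
                    continue
                weight = 1.0
                for c in (i, j, k):
                    if abs(c) == n_shells:
                        weight *= 0.5       # Evjen boundary weight
                total += weight * (-1) ** (i + j + k) / np.sqrt(i * i + j * j + k * k)
    return -total                           # conventionally quoted positive (~1.7476)

print(f"Madelung constant (NaCl) ~ {madelung_nacl(8):.5f}")
```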

  18. A geostatistical methodology to assess the accuracy of unsaturated flow models

    SciTech Connect

    Smoot, J.L.; Williams, R.E.

    1996-04-01

    The Pacific Northwest National Laboratory (PNNL) has developed a Hydrologic Evaluation Methodology (HEM) to assist the U.S. Nuclear Regulatory Commission in evaluating the potential that infiltrating meteoric water will produce leachate at commercial low-level radioactive waste disposal sites. Two key issues are raised in the HEM: (1) evaluation of mathematical models that predict facility performance, and (2) estimation of the uncertainty associated with these mathematical model predictions. The technical objective of this research is to adapt geostatistical tools commonly used for model parameter estimation to the problem of estimating the spatial distribution of the dependent variable to be calculated by the model. To fulfill this objective, a database describing the spatiotemporal movement of water injected into unsaturated sediments at the Hanford Site in Washington State was used to develop a new method for evaluating mathematical model predictions. Measured water content data were interpolated geostatistically to a 16 x 16 x 36 grid at several time intervals. Then a mathematical model was used to predict water content at the same grid locations at the selected times. Node-by-node comparison of the mathematical model predictions with the geostatistically interpolated values was conducted. The method facilitates a complete accounting and categorization of model error at every node. The comparison suggests that model results generally are within measurement error. The worst model error occurs in silt lenses and is in excess of measurement error.
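    The node-by-node accounting described above reduces to comparing two gridded fields and flagging where their difference exceeds the measurement error. The sketch below illustrates that step only; it skips the geostatistical interpolation itself, and the grid values and error level are synthetic assumptions.

```python
import numpy as np

# Sketch of node-by-node error accounting: compare a model-predicted water
# content field against an interpolated "measured" field on the same grid and
# flag nodes whose difference exceeds an assumed measurement error.

rng = np.random.default_rng(2)
shape = (16, 16, 36)                                         # grid used in the study
interpolated = 0.20 + 0.02 * rng.standard_normal(shape)      # interpolated measurements (synthetic)
predicted = interpolated + 0.01 * rng.standard_normal(shape) # model prediction (synthetic)

measurement_error = 0.02                                     # assumed uncertainty (vol. water content)
diff = predicted - interpolated
within = np.abs(diff) <= measurement_error

print(f"nodes within measurement error: {within.mean():.1%}")
print(f"largest model error: {np.abs(diff).max():.3f}")
```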

  19. Topographic accuracy assessment of bare earth lidar-derived unstructured meshes

    NASA Astrophysics Data System (ADS)

    Bilskie, Matthew V.; Hagen, Scott C.

    2013-02-01

    This study is focused on the integration of bare earth lidar (Light Detection and Ranging) data into unstructured (triangular) finite element meshes and the implications for simulating storm surge inundation using a shallow water equations model. A methodology is developed to compute the root mean square error (RMSE) and the 95th percentile of vertical elevation errors when using four different interpolation methods (linear, inverse distance weighted, natural neighbor, and cell averaging) to resample bare earth lidar and lidar-derived digital elevation models (DEMs) onto unstructured meshes at different resolutions. The results are consolidated into a table of optimal interpolation methods that minimize the vertical elevation error of an unstructured mesh for a given mesh node density. The cell area averaging method performed most accurately when DEM grid cells within 0.25 times the ratio of local element size to DEM cell size were averaged. The methodology is applied to simulate inundation extent and maximum water levels in southern Mississippi due to Hurricane Katrina, which illustrates that local changes in the representation of topography, such as adjusting element size and interpolation method, drastically alter simulated storm surge both locally and non-locally. The methods and results presented have utility and implications for any modeling application that uses bare earth lidar.
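    The two error metrics named above (RMSE and the 95th percentile of vertical errors) can be computed once mesh-node elevations have been interpolated from a DEM. The sketch below uses simple linear interpolation on a synthetic surface as a stand-in; the surface, point sets, and noise level are assumptions, and the study's own table of optimal methods should be consulted for real meshes.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Sketch: resample a gridded DEM onto unstructured mesh nodes with linear
# interpolation, then compute RMSE and the 95th percentile of vertical errors
# against reference elevations. All data below are synthetic.

rng = np.random.default_rng(3)
xs = np.linspace(0.0, 1000.0, 101)                 # DEM grid coordinates (m)
ys = np.linspace(0.0, 1000.0, 101)
X, Y = np.meshgrid(xs, ys, indexing="ij")
dem = 5.0 + 0.002 * X + 0.5 * np.sin(Y / 100.0)    # synthetic bare-earth surface (m)

interp = RegularGridInterpolator((xs, ys), dem, method="linear")

nodes = rng.uniform(0.0, 1000.0, size=(500, 2))    # unstructured mesh node locations
z_mesh = interp(nodes)                             # interpolated mesh elevations
z_ref = (5.0 + 0.002 * nodes[:, 0] + 0.5 * np.sin(nodes[:, 1] / 100.0)
         + 0.05 * rng.standard_normal(500))        # reference (checkpoint) elevations

err = z_mesh - z_ref
rmse = np.sqrt(np.mean(err ** 2))
p95 = np.percentile(np.abs(err), 95)
print(f"RMSE = {rmse:.3f} m, 95th percentile |error| = {p95:.3f} m")
```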

  20. DEVELOPMENT AND ASSESSMENT OF METHODS FOR ESTIMATING PROTECTIVE CLOTHING PERFORMANCE

    EPA Science Inventory

    Approaches for predicting the permeation resistance of chemical protective clothing polymers were assessed for accuracy and applicability to the Premanufacture Notification (PMN) review process of the U.S. EPA Office of Toxic Substances (OTS). The predictive models are based on r...

  1. Performance model assessment for multi-junction concentrating photovoltaic systems.

    SciTech Connect

    Riley, Daniel M.; McConnell, Robert.; Sahm, Aaron; Crawford, Clark; King, David L.; Cameron, Christopher P.; Foresi, James S.

    2010-03-01

    Four approaches to modeling multi-junction concentrating photovoltaic system performance are assessed by comparing modeled performance to measured performance. Measured weather, irradiance, and system performance data were collected on two systems over a one month period. Residual analysis is used to assess the models and to identify opportunities for model improvement.

  2. Assessing the accuracy of improved force-matched water models derived from Ab initio molecular dynamics simulations.

    PubMed

    Köster, Andreas; Spura, Thomas; Rutkai, Gábor; Kessler, Jan; Wiebeler, Hendrik; Vrabec, Jadran; Kühne, Thomas D

    2016-07-15

    The accuracy of water models derived from ab initio molecular dynamics simulations by means of an improved force-matching scheme is assessed for various thermodynamic, transport, and structural properties. It is found that although the resulting force-matched water models are typically less accurate than fully empirical force fields in predicting thermodynamic properties, they are nevertheless much more accurate than generally appreciated in reproducing the structure of liquid water, and in fact supersede most of the commonly used empirical water models. This development demonstrates the feasibility of routinely parametrizing computationally efficient yet predictive potential energy functions based on accurate ab initio molecular dynamics simulations for a large variety of different systems. © 2016 Wiley Periodicals, Inc. PMID:27232117

  3. Assessing the accuracy of the general AMBER force field for 2,2,2-trifluoroethanol as solvent.

    PubMed

    Jia, Xiangyu; Zhang, John Z H; Mei, Ye

    2013-06-01

    The alcohol-based cosolvent 2,2,2-trifluoroethanol (TFE) has been used widely in protein science and engineering. Many experimental and computational studies of its impact on protein structure have been carried out, but consensus on the mechanism has not been reached. In the past decade, several molecular mechanical models have been proposed to model the structure and dynamics of TFE. However, further calibration is still necessary. In particular, its compatibility with protein force fields has not been well examined. The general AMBER force field (GAFF) has proved quite successful in modeling small organic molecules and is compatible with the contemporary AMBER force fields. In this work, we assessed the accuracy of GAFF for the TFE molecule as a bulk solvent. Several properties, such as density, dipole moment, and radial distribution functions, were calculated and compared with experimental data. The results show that GAFF performs fairly well in describing bulk TFE, although there is still room for improvement. PMID:23397068

  4. Modeling colloid transport for performance assessment.

    PubMed

    Contardi, J S; Turner, D R; Ahn, T M

    2001-02-01

    The natural system is expected to contribute to isolation at the proposed high-level nuclear waste (HLW) geologic repository at Yucca Mountain, NV (YM). In developing performance assessment (PA) computer models to simulate long-term behavior at YM, colloidal transport of radionuclides has been proposed as a critical factor because of the possible reduced interaction with the geologic media. Site-specific information on the chemistry and natural colloid concentration of saturated zone groundwaters in the vicinity of YM is combined with a surface complexation sorption model to evaluate the impact of natural colloids on calculated retardation factors (RF) for several radioelements of concern in PA. Inclusion of colloids into the conceptual model can reduce the calculated effective retardation significantly. Strongly sorbed radionuclides such as americium and thorium are most affected by pseudocolloid formation and transport, with a potential reduction in RF of several orders of magnitude. Radioelements that are less strongly sorbed under YM conditions, such as uranium and neptunium, are not affected significantly by colloid transport, and transport of plutonium in the valence state is only moderately enhanced. Model results showed no increase in the peak mean annual total effective dose equivalent (TEDE) within a compliance period of 10,000 years, although this is strongly dependent on container life in the base case scenario. At longer times, simulated container failures increase and the TEDE from the colloidal models increased by a factor of 60 from the base case. By using mechanistic models and sensitivity analyses to determine what parameters and transport processes affect the TEDE, colloidal transport in future versions of the TPA code can be represented more accurately. PMID:11288586

  5. Accuracy assessment of an automatic image-based PET/CT registration for ultrasound-guided biopsies and ablations

    NASA Astrophysics Data System (ADS)

    Kadoury, Samuel; Wood, Bradford J.; Venkatesan, Aradhana M.; Dalal, Sandeep; Xu, Sheng; Kruecker, Jochen

    2011-03-01

    The multimodal fusion of spatially tracked real-time ultrasound (US) with a prior CT scan has demonstrated clinical utility, accuracy, and positive impact upon clinical outcomes when used for guidance during biopsy and radiofrequency ablation in the treatment of cancer. Additionally, the combination of CT-guided procedures with positron emission tomography (PET) may not only enhance navigation but also add valuable information regarding the specific location and volume of targeted masses that may be invisible on CT and US. The accuracy of this fusion depends on reliable, reproducible registration methods between PET and CT, which can avoid the extensive manual registration corrections that are time consuming in an interventional setting. In this paper, we present a registration workflow for PET/CT/US fusion by analyzing various image metrics based on normalized mutual information and cross-correlation, using both rigid and affine transformations to automatically align PET and CT. Registration is performed between the CT component of the prior PET/CT and the intra-procedural CT scan used for navigation to maximize image congruence. We evaluate the accuracy of the PET/CT registration by computing fiducial and target registration errors using anatomical landmarks and lesion locations, respectively. We also report differences from gold-standard manual alignment as well as the root mean square errors for CT/US fusion. Ten patients with prior PET/CT who underwent ablation or biopsy procedures were selected for this study. Optimal results were obtained using a cross-correlation-based rigid registration with a landmark localization error of 1.1 +/- 0.7 mm using a discrete graph-minimizing scheme. We demonstrate the feasibility of automated fusion of PET/CT and its suitability for multi-modality ultrasound-guided navigation procedures.
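    The fiducial and target registration errors reported above are landmark-based distance measures. The sketch below illustrates them with a generic SVD (Kabsch) rigid alignment of synthetic point sets; the study itself used intensity-based (mutual-information / cross-correlation) registration, so this is only a worked illustration of the error measures, not the paper's method.

```python
import numpy as np

# Generic landmark-based rigid alignment (Kabsch/SVD) with fiducial
# registration error (FRE) on the alignment landmarks and target registration
# error (TRE) on held-out lesion locations. All points below are synthetic.

def rigid_fit(src, dst):
    """Return rotation R and translation t minimising ||R @ src_i + t - dst_i||."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

rng = np.random.default_rng(4)
fiducials_ct = rng.uniform(0, 100, size=(6, 3))              # anatomical landmarks (mm)
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -3.0, 12.0])
fiducials_pet = fiducials_ct @ R_true.T + t_true + 0.5 * rng.standard_normal((6, 3))

R, t = rigid_fit(fiducials_pet, fiducials_ct)                # align PET landmarks to CT
fre = np.linalg.norm(fiducials_pet @ R.T + t - fiducials_ct, axis=1).mean()

lesions_ct = rng.uniform(0, 100, size=(3, 3))                # held-out target points
lesions_pet = lesions_ct @ R_true.T + t_true
tre = np.linalg.norm(lesions_pet @ R.T + t - lesions_ct, axis=1).mean()
print(f"FRE = {fre:.2f} mm, TRE = {tre:.2f} mm")
```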

  6. High-Capacity Communications from Martian Distances Part 4: Assessment of Spacecraft Pointing Accuracy Capabilities Required For Large Ka-Band Reflector Antennas

    NASA Technical Reports Server (NTRS)

    Hodges, Richard E.; Sands, O. Scott; Huang, John; Bassily, Samir

    2006-01-01

    Improved surface accuracy for deployable reflectors has brought with it the possibility of Ka-band reflector antennas with extents on the order of 1000 wavelengths. Such antennas are being considered for high-rate data delivery from planetary distances. Maintaining pointing losses at reasonable levels requires a sufficiently capable Attitude Determination and Control System (ADCS) onboard the spacecraft. This paper provides an assessment of currently available ADCS strategies and performance levels. Specific factors considered include (1) the use of "beaconless" (open-loop) tracking versus a beacon on the Earth side of the link, and (2) the selection of a fine-pointing strategy (body-fixed/spacecraft pointing, reflector pointing, or various forms of electronic beam steering). Capabilities of recent spacecraft are discussed.
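    Why pointing accuracy matters for a ~1000-wavelength aperture can be seen from standard antenna rules of thumb: the half-power beamwidth is roughly 70 λ/D degrees, and the pointing loss is roughly 12 (θ/θ3dB)² dB. The sketch below applies these textbook approximations; the frequency, aperture size, and error values are illustrative assumptions, not figures from the paper.

```python
import math

# Rule-of-thumb pointing-loss estimate for a large Ka-band reflector. The
# 70*lambda/D beamwidth and 12*(theta/HPBW)^2 dB loss formulas are standard
# textbook approximations, not the paper's analysis; all inputs are assumed.

def half_power_beamwidth_deg(wavelength_m, diameter_m, k=70.0):
    return k * wavelength_m / diameter_m

def pointing_loss_db(pointing_error_deg, hpbw_deg):
    return 12.0 * (pointing_error_deg / hpbw_deg) ** 2

freq_hz = 32e9                          # Ka-band downlink (illustrative)
wavelength = 3e8 / freq_hz              # ~9.4 mm
diameter = 10.0                         # ~1000-wavelength aperture (illustrative)

hpbw = half_power_beamwidth_deg(wavelength, diameter)
for err_mdeg in (5.0, 10.0, 20.0):
    loss = pointing_loss_db(err_mdeg / 1000.0, hpbw)
    print(f"pointing error {err_mdeg:5.1f} mdeg -> loss ~ {loss:.2f} dB (HPBW ~ {hpbw * 1000:.0f} mdeg)")
```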

  7. Electrode replacement does not affect classification accuracy in dual-session use of a passive brain-computer interface for assessing cognitive workload

    PubMed Central

    Estepp, Justin R.; Christensen, James C.

    2015-01-01

    The passive brain-computer interface (pBCI) framework has been shown to be a very promising construct for assessing cognitive and affective state in both individuals and teams. There is a growing body of work that focuses on solving the challenges of transitioning pBCI systems from the research laboratory environment to practical, everyday use. An interesting issue is what impact methodological variability may have on the ability to reliably identify (neuro)physiological patterns that are useful for state assessment. This work aimed at quantifying the effects of methodological variability in a pBCI design for detecting changes in cognitive workload. Specific focus was directed toward the effects of replacing electrodes over dual sessions (thus inducing changes in placement, electromechanical properties, and/or impedance between the electrode and skin surface) on the accuracy of several machine learning approaches in a binary classification problem. In investigating these methodological variables, it was determined that the removal and replacement of the electrode suite between sessions does not impact the accuracy of a number of learning approaches when trained on one session and tested on a second. This finding was confirmed by comparing to a control group for which the electrode suite was not replaced between sessions. This result suggests that sensors (both neurological and peripheral) may be removed and replaced over the course of many interactions with a pBCI system without affecting its performance. Future work on multi-session and multi-day pBCI system use should seek to replicate this (lack of) effect between sessions in other tasks, temporal time courses, and data analytic approaches while also focusing on non-stationarity and variable classification performance due to intrinsic factors. PMID:25805963
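    The core evaluation described above is training a classifier on session-1 data and testing it on session-2 data. The sketch below reproduces only that evaluation pattern with scikit-learn on synthetic features; the feature generator, the drift term mimicking electrode replacement, and the choice of LDA are assumptions, whereas the study compared several learning approaches on EEG-derived features.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Generic cross-session evaluation: fit on session 1, score on session 2.
# The synthetic "band-power" features and the LDA classifier are illustrative.

rng = np.random.default_rng(5)

def make_session(n_trials=200, n_features=30, shift=0.0):
    """Two workload classes; `shift` mimics between-session drift."""
    y = rng.integers(0, 2, size=n_trials)
    X = rng.standard_normal((n_trials, n_features)) + shift
    X[y == 1] += 0.8                      # class separation
    return X, y

X1, y1 = make_session(shift=0.0)          # session 1 (electrodes applied)
X2, y2 = make_session(shift=0.3)          # session 2 (electrodes replaced)

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
clf.fit(X1, y1)
print(f"within-session accuracy : {clf.score(X1, y1):.2f}")
print(f"cross-session accuracy  : {clf.score(X2, y2):.2f}")
```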

  8. Implementing Performance Assessment: Promises, Problems, and Challenges.

    ERIC Educational Resources Information Center

    Kane, Michael B., Ed.; Mitchell, Ruth, Ed.

    The chapters in this collection contribute to the debate about the value and usefulness of radically different kinds of assessments in the U.S. educational system by considering and expanding on the theoretical underpinnings of reports and speculation. The chapters are: (1) "Assessment Reform: Promises and Challenges" (Nidhi Khattri and David…

  9. Scalable Methods for Uncertainty Quantification, Data Assimilation and Target Accuracy Assessment for Multi-Physics Advanced Simulation of Light Water Reactors

    NASA Astrophysics Data System (ADS)

    Khuwaileh, Bassam

    High fidelity simulation of nuclear reactors entails large scale applications characterized by high dimensionality and tremendous complexity, where various physics models are integrated in the form of coupled models (e.g., neutronics with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors, achieved through an enhanced understanding of design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing Uncertainty Quantification (UQ), Data Assimilation (DA), and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved by identifying the important/influential degrees of freedom (DoF) via subspace analysis, such that the required analysis can be recast in terms of the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single-physics and multi-physics applications with feedback. Then the reduced subspace is used to solve realistic, large scale forward (UQ) and inverse (DA and TAA) problems. Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL
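    The subspace construction described above can be illustrated by a generic Karhunen-Loeve (proper orthogonal decomposition) reduction of a snapshot matrix: collect snapshots of the high-dimensional response, take an SVD, and keep the leading modes. The sketch below is a minimal, synthetic illustration of that idea only, not the dissertation's algorithms; the dimensions, variance threshold, and data are assumptions.

```python
import numpy as np

# Generic Karhunen-Loeve / POD subspace construction from snapshots via SVD.
# All dimensions and data below are synthetic and purely illustrative.

rng = np.random.default_rng(6)
n_dof, n_snapshots = 5000, 60                      # high-dimensional state, few runs

# Synthetic snapshots that actually live near a 5-dimensional subspace.
basis_true = rng.standard_normal((n_dof, 5))
snapshots = basis_true @ rng.standard_normal((5, n_snapshots))
snapshots += 1e-3 * rng.standard_normal((n_dof, n_snapshots))   # noise

mean = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)

energy = np.cumsum(s ** 2) / np.sum(s ** 2)
rank = int(np.searchsorted(energy, 0.999) + 1)     # modes retaining 99.9% of variance
reduced_basis = U[:, :rank]                        # the influential degrees of freedom

# Any new high-dimensional state can now be represented by `rank` coefficients.
x = basis_true @ rng.standard_normal(5)
coeffs = reduced_basis.T @ (x - mean.ravel())
x_approx = reduced_basis @ coeffs + mean.ravel()
print(f"retained modes: {rank}, reconstruction error: {np.linalg.norm(x - x_approx):.2e}")
```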

  10. NREL Evaluates the Thermal Performance of Uninsulated Walls to Improve the Accuracy of Building Energy Simulation Tools (Fact Sheet)

    SciTech Connect

    Not Available

    2012-01-01

    This technical highlight describes NREL research to develop models of uninsulated wall assemblies that help to improve the accuracy of building energy simulation tools when modeling potential energy savings in older homes. Researchers at the National Renewable Energy Laboratory (NREL) have developed models for evaluating the thermal performance of walls in existing homes that will improve the accuracy of building energy simulation tools when predicting potential energy savings of existing homes. Uninsulated walls are typical in older homes where the wall cavities were not insulated during construction or where the insulating material has settled. Accurate calculation of heat transfer through building enclosures will help determine the benefit of energy efficiency upgrades in order to reduce energy consumption in older American homes. NREL performed detailed computational fluid dynamics (CFD) analysis to quantify the energy loss/gain through the walls and to visualize different airflow regimes within the uninsulated cavities. The effects of ambient outdoor temperature, radiative properties of building materials, and insulation level were investigated. The study showed that multi-dimensional airflows occur in walls with uninsulated cavities and that the thermal resistance is a function of the outdoor temperature - an effect not accounted for in existing building energy simulation tools. The study quantified the difference between CFD prediction and the approach currently used in building energy simulation tools over a wide range of conditions. For example, researchers found that CFD predicted lower heating loads and slightly higher cooling loads. Implementation of CFD results into building energy simulation tools such as DOE2 and EnergyPlus will likely reduce the predicted heating load of homes. Researchers also determined that a small air gap in a partially insulated cavity can lead to a significant reduction in thermal resistance. For instance, a 4-in. tall air gap

  11. Positional Accuracy Assessment of the Openstreetmap Buildings Layer Through Automatic Homologous Pairs Detection: the Method and a Case Study

    NASA Astrophysics Data System (ADS)

    Brovelli, M. A.; Minghini, M.; Molinari, M. E.; Zamboni, G.

    2016-06-01

    OpenStreetMap (OSM) is currently the largest openly licensed collection of geospatial data. As OSM is increasingly exploited in a variety of applications, research has paid considerable attention to assessing its quality. This work focuses on assessing the quality of OSM buildings. While most studies available in the literature are limited to evaluating OSM building completeness, this work proposes an original approach to assess the positional accuracy of OSM buildings based on comparison with a reference dataset. The comparison relies on a quasi-automated detection of homologous pairs in the two datasets. Based on the homologous pairs found, warping algorithms such as affine transformations and multi-resolution splines can be applied to the OSM buildings to generate a new version with an optimal local match to the reference layer. A quality assessment of the OSM buildings of Milan Municipality (Northern Italy), covering an area of about 180 km², is then presented. After computing some measures of completeness, the algorithm based on homologous points is run using the building layer of the official vector cartography of Milan Municipality as the reference dataset. Approximately 100,000 homologous points are found, which show a systematic translation of about 0.4 m in both the X and Y directions and a mean distance of about 0.8 m between the datasets. Besides being efficient and highly automated, the algorithm generates a warped version of the OSM buildings that, by construction, matches the reference buildings more closely and can eventually be integrated into the OSM database.
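    The affine-warping step mentioned above amounts to a least-squares fit from homologous point pairs. The sketch below shows that fit and the mean displacement before and after warping on synthetic points; it is a simplified stand-in for the paper's workflow, which also uses multi-resolution splines, and the coordinates and transformation are assumed for illustration.

```python
import numpy as np

# Least-squares fit of a 2-D affine transformation from homologous point
# pairs, plus the mean pair distance before and after warping. Synthetic data.

def fit_affine(src, dst):
    """Solve dst ~ A @ [x, y, 1] for the 2x3 affine matrix A."""
    ones = np.ones((len(src), 1))
    X = np.hstack([src, ones])                       # (n, 3)
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)      # (3, 2)
    return A.T                                       # (2, 3)

rng = np.random.default_rng(7)
osm = rng.uniform(0, 1000, size=(200, 2))            # OSM building points (m), synthetic
A_true = np.array([[1.001, 0.002, 0.4], [-0.002, 0.999, 0.4]])
reference = (np.hstack([osm, np.ones((200, 1))]) @ A_true.T
             + 0.3 * rng.standard_normal((200, 2)))  # reference cartography points

A = fit_affine(osm, reference)
osm_warped = np.hstack([osm, np.ones((200, 1))]) @ A.T

d_before = np.linalg.norm(osm - reference, axis=1).mean()
d_after = np.linalg.norm(osm_warped - reference, axis=1).mean()
print(f"mean homologous-pair distance: {d_before:.2f} m -> {d_after:.2f} m after affine warp")
```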

  12. Performance Assessment and Student Motivation: Questioning Construct Relevant Variance.

    ERIC Educational Resources Information Center

    Parkes, Jay

    Whether performance assessments can be claimed to be more inclusive than traditional assessments was studied in an investigation of whether a student's perceptions of control can be detected in a performance assessment score. It was hypothesized that students' perceptions of control would show no effect on their scores on an objective test of…

  13. Assessment in Performance-Based Secondary Music Classes

    ERIC Educational Resources Information Center

    Pellegrino, Kristen; Conway, Colleen M.; Russell, Joshua A.

    2015-01-01

    After sharing research findings about grading and assessment practices in secondary music ensemble classes, we offer examples of commonly used assessment tools (ratings scale, checklist, rubric) for the performance ensemble. Then, we explore the various purposes of assessment in performance-based music courses: (1) to meet state, national, and…

  14. Exploring the Utility of a Virtual Performance Assessment

    ERIC Educational Resources Information Center

    Clarke-Midura, Jody; Code, Jillianne; Zap, Nick; Dede, Chris

    2011-01-01

    With funding from the Institute of Education Sciences (IES), the Virtual Performance Assessment project at the Harvard Graduate School of Education is developing and studying the feasibility of immersive virtual performance assessments (VPAs) to assess scientific inquiry of middle school students as a standardized component of an accountability…

  15. AN ACCURACY ASSESSMENT OF 1992 LANDSAT-MSS DERIVED LAND COVER FOR THE UPPER SAN PEDRO WATERSHED (U.S./MEXICO)

    EPA Science Inventory

    The utility of Digital Orthophoto Quads (DOQs) in assessing the classification accuracy of land cover derived from Landsat MSS data was investigated. Initially, the suitability of DOQs in distinguishing between different land cover classes was assessed using high-resolution airbo...

  16. Students' and Teachers' Assessments of the Need for Accuracy in the Oral Production of German as a Foreign Language

    ERIC Educational Resources Information Center

    Chavez, Monika

    2007-01-01

    Previous research indicates that foreign language learners are much more focused on accuracy, particularly grammatical accuracy, than their teachers are. The purpose of the current study was to gain a more detailed understanding of American learners' views of the need for accuracy in the oral production of a foreign language (German) by (a)…

  17. Accuracy and Usefulness of Select Methods for Assessing Complete Collection of 24-Hour Urine: A Systematic Review.

    PubMed

    John, Katherine A; Cogswell, Mary E; Campbell, Norm R; Nowson, Caryl A; Legetic, Branka; Hennis, Anselm J M; Patel, Sheena M

    2016-05-01

    Twenty-four-hour urine collection is the recommended method for estimating sodium intake. To investigate the strengths and limitations of methods used to assess completion of 24-hour urine collection, the authors systematically reviewed the literature on the accuracy and usefulness of methods vs para-aminobenzoic acid (PABA) recovery (referent). The percentage of incomplete collections, based on PABA, was 6% to 47% (n=8 studies). The sensitivity and specificity for identifying incomplete collection using creatinine criteria (n=4 studies) was 6% to 63% and 57% to 99.7%, respectively. The most sensitive method for removing incomplete collections was a creatinine index <0.7. In pooled analysis (≥2 studies), mean urine creatinine excretion and volume were higher among participants with complete collection (P<.05); whereas, self-reported collection time did not differ by completion status. Compared with participants with incomplete collection, mean 24-hour sodium excretion was 19.6 mmol higher (n=1781 specimens, 5 studies) in patients with complete collection. Sodium excretion may be underestimated by inclusion of incomplete 24-hour urine collections. None of the current approaches reliably assess completion of 24-hour urine collection. PMID:26726000
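    The creatinine-index criterion mentioned above (measured divided by expected 24-hour creatinine, with values below 0.7 flagging a possibly incomplete collection) is easy to apply in code. The sketch below is a minimal illustration; the weight-based expected-excretion value it uses is an assumed approximation, not a figure from the review.

```python
# Minimal sketch of the creatinine-index completeness check referenced above:
# index = measured 24-h creatinine / expected 24-h creatinine; < 0.7 flags a
# possibly incomplete collection. The expected value below is an assumption.

def creatinine_index(measured_mg, expected_mg):
    return measured_mg / expected_mg

def flag_incomplete(measured_mg, expected_mg, threshold=0.7):
    """Return True if the collection looks incomplete under the index criterion."""
    return creatinine_index(measured_mg, expected_mg) < threshold

if __name__ == "__main__":
    # Hypothetical participant: expected excretion approximated as 20 mg/kg/day.
    weight_kg = 75.0
    expected = 20.0 * weight_kg
    for measured in (800.0, 1400.0):
        idx = creatinine_index(measured, expected)
        status = "possibly incomplete" if flag_incomplete(measured, expected) else "complete"
        print(f"measured {measured:.0f} mg/24h -> index {idx:.2f} ({status})")
```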

  18. Performance and Accuracy of Lightweight and Low-Cost GPS Data Loggers According to Antenna Positions, Fix Intervals, Habitats and Animal Movements.

    PubMed

    Forin-Wiart, Marie-Amélie; Hubert, Pauline; Sirguey, Pascal; Poulle, Marie-Lazarine

    2015-01-01

    Recently developed low-cost Global Positioning System (GPS) data loggers are promising tools for wildlife research because of their affordability for low-budget projects and ability to simultaneously track a greater number of individuals compared with expensive built-in wildlife GPS. However, the reliability of these devices must be carefully examined because they were not developed to track wildlife. This study aimed to assess the performance and accuracy of commercially available GPS data loggers for the first time using the same methods applied to test built-in wildlife GPS. The effects of antenna position, fix interval and habitat on the fix-success rate (FSR) and location error (LE) of CatLog data loggers were investigated in stationary tests, whereas the effects of animal movements on these errors were investigated in motion tests. The units operated well and presented consistent performance and accuracy over time in stationary tests, and the FSR was good for all antenna positions and fix intervals. However, the LE was affected by the GPS antenna and fix interval. Furthermore, completely or partially obstructed habitats reduced the FSR by up to 80% in households and increased the LE. Movement across habitats had no effect on the FSR, whereas forest habitat influenced the LE. Finally, the mean FSR (0.90 ± 0.26) and LE (15.4 ± 10.1 m) values from low-cost GPS data loggers were comparable to those of built-in wildlife GPS collars (71.6% of fixes with LE < 10 m for motion tests), thus confirming their suitability for use in wildlife studies. PMID:26086958
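    The two accuracy measures used above, fix-success rate (FSR) and location error (LE), can be computed from a log of scheduled fixes against a known reference position. The sketch below is a generic illustration with hypothetical coordinates and a simple equirectangular distance approximation; it is not the study's exact processing.

```python
import math

# Sketch of FSR (fraction of scheduled fixes acquired) and LE (distance of
# each acquired fix from the known position). Coordinates are hypothetical.

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate planar distance (m) between two nearby WGS84 points."""
    r = 6371000.0
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return r * math.hypot(x, y)

ref = (49.250000, 4.030000)                       # known logger position (hypothetical)
fixes = [                                         # None = scheduled fix not acquired
    (49.25005, 4.03002), None, (49.24998, 4.02995), (49.25010, 4.03008), None,
    (49.25001, 4.030000), (49.24993, 4.030110), (49.250040, 4.029900),
]

acquired = [f for f in fixes if f is not None]
fsr = len(acquired) / len(fixes)
errors = [distance_m(*ref, *f) for f in acquired]
le_mean = sum(errors) / len(errors)
print(f"FSR = {fsr:.2f}, mean LE = {le_mean:.1f} m (n = {len(errors)})")
```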

  20. The Cost of Performance Assessment in Science: The NAEP Perspective.

    ERIC Educational Resources Information Center

    O'Sullivan, Christine

    The cost of including hands-on tasks in the National Assessment of Educational Progress (NAEP) science assessment was analyzed. In recognition of growing interest in performance assessment in science, the framework for the NAEP science assessment called for the inclusion of hands-on tasks with hands-on activities and questions about the…