Science.gov

Sample records for accuracy assessments performed

  1. Assessment of ambulatory blood pressure recorders: accuracy and clinical performance.

    PubMed

    White, W B

    1991-06-01

    There are now more than ten different manufacturers of non-invasive, portable blood pressure monitors in North America, Europe, and Japan. These ambulatory blood pressure recorders measure blood pressure by either auscultatory or oscillometric methodology. Technologic advances in the recorders have resulted in reduced monitor size, reduction in or absence of motor noise during cuff inflation, the ability to program the recorder without an external computer system, and enhanced precision. Recently, there has been concern that more structured validation protocols have not been implemented prior to the widespread marketing of ambulatory blood pressure recorders. There is a need for proper assessment of recorders prior to use in clinical research or practice. Data on several existing recorders suggest that while most are reasonably accurate during resting measurements, many lose this accuracy during motion, and clinical performance may vary among the monitors. Validation studies of ambulatory recorders should include comparison with mercury column and intra-arterial determinations, resting and motion measurements, and assessment of clinical performance in hypertensive patients. PMID:1893652

  2. Teacher Compliance and Accuracy in State Assessment of Student Motor Skill Performance

    ERIC Educational Resources Information Center

    Hall, Tina J.; Hicklin, Lori K.; French, Karen E.

    2015-01-01

    Purpose: The purpose of this study was to investigate teacher compliance with state mandated assessment protocols and teacher accuracy in assessing student motor skill performance. Method: Middle school teachers (N = 116) submitted eighth grade student motor skill performance data from 318 physical education classes to a trained monitoring…

  3. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

    The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...

  4. Assessment of the relationship between lesion segmentation accuracy and computer-aided diagnosis scheme performance

    NASA Astrophysics Data System (ADS)

    Zheng, Bin; Pu, Jiantao; Park, Sang Cheol; Zuley, Margarita; Gur, David

    2008-03-01

    In this study we randomly selected 250 malignant and 250 benign mass regions as a training dataset. The boundary contours of these regions were manually identified and marked. Twelve image features were computed for each region. An artificial neural network (ANN) was trained as a classifier. To select a specific testing dataset, we applied a topographic multi-layer region growth algorithm to detect boundary contours of 1,903 mass regions in an initial pool of testing regions. All processed regions were sorted based on a size difference ratio between manual and automated segmentation. We selected a testing dataset involving 250 malignant and 250 benign mass regions with larger size difference ratios. Using the area under the ROC curve (AZ value) as a performance index, we investigated the relationship between the accuracy of mass segmentation and the performance of a computer-aided diagnosis (CAD) scheme. CAD performance degrades as the size difference ratio increases. Then, we developed and tested a hybrid region growth algorithm that combined the topographic region growth with an active contour approach. In this hybrid algorithm, the boundary contour detected by the topographic region growth is used as the initial contour of the active contour algorithm. The algorithm iteratively searches for the optimal region boundaries. A CAD likelihood score of the growth region being a true-positive mass is computed in each iteration. The region growth is automatically terminated once the first maximum CAD score is reached. This hybrid region growth algorithm reduces the size difference ratios between the two areas segmented automatically and manually to less than ±15% for all testing regions, and the testing AZ value increases from 0.63 to 0.90. The results indicate that CAD performance heavily depends on the accuracy of mass segmentation. In order to achieve robust CAD performance, reducing lesion segmentation error is important.
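The AZ (area under the ROC curve) index used above has a simple rank-based interpretation: the probability that a randomly chosen malignant region receives a higher classifier score than a randomly chosen benign one. A minimal sketch in Python; the scores below are invented for illustration and do not come from the paper:

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the fraction of (positive, negative) score pairs ranked
    correctly, with ties counted as half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Invented CAD likelihood scores -- illustrative values only.
malignant = [0.9, 0.8, 0.7, 0.6]   # scores for malignant regions
benign = [0.5, 0.4, 0.8, 0.2]      # scores for benign regions
print(auc(malignant, benign))      # 0.84375
```

Perfect separation of the two score lists would give 1.0; chance-level ranking gives 0.5, which is why the paper's move from 0.63 to 0.90 is a large gain.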

  5. Increased Throwing Accuracy Improves Children's Catching Performance in a Ball-Catching Task from the Movement Assessment Battery (MABC-2)

    PubMed Central

    Dirksen, Tim; De Lussanet, Marc H. E.; Zentgraf, Karen; Slupinski, Lena; Wagner, Heiko

    2016-01-01

    The Movement Assessment Battery for Children (MABC-2) is a functional test for identifying deficits in the motor performance of children. The test contains a ball-catching task that requires the children to catch a self-thrown ball with one hand. As the task can be executed with a variety of different catching strategies, it is assumed that task success can also vary considerably. Even though it is not clear whether the performance merely depends on the catching skills or also to some extent on the throwing skills, the MABC-2 takes into account only the movement outcome. Therefore, the purpose of the current study was to examine (1) to what extent throwing accuracy has an effect on the children's catching performance and (2) to what extent throwing accuracy influences their choice of catching strategy. In line with the test manual, the children's catching performance was quantified on the basis of the number of correctly caught balls. The throwing accuracy and the catching strategy were quantified by applying a kinematic analysis to the ball's trajectory and the hand movements. Based on linear regression analyses, we then investigated the relation between throwing accuracy, catching performance and catching strategy. The results show that increased throwing accuracy is significantly correlated with increased catching performance. Moreover, higher throwing accuracy is significantly correlated with a longer duration of the hand on the ball's parabola, which indicates that throwing the ball more accurately could enable the children to effectively reduce the requirements on temporal precision. As the children's catching performance and their choice of catching strategy in the ball-catching task of the MABC-2 are substantially determined by their throwing accuracy, the test evaluation should not be based on the movement outcome alone, but should also take into account the children's throwing performance.
Our findings could be of particular value for the


  7. Landsat classification accuracy assessment procedures

    USGS Publications Warehouse

    Mead, R. R.; Szajgin, John

    1982-01-01

    A working conference was held in Sioux Falls, South Dakota, 12-14 November, 1980 dealing with Landsat classification Accuracy Assessment Procedures. Thirteen formal presentations were made on three general topics: (1) sampling procedures, (2) statistical analysis techniques, and (3) examples of projects which included accuracy assessment and the associated costs, logistical problems, and value of the accuracy data to the remote sensing specialist and the resource manager. Nearly twenty conference attendees participated in two discussion sessions addressing various issues associated with accuracy assessment. This paper presents an account of the accomplishments of the conference.

  8. An Accuracy-Response Time Capacity Assessment Function that Measures Performance against Standard Parallel Predictions

    ERIC Educational Resources Information Center

    Townsend, James T.; Altieri, Nicholas

    2012-01-01

    Measures of human efficiency under increases in mental workload or attentional limitations are vital in studying human perception, cognition, and action. Assays of efficiency as workload changes have typically been confined to either reaction times (RTs) or accuracy alone. Within the realm of RTs, a nonparametric measure called the "workload…

  9. Future dedicated Venus-SGG flight mission: Accuracy assessment and performance analysis

    NASA Astrophysics Data System (ADS)

    Zheng, Wei; Hsu, Houtse; Zhong, Min; Yun, Meijuan

    2016-01-01

    This study concentrates principally on the systematic requirements analysis for the future dedicated Venus-SGG (spacecraft gravity gradiometry) flight mission in China, in respect of matching the measurement accuracies of the spacecraft-based scientific instruments to the orbital parameters of the spacecraft. Firstly, we created and verified single and combined analytical error models for the cumulative Venusian geoid height as influenced by the gravity gradient error of the spacecraft-borne atom-interferometer gravity gradiometer (AIGG) and by the orbital position and orbital velocity errors tracked by the deep space network (DSN) on the Earth station. Secondly, weighing the advantages and disadvantages of the electrostatically suspended gravity gradiometer, the superconducting gravity gradiometer and the AIGG, the ultra-high-precision spacecraft-borne AIGG is well placed to make a significant contribution to globally mapping the Venusian gravitational field and modeling the geoid with unprecedented accuracy and spatial resolution. Finally, the future dedicated Venus-SGG spacecraft should adopt optimal matching accuracy indices of 3 × 10⁻¹³/s² in gravity gradient, 10 m in orbital position and 8 × 10⁻⁴ m/s in orbital velocity, and preferred orbital parameters comprising an orbital altitude of 300 ± 50 km, an observation time of 60 months and a sampling interval of 1 s.

  10. Classification Accuracy of Oral Reading Fluency and Maze in Predicting Performance on Large-Scale Reading Assessments

    ERIC Educational Resources Information Center

    Decker, Dawn M.; Hixson, Michael D.; Shaw, Amber; Johnson, Gloria

    2014-01-01

    The purpose of this study was to examine whether using a multiple-measure framework yielded better classification accuracy than oral reading fluency (ORF) or maze alone in predicting pass/fail rates for middle-school students on a large-scale reading assessment. Participants were 178 students in Grades 7 and 8 from a Midwestern school district.…

  11. Diagnostic accuracy of emergency-performed focused assessment with sonography for trauma (FAST) in blunt abdominal trauma

    PubMed Central

    Ghafouri, Hamed Basir; Zare, Morteza; Bazrafshan, Azam; Modirian, Ehsan; Farahmand, Shervin; Abazarian, Niloofar

    2016-01-01

    Introduction Intra-abdominal hemorrhage due to blunt abdominal trauma is a major cause of trauma-related mortality. Therefore, any action taken to facilitate the diagnosis of intra-abdominal hemorrhage could save patients' lives more effectively. The aim of this study was to determine the accuracy of focused assessment with sonography for trauma (FAST) performed by emergency physicians. Methods In this cross-sectional study from February 2011 to January 2012 at 7th Tir Hospital in Tehran (Iran), 120 patients with blunt abdominal trauma were chosen and evaluated for abdominal fluid. FAST sonography was performed for all subjects by emergency residents and radiologists, each blind to the other's findings. Abdominal CT, the gold standard, was performed for all cases. SPSS 20.0 was used to analyze the results. Results During the study, 120 patients with blunt abdominal trauma were evaluated; the mean age of the patients was 33.0 ± 16.6 years and the gender ratio was 3/1 (M/F). FAST performed by emergency physicians showed free fluid in the abdominal or pelvic spaces in 33 patients (27.5%), but in six of these patients no free fluid was seen on CT; sensitivity and specificity were 93.1% and 93.4%, respectively. For examinations performed by radiology residents, sensitivity was slightly higher (96.5%) with lower specificity (92.3%). Conclusion The results suggest that emergency physicians can use ultrasonography as a safe and reliable method for evaluating blunt abdominal trauma. PMID:27790349
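The reported sensitivity and specificity follow from a 2×2 confusion table against the CT reference. The abstract gives 33 FAST-positive patients with 6 unconfirmed on CT; the remaining cell counts below are inferred from the reported percentages and the total of 120 patients, not stated directly in the abstract:

```python
# 2x2 table reconstructed from the abstract's figures (CT is the
# reference standard); fn and tn are inferred, not reported directly.
tp, fp = 27, 6    # FAST positive: confirmed / not confirmed by CT
fn, tn = 2, 85    # FAST negative: chosen so the total is 120 patients

sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}")
# sensitivity=93.1%, specificity=93.4%
```

These reconstructed counts reproduce the abstract's 93.1% and 93.4% exactly, which is a useful consistency check on the reported figures.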

  12. Pléiades Project: Assessment of Georeferencing Accuracy, Image Quality, Pansharpening Performance and DSM/DTM Quality

    NASA Astrophysics Data System (ADS)

    Topan, Hüseyin; Cam, Ali; Özendi, Mustafa; Oruç, Murat; Jacobsen, Karsten; Taşkanat, Talha

    2016-06-01

    Pléiades 1A and 1B are twin optical satellites of the Optical and Radar Federated Earth Observation (ORFEO) program jointly run by France and Italy. They are the first satellites of Europe with sub-meter resolution. Airbus DS (formerly Astrium Geo) runs a MyGIC (formerly Pléiades Users Group) program to validate Pléiades images worldwide for various application purposes. The authors conduct three projects, one within this program, the second supported by the BEU Scientific Research Project Program, and the third supported by TÜBİTAK. Georeferencing accuracy, image quality, pansharpening performance and Digital Surface Model/Digital Terrain Model (DSM/DTM) quality are investigated in these projects. For these purposes, triplet panchromatic (50 cm Ground Sampling Distance (GSD)) and VNIR (2 m GSD) Pléiades 1A images were investigated over the Zonguldak test site (Turkey), which is urbanised, mountainous and covered by dense forest. The georeferencing accuracy was estimated with a standard deviation in X and Y (SX, SY) in the range of 0.45 m by bias-corrected Rational Polynomial Coefficient (RPC) orientation, using ~170 Ground Control Points (GCPs). 3D standard deviations of ±0.44 m in X, ±0.51 m in Y, and ±1.82 m in Z have been reached in spite of the very narrow angle of convergence by bias-corrected RPC orientation. The image quality was also investigated with respect to effective resolution, Signal to Noise Ratio (SNR) and blur coefficient. The effective resolution was estimated with a factor slightly below 1.0, meaning that the image quality corresponds to the nominal resolution of 50 cm. The blur coefficients were between 0.39-0.46 for the triplet panchromatic images, indicating satisfying image quality. SNR is in the range of other comparable spaceborne images, which may be caused by de-noising of Pléiades images. The pansharpened images were generated by various methods, and are validated by most common statistical

  13. Accuracy of TCP performance models

    NASA Astrophysics Data System (ADS)

    Schwefel, Hans Peter; Jobmann, Manfred; Hoellisch, Daniel; Heyman, Daniel P.

    2001-07-01

    Despite the fact that most of today's Internet traffic is transmitted via the TCP protocol, the performance behavior of networks with TCP traffic is still not well understood. Recent research activities have led to a number of performance models for TCP traffic, but the degree of accuracy of these models in realistic scenarios is still questionable. This paper provides a comparison of the results (in terms of average throughput per connection) of three different `analytic' TCP models: I. the throughput formula in [Padhye et al. 98], II. the modified Engset model of [Heyman et al. 97], and III. the analytic TCP queueing model of [Schwefel 01], which is a packet-based extension of (II). Results for all three models are computed for a scenario of N identical TCP sources that transmit data in individual TCP connections of stochastically varying size. The results for the average throughput per connection in the analytic models are compared with simulations of detailed TCP behavior. All of the analytic models are expected to show deficiencies in certain scenarios, since they neglect highly influential parameters of the actual real simulation model: the approach of Models (I) and (II) only indirectly considers queueing in bottleneck routers, and in certain scenarios those models are not able to adequately describe the impact of buffer space, either qualitatively or quantitatively. Furthermore, (II) is insensitive to the actual distribution of the connection sizes. As a consequence, its prediction would also be insensitive to so-called long-range dependent (LRD) properties in the traffic that are caused by heavy-tailed connection size distributions. The simulation results show that such properties cannot be neglected for certain network topologies: LRD properties can even have a counter-intuitive impact on the average goodput, namely the goodput can be higher for small buffer sizes.
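Model (I), the Padhye et al. throughput formula, is often quoted in its simplified "square-root" form, T ≈ MSS / (RTT · √(2p/3)), which holds for small loss probabilities and ignores timeouts and receiver-window limits. A sketch under those assumptions; the parameter values are illustrative, not from the paper:

```python
import math

def tcp_throughput_bps(mss_bytes, rtt_s, loss_p):
    """Simplified 'square-root' form of the Padhye et al. TCP
    throughput model: T ~ MSS / (RTT * sqrt(2p/3)).  Valid for
    small loss probability p; ignores timeouts and window limits."""
    return mss_bytes / (rtt_s * math.sqrt(2.0 * loss_p / 3.0))

# Illustrative values: 1460-byte segments, 100 ms RTT, 1% loss.
bps = tcp_throughput_bps(1460, 0.100, 0.01)
print(f"{bps / 1000:.0f} kB/s")   # roughly 179 kB/s
```

The inverse-square-root dependence on loss rate is the key qualitative behavior: quadrupling the loss probability halves the predicted throughput.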

  14. When Does Choice of Accuracy Measure Alter Imputation Accuracy Assessments?

    PubMed

    Ramnarine, Shelina; Zhang, Juan; Chen, Li-Shiun; Culverhouse, Robert; Duan, Weimin; Hancock, Dana B; Hartz, Sarah M; Johnson, Eric O; Olfson, Emily; Schwantes-An, Tae-Hwi; Saccone, Nancy L

    2015-01-01

    Imputation, the process of inferring genotypes for untyped variants, is used to identify and refine genetic association findings. Inaccuracies in imputed data can distort the observed association between variants and a disease. Many statistics are used to assess accuracy; some compare imputed to genotyped data and others are calculated without reference to true genotypes. Prior work has shown that the Imputation Quality Score (IQS), which is based on Cohen's kappa statistic and compares imputed genotype probabilities to true genotypes, appropriately adjusts for chance agreement; however, it is not commonly used. To identify differences in accuracy assessment, we compared IQS with concordance rate, squared correlation, and accuracy measures built into imputation programs. Genotypes from the 1000 Genomes reference populations (AFR N = 246 and EUR N = 379) were masked to match the typed single nucleotide polymorphism (SNP) coverage of several SNP arrays and were imputed with BEAGLE 3.3.2 and IMPUTE2 in regions associated with smoking behaviors. Additional masking and imputation was conducted for sequenced subjects from the Collaborative Genetic Study of Nicotine Dependence and the Genetic Study of Nicotine Dependence in African Americans (N = 1,481 African Americans and N = 1,480 European Americans). Our results offer further evidence that concordance rate inflates accuracy estimates, particularly for rare and low frequency variants. For common variants, squared correlation, BEAGLE R2, IMPUTE2 INFO, and IQS produce similar assessments of imputation accuracy. However, for rare and low frequency variants, compared to IQS, the other statistics tend to be more liberal in their assessment of accuracy. IQS is important to consider when evaluating imputation accuracy, particularly for rare and low frequency variants. PMID:26458263
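The IQS's adjustment for chance agreement follows Cohen's kappa. A minimal sketch contrasting raw concordance with kappa on a rare variant; the genotype calls below are toy data invented for illustration, and this is a simplified kappa on hard calls rather than the probability-based IQS itself:

```python
from collections import Counter

def concordance(true_g, imputed_g):
    """Fraction of positions where the imputed genotype matches truth."""
    return sum(t == i for t, i in zip(true_g, imputed_g)) / len(true_g)

def cohens_kappa(true_g, imputed_g):
    """Agreement corrected for chance agreement, as in Cohen's kappa
    (the statistic underlying the Imputation Quality Score)."""
    n = len(true_g)
    po = concordance(true_g, imputed_g)            # observed agreement
    ct, ci = Counter(true_g), Counter(imputed_g)
    pe = sum(ct[g] * ci[g] for g in set(ct) | set(ci)) / (n * n)
    return (po - pe) / (1.0 - pe)                  # chance-corrected

# Rare variant: 18 of 20 subjects are homozygous reference ("AA").
truth   = ["AA"] * 18 + ["AG", "AG"]
imputed = ["AA"] * 20          # imputing everyone as "AA"
print(concordance(truth, imputed))   # 0.9 -- looks accurate
print(cohens_kappa(truth, imputed))  # 0.0 -- no better than chance
```

This is exactly the inflation the abstract describes: for a rare variant, always guessing the major genotype yields 90% concordance yet carries no information, and kappa exposes that.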

  15. Ground Truth Sampling and LANDSAT Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Robinson, J. W.; Gunther, F. J.; Campbell, W. J.

    1982-01-01

    It is noted that the key factor in any accuracy assessment of remote sensing data is the method used for determining the ground truth, independent of the remote sensing data itself. The sampling and accuracy procedures developed for a nuclear power plant siting study are described. The purpose of the sampling procedure was to provide data for developing supervised classifications for two study sites and for assessing the accuracy of that and the other procedures used. The purpose of the accuracy assessment was to allow comparison of the cost and accuracy of various classification procedures as applied to various data types.

  16. Orbit accuracy assessment for Seasat

    NASA Technical Reports Server (NTRS)

    Schutz, B. E.; Tapley, B. D.

    1980-01-01

    Laser range measurements are used to determine the orbit of Seasat during the period from July 28, 1978, to Aug. 14, 1978, and the influence of the gravity field, atmospheric drag, and solar radiation pressure on the orbit accuracy is investigated. It is noted that for the orbits of three-day duration, little distinction can be made between the influence of different atmospheric models. It is found that the special Seasat gravity field PGS-S3 is most consistent with the data for three-day orbits, but an unmodeled systematic effect in radiation pressure is noted. For orbits of 18-day duration, little distinction can be made between the results derived from the PGS gravity fields. It is also found that the geomagnetic field is an influential factor in the atmospheric modeling during this time period. Seasat altimeter measurements are used to determine the accuracy of the altimeter measurement time tag and to evaluate the orbital accuracy.

  17. Arizona Vegetation Resource Inventory (AVRI) accuracy assessment

    USGS Publications Warehouse

    Szajgin, John; Pettinger, L.R.; Linden, D.S.; Ohlen, D.O.

    1982-01-01

    A quantitative accuracy assessment was performed for the vegetation classification map produced as part of the Arizona Vegetation Resource Inventory (AVRI) project. This project was a cooperative effort between the Bureau of Land Management (BLM) and the Earth Resources Observation Systems (EROS) Data Center. The objective of the accuracy assessment was to estimate (with a precision of ±10 percent at the 90 percent confidence level) the commission error in each of the eight level II hierarchical vegetation cover types. A stratified two-phase (double) cluster sample was used. Phase I consisted of 160 photointerpreted plots representing clusters of Landsat pixels, and phase II consisted of ground data collection at 80 of the phase I cluster sites. Ground data were used to refine the phase I error estimates by means of a linear regression model. The classified image was stratified by assigning each 15-pixel cluster to the stratum corresponding to the dominant cover type within each cluster. This method is known as stratified plurality sampling. Overall error was estimated to be 36 percent with a standard error of 2 percent. Estimated error for individual vegetation classes ranged from a low of 10 percent ±6 percent for evergreen woodland to 81 percent ±7 percent for cropland and pasture. Total cost of the accuracy assessment was $106,950 for the one-million-hectare study area. The combination of the stratified plurality sampling (SPS) method of sample allocation with double sampling provided the desired estimates within the required precision levels. The overall accuracy results confirmed that highly accurate digital classification of vegetation is difficult to perform in semiarid environments, due largely to the sparse vegetation cover. Nevertheless, these techniques show promise for providing more accurate information than is presently available for many BLM-administered lands.

  18. Skinfold Assessment: Accuracy and Application

    ERIC Educational Resources Information Center

    Ball, Stephen; Swan, Pamela D.; Altena, Thomas S.

    2006-01-01

    Although not perfect, skinfolds (SK), or the measurement of fat under the skin, remains the most popular and practical method available to assess body composition on a large scale (Kuczmarski, Flegal, Campbell, & Johnson, 1994). Even for practitioners who have been using SK for years and are highly proficient at locating the correct anatomical…

  19. Alaska national hydrography dataset positional accuracy assessment study

    USGS Publications Warehouse

    Arundel, Samantha; Yamamoto, Kristina H.; Constance, Eric; Mantey, Kim; Vinyard-Houx, Jeremy

    2013-01-01

    Initial visual assessments showed a wide range in the quality of fit between features in the NHD and these new image sources. No statistical analysis has been performed to actually quantify accuracy. Determining absolute accuracy is cost prohibitive, because independent, well-defined test points must be collected; however, quantitative analysis of relative positional error is feasible.

  20. Assessment of the Thematic Accuracy of Land Cover Maps

    NASA Astrophysics Data System (ADS)

    Höhle, J.

    2015-08-01

    Several land cover maps are generated from aerial imagery and assessed by different approaches. The test site is an urban area in Europe for which six classes (`building', `hedge and bush', `grass', `road and parking lot', `tree', `wall and car port') had to be derived. Two classification methods were applied (`Decision Tree' and `Support Vector Machine') using only two attributes (height above ground and normalized difference vegetation index) which both are derived from the images. The assessment of the thematic accuracy applied a stratified design and was based on accuracy measures such as user's and producer's accuracy, and kappa coefficient. In addition, confidence intervals were computed for several accuracy measures. The achieved accuracies and confidence intervals are thoroughly analysed and recommendations are derived from the gained experiences. Reliable reference values are obtained using stereovision, false-colour image pairs, and positioning to the checkpoints with 3D coordinates. The influence of the training areas on the results is studied. Cross validation has been tested with a few reference points in order to derive approximate accuracy measures. The two classification methods perform equally for five classes. Trees are classified with a much better accuracy and a smaller confidence interval by means of the decision tree method. Buildings are classified by both methods with an accuracy of 99% (95% CI: 95%-100%) using independent 3D checkpoints. The average width of the confidence interval of six classes was 14% of the user's accuracy.
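The user's and producer's accuracies used above come straight from the confusion (error) matrix of the classified map against reference data. A small sketch with an invented two-class matrix (the counts are illustrative, not from the study):

```python
# Rows = reference (ground truth), columns = classified map label.
# Invented counts for a two-class example: 'building' vs 'tree'.
matrix = [
    [48, 2],   # reference 'building': 48 mapped correctly, 2 as 'tree'
    [5, 45],   # reference 'tree': 5 mapped as 'building', 45 correct
]

def producers_accuracy(m, k):
    """Correct / reference total for class k (1 - omission error)."""
    return m[k][k] / sum(m[k])

def users_accuracy(m, k):
    """Correct / mapped total for class k (1 - commission error)."""
    return m[k][k] / sum(row[k] for row in m)

print(producers_accuracy(matrix, 0))  # 48/50 = 0.96
print(users_accuracy(matrix, 0))      # 48/53, about 0.906
```

Producer's accuracy answers "how much of the real class was found", user's accuracy answers "how much of the mapped class is right"; a map can score well on one and poorly on the other, which is why the study reports both.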

  1. Evaluating the Accuracy of Pharmacy Students' Self-Assessment Skills

    PubMed Central

    Gregory, Paul A. M.

    2007-01-01

    Objectives To evaluate the accuracy of self-assessment skills of senior-level bachelor of science pharmacy students. Methods A method proposed by Kruger and Dunning involving comparisons of pharmacy students' self-assessment with weighted average assessments of peers, standardized patients, and pharmacist-instructors was used. Results Eighty students participated in the study. Differences between self-assessment and external assessments were found across all performance quartiles. These differences were particularly large and significant in the third and fourth (lowest) quartiles and particularly marked in the areas of empathy, and logic/focus/coherence of interviewing. Conclusions The quality and accuracy of pharmacy students' self-assessment skills were not as strong as expected, particularly given recent efforts to include self-assessment in the curriculum. Further work is necessary to ensure this important practice competency and life skill is at the level expected for professional practice and continuous professional development. PMID:17998986

  2. Positional Accuracy Assessment of Googleearth in Riyadh

    NASA Astrophysics Data System (ADS)

    Farah, Ashraf; Algarni, Dafer

    2014-06-01

    Google Earth is a virtual globe, map and geographical information program provided by Google. It maps the Earth by superimposing images obtained from satellite imagery and aerial photography onto a 3D globe. With millions of users all around the globe, GoogleEarth® has become the ultimate source of spatial data and information for private and public decision-support systems, besides many types and forms of social interaction. Many users, mostly in developing countries, are also using it for surveying applications, which raises questions about the positional accuracy of the Google Earth program. This research presents a small-scale assessment study of the positional accuracy of GoogleEarth® imagery in Riyadh, capital of the Kingdom of Saudi Arabia (KSA). The results show that the RMSE of the GoogleEarth imagery is 2.18 m and 1.51 m for the horizontal and height coordinates, respectively.
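The RMSE quoted above is the root-mean-square of the per-checkpoint offsets between image-derived positions and surveyed ground truth. A sketch with invented offsets (the values below are illustrative and not the study's data):

```python
import math

def rmse(errors_m):
    """Root-mean-square of per-checkpoint position errors (metres)."""
    return math.sqrt(sum(e * e for e in errors_m) / len(errors_m))

# Invented horizontal offsets (metres) between image positions and
# surveyed checkpoints -- illustrative values only.
offsets = [1.9, 2.4, 2.1, 2.3]
print(f"RMSE = {rmse(offsets):.2f} m")
```

Because the errors are squared before averaging, RMSE penalizes a few large offsets more heavily than the plain mean absolute error would.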

  3. Accuracy Assessment of Altimeter Derived Geostrophic Velocities

    NASA Astrophysics Data System (ADS)

    Leben, R. R.; Powell, B. S.; Born, G. H.; Guinasso, N. L.

    2002-12-01

    Along-track sea surface height anomaly gradients are proportional to cross-track geostrophic velocity anomalies, allowing satellite altimetry to provide much-needed observations of changes in the geostrophic component of surface ocean currents. Often, surface height gradients are computed from altimeter data archives that have been corrected to give the most accurate absolute sea level, a practice that may unnecessarily increase the error in the cross-track velocity anomalies and thereby require excessive smoothing to mitigate noise. Because differentiation along track acts as a high-pass filter, many of the path length corrections applied to altimeter data for absolute height accuracy are unnecessary for the corresponding gradient calculations. We report on a study to investigate appropriate altimetric corrections and processing techniques for improving geostrophic velocity accuracy. Accuracy is assessed by comparing cross-track current measurements from two moorings placed along the descending TOPEX/POSEIDON ground track number 52 in the Gulf of Mexico to the corresponding altimeter velocity estimates. The buoys are deployed and maintained by the Texas Automated Buoy System (TABS) under Interagency Contracts with Texas A&M University. The buoys telemeter observations in near real-time via satellite to the TABS station located at the Geochemical and Environmental Research Group (GERG) at Texas A&M. Buoy M is located in shelf waters of 57 m depth, with a second, Buoy N, 38 km away on the shelf break at 105 m depth. Buoy N has been operational since the beginning of 2002 and has a current meter at 2 m depth providing in situ measurements of surface velocities coincident with Jason and TOPEX/POSEIDON altimeter overflights. This allows one of the first detailed comparisons of shallow water near-surface current meter time series to coincident altimetry.
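The proportionality mentioned in the first sentence is the geostrophic balance: cross-track velocity anomaly v = (g/f)·dη/dx, where η is sea surface height and f is the Coriolis parameter. A finite-difference sketch; the height values and spacing are invented for illustration:

```python
import math

def cross_track_velocity(h1_m, h2_m, dx_m, lat_deg):
    """Cross-track geostrophic velocity anomaly from two along-track
    sea-surface-height samples: v = (g / f) * dh/dx, where
    f = 2 * omega * sin(latitude) is the Coriolis parameter."""
    g = 9.81                  # gravitational acceleration, m/s^2
    omega = 7.2921e-5         # Earth's rotation rate, rad/s
    f = 2.0 * omega * math.sin(math.radians(lat_deg))
    return g * (h2_m - h1_m) / (f * dx_m)

# Invented example: a 2 cm height rise over 10 km along track at 27 N
# (roughly the Gulf of Mexico latitude of the TABS buoys).
v = cross_track_velocity(0.00, 0.02, 10000.0, 27.0)
print(f"{v:.2f} m/s")   # roughly 0.3 m/s
```

The example shows why altimetric height accuracy matters so much for velocity: a few centimetres of height difference over ~10 km already implies a current of tens of centimetres per second.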

  4. Accuracy of quantitative visual soil assessment

    NASA Astrophysics Data System (ADS)

    van Leeuwen, Maricke; Heuvelink, Gerard; Stoorvogel, Jetse; Wallinga, Jakob; de Boer, Imke; van Dam, Jos; van Essen, Everhard; Moolenaar, Simon; Verhoeven, Frank; Stoof, Cathelijne

    2016-04-01

    Visual soil assessment (VSA) is a method to assess soil quality visually while standing in the field. VSA is increasingly used by farmers, farm organisations and companies, because it is rapid and cost-effective, and because looking at soil provides understanding about soil functioning. VSA is often regarded as subjective, however, so there is a need to verify it. Also, many VSAs have not been fine-tuned for contrasting soil types, which could lead to misinterpretation of soil quality and soil functioning when contrasting sites are compared with each other. We wanted to assess the accuracy of VSA while taking soil type into account. The first objective was to test whether quantitative visual field observations, which form the basis of many VSAs, could be validated with standardized field or laboratory measurements. The second objective was to assess whether quantitative visual field observations are reproducible when used by observers with contrasting backgrounds. For the validation study, we made quantitative visual observations at 26 cattle farms. Farms were located on sand, clay and peat soils in the North Friesian Woodlands, the Netherlands. The quantitative visual observations evaluated were grass cover, number of biopores, number of roots, soil colour, soil structure, number of earthworms, number of gley mottles and soil compaction. Linear regression analysis showed that four out of eight quantitative visual observations could be well validated with standardized field or laboratory measurements: grass cover correlated well with classified images of surface cover; number of roots with root dry weight; amount of large structure elements with mean weight diameter; and soil colour with soil organic matter content. Correlation coefficients were greater than 0.3, and half of the correlations were significant.
For the reproducibility study, a group of 9 soil scientists and 7

  5. Accuracy assessment of GPS satellite orbits

    NASA Technical Reports Server (NTRS)

    Schutz, B. E.; Tapley, B. D.; Abusali, P. A. M.; Ho, C. S.

    1991-01-01

    GPS orbit accuracy is examined using several evaluation procedures. The existence of unmodeled effects that correlate with eclipsing of the sun is shown. The ability to obtain geodetic results with an accuracy of 1-2 parts in 10^8 or better has not diminished.

  6. Performance and accuracy benchmarks for a next generation geodynamo simulation

    NASA Astrophysics Data System (ADS)

    Matsui, H.

    2015-12-01

    A number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field in the last twenty years. However, parameters in current dynamo models are far from realistic for the Earth's core. To approach realistic parameters for the Earth's core in geodynamo simulations, extremely large spatial resolutions are required to resolve convective turbulence and small-scale magnetic fields. To assess next-generation dynamo models on a massively parallel computer, we performed performance and accuracy benchmarks across 15 dynamo codes which employ a diverse range of discretization (spectral, finite difference, finite element, and hybrid) and parallelization methods. In the performance benchmark, we compare elapsed time and parallelization capability on the TACC Stampede platform, using up to 16384 processor cores. In the accuracy benchmark, we compare the resolutions required to obtain less than 1% error relative to the suggested solutions. The results of the performance benchmark show that codes using 2-D or 3-D parallelization models have the capability to run with 16384 processor cores. The elapsed time for Calypso and Rayleigh, two parallelized codes that use the spectral method, scales with a smaller exponent than the ideal scaling. The elapsed time of SFEMaNS, which uses finite elements and Fourier transforms, has the smallest growth with resolution and parallelization. However, the accuracy benchmark results show that SFEMaNS requires three times more degrees of freedom in each direction compared with a spherical harmonics expansion. Consequently, SFEMaNS needs more than 200 times the elapsed time of Calypso and Rayleigh with 10000 cores to obtain the same accuracy. These benchmark results indicate that the spectral method with 2-D or 3-D domain decomposition is the most promising methodology for advancing numerical dynamo simulations in the immediate future.
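
    "Scales with a smaller exponent than the ideal scaling" can be quantified by fitting the slope of elapsed time versus core count in log-log space; ideal strong scaling has slope -1. This is a generic sketch with hypothetical timings, not the Stampede benchmark data.

```python
import numpy as np

def scaling_exponent(cores, elapsed_s):
    """Slope of log(elapsed time) vs log(core count).
    Ideal strong scaling gives -1; values above -1 mean sub-ideal speedup."""
    slope, _ = np.polyfit(np.log(cores), np.log(elapsed_s), 1)
    return slope

# Hypothetical timings: each doubling of cores cuts time by ~1.8x, not 2x.
cores = np.array([1024.0, 2048.0, 4096.0, 8192.0, 16384.0])
elapsed = 100.0 * (cores / 1024.0) ** -0.85
print(scaling_exponent(cores, elapsed))
```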

  7. Accuracy Assessment for AG500, Electromagnetic Articulograph

    ERIC Educational Resources Information Center

    Yunusova, Yana; Green, Jordan R.; Mefferd, Antje

    2009-01-01

    Purpose: The goal of this article was to evaluate the accuracy and reliability of the AG500 (Carstens Medizinelectronik, Lenglern, Germany), an electromagnetic device developed recently to register articulatory movements in three dimensions. This technology seems to have unprecedented capabilities to provide rich information about time-varying…

  8. Accuracy Of Stereometry In Assessing Orthognathic Surgery

    NASA Astrophysics Data System (ADS)

    King, Geoffrey E.; Bays, R. A.

    1983-07-01

    An X-ray stereometric technique has been developed for the determination of 3-dimensional coordinates of spherical metallic markers previously implanted in monkey skulls. The accuracy of the technique is better than 0.5 mm, and it uses readily available, demountable X-ray equipment. The technique is used to study the effects and stability of experimental orthognathic surgery.

  9. Accuracy assessment of NLCD 2006 land cover and impervious surface

    USGS Publications Warehouse

    Wickham, James D.; Stehman, Stephen V.; Gass, Leila; Dewitz, Jon; Fry, Joyce A.; Wade, Timothy G.

    2013-01-01

    Release of NLCD 2006 provides the first wall-to-wall land-cover change database for the conterminous United States from Landsat Thematic Mapper (TM) data. Accuracy assessment of NLCD 2006 focused on four primary products: 2001 land cover, 2006 land cover, land-cover change between 2001 and 2006, and impervious surface change between 2001 and 2006. The accuracy assessment was conducted by selecting a stratified random sample of pixels with the reference classification interpreted from multi-temporal high resolution digital imagery. The NLCD Level II (16 classes) overall accuracies for the 2001 and 2006 land cover were 79% and 78%, respectively, with Level II user's accuracies exceeding 80% for water, high density urban, all upland forest classes, shrubland, and cropland for both dates. Level I (8 classes) accuracies were 85% for NLCD 2001 and 84% for NLCD 2006. The high overall and user's accuracies for the individual dates translated into high user's accuracies for the 2001–2006 change reporting themes water gain and loss, forest loss, urban gain, and the no-change reporting themes for water, urban, forest, and agriculture. The main factor limiting higher accuracies for the change reporting themes appeared to be difficulty in distinguishing the context of grass. We discuss the need for more research on land-cover change accuracy assessment.
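
    The overall and user's accuracies reported above are standard error-matrix quantities. The sketch below shows only the unweighted definitions on a toy matrix; a full NLCD-style assessment would additionally weight cells by the stratified sample's inclusion probabilities.

```python
import numpy as np

def overall_and_users_accuracy(error_matrix):
    """Overall and per-class user's accuracy from an error (confusion)
    matrix with map classes as rows and reference classes as columns."""
    m = np.asarray(error_matrix, dtype=float)
    overall = np.trace(m) / m.sum()
    users = np.diag(m) / m.sum(axis=1)  # correct / all pixels mapped to class
    return overall, users

# Toy 3-class matrix (rows/cols: water, urban, forest), not NLCD counts.
m = [[50, 2, 3],
     [4, 40, 6],
     [1, 5, 44]]
overall, users = overall_and_users_accuracy(m)
```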

  10. Evaluating the Effect of Learning Style and Student Background on Self-Assessment Accuracy

    ERIC Educational Resources Information Center

    Alaoutinen, Satu

    2012-01-01

    This study evaluates a new taxonomy-based self-assessment scale and examines factors that affect assessment accuracy and course performance. The scale is based on Bloom's Revised Taxonomy and is evaluated by comparing students' self-assessment results with course performance in a programming course. Correlation has been used to reveal possible…

  11. Contemporary flow meters: an assessment of their accuracy and reliability.

    PubMed

    Christmas, T J; Chapple, C R; Rickards, D; Milroy, E J; Turner-Warwick, R T

    1989-05-01

    The accuracy, reliability and cost effectiveness of 5 currently marketed flow meters have been assessed. The mechanics of each meter is briefly described in relation to its accuracy and robustness. The merits and faults of the meters are discussed and the important features of flow measurements that need to be taken into account when making diagnostic interpretations are emphasised.

  12. Assessing the Accuracy of the Precise Point Positioning Technique

    NASA Astrophysics Data System (ADS)

    Bisnath, S. B.; Collins, P.; Seepersad, G.

    2012-12-01

    The Precise Point Positioning (PPP) GPS data processing technique has developed over the past 15 years to become a standard method for growing categories of positioning and navigation applications. The technique relies on single-receiver point positioning combined with the use of precise satellite orbit and clock information and high-fidelity error modelling. The research presented here uniquely addresses the current accuracy of the technique, explains the limits of performance, and defines paths to improvements. For geodetic purposes, performance refers to daily static position accuracy. PPP processing of over 80 IGS stations over one week results in rms positioning errors of a few millimetres in the north and east components and a few centimetres in the vertical (all one-sigma values). Larger error statistics for real-time and kinematic processing are also given. GPS PPP with ambiguity resolution processing is also carried out, producing slight improvements over the float solution results. These results are categorised into quality classes in order to analyse the root causes of the resultant accuracies: "best", "worst", multipath, site displacement effects, satellite availability and geometry, etc. Also of interest in PPP performance is the solution convergence period. Static, conventional solutions are slow to converge, with approximately 35 minutes required for 95% of solutions to reach 20 cm or better horizontal accuracy. Ambiguity resolution can significantly reduce this period without biasing solutions. The definition of a PPP error budget is a complex task even with the resulting numerical assessment because, unlike the epoch-by-epoch processing in the Standard Positioning Service, PPP processing involves filtering. An attempt is made here to 1) define the magnitude of each error source in terms of range, 2) transform ranging error to position error via Dilution Of Precision (DOP), and 3) scale the DOP through the filtering process. The result is a deeper
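
    Step 2 of the error-budget approach, mapping ranging error to position error via DOP, can be sketched from the standard GPS design matrix. This assumes satellite unit vectors expressed in a local east-north-up frame; the example geometry is hypothetical.

```python
import numpy as np

def dops(unit_vectors):
    """PDOP/HDOP/VDOP from receiver-to-satellite unit vectors (ENU frame).
    The design matrix G has rows [ux, uy, uz, 1]; the DOP terms come
    from the diagonal of (G^T G)^-1."""
    u = np.asarray(unit_vectors, dtype=float)
    G = np.hstack([u, np.ones((u.shape[0], 1))])
    Q = np.linalg.inv(G.T @ G)
    hdop = np.sqrt(Q[0, 0] + Q[1, 1])
    vdop = np.sqrt(Q[2, 2])
    pdop = np.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2])
    return pdop, hdop, vdop

# A 1-sigma ranging error of, say, 2 m then maps to roughly 2 * PDOP
# metres of position error, before any filtering gain (step 3).
```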

  13. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Kronenwetter, Jeffrey; Carter, Delano R.; Todirita, Monica; Chu, Donald

    2016-01-01

    The GOES-R magnetometer accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. To achieve this, the sensor itself has better than 1 nT accuracy. Because zero offset and scale factor drift over time, it is also necessary to perform annual calibration maneuvers. To predict performance, we used covariance analysis and attempted to corroborate it with simulations. Although not perfect, the two generally agree and show the expected behaviors. With the annual calibration regimen, these predictions suggest that the magnetometers will meet their accuracy requirements.

  14. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Carter, Delano R.; Todirita, Monica; Kronenwetter, Jeffrey; Chu, Donald

    2016-01-01

    The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. Error comes both from outside the magnetometers, e.g. spacecraft fields and misalignments, as well as inside, e.g. zero offset and scale factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. Both behave as expected, and their accuracy predictions agree within 30%. With the proposed calibration regimen, both suggest that the GOES-R magnetometer subsystem will meet its accuracy requirements.
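
    The accuracy definition used in these abstracts, absolute mean plus k sigma per axis, is easy to compute from an error series. The residuals below are hypothetical, not GOES-R data; only the metric itself comes from the text.

```python
import numpy as np

def axis_accuracy(errors_nt, k):
    """Per-axis accuracy figure: absolute mean plus k standard deviations
    of the error series in nT (k=3 for quiet times, k=2 for storms)."""
    e = np.asarray(errors_nt, dtype=float)
    return abs(e.mean()) + k * e.std(ddof=1)

# Hypothetical one-axis residuals: 0.1 nT bias with 0.4 nT noise.
rng = np.random.default_rng(0)
quiet_errors = rng.normal(0.1, 0.4, 1000)
print(axis_accuracy(quiet_errors, k=3))  # compare against the 1.7 nT requirement
```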

  15. Accuracy Assessment in rainfall upscaling in multiple time scales

    NASA Astrophysics Data System (ADS)

    Yu, H.; Wang, C.; Lin, Y.

    2008-12-01

    Long-term hydrologic parameters, e.g. annual precipitation, are usually used to represent the general hydrologic characteristics of a region. Recently, analysis of the impact of climate change on hydrological patterns has relied primarily on measurements and/or estimations at long time scales, e.g. a year. Because short-term measurements generally prevail, it is therefore important to understand the accuracy of upscaling for long-term estimations of hydrologic parameters. This study applies a spatiotemporal geostatistical method to analyze and discuss the accuracy of precipitation upscaling in Taiwan at different time scales, and also quantifies the uncertainty in the upscaled long-term precipitation. Two space-time upscaling approaches developed with the Bayesian Maximum Entropy (BME) method are presented: 1) UM1, data aggregation followed by BME estimation, and 2) UM2, BME estimation followed by aggregation. The two upscaling approaches are also investigated and compared to assess the performance of rainfall estimations at multiple time scales in Taiwan. Keywords: upscaling, geostatistics, BME, uncertainty analysis
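
    The distinction between UM1 (aggregate, then estimate) and UM2 (estimate, then aggregate) can be illustrated with a deliberately simplified stand-in: BME estimation is far beyond a short sketch, so plain 1-D linear gap-filling plays the role of the estimator here. The daily rainfall series is made up; the point is only that the order of operations changes the result when data are missing.

```python
import numpy as np

def interp_gaps(x):
    """Fill NaN gaps by 1-D linear interpolation (flat fill at the edges)."""
    x = np.asarray(x, dtype=float).copy()
    idx = np.arange(x.size)
    ok = ~np.isnan(x)
    x[~ok] = np.interp(idx[~ok], idx[ok], x[ok])
    return x

def um1(daily, window):
    """UM1 analogue: aggregate first (windows containing a gap become NaN),
    then estimate at the coarse scale."""
    totals = np.asarray(daily, dtype=float).reshape(-1, window).sum(axis=1)
    return interp_gaps(totals)

def um2(daily, window):
    """UM2 analogue: estimate (fill gaps) at the daily scale, then aggregate."""
    return interp_gaps(daily).reshape(-1, window).sum(axis=1)
```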

  16. 20 CFR 416.1043 - Performance accuracy standard.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... well as the correctness of the decision. For example, if a particular item of medical evidence should... case, that is a performance error. Performance accuracy, therefore, is a higher standard than... reflected in the error rate established by SSA's quality assurance system. (b) Target level. The...

  17. 20 CFR 404.1643 - Performance accuracy standard.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 2 2014-04-01 2014-04-01 false Performance accuracy standard. 404.1643 Section 404.1643 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND... stepping stones to progress towards our targeted level of performance. (d) Threshold levels. The...

  18. 20 CFR 404.1643 - Performance accuracy standard.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Performance accuracy standard. 404.1643 Section 404.1643 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND... stepping stones to progress towards our targeted level of performance. (d) Threshold levels. The...

  19. 20 CFR 404.1643 - Performance accuracy standard.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Performance accuracy standard. 404.1643 Section 404.1643 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND... stepping stones to progress towards our targeted level of performance. (d) Threshold levels. The...

  20. 20 CFR 404.1643 - Performance accuracy standard.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 2 2013-04-01 2013-04-01 false Performance accuracy standard. 404.1643 Section 404.1643 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND... stepping stones to progress towards our targeted level of performance. (d) Threshold levels. The...

  1. 20 CFR 404.1643 - Performance accuracy standard.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 2 2012-04-01 2012-04-01 false Performance accuracy standard. 404.1643 Section 404.1643 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND... stepping stones to progress towards our targeted level of performance. (d) Threshold levels. The...

  2. Update and review of accuracy assessment techniques for remotely sensed data

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Heinen, J. T.; Oderwald, R. G.

    1983-01-01

    Research performed in the accuracy assessment of remotely sensed data is updated and reviewed. The use of discrete multivariate analysis techniques for the assessment of error matrices, the use of computer simulation for assessing various sampling strategies, and an investigation of spatial autocorrelation techniques are examined.

  3. Robust methods for assessing the accuracy of linear interpolated DEM

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Shi, Wenzhong; Liu, Eryong

    2015-02-01

    Methods for assessing the accuracy of a digital elevation model (DEM), with emphasis on robust methods, are studied in this paper. Based on the squared DEM residual population generated by the bi-linear interpolation method, three average-error statistics, (a) mean, (b) median, and (c) M-estimator, are investigated for measuring interpolated DEM accuracy. Correspondingly, confidence intervals are constructed for each average-error statistic to further evaluate DEM quality. The first method mainly utilizes the Student's t-distribution, while the second and third are derived from robust theories. These robust methods possess the capability of counteracting outlier effects and even skew-distributed residuals in DEM accuracy assessment. Experimental studies using Monte Carlo simulation investigated the asymptotic convergence behavior of the confidence intervals constructed by these three methods as sample size increases. It is demonstrated that the robust methods produce more reliable DEM accuracy assessment results than the classical t-distribution-based method. Consequently, these robust methods are strongly recommended for assessing DEM accuracy, particularly in cases where the DEM residual population is evidently non-normal or heavily contaminated with outliers.
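
    The contrast between the three location statistics can be sketched on synthetic residuals. The Huber M-estimator below uses standard iteratively reweighted averaging with a MAD-based scale; it is a generic textbook implementation, not the authors' exact estimator, and the residuals are simulated.

```python
import numpy as np

def huber_location(r, c=1.345, tol=1e-8, max_iter=200):
    """Huber M-estimate of location via iteratively reweighted averaging,
    using a MAD-based robust scale estimate."""
    r = np.asarray(r, dtype=float)
    mu = np.median(r)
    scale = 1.4826 * np.median(np.abs(r - mu))  # robust sigma
    for _ in range(max_iter):
        z = (r - mu) / scale
        w = np.where(np.abs(z) <= c, 1.0, c / np.maximum(np.abs(z), 1e-12))
        mu_new = np.sum(w * r) / np.sum(w)
        if abs(mu_new - mu) < tol:
            return mu_new
        mu = mu_new
    return mu

# Hypothetical DEM residuals: Gaussian core plus three gross outliers.
rng = np.random.default_rng(1)
res = np.concatenate([rng.normal(0.0, 0.5, 500), [25.0, 30.0, 40.0]])
print(np.mean(res), np.median(res), huber_location(res))
```

    The mean is dragged toward the outliers while the median and M-estimate stay near the Gaussian core, which is the motivation for the robust statistics the abstract recommends.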

  4. Assessing and ensuring GOES-R magnetometer accuracy

    NASA Astrophysics Data System (ADS)

    Carter, Delano; Todirita, Monica; Kronenwetter, Jeffrey; Dahya, Melissa; Chu, Donald

    2016-05-01

    The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma error per axis. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma error per axis. Error comes both from outside the magnetometers, e.g. spacecraft fields and misalignments, as well as inside, e.g. zero offset and scale factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. With the proposed calibration regimen, both suggest that the magnetometer subsystem will meet its accuracy requirements.

  5. 20 CFR 416.1043 - Performance accuracy standard.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Performance accuracy standard. 416.1043 Section 416.1043 Employees' Benefits SOCIAL SECURITY ADMINISTRATION SUPPLEMENTAL SECURITY INCOME FOR THE... have been in the file but was not included, even though its inclusion does not change the result in...

  6. Modelling Second Language Performance: Integrating Complexity, Accuracy, Fluency, and Lexis

    ERIC Educational Resources Information Center

    Skehan, Peter

    2009-01-01

    Complexity, accuracy, and fluency have proved useful measures of second language performance. The present article will re-examine these measures themselves, arguing that fluency needs to be rethought if it is to be measured effectively, and that the three general measures need to be supplemented by measures of lexical use. Building upon this…

  7. Performance Assessment: Lessons from Performers

    ERIC Educational Resources Information Center

    Parkes, Kelly A.

    2010-01-01

    The performing arts studio is a highly complex learning setting, and assessing student outcomes relative to reliable and valid standards has presented challenges to this teaching and learning method. Building from the general international higher education literature, this article illustrates details, processes, and solutions, drawing on…

  8. Performance-Based Assessment

    ERIC Educational Resources Information Center

    ERIC Review, 1994

    1994-01-01

    "The ERIC Review" is published three times a year and announces research results, publications, and new programs relevant to each issue's theme topic. This issue explores performance-based assessment via two principal articles: "Performance Assessment" (Lawrence M. Rudner and Carol Boston); and "Alternative Assessment: Implications for Social…

  9. ASSESSING ACCURACY OF NET CHANGE DERIVED FROM LAND COVER MAPS

    EPA Science Inventory

    Net change derived from land-cover maps provides important descriptive information for environmental monitoring and is often used as an input or explanatory variable in environmental models. The sampling design and analysis for assessing net change accuracy differ from traditio...

  10. Estimating Classification Consistency and Accuracy for Cognitive Diagnostic Assessment

    ERIC Educational Resources Information Center

    Cui, Ying; Gierl, Mark J.; Chang, Hua-Hua

    2012-01-01

    This article introduces procedures for the computation and asymptotic statistical inference for classification consistency and accuracy indices specifically designed for cognitive diagnostic assessments. The new classification indices can be used as important indicators of the reliability and validity of classification results produced by…

  11. Accuracy of a semiquantitative method for Dermal Exposure Assessment (DREAM)

    PubMed Central

    van Wendel, de Joo... B; Vermeulen, R; van Hemmen, J J; Fransman, W; Kromhout, H

    2005-01-01

    Background: The authors recently developed a Dermal Exposure Assessment Method (DREAM), an observational semiquantitative method to assess dermal exposures by systematically evaluating exposure determinants using pre-assigned default values. Aim: To explore the accuracy of the DREAM method by comparing its estimates with quantitative dermal exposure measurements in several occupational settings. Methods: Occupational hygienists observed workers performing a certain task, whose exposure to chemical agents on skin or clothing was simultaneously measured quantitatively, and filled in the DREAM questionnaire. DREAM estimates were compared with measurement data by estimating Spearman correlation coefficients for each task and for individual observations. In addition, mixed linear regression models were used to study the effect of DREAM estimates on the variability in measured exposures between tasks, between workers, and from day to day. Results: For skin exposures, Spearman correlation coefficients for individual observations ranged from 0.19 to 0.82. DREAM estimates for exposure levels on hands and forearms showed a fixed effect between and within surveys, explaining mainly between-task variance. In general, exposure levels on the clothing layer were only predicted in a meaningful way by detailed DREAM estimates, which comprised detailed information on the concentration of the agent in the formulation to which exposure occurred. Conclusions: The authors expect that the DREAM method can be successfully applied for semiquantitative dermal exposure assessment in epidemiological and occupational hygiene surveys of groups of workers with considerable contrast in dermal exposure levels (variability between groups >1.0). For surveys with less contrasting exposure levels, quantitative dermal exposure measurements are preferable. PMID:16109819
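
    The Spearman comparison used in the study is the Pearson correlation of ranks. The sketch below is a minimal tie-free implementation with made-up scores; real DREAM scores are ordinal and often tied, for which a tie-aware routine such as scipy.stats.spearmanr is preferable.

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Ties are not averaged here; use a tie-aware implementation for
    tied semiquantitative scores."""
    def rank(v):
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(1, len(v) + 1)
        return r
    return np.corrcoef(rank(np.asarray(x)), rank(np.asarray(y)))[0, 1]

# Hypothetical DREAM scores vs measured exposures; not study data.
dream = [1, 2, 3, 4, 5]
measured = [0.2, 0.1, 0.9, 0.6, 1.3]
print(spearman_rho(dream, measured))  # ≈ 0.8
```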

  12. [Navigation in implantology: Accuracy assessment regarding the literature].

    PubMed

    Barrak, Ibrahim Ádám; Varga, Endre; Piffko, József

    2016-06-01

    Our objective was to assess the literature regarding the accuracy of the different static guided systems. After an electronic literature search we found 661 articles. After reviewing 139 articles, the authors chose 52 articles for full-text evaluation. 24 studies involved accuracy measurements. Fourteen of our selected references were clinical and ten were in vitro (model or cadaver). Variance analysis (Tukey's post-hoc test; p < 0.05) was conducted to summarize the selected publications. Across 2819 results the average mean error at the entry point was 0.98 mm. At the level of the apex the average deviation was 1.29 mm, while the mean angular deviation was 3.96 degrees. A significant difference could be observed between the two methods of implant placement (partially and fully guided sequences) in terms of deviation at the entry point, apex and angular deviation. Different levels of quality and quantity of evidence were available for assessing the accuracy of the different computer-assisted implant placement systems. The rapidly evolving field of digital dentistry and new developments will further improve the accuracy of guided implant placement. In the interest of being able to draw dependable conclusions, and for the further evaluation of the parameters used for accuracy measurements, randomized, controlled single- or multi-centered clinical trials are necessary. PMID:27544966
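
    The three accuracy parameters pooled in this review (entry-point deviation, apex deviation, angular deviation) are simple geometric quantities given planned and placed implant coordinates. The function and coordinates below are an illustrative sketch, not taken from any of the reviewed studies.

```python
import numpy as np

def implant_deviations(plan_entry, plan_apex, placed_entry, placed_apex):
    """Deviation at the entry point, at the apex, and the angle (degrees)
    between the planned and placed implant axes, from 3-D coordinates (mm)."""
    pe, pa = np.asarray(plan_entry, float), np.asarray(plan_apex, float)
    qe, qa = np.asarray(placed_entry, float), np.asarray(placed_apex, float)
    d_entry = np.linalg.norm(qe - pe)
    d_apex = np.linalg.norm(qa - pa)
    v_plan, v_placed = pa - pe, qa - qe
    cos_a = np.dot(v_plan, v_placed) / (
        np.linalg.norm(v_plan) * np.linalg.norm(v_placed))
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return d_entry, d_apex, angle
```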

  13. [Navigation in implantology: Accuracy assessment regarding the literature].

    PubMed

    Barrak, Ibrahim Ádám; Varga, Endre; Piffko, József

    2016-06-01

    Our objective was to assess the literature regarding the accuracy of the different static guided systems. After an electronic literature search we found 661 articles. After reviewing 139 articles, the authors chose 52 articles for full-text evaluation. 24 studies involved accuracy measurements. Fourteen of our selected references were clinical and ten were in vitro (model or cadaver). Variance analysis (Tukey's post-hoc test; p < 0.05) was conducted to summarize the selected publications. Across 2819 results the average mean error at the entry point was 0.98 mm. At the level of the apex the average deviation was 1.29 mm, while the mean angular deviation was 3.96 degrees. A significant difference could be observed between the two methods of implant placement (partially and fully guided sequences) in terms of deviation at the entry point, apex and angular deviation. Different levels of quality and quantity of evidence were available for assessing the accuracy of the different computer-assisted implant placement systems. The rapidly evolving field of digital dentistry and new developments will further improve the accuracy of guided implant placement. In the interest of being able to draw dependable conclusions, and for the further evaluation of the parameters used for accuracy measurements, randomized, controlled single- or multi-centered clinical trials are necessary.

  14. Survey methods for assessing land cover map accuracy

    USGS Publications Warehouse

    Nusser, S.M.; Klaas, E.E.

    2003-01-01

    The increasing availability of digital photographic materials has fueled efforts by agencies and organizations to generate land cover maps for states, regions, and the United States as a whole. Regardless of the information sources and classification methods used, land cover maps are subject to numerous sources of error. In order to understand the quality of the information contained in these maps, it is desirable to generate statistically valid estimates of accuracy rates describing misclassification errors. We explored a full sample survey framework for creating accuracy assessment study designs that balance statistical and operational considerations in relation to study objectives for a regional assessment of GAP land cover maps. We focused not only on appropriate sample designs and estimation approaches, but on aspects of the data collection process, such as gaining cooperation of land owners and using pixel clusters as an observation unit. The approach was tested in a pilot study to assess the accuracy of Iowa GAP land cover maps. A stratified two-stage cluster sampling design addressed sample size requirements for land covers and the need for geographic spread while minimizing operational effort. Recruitment methods used for private land owners yielded high response rates, minimizing a source of nonresponse error. Collecting data for a 9-pixel cluster centered on the sampled pixel was simple to implement, and provided better information on rarer vegetation classes as well as substantial gains in precision relative to observing data at a single pixel.

  15. Using composite images to assess accuracy in personality attribution to faces.

    PubMed

    Little, Anthony C; Perrett, David I

    2007-02-01

    Several studies have demonstrated some accuracy in personality attribution using only visual appearance. Using composite images of those scoring high and low on a particular trait, the current study shows that judges perform better than chance in guessing others' personality, particularly for the traits conscientiousness and extraversion. The study also shows that attractiveness, masculinity and age may all provide cues for assessing personality accurately, and that accuracy is affected by the sex of both those judging and those being judged. Individuals do perform better than chance at guessing another's personality from facial information alone, providing some support for the popular belief that it is possible to accurately assess personality from faces. PMID:17319053

  16. Classification, change-detection and accuracy assessment: Toward fuller automation

    NASA Astrophysics Data System (ADS)

    Podger, Nancy E.

    This research aims to automate methods for conducting change detection studies using remotely sensed images. Five major objectives were tested on two study sites, one encompassing Madison, Wisconsin, and the other Fort Hood, Texas. (Objective 1) Enhance accuracy assessments by estimating standard errors using bootstrap analysis. Bootstrap estimates of the standard errors were found to be comparable to parametric statistical estimates. Also, results show that bootstrapping can be used to evaluate the consistency of a classification process. (Objective 2) Automate the guided clustering classifier. This research shows that the guided clustering classification process can be automated while maintaining highly accurate results. Three different evaluation methods were used. (Evaluation 1) Appraised the consistency of 25 classifications produced from the automated system. The classifications differed from one another by only two to four percent. (Evaluation 2) Compared accuracies produced by the automated system to classification accuracies generated following a manual guided clustering protocol. Results: The automated system produced higher overall accuracies in 50 percent of the tests and was comparable for all but one of the remaining tests. (Evaluation 3) Assessed the time and effort required to produce accurate classifications. Results: The automated system produced classifications in less time and with less effort than the manual 'protocol' method. (Objective 3) Built a flexible, interactive software tool to aid in producing binary change masks. (Objective 4) Reduced by automation the amount of training data needed to classify the second image of a two-time-period change detection project. Locations of the training sites in 'unchanged' areas employed to classify the first image were used to identify sites where spectral information was automatically extracted from the second image. 
Results: The automatically generated training data produces classification accuracies
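
The bootstrap error estimation of Objective 1 can be sketched in a few lines. The 500-point reference sample and the 85% hit rate below are invented for illustration, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical reference sample: 1 where the map label matches ground truth, 0 otherwise
correct = (rng.random(500) < 0.85).astype(int)

def bootstrap_se(sample, n_boot=1000, rng=rng):
    """Bootstrap estimate of the standard error of overall accuracy."""
    n = len(sample)
    accs = np.empty(n_boot)
    for b in range(n_boot):
        resample = sample[rng.integers(0, n, n)]  # resample with replacement
        accs[b] = resample.mean()
    return accs.mean(), accs.std(ddof=1)

acc, se = bootstrap_se(correct)

# Parametric comparison: standard error of a proportion, sqrt(p(1-p)/n)
p = correct.mean()
se_param = np.sqrt(p * (1 - p) / len(correct))
```

As the abstract reports, the bootstrap standard error should land close to the parametric estimate for a simple overall-accuracy statistic; the bootstrap's advantage is that it applies unchanged to statistics with no closed-form error formula.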

  17. Assessing accuracy of an electronic provincial medication repository

    PubMed Central

    2012-01-01

    Background Jurisdictional drug information systems are being implemented in many regions around the world. British Columbia, Canada has had a provincial medication dispensing record system, PharmaNet, since 1995. Little is known about how accurately PharmaNet reflects actual medication usage. Methods This prospective, multi-centre study compared pharmacist-collected Best Possible Medication Histories (BPMH) to PharmaNet profiles to assess the accuracy of the PharmaNet profiles for patients receiving a BPMH as part of clinical care. A review panel examined the anonymized BPMHs and discrepancies to estimate the clinical significance of discrepancies. Results 1215 sequential BPMHs were collected and reviewed for this study. 16% of medication profiles were accurate, with 48% of the discrepant profiles considered potentially clinically significant by the clinical review panel. Cardiac medications tended to be more accurate (e.g. ramipril was accurate >90% of the time), while insulin, warfarin, salbutamol and pain relief medications were often inaccurate (80–85% of the time). Conclusions The PharmaNet medication repository has a low accuracy and should be used in conjunction with other sources for medication histories for clinical or research purposes. This finding is consistent with other, smaller medication-repository accuracy studies in other jurisdictions. Our study highlights specific medications that tend to be lower in accuracy. PMID:22621690

  18. Standardized accuracy assessment of the calypso wireless transponder tracking system.

    PubMed

    Franz, A M; Schmitt, D; Seitel, A; Chatrasingh, M; Echner, G; Oelfke, U; Nill, S; Birkfellner, W; Maier-Hein, L

    2014-11-21

    Electromagnetic (EM) tracking allows localization of small EM sensors in a magnetic field of known geometry without line-of-sight. However, this technique requires a cable connection to the tracked object. A wireless alternative based on magnetic fields, referred to as transponder tracking, has been proposed by several authors. Although most of the transponder tracking systems are still in an early stage of development and not ready for clinical use yet, Varian Medical Systems Inc. (Palo Alto, California, USA) presented the Calypso system for tumor tracking in radiation therapy which includes transponder technology. But it has not been used for computer-assisted interventions (CAI) in general or been assessed for accuracy in a standardized manner, so far. In this study, we apply a standardized assessment protocol presented by Hummel et al (2005 Med. Phys. 32 2371-9) to the Calypso system for the first time. The results show that transponder tracking with the Calypso system provides a precision and accuracy below 1 mm in ideal clinical environments, which is comparable with other EM tracking systems. Similar to other systems the tracking accuracy was affected by metallic distortion, which led to errors of up to 3.2 mm. The potential of the wireless transponder tracking technology for use in many future CAI applications can be regarded as extremely high.
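
For context, the Hummel et al protocol derives precision from the jitter of repeated static readings and accuracy from known 50 mm separations on a reference base plate. A toy version with simulated tracker readings (0.3 mm noise; all values invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated repeated static readings of one transponder position (mm),
# standing in for real tracker output
readings = np.array([10.0, 20.0, 30.0]) + rng.normal(0, 0.3, size=(150, 3))

# Precision (jitter): RMS 3D deviation from the mean position
mean_pos = readings.mean(axis=0)
jitter = np.sqrt(((readings - mean_pos) ** 2).sum(axis=1).mean())

# Accuracy: error of a measured reference distance (50 mm between grid positions)
p1 = np.array([0.0, 0.0, 0.0]) + rng.normal(0, 0.3, 3)
p2 = np.array([50.0, 0.0, 0.0]) + rng.normal(0, 0.3, 3)
distance_error = abs(np.linalg.norm(p2 - p1) - 50.0)
```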

  19. Standardized accuracy assessment of the calypso wireless transponder tracking system

    NASA Astrophysics Data System (ADS)

    Franz, A. M.; Schmitt, D.; Seitel, A.; Chatrasingh, M.; Echner, G.; Oelfke, U.; Nill, S.; Birkfellner, W.; Maier-Hein, L.

    2014-11-01

    Electromagnetic (EM) tracking allows localization of small EM sensors in a magnetic field of known geometry without line-of-sight. However, this technique requires a cable connection to the tracked object. A wireless alternative based on magnetic fields, referred to as transponder tracking, has been proposed by several authors. Although most of the transponder tracking systems are still in an early stage of development and not ready for clinical use yet, Varian Medical Systems Inc. (Palo Alto, California, USA) presented the Calypso system for tumor tracking in radiation therapy which includes transponder technology. But it has not been used for computer-assisted interventions (CAI) in general or been assessed for accuracy in a standardized manner, so far. In this study, we apply a standardized assessment protocol presented by Hummel et al (2005 Med. Phys. 32 2371-9) to the Calypso system for the first time. The results show that transponder tracking with the Calypso system provides a precision and accuracy below 1 mm in ideal clinical environments, which is comparable with other EM tracking systems. Similar to other systems the tracking accuracy was affected by metallic distortion, which led to errors of up to 3.2 mm. The potential of the wireless transponder tracking technology for use in many future CAI applications can be regarded as extremely high.

  20. Accuracy assessment of fluoroscopy-transesophageal echocardiography registration

    NASA Astrophysics Data System (ADS)

    Lang, Pencilla; Seslija, Petar; Bainbridge, Daniel; Guiraudon, Gerard M.; Jones, Doug L.; Chu, Michael W.; Holdsworth, David W.; Peters, Terry M.

    2011-03-01

    This study assesses the accuracy of a new transesophageal (TEE) ultrasound (US) fluoroscopy registration technique designed to guide percutaneous aortic valve replacement. In this minimally invasive procedure, a valve is inserted into the aortic annulus via a catheter. Navigation and positioning of the valve is guided primarily by intra-operative fluoroscopy. Poor anatomical visualization of the aortic root region can result in incorrect positioning, leading to heart valve embolization, obstruction of the coronary ostia and acute kidney injury. The use of TEE US images to augment intra-operative fluoroscopy provides significant improvements to image-guidance. Registration is achieved using an image-based TEE probe tracking technique and US calibration. TEE probe tracking is accomplished using a single-perspective pose estimation algorithm. Pose estimation from a single image allows registration to be achieved using only images collected in standard OR workflow. Accuracy of this registration technique is assessed using three models: a point target phantom, a cadaveric porcine heart with implanted fiducials, and in-vivo porcine images. Results demonstrate that registration can be achieved with an RMS error of less than 1.5mm, which is within the clinical accuracy requirements of 5mm. US-fluoroscopy registration based on single-perspective pose estimation demonstrates promise as a method for providing guidance to percutaneous aortic valve replacement procedures. Future work will focus on real-time implementation and a visualization system that can be used in the operating room.

  2. Performance and Accuracy of LAPACK's Symmetric Tridiagonal Eigensolvers

    SciTech Connect

    Demmel, Jim W.; Marques, Osni A.; Parlett, Beresford N.; Vomel, Christof

    2007-04-19

    We compare four algorithms from the latest LAPACK 3.1 release for computing eigenpairs of a symmetric tridiagonal matrix. These include QR iteration, bisection and inverse iteration (BI), the Divide-and-Conquer method (DC), and the method of Multiple Relatively Robust Representations (MR). Our evaluation considers speed and accuracy when computing all eigenpairs, and additionally subset computations. Using a variety of carefully selected test problems, our study includes a variety of today's computer architectures. Our conclusions can be summarized as follows. (1) DC and MR are generally much faster than QR and BI on large matrices. (2) MR almost always does the fewest floating point operations, but at a lower MFlop rate than all the other algorithms. (3) The exact performance of MR and DC strongly depends on the matrix at hand. (4) DC and QR are the most accurate algorithms, with observed accuracy O(√n·ε), where ε is the machine precision. The accuracy of BI and MR is generally O(n·ε). (5) MR is preferable to BI for subset computations.
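
The two accuracy measures quoted, the residual norm and the departure from orthogonality, are easy to compute directly. NumPy's `eigh` dispatches to a single LAPACK symmetric driver rather than all four, so this only illustrates the metrics, not the full comparison:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
d = rng.standard_normal(n)       # diagonal entries
e = rng.standard_normal(n - 1)   # off-diagonal entries
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

# Eigendecomposition via a LAPACK symmetric eigensolver
w, v = np.linalg.eigh(T)

# Scaled residual ||T V - V diag(w)|| / ||T|| and loss of orthogonality ||V^T V - I||,
# both expected to be a modest multiple of n times machine epsilon
residual = np.linalg.norm(T @ v - v * w) / np.linalg.norm(T)
orthogonality = np.linalg.norm(v.T @ v - np.eye(n))
```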

  3. Assessing Team Performance.

    ERIC Educational Resources Information Center

    Trimble, Susan; Rottier, Jerry

    Interdisciplinary middle school level teams capitalize on the idea that the whole is greater than the sum of its parts. Administrators and team members can maximize the advantages of teamwork using team assessments to increase the benefits for students, teachers, and the school environment. Assessing team performance can lead to high performing…

  4. Accuracy Assessment of Digital Elevation Models Using GPS

    NASA Astrophysics Data System (ADS)

    Farah, Ashraf; Talaat, Ashraf; Farrag, Farrag A.

    2008-01-01

    A Digital Elevation Model (DEM) is a digital representation of ground surface topography or terrain, with different accuracies required for different application fields. DEMs have been applied to a wide range of civil engineering and military planning tasks. A DEM can be obtained using a number of techniques, such as photogrammetry, digitizing, laser scanning, radar interferometry, classical survey and GPS techniques. This paper presents an assessment study of DEMs generated using GPS (Stop&Go) and kinematic techniques, compared with classical survey. The results show that a DEM generated from the (Stop&Go) GPS technique has the highest accuracy, with an RMS error of 9.70 cm. The RMS error of the DEM derived by kinematic GPS is 12.00 cm.
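
The RMS error quoted reduces to a one-line computation once DEM and reference elevations are paired at check points; the five elevations below are invented for illustration:

```python
import numpy as np

# Hypothetical check-point elevations (m): classical-survey reference vs GPS-derived DEM
reference = np.array([102.31, 98.75, 105.12, 99.40, 101.88])
dem       = np.array([102.40, 98.66, 105.05, 99.52, 101.80])

# Root-mean-square error of the DEM at the check points (here on the order of 10 cm)
rmse = np.sqrt(np.mean((dem - reference) ** 2))
```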

  5. APPLICATION OF A "VIRTUAL FIELD REFERENCE DATABASE" TO ASSESS LAND-COVER MAP ACCURACIES

    EPA Science Inventory

    An accuracy assessment was performed for the Neuse River Basin, NC land-cover/use
    (LCLU) mapping results using a "Virtual Field Reference Database (VFRDB)". The VFRDB was developed using field measurement and digital imagery (camera) data collected at 1,409 sites over a perio...

  6. The Social Accuracy Model of Interpersonal Perception: Assessing Individual Differences in Perceptive and Expressive Accuracy

    ERIC Educational Resources Information Center

    Biesanz, Jeremy C.

    2010-01-01

    The social accuracy model of interpersonal perception (SAM) is a componential model that estimates perceiver and target effects of different components of accuracy across traits simultaneously. For instance, Jane may be generally accurate in her perceptions of others and thus high in "perceptive accuracy"--the extent to which a particular…

  7. Assessing the accuracy of different simplified frictional rolling contact algorithms

    NASA Astrophysics Data System (ADS)

    Vollebregt, E. A. H.; Iwnicki, S. D.; Xie, G.; Shackleton, P.

    2012-01-01

    This paper presents an approach for assessing the accuracy of different frictional rolling contact theories. The main characteristic of the approach is that it takes a statistically oriented view. This yields a better insight into the behaviour of the methods in diverse circumstances (varying contact patch ellipticities, mixed longitudinal, lateral and spin creepages) than is obtained when only a small number of (basic) circumstances are used in the comparison. The range of contact parameters that occur for realistic vehicles and tracks are assessed using simulations with the Vampire vehicle system dynamics (VSD) package. This shows that larger values for the spin creepage occur rather frequently. Based on this, our approach is applied to typical cases for which railway VSD packages are used. The results show that particularly the USETAB approach but also FASTSIM give considerably better results than the linear theory, Vermeulen-Johnson, Shen-Hedrick-Elkins and Polach methods, when compared with the 'complete theory' of the CONTACT program.

  8. Accuracy Assessment of a Uav-Based Landslide Monitoring System

    NASA Astrophysics Data System (ADS)

    Peppa, M. V.; Mills, J. P.; Moore, P.; Miller, P. E.; Chambers, J. E.

    2016-06-01

    Landslides are hazardous events with often disastrous consequences. Monitoring landslides with observations of high spatio-temporal resolution can help mitigate such hazards. Mini unmanned aerial vehicles (UAVs) complemented by structure-from-motion (SfM) photogrammetry and modern per-pixel image matching algorithms can deliver a time-series of landslide elevation models in an automated and inexpensive way. This research investigates the potential of a mini UAV, equipped with a Panasonic Lumix DMC-LX5 compact camera, to provide surface deformations at acceptable levels of accuracy for landslide assessment. The study adopts a self-calibrating bundle adjustment-SfM pipeline using ground control points (GCPs). It evaluates misalignment biases and unresolved systematic errors that are transferred through the SfM process into the derived elevation models. To cross-validate the research outputs, results are compared to benchmark observations obtained by standard surveying techniques. The data is collected with 6 cm ground sample distance (GSD) and is shown to achieve planimetric and vertical accuracy of a few centimetres at independent check points (ICPs). The co-registration error of the generated elevation models is also examined in areas of stable terrain. Through this error assessment, the study estimates that the vertical sensitivity to real terrain change of the tested landslide is equal to 9 cm.
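
The quoted vertical sensitivity follows the standard level-of-detection calculation for a DEM of difference; the per-surface error values here are assumptions chosen for illustration, not the study's measurements:

```python
import math

# Assumed co-registration/elevation errors (m) of the two surface models over stable terrain
sigma_dem1 = 0.032
sigma_dem2 = 0.032

# 95% level of detection for a DEM of difference: vertical changes smaller than
# this threshold cannot be separated from noise in the two surfaces
lod_95 = 1.96 * math.sqrt(sigma_dem1 ** 2 + sigma_dem2 ** 2)
```

With roughly 3 cm of error per surface, the threshold lands near the 9 cm sensitivity reported in the abstract.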

  9. Assessment of optical localizer accuracy for computer aided surgery systems.

    PubMed

    Elfring, Robert; de la Fuente, Matías; Radermacher, Klaus

    2010-01-01

    The technology for localization of surgical tools with respect to the patient's reference coordinate system in three to six degrees of freedom is one of the key components in computer aided surgery. Several tracking methods are available, of which optical tracking is the most widespread in clinical use. Optical tracking technology has proven to be a reliable method for intra-operative position and orientation acquisition in many clinical applications; however, the accuracy of such localizers is still a topic of discussion. In this paper, the accuracy of three optical localizer systems, the NDI Polaris P4, the NDI Polaris Spectra (in active and passive mode) and the Stryker Navigation System II camera, is assessed and compared critically. Static tests revealed that only the Polaris P4 shows significant warm-up behavior, with a significant shift of accuracy being observed within 42 minutes of being switched on. Furthermore, the intrinsic localizer accuracy was determined for single markers as well as for tools using a volumetric measurement protocol on a coordinate measurement machine. To determine the relative distance error within the measurement volume, the Length Measurement Error (LME) was determined at 35 test lengths. As accuracy depends strongly on the marker configuration employed, the error to be expected in typical clinical setups was estimated in a simulation for different tool configurations. The two active localizer systems, the Stryker Navigation System II camera and the Polaris Spectra (active mode), showed the best results, with trueness values (mean +/- standard deviation) of 0.058 +/- 0.033 mm and 0.089 +/- 0.061 mm, respectively. The Polaris Spectra (passive mode) showed a trueness of 0.170 +/- 0.090 mm, and the Polaris P4 showed the lowest trueness at 0.272 +/- 0.394 mm with a higher number of outliers than for the other cameras. 
The simulation of the different tool configurations in a typical clinical setup revealed that the tracking error can
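
A minimal sketch of a length-measurement-error (LME) evaluation of the kind described, with invented nominal and measured separations:

```python
import numpy as np

# Hypothetical measured vs nominal marker separations (mm) on a coordinate
# measurement machine, one pair per test length
nominal  = np.array([50.0, 100.0, 150.0, 200.0])
measured = np.array([50.04, 100.09, 149.95, 200.12])

errors = measured - nominal

# Trueness reported in the style of the abstract: mean +/- standard deviation
# of the absolute length errors
trueness_mean = np.mean(np.abs(errors))
trueness_std = np.std(np.abs(errors), ddof=1)
```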

  10. Assessing Scientific Performance.

    ERIC Educational Resources Information Center

    Weiner, John M.; And Others

    1984-01-01

    A method for assessing scientific performance based on relationships displayed numerically in published documents is proposed and illustrated using published documents in pediatric oncology for the period 1979-1982. Contributions of a major clinical investigations group, the Childrens Cancer Study Group, are analyzed. Twenty-nine references are…

  11. Accuracy assessment of gridded precipitation datasets in the Himalayas

    NASA Astrophysics Data System (ADS)

    Khan, A.

    2015-12-01

    Accurate precipitation data are vital for hydro-climatic modelling and water resources assessments. Based on mass balance calculations and Turc-Budyko analysis, this study investigates the accuracy of twelve widely used precipitation gridded datasets for sub-basins in the Upper Indus Basin (UIB) in the Himalayas-Karakoram-Hindukush (HKH) region. These datasets are: 1) Global Precipitation Climatology Project (GPCP), 2) Climate Prediction Centre (CPC) Merged Analysis of Precipitation (CMAP), 3) NCEP / NCAR, 4) Global Precipitation Climatology Centre (GPCC), 5) Climatic Research Unit (CRU), 6) Asian Precipitation Highly Resolved Observational Data Integration Towards Evaluation of Water Resources (APHRODITE), 7) Tropical Rainfall Measuring Mission (TRMM), 8) European Reanalysis (ERA) interim data, 9) PRINCETON, 10) European Reanalysis-40 (ERA-40), 11) Willmott and Matsuura, and 12) WATCH Forcing Data based on ERA interim (WFDEI). Precipitation accuracy and consistency was assessed by physical mass balance involving sum of annual measured flow, estimated actual evapotranspiration (average of 4 datasets), estimated glacier mass balance melt contribution (average of 4 datasets), and ground water recharge (average of 3 datasets), during 1999-2010. Mass balance assessment was complemented by Turc-Budyko non-dimensional analysis, where annual precipitation, measured flow and potential evapotranspiration (average of 5 datasets) data were used for the same period. Both analyses suggest that all tested precipitation datasets significantly underestimate precipitation in the Karakoram sub-basins. For the Hindukush and Himalayan sub-basins most datasets underestimate precipitation, except ERA-interim and ERA-40. 
The analysis indicates that for this large region with complicated terrain features and stark spatial precipitation gradients the reanalysis datasets have better consistency with flow measurements than datasets derived from records of only sparsely distributed climatic
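
The mass-balance test reduces to simple bookkeeping per sub-basin. All numbers below are invented for illustration, not the study's values:

```python
# Annual water balance for one sub-basin (all terms in mm/year); hypothetical values
q_measured = 700.0    # measured streamflow
aet = 300.0           # actual evapotranspiration (dataset average)
glacier_melt = 150.0  # flow contribution from glacier mass loss
gw_recharge = 50.0    # groundwater recharge

# Precipitation needed to close the balance: flow plus losses, minus the part
# of the flow supplied by glacier mass loss rather than precipitation
p_required = q_measured + aet + gw_recharge - glacier_melt

p_dataset = 520.0     # annual precipitation reported by a gridded dataset

# Fractional underestimation of the dataset relative to the balance requirement
underestimation = (p_required - p_dataset) / p_required
```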

  12. Evaluating the effect of learning style and student background on self-assessment accuracy

    NASA Astrophysics Data System (ADS)

    Alaoutinen, Satu

    2012-06-01

    This study evaluates a new taxonomy-based self-assessment scale and examines factors that affect assessment accuracy and course performance. The scale is based on Bloom's Revised Taxonomy and is evaluated by comparing students' self-assessment results with course performance in a programming course. Correlation has been used to reveal possible connections between student information and both self-assessment and course performance. The results show that students can place their knowledge along the taxonomy-based scale quite well, and the scale seems to fit engineering students' learning style. Advanced students assess themselves more accurately than novices. The results also show that reflective students were better in programming than active students. The scale used in this study gives a more objective picture of students' knowledge than general scales, and with modifications it can be used in classes other than programming.
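
The correlation step is a standard Pearson computation; the paired scores below are invented for illustration:

```python
import numpy as np

# Hypothetical paired scores: taxonomy-based self-assessment level (1-6)
# vs total course points for ten students
self_assessment = np.array([2, 3, 3, 4, 5, 5, 6, 4, 2, 5])
course_points   = np.array([31, 44, 40, 55, 70, 66, 82, 58, 35, 73])

# Pearson correlation between self-assessment and course performance
r = np.corrcoef(self_assessment, course_points)[0, 1]
```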

  13. Accuracy assessment of a surface electromyogram decomposition system in human first dorsal interosseus muscle

    NASA Astrophysics Data System (ADS)

    Hu, Xiaogang; Rymer, William Z.; Suresh, Nina L.

    2014-04-01

    Objective. The aim of this study is to assess the accuracy of a surface electromyogram (sEMG) motor unit (MU) decomposition algorithm during low levels of muscle contraction. Approach. A two-source method was used to verify the accuracy of the sEMG decomposition system, by utilizing simultaneous intramuscular and surface EMG recordings from the human first dorsal interosseous muscle recorded during isometric trapezoidal force contractions. Spike trains from each recording type were decomposed independently utilizing two different algorithms, EMGlab and dEMG decomposition algorithms. The degree of agreement of the decomposed spike timings was assessed for three different segments of the EMG signals, corresponding to specified regions in the force task. A regression analysis was performed to examine whether certain properties of the sEMG and force signal can predict the decomposition accuracy. Main results. The average accuracy of successful decomposition among the 119 MUs that were common to both intramuscular and surface records was approximately 95%, and the accuracy was comparable between the different segments of the sEMG signals (i.e., force ramp-up versus steady state force versus combined). The regression function between the accuracy and properties of sEMG and force signals revealed that the signal-to-noise ratio of the action potential and stability in the action potential records were significant predictors of the surface decomposition accuracy. Significance. The outcomes of our study confirm the accuracy of the sEMG decomposition algorithm during low muscle contraction levels and provide confidence in the overall validity of the surface dEMG decomposition algorithm.
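
The two-source agreement measure can be sketched as a tolerance match between spike trains. The discharge times, the 5 ms tolerance, and the function name below are assumptions for illustration, not the study's algorithm:

```python
import numpy as np

def agreement_rate(spikes_a, spikes_b, tol=0.005):
    """Fraction of spikes in spikes_a matched by a spike in spikes_b
    within +/- tol seconds (two-source comparison)."""
    spikes_b = np.asarray(spikes_b)
    matched = sum(1 for t in spikes_a if np.any(np.abs(spikes_b - t) <= tol))
    return matched / len(spikes_a)

# Hypothetical discharge times (s) of one motor unit from the two decompositions
intramuscular = [0.10, 0.21, 0.33, 0.44, 0.57, 0.69]
surface       = [0.101, 0.209, 0.332, 0.50, 0.571, 0.688]

acc = agreement_rate(intramuscular, surface)  # 5 of 6 discharges matched
```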

  14. Inertial Measures of Motion for Clinical Biomechanics: Comparative Assessment of Accuracy under Controlled Conditions – Changes in Accuracy over Time

    PubMed Central

    Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian

    2015-01-01

    Background Interest in 3D inertial motion tracking devices (AHRS) has been growing rapidly among the biomechanical community. Although the convenience of such tracking devices seems to open a whole new world of possibilities for evaluation in clinical biomechanics, its limitations haven’t been extensively documented. The objectives of this study are: 1) to assess the change in absolute and relative accuracy of multiple units of 3 commercially available AHRS over time; and 2) to identify different sources of errors affecting AHRS accuracy and to document how they may affect the measurements over time. Methods This study used an instrumented Gimbal table on which AHRS modules were carefully attached and put through a series of velocity-controlled sustained motions including 2 minutes motion trials (2MT) and 12 minutes multiple dynamic phases motion trials (12MDP). Absolute accuracy was assessed by comparison of the AHRS orientation measurements to those of an optical gold standard. Relative accuracy was evaluated using the variation in relative orientation between modules during the trials. Findings Both absolute and relative accuracy decreased over time during 2MT. 12MDP trials showed a significant decrease in accuracy over multiple phases, but accuracy could be enhanced significantly by resetting the reference point and/or compensating for initial Inertial frame estimation reference for each phase. Interpretation The variation in AHRS accuracy observed between the different systems and with time can be attributed in part to the dynamic estimation error, but also and foremost, to the ability of AHRS units to locate the same Inertial frame. Conclusions Mean accuracies obtained under the Gimbal table sustained conditions of motion suggest that AHRS are promising tools for clinical mobility assessment under constrained conditions of use. However, improvement in magnetic compensation and alignment between AHRS modules are desirable in order for AHRS to reach their

  15. Empathic accuracy for happiness in the daily lives of older couples: Fluid cognitive performance predicts pattern accuracy among men.

    PubMed

    Hülür, Gizem; Hoppmann, Christiane A; Rauers, Antje; Schade, Hannah; Ram, Nilam; Gerstorf, Denis

    2016-08-01

    Correctly identifying others' emotional states is a central cognitive component of empathy. We examined the role of fluid cognitive performance for empathic accuracy for happiness in the daily lives of 86 older couples (mean relationship length = 45 years; mean age = 75 years) on up to 42 occasions over 7 consecutive days. Men performing better on the Digit Symbol test were more accurate in identifying ups and downs of their partner's happiness. A similar association was not found for women. We discuss the potential role of fluid cognitive performance and other individual, partner, and situation characteristics for empathic accuracy. PMID:27362351

  16. Trading accuracy for speed: gender differences on a Stroop task under mild performance anxiety.

    PubMed

    von Kluge, S

    1992-10-01

    A standard Stroop task was used to examine the effect of performance anxiety on 58 male and 69 female undergraduates. Subjects were approached either by two casually dressed experimenters who did not stress speed or accuracy or by 4 or 5 formally dressed experimenters who stressed quick and accurate performance. Subjects were told the test would assess their "mental acuity"; their responses were visibly tape-recorded. Reaction times did not show differential response by anxiety condition; men and women showed different RTs only in the low-anxiety condition, with women performing significantly more slowly. There were no significant differences for the high-anxiety condition. Analysis of errors showed women were more accurate than men. Men traded accuracy for speed and may have been under equal performance stress in both situations. When performance was not stressed, women were slower and more accurate than men. When performance was stressed, women increased their speed to match that of men while maintaining their greater accuracy.

  17. Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.

    2012-01-01

    Description of NASA Airborne Topographic Mapper (ATM) lidar calibration procedures, including analysis of the accuracy and consistency of various ATM instrument parameters and the resulting influence on topographic elevation measurements. The ATM elevation measurements from a nominal operating altitude of 500 to 750 m above the ice surface were found to be: Horizontal Accuracy 74 cm, Horizontal Precision 14 cm, Vertical Accuracy 6.6 cm, Vertical Precision 3 cm.

  18. Performance Testing using Silicon Devices - Analysis of Accuracy: Preprint

    SciTech Connect

    Sengupta, M.; Gotseff, P.; Myers, D.; Stoffel, T.

    2012-06-01

    Accurately determining PV module performance in the field requires accurate measurements of the solar irradiance reaching the PV panel (i.e., Plane-of-Array, POA, irradiance) with known measurement uncertainty. Pyranometers are commonly based on thermopile or silicon photodiode detectors. Silicon detectors, including PV reference cells, are an attractive choice for reasons that include faster time response (10 μs) than thermopile detectors (1 s to 5 s), and lower cost and maintenance. The main drawback of silicon detectors is their limited spectral response. Therefore, to determine broadband POA solar irradiance, a pyranometer calibration factor that converts the narrowband response to broadband is required. Normally this calibration factor is a single number determined under clear-sky conditions with respect to a broadband reference radiometer. The pyranometer is then used in various scenarios, including varying airmass, panel orientation and atmospheric conditions. This would not be an issue if all irradiance wavelengths that form the broadband spectrum responded uniformly to atmospheric constituents. Unfortunately, the scattering and absorption signature varies widely with wavelength, and the calibration factor for the silicon photodiode pyranometer is not appropriate for other conditions. This paper reviews the issues that arise from the use of silicon detectors for PV performance measurement in the field, based on measurements from a group of pyranometers mounted on a 1-axis solar tracker. We also present a comparison of simultaneous spectral and broadband measurements from silicon and thermopile detectors, and estimated measurement errors when using silicon devices for both array performance and resource assessment.

  19. New Criteria for Assessing the Accuracy of Blood Glucose Monitors meeting, October 28, 2011.

    PubMed

    Walsh, John; Roberts, Ruth; Vigersky, Robert A; Schwartz, Frank

    2012-03-01

    Glucose meters (GMs) are routinely used for self-monitoring of blood glucose by patients and for point-of-care glucose monitoring by health care providers in outpatient and inpatient settings. Although widely assumed to be accurate, numerous reports of inaccuracies with resulting morbidity and mortality have been noted. Insulin dosing errors based on inaccurate GMs are most critical. On October 28, 2011, the Diabetes Technology Society invited 45 diabetes technology clinicians who were attending the 2011 Diabetes Technology Meeting to participate in a closed-door meeting entitled New Criteria for Assessing the Accuracy of Blood Glucose Monitors. This report reflects the opinions of most of the attendees of that meeting. The Food and Drug Administration (FDA), the public, and several medical societies are currently in dialogue to establish a new standard for GM accuracy. This update to the FDA standard is driven by improved meter accuracy, technological advances (pumps, bolus calculators, continuous glucose monitors, and insulin pens), reports of hospital and outpatient deaths, consumer complaints about inaccuracy, and research studies showing that several approved GMs failed to meet FDA or International Organization for Standardization standards in postapproval testing. These circumstances mandate a set of new GM standards that appropriately match the GMs' analytical accuracy to the clinical accuracy required for their intended use, as well as ensuring their ongoing accuracy following approval. The attendees of the New Criteria for Assessing the Accuracy of Blood Glucose Monitors meeting proposed a graduated standard and other methods to improve GM performance, which are discussed in this meeting report.

  20. Mapping with Small UAS: A Point Cloud Accuracy Assessment

    NASA Astrophysics Data System (ADS)

    Toth, Charles; Jozkow, Grzegorz; Grejner-Brzezinska, Dorota

    2015-12-01

    Interest in using inexpensive Unmanned Aerial System (UAS) technology for topographic mapping has increased significantly in recent years. Small UAS platforms equipped with consumer-grade cameras can easily acquire high-resolution aerial imagery, allowing for dense point cloud generation followed by surface model creation and orthophoto production. In contrast to conventional airborne mapping systems, UAS has limited ground coverage due to low flying height and limited flying time, yet it offers an attractive alternative to high-performance airborne systems, as the costs of the sensors, platform, and flight logistics are relatively low. In addition, UAS is better suited to small-area data acquisition and to acquiring data in difficult-to-access areas, such as urban canyons or densely built-up environments. The main question with respect to the use of UAS is whether the inexpensive consumer sensors installed on UAS platforms can provide geospatial data quality comparable to that of conventional systems. This study evaluates the performance of current practice in UAS-based topographic mapping by reviewing the practical aspects of sensor configuration, georeferencing, and point cloud generation, including comparisons between sensor types and processing tools. The main objective is to provide accuracy characterization and practical information for selecting and using UAS solutions in general mapping applications. The analysis is based on statistical evaluation as well as visual examination of experimental data acquired by a Bergen octocopter with three different image sensor configurations: a GoPro HERO3+ Black Edition, a Nikon D800 DSLR, and a Velodyne HDL-32. In addition, georeferencing data of varying quality were acquired and evaluated. The optical imagery was processed using three commercial point cloud generation tools, and point clouds created by active and passive sensors of different quality were compared.

  1. An assessment of reservoir storage change accuracy from SWOT

    NASA Astrophysics Data System (ADS)

    Clark, Elizabeth; Moller, Delwyn; Lettenmaier, Dennis

    2013-04-01

    The anticipated Surface Water and Ocean Topography (SWOT) satellite mission will provide water surface height and areal extent measurements for terrestrial water bodies at an unprecedented accuracy, with essentially global coverage and a 22-day repeat cycle. These measurements will provide a unique opportunity to observe storage changes in naturally occurring lakes, as well as manmade reservoirs. Given political constraints on the sharing of water information, international databases of reservoir characteristics, such as the Global Reservoir and Dam Database, are limited to the largest reservoirs for which countries have voluntarily provided information. Impressive efforts have been made to combine currently available altimetry data with satellite-based imagery of water surface extent; however, these data sets are limited to large reservoirs located on an altimeter's flight track. SWOT's global coverage and simultaneous measurement of height and water surface extent remove, in large part, the constraint of location relative to flight path. Previous studies based on Arctic lakes suggest that SWOT will be able to provide a noisy, but meaningful, storage change signal for lakes as small as 250 m x 250 m. Here, we assess the accuracy of monthly storage change estimates over 10 reservoirs in the U.S. and consider the plausibility of estimating total storage change. Published maps of reservoir bathymetry were combined with a historical time series of daily storage to produce daily time series of maps of water surface elevation. These time series were then sampled based on realistic SWOT orbital parameters and noise characteristics to create a time series of synthetic SWOT observations of water surface elevation and extent for each reservoir. We then plotted area versus elevation for the true values and for the synthetic SWOT observations. For each reservoir, a curve was fit to the synthetic SWOT observations, and its integral was used to estimate total storage.
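
    The storage-change computation described above, fitting an area-elevation curve and integrating it between two water levels, can be sketched as follows. The elevation/area pairs below are hypothetical illustrations, not SWOT or gauge data:

```python
import numpy as np

# Hedged sketch: estimate reservoir storage change from paired
# water-surface elevation / area observations. The sample values
# are hypothetical, not actual SWOT measurements.
elev = np.array([100.0, 102.0, 104.0, 106.0, 108.0])   # water level (m)
area = np.array([8.0, 9.1, 10.3, 11.6, 13.0]) * 1e6    # surface area (m^2)

# Fit area as a quadratic function of elevation.
area_poly = np.poly1d(np.polyfit(elev, area, deg=2))

# Storage change between two observed levels is the integral of
# A(h) dh over [h1, h2].
storage_poly = area_poly.integ()
h1, h2 = 101.0, 107.0
delta_storage = storage_poly(h2) - storage_poly(h1)    # m^3
print(f"storage change: {delta_storage / 1e6:.1f} million m^3")
```

In practice the fitted curve would come from the noisy synthetic SWOT (elevation, extent) pairs rather than exact values, so the integral inherits that noise.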

  2. Assessment Of Accuracies Of Remote-Sensing Maps

    NASA Technical Reports Server (NTRS)

    Card, Don H.; Strong, Laurence L.

    1992-01-01

    Report describes study of accuracies of classifications of picture elements in map derived by digital processing of Landsat-multispectral-scanner imagery of coastal plain of Arctic National Wildlife Refuge. Accuracies of portions of map analyzed with help of statistical sampling procedure called "stratified plurality sampling", in which all picture elements in given cluster classified in stratum to which plurality of them belong.

  3. Individual differences in learning speed, performance accuracy and exploratory behaviour in black-capped chickadees.

    PubMed

    Guillette, Lauren M; Hahn, Allison H; Hoeschele, Marisa; Przyslupski, Ann-Marie; Sturdy, Christopher B

    2015-01-01

    Cognitive processes are important to animals because they not only influence how animals acquire, store and recall information, but also may underpin behaviours such as deciding where to look for food, build a nest, or with whom to mate. Several recent studies have begun to examine the potential interaction between variation in cognition and variation in personality traits. One hypothesis proposes that there is a speed-accuracy trade-off in cognitive ability that aligns with a fast-slow behaviour type. Here, we explicitly examined this hypothesis by testing wild-caught black-capped chickadees in a series of cognitive tasks that assessed both learning speed (the number of trials taken to learn) and accuracy (post-acquisition performance when tested with un-trained exemplars). Chickadees' exploration scores were measured in a novel environment task. We found that slow-exploring chickadees demonstrated higher accuracy during the test phase, but did not learn the initial task in fewer trials compared to fast-exploring chickadees, providing partial support for the proposed link between cognition and personality. We report positive correlations in learning speed between different phases within cognitive tasks, but not between the three cognitive tasks, suggesting independence in underlying cognitive processing. We discuss different rule-based strategies that may contribute to differential performance accuracy in cognitive tasks and provide suggestions for future experimentation to examine mechanisms underlying the relationship between cognition and personality.

  4. Accuracy Assessment of Coastal Topography Derived from Uav Images

    NASA Astrophysics Data System (ADS)

    Long, N.; Millescamps, B.; Pouget, F.; Dumon, A.; Lachaussée, N.; Bertin, X.

    2016-06-01

    To monitor coastal environments, Unmanned Aerial Vehicles (UAVs) are a low-cost and easy-to-use solution enabling data acquisition with high temporal frequency and spatial resolution. Compared to Light Detection And Ranging (LiDAR) or Terrestrial Laser Scanning (TLS), this solution produces Digital Surface Models (DSMs) with similar accuracy. To evaluate DSM accuracy in a coastal environment, a campaign was carried out with a flying wing (eBee) combined with a digital camera. Using the Photoscan software and a photogrammetric (Structure From Motion) workflow, a DSM and an orthomosaic were produced, and the DSM accuracy was estimated against GNSS surveys. Two parameters were tested: the influence of the methodology (number and distribution of Ground Control Points, GCPs) and the influence of spatial image resolution (4.6 cm vs 2 cm). The results show that this solution can reproduce the topography of a coastal area with high vertical accuracy (< 10 cm). Georeferencing the DSM requires a homogeneous distribution and a large number of GCPs: accuracy is correlated with the number of GCPs (using 19 GCPs instead of 10 reduced the difference by 4 cm), and the required accuracy should depend on the research question. Finally, in this particular environment, the presence of very small water surfaces on the sand bank did not allow accuracy to improve when the spatial resolution of the images was decreased.
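
    The vertical-accuracy evaluation against GNSS surveys comes down to computing the bias and RMSE of DSM elevations at check points; a minimal sketch with hypothetical elevations:

```python
import numpy as np

# Hedged sketch: vertical accuracy of a DSM against GNSS check
# points, as in the evaluation above. The elevations are
# hypothetical, not the study's data.
dsm_z  = np.array([2.31, 1.87, 3.02, 2.55, 1.94])   # DSM elevations (m)
gnss_z = np.array([2.27, 1.90, 2.95, 2.61, 1.89])   # GNSS elevations (m)

diff = dsm_z - gnss_z
bias = np.mean(diff)                  # systematic offset
rmse = np.sqrt(np.mean(diff ** 2))   # overall vertical accuracy
print(f"bias = {bias:+.3f} m, RMSE = {rmse:.3f} m")
```

An RMSE under 0.10 m for all check points would correspond to the sub-decimetre vertical accuracy the study reports.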

  5. Biological Performance Assessment

    SciTech Connect

    2013-07-09

    The BioPA provides turbine designers with a set of tools that can be used to assess biological risks of turbines during the design phase, before expensive construction begins. The toolset can also be used to assess existing installations under a variety of operating conditions, supplementing data obtained through expensive field testing. The BioPA uses computational fluid dynamics (CFD) simulations of a turbine design to quantify the exposure of passing fish to a set of known injury mechanisms. By appropriate sampling of the fluid domain, the BioPA assigns exposure probabilities to each mechanism. The exposure probabilities are combined with dose-response data from laboratory stress studies of fish to produce a set of BioPA Scores. These metrics provide an objective measure that can be used to compare competing turbines or to refine a new design. The BioPA process can be performed during the turbine design phase and is considerably less expensive than prototype-scale field testing.
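
    One plausible way to combine per-mechanism exposure probabilities with dose-response injury rates, in the spirit of the description above, can be sketched as follows. The mechanisms, the probability values, and the aggregation rule (independence across mechanisms) are all illustrative assumptions, not the actual BioPA scoring formulas:

```python
# Hedged sketch: combining exposure probabilities with dose-response
# data. All numbers and the aggregation rule are hypothetical.
exposure = {                 # P(fish exposed to each injury mechanism)
    "blade_strike": 0.05,
    "shear": 0.12,
    "pressure_drop": 0.30,
}
injury_given_exposure = {    # from laboratory dose-response studies
    "blade_strike": 0.40,
    "shear": 0.10,
    "pressure_drop": 0.05,
}

# Expected injury probability per mechanism, then overall risk
# assuming mechanisms act independently.
per_mechanism = {m: exposure[m] * injury_given_exposure[m] for m in exposure}
p_uninjured = 1.0
for p in per_mechanism.values():
    p_uninjured *= 1.0 - p
print(f"overall injury probability: {1.0 - p_uninjured:.3f}")
```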

  6. Accuracy of peak VO2 assessments in career firefighters

    PubMed Central

    2011-01-01

    Background Sudden cardiac death is the leading cause of on-duty death in United States firefighters. Accurately assessing cardiopulmonary capacity is critical to preventing, or reducing, cardiovascular events in this population. Methods A total of 83 male firefighters performed Wellness-Fitness Initiative (WFI) maximal exercise treadmill tests and direct peak VO2 assessments to volitional fatigue. Of the 83, 63 completed WFI sub-maximal exercise treadmill tests for comparison to directly measured peak VO2 and historical estimations. Results Maximal heart rates were overestimated by the traditional 220-age equation by about 5 beats per minute (p < .001). Peak VO2 was overestimated by the WFI maximal exercise treadmill and the historical WFI sub-maximal estimation by ~ 1MET and ~ 2 METs, respectively (p < 0.001). The revised 2008 WFI sub-maximal treadmill estimation was found to accurately estimate peak VO2 when compared to directly measured peak VO2. Conclusion Accurate assessment of cardiopulmonary capacity is critical in determining appropriate duty assignments, and identification of potential cardiovascular problems, for firefighters. Estimation of cardiopulmonary fitness improves using the revised 2008 WFI sub-maximal equation. PMID:21943154
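
    The two conversions underlying the comparison above, the traditional 220 − age estimate of maximal heart rate and the MET equivalent of a measured peak VO2 (1 MET = 3.5 mL/kg/min), can be expressed directly. The example values are hypothetical:

```python
# Hedged sketch of the standard conversions referenced above.
def age_predicted_hrmax(age: int) -> int:
    """Traditional 220 - age estimate of maximal heart rate (bpm)."""
    return 220 - age

def vo2_to_mets(vo2_ml_kg_min: float) -> float:
    """Convert a peak VO2 (mL/kg/min) to metabolic equivalents (METs)."""
    return vo2_ml_kg_min / 3.5

# Example: a 40-year-old firefighter with a measured peak VO2 of
# 42 mL/kg/min (hypothetical values).
print(age_predicted_hrmax(40))       # 180
print(round(vo2_to_mets(42.0), 1))   # 12.0
```

Per the study, the 220 − age estimate ran about 5 bpm high against measured maxima, and treadmill-based estimates overestimated peak VO2 by roughly 1-2 METs.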

  7. Considering the base rates of low performance in cognitively healthy older adults improves the accuracy to identify neurocognitive impairment with the Consortium to Establish a Registry for Alzheimer's Disease-Neuropsychological Assessment Battery (CERAD-NAB).

    PubMed

    Mistridis, Panagiota; Egli, Simone C; Iverson, Grant L; Berres, Manfred; Willmes, Klaus; Welsh-Bohmer, Kathleen A; Monsch, Andreas U

    2015-08-01

    It is common for some healthy older adults to obtain low test scores when a battery of neuropsychological tests is administered, which increases the risk of the clinician misdiagnosing cognitive impairment. Thus, base rates of healthy individuals' low scores are required to more accurately interpret neuropsychological results. At present, this information is not available for the German version of the Consortium to Establish a Registry for Alzheimer's Disease-Neuropsychological Assessment Battery (CERAD-NAB), a frequently used battery in the USA and in German-speaking Europe. This study aimed to determine the base rates of low scores for the CERAD-NAB and to tabulate a summary figure of cut-off scores and numbers of low scores to aid in clinical decision making. The base rates of low scores on the ten German CERAD-NAB subscores were calculated from the German CERAD-NAB normative sample (N = 1,081) using six different cut-off scores (i.e., 1st, 2.5th, 7th, 10th, 16th, and 25th percentile). Results indicate that high percentages of one or more "abnormal" scores were obtained, irrespective of the cut-off criterion. For example, 60.6% of the normative sample obtained one or more scores at or below the 10th percentile. These findings illustrate the importance of considering the prevalence of low scores in healthy individuals. The summary figure of CERAD-NAB base rates is an important supplement for test interpretation and can be used to improve the diagnostic accuracy of neurocognitive disorders. PMID:25555899
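
    The statistical point above can be illustrated: even in healthy examinees, the chance of at least one low score grows quickly with the number of subscores, and correlation between subscores pulls the rate below the independence bound, which is why empirical base rates such as CERAD-NAB's are needed. A sketch assuming equicorrelated normal subscores (the correlation value is an assumption):

```python
import numpy as np

# Hedged sketch: base rate of "at least one low score" across a
# battery of k subscores at the p-th percentile cut-off.
k = 10     # number of subscores, as in the CERAD-NAB
p = 0.10   # cut-off at the 10th percentile

# Under independence: 1 - (1 - p)^k.
independent = 1 - (1 - p) ** k
print(f"independent-tests bound: {independent:.1%}")

# Simulation with moderately correlated subscores (r = 0.4, assumed).
rng = np.random.default_rng(0)
r = 0.4
cov = np.full((k, k), r) + (1 - r) * np.eye(k)
scores = rng.multivariate_normal(np.zeros(k), cov, size=100_000)
cut = np.quantile(scores, p)   # approx. the 10th-percentile z-cut
low = (scores <= cut).any(axis=1).mean()
print(f"simulated base rate with r = 0.4: {low:.1%}")
```

The simulated rate lands between the single-test rate (10%) and the independence bound (about 65%), in the same range as the 60.6% the normative sample shows.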

  8. Accuracy Assessment of Response Surface Approximations for Supersonic Turbine Design

    NASA Technical Reports Server (NTRS)

    Papila, Nilay; Papila, Melih; Shyy, Wei; Haftka, Raphael T.; FitzCoy, Norman

    2001-01-01

    There is a growing trend to employ CFD tools to supply the necessary information for design optimization of fluid dynamics components/systems. Such results are prone to uncertainties due to reasons including discretization errors, incomplete convergence of computational procedures, and errors associated with physical models such as turbulence closures. Based on this type of information, gradient-based optimization algorithms often suffer from noisy calculations, which can seriously compromise the outcome. Similar problems arise from experimental measurements. Global optimization techniques, such as those based on the response surface (RS) concept, are becoming popular in part because they can overcome some of these barriers. However, there are also fundamental issues related to global optimization techniques such as RS. For example, in high-dimensional design spaces, typically only a small number of function evaluations are available due to computational and experimental costs. On the other hand, complex features of the design variables do not allow one to model the global characteristics of the design space with simple quadratic polynomials. Consequently a main challenge is to reduce the size of the region where we fit the RS, or make it more accurate in the regions where the optimum is likely to reside. Response Surface techniques using either polynomials or Neural Network (NN) methods offer designers alternatives to conduct design optimization. The RS technique employs statistical and numerical techniques to establish the relationship between design variables and objective/constraint functions, typically using polynomials. In this study, we aim at addressing issues related to the following questions: (1) How to identify outliers associated with a given RS representation and improve the RS model via appropriate treatments? (2) How to focus on selected design data so that RS can give better performance in regions critical to design optimization? (3) …
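
    A minimal sketch of the RS idea discussed above: fit a quadratic polynomial to noisy samples of a two-variable objective (a hypothetical stand-in for an expensive CFD evaluation) and read the candidate optimum off the fitted surface:

```python
import numpy as np

# Hedged sketch: quadratic response surface over two design
# variables. The objective and noise level are hypothetical.
rng = np.random.default_rng(3)

def objective(x1, x2):
    # "True" response plus simulated evaluation noise.
    return (x1 - 0.3) ** 2 + 2.0 * (x2 + 0.1) ** 2 + rng.normal(0, 0.01)

# Sample the design space.
pts = rng.uniform(-1, 1, size=(30, 2))
y = np.array([objective(a, b) for a, b in pts])

# Full quadratic basis: 1, x1, x2, x1^2, x1*x2, x2^2.
x1, x2 = pts[:, 0], pts[:, 1]
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x1 * x2, x2**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Stationary point of the fitted quadratic (candidate optimum).
H = np.array([[2 * coef[3], coef[4]], [coef[4], 2 * coef[5]]])
g = -np.array([coef[1], coef[2]])
x_opt = np.linalg.solve(H, g)
print("fitted optimum near:", np.round(x_opt, 2))
```

Because the fit smooths the noise, the RS gradient is well behaved even where the raw evaluations would defeat a gradient-based optimizer, which is the motivation the abstract gives.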

  9. Pixels, Blocks of Pixels, and Polygons: Choosing a Spatial Unit for Thematic Accuracy Assessment

    EPA Science Inventory

    Pixels, polygons, and blocks of pixels are all potentially viable spatial assessment units for conducting an accuracy assessment. We develop a statistical population-based framework to examine how the spatial unit chosen affects the outcome of an accuracy assessment. The populati...

  10. Does it Make a Difference? Investigating the Assessment Accuracy of Teacher Tutors and Student Tutors

    ERIC Educational Resources Information Center

    Herppich, Stephanie; Wittwer, Jorg; Nuckles, Matthias; Renkl, Alexander

    2013-01-01

    Tutors often have difficulty with accurately assessing a tutee's understanding. However, little is known about whether the professional expertise of tutors influences their assessment accuracy. In this study, the authors examined the accuracy with which 21 teacher tutors and 25 student tutors assessed a tutee's understanding of the human…

  11. Accuracy of virtual models in the assessment of maxillary defects

    PubMed Central

    Kurşun, Şebnem; Kılıç, Cenk; Özen, Tuncer

    2015-01-01

    Purpose This study aimed to assess the reliability of measurements performed on three-dimensional (3D) virtual models of maxillary defects obtained using cone-beam computed tomography (CBCT) and 3D optical scanning. Materials and Methods Mechanical cavities simulating maxillary defects were prepared on the hard palate of nine cadavers. Images were obtained using a CBCT unit at three different fields of view (FOVs) and voxel sizes: 1) 60×60 mm FOV, 0.125 mm3 (FOV60); 2) 80×80 mm FOV, 0.160 mm3 (FOV80); and 3) 100×100 mm FOV, 0.250 mm3 (FOV100). Superimposition of the images was performed using software called VRMesh Design. Automated volume measurements were conducted, and differences between surfaces were demonstrated. Silicon impressions obtained from the defects were also scanned with a 3D optical scanner. Virtual models obtained using VRMesh Design were compared with impressions obtained by scanning silicon models. Gold standard volumes of the impression models were then compared with CBCT and 3D scanner measurements. Further, the general linear model was used, and significance was set at p=0.05. Results A comparison of the results obtained by the observers and methods revealed the p values to be smaller than 0.05, suggesting that the measurement variations were caused by both methods and observers along with the different cadaver specimens used. Further, the 3D scanner measurements were closer to the gold standard measurements when compared to the CBCT measurements. Conclusion In the assessment of artificially created maxillary defects, the 3D scanner measurements were more accurate than the CBCT measurements. PMID:25793180

  12. Bootstrap performance profiles in stochastic algorithms assessment

    SciTech Connect

    Costa, Lino; Espírito Santo, Isabel A.C.P.; Oliveira, Pedro

    2015-03-10

    Optimization with stochastic algorithms has become a relevant research field. Due to its stochastic nature, its assessment is not straightforward and involves integrating accuracy and precision. Performance profiles for the mean do not show the trade-off between accuracy and precision, and parametric stochastic profiles require strong distributional assumptions and are limited to the mean performance for a large number of runs. In this work, bootstrap performance profiles are used to compare stochastic algorithms for different statistics. This technique allows the estimation of the sampling distribution of almost any statistic even with small samples. Multiple comparison profiles are presented for more than two algorithms. The advantages and drawbacks of each assessment methodology are discussed.
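
    The bootstrap idea described above can be sketched: resample a small set of run results with replacement to estimate the sampling distribution of a statistic such as the median. The run results below are hypothetical:

```python
import numpy as np

# Hedged sketch: bootstrap estimate of the sampling distribution of
# the median best objective value over runs of a stochastic
# algorithm. The run results are hypothetical.
rng = np.random.default_rng(42)
run_results = np.array([0.82, 0.75, 0.91, 0.68, 0.88, 0.79, 0.85])

n_boot = 10_000
boot_medians = np.array([
    np.median(rng.choice(run_results, size=run_results.size, replace=True))
    for _ in range(n_boot)
])

# Percentile confidence interval for the median performance.
lo, hi = np.percentile(boot_medians, [2.5, 97.5])
print(f"median = {np.median(run_results):.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```

The same resampling works for almost any statistic (mean, quartiles, success rate), which is the flexibility the abstract highlights for small samples.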

  13. Measuring physicians' performance in clinical practice: reliability, classification accuracy, and validity.

    PubMed

    Weifeng Weng; Hess, Brian J; Lynn, Lorna A; Holmboe, Eric S; Lipner, Rebecca S

    2010-09-01

    Much research has been devoted to addressing challenges in achieving reliable assessments of physicians' clinical performance but less work has focused on whether valid and accurate classification decisions are feasible. This study used 957 physicians certified in internal medicine (IM) or a subspecialty, who completed the American Board of Internal Medicine (ABIM) Diabetes Practice Improvement Module (PIM). Ten clinical and two patient-experience measures were aggregated into a composite measure. The composite measure score was highly reliable (r = .91) and classification accuracy was high across the entire score scale (>0.90), which indicated that it is possible to differentiate high-performing and low-performing physicians. Physicians certified in endocrinology and those who scored higher on their IM certification examination had higher composite scores, providing some validity evidence. In summary, it is feasible to create a psychometrically robust composite measure of physicians' clinical performance, specifically for the quality of care they provide to patients with diabetes.

  14. Georgia's Teacher Performance Assessment

    ERIC Educational Resources Information Center

    Fenton, Anne Marie; Wetherington, Pamela

    2016-01-01

    Like most states, Georgia until recently depended on an assessment of content knowledge to award teaching licenses, along with a licensure recommendation from candidates' educator preparation programs. While the content assessment reflected candidates' grasp of subject matter, licensure decisions did not hinge on direct, statewide assessment of…

  15. Constraint on Absolute Accuracy of Metacomprehension Assessments: The Anchoring and Adjustment Model vs. the Standards Model

    ERIC Educational Resources Information Center

    Kwon, Heekyung

    2011-01-01

    The objective of this study is to provide a systematic account of three typical phenomena surrounding absolute accuracy of metacomprehension assessments: (1) the absolute accuracy of predictions is typically quite low; (2) there exist individual differences in absolute accuracy of predictions as a function of reading skill; and (3) postdictions…

  16. ASSESSING THE ACCURACY OF NATIONAL LAND COVER DATASET AREA ESTIMATES AT MULTIPLE SPATIAL EXTENTS

    EPA Science Inventory

    Site specific accuracy assessments provide fine-scale evaluation of the thematic accuracy of land use/land cover (LULC) datasets; however, they provide little insight into LULC accuracy across varying spatial extents. Additionally, LULC data are typically used to describe lands...

  17. Comparative Accuracy Assessment of Global Land Cover Datasets Using Existing Reference Data

    NASA Astrophysics Data System (ADS)

    Tsendbazar, N. E.; de Bruin, S.; Mora, B.; Herold, M.

    2014-12-01

    Land cover is a key variable to monitor the impact of human and natural processes on the biosphere. As one of the Essential Climate Variables, land cover observations are used for climate models and several other applications. Remote sensing technologies have enabled the generation of several global land cover (GLC) products that are based on different data sources and methods (e.g. legends). Moreover, the reported map accuracies result from varying validation strategies. Such differences make the comparison of the GLC products challenging and create confusion on selecting suitable datasets for different applications. This study aims to conduct a comparative accuracy assessment of GLC datasets (LC-CCI 2005, MODIS 2005, and Globcover 2005) using the Globcover 2005 reference data, which can represent the thematic differences of these GLC maps. This GLC reference dataset provides LCCS classifier information for 3 main land cover types for each sample plot. The LCCS classifier information was translated according to the legends of the GLC maps analysed. The preliminary analysis showed some challenges in LCCS classifier translation, arising from missing classifier information, differences in class definitions between the legends, and the absence of class proportions for the main land cover types. To overcome these issues, we consolidated the entire reference dataset (i.e., 3,857 samples distributed at the global scale). The GLC maps and the reference dataset were then harmonized into 13 general classes to perform the comparative accuracy assessments. To help users select suitable GLC dataset(s) for their application, we conducted the map accuracy assessments considering different users' perspectives: climate modelling, bio-diversity assessments, agriculture monitoring, and map producers. This communication presents the method and results of this study and provides a set of recommendations to GLC map producers and users, with the aim of facilitating the use of GLC maps.

  18. Accuracy of Nurse-Performed Lung Ultrasound in Patients With Acute Dyspnea: A Prospective Observational Study.

    PubMed

    Mumoli, Nicola; Vitale, Josè; Giorgi-Pierfranceschi, Matteo; Cresci, Alessandra; Cei, Marco; Basile, Valentina; Brondi, Barbara; Russo, Elisa; Giuntini, Lucia; Masi, Lorenzo; Cocciolo, Massimo; Dentali, Francesco

    2016-03-01

    In clinical practice, lung ultrasound (LUS) is becoming an easy and reliable noninvasive tool for the evaluation of dyspnea. The aim of this study was to assess the accuracy of nurse-performed LUS, in particular, in the diagnosis of acute cardiogenic pulmonary congestion. We prospectively evaluated all the consecutive patients admitted for dyspnea in our Medicine Department between April and July 2014. At admission, serum brain natriuretic peptide (BNP) levels were measured and LUS was performed by trained nurses blinded to clinical and laboratory data. The accuracy of nurse-performed LUS alone and combined with BNP for the diagnosis of acute cardiogenic dyspnea was calculated. Two hundred twenty-six patients (41.6% men, mean age 78.7 ± 12.7 years) were included in the study. Nurse-performed LUS alone had a sensitivity of 95.3% (95% CI: 92.6-98.1%), a specificity of 88.2% (95% CI: 84.0-92.4%), a positive predictive value of 87.9% (95% CI: 83.7-92.2%) and a negative predictive value of 95.5% (95% CI: 92.7-98.2%). The combination of nurse-performed LUS with BNP level (cut-off 400 pg/mL) resulted in a higher sensitivity (98.9%, 95% CI: 97.4-100%), negative predictive value (98.8%, 95% CI: 97.2-100%), and corresponding negative likelihood ratio (0.01, 95% CI: 0.0, 0.07). Nurse-performed LUS had a good accuracy in the diagnosis of acute cardiogenic dyspnea. Use of this technique in combination with BNP seems to be useful in ruling out cardiogenic dyspnea. Other studies are warranted to confirm our preliminary findings and to establish the role of this tool in other settings. PMID:26945396
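
    The figures reported above are the standard 2x2-table metrics. A sketch with cell counts chosen to be consistent with the reported percentages and the 226-patient total (a reconstruction for illustration, not the published table):

```python
# Hedged sketch: diagnostic-accuracy metrics from a 2x2 table.
# The counts tp/fp/fn/tn are reconstructed, not the study's own table.
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),              # positive predictive value
        "npv": tn / (tn + fn),              # negative predictive value
        "lr_neg": (1 - sens) / spec,        # negative likelihood ratio
    }

m = diagnostic_metrics(tp=102, fp=14, fn=5, tn=105)  # 226 patients total
for name, value in m.items():
    print(f"{name}: {value:.3f}")
```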

  20. Bilingual Language Assessment: A Meta-Analysis of Diagnostic Accuracy

    ERIC Educational Resources Information Center

    Dollaghan, Christine A.; Horner, Elizabeth A.

    2011-01-01

    Purpose: To describe quality indicators for appraising studies of diagnostic accuracy and to report a meta-analysis of measures for diagnosing language impairment (LI) in bilingual Spanish-English U.S. children. Method: The authors searched electronically and by hand to locate peer-reviewed English-language publications meeting inclusion criteria;…

  1. Assessing genomic selection prediction accuracy in a dynamic barley breeding

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection is a method to improve quantitative traits in crops and livestock by estimating breeding values of selection candidates using phenotype and genome-wide marker data sets. Prediction accuracy has been evaluated through simulation and cross-validation, however validation based on prog...
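
    The cross-validation evaluation mentioned above can be sketched with ridge regression (an RR-BLUP-style estimator) on simulated marker data; the data, fold count, and regularization value are all assumptions for illustration:

```python
import numpy as np

# Hedged sketch: cross-validated genomic prediction accuracy, the
# correlation between predicted and observed phenotypes. All data
# are simulated; lam is an assumed ridge penalty.
rng = np.random.default_rng(1)
n, p = 200, 500
X = rng.choice([0.0, 1.0, 2.0], size=(n, p))    # marker genotypes
beta = rng.normal(0, 0.1, size=p)               # simulated marker effects
y = X @ beta + rng.normal(0, 1.0, size=n)       # phenotypes

lam = 10.0
accs = []
for test_idx in np.array_split(rng.permutation(n), 5):   # 5-fold CV
    train_idx = np.setdiff1d(np.arange(n), test_idx)
    Xt, yt = X[train_idx], y[train_idx]
    # Ridge solution: (X'X + lam I)^-1 X'y
    b = np.linalg.solve(Xt.T @ Xt + lam * np.eye(p), Xt.T @ yt)
    pred = X[test_idx] @ b
    accs.append(np.corrcoef(pred, y[test_idx])[0, 1])
print(f"mean CV prediction accuracy: {np.mean(accs):.2f}")
```

As the abstract notes, cross-validation on existing lines is only a proxy; validating against realized progeny performance is the stricter test.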

  2. Sleep restriction and serving accuracy in performance tennis players, and effects of caffeine.

    PubMed

    Reyner, L A; Horne, J A

    2013-08-15

    Athletes often lose sleep on the night before a competition. Whilst it is unlikely that sleep loss will impair sports relying mostly on strength and endurance, little is known about potential effects on sports involving psychomotor performance necessitating judgement and accuracy rather than speed, as in tennis for example, and where caffeine is 'permitted'. Two studies were undertaken on the effect of sleep restricted to 5 h (a 33% reduction) versus normal sleep on serving accuracy in semi-professional tennis players. Testing (14:00 h-16:00 h) comprised 40 serves into a (1.8 m × 1.1 m) 'service box' diagonally, over the net. Study 1 (8 m; 8 f) was within-Ss, counterbalanced (normal versus sleep restriction). Study 2 (6 m; 6 f; different Ss) comprised three conditions (Latin square), identical to Study 1, except for an extra sleep restriction condition with 80 mg caffeine vs placebo in a sugar-free drink, given (double blind) 30 min before testing. Both studies showed significant impairments to serving accuracy after sleep restriction. Caffeine at this dose had no beneficial effect. Study 1 also assessed gender differences, with women significantly poorer under all conditions, and non-significant indications that women were more impaired by sleep restriction (also seen in Study 2). We conclude that adequate sleep is essential for best performance of this type of skill in tennis players and that caffeine is no substitute for 'lost sleep'. PMID:23916998

  3. The analysis accuracy assessment of CORINE land cover in the Iberian coast

    NASA Astrophysics Data System (ADS)

    Grullón, Yraida R.; Alhaddad, Bahaaeddin; Cladera, Josep R.

    2009-09-01

    Corine land cover 2000 (CLC2000) is a project jointly managed by the Joint Research Centre (JRC) and the European Environment Agency (EEA). Its aim is to update the Corine land cover database in Europe for the year 2000. Landsat-7 Enhanced Thematic Mapper (ETM) satellite images, acquired within the framework of the Image2000 project, were used for the update. Knowledge of land status through CORINE Land Cover mapping is of great importance for studying the interaction of land cover and land use categories at the European scale. This paper presents the accuracy assessment methodology designed and implemented to validate the Iberian coast CORINE Land Cover 2000 cartography. It implements a new methodological concept for land cover data production, object-based classification with automatic generalization, to assess the thematic accuracy of CLC2000 by means of an independent data source: the land cover database is compared with reference data derived from visual interpretation of high-resolution satellite imagery for sample areas. In our case study, the existing object-based classifications are supported with digital maps and attribute databases. From the quality tests performed, we computed the overall accuracy and the Kappa coefficient. We focus on the development of a methodology based on classification and generalization analysis for built-up areas. The study comprises these fundamental steps: extract artificial areas from land use classifications based on Landsat and SPOT images; manually interpret high-resolution multispectral images; determine the homogeneity of artificial areas by a generalization process; and apply overall accuracy, Kappa coefficient, and spatial grid (fishnet) tests for quality control. Finally, the paper illustrates the accuracy of the CORINE dataset based on these steps.
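
    The two quality measures named above, overall accuracy and the Kappa coefficient, are computed from a confusion matrix of classified versus reference labels; a sketch with a hypothetical matrix:

```python
import numpy as np

# Hedged sketch: overall accuracy and Cohen's Kappa from a
# confusion matrix (rows = classified, cols = reference).
# The matrix is hypothetical, not the study's data.
cm = np.array([
    [120,  10,   5],
    [  8, 140,  12],
    [  4,   6,  95],
])

n = cm.sum()
observed = np.trace(cm) / n                        # overall accuracy
# Chance agreement expected from the row/column marginals.
expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
kappa = (observed - expected) / (1 - expected)     # Cohen's Kappa
print(f"overall accuracy = {observed:.3f}, kappa = {kappa:.3f}")
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside raw overall accuracy in map validation.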

  4. An assessment of template-guided implant surgery in terms of accuracy and related factors

    PubMed Central

    Lee, Jee-Ho; Park, Ji-Man; Kim, Soung-Min; Kim, Myung-Joo; Lee, Jong-Ho

    2013-01-01

    PURPOSE Template-guided implant therapy has developed hand-in-hand with computed tomography (CT) to improve the accuracy of implant surgery and future prosthodontic treatment. In our present study, the accuracy and causative factors for computer-assisted implant surgery were assessed to further validate the stable clinical application of this technique. MATERIALS AND METHODS A total of 102 implants in 48 patients were included in this study. Implant surgery was performed with a stereolithographic template. Pre- and post-operative CTs were used to compare the planned and placed implants. Accuracy and related factors were statistically analyzed with the Spearman correlation method and the linear mixed model. Differences were considered to be statistically significant at P≤.05. RESULTS The mean errors of computer-assisted implant surgery were 1.09 mm at the coronal center, 1.56 mm at the apical center, and the axis deviation was 3.80°. The coronal and apical errors of the implants were found to be strongly correlated. The errors developed at the coronal center were magnified at the apical center by the fixture length. Anterior edentulous areas and longer fixtures reduced the accuracy of the implant template. CONCLUSION The control of errors at the coronal center and stabilization of the anterior part of the template are needed for safe implant surgery and future prosthodontic treatment. PMID:24353883
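    The reported magnification of coronal errors at the apex has a simple geometric reading: an angular deviation of the implant axis displaces the apex laterally in proportion to the fixture length. A rough sketch under that simplification (the function and the 10 mm fixture length are ours, for illustration; this is not the study's statistical model):

```python
import math

def apical_error(coronal_error_mm, axis_deviation_deg, fixture_length_mm):
    """Crude upper bound: coronal offset plus the lateral displacement
    that the angular deviation produces over the fixture length."""
    return coronal_error_mm + fixture_length_mm * math.sin(math.radians(axis_deviation_deg))

# Using the study's mean coronal error (1.09 mm) and axis deviation (3.80 deg);
# compare with the reported 1.56 mm mean apical error.
print(round(apical_error(1.09, 3.80, 10.0), 2))
```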

  5. Predictive accuracy of the Miller assessment for preschoolers in children with prenatal drug exposure.

    PubMed

    Fulks, Mary-Ann L; Harris, Susan R

    2005-01-01

    The Miller Assessment for Preschoolers (MAP) is a standardized test purported to identify preschool-aged children at risk for later learning difficulties. We evaluated the predictive validity of the MAP Total Score, relative to later cognitive performance and across a range of possible cut-points, in 37 preschool-aged children with prenatal drug exposure. Criterion measures were the Wechsler Preschool & Primary Scale of Intelligence-Revised (WPPSI-R), Test of Early Reading Ability-2, Peabody Picture Vocabulary Test-Revised, and Developmental Test of Visual Motor Integration. The highest predictive accuracy was demonstrated when the WPPSI-R was the criterion measure. The 14th percentile cutoff point demonstrated the highest predictive accuracy across all measures.

  6. Effects of Performance Feedback on Typing Speed and Accuracy

    ERIC Educational Resources Information Center

    Tittelbach, Danielle; Fields, Lanny; Alvero, Alicia M.

    2008-01-01

    Performance feedback is one of the most widely used tools in organizational settings. To date, little research has been conducted focusing on comparisons of the differential effects of the sources, frequency, or media used for feedback on both the quality and quantity of performance. This research investigated the effects of different feedback…

  7. Rectal cancer staging: Multidetector-row computed tomography diagnostic accuracy in assessment of mesorectal fascia invasion

    PubMed Central

    Ippolito, Davide; Drago, Silvia Girolama; Franzesi, Cammillo Talei; Fior, Davide; Sironi, Sandro

    2016-01-01

    AIM: To assess the diagnostic accuracy of multidetector-row computed tomography (MDCT), as compared with conventional magnetic resonance imaging (MRI), in identifying mesorectal fascia (MRF) invasion in rectal cancer patients. METHODS: Ninety-one patients with biopsy-proven rectal adenocarcinoma referred for thoracic and abdominal CT staging were enrolled in this study. The contrast-enhanced MDCT scans were performed on a 256-row scanner (iCT, Philips) with the following acquisition parameters: tube voltage 120 kV, tube current 150-300 mAs. Imaging data were reviewed as axial and as multiplanar reconstruction (MPR) images along the rectal tumor axis. The MRI study, performed at 1.5 T with a dedicated phased-array multicoil, included multiplanar T2 and axial T1 sequences and diffusion-weighted images (DWI). Axial and MPR CT images were independently compared to MRI, and MRF involvement was determined. The diagnostic accuracy of both modalities was compared and statistically analyzed. RESULTS: According to MRI, the MRF was involved in 51 patients and not involved in 40 patients. DWI allowed the tumor to be recognized as a focal mass with high signal intensity on high b-value images, compared with the signal of the normal adjacent rectal wall or with the lower tissue signal intensity background. The number of patients correctly staged by the native axial CT images was 71 out of 91 (41 with involved MRF; 30 with uninvolved MRF), while 80 patients were correctly staged using MPR (45 with involved MRF; 35 with uninvolved MRF). Local tumor staging by MDCT agreed with MRI: axial CT images yielded sensitivity of 80.4%, specificity of 75%, positive predictive value (PPV) of 80.4%, negative predictive value (NPV) of 75%, and accuracy of 78%; with MPR, sensitivity and specificity increased to 88% and 87.5%, with a PPV of 90%, an NPV of 85.36%, and an accuracy of 88%. MPR images showed higher diagnostic accuracy for MRF involvement than native axial images.
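    The sensitivity/specificity figures above follow from standard 2 × 2 counts, which can be reconstructed from the axial results (51 involved, 41 detected; 40 uninvolved, 30 correctly cleared). A minimal sketch:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-test metrics."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Counts reconstructed from the axial-CT results reported above
m = diagnostic_metrics(tp=41, fp=10, fn=10, tn=30)
print({k: round(v, 3) for k, v in m.items()})
```

    Running this reproduces the reported axial figures (sensitivity 80.4%, specificity 75%, accuracy 78%).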

  8. Accuracy assessment of modeling architectural structures and details using terrestrial laser scanning

    NASA Astrophysics Data System (ADS)

    Kedzierski, M.; Walczykowski, P.; Orych, A.; Czarnecka, P.

    2015-08-01

    One of the most important aspects when performing architectural documentation of cultural heritage structures is the accuracy of both the data and the products which are generated from these data: documentation in the form of 3D models or vector drawings. The paper describes an assessment of the accuracy of modelling data acquired using a terrestrial phase scanner in relation to the density of a point cloud representing the surface of different types of construction materials typical for cultural heritage structures. This analysis includes the impact of the scanning geometry: the incidence angle of the laser beam and the scanning distance. For the purposes of this research, a test field consisting of samples of different types of construction materials (brick, wood, plastic, plaster, a ceramic tile, sheet metal) was built. The study involved conducting measurements at different angles and from a range of distances for chosen scanning densities. Data, acquired in the form of point clouds, were then filtered and modelled. An accuracy assessment of the 3D model was conducted by fitting it with the point cloud. The reflection intensity of each type of material was also analyzed, to determine which construction materials have the highest and which the lowest reflection coefficients, and how these values change for different scanning parameters. Additionally, measurements were taken of a fragment of a building in order to compare the results obtained in laboratory conditions with those taken in field conditions.

  9. Assessing the accuracy and reproducibility of modality independent elastography in a murine model of breast cancer

    PubMed Central

    Weis, Jared A.; Flint, Katelyn M.; Sanchez, Violeta; Yankeelov, Thomas E.; Miga, Michael I.

    2015-01-01

    Cancer progression has been linked to mechanics. Therefore, there has been recent interest in developing noninvasive imaging tools for cancer assessment that are sensitive to changes in tissue mechanical properties. We have developed one such method, modality independent elastography (MIE), that estimates the relative elastic properties of tissue by fitting anatomical image volumes acquired before and after the application of compression to biomechanical models. The aim of this study was to assess the accuracy and reproducibility of the method using phantoms and a murine breast cancer model. Magnetic resonance imaging data were acquired, and the MIE method was used to estimate relative volumetric stiffness. Accuracy was assessed using phantom data by comparing to gold-standard mechanical testing of elasticity ratios. Validation error was <12%. Reproducibility analysis was performed on animal data, and within-subject coefficients of variation ranged from 2 to 13% at the bulk level and 32% at the voxel level. To our knowledge, this is the first study to assess the reproducibility of an elasticity imaging metric in a preclinical cancer model. Our results suggest that the MIE method can reproducibly generate accurate estimates of the relative mechanical stiffness and provide guidance on the degree of change needed in order to declare biological changes rather than experimental error in future therapeutic studies. PMID:26158120
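    A within-subject coefficient of variation of the kind used in the reproducibility analysis can be estimated as the root-mean-square of each subject's per-scan CV. A sketch with hypothetical repeated stiffness ratios (not the study's data):

```python
import numpy as np

# Hypothetical repeated stiffness ratios (3 scans each) for 4 animals
repeats = np.array([[1.8, 1.9, 1.7],
                    [2.4, 2.5, 2.6],
                    [1.2, 1.1, 1.2],
                    [3.0, 2.8, 2.9]])

# Per-subject CV (sample SD over mean), combined as a root-mean-square
cv_i = repeats.std(axis=1, ddof=1) / repeats.mean(axis=1)
wcv = np.sqrt(np.mean(cv_i**2))
print(f"within-subject CV = {100 * wcv:.1f}%")
```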

  10. Gender differences in structured risk assessment: comparing the accuracy of five instruments.

    PubMed

    Coid, Jeremy; Yang, Min; Ullrich, Simone; Zhang, Tianqiang; Sizmur, Steve; Roberts, Colin; Farrington, David P; Rogers, Robert D

    2009-04-01

    Structured risk assessment should guide clinical risk management, but it is uncertain which instrument has the highest predictive accuracy among men and women. In the present study, the authors compared the Psychopathy Checklist-Revised (PCL-R; R. D. Hare, 1991, 2003); the Historical, Clinical, Risk Management-20 (HCR-20; C. D. Webster, K. S. Douglas, D. Eaves, & S. D. Hart, 1997); the Risk Matrix 2000-Violence (RM2000[V]; D. Thornton et al., 2003); the Violence Risk Appraisal Guide (VRAG; V. L. Quinsey, G. T. Harris, M. E. Rice, & C. A. Cormier, 1998); the Offender Group Reconviction Scale (OGRS; J. B. Copas & P. Marshall, 1998; R. Taylor, 1999); and the total number of previous convictions among prisoners, prospectively assessed prerelease. The authors compared predischarge measures with subsequent offending and ranked the instruments using multivariate regression. Most instruments demonstrated significant but moderate predictive ability. The OGRS ranked highest for violence among men, and the PCL-R and HCR-20 H subscale ranked highest for violence among women. The OGRS and total previous acquisitive convictions demonstrated the greatest accuracy in predicting acquisitive offending among men and women. Actuarial instruments requiring no training to administer performed as well as personality assessment and structured risk assessment and were superior among men for violence.

  11. Combining accuracy assessment of land-cover maps with environmental monitoring programs

    USGS Publications Warehouse

    Stehman, S.V.; Czaplewski, R.L.; Nusser, S.M.; Yang, L.; Zhu, Z.

    2000-01-01

    A scientifically valid accuracy assessment of a large-area, land-cover map is expensive. Environmental monitoring programs offer a potential source of data to partially defray the cost of accuracy assessment while still maintaining the statistical validity. In this article, three general strategies for combining accuracy assessment and environmental monitoring protocols are described. These strategies range from a fully integrated accuracy assessment and environmental monitoring protocol, to one in which the protocols operate nearly independently. For all three strategies, features critical to using monitoring data for accuracy assessment include compatibility of the land-cover classification schemes, precisely co-registered sample data, and spatial and temporal compatibility of the map and reference data. Two monitoring programs, the National Resources Inventory (NRI) and the Forest Inventory and Monitoring (FIM), are used to illustrate important features for implementing a combined protocol.

  12. Tailoring Inlet Flow to Enable High Accuracy Compressor Performance Measurements

    NASA Astrophysics Data System (ADS)

    Brossman, John R.; Smith, Natalie R.; Talalayev, Anton; Key, Nicole L.

    2011-12-01

    To accomplish the research goals of capturing the effects of blade row interactions on compressor performance, small changes in performance must be measurable. This also requires axisymmetric flow so that measuring one passage accurately captures the phenomena occurring in all passages. Thus, uniform inlet flow is a necessity. The original front-driven compressor had non-uniform temperature at the inlet. Additional challenges in controlling shaft speed to within tight tolerances were associated with the use of a viscous fluid coupling. Thus, a new electric motor with variable-frequency-drive speed control was implemented. To address the issues with the inlet flow, the compressor is now driven from the rear, resulting in improved inlet flow uniformity. This paper presents the design choices of the new layout in addition to preliminary performance data for the compressor and an uncertainty analysis.

  13. Simply Performance Assessment

    ERIC Educational Resources Information Center

    McLaughlin, Cheryl A.; McLaughlin, Felecia C.; Pringle, Rose M.

    2013-01-01

    This article presents the experiences of Miss Felecia McLaughlin, a fourth-grade teacher from the island of Jamaica who used the model proposed by Bass et al. (2009) to assess conceptual understanding of four of the six types of simple machines while encouraging collaboration through the creation of learning teams. Students had an opportunity to…

  14. Assessing the Accuracy of Alaska National Hydrography Data for Mapping and Science

    NASA Astrophysics Data System (ADS)

    Arundel, S. T.; Yamamoto, K. H.; Mantey, K.; Vinyard-Houx, J.; Miller-Corbett, C. D.

    2012-12-01

    In July, 2011, the National Geospatial Program embarked on a large-scale Alaska Topographic Mapping Initiative. Maps will be published through the USGS US Topo program. Mapping of the state requires an understanding of the spatial quality of the National Hydrography Dataset (NHD), which is the hydrographic source for the US Topo. The NHD in Alaska was originally produced from topographic maps at 1:63,360 scale. It is critical to determine whether the NHD is accurate enough to be represented at the targeted map scale of the US Topo (1:25,000). Concerns are the spatial accuracy of data and the density of the stream network. Unsuitably low accuracy can be a result of the lower positional accuracy standards required for the original 1:63,360 scale mapping, temporal changes in water features, or any combination of these factors. Insufficient positional accuracy results in poor vertical integration with data layers of higher positional accuracy. Poor integration is readily apparent on the US Topo, particularly relative to current imagery and elevation data. In Alaska, current IFSAR-derived digital terrain models meet positional accuracy requirements for 1:24,000-scale mapping. Initial visual assessments indicate a wide range in the quality of fit between features in NHD and the IFSAR. However, no statistical analysis had been performed to quantify NHD feature accuracy. Determining the absolute accuracy is cost prohibitive, because of the need to collect independent, well-defined test points for such analysis; however, quantitative analysis of relative positional error is a feasible alternative. The purpose of this study is to determine the baseline accuracy of Alaska NHD pertinent to US Topo production, and to recommend reasonable guidelines and costs for NHD improvement and updates. A second goal is to detect error trends that might help identify areas or features where data improvements are most needed. There are four primary objectives of the study: 1. Choose study

  15. Accuracy assessment of a mobile terrestrial lidar survey at Padre Island National Seashore

    USGS Publications Warehouse

    Lim, Samsung; Thatcher, Cindy A.; Brock, John C.; Kimbrow, Dustin R.; Danielson, Jeffrey J.; Reynolds, B.J.

    2013-01-01

    The higher point density and mobility of terrestrial laser scanning (light detection and ranging (lidar)) is desired when extremely detailed elevation data are needed for mapping vertically orientated complex features such as levees, dunes, and cliffs, or when highly accurate data are needed for monitoring geomorphic changes. Mobile terrestrial lidar scanners have the capability for rapid data collection on a larger spatial scale compared with tripod-based terrestrial lidar, but few studies have examined the accuracy of this relatively new mapping technology. For this reason, we conducted a field test at Padre Island National Seashore of a mobile lidar scanner mounted on a sport utility vehicle and integrated with a position and orientation system. The purpose of the study was to assess the vertical and horizontal accuracy of data collected by the mobile terrestrial lidar system, which is georeferenced to the Universal Transverse Mercator coordinate system and the North American Vertical Datum of 1988. To accomplish the study objectives, independent elevation data were collected by conducting a high-accuracy global positioning system survey to establish the coordinates and elevations of 12 targets spaced throughout the 12 km transect. These independent ground control data were compared to the lidar scanner-derived elevations to quantify the accuracy of the mobile lidar system. The performance of the mobile lidar system was also tested at various vehicle speeds and scan density settings (e.g. field of view and linear point spacing) to estimate the optimal parameters for desired point density. After adjustment of the lever arm parameters, the final point cloud accuracy was 0.060 m (east), 0.095 m (north), and 0.053 m (height). The very high density of the resulting point cloud was sufficient to map fine-scale topographic features, such as the complex shape of the sand dunes.
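    Accuracy figures like the final 0.060/0.095/0.053 m values are typically root-mean-square errors of the lidar-minus-control residuals at the surveyed targets. A sketch with hypothetical residuals (not the Padre Island data):

```python
import numpy as np

# Hypothetical control-point residuals (lidar minus GPS control), in metres
residuals = {
    "east":   np.array([0.05, -0.07, 0.06, -0.04, 0.08]),
    "north":  np.array([0.10, -0.09, 0.08, -0.11, 0.09]),
    "height": np.array([0.04, -0.06, 0.05, -0.05, 0.06]),
}

# Per-axis RMSE: square root of the mean squared residual
rmse = {axis: float(np.sqrt(np.mean(r**2))) for axis, r in residuals.items()}
for axis, value in rmse.items():
    print(f"{axis}: RMSE = {value:.3f} m")
```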

  16. A multilaboratory comparison of calibration accuracy and the performance of external references in analytical ultracentrifugation.

    PubMed

    Zhao, Huaying; Ghirlando, Rodolfo; Alfonso, Carlos; Arisaka, Fumio; Attali, Ilan; Bain, David L; Bakhtina, Marina M; Becker, Donald F; Bedwell, Gregory J; Bekdemir, Ahmet; Besong, Tabot M D; Birck, Catherine; Brautigam, Chad A; Brennerman, William; Byron, Olwyn; Bzowska, Agnieszka; Chaires, Jonathan B; Chaton, Catherine T; Cölfen, Helmut; Connaghan, Keith D; Crowley, Kimberly A; Curth, Ute; Daviter, Tina; Dean, William L; Díez, Ana I; Ebel, Christine; Eckert, Debra M; Eisele, Leslie E; Eisenstein, Edward; England, Patrick; Escalante, Carlos; Fagan, Jeffrey A; Fairman, Robert; Finn, Ron M; Fischle, Wolfgang; de la Torre, José García; Gor, Jayesh; Gustafsson, Henning; Hall, Damien; Harding, Stephen E; Cifre, José G Hernández; Herr, Andrew B; Howell, Elizabeth E; Isaac, Richard S; Jao, Shu-Chuan; Jose, Davis; Kim, Soon-Jong; Kokona, Bashkim; Kornblatt, Jack A; Kosek, Dalibor; Krayukhina, Elena; Krzizike, Daniel; Kusznir, Eric A; Kwon, Hyewon; Larson, Adam; Laue, Thomas M; Le Roy, Aline; Leech, Andrew P; Lilie, Hauke; Luger, Karolin; Luque-Ortega, Juan R; Ma, Jia; May, Carrie A; Maynard, Ernest L; Modrak-Wojcik, Anna; Mok, Yee-Foong; Mücke, Norbert; Nagel-Steger, Luitgard; Narlikar, Geeta J; Noda, Masanori; Nourse, Amanda; Obsil, Tomas; Park, Chad K; Park, Jin-Ku; Pawelek, Peter D; Perdue, Erby E; Perkins, Stephen J; Perugini, Matthew A; Peterson, Craig L; Peverelli, Martin G; Piszczek, Grzegorz; Prag, Gali; Prevelige, Peter E; Raynal, Bertrand D E; Rezabkova, Lenka; Richter, Klaus; Ringel, Alison E; Rosenberg, Rose; Rowe, Arthur J; Rufer, Arne C; Scott, David J; Seravalli, Javier G; Solovyova, Alexandra S; Song, Renjie; Staunton, David; Stoddard, Caitlin; Stott, Katherine; Strauss, Holger M; Streicher, Werner W; Sumida, John P; Swygert, Sarah G; Szczepanowski, Roman H; Tessmer, Ingrid; Toth, Ronald T; Tripathy, Ashutosh; Uchiyama, Susumu; Uebel, Stephan F W; Unzai, Satoru; Gruber, Anna Vitlin; von Hippel, Peter H; Wandrey, Christine; Wang, Szu-Huan; Weitzel, 
Steven E; Wielgus-Kutrowska, Beata; Wolberger, Cynthia; Wolff, Martin; Wright, Edward; Wu, Yu-Sung; Wubben, Jacinta M; Schuck, Peter

    2015-01-01

    Analytical ultracentrifugation (AUC) is a first principles based method to determine absolute sedimentation coefficients and buoyant molar masses of macromolecules and their complexes, reporting on their size and shape in free solution. The purpose of this multi-laboratory study was to establish the precision and accuracy of basic data dimensions in AUC and validate previously proposed calibration techniques. Three kits of AUC cell assemblies containing radial and temperature calibration tools and a bovine serum albumin (BSA) reference sample were shared among 67 laboratories, generating 129 comprehensive data sets. These allowed for an assessment of many parameters of instrument performance, including accuracy of the reported scan time after the start of centrifugation, the accuracy of the temperature calibration, and the accuracy of the radial magnification. The range of sedimentation coefficients obtained for BSA monomer in different instruments and using different optical systems was from 3.655 S to 4.949 S, with a mean and standard deviation of (4.304 ± 0.188) S (4.4%). After the combined application of correction factors derived from the external calibration references for elapsed time, scan velocity, temperature, and radial magnification, the range of s-values was reduced 7-fold with a mean of 4.325 S and a 6-fold reduced standard deviation of ± 0.030 S (0.7%). In addition, the large data set provided an opportunity to determine the instrument-to-instrument variation of the absolute radial positions reported in the scan files, the precision of photometric or refractometric signal magnitudes, and the precision of the calculated apparent molar mass of BSA monomer and the fraction of BSA dimers. These results highlight the necessity and effectiveness of independent calibration of basic AUC data dimensions for reliable quantitative studies.
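    The effect of combining multiplicative correction factors (elapsed time, scan velocity, temperature, radial magnification) on the spread of s-values can be sketched as follows; the raw s-values and factors are hypothetical, not measurements from the study:

```python
import numpy as np

# Hypothetical raw s-values (Svedberg) for BSA monomer from several instruments
s_raw = np.array([3.9, 4.6, 4.1, 4.8, 4.2])

# Hypothetical per-instrument correction factors from external calibration references
f_time   = np.array([1.05, 0.97, 1.02, 0.93, 1.01])
f_temp   = np.array([1.03, 0.98, 1.01, 0.97, 1.02])
f_radial = np.array([1.02, 0.99, 1.00, 0.98, 1.01])

# Corrections combine multiplicatively; the spread (CV) across instruments shrinks
s_corr = s_raw * f_time * f_temp * f_radial
for label, s in (("raw", s_raw), ("corrected", s_corr)):
    print(f"{label}: mean = {s.mean():.3f} S, CV = {100 * s.std(ddof=1) / s.mean():.1f}%")
```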

  18. A Multilaboratory Comparison of Calibration Accuracy and the Performance of External References in Analytical Ultracentrifugation

    PubMed Central

    Zhao, Huaying; Ghirlando, Rodolfo; Alfonso, Carlos; Arisaka, Fumio; Attali, Ilan; Bain, David L.; Bakhtina, Marina M.; Becker, Donald F.; Bedwell, Gregory J.; Bekdemir, Ahmet; Besong, Tabot M. D.; Birck, Catherine; Brautigam, Chad A.; Brennerman, William; Byron, Olwyn; Bzowska, Agnieszka; Chaires, Jonathan B.; Chaton, Catherine T.; Cölfen, Helmut; Connaghan, Keith D.; Crowley, Kimberly A.; Curth, Ute; Daviter, Tina; Dean, William L.; Díez, Ana I.; Ebel, Christine; Eckert, Debra M.; Eisele, Leslie E.; Eisenstein, Edward; England, Patrick; Escalante, Carlos; Fagan, Jeffrey A.; Fairman, Robert; Finn, Ron M.; Fischle, Wolfgang; de la Torre, José García; Gor, Jayesh; Gustafsson, Henning; Hall, Damien; Harding, Stephen E.; Cifre, José G. Hernández; Herr, Andrew B.; Howell, Elizabeth E.; Isaac, Richard S.; Jao, Shu-Chuan; Jose, Davis; Kim, Soon-Jong; Kokona, Bashkim; Kornblatt, Jack A.; Kosek, Dalibor; Krayukhina, Elena; Krzizike, Daniel; Kusznir, Eric A.; Kwon, Hyewon; Larson, Adam; Laue, Thomas M.; Le Roy, Aline; Leech, Andrew P.; Lilie, Hauke; Luger, Karolin; Luque-Ortega, Juan R.; Ma, Jia; May, Carrie A.; Maynard, Ernest L.; Modrak-Wojcik, Anna; Mok, Yee-Foong; Mücke, Norbert; Nagel-Steger, Luitgard; Narlikar, Geeta J.; Noda, Masanori; Nourse, Amanda; Obsil, Tomas; Park, Chad K.; Park, Jin-Ku; Pawelek, Peter D.; Perdue, Erby E.; Perkins, Stephen J.; Perugini, Matthew A.; Peterson, Craig L.; Peverelli, Martin G.; Piszczek, Grzegorz; Prag, Gali; Prevelige, Peter E.; Raynal, Bertrand D. E.; Rezabkova, Lenka; Richter, Klaus; Ringel, Alison E.; Rosenberg, Rose; Rowe, Arthur J.; Rufer, Arne C.; Scott, David J.; Seravalli, Javier G.; Solovyova, Alexandra S.; Song, Renjie; Staunton, David; Stoddard, Caitlin; Stott, Katherine; Strauss, Holger M.; Streicher, Werner W.; Sumida, John P.; Swygert, Sarah G.; Szczepanowski, Roman H.; Tessmer, Ingrid; Toth, Ronald T.; Tripathy, Ashutosh; Uchiyama, Susumu; Uebel, Stephan F. 
W.; Unzai, Satoru; Gruber, Anna Vitlin; von Hippel, Peter H.; Wandrey, Christine; Wang, Szu-Huan; Weitzel, Steven E.; Wielgus-Kutrowska, Beata; Wolberger, Cynthia; Wolff, Martin; Wright, Edward; Wu, Yu-Sung; Wubben, Jacinta M.; Schuck, Peter

    2015-01-01

    Analytical ultracentrifugation (AUC) is a first principles based method to determine absolute sedimentation coefficients and buoyant molar masses of macromolecules and their complexes, reporting on their size and shape in free solution. The purpose of this multi-laboratory study was to establish the precision and accuracy of basic data dimensions in AUC and validate previously proposed calibration techniques. Three kits of AUC cell assemblies containing radial and temperature calibration tools and a bovine serum albumin (BSA) reference sample were shared among 67 laboratories, generating 129 comprehensive data sets. These allowed for an assessment of many parameters of instrument performance, including accuracy of the reported scan time after the start of centrifugation, the accuracy of the temperature calibration, and the accuracy of the radial magnification. The range of sedimentation coefficients obtained for BSA monomer in different instruments and using different optical systems was from 3.655 S to 4.949 S, with a mean and standard deviation of (4.304 ± 0.188) S (4.4%). After the combined application of correction factors derived from the external calibration references for elapsed time, scan velocity, temperature, and radial magnification, the range of s-values was reduced 7-fold with a mean of 4.325 S and a 6-fold reduced standard deviation of ± 0.030 S (0.7%). In addition, the large data set provided an opportunity to determine the instrument-to-instrument variation of the absolute radial positions reported in the scan files, the precision of photometric or refractometric signal magnitudes, and the precision of the calculated apparent molar mass of BSA monomer and the fraction of BSA dimers. These results highlight the necessity and effectiveness of independent calibration of basic AUC data dimensions for reliable quantitative studies. PMID:25997164

  19. Assessment of the Geodetic and Color Accuracy of Multi-Pass Airborne/Mobile Lidar Data

    NASA Astrophysics Data System (ADS)

    Pack, R. T.; Petersen, B.; Sunderland, D.; Blonquist, K.; Israelsen, P.; Crum, G.; Fowles, A.; Neale, C.

    2008-12-01

    The ability to merge lidar and color image data acquired by multiple passes of an aircraft or van is largely dependent on the accuracy of the navigation system that estimates the dynamic position and orientation of the sensor. We report an assessment of the performance of a Riegl Q560 lidar transceiver combined with a Litton LN-200 inertial measurement unit (IMU) based NovAtel SPAN GPS/IMU system and a Panasonic HD Video Camera system. Several techniques are reported that were used to maximize the performance of the GPS/IMU system in generating precisely merged point clouds. The airborne data used included eight flight lines all overflying the same building on the campus at Utah State University. These lines were flown at the FAA minimum altitude of 1000 feet for fixed-wing aircraft. The mobile data was then acquired with the same system mounted to look sideways out of a van several months later. The van was driven around the same building at variable speed in order to avoid pedestrians. An absolute accuracy of about 6 cm and a relative accuracy of less than 2.5 cm one-sigma are documented for the merged data. Several techniques are also reported for merging of the color video data stream with the lidar point cloud. A technique for back-projecting and burning lidar points within the video stream enables the verification of co-boresighting accuracy. The resulting pixel-level alignment is accurate to within the size of a lidar footprint. The techniques described in this paper enable the display of high-resolution colored point clouds with fine detail and color clarity.

  20. Precision and accuracy of visual foliar injury assessments

    SciTech Connect

    Gumpertz, M.L.; Tingey, D.T.; Hogsett, W.E.

    1982-07-01

The study compared three measures of foliar injury: (i) mean percent leaf area injured of all leaves on the plant, (ii) mean percent leaf area injured of the three most injured leaves, and (iii) the proportion of injured leaves to the total number of leaves. For the first measure, the variation caused by reader biases and day-to-day variation was compared with the innate plant-to-plant variation. Bean (Phaseolus vulgaris 'Pinto'), pea (Pisum sativum 'Little Marvel'), radish (Raphanus sativus 'Cherry Belle'), and spinach (Spinacia oleracea 'Northland') plants were exposed to either 3 μL L⁻¹ SO₂ or 0.3 μL L⁻¹ ozone for 2 h. Three leaf readers visually assessed the percent injury on every leaf of each plant while a fourth reader used a transparent grid to make an unbiased assessment for each plant. The mean leaf area injured of the three most injured leaves was highly correlated with that of all leaves on the plant only if the three most injured leaves were <100% injured. The proportion of leaves injured was not highly correlated with percent leaf area injured of all leaves on the plant for any species in this study. The largest source of variation in visual assessments was plant-to-plant variation, which ranged from 44 to 97% of the total variance, followed by variation among readers (0-32% of the variance). Except for radish exposed to ozone, the day-to-day variation accounted for <18% of the total. Reader bias in assessment of ozone injury was significant but could be adjusted for each reader by a simple linear regression (R² = 0.89-0.91) of the visual assessments against the grid assessments.
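The reader-bias adjustment described above amounts to a simple least-squares fit of each reader's visual scores against the unbiased grid scores. A minimal sketch follows; all numbers are invented for illustration, not the study's data.

```python
# Fit a line mapping one reader's visual scores onto the grid scale, then
# use the fitted line to adjust that reader's assessments.
def fit_line(x, y):
    """Ordinary least squares for y = a + b*x; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

grid   = [5.0, 10.0, 20.0, 40.0, 60.0]   # unbiased grid assessments (% injury)
visual = [8.0, 14.0, 26.0, 46.0, 66.0]   # one reader's visual assessments

a, b = fit_line(visual, grid)            # calibration: visual -> grid scale
adjusted = [a + b * v for v in visual]   # bias-corrected scores
```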

  1. Vestibular and Oculomotor Assessments May Increase Accuracy of Subacute Concussion Assessment.

    PubMed

    McDevitt, J; Appiah-Kubi, K O; Tierney, R; Wright, W G

    2016-08-01

In this study, we collected and analyzed preliminary data for the internal consistency of a new condensed model to assess vestibular and oculomotor impairments following a concussion. We also examined this model's ability to discriminate concussed athletes from healthy controls. Each participant was tested in a concussion assessment protocol that consisted of the NeuroCom Sensory Organization Test (SOT), the Balance Error Scoring System exam, and a series of 8 vestibular and oculomotor assessments. Of these 10 assessments, only the SOT, near point convergence, and the signs and symptoms (S/S) scores collected following optokinetic stimulation, the horizontal eye saccades test, and the gaze stabilization test were significantly correlated with health status, and were used in further analyses. Multivariate logistic regression for binary outcomes was employed, and the beta weights were used to calculate the area under the receiver operating characteristic curve (AUC). The best model supported by our findings suggests that an exam consisting of the 4 SOT sensory ratios, near point convergence, and the optokinetic stimulation S/S score is sensitive in discriminating concussed athletes from healthy controls (accuracy=98.6%, AUC=0.983). However, an even more parsimonious model consisting of only the optokinetic stimulation and gaze stabilization test S/S scores and near point convergence was found to be a sensitive model for discriminating concussed athletes from healthy controls (accuracy=94.4%, AUC=0.951) without the need for expensive equipment. Although more investigation is needed, these findings will be helpful to health professionals, potentially providing them with a sensitive and specific battery of simple vestibular and oculomotor assessments for concussion management. PMID:27176886
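The AUC reported above can be computed directly from model scores using the rank-based (Mann-Whitney) formulation: the probability that a randomly chosen positive case outscores a randomly chosen negative one. A sketch with illustrative scores (not the study's data):

```python
# AUC via pairwise comparison of predicted scores for the two groups.
def auc(scores_pos, scores_neg):
    """Fraction of (positive, negative) pairs ranked correctly; ties count 0.5."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

concussed = [0.90, 0.80, 0.95, 0.70]   # illustrative model scores, concussed
controls  = [0.20, 0.40, 0.10, 0.75]   # illustrative model scores, controls
result = auc(concussed, controls)      # 15 of 16 pairs ranked correctly -> 0.9375
```

An AUC of 1.0 would mean perfect separation of the groups; 0.5 means the model is no better than chance.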

  2. Multipolar Ewald Methods, 1: Theory, Accuracy, and Performance

    PubMed Central

    2015-01-01

    The Ewald, Particle Mesh Ewald (PME), and Fast Fourier–Poisson (FFP) methods are developed for systems composed of spherical multipole moment expansions. A unified set of equations is derived that takes advantage of a spherical tensor gradient operator formalism in both real space and reciprocal space to allow extension to arbitrary multipole order. The implementation of these methods into a novel linear-scaling modified “divide-and-conquer” (mDC) quantum mechanical force field is discussed. The evaluation times and relative force errors are compared between the three methods, as a function of multipole expansion order. Timings and errors are also compared within the context of the quantum mechanical force field, which encounters primary errors related to the quality of reproducing electrostatic forces for a given density matrix and secondary errors resulting from the propagation of the approximate electrostatics into the self-consistent field procedure, which yields a converged, variational, but nonetheless approximate density matrix. Condensed-phase simulations of an mDC water model are performed with the multipolar PME method and compared to an electrostatic cutoff method, which is shown to artificially increase the density of water and heat of vaporization relative to full electrostatic treatment. PMID:25691829

  3. Multipolar Ewald methods, 1: theory, accuracy, and performance.

    PubMed

    Giese, Timothy J; Panteva, Maria T; Chen, Haoyuan; York, Darrin M

    2015-02-10

    The Ewald, Particle Mesh Ewald (PME), and Fast Fourier–Poisson (FFP) methods are developed for systems composed of spherical multipole moment expansions. A unified set of equations is derived that takes advantage of a spherical tensor gradient operator formalism in both real space and reciprocal space to allow extension to arbitrary multipole order. The implementation of these methods into a novel linear-scaling modified “divide-and-conquer” (mDC) quantum mechanical force field is discussed. The evaluation times and relative force errors are compared between the three methods, as a function of multipole expansion order. Timings and errors are also compared within the context of the quantum mechanical force field, which encounters primary errors related to the quality of reproducing electrostatic forces for a given density matrix and secondary errors resulting from the propagation of the approximate electrostatics into the self-consistent field procedure, which yields a converged, variational, but nonetheless approximate density matrix. Condensed-phase simulations of an mDC water model are performed with the multipolar PME method and compared to an electrostatic cutoff method, which is shown to artificially increase the density of water and heat of vaporization relative to full electrostatic treatment.

  4. Assessing Data Accuracy When Involving Students in Authentic Paleontological Research.

    ERIC Educational Resources Information Center

    Harnik, Paul G.; Ross, Robert M.

    2003-01-01

    Regards Student-Scientist Partnerships (SSPs) as beneficial collaborations for both students and researchers. Introduces the Paleontological Research Institution (PRI), which developed and pilot tested an SSP that involved grade 4-9 students in paleontological research on Devonian marine fossil assemblages. Reports formative data assessment and…

  5. An assessment of the accuracy of orthotropic photoelasticity

    NASA Technical Reports Server (NTRS)

    Hyer, M. W.; Liu, D. H.

    1984-01-01

    The accuracy of orthotropic photoelasticity was studied. The study consisted of both theoretical and experimental phases. In the theoretical phase a stress-optic law was developed. The stress-optic law included the effects of residual birefringence in the relation between applied stress and the material's optical response. The experimental phase had several portions. First, it was shown that four-point bending tests and the concept of an optical neutral axis could be conveniently used to calibrate the stress-optic behavior of the material. Second, the actual optical response of an orthotropic disk in diametral compression was compared with theoretical predictions. Third, the stresses in the disk were determined from the observed optical response, the stress-optic law, and a finite-difference form of the plane stress equilibrium equations. It was concluded that orthotropic photoelasticity is not as accurate as isotropic photoelasticity. This is believed to be due to the lack of good fringe resolution and the low sensitivity of most orthotropic photoelastic materials.

  6. The Attribute Accuracy Assessment of Land Cover Data in the National Geographic Conditions Survey

    NASA Astrophysics Data System (ADS)

    Ji, X.; Niu, X.

    2014-04-01

With the nationwide survey of geographic conditions, object-based data have become the most common data organization pattern in land cover research. Assessing the accuracy of object-based land cover data affects many stages of data production, such as the efficiency of in-house production and the quality of the final land cover data. There is therefore great demand for accuracy assessment of object-based classification maps. Traditional approaches to accuracy assessment in surveying and mapping were not designed for land cover data, so methods from imagery classification must be employed. However, traditional pixel-based accuracy assessment methods are inadequate here, because pixel sample units are not suitable for assessing object-based classification results. Our improved measures are based on the error matrix, using objects as sample units. Compared to pixel samples, object samples are not uniform in size. To make the indices derived from the error matrix reliable, we use the areas of the object samples as weights when establishing the error matrix of the object-based classification map. We compare error matrices built in two ways: from the number of object samples and from the sum of object-sample areas. The error matrix using the sum of object-sample areas proves to be an intuitive, useful technique for reflecting the actual accuracy of object-based imagery classification results.
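The area-weighted error matrix described above can be sketched as follows: each sample is an object carrying its mapped class, its reference class, and its area, and matrix cells accumulate area rather than object counts. Class labels and areas below are invented for illustration.

```python
from collections import defaultdict

# Build an area-weighted error matrix and its overall accuracy.
def area_weighted_matrix(samples):
    """samples: iterable of (mapped_class, reference_class, area)."""
    matrix = defaultdict(float)
    total = 0.0
    for mapped, ref, area in samples:
        matrix[(mapped, ref)] += area     # cell accumulates object area
        total += area
    # Overall accuracy = correctly classified area / total sampled area.
    overall = sum(a for (m, r), a in matrix.items() if m == r) / total
    return dict(matrix), overall

samples = [
    ("forest", "forest", 120.0),
    ("forest", "crop",    10.0),
    ("crop",   "crop",    60.0),
    ("water",  "water",   30.0),
    ("crop",   "forest",  20.0),
]
matrix, overall = area_weighted_matrix(samples)   # 210/240 = 0.875
```

A count-based matrix would score the same samples as 3 correct out of 5 objects (0.6), showing how weighting by area changes the reported accuracy when object sizes differ.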

  7. Attribute-Level and Pattern-Level Classification Consistency and Accuracy Indices for Cognitive Diagnostic Assessment

    ERIC Educational Resources Information Center

    Wang, Wenyi; Song, Lihong; Chen, Ping; Meng, Yaru; Ding, Shuliang

    2015-01-01

    Classification consistency and accuracy are viewed as important indicators for evaluating the reliability and validity of classification results in cognitive diagnostic assessment (CDA). Pattern-level classification consistency and accuracy indices were introduced by Cui, Gierl, and Chang. However, the indices at the attribute level have not yet…

  8. Assessing the accuracy of quantitative molecular microbial profiling.

    PubMed

    O'Sullivan, Denise M; Laver, Thomas; Temisak, Sasithon; Redshaw, Nicholas; Harris, Kathryn A; Foy, Carole A; Studholme, David J; Huggett, Jim F

    2014-01-01

    The application of high-throughput sequencing in profiling microbial communities is providing an unprecedented ability to investigate microbiomes. Such studies typically apply one of two methods: amplicon sequencing using PCR to target a conserved orthologous sequence (typically the 16S ribosomal RNA gene) or whole (meta)genome sequencing (WGS). Both methods have been used to catalog the microbial taxa present in a sample and quantify their respective abundances. However, a comparison of the inherent precision or bias of the different sequencing approaches has not been performed. We previously developed a metagenomic control material (MCM) to investigate error when performing different sequencing strategies. Amplicon sequencing using four different primer strategies and two 16S rRNA regions was examined (Roche 454 Junior) and compared to WGS (Illumina HiSeq). All sequencing methods generally performed comparably and in good agreement with organism specific digital PCR (dPCR); WGS notably demonstrated very high precision. Where discrepancies between relative abundances occurred they tended to differ by less than twofold. Our findings suggest that when alternative sequencing approaches are used for microbial molecular profiling they can perform with good reproducibility, but care should be taken when comparing small differences between distinct methods. This work provides a foundation for future work comparing relative differences between samples and the impact of extraction methods. We also highlight the value of control materials when conducting microbial profiling studies to benchmark methods and set appropriate thresholds.

  9. Assessing the Accuracy of Quantitative Molecular Microbial Profiling

    PubMed Central

    O’Sullivan, Denise M.; Laver, Thomas; Temisak, Sasithon; Redshaw, Nicholas; Harris, Kathryn A.; Foy, Carole A.; Studholme, David J.; Huggett, Jim F.

    2014-01-01

    The application of high-throughput sequencing in profiling microbial communities is providing an unprecedented ability to investigate microbiomes. Such studies typically apply one of two methods: amplicon sequencing using PCR to target a conserved orthologous sequence (typically the 16S ribosomal RNA gene) or whole (meta)genome sequencing (WGS). Both methods have been used to catalog the microbial taxa present in a sample and quantify their respective abundances. However, a comparison of the inherent precision or bias of the different sequencing approaches has not been performed. We previously developed a metagenomic control material (MCM) to investigate error when performing different sequencing strategies. Amplicon sequencing using four different primer strategies and two 16S rRNA regions was examined (Roche 454 Junior) and compared to WGS (Illumina HiSeq). All sequencing methods generally performed comparably and in good agreement with organism specific digital PCR (dPCR); WGS notably demonstrated very high precision. Where discrepancies between relative abundances occurred they tended to differ by less than twofold. Our findings suggest that when alternative sequencing approaches are used for microbial molecular profiling they can perform with good reproducibility, but care should be taken when comparing small differences between distinct methods. This work provides a foundation for future work comparing relative differences between samples and the impact of extraction methods. We also highlight the value of control materials when conducting microbial profiling studies to benchmark methods and set appropriate thresholds. PMID:25421243

  10. Probabilistic Digital Elevation Model Generation For Spatial Accuracy Assessment

    NASA Astrophysics Data System (ADS)

    Jalobeanu, A.

    2008-12-01

    are presented. A pair of images (including the nadir view) at 30m resolution was used to obtain a DEM with a vertical accuracy better than 10m in well-textured areas. The lack of information in smooth regions naturally led to large uncertainty estimates.

  11. Accuracy of Optimized Branched Algorithms to Assess Activity-Specific PAEE

    PubMed Central

    Edwards, Andy G.; Hill, James O.; Byrnes, William C.; Browning, Raymond C.

    2009-01-01

PURPOSE To assess the activity-specific accuracy achievable by branched algorithm (BA) analysis of simulated daily-living physical activity energy expenditure (PAEE) within a sedentary population. METHODS Sedentary men (n=8) and women (n=8) first performed a treadmill calibration protocol, during which heart rate (HR), accelerometry (ACC), and PAEE were measured in 1-minute epochs. From these data, HR-PAEE and ACC-PAEE regressions were constructed and used in each of six analytic models to predict PAEE from ACC and HR data collected during a subsequent simulated daily-living protocol. Criterion PAEE was measured during both protocols via indirect calorimetry. The accuracy achieved by each model was assessed by the root mean square of the difference between model-predicted daily-living PAEE and the criterion daily-living PAEE (expressed here as % of mean daily-living PAEE). RESULTS Across the range of activities an unconstrained post hoc optimized branched algorithm best predicted criterion PAEE. Estimates using individual calibration were generally more accurate than those using group calibration (14 vs. 16% error, respectively). These analyses also performed well within each of the six daily-living activities, but systematic errors appeared for several of those activities, which may be explained by an inability of the algorithm to simultaneously accommodate a heterogeneous range of activities. Analyses of mean square error by subject and by activity suggest that optimization involving minimization of RMS for total daily-living PAEE is associated with decreased error between subjects but increased error between activities. CONCLUSION The performance of post hoc optimized branched algorithms may be limited by heterogeneity in the daily-living activities being performed. PMID:19952842
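The accuracy metric used above (root mean square of the prediction-criterion differences, expressed as a percentage of mean criterion PAEE) can be sketched as below; the epoch values are invented for illustration.

```python
import math

# RMS error of predicted vs. criterion energy expenditure, as % of the mean.
def rms_percent_error(predicted, criterion):
    rms = math.sqrt(
        sum((p - c) ** 2 for p, c in zip(predicted, criterion)) / len(criterion)
    )
    return 100.0 * rms / (sum(criterion) / len(criterion))

predicted = [10.5, 9.8, 11.2, 10.0]   # model-predicted PAEE per epoch (kcal)
criterion = [10.0, 10.0, 11.0, 10.5]  # indirect-calorimetry criterion PAEE
error_pct = rms_percent_error(predicted, criterion)
```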

  12. Performance Prediction For Multi-Sensor Tracking Systems: Kinematic Accuracy And Data Association Performance

    NASA Astrophysics Data System (ADS)

    Broida, Ted J.

    1990-03-01

object or feature) and data fusion (combining measurements from different times and/or different sensors) are required in one form or another in essentially all multiple sensor fusion applications: one function determines what information should be fused, the other function performs the fusion. This paper presents approaches for quantifying the performance of these functions in the surveillance and tracking application. First, analytical techniques are presented that bound or approximate the fused kinematic estimation performance of multiple sensor tracking systems, in the absence of association errors. These bounds and approximations are based on several extensions of standard Kalman filter covariance analysis procedures, and allow modeling of a wide range of sensor types and arbitrary, time-varying geometries, both sensor-to-sensor and sensor-to-object. Arbitrarily many sensors can be used with varying update intervals, measurement accuracies, and detection performance. In heavy clutter or false alarm backgrounds it is often impossible to determine which (if any) of the measurements near a target track actually arise from the target, which leads to a degradation of tracking accuracy. This degradation can be estimated (but not bounded) with an approximate covariance analysis of the Probabilistic Data Association Filter (PDAF). Next, data association performance is quantified in terms of error probability for the case of closely spaced objects (CSOs) with minimal clutter, and for the case of isolated objects in a heavy clutter or false alarm background. These probabilities can be applied to data acquired by any sensor, based on measurement and track accuracies described by error covariance matrices.
For example, in many applications a track established by one sensor is used to cue another sensor - in the presence of CSOs and/or clutter backgrounds, this approach can be used to estimate the probability of successful acquisition of the desired target by the second sensor.
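As a toy illustration of the covariance analysis above, the scalar inverse-variance fusion below shows the basic operation underlying multi-sensor kinematic accuracy bounds: combining two independent, unbiased estimates of the same quantity always yields a fused variance smaller than either input. This is a deliberate simplification of the full Kalman covariance machinery, not the paper's method.

```python
# Scalar inverse-variance (information) fusion of two independent estimates.
def fuse_variance(var_a, var_b):
    """Fused variance: the harmonic combination of the two input variances."""
    return 1.0 / (1.0 / var_a + 1.0 / var_b)

def fuse_estimate(x_a, var_a, x_b, var_b):
    """Minimum-variance combination of two unbiased estimates."""
    v = fuse_variance(var_a, var_b)
    return v * (x_a / var_a + x_b / var_b), v

x, v = fuse_estimate(10.0, 4.0, 12.0, 4.0)   # -> (11.0, 2.0)
```

With equal input variances the fused estimate is the simple average and the variance is halved, which is why adding sensors tightens the kinematic accuracy bounds (absent association errors).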

  13. Online Medical Device Use Prediction: Assessment of Accuracy.

    PubMed

    Maktabi, Marianne; Neumuth, Thomas

    2016-01-01

Cost-intensive units in the hospital, such as the operating room, require effective resource management to improve surgical workflow and patient care. To maximize efficiency, online management systems should accurately forecast the use of technical resources (medical instruments and devices). We forecast future technical resource usage for several surgical activities, such as use of the coagulator, based on spectral analysis and application of a linear time-variant system. In our study we examined the influence of the duration of usage and the total usage rate of the technical equipment on prediction performance over several time intervals. A cross-validation was conducted with sixty-two neck dissections to evaluate prediction performance. The performance of a use-state forecast does not change whether duration is considered or not, but decreases with lower total usage rates of the observed instruments. A minimum number of surgical workflow recordings (here: 62) and >5-minute time intervals for the use-state forecast are required for applying the described method in surgical practice. The work presented here might support the reduction of resource conflicts when resources are shared among different operating rooms. PMID:27577445

  14. Assessment of RFID Read Accuracy for ISS Water Kit

    NASA Technical Reports Server (NTRS)

    Chu, Andrew

    2011-01-01

The Space Life Sciences Directorate/Medical Informatics and Health Care Systems Branch (SD4) is assessing the benefits of Radio Frequency Identification (RFID) technology for tracking items flown onboard the International Space Station (ISS). As an initial study, the Avionic Systems Division Electromagnetic Systems Branch (EV4) is collaborating with SD4 to affix RFID tags to a water kit supplied by SD4 and to study the read success rate of the tagged items. The tagged water kit inside a Cargo Transfer Bag (CTB) was inventoried using three different RFID technologies: the Johnson Space Center Building 14 Wireless Habitat Test Bed RFID portal, an RFID hand-held reader being targeted for use on board the ISS, and an RFID enclosure designed and prototyped by EV4.

  15. Shuttle radar topography mission accuracy assessment and evaluation for hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Mercuri, Pablo Alberto

    Digital Elevation Models (DEMs) are increasingly used even in low relief landscapes for multiple mapping applications and modeling approaches such as surface hydrology, flood risk mapping, agricultural suitability, and generation of topographic attributes. The National Aeronautics and Space Administration (NASA) has produced a nearly global database of highly accurate elevation data, the Shuttle Radar Topography Mission (SRTM) DEM. The main goals of this thesis were to investigate quality issues of SRTM, provide measures of vertical accuracy with emphasis on low relief areas, and to analyze the performance for the generation of physical boundaries and streams for watershed modeling and characterization. The absolute and relative accuracy of the two SRTM resolutions, at 1 and 3 arc-seconds, were investigated to generate information that can be used as a reference in areas with similar characteristics in other regions of the world. The absolute accuracy was obtained from accurate point estimates using the best available federal geodetic network in Indiana. The SRTM root mean square error for this area of the Midwest US surpassed data specifications. It was on the order of 2 meters for the 1 arc-second resolution in flat areas of the Midwest US. Estimates of error were smaller for the global coverage 3 arc-second data with very similar results obtained in the flat plains in Argentina. In addition to calculating the vertical accuracy, the impacts of physiography and terrain attributes, like slope, on the error magnitude were studied. The assessment also included analysis of the effects of land cover on vertical accuracy. Measures of local variability were described to identify the adjacency effects produced by surface features in the SRTM DEM, like forests and manmade features near the geodetic point. 
Spatial relationships between the bare-earth National Elevation Dataset and SRTM were also analyzed to assess the relative accuracy, which was 2.33 meters in terms of the total
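The vertical accuracy figures above are root mean square errors of DEM heights against geodetic check points. A minimal sketch of that computation (heights below are invented):

```python
import math

# Vertical RMSE between DEM heights and geodetic reference heights (meters).
def vertical_rmse(dem_heights, reference_heights):
    diffs = [d - r for d, r in zip(dem_heights, reference_heights)]
    return math.sqrt(sum(e * e for e in diffs) / len(diffs))

rmse = vertical_rmse([101.8, 99.5, 100.9], [100.0, 100.0, 100.0])
```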

  16. 12 CFR 630.5 - Accuracy of reports and assessment of internal control over financial reporting.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... CREDIT SYSTEM General § 630.5 Accuracy of reports and assessment of internal control over financial... assessment of internal control over financial reporting. (1) Annual reports must include a report by the Funding Corporation's management assessing the effectiveness of the internal control over...

  17. Assessment of Second Language Performance.

    ERIC Educational Resources Information Center

    Lumley, Tom

    1996-01-01

A discussion of current second language testing trends and practices in Australia focuses on the use of performance assessment, providing examples of its application in four specific contexts: an occupational English test used to assess job-related English language skills as part of the certification procedure for health professionals;…

  18. Assessment of fine motor skill in musicians and nonmusicians: differences in timing versus sequence accuracy in a bimanual fingering task.

    PubMed

    Kincaid, Anthony E; Duncan, Scott; Scott, Samuel A

    2002-08-01

    While professional musicians are generally considered to possess better control of finger movements than nonmusicians, relatively few reports have experimentally addressed the nature of this discrepancy in fine motor skills. For example, it is unknown whether musicians perform with greater skill than control subjects in all aspects of different types of fine motor activities. More specifically, it is not known whether musicians perform better than control subjects on a fine motor task that is similar, but not identical, to the playing of their primary instrument. The purpose of this study was to examine the accuracy of finger placement and accuracy of timing in professional musicians and nonmusicians using a simple, rhythmical, bilateral fingering pattern and the technology that allowed separate assessment of these two parameters. Professional musicians (other than pianists) and nonmusicians were given identical, detailed and explicit instructions but not allowed physically to practice the finger pattern. After verbally repeating the correct pattern for the investigator, subjects performed the task on an electric keyboard with both hands simultaneously. Each subject's performance was then converted to a numerical score. While musicians clearly demonstrated better accuracy in timing, no significant difference was found between the groups in their finger placement scores. These findings were not correlated with subjects' age, sex, limb dominance, or primary instrument (for the professional musicians). This study indicates that professional musicians perform better in timing accuracy but not spatial accuracy while executing a simple, novel, bimanual motor sequence. PMID:12365261

  19. Bringing everyday mind reading into everyday life: assessing empathic accuracy with daily diary data.

    PubMed

    Howland, Maryhope; Rafaeli, Eshkol

    2010-10-01

    Individual differences in empathic accuracy (EA) can be assessed using daily diary methods as a complement to more commonly used lab-based behavioral observations. Using electronic dyadic diaries, we distinguished among elements of EA (i.e., accuracy in levels, scatter, and pattern, regarding both positive and negative moods) and examined them as phenomena at both the day and the person level. In a 3-week diary study of cohabiting partners, we found support for differentiating these elements. The proposed indices reflect differing aspects of accuracy, with considerable similarity among same-valenced accuracy indices. Overall there was greater accuracy regarding negative target moods than positive target moods. These methods and findings take the phenomenon of "everyday mindreading" (Ickes, 2003) into everyday life. We conclude by discussing empathic accuracies as a family of capacities for, or tendencies toward, accurate interpersonal sensitivity. Members of this family may have distinct associations with the perceiver's, target's, and relationship's well-being.

  20. The Eye Phone Study: reliability and accuracy of assessing Snellen visual acuity using smartphone technology

    PubMed Central

    Perera, C; Chakrabarti, R; Islam, F M A; Crowston, J

    2015-01-01

Purpose Smartphone-based Snellen visual acuity charts have become popular; however, their accuracy has not been established. This study aimed to evaluate the equivalence of a smartphone-based visual acuity chart with a standard 6-m Snellen visual acuity (6SVA) chart. Methods First, a review of available Snellen chart applications on iPhone was performed to determine the most accurate application based on optotype size. Subsequently, a prospective comparative study was performed by measuring conventional 6SVA and then iPhone visual acuity using the ‘Snellen' application on an Apple iPhone 4. Results Eleven applications were identified, with accuracy of optotype size ranging from 4.4–39.9%. Eighty-eight patients from general medical and surgical wards in a tertiary hospital took part in the second part of the study. The mean difference in logMAR visual acuity between the two charts was 0.02 logMAR (95% limits of agreement −0.332, 0.372 logMAR). The largest mean difference in logMAR acuity was noted in the subgroup of patients with 6SVA worse than 6/18 (n=5), who had a mean difference of two Snellen visual acuity lines between the charts (0.276 logMAR). Conclusion We did not identify a Snellen visual acuity app at the time of the study that could predict a patient's standard Snellen visual acuity within one line. There was considerable variability in the optotype accuracy of apps. Further validation is required for assessment of acuity in patients with severe vision impairment. PMID:25931170
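The agreement analysis reported above (mean difference and 95% limits of agreement between paired logMAR measurements, in the Bland-Altman style of mean ± 1.96 SD) can be sketched as follows; the paired values are invented for illustration.

```python
import statistics

# Mean difference and 95% limits of agreement for paired measurements.
def limits_of_agreement(chart_a, chart_b):
    diffs = [a - b for a, b in zip(chart_a, chart_b)]
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)
    return mean_d, mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d

chart_a = [0.10, 0.20, 0.30, 0.40, 0.00]   # logMAR on standard 6-m chart
chart_b = [0.08, 0.22, 0.26, 0.44, 0.02]   # logMAR on smartphone chart
mean_d, lower, upper = limits_of_agreement(chart_a, chart_b)
```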

  1. An evaluation of the accuracy and performance of lightweight GPS collars in a suburban environment.

    PubMed

    Adams, Amy L; Dickinson, Katharine J M; Robertson, Bruce C; van Heezik, Yolanda

    2013-01-01

The recent development of lightweight GPS collars has enabled medium-to-small sized animals to be tracked via GPS telemetry. Evaluation of the performance and accuracy of GPS collars is largely confined to devices designed for large animals for deployment in natural environments. This study aimed to assess the performance of lightweight GPS collars within a suburban environment, which may differ from natural environments in ways relevant to satellite signal acquisition. We assessed the effects of vegetation complexity, sky availability (percentage of clear sky not obstructed by natural or artificial features of the environment), proximity to buildings, and satellite geometry on fix success rate (FSR) and location error (LE) for lightweight GPS collars within a suburban environment. Sky availability had the largest effect on FSR, while LE was influenced by sky availability, vegetation complexity, and HDOP (Horizontal Dilution of Precision). Despite the complexity and modified nature of suburban areas, values for FSR (mean = 90.6%) and LE (mean = 30.1 m) obtained within the suburban environment are comparable to those from previous evaluations of GPS collars designed for larger animals and within less built-up environments. Due to fine-scale patchiness of habitat within urban environments, it is recommended that resource selection methods that are not reliant on buffer sizes be utilised for selection studies.

  2. Integrated three-dimensional digital assessment of accuracy of anterior tooth movement using clear aligners

    PubMed Central

    Zhang, Xiao-Juan; He, Li; Tian, Jie; Bai, Yu-Xing; Li, Song

    2015-01-01

    Objective To assess the accuracy of anterior tooth movement using clear aligners in integrated three-dimensional digital models. Methods Cone-beam computed tomography was performed before and after treatment with clear aligners in 32 patients. Plaster casts were laser-scanned for virtual setup and aligner fabrication. Differences in predicted and achieved root and crown positions of anterior teeth were compared on superimposed maxillofacial digital images and virtual models and analyzed by Student's t-test. Results The mean discrepancies in maxillary and mandibular crown positions were 0.376 ± 0.041 mm and 0.398 ± 0.037 mm, respectively. Maxillary and mandibular root positions differed by 2.062 ± 0.128 mm and 1.941 ± 0.154 mm, respectively. Conclusions Crowns but not roots of anterior teeth can be moved to designated positions using clear aligners, because these appliances cause tooth movement by tilting motion. PMID:26629473

  3. Assessing the accuracy of Landsat Thematic Mapper classification using double sampling

    USGS Publications Warehouse

    Kalkhan, M.A.; Reich, R.M.; Stohlgren, T.J.

    1998-01-01

    Double sampling was used to provide a cost-efficient estimate of the accuracy of a Landsat Thematic Mapper (TM) classification map of a scene located in Rocky Mountain National Park, Colorado. In the first phase, 200 sample points were randomly selected to assess the agreement between Landsat TM data and aerial photography. The overall accuracy and Kappa statistic were 49.5% and 32.5%, respectively. In the second phase, 25 sample points identified in the first phase were selected using stratified random sampling and located in the field. This information was used to correct for misclassification errors associated with the first-phase samples. The overall accuracy and Kappa statistic increased to 59.6% and 45.6%, respectively.
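The overall accuracy and Kappa statistic reported above can be computed from an error (confusion) matrix. A minimal sketch, using a hypothetical 3-class matrix rather than the study's data:

```python
# Illustrative sketch (hypothetical counts, not the study's data): overall
# accuracy and Cohen's kappa from an error (confusion) matrix.

def accuracy_and_kappa(matrix):
    """matrix[i][j] = number of samples of true class i labelled as class j."""
    n = len(matrix)
    total = sum(sum(row) for row in matrix)
    observed = sum(matrix[i][i] for i in range(n)) / total
    # Chance agreement: sum over classes of (row marginal x column marginal).
    expected = sum(
        (sum(matrix[i]) / total) * (sum(row[i] for row in matrix) / total)
        for i in range(n)
    )
    return observed, (observed - expected) / (1 - expected)

cm = [[30, 10, 5],   # e.g. forest
      [8, 40, 7],    # shrub
      [4, 6, 90]]    # grassland
acc, kappa = accuracy_and_kappa(cm)
print(f"overall accuracy = {acc:.1%}, kappa = {kappa:.3f}")
```

Kappa discounts the agreement expected by chance alone, which is why it is always lower than the raw overall accuracy.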

  4. Reconstruction Accuracy Assessment of Surface and Underwater 3D Motion Analysis: A New Approach.

    PubMed

    de Jesus, Kelly; de Jesus, Karla; Figueiredo, Pedro; Vilas-Boas, João Paulo; Fernandes, Ricardo Jorge; Machado, Leandro José

    2015-01-01

    This study assessed the accuracy of surface and underwater 3D reconstruction of a calibration volume with and without homography. A calibration volume (6000 × 2000 × 2500 mm) with 236 markers (64 above-water and 88 underwater control points, with 8 common points at the water surface, and 92 validation points) was positioned in a 25 m swimming pool and recorded with two surface and four underwater cameras. Planar homography estimation for each calibration plane was computed to perform image rectification. The direct linear transformation algorithm for 3D reconstruction was applied, using 1,600,000 different combinations of 32 and 44 points out of the 64 and 88 control points for surface and underwater markers, respectively. Root Mean Square (RMS) error with homography of control and validation points was lower than without it for surface and underwater cameras (P ≤ 0.03). With homography, RMS errors of control and validation points were similar between surface and underwater cameras (P ≥ 0.47). Without homography, RMS error of control points was greater for underwater than surface cameras (P ≤ 0.04) and the opposite was observed for validation points (P ≤ 0.04). It is recommended that future studies using 3D reconstruction include homography to improve the accuracy of swimming movement analysis.

  5. Accuracy assessment of the integration of GNSS and a MEMS IMU in a terrestrial platform.

    PubMed

    Madeira, Sergio; Yan, Wenlin; Bastos, Luísa; Gonçalves, José A

    2014-11-04

    MEMS Inertial Measurement Units are available at low cost and can replace expensive units in mobile mapping platforms that need direct georeferencing. This is done through integration with GNSS measurements in order to achieve a continuous positioning solution and to obtain orientation angles. This paper presents the results of an assessment of the accuracy of a system that integrates GNSS and a MEMS IMU in a terrestrial platform. We describe the methodology used and the tests performed, in which the accuracy of the position and orientation parameters was assessed using an independent photogrammetric technique employing cameras integrated into the mobile mapping system developed by the authors. Results for the accuracy of attitude angles and coordinates show that accuracies better than a decimeter in position, and under a degree in angles, can be achieved even though the terrestrial platform operates in less than favorable environments.

  6. Accuracy Assessment of the Integration of GNSS and a MEMS IMU in a Terrestrial Platform

    PubMed Central

    Madeira, Sergio; Yan, Wenlin; Bastos, Luísa; Gonçalves, José A.

    2014-01-01

    MEMS Inertial Measurement Units are available at low cost and can replace expensive units in mobile mapping platforms that need direct georeferencing. This is done through integration with GNSS measurements in order to achieve a continuous positioning solution and to obtain orientation angles. This paper presents the results of an assessment of the accuracy of a system that integrates GNSS and a MEMS IMU in a terrestrial platform. We describe the methodology used and the tests performed, in which the accuracy of the position and orientation parameters was assessed using an independent photogrammetric technique employing cameras integrated into the mobile mapping system developed by the authors. Results for the accuracy of attitude angles and coordinates show that accuracies better than a decimeter in position, and under a degree in angles, can be achieved even though the terrestrial platform operates in less than favorable environments. PMID:25375757

  7. 3D combinational curves for accuracy and performance analysis of positive biometrics identification

    NASA Astrophysics Data System (ADS)

    Du, Yingzi; Chang, Chein-I.

    2008-06-01

    The receiver operating characteristic (ROC) curve has been widely used as an evaluation criterion to measure the accuracy of a biometrics system. Unfortunately, an ROC curve provides no indication of the optimum threshold or cost function. In this paper, two kinds of 3D combinational curves are proposed: the 3D combinational accuracy curve and the 3D combinational performance curve. The 3D combinational accuracy curve gives a balanced view of the relationships among FAR (false alarm rate), FRR (false rejection rate), threshold t, and Cost. Six 2D curves can be derived from the 3D combinational accuracy curve: the conventional 2D ROC curve and the 2D curves of (FRR, t), (FAR, t), (FRR, Cost), (FAR, Cost), and (t, Cost). The 3D combinational performance curve can be derived from the 3D combinational accuracy curve and gives a balanced view among Security, Convenience, threshold t, and Cost. The advantages of the proposed 3D combinational curves are demonstrated by iris recognition systems, where the experimental results show that the curves provide more comprehensive information about system accuracy and performance.
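The relationship among FAR, FRR, threshold, and cost that the 3D combinational accuracy curve visualizes can be sketched by sweeping a decision threshold over match scores. The scores and equal cost weights below are invented for illustration, not from the paper:

```python
# Sketch of the (FAR, FRR, t, Cost) relationship: sweep a decision
# threshold t over match scores and track each quantity. Scores and cost
# weights are hypothetical.

def far_frr_cost(genuine, impostor, thresholds, w_fa=1.0, w_fr=1.0):
    curves = []
    for t in thresholds:
        # Accept when score >= t: accepted impostors are false accepts,
        # rejected genuine users are false rejects.
        far = sum(s >= t for s in impostor) / len(impostor)
        frr = sum(s < t for s in genuine) / len(genuine)
        curves.append((t, far, frr, w_fa * far + w_fr * frr))
    return curves

genuine = [0.9, 0.8, 0.85, 0.7, 0.95, 0.6]
impostor = [0.2, 0.4, 0.35, 0.5, 0.1, 0.65]
curves = far_frr_cost(genuine, impostor, [i / 10 for i in range(11)])
best = min(curves, key=lambda c: c[3])
print(f"optimal threshold t = {best[0]}, cost = {best[3]:.3f}")
```

Unlike a plain ROC plot of (FAR, FRR) pairs, this table retains the threshold and cost at each operating point, which is the gap the 3D curves address.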

  8. Quality and accuracy assessment of nutrition information on the Web for cancer prevention.

    PubMed

    Shahar, Suzana; Shirley, Ng; Noah, Shahrul A

    2013-01-01

    This study aimed to assess the quality and accuracy of nutrition information about cancer prevention available on the Web. The keywords 'nutrition + diet + cancer + prevention' were submitted to the Google search engine. Out of 400 websites evaluated, 100 met the inclusion and exclusion criteria and were selected as the sample for the assessment of quality and accuracy. Overall, 54% of the studied websites had low quality, 48% and 57% had no author's name or information, respectively, 100% were not updated within 1 month during the study period, and 86% did not have the Health on the Net seal. When the websites were assessed for readability using the Flesch Reading Ease test, nearly 44% of the websites were categorised as 'quite difficult'. With regard to accuracy, 91% of the websites did not precisely follow the latest WCRF/AICR 2007 recommendation. The quality scores correlated significantly with the accuracy scores (r = 0.250, p < 0.05). Professional websites (n = 22) had the highest mean quality scores, whereas government websites (n = 2) had the highest mean accuracy scores. The quality of the websites selected in this study was not satisfactory, and there is great concern about the accuracy of the information being disseminated. PMID:22957981

  9. Thermal effects on human performance in office environment measured by integrating task speed and accuracy.

    PubMed

    Lan, Li; Wargocki, Pawel; Lian, Zhiwei

    2014-05-01

    We have proposed a method in which speed and accuracy can be integrated into one metric of human performance. This was achieved by designing a performance task in which the subjects receive feedback on their performance: they are informed whether they have committed errors and, if they have, they can only proceed once the errors are corrected. Traditionally, tasks are presented without this feedback, and thus speed and accuracy are treated separately. The method was examined in an experiment with human subjects, with the thermal environment as the prototypical example. During exposure in an office, 12 subjects repeatedly performed tasks under two thermal conditions (neutral and warm). The tasks were presented with and without feedback on errors committed, as outlined above. The results indicate that there was a greater decrease in task performance due to thermal discomfort when feedback was given, compared to the performance of tasks presented without feedback.

  10. Assessment of the accuracy of pharmacy students' compounded solutions using vapor pressure osmometry.

    PubMed

    Kolling, William M; McPherson, Timothy B

    2013-04-12

    OBJECTIVE. To assess the effectiveness of using a vapor pressure osmometer to measure the accuracy of pharmacy students' compounding skills. DESIGN. Students calculated the theoretical osmotic pressure (mmol/kg) of a solution as a pre-laboratory exercise, compared their calculations with actual values, and then attempted to determine the cause of any errors found. ASSESSMENT. After the introduction of the vapor pressure osmometer, the first-time pass rate for solution compounding has varied from 85% to 100%. Approximately 85% of students surveyed reported that the instrument was valuable as a teaching tool because it objectively assessed their work and provided immediate formative assessment. CONCLUSIONS. This simple technique of measuring compounding accuracy using a vapor pressure osmometer allowed students to see the importance of quality control and assessment in practice for both pharmacists and technicians.
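The pre-laboratory osmolality calculation described above can be sketched as follows; the 0.9% NaCl example and the ideal dissociation factor are assumptions for illustration, not taken from the article:

```python
# Hypothetical sketch of a theoretical-osmolality calculation
# (mmol of solute particles per kg of water). The NaCl example and ideal
# dissociation factor are assumptions, not from the article.

def osmolality_mmol_per_kg(grams_solute, mw_g_per_mol, particles, kg_water):
    mol = grams_solute / mw_g_per_mol
    # Each mole yields `particles` osmotically active species (ideal case).
    return mol * particles * 1000 / kg_water

# 0.9 g NaCl in ~0.1 kg water; NaCl -> Na+ + Cl- gives 2 particles.
theoretical = osmolality_mmol_per_kg(0.9, 58.44, 2, 0.1)
print(f"theoretical osmolality = {theoretical:.0f} mmol/kg")
```

Comparing such a theoretical value with the osmometer reading is exactly the error-finding exercise the students performed.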

  11. Assessment of the Accuracy of Pharmacy Students’ Compounded Solutions Using Vapor Pressure Osmometry

    PubMed Central

    McPherson, Timothy B.

    2013-01-01

    Objective. To assess the effectiveness of using a vapor pressure osmometer to measure the accuracy of pharmacy students’ compounding skills. Design. Students calculated the theoretical osmotic pressure (mmol/kg) of a solution as a pre-laboratory exercise, compared their calculations with actual values, and then attempted to determine the cause of any errors found. Assessment. After the introduction of the vapor pressure osmometer, the first-time pass rate for solution compounding has varied from 85% to 100%. Approximately 85% of students surveyed reported that the instrument was valuable as a teaching tool because it objectively assessed their work and provided immediate formative assessment. Conclusions. This simple technique of measuring compounding accuracy using a vapor pressure osmometer allowed students to see the importance of quality control and assessment in practice for both pharmacists and technicians. PMID:23610476

  12. A priori evaluation of two-stage cluster sampling for accuracy assessment of large-area land-cover maps

    USGS Publications Warehouse

    Wickham, J.D.; Stehman, S.V.; Smith, J.H.; Wade, T.G.; Yang, L.

    2004-01-01

    Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, within-cluster correlation may reduce the precision of the accuracy estimates. The detailed population information to quantify a priori the effect of within-cluster correlation on precision is typically unavailable. Consequently, a convenient, practical approach to evaluate the likely performance of a two-stage cluster sample is needed. We describe such an a priori evaluation protocol focusing on the spatial distribution of the sample by land-cover class across different cluster sizes and costs of different sampling options, including options not imposing clustering. This protocol also assesses the two-stage design's adequacy for estimating the precision of accuracy estimates for rare land-cover classes. We illustrate the approach using two large-area, regional accuracy assessments from the National Land-Cover Data (NLCD), and describe how the a priori evaluation was used as a decision-making tool when implementing the NLCD design.

  13. Diagnostic accuracy of refractometry for assessing bovine colostrum quality: A systematic review and meta-analysis.

    PubMed

    Buczinski, S; Vandeweerd, J M

    2016-09-01

    Provision of good quality colostrum [i.e., immunoglobulin G (IgG) concentration ≥50 g/L] is the first step toward ensuring proper passive transfer of immunity for young calves. Precise quantification of colostrum IgG levels cannot be easily performed on the farm. Assessment of the refractive index on a Brix scale with a refractometer has been described as being highly correlated with IgG concentration in colostrum. The aim of this study was to perform a systematic review of the diagnostic accuracy of Brix refractometry for diagnosing good quality colostrum. From 101 references initially obtained, 11 were included in the systematic review and meta-analysis, representing 4,251 colostrum samples. The prevalence of good colostrum samples with IgG ≥50 g/L varied from 67.3 to 92.3% (median 77.9%). Specific estimates of accuracy [sensitivity (Se) and specificity (Sp)] were obtained for different reported cut-points using a hierarchical summary receiver operating characteristic curve model. For the cut-point of 22% (n=8 studies), Se=80.2% (95% CI: 71.1-87.0%) and Sp=82.6% (71.4-90.0%). Decreasing the cut-point to 18% increased Se [96.1% (91.8-98.2%)] and decreased Sp [54.5% (26.9-79.6%)]. Modeling the effect of these Brix accuracy estimates using a stochastic simulation and Bayes' theorem showed that a positive result at the 22% Brix cut-point can be used to diagnose good quality colostrum [posttest probability of good colostrum: 94.3% (90.7-96.9%)]. The posttest probability of good colostrum with a Brix value <18% was only 22.7% (12.3-39.2%). Based on this study, the 2 cut-points could be used alternatively to select good quality colostrum (samples with Brix ≥22%) or to discard poor quality colostrum (samples with Brix <18%). When sample results fall between these 2 values, colostrum supplementation should be considered. PMID:27423958
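The Bayes' theorem step can be reproduced with the point estimates reported above; note the paper itself used a stochastic simulation around these values, so this deterministic sketch only approximates the reported 94.3%:

```python
# Point-estimate sketch of the Bayes-theorem step: post-test probability
# of good colostrum (IgG >= 50 g/L) after a positive Brix test. Se, Sp
# and the pre-test probability are the point estimates from the abstract.

def posttest_positive(se, sp, pretest):
    """P(condition | positive test) via Bayes' theorem."""
    true_pos = se * pretest
    false_pos = (1 - sp) * (1 - pretest)
    return true_pos / (true_pos + false_pos)

p = posttest_positive(se=0.802, sp=0.826, pretest=0.779)
print(f"post-test probability = {p:.1%}")  # close to the reported 94.3%
```

The same function with Se=0.961 and Sp=0.545 for the 18% cut-point shows why a negative result there (low post-test probability) is the useful outcome.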

  14. Development of a Haptic Elbow Spasticity Simulator (HESS) for Improving Accuracy and Reliability of Clinical Assessment of Spasticity

    PubMed Central

    Park, Hyung-Soon; Kim, Jonghyun; Damiano, Diane L.

    2013-01-01

    This paper presents the framework for developing a robotic system to improve the accuracy and reliability of clinical assessment. Clinical assessment of spasticity tends to have poor reliability because of the nature of in-person assessment. To improve the accuracy and reliability of spasticity assessment, a haptic device named the HESS (Haptic Elbow Spasticity Simulator) has been designed and constructed to recreate the clinical “feel” of elbow spasticity based on quantitative measurements. A mathematical model representing the spastic elbow joint was proposed based on clinical assessment using the Modified Ashworth Scale (MAS) and quantitative data (position, velocity, and torque) collected on subjects with elbow spasticity. Four haptic models (HMs) were created to represent the haptic feel of MAS 1, 1+, 2, and 3. The four HMs were assessed by experienced clinicians: three performed both in-person and haptic assessments and had 100% agreement in MAS scores, and eight clinicians experienced with the MAS assessed the four HMs without receiving any training prior to the test. Inter-rater reliability among the eight clinicians showed substantial agreement (κ = 0.626). The eight clinicians also rated the level of realism (7.63 ± 0.92 out of 10) as compared to their experience with real patients. PMID:22562769

  15. Accuracy assessment of novel two-axes rotating and single-axis translating calibration equipment

    NASA Astrophysics Data System (ADS)

    Liu, Bo; Ye, Dong; Che, Rensheng

    2009-11-01

    A new method measures the 3D motion of a rocket nozzle with a motion tracking system based on passive optical markers. However, an important issue remains to be resolved: how to assess the accuracy of the rocket nozzle motion test. Therefore, calibration equipment was designed and manufactured to generate ground-truth nozzle model motion such as translation, angle, velocity, and angular velocity. It consists of a base, a lifting platform, a rotary table, and a rocket nozzle model with precise geometry. The nozzle model, fitted with the markers, is installed on the rotary table, which can translate or rotate at a known velocity. The overall accuracy of the rocket nozzle motion test is evaluated by comparing the truth values with the static and dynamic test data. This paper puts emphasis on the accuracy assessment of the novel two-axes rotating and single-axis translating calibration equipment. By substituting measured values of the error sources into the error model, the pointing error is found to be less than 0.005°, the rotation center position error reaches 0.08 mm, and the rate stability is better than 10⁻³. The calibration equipment's accuracy is much higher than that of the nozzle motion test system, so the former can be used to assess and calibrate the latter.

  16. Assessing map accuracy in a remotely sensed, ecoregion-scale cover map

    USGS Publications Warehouse

    Edwards, T.C.; Moisen, G.G.; Cutler, D.R.

    1998-01-01

    Landscape- and ecoregion-based conservation efforts increasingly use a spatial component to organize data for analysis and interpretation. A challenge particular to remotely sensed cover maps generated from these efforts is how best to assess the accuracy of the cover maps, especially when they can exceed thousands of square kilometers in size. Here we develop and describe a methodological approach for assessing the accuracy of large-area cover maps, using as a test case the 21.9 million ha cover map developed for the Utah Gap Analysis. As part of our design process, we first reviewed the effect of intracluster correlation and a simple cost function on the relative efficiency of cluster sample designs compared to simple random designs. Our design ultimately combined clustered and subsampled field data stratified by ecological modeling unit and accessibility (hereafter a mixed design). We next outline estimation formulas for simple map accuracy measures under our mixed design and report results for eight major cover types and the three ecoregions mapped as part of the Utah Gap Analysis. Overall accuracy of the map was 83.2% (SE = 1.4). Within ecoregions, accuracy ranged from 78.9% to 85.0%. Accuracy by cover type varied, ranging from a low of 50.4% for barren to a high of 90.6% for man-modified. In addition, we examined gains in efficiency of our mixed design compared with a simple random sample approach. In regard to precision, our mixed design was more precise than a simple random design, given fixed sample costs. We close with a discussion of the logistical constraints facing attempts to assess the accuracy of large-area, remotely sensed cover maps.

  17. Preliminary melter performance assessment report

    SciTech Connect

    Elliott, M.L.; Eyler, L.L.; Mahoney, L.A.; Cooper, M.F.; Whitney, L.D.; Shafer, P.J.

    1994-08-01

    The Melter Performance Assessment activity, a component of the Pacific Northwest Laboratory's (PNL) Vitrification Technology Development (PVTD) effort, was designed to determine the impact of noble metals on the operational life of the reference Hanford Waste Vitrification Plant (HWVP) melter. The melter performance assessment consisted of several activities, including a literature review of all work done with noble metals in glass, gradient furnace testing to study the behavior of noble metals during the melting process, research-scale and engineering-scale melter testing to evaluate effects of noble metals on melter operation, and computer modeling that used the experimental data to predict effects of noble metals on the full-scale melter. Feed used in these tests simulated neutralized current acid waste (NCAW) feed. This report summarizes the results of the melter performance assessment and predicts the lifetime of the HWVP melter. It should be noted that this work was conducted before the recent Tri-Party Agreement changes, so the reference melter referred to here is the Defense Waste Processing Facility (DWPF) melter design.

  18. Assessing the Accuracy of MODIS-NDVI Derived Land-Cover Across the Great Lakes Basin

    EPA Science Inventory

    This research describes the accuracy assessment process for a land-cover dataset developed for the Great Lakes Basin (GLB). This land-cover dataset was developed from the 2007 MODIS Normalized Difference Vegetation Index (NDVI) 16-day composite (MOD13Q) 250 m time-series data. Tr...

  19. A PIXEL COMPOSITION-BASED REFERENCE DATA SET FOR THEMATIC ACCURACY ASSESSMENT

    EPA Science Inventory

    Developing reference data sets for accuracy assessment of land-cover classifications derived from coarse spatial resolution sensors such as MODIS can be difficult due to the large resolution differences between the image data and available reference data sources. Ideally, the spa...

  20. The Word Writing CAFE: Assessing Student Writing for Complexity, Accuracy, and Fluency

    ERIC Educational Resources Information Center

    Leal, Dorothy J.

    2005-01-01

    The Word Writing CAFE is a new assessment tool designed for teachers to evaluate objectively students' word-writing ability for fluency, accuracy, and complexity. It is designed to be given to the whole class at one time. This article describes the development of the CAFE and provides directions for administering and scoring it. The author also…

  1. Accuracy of Revised and Traditional Parallel Analyses for Assessing Dimensionality with Binary Data

    ERIC Educational Resources Information Center

    Green, Samuel B.; Redell, Nickalus; Thompson, Marilyn S.; Levy, Roy

    2016-01-01

    Parallel analysis (PA) is a useful empirical tool for assessing the number of factors in exploratory factor analysis. On conceptual and empirical grounds, we argue for a revision to PA that makes it more consistent with hypothesis testing. Using Monte Carlo methods, we evaluated the relative accuracy of the revised PA (R-PA) and traditional PA…

  2. Gender Differences in Structured Risk Assessment: Comparing the Accuracy of Five Instruments

    ERIC Educational Resources Information Center

    Coid, Jeremy; Yang, Min; Ullrich, Simone; Zhang, Tianqiang; Sizmur, Steve; Roberts, Colin; Farrington, David P.; Rogers, Robert D.

    2009-01-01

    Structured risk assessment should guide clinical risk management, but it is uncertain which instrument has the highest predictive accuracy among men and women. In the present study, the authors compared the Psychopathy Checklist-Revised (PCL-R; R. D. Hare, 1991, 2003); the Historical, Clinical, Risk Management-20 (HCR-20; C. D. Webster, K. S.…

  3. In the Right Ballpark? Assessing the Accuracy of Net Price Calculators

    ERIC Educational Resources Information Center

    Anthony, Aaron M.; Page, Lindsay C.; Seldin, Abigail

    2016-01-01

    Large differences often exist between a college's sticker price and net price after accounting for financial aid. Net price calculators (NPCs) were designed to help students more accurately estimate their actual costs to attend a given college. This study assesses the accuracy of information provided by net price calculators. Specifically, we…

  4. Modifications to the accuracy assessment analysis routine MLTCRP to produce an output file

    NASA Technical Reports Server (NTRS)

    Carnes, J. G.

    1978-01-01

    Modifications are described that were made to the analysis program MLTCRP in the accuracy assessment software system to produce a disk output file. The output files produced by this modified program are used to aggregate data for regions greater than a single segment.

  5. Assessing the Accuracy of Classwide Direct Observation Methods: Two Analyses Using Simulated and Naturalistic Data

    ERIC Educational Resources Information Center

    Dart, Evan H.; Radley, Keith C.; Briesch, Amy M.; Furlow, Christopher M.; Cavell, Hannah J.; Briesch, Amy M.

    2016-01-01

    Two studies investigated the accuracy of eight different interval-based group observation methods that are commonly used to assess the effects of classwide interventions. In Study 1, a Microsoft Visual Basic program was created to simulate a large set of observational data. Binary data were randomly generated at the student level to represent…

  6. 12 CFR 620.3 - Accuracy of reports and assessment of internal control over financial reporting.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... CREDIT SYSTEM DISCLOSURE TO SHAREHOLDERS General § 620.3 Accuracy of reports and assessment of internal... shall make any disclosure to shareholders or the general public concerning any matter required to be... person shall make such additional or corrective disclosure as is necessary to provide shareholders...

  7. Performance Assessment Institute-NV

    SciTech Connect

    Lombardo, Joesph

    2012-12-31

    The National Supercomputing Center for Energy and the Environment intends to purchase a multi-purpose computer cluster in support of the Performance Assessment Institute (PA Institute). The PA Institute will serve as a research consortium located in Las Vegas, Nevada, with a membership that includes national laboratories, universities, industry partners, and domestic and international governments. This center will provide a one-of-a-kind centralized facility for the accumulation of information for use by institutions of higher learning, the U.S. Government, regulatory agencies, and approved users. This initiative will enhance and extend High Performance Computing (HPC) resources in Nevada to support critical national and international needs in "scientific confirmation". The PA Institute will be promoted as the leading Modeling, Learning and Research Center worldwide. The program proposes to utilize the existing supercomputing capabilities and alliances of the University of Nevada, Las Vegas as a base, and to extend these resources and capabilities through a collaborative relationship with its membership. The PA Institute will provide an academic setting for interactive sharing, learning, mentoring, and monitoring of multi-disciplinary performance assessment and performance confirmation information. The role of the PA Institute is to facilitate research, knowledge-increase, and knowledge-sharing among users.

  8. Parallel Reaction Monitoring: A Targeted Experiment Performed Using High Resolution and High Mass Accuracy Mass Spectrometry

    PubMed Central

    Rauniyar, Navin

    2015-01-01

    The parallel reaction monitoring (PRM) assay has emerged as an alternative method of targeted quantification. The PRM assay is performed in a high resolution and high mass accuracy mode on a mass spectrometer. This review presents the features that make PRM a highly specific and selective method for targeted quantification using quadrupole-Orbitrap hybrid instruments. In addition, this review discusses the label-based and label-free methods of quantification that can be performed with the targeted approach. PMID:26633379

  9. Accuracy Assessment and Correction of Vaisala RS92 Radiosonde Water Vapor Measurements

    NASA Technical Reports Server (NTRS)

    Whiteman, David N.; Miloshevich, Larry M.; Vomel, Holger; Leblanc, Thierry

    2008-01-01

    Relative humidity (RH) measurements from Vaisala RS92 radiosondes are widely used in both research and operational applications, although the measurement accuracy is not well characterized as a function of its known dependences on height, RH, and time of day (or solar altitude angle). This study characterizes RS92 mean bias error as a function of its dependences by comparing simultaneous measurements from RS92 radiosondes and from three reference instruments of known accuracy. The cryogenic frostpoint hygrometer (CFH) gives the RS92 accuracy above the 700 mb level; the ARM microwave radiometer gives the RS92 accuracy in the lower troposphere; and the ARM SurTHref system gives the RS92 accuracy at the surface using 6 RH probes with NIST-traceable calibrations. These RS92 assessments are combined using the principle of Consensus Referencing to yield a detailed estimate of RS92 accuracy from the surface to the lowermost stratosphere. An empirical bias correction is derived to remove the mean bias error, yielding corrected RS92 measurements whose mean accuracy is estimated to be +/-3% of the measured RH value for nighttime soundings and +/-4% for daytime soundings, plus an RH offset uncertainty of +/-0.5%RH that is significant for dry conditions. The accuracy of individual RS92 soundings is further characterized by the 1-sigma "production variability," estimated to be +/-1.5% of the measured RH value. The daytime bias correction should not be applied to cloudy daytime soundings, because clouds affect the solar radiation error in a complicated and uncharacterized way.

  10. Self-Confidence and Performance Goal Orientation Interactively Predict Performance in a Reasoning Test with Accuracy Feedback

    ERIC Educational Resources Information Center

    Beckmann, Nadin; Beckmann, Jens F.; Elliott, Julian G.

    2009-01-01

    This study takes an individual differences' perspective on performance feedback effects in psychometric testing. A total of 105 students in a mainstream secondary school in North East England undertook a cognitive ability test on two occasions. In one condition, students received item-specific accuracy feedback while in the other (standard…

  11. Accuracy assessment of airborne photogrammetrically derived high-resolution digital elevation models in a high mountain environment

    NASA Astrophysics Data System (ADS)

    Müller, Johann; Gärtner-Roer, Isabelle; Thee, Patrick; Ginzler, Christian

    2014-12-01

    High-resolution digital elevation models (DEMs) generated by airborne remote sensing are frequently used to analyze landform structures (monotemporal) and geomorphological processes (multitemporal) in remote areas or areas of extreme terrain. In order to assess and quantify such structures and processes it is necessary to know the absolute accuracy of the available DEMs. This study assesses the absolute vertical accuracy of DEMs generated by the High Resolution Stereo Camera-Airborne (HRSC-A), the Leica Airborne Digital Sensors 40/80 (ADS40 and ADS80), and the analogue camera system RC30. The study area is located in the Turtmann valley, Valais, Switzerland, a glacially and periglacially formed hanging valley stretching from 2400 m to 3300 m a.s.l. The photogrammetrically derived DEMs are evaluated against geodetic field measurements and an airborne laser scan (ALS). Traditional and robust global and local accuracy measures are used to describe the vertical quality of the DEMs, which show a non-Gaussian distribution of errors. The results show that all four sensor systems produce DEMs with similar accuracy despite their different setups and generations. The ADS40 and ADS80 (both with a ground sampling distance of 0.50 m) generate the most accurate DEMs in complex high mountain areas, with an RMSE of 0.8 m and an NMAD of 0.6 m. They also show the highest accuracy relative to flying height (0.14‰). The pushbroom scanning system HRSC-A produces an RMSE of 1.03 m and an NMAD of 0.83 m (0.21‰ of the flying height and 10 times the ground sampling distance). The analogue camera system RC30 produces DEMs with a vertical accuracy of 1.30 m RMSE and 0.83 m NMAD (0.17‰ of the flying height and two times the ground sampling distance). It is also shown that the performance of the DEMs strongly depends on the inclination of the terrain. The RMSE of areas up to an inclination <40° is better than 1 m. In more inclined areas the error and outlier occurrence
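The two vertical-accuracy measures used above, RMSE and the robust NMAD (1.4826 times the median absolute deviation, comparable to a standard deviation for normally distributed errors), can be sketched as follows; the check-point residuals are invented, not the study's data:

```python
# Sketch of the RMSE and NMAD vertical-accuracy measures computed from
# DEM-minus-reference elevation residuals (hypothetical values in meters).
import statistics

def rmse(errors):
    return (sum(e * e for e in errors) / len(errors)) ** 0.5

def nmad(errors):
    # 1.4826 * median absolute deviation from the median; robust to outliers.
    med = statistics.median(errors)
    return 1.4826 * statistics.median([abs(e - med) for e in errors])

dz = [0.3, -0.5, 0.8, -0.2, 1.9, -0.4, 0.1]  # DEM minus reference (m)
print(f"RMSE = {rmse(dz):.2f} m, NMAD = {nmad(dz):.2f} m")
```

Because the single 1.9 m outlier inflates the squared-error sum but barely moves the median, NMAD comes out lower than RMSE, which is why both are reported for non-Gaussian error distributions.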

  12. Salt site performance assessment activities

    SciTech Connect

    Kircher, J.F.; Gupta, S.K.

    1983-01-01

    During this year, the tools (codes) for performance assessments of potential salt sites were tentatively selected and documented; the emphasis has shifted from code development to applications. During this period prior to detailed characterization of a salt site, the focus is on bounding calculations and sensitivity and uncertainty analyses with the data available. The development and application of improved methods for sensitivity and uncertainty analysis is a focus of the coming year's activities and the subject of a following paper in these proceedings. Although the assessments to date are preliminary and based on admittedly scant data, the results indicate that suitable salt sites can be identified and repository subsystems designed which will meet the established criteria for protecting the health and safety of the public. 36 references, 5 figures, 2 tables.

  13. Assessment of the Accuracy of the Bethe-Salpeter (BSE/GW) Oscillator Strengths.

    PubMed

    Jacquemin, Denis; Duchemin, Ivan; Blondel, Aymeric; Blase, Xavier

    2016-08-01

    Aiming to assess the accuracy of the oscillator strengths determined at the BSE/GW level, we performed benchmark calculations using three complementary sets of molecules. In the first, we considered ∼80 states in Thiel's set of compounds and compared the BSE/GW oscillator strengths to recently determined ADC(3/2) and CC3 reference values. The second set includes the oscillator strengths of the low-lying states of 80 medium to large dyes for which we have determined CC2/aug-cc-pVTZ values. The third set contains 30 anthraquinones for which experimental oscillator strengths are available. We find that BSE/GW accurately reproduces the trends for all series with excellent correlation coefficients to the benchmark data and generally very small errors. Indeed, for Thiel's sets, the BSE/GW values are more accurate (using CC3 references) than both CC2 and ADC(3/2) values on both absolute and relative scales. For all three sets, BSE/GW errors also tend to be nicely spread with almost equal numbers of positive and negative deviations as compared to reference values. PMID:27403612

  15. Increasing accuracy in the assessment of motion sickness: A construct methodology

    NASA Technical Reports Server (NTRS)

    Stout, Cynthia S.; Cowings, Patricia S.

    1993-01-01

    The purpose is to introduce a new methodology that should improve the accuracy of the assessment of motion sickness. This construct methodology utilizes both subjective reports of motion sickness and objective measures of physiological correlates to assess motion sickness. Current techniques and methods used in the framework of a construct methodology are inadequate. Current assessment techniques for diagnosing motion sickness and space motion sickness are reviewed, and attention is called to the problems with the current methods. Further, principles of psychophysiology that, when applied, may resolve some of these problems are described in detail.

  16. Radiative accuracy assessment of CrIS upper level channels using COSMIC RO data

    NASA Astrophysics Data System (ADS)

    Qi, C.; Weng, F.; Han, Y.; Lin, L.; Chen, Y.; Wang, L.

    2012-12-01

    The Cross-track Infrared Sounder (CrIS) onboard the Suomi National Polar-orbiting Partnership (NPP) satellite is designed to provide high vertical resolution information on the atmosphere's three-dimensional structure of temperature and water vapor. Much work has been done to verify the observation accuracy of CrIS since its launch on Oct. 28, 2011, such as SNO cross-comparison with other hyperspectral infrared instruments and forward simulation comparison using a radiative transfer model based on numerical prediction background profiles. The radio occultation (RO) technique provides profiles of the Earth's ionosphere and neutral atmosphere with high accuracy, high vertical resolution and global coverage, and has the advantages of all-weather capability, low expense and long-term stability. CrIS radiative calibration accuracy was assessed by comparing observations with line-by-line simulations based on COSMIC RO data. The main processing steps are: (a) downloading COSMIC RO data and collocating them with CrIS measurements through a weighting-function (wf) peak-altitude-dependent collocation method; (b) line-by-line simulation of high-spectral-resolution radiances using the collocated COSMIC RO profiles; (c) generation of CrIS channel radiances by the FFT transform method; and (d) bias analysis. This absolute calibration accuracy assessment indicates a bias error of around 0.3 K in CrIS measurements.
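
Step (a), collocating RO profiles with sounder footprints, is at heart a search within space and time windows. A simplified sketch with fixed thresholds (the study's actual criterion depends on the weighting-function peak altitude; the 50 km / 3 h values below are illustrative assumptions, not figures from the abstract):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def collocate(sounder_obs, ro_profiles, max_km=50.0, max_hours=3.0):
    """Pair sounder observations (t_sec, lat, lon) with RO profiles
    falling within the given space/time windows."""
    pairs = []
    for i, (t1, lat1, lon1) in enumerate(sounder_obs):
        for j, (t2, lat2, lon2) in enumerate(ro_profiles):
            if abs(t1 - t2) <= max_hours * 3600 and \
               haversine_km(lat1, lon1, lat2, lon2) <= max_km:
                pairs.append((i, j))
    return pairs
```

Tighter windows reduce collocation mismatch error at the cost of sample size, which is the usual trade-off in such validation work.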

  17. Assessing the impact of measurement frequency on accuracy and uncertainty of water quality data

    NASA Astrophysics Data System (ADS)

    Helm, Björn; Schiffner, Stefanie; Krebs, Peter

    2014-05-01

    Physico-chemical water quality is a major objective for the evaluation of the ecological state of a river water body. Physical and chemical water properties are measured to assess the river state, identify prevalent pressures and develop mitigating measures. Regularly, water quality is assessed based on weekly to quarterly grab samples. The increasing availability of online-sensor data measured at high frequency allows for an enhanced understanding of emission and transport dynamics, as well as the identification of typical and critical states. In this study we present a systematic approach to assess the impact of measurement frequency on the accuracy and uncertainty of derived aggregate indicators of environmental quality. High-frequency measurements (10 and 15 min intervals) of water temperature, pH, turbidity, electric conductivity and concentrations of dissolved oxygen, nitrate, ammonia and phosphate are assessed in resampling experiments. The data are collected at 14 sites in eastern and northern Germany, representing catchments between 40 km2 and 140,000 km2 of varying properties. Resampling is performed to create series of hourly to quarterly frequency, including special restrictions such as sampling at working hours or discharge compensation. Statistical properties and their confidence intervals are determined in a bootstrapping procedure and evaluated along a gradient of sampling frequency. For all variables, the range of the aggregate indicators in the bootstrapping realizations increases strongly with decreasing sampling frequency. Mean values of electric conductivity, pH and water temperature obtained at monthly frequency differ on average by less than five percent from the original data. Mean dissolved oxygen, nitrate and phosphate showed less than 15% bias at most stations. Ammonia and turbidity are the most sensitive to reduced sampling frequency, with up to 30% average and 250% maximum bias at monthly sampling. A systematic bias is recognized
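
The resampling-plus-bootstrap procedure described above can be sketched as follows: subsample the high-frequency series at a fixed interval, then bootstrap an aggregate statistic to obtain its confidence interval. This is a simplified version under assumed parameters; the study additionally applies restrictions such as working-hour-only sampling:

```python
import random
import statistics

def subsample_mean_ci(series, step, n_boot=1000, seed=1):
    """Bootstrap a 95% confidence interval for the mean of a regularly
    subsampled series. `step` is the subsampling interval in original
    time steps (e.g. step=144 turns 10-min data into daily samples)."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_boot):
        offset = rng.randrange(step)          # random phase of the sampling grid
        sample = series[offset::step]         # the low-frequency series
        resample = [rng.choice(sample) for _ in sample]  # bootstrap draw
        means.append(statistics.fmean(resample))
    means.sort()
    return means[int(0.025 * n_boot)], means[int(0.975 * n_boot)]
```

Running this for a range of `step` values traces out how the confidence interval widens as sampling frequency decreases, which is exactly the gradient evaluated in the study.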

  18. Calibration of ground-based microwave radiometers - Accuracy assessment and recommendations for network users

    NASA Astrophysics Data System (ADS)

    Pospichal, Bernhard; Küchler, Nils; Löhnert, Ulrich; Crewell, Susanne; Czekala, Harald; Güldner, Jürgen

    2016-04-01

    Ground-based microwave radiometers (MWR) are becoming widely used in atmospheric remote sensing and are starting to be routinely operated by national weather services and other institutions. However, common standards for the calibration of these radiometers and detailed knowledge of their error characteristics are needed in order to assimilate the data into models. Intercomparisons of calibrations by different MWRs have rarely been done. Therefore, two calibration experiments, in Lindenberg (2014) and Meckenheim (2015), were performed in the frame of TOPROF (COST Action ES1303) in order to assess uncertainties and differences between various instruments. In addition, a series of experiments was conducted in Oklahoma in autumn 2014. The focus lay on the performance of the two main instrument types currently used operationally: the MP-Profiler series by Radiometrics Corporation and the HATPRO series by Radiometer Physics GmbH (RPG). Both instrument types operate in two frequency bands, one along the 22 GHz water vapour line, the other at the lower wing of the 60 GHz oxygen absorption complex. The goal was to establish protocols for providing quality-controlled (QC) MWR data and their uncertainties. To this end, standardized calibration procedures for MWR were developed and recommendations for radiometer users were compiled. We focus here mainly on data types, integration times and optimal settings for calibration intervals, both for absolute (liquid nitrogen, tipping curve) as well as relative (hot load, noise diode) calibrations. Besides the recommendations for ground-based MWR operators, we will present methods to determine the accuracy of the calibration as well as means for automatic data quality control. In addition, some results from the intercomparison of different radiometers will be discussed.
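
The tipping-curve calibration mentioned above exploits the fact that, for a well-calibrated radiometer, zenith-normalized sky opacity is linear in airmass. A simplified sketch, assuming a fixed mean radiating temperature `t_mr` (in practice estimated from ancillary data) and a plane-parallel atmosphere:

```python
import math

def zenith_opacity_fit(elevations_deg, tb_sky, t_mr=275.0, t_bg=2.73):
    """Fit opacity versus airmass from a tipping scan.
    tau(el) = ln((t_mr - t_bg) / (t_mr - tb_sky)), airmass = 1/sin(el).
    A well-calibrated radiometer yields a straight line through the
    origin; a non-zero intercept hints at a calibration offset."""
    xs = [1.0 / math.sin(math.radians(el)) for el in elevations_deg]
    ys = [math.log((t_mr - t_bg) / (t_mr - tb)) for tb in tb_sky]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept  # slope = zenith opacity (nepers)
```

In an operational procedure the receiver gain is adjusted until the fitted intercept vanishes; the residual intercept is thus one simple measure of calibration accuracy.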

  19. Analytic Guided-Search Model of Human Performance Accuracy in Target- Localization Search Tasks

    NASA Technical Reports Server (NTRS)

    Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.

    2000-01-01

    Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.

  20. Communicating Performance Assessments Results - 13609

    SciTech Connect

    Layton, Mark

    2013-07-01

    The F-Area Tank Farms (FTF) and H-Area Tank Farm (HTF) are owned by the U.S. Department of Energy (DOE) and operated by Savannah River Remediation LLC (SRR), Liquid Waste Operations contractor at DOE's Savannah River Site (SRS). The FTF and HTF are active radioactive waste storage and treatment facilities consisting of 51 carbon steel waste tanks and ancillary equipment such as transfer lines, evaporators and pump tanks. Performance Assessments (PAs) for each Tank Farm have been prepared to support the eventual closure of the underground radioactive waste tanks and ancillary equipment. PAs provide the technical bases and results to be used in subsequent documents to demonstrate compliance with the pertinent requirements for final closure of the Tank Farms. The Tank Farms are subject to a number of regulatory requirements. The State regulates Tank Farm operations through an industrial waste water permit and through a Federal Facility Agreement approved by the State, DOE and the Environmental Protection Agency (EPA). Closure documentation will include State-approved Tank Farm Closure Plans and tank-specific closure modules utilizing information from the PAs. For this reason, the State of South Carolina and the EPA must be involved in the performance assessment review process. The residual material remaining after tank cleaning is also subject to reclassification prior to closure via a waste determination pursuant to Section 3116 of the Ronald W. Reagan National Defense Authorization Act of Fiscal Year 2005. PAs are performance-based, risk-informed analyses of the fate and transport of FTF and HTF residual wastes following final closure of the Tank Farms. Since the PAs serve as the primary risk assessment tools in evaluating readiness for closure, it is vital that PA conclusions be communicated effectively. In the course of developing the FTF and HTF PAs, several lessons learned have emerged regarding communicating PA results. When communicating PA results it is

  1. Standardizing the Protocol for Hemispherical Photographs: Accuracy Assessment of Binarization Algorithms

    PubMed Central

    Glatthorn, Jonas; Beckschäfer, Philip

    2014-01-01

    Hemispherical photography is a well-established method to optically assess ecological parameters related to plant canopies; e.g. ground-level light regimes and the distribution of foliage within the crown space. Interpreting hemispherical photographs involves classifying pixels as either sky or vegetation. A wide range of automatic thresholding or binarization algorithms exists to classify the photographs. The variety in methodology hampers the ability to compare results across studies. To identify an optimal threshold selection method, this study assessed the accuracy of seven binarization methods implemented in software currently available for the processing of hemispherical photographs. Therefore, binarizations obtained by the algorithms were compared to reference data generated through a manual binarization of a stratified random selection of pixels. This approach was adopted from the accuracy assessment of map classifications known from remote sensing studies. Percentage correct (Pc) and kappa-statistics (K) were calculated. The accuracy of the algorithms was assessed for photographs taken with automatic exposure settings (auto-exposure) and photographs taken with settings which avoid overexposure (histogram-exposure). In addition, gap fraction values derived from hemispherical photographs were compared with estimates derived from the manually classified reference pixels. All tested algorithms were shown to be sensitive to overexposure. Three of the algorithms showed an accuracy which was high enough to be recommended for the processing of histogram-exposed hemispherical photographs: “Minimum” (Pc 98.8%; K 0.952), “Edge Detection” (Pc 98.1%; K 0.950), and “Minimum Histogram” (Pc 98.1%; K 0.947). The Minimum algorithm overestimated gap fraction least of all (11%). The overestimation by the algorithms Edge Detection (63%) and Minimum Histogram (67%) were considerably larger. For the remaining four evaluated algorithms (IsoData, Maximum Entropy, MinError, and Otsu) an
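
The two map-accuracy statistics used here, percentage correct and Cohen's kappa, follow directly from the confusion matrix of classified versus reference pixels. A minimal sketch:

```python
def accuracy_and_kappa(confusion):
    """Percentage correct (Pc) and Cohen's kappa (K) from a square
    confusion matrix (rows: reference class, columns: classified)."""
    n = sum(sum(row) for row in confusion)
    diag = sum(confusion[i][i] for i in range(len(confusion)))
    po = diag / n                      # observed agreement
    pe = sum(                          # chance agreement from marginals
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / n ** 2
    kappa = (po - pe) / (1 - pe)
    return 100.0 * po, kappa

# Hypothetical sky/vegetation confusion matrix of reference pixels.
pc, k = accuracy_and_kappa([[45, 5], [5, 45]])
```

Kappa discounts agreement expected by chance, which is why it is reported alongside the raw percentage in this study and in remote sensing accuracy assessments generally.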

  3. Performance Assessment and Geometric Calibration of RESOURCESAT-2

    NASA Astrophysics Data System (ADS)

    Radhadevi, P. V.; Solanki, S. S.; Akilan, A.; Jyothi, M. V.; Nagasubramanian, V.

    2016-06-01

    Resourcesat-2 (RS-2) has successfully completed five years of operations in its orbit. This satellite has multi-resolution and multi-spectral capabilities in a single platform. A continuous and autonomous co-registration, geo-location and radiometric calibration of image data from different sensors with widely varying view angles and resolution was one of the challenges of RS-2 data processing. On-orbit geometric performance of RS-2 sensors has been widely assessed and calibrated during the initial phase operations. Since then, as an ongoing activity, various geometric performance data are being generated periodically. This is performed with sites of dense ground control points (GCPs). These parameters are correlated to the direct geo-location accuracy of the RS-2 sensors and are monitored and validated to maintain the performance. This paper brings out the geometric accuracy assessment, calibration and validation done for about 500 datasets of RS-2. The objectives of this study are to ensure the best absolute and relative location accuracy of different cameras, location performance with payload steering and co-registration of multiple bands. This is done using a viewing geometry model, given ephemeris and attitude data, precise camera geometry and datum transformation. In the model, the forward and reverse transformations between the coordinate systems associated with the focal plane, payload, body, orbit and ground are rigorously and explicitly defined. System level tests using comparisons to ground check points have validated the operational geo-location accuracy performance and the stability of the calibration parameters.

  4. Accuracy assessment of topographic mapping using UAV image integrated with satellite images

    NASA Astrophysics Data System (ADS)

    Azmi, S. M.; Ahmad, Baharin; Ahmad, Anuar

    2014-02-01

    Unmanned Aerial Vehicles (UAVs) are extensively applied in various fields such as military applications, archaeology, agriculture and scientific research. This study focuses on topographic mapping and map updating. The UAV is an alternative way to ease the process of acquiring data, with low manufacturing and operational costs, and it is easy to operate. Furthermore, UAV images are integrated with QuickBird images that are used as base maps. The objective of this study is to make an accuracy assessment and comparison between topographic mapping using UAV images integrated with aerial photographs and satellite images. The main purpose of using UAV images is as a replacement for the cloud-covered areas that normally exist in aerial photographs and satellite images, and for updating topographic maps. Meanwhile, spatial resolution, pixel size, scale, geometric accuracy and correction, image quality and information content are important requirements for the generation of topographic maps using these kinds of data. In this study, ground control points (GCPs) and check points (CPs) were established using the real-time kinematic Global Positioning System (RTK-GPS) technique. Two types of analysis are carried out in this study: quantitative and qualitative assessment. Quantitative assessment is carried out by calculating the root mean square error (RMSE). The outputs of this study include a topographic map and an orthophoto. From this study, the accuracy of the UAV image is ±0.460 m. In conclusion, UAV images have the potential to be used for updating topographic maps.

  5. Initial Performance Assessment of CALIOP

    NASA Technical Reports Server (NTRS)

    Winker, David; Hunt, Bill; McGill, Matthew

    2007-01-01

    The Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP, pronounced the same as "calliope") is a spaceborne two-wavelength polarization lidar that has been acquiring global data since June 2006. CALIOP provides high resolution vertical profiles of clouds and aerosols, and has been designed with a very large linear dynamic range to encompass the full range of signal returns from aerosols and clouds. CALIOP is the primary instrument carried by the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) satellite, which was launched on April 28, 2006. CALIPSO was developed within the framework of a collaboration between NASA and the French space agency, CNES. Initial data analysis and validation intercomparisons indicate the quality of data from CALIOP meets or exceeds expectations. This paper presents a description of the CALIPSO mission, the CALIOP instrument, and an initial assessment of on-orbit measurement performance.

  6. Accuracy of pattern detection methods in the performance of golf putting.

    PubMed

    Couceiro, Micael S; Dias, Gonçalo; Mendes, Rui; Araújo, Duarte

    2013-01-01

    The authors present a comparison of the classification accuracy of five pattern detection methods in the performance of golf putting. The detection of the position of the golf club was performed using a computer vision technique followed by the estimation algorithm Darwinian particle swarm optimization to obtain a kinematical model of each trial. The estimated parameters of the models were subsequently used as samples for five classification algorithms: (a) linear discriminant analysis, (b) quadratic discriminant analysis, (c) naive Bayes with normal distribution, (d) naive Bayes with kernel smoothing density estimate, and (e) least squares support vector machines. Beyond testing the performance of each classification method, it was also possible to identify a putting signature that characterized each golf player. It may be concluded that these methods can be applied to the study of coordination and motor control in putting performance, allowing for the analysis of the intra- and interpersonal variability of motor behavior in performance contexts.

  7. Procedural Documentation and Accuracy Assessment of Bathymetric Maps and Area/Capacity Tables for Small Reservoirs

    USGS Publications Warehouse

    Wilson, Gary L.; Richards, Joseph M.

    2006-01-01

    Because of the increasing use and importance of lakes for water supply to communities, a repeatable and reliable procedure to determine lake bathymetry and capacity is needed. A method to determine the accuracy of the procedure will help ensure proper collection and use of the data and resulting products. It is important to clearly define the intended products and desired accuracy before conducting the bathymetric survey to ensure proper data collection. A survey-grade echo sounder and differential global positioning system receivers were used to collect water-depth and position data in December 2003 at Sugar Creek Lake near Moberly, Missouri. Data were collected along planned transects, with an additional set of quality-assurance data collected for use in accuracy computations. All collected data were imported into a geographic information system database. A bathymetric surface model, contour map, and area/capacity tables were created from the geographic information system database. An accuracy assessment was completed on the collected data, bathymetric surface model, area/capacity table, and contour map products. Using established vertical accuracy standards, the accuracy of the collected data, bathymetric surface model, and contour map product was 0.67 foot, 0.91 foot, and 1.51 feet at the 95 percent confidence level. By comparing results from different transect intervals with the quality-assurance transect data, it was determined that a transect interval of 1 percent of the longitudinal length of Sugar Creek Lake produced nearly as good results as 0.5 percent transect interval for the bathymetric surface model, area/capacity table, and contour map products.
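
The 95-percent-confidence figures quoted here follow the usual NSSDA convention of scaling the checkpoint RMSE by 1.96 (valid when errors are approximately normally distributed). A minimal sketch with hypothetical residuals:

```python
import math

def nssda_vertical_accuracy(dz):
    """Vertical accuracy at the 95% confidence level per the NSSDA
    convention: 1.96 * RMSE of elevation residuals (e.g. echo-sounder
    depth minus quality-assurance checkpoint depth)."""
    rmse = math.sqrt(sum(d * d for d in dz) / len(dz))
    return 1.96 * rmse

# Hypothetical residuals, in feet.
acc95 = nssda_vertical_accuracy([0.3, -0.4, 0.2, -0.1, 0.5])
```

Comparing this statistic across products (raw soundings, interpolated surface model, contour map) shows how each processing step adds error, as the study's 0.67 / 0.91 / 1.51 ft progression illustrates.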

  8. Muscle activity and accuracy of performance of the smash stroke in badminton with reference to skill and practice.

    PubMed

    Sakurai, S; Ohtsuki, T

    2000-11-01

    The aims of this study were to establish the temporal-spatial relationship between muscle activity and the smash stroke of skilled badminton players and to assess performance accuracy using the ellipse of constant distance. We recorded the surface electromyographic (EMG) activity of selected superficial muscles of the stroking arm and shoulder--flexor carpi ulnaris, extensor carpi radialis, triceps brachii (lateral head), biceps brachii and trapezius (upper)--during the badminton smash. In the first part of the study, we examined the characteristics of muscle function and performance accuracy of skilled and unskilled individuals during the badminton smash. Five well-trained badminton players and five students with no experience of badminton were asked to smash a shuttle as hard as they could towards a vertical square target 4 m away, repeating the stroke 30 times. In general, the skilled players showed a more constant time from peak electromyographic amplitude to impact. Immediately after impact, the electromyographic activity of the triceps brachii and flexor carpi radialis of the skilled players decreased; in the unskilled participants, however, it continued until well after impact. The area of the ellipse of constant distance and the off-target distance, which were used as indices of performance accuracy, were smaller for the skilled than for the unskilled participants. In the second part of the study, one skilled and one unskilled participant performed 100 trials a day for 6 days. The time from peak electromyographic amplitude to impact in the extensor carpi radialis and flexor carpi ulnaris was more variable in the unskilled than in the skilled participant even after 6 days of practice, but the proximal muscles of the unskilled participant had a similar pattern of activity to that of the skilled player. Thus, controlling the distal muscles appears to be important for achieving accurate performance of the smash in badminton.

  9. Physician performance assessment: prevention of cardiovascular disease.

    PubMed

    Lipner, Rebecca S; Weng, Weifeng; Caverzagie, Kelly J; Hess, Brian J

    2013-12-01

    Given the rising burden of healthcare costs, both patients and healthcare purchasers are interested in discerning which physicians deliver quality care. We proposed a methodology to assess physician clinical performance in preventive cardiology care, and determined a benchmark for minimally acceptable performance. We used data on eight evidence-based clinical measures from 811 physicians that completed the American Board of Internal Medicine's Preventive Cardiology Practice Improvement Module(SM) to form an overall composite score for preventive cardiology care. An expert panel of nine internists/cardiologists skilled in preventive care for cardiovascular disease used an adaptation of the Angoff standard-setting method and the Dunn-Rankin method to create the composite and establish a standard. Physician characteristics were used to examine the validity of the inferences made from the composite scores. The mean composite score was 73.88 % (SD = 11.88 %). Reliability of the composite was high at 0.87. Specialized cardiologists had significantly lower composite scores (P = 0.04), while physicians who reported spending more time in primary, longitudinal, and preventive consultative care had significantly higher scores (P = 0.01), providing some evidence of score validity. The panel established a standard of 47.38 % on the composite measure with high classification accuracy (0.98). Only 2.7 % of the physicians performed below the standard for minimally acceptable preventive cardiovascular disease care. Of those, 64 % (N = 14) were not general cardiologists. Our study presents a psychometrically defensible methodology for assessing physician performance in preventive cardiology while also providing relative feedback with the hope of heightening physician awareness about deficits and improving patient care. PMID:23417594

  10. 3D Surface Reconstruction of Plant Seeds by Volume Carving: Performance and Accuracies

    PubMed Central

    Roussel, Johanna; Geiger, Felix; Fischbach, Andreas; Jahnke, Siegfried; Scharr, Hanno

    2016-01-01

    We describe a method for 3D reconstruction of plant seed surfaces, focusing on small seeds with diameters as small as 200 μm. The method considers robotized systems allowing single seed handling in order to rotate a single seed in front of a camera. Even though such systems feature high position repeatability, at sub-millimeter object scales, camera pose variations have to be compensated. We do this by robustly estimating the tool center point from each acquired image. 3D reconstruction can then be performed by a simple shape-from-silhouette approach. In experiments we investigate runtimes, theoretically achievable accuracy, experimentally achieved accuracy, and show as a proof of principle that the proposed method is well sufficient for 3D seed phenotyping purposes. PMID:27375628
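
The shape-from-silhouette idea behind volume carving can be illustrated on a toy axis-aligned setup: a voxel survives only if it projects into the silhouette of every view. Real systems project voxels through calibrated camera poses rather than along grid axes; this sketch only shows the intersection principle:

```python
import numpy as np

def carve(silhouettes):
    """Volume carving on an N*N*N voxel grid from three orthographic
    views, one along each grid axis. Each silhouette is a boolean 2D
    mask; a voxel is kept only if it lies inside every silhouette."""
    n = silhouettes[0].shape[0]
    occ = np.ones((n, n, n), dtype=bool)
    for axis, sil in enumerate(silhouettes):
        # Broadcast the mask along the viewing axis and intersect.
        occ &= np.expand_dims(sil, axis=axis)
    return occ

# A 2x2 square silhouette seen from all three axes carves a 2x2x2 cube.
sil = np.zeros((4, 4), dtype=bool)
sil[1:3, 1:3] = True
volume = carve([sil, sil, sil])
```

Because carving only removes material, the result is always a superset of the true shape (the visual hull), which is why the paper pairs it with an accuracy analysis.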

  11. Assessing accuracy and precision for field and laboratory data: a perspective in ecosystem restoration

    USGS Publications Warehouse

    Stapanian, Martin A.; Lewis, Timothy E; Palmer, Craig J.; Middlebrook Amos, Molly

    2016-01-01

    Unlike most laboratory studies, rigorous quality assurance/quality control (QA/QC) procedures may be lacking in ecosystem restoration (“ecorestoration”) projects, despite legislative mandates in the United States. This is due, in part, to ecorestoration specialists making the false assumption that some types of data (e.g. discrete variables such as species identification and abundance classes) are not subject to evaluations of data quality. Moreover, emergent behavior manifested by complex, adapting, and nonlinear organizations responsible for monitoring the success of ecorestoration projects tend to unconsciously minimize disorder, QA/QC being an activity perceived as creating disorder. We discuss similarities and differences in assessing precision and accuracy for field and laboratory data. Although the concepts for assessing precision and accuracy of ecorestoration field data are conceptually the same as laboratory data, the manner in which these data quality attributes are assessed is different. From a sample analysis perspective, a field crew is comparable to a laboratory instrument that requires regular “recalibration,” with results obtained by experts at the same plot treated as laboratory calibration standards. Unlike laboratory standards and reference materials, the “true” value for many field variables is commonly unknown. In the laboratory, specific QA/QC samples assess error for each aspect of the measurement process, whereas field revisits assess precision and accuracy of the entire data collection process following initial calibration. Rigorous QA/QC data in an ecorestoration project are essential for evaluating the success of a project, and they provide the only objective “legacy” of the dataset for potential legal challenges and future uses.

  12. Technical note: A physical phantom for assessment of accuracy of deformable alignment algorithms

    SciTech Connect

    Kashani, Rojano; Hub, Martina; Kessler, Marc L.; Balter, James M.

    2007-07-15

    The purpose of this study was to investigate the feasibility of a simple deformable phantom as a QA tool for testing and validation of deformable image registration algorithms. A diagnostic thoracic imaging phantom with a deformable foam insert was used in this study. Small plastic markers were distributed through the foam to create a lattice with a measurable deformation as the ground truth data for all comparisons. The foam was compressed in the superior-inferior direction using a one-dimensional drive stage pushing a flat 'diaphragm' to create deformations similar to those from inhale and exhale states. Images were acquired at different compressions of the foam and the location of every marker was manually identified on each image volume to establish a known deformation field with a known accuracy. The markers were removed digitally from corresponding images prior to registration. Different image registration algorithms were tested using this method. Repeat measurement of marker positions showed an accuracy of better than 1 mm in identification of the reference marks. Testing the method on several image registration algorithms showed that the system is capable of evaluating errors quantitatively. This phantom is able to quantitatively assess the accuracy of deformable image registration, using a measure of accuracy that is independent of the signals that drive the deformation parameters.
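    The evaluation described above amounts to comparing algorithm-predicted marker positions against the manually measured ground truth. A minimal sketch of that scoring step, using hypothetical marker coordinates rather than the phantom's actual lattice:

```python
# Sketch: scoring a deformable registration against marker ground truth.
# The coordinates below are hypothetical, not the phantom's data.
import math

def registration_error(predicted, measured):
    """Per-marker Euclidean error (mm) between predicted and measured positions."""
    return [math.dist(p, m) for p, m in zip(predicted, measured)]

# Hypothetical marker positions after compression (mm):
measured  = [(10.0, 20.0, 5.0), (12.0, 18.0, 7.5)]
predicted = [(10.3, 20.0, 5.4), (12.0, 18.5, 7.5)]

errors = registration_error(predicted, measured)
mean_err = sum(errors) / len(errors)
max_err = max(errors)
```

    Reporting both the mean and the maximum error is useful here, since the abstract notes the reference marks themselves carry a better-than-1-mm identification accuracy.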

  13. Accuracy assessment of minimum control points for UAV photography and georeferencing

    NASA Astrophysics Data System (ADS)

    Skarlatos, D.; Procopiou, E.; Stavrou, G.; Gregoriou, M.

    2013-08-01

    In recent years, Autonomous Unmanned Aerial Vehicles (AUAVs) have become popular among researchers across disciplines because they combine many advantages. One major application is monitoring and mapping. Their ability to fly beyond eyesight autonomously, collecting data over large areas whenever and wherever needed, makes them an excellent platform for monitoring hazardous areas or disasters. In both cases, rapid mapping is needed, while human access is not always possible. Indeed, current automatic processing of aerial photos using photogrammetry and computer vision algorithms allows for rapid orthophotomap production and Digital Surface Model (DSM) generation, as tools for monitoring and damage assessment. In such cases, control point measurement using GPS is either impossible, time consuming, or costly. This work investigates the accuracies that can be attained using few or no control points over areas of one square kilometer, in two test sites: a typical block and a corridor survey. On-board GPS data logged during the AUAV's flight are used for direct georeferencing, while ground check points are used for evaluation. In addition, various control point layouts are tested using bundle adjustment for accuracy evaluation. Results indicate that it is possible to use on-board single-frequency GPS for direct georeferencing in cases of disaster management or areas without easy access, or even over featureless areas. Due to the large number of tie points in the bundle adjustment, horizontal accuracy requirements can be met with a rather small number of control points, but vertical accuracy requirements may not.
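    Check-point evaluation of this kind typically reduces to horizontal and vertical RMSE over the residuals. A minimal sketch with hypothetical residuals (not the study's data):

```python
# Sketch: horizontal and vertical RMSE at ground check points.
# Residuals below are hypothetical (dE, dN, dH) values in metres.
import math

residuals = [(0.12, -0.08, 0.35), (-0.05, 0.10, -0.40), (0.07, 0.03, 0.30)]

rmse_h = math.sqrt(sum(de*de + dn*dn for de, dn, _ in residuals) / len(residuals))
rmse_v = math.sqrt(sum(dh*dh for *_, dh in residuals) / len(residuals))
```

    Separating the two components makes the paper's conclusion easy to check in practice: direct georeferencing often meets a horizontal tolerance while missing the vertical one.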

  14. Accuracy Assessment of Crown Delineation Methods for the Individual Trees Using LIDAR Data

    NASA Astrophysics Data System (ADS)

    Chang, K. T.; Lin, C.; Lin, Y. C.; Liu, J. K.

    2016-06-01

    Forest canopy density and height are used as variables in a number of environmental applications, including the estimation of biomass, forest extent and condition, and biodiversity. Airborne Light Detection and Ranging (LiDAR) is very useful for estimating forest canopy parameters from the generated canopy height models (CHMs). The purpose of this work is to introduce an algorithm to delineate crown parameters, e.g., tree height and crown radii, based on the generated rasterized CHMs, and to perform an accuracy assessment for the extraction of volumetric parameters of a single tree via manual measurement using corresponding aerial photo pairs. A LiDAR dataset of a golf course acquired by a Leica ALS70-HP is used in this study. Two algorithms, a traditional one based on the subtraction of a digital elevation model (DEM) from a digital surface model (DSM), and a pit-free approach, are first used to generate the CHMs. Then two algorithms, a multilevel morphological active-contour (MMAC) and a variable window filter (VWF), are implemented and used for individual tree delineation. Finally, the results of the two automatic estimation methods for individual trees are evaluated against manually measured stand-level parameters, i.e., tree height and crown diameter. The CHM generated by a simple subtraction is full of empty pixels (called "pits") that have a serious impact on subsequent analysis for individual tree delineation. The experimental results indicated that more individual trees can be extracted, and tree crown shapes become more complete, in the CHM data after the pit-free process.
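    The traditional CHM generation mentioned above is a per-cell subtraction, after which "pits" must be dealt with. A minimal sketch on a hypothetical 1-D height profile; the paper's pit-free algorithm is more sophisticated than this neighbour-mean patch:

```python
# Sketch: CHM = DSM - DEM, followed by a crude pit fill.
# Heights below are hypothetical; real CHMs are 2-D rasters.
dsm = [12.0, 13.5, 2.0, 14.0, 13.0]   # surface heights (m); 2.0 is a pit
dem = [1.0,  1.0,  1.0,  1.0,  1.0]   # ground heights (m)

chm = [s - g for s, g in zip(dsm, dem)]

# Replace suspicious pits (cells far below both neighbours) with the neighbour mean:
filled = chm[:]
for i in range(1, len(chm) - 1):
    if chm[i] < 0.5 * min(chm[i - 1], chm[i + 1]):
        filled[i] = 0.5 * (chm[i - 1] + chm[i + 1])
```

    The point of the sketch is only the shape of the pipeline: subtract first, then repair the empty or anomalously low cells before any crown delineation runs on the raster.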

  15. Numerical simulation of turbulence flow in a Kaplan turbine -Evaluation on turbine performance prediction accuracy-

    NASA Astrophysics Data System (ADS)

    Ko, P.; Kurosawa, S.

    2014-03-01

    The understanding and accurate prediction of flow behaviour related to cavitation and pressure fluctuation in a Kaplan turbine are important to design work that enhances turbine performance, including the elongation of the operational life span and the improvement of turbine efficiency. In this paper, a high-accuracy turbine and cavitation performance prediction method based on the entire flow passage of a Kaplan turbine is presented and evaluated. The two-phase flow field is predicted by solving the Reynolds-Averaged Navier-Stokes equations with a volume-of-fluid method tracking the free surface, combined with a Reynolds stress model. The growth and collapse of cavitation bubbles are modelled by the modified Rayleigh-Plesset equation. The prediction accuracy is evaluated by comparison with model test results for an Ns 400 Kaplan model turbine. The experimentally measured data, including turbine efficiency, cavitation performance, and pressure fluctuation, are accurately predicted. Furthermore, the cavitation occurrence on the runner blade surface and its influence on the hydraulic loss of the flow passage are discussed. The evaluated prediction method for the turbine flow and performance is introduced to facilitate future design and research work on Kaplan-type turbines.
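    The bubble model referred to above is built on the Rayleigh-Plesset equation. A minimal sketch integrating the classic (unmodified) form with explicit Euler steps, under hypothetical water-like parameters and a constant driving pressure, none of which come from the study:

```python
# Sketch: classic Rayleigh-Plesset bubble dynamics,
#   R*Rddot + 1.5*Rdot^2 = (dp - 2*sigma/R - 4*mu*Rdot/R) / rho
# integrated with explicit Euler. All parameters are hypothetical.
rho, sigma, mu = 1000.0, 0.072, 1.0e-3   # water: density, surface tension, viscosity
dp = 2.0e4                                # p_bubble - p_infinity (Pa), held constant
R, Rdot, dt = 1.0e-5, 0.0, 1.0e-9         # initial radius (m), wall speed (m/s), step (s)

for _ in range(2000):
    Rddot = ((dp - 2 * sigma / R - 4 * mu * Rdot / R) / rho - 1.5 * Rdot**2) / R
    Rdot += Rddot * dt
    R += Rdot * dt
```

    With dp exceeding the surface-tension pressure 2*sigma/R, the bubble wall accelerates outward; a production solver would use an adaptive stiff integrator rather than fixed Euler steps.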

  16. A comparison of two in vitro methods for assessing the fitting accuracy of composite inlays.

    PubMed

    Qualtrough, A J; Piddock, V; Kypreou, V

    1993-06-19

    Composite inlays were fabricated in standardised cavities cut into aluminum and perspex blocks using a computer controlled milling process. Four materials were used to construct the inlays. These were fabricated using an indirect technique following the manufacturers' recommendations, where applicable. In addition, for one of the composites, the fabrication procedures were modified. The fitting accuracy of the restorations was assessed by taking elastomeric impression wash replicas of the luting space and by examination of sectioned restored units using image analysis. The former method indicated significantly reduced fitting accuracy, resulting in incomplete seating, when either the use of die spacer or secondary curing was omitted from restoration construction. The sectioning technique indicated that more factors appeared to significantly reduce fitting accuracy, including bulk packing, alteration in curing time, omission of die spacer, and the final polishing procedure. This method also provided more specific information concerning sites of premature contact. One material gave rise to significantly greater film thicknesses using both methods of assessment. No direct correlation was found between the two techniques of fit evaluation, but both methods taken together provided complementary information.

  17. Assessing the quality of studies on the diagnostic accuracy of tumor markers

    PubMed Central

    Goebell, Peter J.; Kamat, Ashish M.; Sylvester, Richard J.; Black, Peter; Droller, Michael; Godoy, Guilherme; Hudson, M’Liss A.; Junker, Kerstin; Kassouf, Wassim; Knowles, Margaret A.; Schulz, Wolfgang A.; Seiler, Roland; Schmitz-Dräger, Bernd J.

    2015-01-01

    Objectives With rapidly increasing numbers of publications, assessments of study quality, reporting quality, and classification of studies according to their level of evidence or developmental stage have become key issues in weighing the relevance of new information reported. Diagnostic marker studies are often criticized for yielding highly discrepant and even controversial results. Much of this discrepancy has been attributed to differences in study quality. So far, numerous tools for measuring study quality have been developed, but few of them have been used for systematic reviews and meta-analysis. This is because most tools are complicated and time consuming, suffer from poor reproducibility, and do not permit quantitative scoring. Methods The International Bladder Cancer Network (IBCN) has taken up this problem and has systematically identified the more commonly used tools developed since 2000. Results In this review, those tools addressing study quality (Quality Assessment of Studies of Diagnostic Accuracy and Newcastle-Ottawa Scale), reporting quality (Standards for Reporting of Diagnostic Accuracy), and developmental stage (IBCN phases) of studies on diagnostic markers in bladder cancer are introduced and critically analyzed. Based upon this, the IBCN has launched an initiative to assess and validate existing tools with emphasis on diagnostic bladder cancer studies. Conclusions The development of simple and reproducible tools for quality assessment of diagnostic marker studies permitting quantitative scoring is suggested. PMID:25159014

  18. Immediate Feedback on Accuracy and Performance: The Effects of Wireless Technology on Food Safety Tracking at a Distribution Center

    ERIC Educational Resources Information Center

    Goomas, David T.

    2012-01-01

    The effects of wireless ring scanners, which provided immediate auditory and visual feedback, were evaluated to increase the performance and accuracy of order selectors at a meat distribution center. The scanners not only increased performance and accuracy compared to paper pick sheets, but were also instrumental in immediate and accurate data…

  19. Toward a Science Performance Assessment Technology.

    ERIC Educational Resources Information Center

    Shavelson, Richard J.; Solano-Flores, Guillermo; Ruiz-Primo, Maria Araceli

    1998-01-01

    Research on developing technology for large-scale performance assessments in science is reported briefly, and a conceptual framework is presented for defining, generating, and evaluating science performance assessments. Types of tasks are described, and the technical qualities of performance assessments are discussed in the context of…

  20. An accuracy assessment of Cartesian-mesh approaches for the Euler equations

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1995-01-01

    A critical assessment of the accuracy of Cartesian-mesh approaches for steady, transonic solutions of the Euler equations of gas dynamics is made. An exact solution of the Euler equations (Ringleb's flow) is used not only to infer the order of the truncation error of the Cartesian-mesh approaches, but also to compare the magnitude of the discrete error directly to that obtained with a structured mesh approach. Uniformly and adaptively refined solutions using a Cartesian-mesh approach are obtained and compared to each other and to uniformly refined structured mesh results. The effect of cell merging is investigated as well as the use of two different K-exact reconstruction procedures. The solution methodology of the schemes is explained and tabulated results are presented to compare the solution accuracies.
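    Inferring the order of the truncation error from refined solutions, as done above against Ringleb's flow, uses the standard two-grid estimate. A minimal sketch with hypothetical error norms (not the paper's tabulated values):

```python
# Sketch: observed order of accuracy from errors on two uniformly
# refined meshes. Error values below are hypothetical.
import math

h_coarse, h_fine = 0.02, 0.01        # cell sizes; refinement ratio of 2
e_coarse, e_fine = 3.2e-4, 8.1e-5    # discrete errors vs. the exact solution

p = math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)
# p near 2 would indicate second-order accuracy
```

    Having an exact solution is what makes this clean: e_coarse and e_fine are true discretization errors, not differences between successive numerical solutions.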

  1. Assessing the Accuracy of Cone-Beam Computerized Tomography in Measuring Thinning Oral and Buccal Bone.

    PubMed

    Raskó, Zoltán; Nagy, Lili; Radnai, Márta; Piffkó, József; Baráth, Zoltán

    2016-06-01

    The aim of this study was to assess the accuracy and reliability of cone-beam computerized tomography (CBCT) in measuring thinning bone surrounding dental implants. Three implants were inserted into the mandible of a domestic pig at 6 different bone thicknesses on the vestibular and the lingual sides, and measurements were recorded using CBCT. The results were obtained, analyzed, and compared with areas without implants. Our results indicated that the bone thickness and the neighboring implants decreased the accuracy and reliability of CBCT for measuring bone volume around dental implants. We concluded that CBCT slightly undermeasured the bone thickness around the implant, both buccally and orally, compared with the same thickness without the implant. These results support that using the i-CAT NG with a 0.2 voxel size is not accurate for either qualitative or quantitative bone evaluations, especially when the bone is thinner than 0.72 mm in the horizontal dimension.

  2. Theory and methods for accuracy assessment of thematic maps using fuzzy sets

    SciTech Connect

    Gopal, S.; Woodcock, C.

    1994-02-01

    The use of fuzzy sets in map accuracy assessment expands the amount of information that can be provided regarding the nature, frequency, magnitude, and source of errors in a thematic map. The need for using fuzzy sets arises from the observation that all map locations do not fit unambiguously in a single map category. Fuzzy sets allow for varying levels of set membership for multiple map categories. A linguistic measurement scale allows the kinds of comments commonly made during map evaluations to be used to quantify map accuracy. Four tables result from the use of fuzzy functions, and when taken together they provide more information than traditional confusion matrices. The use of a hypothetical dataset helps illustrate the benefits of the new methods. It is hoped that the enhanced ability to evaluate maps resulting from the use of fuzzy sets will improve our understanding of uncertainty in maps and facilitate improved error modeling. 40 refs.
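    The fuzzy evaluation functions described above can be illustrated with two of them, commonly called the MAX and RIGHT operators, applied to linguistic scores on a 1-5 scale (1 = absolutely wrong, 5 = absolutely right). The site data below are hypothetical:

```python
# Sketch: fuzzy-set map accuracy from linguistic scores (hypothetical data).
sites = [
    # (map label, {category: expert linguistic score, 1..5})
    ("conifer",  {"conifer": 5, "hardwood": 1, "brush": 2}),
    ("hardwood", {"conifer": 4, "hardwood": 3, "brush": 1}),
    ("brush",    {"conifer": 1, "hardwood": 2, "brush": 4}),
]

# MAX: the map label is the best-scoring category at the site.
max_ok = sum(max(s, key=s.get) == label for label, s in sites)
# RIGHT: the map label scored at least "reasonable" (>= 3).
right_ok = sum(s[label] >= 3 for label, s in sites)

max_pct = 100.0 * max_ok / len(sites)
right_pct = 100.0 * right_ok / len(sites)
```

    The gap between the two percentages is itself informative: sites counted by RIGHT but not by MAX are acceptable labels that were nevertheless not the experts' first choice, exactly the ambiguity a confusion matrix hides.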

  3. Evaluating the accuracy performance of Lucas-Kanade algorithm in the circumstance of PIV application

    NASA Astrophysics Data System (ADS)

    Pan, Chong; Xue, Dong; Xu, Yang; Wang, JinJun; Wei, RunJie

    2015-10-01

    The Lucas-Kanade (LK) algorithm, usually used in the optical flow field, has recently received increasing attention from the PIV community due to its calculation efficiency under GPU acceleration. Although applications of this algorithm are continuously emerging, a systematic performance evaluation is still lacking; this forms the primary aim of the present work. Three warping schemes in the LK family, forward, inverse, and symmetric warping, are evaluated in a prototype flow consisting of a hierarchy of multiple two-dimensional vortices. Second-order Newton descent is also considered. The accuracy and efficiency of all these LK variants are investigated over a large domain of influential parameters. It is found that the constant displacement constraint, a necessary building block for GPU acceleration, is the most critical issue affecting the LK algorithm's accuracy, and it can be somewhat ameliorated by using second-order Newton descent. Moreover, symmetric warping outperforms the other two warping schemes in accuracy, robustness to noise, convergence speed, and tolerance to displacement gradient, and might be the first choice when applying the LK algorithm to PIV measurement.
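    At its core, each LK iteration solves a small least-squares system for the window displacement. A minimal single-window sketch with hypothetical image gradients; real PIV implementations iterate this with warping and image pyramids:

```python
# Sketch: one Lucas-Kanade least-squares step for a single window.
# Solve [sxx sxy; sxy syy] [u; v] = [-sxt; -syt] for displacement (u, v).
# Gradient samples below are hypothetical.
Ix = [0.9, 1.1, 1.0, 0.8]      # spatial x-gradients at window pixels
Iy = [0.2, 0.1, 0.3, 0.2]      # spatial y-gradients
It = [-0.5, -0.6, -0.5, -0.4]  # temporal differences between frames

sxx = sum(x * x for x in Ix)
syy = sum(y * y for y in Iy)
sxy = sum(x * y for x, y in zip(Ix, Iy))
sxt = sum(x * t for x, t in zip(Ix, It))
syt = sum(y * t for y, t in zip(Iy, It))

det = sxx * syy - sxy * sxy          # near-zero det signals the aperture problem
u = (-sxt * syy + syt * sxy) / det
v = (-syt * sxx + sxt * sxy) / det
```

    The constant-displacement assumption the abstract flags is visible here: one (u, v) is fitted for the whole window, which is what makes the solve batchable on a GPU.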

  4. Design and performance of a new high accuracy combined small sample neutron/gamma detector

    SciTech Connect

    Menlove, H.; Davidson, D.; Verplancke, J.; Vermeulen, P.; Wagner, H.G.; Wellum, R.; Brandelise, B.; Mayer, K.

    1993-08-01

    This paper describes the design of an optimized combined neutron and gamma detector installed around a measurement well protruding from the floor of a glove box. The objective of this design was to achieve an overall accuracy for the plutonium element concentration in gram-sized samples of plutonium oxide powder approaching the ~0.1-0.2% accuracies routinely achieved by inspectors' chemical analysis. The efficiency of the clam-shell neutron detector was increased and the flat response zone extended in the axial and radial directions. The sample holder introduced from within the glove box was designed to form the upper reflector, while two graphite half-shells fitted around the thin neck of the high-resolution LEGe detector replaced the lower plug. The Institute for Reference Materials and Measurements (IRMM) in Geel prepared special plutonium oxide test samples whose plutonium concentration was determined to better than 0.05%. During a three-week initial performance test in July 1992 at ITU Karlsruhe and in long-term tests, it was established that the target accuracy can be achieved provided sufficient care is taken to ensure the reproducibility of sample bottling and sample positioning. The paper presents and discusses the results of all test measurements.

  6. Behavior model for performance assessment.

    SciTech Connect

    Brown-VanHoozer, S. A.

    1999-07-23

    Every individual channels information differently based on their preference for a sensory modality or representational system (visual, auditory, or kinesthetic); the one we tend to favor most is our primary representational system (PRS). Therefore, some of us access and store our information primarily visually, some auditorily, and others kinesthetically (through feel and touch), which in turn establishes our information processing patterns and strategies and our external-to-internal (and subsequently vice versa) experiential language representation. Because of the different ways we channel our information, each of us will respond differently to a task: the way we gather and process the external information (input), our response time (process), and the outcome (behavior). Traditional human models of decision making and response time focus on perception, cognitive, and motor systems stimulated and influenced by the three sensory modalities: visual, auditory, and kinesthetic. For us, these are the building blocks to knowing how someone is thinking. Being aware of what is taking place and how to ask questions is essential in assessing performance toward reducing human errors. Existing models give predictions based on time values or response times for a particular event, which may be summed and averaged for a generalization of behaviors. However, without a basic understanding of how a behavior was predicated on a decision-making strategy, predictive models are inefficient in their analysis of the means by which the behavior was generated; what is seen is only the end result.

  7. Exploring Writing Accuracy and Writing Complexity as Predictors of High-Stakes State Assessments

    ERIC Educational Resources Information Center

    Edman, Ellie Whitner

    2012-01-01

    The advent of No Child Left Behind led to increased teacher accountability for student performance and imposed strict sanctions for failure to meet a certain level of performance each year. With instructional time at a premium, it is imperative that educators have brief academic assessments that accurately predict performance on…

  8. Computational Performance and Statistical Accuracy of *BEAST and Comparisons with Other Methods.

    PubMed

    Ogilvie, Huw A; Heled, Joseph; Xie, Dong; Drummond, Alexei J

    2016-05-01

    Under the multispecies coalescent model of molecular evolution, gene trees have independent evolutionary histories within a shared species tree. In comparison, supermatrix concatenation methods assume that gene trees share a single common genealogical history, thereby equating gene coalescence with species divergence. The multispecies coalescent is supported by previous studies which found that its predicted distributions fit empirical data, and that concatenation is not a consistent estimator of the species tree. *BEAST, a fully Bayesian implementation of the multispecies coalescent, is popular but computationally intensive, so the increasing size of phylogenetic data sets is both a computational challenge and an opportunity for better systematics. Using simulation studies, we characterize the scaling behavior of *BEAST, and enable quantitative prediction of the impact increasing the number of loci has on both computational performance and statistical accuracy. Follow-up simulations over a wide range of parameters show that the statistical performance of *BEAST relative to concatenation improves both as branch length is reduced and as the number of loci is increased. Finally, using simulations based on estimated parameters from two phylogenomic data sets, we compare the performance of a range of species tree and concatenation methods to show that using *BEAST with tens of loci can be preferable to using concatenation with thousands of loci. Our results provide insight into the practicalities of Bayesian species tree estimation, the number of loci required to obtain a given level of accuracy and the situations in which supermatrix or summary methods will be outperformed by the fully Bayesian multispecies coalescent. PMID:26821913

  10. Proposed Testing to Assess the Accuracy of Glass-To-Metal Seal Stress Analyses.

    SciTech Connect

    Chambers, Robert S.; Emery, John M; Tandon, Rajan; Antoun, Bonnie R.; Stavig, Mark E.; Newton, Clay S.; Gibson, Cory S; Bencoe, Denise N.

    2014-09-01

    The material characterization tests conducted on 304L VAR stainless steel and Schott 8061 glass have provided higher-fidelity data for calibration of the material models used in Glass-to-Metal (GTM) seal analyses. Specifically, a Thermo-Multi-Linear Elastic Plastic (thermo-MLEP) material model has been defined for SS304L, and the Simplified Potential Energy Clock nonlinear viscoelastic model has been calibrated for the S8061 glass. To assess the accuracy of finite element stress analyses of GTM seals, a suite of tests is proposed to provide data for comparison to model predictions.

  11. Comparative study of application accuracy of two frameless neuronavigation systems: experimental error assessment quantifying registration methods and clinically influencing factors.

    PubMed

    Paraskevopoulos, Dimitrios; Unterberg, Andreas; Metzner, Roland; Dreyhaupt, Jens; Eggers, Georg; Wirtz, Christian Rainer

    2010-04-01

    This study aimed at comparing the accuracy of two commercial neuronavigation systems. Error assessment and quantification of clinical factors and surface registration, often resulting in decreased accuracy, were intended. Active (Stryker Navigation) and passive (VectorVision Sky, BrainLAB) neuronavigation systems were tested with an anthropomorphic phantom with a deformable layer, simulating skin and soft tissue. True coordinates measured by computer numerical control were compared with coordinates on image data and during navigation, to calculate software and system accuracy respectively. Comparison of image and navigation coordinates was used to evaluate navigation accuracy. Both systems achieved an overall accuracy of <1.5 mm. Stryker achieved better software accuracy, whereas BrainLAB better system and navigation accuracy. Factors with conspicuous influence (P<0.01) were imaging, instrument replacement, sterile cover drape and geometry of instruments. Precision data indicated by the systems did not reflect measured accuracy in general. Surface matching resulted in no improvement of accuracy, confirming former studies. Laser registration showed no differences compared to conventional pointers. Differences between the two systems were limited. Surface registration may improve inaccurate point-based registrations but does not in general affect overall accuracy. Accuracy feedback by the systems does not always match with true target accuracy and requires critical evaluation from the surgeon.

  12. Application of a Monte Carlo accuracy assessment tool to TDRS and GPS

    NASA Technical Reports Server (NTRS)

    Pavloff, Michael S.

    1994-01-01

    In support of a NASA study on the application of radio interferometry to satellite orbit determination, MITRE developed a simulation tool for assessing interferometric tracking accuracy. Initially, the tool was applied to the problem of determining optimal interferometric station siting for orbit determination of the Tracking and Data Relay Satellite (TDRS). Subsequently, the Orbit Determination Accuracy Estimator (ODAE) was expanded to model the general batch maximum likelihood orbit determination algorithms of the Goddard Trajectory Determination System (GTDS), with measurement types including not only group and phase delay from radio interferometry, but also range, range rate, angular measurements, and satellite-to-satellite measurements. The user of ODAE specifies the statistical properties of error sources, including inherent observable imprecision, atmospheric delays, station location uncertainty, and measurement biases. Upon Monte Carlo simulation of the orbit determination process, ODAE calculates the statistical properties of the error in the satellite state vector and any other parameters for which a solution was obtained in the orbit determination. This paper presents results from ODAE application to two different problems: (1) determination of optimal geometry for interferometric tracking of TDRS, and (2) expected orbit determination accuracy for Global Positioning System (GPS) tracking of low-earth orbit (LEO) satellites. Conclusions about optimal ground station locations for TDRS orbit determination by radio interferometry are presented, and the feasibility of GPS-based tracking for IRIDIUM, a LEO mobile satellite communications (MOBILSATCOM) system, is demonstrated.
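    The Monte Carlo approach described above can be illustrated with a toy one-parameter estimator: repeat the noisy measurement-and-fit cycle many times, then read the error statistics off the ensemble. Everything below is hypothetical and far simpler than the GTDS batch estimator:

```python
# Sketch: Monte Carlo estimation of parameter-error statistics.
# Toy 1-D problem (slope through the origin); not ODAE's algorithms.
import random

random.seed(1)
times = [float(t) for t in range(10)]
true_slope, noise_sigma = 2.0, 0.3

estimates = []
for _ in range(500):
    obs = [true_slope * t + random.gauss(0.0, noise_sigma) for t in times]
    # least-squares slope through the origin: sum(t*y) / sum(t*t)
    est = sum(t * y for t, y in zip(times, obs)) / sum(t * t for t in times)
    estimates.append(est)

mean_est = sum(estimates) / len(estimates)
spread = (sum((e - mean_est) ** 2 for e in estimates) / len(estimates)) ** 0.5
```

    The empirical spread is the Monte Carlo analogue of the state-vector error statistics ODAE reports, and it converges toward the analytic value noise_sigma / sqrt(sum(t*t)) as the number of trials grows.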

  13. Estimating orientation using magnetic and inertial sensors and different sensor fusion approaches: accuracy assessment in manual and locomotion tasks.

    PubMed

    Bergamini, Elena; Ligorio, Gabriele; Summa, Aurora; Vannozzi, Giuseppe; Cappozzo, Aurelio; Sabatini, Angelo Maria

    2014-10-09

    Magnetic and inertial measurement units are an emerging technology to obtain 3D orientation of body segments in human movement analysis. In this respect, sensor fusion is used to limit the drift errors resulting from the gyroscope data integration by exploiting accelerometer and magnetic aiding sensors. The present study aims at investigating the effectiveness of sensor fusion methods under different experimental conditions. Manual and locomotion tasks, differing in time duration, measurement volume, presence/absence of static phases, and out-of-plane movements, were performed by six subjects, and recorded by one unit located on the forearm or the lower trunk, respectively. Two sensor fusion methods, representative of the stochastic (Extended Kalman Filter) and complementary (Non-linear observer) filtering, were selected, and their accuracy was assessed in terms of attitude (pitch and roll angles) and heading (yaw angle) errors using stereophotogrammetric data as a reference. The sensor fusion approaches provided significantly more accurate results than gyroscope data integration. Accuracy improved mostly for heading and when the movement exhibited stationary phases, evenly distributed 3D rotations, it occurred in a small volume, and its duration was greater than approximately 20 s. These results were independent from the specific sensor fusion method used. Practice guidelines for improving the outcome accuracy are provided.
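    Of the two method families compared above, complementary filtering is simple enough to sketch in a few lines: blend the integrated gyroscope rate with the accelerometer-derived inclination using a fixed weight. The samples and the weight below are hypothetical, and this 1-axis toy is far simpler than the non-linear observer tested in the study:

```python
# Sketch: a 1-axis complementary filter for pitch.
# High-pass the gyro path, low-pass the accelerometer path.
alpha, dt = 0.98, 0.01   # blending weight and sample period (s); hypothetical
theta = 0.0              # filtered pitch estimate (rad)

# (gyro rate rad/s, accelerometer-derived pitch rad) samples, hypothetical:
samples = [(0.5, 0.004), (0.5, 0.009), (0.5, 0.016), (0.5, 0.020)]

for gyro, acc_pitch in samples:
    theta = alpha * (theta + gyro * dt) + (1 - alpha) * acc_pitch
```

    The accelerometer term is what bounds the drift that pure gyro integration accumulates, which is the effect the study quantifies across task types.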

  14. Estimating Orientation Using Magnetic and Inertial Sensors and Different Sensor Fusion Approaches: Accuracy Assessment in Manual and Locomotion Tasks

    PubMed Central

    Bergamini, Elena; Ligorio, Gabriele; Summa, Aurora; Vannozzi, Giuseppe; Cappozzo, Aurelio; Sabatini, Angelo Maria

    2014-01-01

    Magnetic and inertial measurement units are an emerging technology to obtain 3D orientation of body segments in human movement analysis. In this respect, sensor fusion is used to limit the drift errors resulting from the gyroscope data integration by exploiting accelerometer and magnetic aiding sensors. The present study aims at investigating the effectiveness of sensor fusion methods under different experimental conditions. Manual and locomotion tasks, differing in time duration, measurement volume, presence/absence of static phases, and out-of-plane movements, were performed by six subjects, and recorded by one unit located on the forearm or the lower trunk, respectively. Two sensor fusion methods, representative of the stochastic (Extended Kalman Filter) and complementary (Non-linear observer) filtering, were selected, and their accuracy was assessed in terms of attitude (pitch and roll angles) and heading (yaw angle) errors using stereophotogrammetric data as a reference. The sensor fusion approaches provided significantly more accurate results than gyroscope data integration. Accuracy improved mostly for heading and when the movement exhibited stationary phases and evenly distributed 3D rotations, occurred in a small volume, and lasted longer than approximately 20 s. These results were independent of the specific sensor fusion method used. Practice guidelines for improving the outcome accuracy are provided. PMID:25302810

  15. The diagnostic accuracy of pharmacological stress echocardiography for the assessment of coronary artery disease: a meta-analysis

    PubMed Central

    Picano, Eugenio; Molinaro, Sabrina; Pasanisi, Emilio

    2008-01-01

    Background Recent American Heart Association/American College of Cardiology guidelines state that "dobutamine stress echo has substantially higher sensitivity than vasodilator stress echo for detection of coronary artery stenosis" while the European Society of Cardiology guidelines and the European Association of Echocardiography recommendations conclude that "the two tests have very similar applications". Who is right? Aim To evaluate the diagnostic accuracy of dobutamine versus dipyridamole stress echocardiography through an evidence-based approach. Methods From a PubMed search, we identified all papers with coronary angiographic verification and head-to-head comparison of dobutamine stress echo (40 mcg/kg/min ± atropine) versus dipyridamole stress echo performed with state-of-the-art protocols (either 0.84 mg/kg in 10' plus atropine, or 0.84 mg/kg in 6' without atropine). A total of 5 papers were found. Pooled weighted meta-analysis was performed. Results The 5 analyzed papers recruited 435 patients, 299 with and 136 without angiographically assessed coronary artery disease (quantitatively assessed stenosis > 50%). Dipyridamole and dobutamine showed similar accuracy (87%, 95% confidence intervals, CI, 83–90, vs. 84%, CI, 80–88, p = 0.48), sensitivity (85%, CI 80–89, vs. 86%, CI 78–91, p = 0.81) and specificity (89%, CI 82–94 vs. 86%, CI 75–89, p = 0.15). Conclusion When state-of-the-art protocols are considered, dipyridamole and dobutamine stress echo have similar accuracy, specificity and – most importantly – sensitivity for detection of CAD. European recommendations concluding that "dobutamine and vasodilators (at appropriately high doses) are equally potent ischemic stressors for inducing wall motion abnormalities in presence of a critical coronary artery stenosis" are evidence-based. PMID:18565214
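The pooled accuracy figures above come from combining per-study counts. A minimal fixed-effect sketch, pooling counts and attaching a normal-approximation 95% CI, looks like the following; the per-study counts are invented for illustration and are not the review's data.

```python
import math

def pooled_proportion(successes, totals):
    """Fixed-effect pooled proportion with a normal-approximation 95% CI."""
    s, n = sum(successes), sum(totals)
    p = s / n
    se = math.sqrt(p * (1 - p) / n)
    return p, (p - 1.96 * se, p + 1.96 * se)

# Hypothetical per-study true-positive counts for one test arm:
tp = [40, 55, 38, 62, 50]
n_diseased = [48, 62, 45, 70, 60]
p, (lo, hi) = pooled_proportion(tp, n_diseased)
print(f"pooled sensitivity {p:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

Published meta-analyses typically weight studies and test heterogeneity as well; this shows only the core pooling step.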

  16. The influence of sampling interval on the accuracy of trail impact assessment

    USGS Publications Warehouse

    Leung, Y.-F.; Marion, J.L.

    1999-01-01

    Trail impact assessment and monitoring (IA&M) programs have been growing in importance and application in recreation resource management at protected areas. Census-based and sampling-based approaches have been developed in such programs, with systematic point sampling being the most common survey design. This paper examines the influence of sampling interval on the accuracy of estimates for selected trail impact problems. A complete census of four impact types on 70 trails in Great Smoky Mountains National Park was utilized as the base data set for the analyses. The census data were resampled at increasing intervals to create a series of simulated point data sets. Estimates of frequency of occurrence and lineal extent for the four impact types were compared with the census data set. The responses of accuracy loss on lineal extent estimates to increasing sampling intervals varied across different impact types, while the responses on frequency of occurrence estimates were consistent, approximating an inverse asymptotic curve. These findings suggest that systematic point sampling may be an appropriate method for estimating the lineal extent but not the frequency of trail impacts. Sample intervals of less than 100 m appear to yield an excellent level of accuracy for the four impact types evaluated. Multiple regression analysis results suggest that appropriate sampling intervals are more likely to be determined by the type of impact in question rather than the length of trail. The census-based trail survey and the resampling-simulation method developed in this study can be a valuable first step in establishing long-term trail IA&M programs, in which an optimal sampling interval range with acceptable accuracy is determined before investing efforts in data collection.
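The census-resampling method above can be sketched directly: record a complete census of impact locations, then re-sample the trail at increasing fixed intervals and watch the frequency-of-occurrence estimate degrade. Trail length, impact positions, and the 1 m detection window are illustrative assumptions, not the study's data.

```python
def frequency_estimate(impact_points, trail_length, interval):
    """Fraction of fixed-interval sample points within 1 m of a recorded impact."""
    samples = [x * interval for x in range(int(trail_length // interval) + 1)]
    hits = sum(any(abs(s - p) <= 1.0 for p in impact_points) for s in samples)
    return hits / len(samples)

census = [12.0, 250.5, 251.0, 700.0, 1480.0]   # impact locations on a 1500 m trail
for interval in (10, 50, 100, 300):            # metres between sample points
    est = frequency_estimate(census, 1500.0, interval)
    print(f"{interval:>4} m interval -> occurrence estimate {est:.3f}")
```

Point impacts slip between coarse sample points entirely, which is why the study finds point sampling unsuitable for estimating frequency of occurrence even where it works for lineal extent.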

  17. Geometric calibration and accuracy assessment of a multispectral imager on UAVs

    NASA Astrophysics Data System (ADS)

    Zheng, Fengjie; Yu, Tao; Chen, Xingfeng; Chen, Jiping; Yuan, Guoti

    2012-11-01

    The increasing development of Unmanned Aerial Vehicle (UAV) platforms and associated sensing technologies has widely promoted UAV remote sensing applications. UAVs, especially low-cost UAVs, limit the sensor payload in weight and dimension. Cameras on UAVs are mostly panoramic, fisheye-lens, or small-format CCD planar-array cameras; unknown intrinsic parameters and lens optical distortion cause serious image aberrations, leading to ground errors of a few meters or even tens of meters per pixel. However, the high spatial resolution makes accurate geolocation all the more critical to UAV quantitative remote sensing research. A method for the MCC4-12F Multispectral Imager, designed for UAV payloads, has been developed and implemented. A multi-image space resection algorithm, suitable for multispectral cameras, was used to estimate geometric calibration parameters from random positions and different photogrammetric altitudes in a 3D test field. Both theoretical and practical accuracy assessments were performed. The theoretical assessment, resolving object-space and image-point coordinate differences by space intersection, showed object-space RMSEs of 0.2 and 0.14 pixels in the X and Y directions, and image-space RMSEs better than 0.5 pixels. To verify the accuracy and reliability of the calibration parameters, a practical study was carried out in UAV flight experiments in Tianjin; the corrected accuracy, validated against ground checkpoints, was better than 0.3 m. Typical surface reflectance retrieved from the geo-rectified data was compared with ground ASD measurements, resulting in a 4% discrepancy. Hence, the approach presented here is suitable for UAV multispectral imagers.

  18. Accuracy assessment of 3D bone reconstructions using CT: an in vitro comparison.

    PubMed

    Lalone, Emily A; Willing, Ryan T; Shannon, Hannah L; King, Graham J W; Johnson, James A

    2015-08-01

    Computed tomography provides high contrast imaging of the joint anatomy and is used routinely to reconstruct 3D models of the osseous and cartilage geometry (CT arthrography) for use in the design of orthopedic implants, for computer assisted surgeries, and for computational dynamic and structural analysis. The objective of this study was to assess the accuracy of bone and cartilage surface model reconstructions by comparing reconstructed geometries with bone digitizations obtained using an optical tracking system. Bone surface digitizations obtained in this study determined the ground truth measure for the underlying geometry. We evaluated the use of a commercially available reconstruction technique with clinical CT scanning protocols, using the elbow joint as an example of a surface with complex geometry. To assess the accuracy of the reconstructed models (8 fresh-frozen cadaveric specimens) against the ground truth bony digitization, as defined by this study, proximity mapping was used to calculate residual error. The overall mean error was less than 0.4 mm in the cortical region and 0.3 mm in the subchondral region of the bone. Similarly, creating 3D cartilage surface models from CT scans using air contrast had a mean error of less than 0.3 mm. Results from this study indicate that clinical CT scanning protocols and commonly used, commercially available reconstruction algorithms can create models that accurately represent the true geometry.
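The proximity-mapping residual can be reduced to its core: for each digitized surface point, take the distance to the nearest point of the reconstructed model and average. The sketch below approximates point-to-surface distance by point-to-vertex distance and uses invented coordinates; real comparisons use dense meshes and true surface projections.

```python
import math

def mean_residual(digitized, model_vertices):
    """Mean nearest-vertex distance from digitized points to a model (mm)."""
    def nearest(p):
        return min(math.dist(p, v) for v in model_vertices)
    return sum(nearest(p) for p in digitized) / len(digitized)

# Tiny invented patch of model vertices and two digitized probe points (mm):
model = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 0.1)]
probe = [(0.1, 0.05, 0.2), (0.95, 0.9, 0.0)]
print(f"mean residual = {mean_residual(probe, model):.3f} mm")
```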

  19. Dimensions of L2 Performance and Proficiency: Complexity, Accuracy and Fluency in SLA. Language Learning & Language Teaching. Volume 32

    ERIC Educational Resources Information Center

    Housen, Alex, Ed.; Kuiken, Folkert, Ed.; Vedder, Ineke, Ed.

    2012-01-01

    Research into complexity, accuracy and fluency (CAF) as basic dimensions of second language performance, proficiency and development has received increased attention in SLA. However, the larger picture in this field of research is often obscured by the breadth of scope, multiple objectives and lack of clarity as to how complexity, accuracy and…

  20. Assessment of accuracy of in-situ methods for measuring building-envelope thermal resistance

    SciTech Connect

    Fang, J.B.; Grot, R.A.; Park, H.S.

    1986-03-01

    A series of field and laboratory tests were conducted to evaluate the accuracy of in-situ thermal-resistance-measurement techniques. The results of thermal-performance evaluation of the exterior walls of six thermal mass test houses situated in Gaithersburg, Maryland are presented. The wall construction of these one-room houses includes insulated light-weight wood frame, uninsulated light-weight wood frame, insulated masonry with outside mass, uninsulated masonry, log, and insulated masonry with inside mass. In-situ measurements of heat transfer through building envelopes were made with heat flux transducers and portable calorimeters.

  1. Assessing Accuracy of Exchange-Correlation Functionals for the Description of Atomic Excited States

    NASA Astrophysics Data System (ADS)

    Makowski, Marcin; Hanas, Martyna

    2016-09-01

    The performance of exchange-correlation functionals for the description of atomic excitations is investigated. A benchmark set of excited states is constructed and experimental data is compared to Time-Dependent Density Functional Theory (TDDFT) calculations. The benchmark results show that for the selected group of functionals good accuracy may be achieved and the quality of predictions provided is competitive with computationally more demanding coupled-cluster approaches. Beyond testing standard TDDFT approaches, the roles of the self-interaction error plaguing DFT calculations and of the adiabatic approximation to the exchange-correlation kernels are also examined.

  2. Performance Characteristics and Accuracy in Perceptual Discrimination of Leather and Synthetic Basketballs.

    ERIC Educational Resources Information Center

    Mathes, Sharon; Flatten, Kay

    1982-01-01

    To assess the performance characteristics of synthetic and leather basketballs, individuals were asked to discriminate perceptually between the leather and synthetic basketballs under four treatment conditions. Rebound characteristics on five playing surfaces were measured. Leather basketballs rebounded significantly higher; no significant…

  3. A PRIOR EVALUATION OF TWO-STAGE CLUSTER SAMPLING FOR ACCURACY ASSESSMENT OF LARGE-AREA LAND-COVER MAPS

    EPA Science Inventory

    Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, withi...

  4. Reliability and accuracy of anthropometry performed by community health workers among infants under 6 months in rural Kenya

    PubMed Central

    Mwangome, Martha K; Fegan, Greg; Mbunya, Ronald; Prentice, Andrew M; Berkley, James A

    2012-01-01

    Objective To assess the inter-observer variability and accuracy of Mid Upper Arm Circumference (MUAC) and weight-for-length Z score (WFLz) among infants aged <6 months performed by community health workers (CHWs) in Kilifi District, Kenya. Methods A cross-sectional repeatability study estimated inter-observer variation and accuracy of measurements initially undertaken by an expert anthropometrist, nurses and public health technicians. Then, after training, 18 CHWs (three at each of six sites) repeatedly measured MUAC, weight and length of infants aged <6 months. Intra-class correlations (ICCs) and the Pitman’s statistic were calculated. Results Among CHWs, ICCs pooled across the six sites (924 infants) were 0.96 (95% CI 0.95–0.96) for MUAC and 0.71 (95% CI 0.68–0.74) for WFLz. MUAC measures by CHWs differed little from their trainers: the mean difference in MUAC was 0.65 mm (95% CI 0.023–1.07), with no significant difference in variance (P = 0.075). Conclusion Mid Upper Arm Circumference is more reliably measured by CHWs than WFLz among infants aged <6 months. Further work is needed to define cut-off values based on MUAC’s ability to predict mortality among younger infants. PMID:22364555
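The pooled ICCs reported above are intra-class correlations; a minimal one-way random-effects ICC(1,1) from an ANOVA decomposition can be sketched as follows. The measurements (3 observers measuring MUAC in mm on 4 infants) are invented for illustration.

```python
import statistics

def icc_oneway(data):
    """ICC(1,1): data is a list of per-subject lists of k repeated measurements."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    # between-subject and within-subject mean squares from one-way ANOVA
    msb = k * sum((statistics.mean(row) - grand) ** 2 for row in data) / (n - 1)
    msw = sum((x - statistics.mean(row)) ** 2
              for row in data for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

muac = [[112, 113, 112], [98, 99, 97], [120, 121, 120], [105, 104, 106]]
print(f"ICC = {icc_oneway(muac):.3f}")
```

Because between-infant variation in MUAC dwarfs observer disagreement, the ICC lands close to 1, the same pattern that makes MUAC the more reliable measure in the study.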

  5. Accuracy assessment of land cover/land use classifiers in dry and humid areas of Iran.

    PubMed

    Yousefi, Saleh; Khatami, Reza; Mountrakis, Giorgos; Mirzaee, Somayeh; Pourghasemi, Hamid Reza; Tazeh, Mehdi

    2015-10-01

    Land cover/land use (LCLU) maps are essential inputs for environmental analysis. Remote sensing provides an opportunity to construct LCLU maps of large geographic areas in a timely fashion. Knowing the most accurate classification method for producing LCLU maps given the site characteristics is necessary for environmental managers. The aim of this research is to examine the performance of various classification algorithms for LCLU mapping in dry and humid climates (from June to August). Testing is performed in three case studies from each of the two climates in Iran. The reference dataset for each image was randomly selected from the entire image and randomly divided into training and validation sets. Training sets included 400 pixels, and validation sets included 200 pixels, for each LCLU class. Results indicate that the support vector machine (SVM) and neural network methods can achieve higher overall accuracy (86.7 and 86.6%) than the other examined algorithms, with a slight advantage for the SVM. Dry areas exhibit higher classification difficulty, as man-made features often have spectral responses overlapping those of soil. A further observation is that spatial segregation and lower mixture of LCLU classes can increase overall classification accuracy.
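The overall accuracies quoted above come straight from a confusion matrix on the validation pixels. A minimal sketch, with a made-up 3-class matrix (rows = reference, columns = classified):

```python
import numpy as np

# Invented confusion matrix for three LCLU classes (not the study's data).
cm = np.array([[180,  12,   8],
               [ 10, 170,  20],
               [  5,  15, 180]])

overall = np.trace(cm) / cm.sum()            # fraction of correctly classified pixels
producers = np.diag(cm) / cm.sum(axis=1)     # producer's accuracy (recall) per class
print(f"overall accuracy {overall:.1%}")
print("producer's accuracy per class:", np.round(producers, 3))
```

Accuracy-assessment practice usually also reports user's accuracy (column-wise) and a kappa statistic; the same matrix yields all of them.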

  6. Quantitative Assessment of Shockwave Lithotripsy Accuracy and the Effect of Respiratory Motion*

    PubMed Central

    Bailey, Michael R.; Shah, Anup R.; Hsi, Ryan S.; Paun, Marla; Harper, Jonathan D.

    2012-01-01

    Abstract Background and Purpose Effective stone comminution during shockwave lithotripsy (SWL) is dependent on precise three-dimensional targeting of the shockwave. Respiratory motion, imprecise targeting or shockwave alignment, and stone movement may compromise treatment efficacy. The purpose of this study was to evaluate the accuracy of shockwave targeting during SWL treatment and the effect of motion from respiration. Patients and Methods Ten patients underwent SWL for the treatment of 13 renal stones. Stones were targeted fluoroscopically using a Healthtronics Lithotron (five cases) or Dornier Compact Delta II (five cases) shockwave lithotripter. Shocks were delivered at a rate of 1 to 2 Hz with ramping shockwave energy settings of 14 to 26 kV or level 1 to 5. After the low energy pretreatment and protective pause, a commercial diagnostic ultrasound (US) imaging system was used to record images of the stone during active SWL treatment. Shockwave accuracy, defined as the proportion of shockwaves that resulted in stone motion with shockwave delivery, and respiratory stone motion were determined by two independent observers who reviewed the ultrasonographic videos. Results Mean age was 51±15 years with 60% men, and mean stone size was 10.5±3.7 mm (range 5–18 mm). A mean of 2675±303 shocks was delivered. Shockwave-induced stone motion was observed with every stone. Accurate targeting of the stone occurred in 60%±15% of shockwaves. Conclusions US imaging during SWL revealed that 40% of shockwaves miss the stone and contribute solely to tissue injury, primarily from movement with respiration. These data support the need for a device to deliver shockwaves only when the stone is in target. US imaging provides real-time assessment of stone targeting and accuracy of shockwave delivery. PMID:22471349

  7. Assessment of the sources of error affecting the quantitative accuracy of SPECT imaging in small animals

    SciTech Connect

    Joint Graduate Group in Bioengineering, University of California, San Francisco and University of California, Berkeley; Department of Radiology, University of California; Gullberg, Grant T; Hwang, Andrew B.; Franc, Benjamin L.; Gullberg, Grant T.; Hasegawa, Bruce H.

    2008-02-15

    Small animal SPECT imaging systems have multiple potential applications in biomedical research. Whereas SPECT data are commonly interpreted qualitatively in a clinical setting, the ability to accurately quantify measurements will increase the utility of the SPECT data for laboratory measurements involving small animals. In this work, we assess the effect of photon attenuation, scatter and partial volume errors on the quantitative accuracy of small animal SPECT measurements, first with Monte Carlo simulation and then confirmed with experimental measurements. The simulations modeled the imaging geometry of a commercially available small animal SPECT system. We simulated the imaging of a radioactive source within a cylinder of water, and reconstructed the projection data using iterative reconstruction algorithms. The size of the source and the size of the surrounding cylinder were varied to evaluate the effects of photon attenuation and scatter on quantitative accuracy. We found that photon attenuation can reduce the measured concentration of radioactivity in a volume of interest in the center of a rat-sized cylinder of water by up to 50% when imaging with iodine-125, and up to 25% when imaging with technetium-99m. When imaging with iodine-125, the scatter-to-primary ratio can reach up to approximately 30%, and can cause overestimation of the radioactivity concentration when reconstructing data with attenuation correction. We varied the size of the source to evaluate partial volume errors, which we found to be a strong function of the size of the volume of interest and the spatial resolution. These errors can result in large (>50%) changes in the measured amount of radioactivity. The simulation results were compared with and found to agree with experimental measurements. The inclusion of attenuation correction in the reconstruction algorithm improved quantitative accuracy.
We also found that an improvement of the spatial resolution through the
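A back-of-envelope check shows why iodine-125 suffers roughly twice the attenuation of technetium-99m: the narrow-beam survival fraction is exp(-mu * depth), and the attenuation coefficient of water is much larger at I-125's ~30 keV emissions than at Tc-99m's 140 keV. The coefficients below are approximate textbook values, assumed for illustration.

```python
import math

def surviving_fraction(mu_per_cm, depth_cm):
    """Narrow-beam photon survival fraction after depth_cm of material."""
    return math.exp(-mu_per_cm * depth_cm)

depth = 2.0  # cm of water, roughly the radius of a rat-sized phantom
# Approximate linear attenuation coefficients of water (assumptions):
for label, mu in (("I-125 (~30 keV)", 0.38), ("Tc-99m (140 keV)", 0.15)):
    f = surviving_fraction(mu, depth)
    print(f"{label}: {1 - f:.0%} of photons attenuated over {depth} cm")
```

The resulting ~53% vs ~26% losses are consistent in magnitude with the "up to 50%" and "up to 25%" reductions the simulations report.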

  8. Assessing the accuracy of satellite derived global and national urban maps in Kenya.

    PubMed

    Tatem, A J; Noor, A M; Hay, S I

    2005-05-15

    Ninety percent of projected global urbanization will be concentrated in low income countries (United Nations, 2004). This will have considerable environmental, economic and public health implications for those populations. Objective and efficient methods of delineating urban extent are a cross-sectoral need complicated by a diversity of urban definition rubrics world-wide. Large-area maps of urban extents are becoming increasingly available in the public domain, as is a wide range of medium spatial resolution satellite imagery. Here we describe the extension of a methodology based on Landsat ETM and Radarsat imagery to the production of a human settlement map of Kenya. This map, together with five satellite imagery-derived global maps of urban extent, was then compared at the Kenya national level against an expert opinion coverage for accuracy assessment. The results showed the map produced using medium spatial resolution satellite imagery was of comparable accuracy to the expert opinion coverage. The five global urban maps exhibited a range of inaccuracies, emphasising that care should be taken with use of these maps at national and sub-national scale.

  9. ACSB: A minimum performance assessment

    NASA Technical Reports Server (NTRS)

    Jones, Lloyd Thomas; Kissick, William A.

    1988-01-01

    Amplitude companded sideband (ACSB) is a new modulation technique which uses a much smaller channel width than does conventional frequency modulation (FM). Among the requirements of a mobile communications system is adequate speech intelligibility. This paper explores this aspect of minimum required performance. First, the basic principles of ACSB are described, with emphasis on those features that affect speech quality. Second, the appropriate performance measures for ACSB are reviewed. Third, a subjective voice quality scoring method is used to determine the values of the performance measures that equate to the minimum level of intelligibility. It is assumed that the intelligibility of an FM system operating at 12 dB SINAD represents that minimum. It was determined that ACSB operating at 12 dB SINAD with an audio-to-pilot ratio of 10 dB provides approximately the same intelligibility as FM operating at 12 dB SINAD.

  10. Assessing the accuracy of selectivity as a basis for solvent screening in extractive distillation processes

    SciTech Connect

    Momoh, S.O.

    1991-01-01

    An important parameter for consideration in the screening of solvents for an extractive distillation process is selectivity at infinite dilution. The higher the selectivity, the better the solvent. This paper assesses the accuracy of using selectivity as a basis for solvent screening in extractive distillation processes. Three types of binary mixtures that are usually separated by an extractive distillation process are chosen for investigation. Having determined the optimum solvent feed rate to be two times the feed rate of the binary mixture, the total annual costs of extractive distillation processes for each of the chosen mixtures and for various solvents were calculated. The solvents are ranked on the basis of the total annual cost (obtained by design and costing equations) for the extractive distillation processes, and this ranking order is compared with that of selectivity at infinite dilution as determined by the UNIFAC method. This matching of selectivity with total annual cost does not produce a very good correlation.
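Comparing a selectivity ranking against a cost ranking is a rank-correlation question; a Spearman correlation sketch makes the idea concrete. The solvent selectivities and annual costs below are hypothetical numbers, not the paper's data.

```python
def rank(values, reverse=False):
    """1-based ranks of values (ascending by default, no tie handling)."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=reverse)
    r = [0] * len(values)
    for rank_pos, idx in enumerate(order, start=1):
        r[idx] = rank_pos
    return r

def spearman(a, b):
    """Spearman rho from two rank lists (no ties)."""
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(a, b))
    return 1 - 6 * d2 / (n * (n * n - 1))

selectivity = [3.1, 2.4, 4.0, 1.8, 2.9]       # higher is "better"
annual_cost = [1.10, 1.18, 1.02, 1.40, 1.25]  # lower is better (hypothetical M$/yr)
rho = spearman(rank(selectivity, reverse=True), rank(annual_cost))
print(f"Spearman rho = {rho:.2f}")
```

A rho near 1 would mean selectivity is a reliable cost proxy; the paper's point is that, for real mixtures, the agreement is weaker than this.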

  11. Accuracy assessment of Kinect for Xbox One in point-based tracking applications

    NASA Astrophysics Data System (ADS)

    Goral, Adrian; Skalski, Andrzej

    2015-12-01

    We present the accuracy assessment of a point-based tracking system built on Kinect v2. In our approach, color, IR and depth data were used to determine the positions of spherical markers. To accomplish this task, we calibrated the depth/infrared and color cameras using a custom method. As a reference tool we used Polaris Spectra optical tracking system. The mean error obtained within the range from 0.9 to 2.9 m was 61.6 mm. Although the depth component of the error turned out to be the largest, the random error of depth estimation was only 1.24 mm on average. Our Kinect-based system also allowed for reliable angular measurements within the range of ±20° from the sensor's optical axis.
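The headline figure above is a mean point-position error against the reference tracker, i.e. the average Euclidean distance between corresponding marker positions. A minimal sketch with invented coordinate pairs (mm):

```python
import math

def mean_error(measured, reference):
    """Mean 3D Euclidean distance between paired marker positions."""
    dists = [math.dist(m, r) for m, r in zip(measured, reference)]
    return sum(dists) / len(dists)

# Invented Kinect-style vs reference-tracker positions (mm), for illustration:
measured  = [(10.0, 5.0, 1500.0), (200.0, -40.0, 2100.0), (-90.0, 60.0, 2600.0)]
reference = [(12.0, 4.0, 1460.0), (198.0, -38.0, 2060.0), (-88.0, 61.0, 2555.0)]
print(f"mean 3D error = {mean_error(measured, reference):.1f} mm")
```

Decomposing the per-axis residuals the same way is what lets the authors attribute most of the error to the depth component.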

  12. Accuracy assessment of human trunk surface 3D reconstructions from an optical digitising system.

    PubMed

    Pazos, V; Cheriet, F; Song, L; Labelle, H; Dansereau, J

    2005-01-01

    The lack of reliable techniques to follow up scoliotic deformity from the external asymmetry of the trunk leads to a general use of X-rays and indices of spinal deformity. Young adolescents with idiopathic scoliosis need intensive follow-ups for many years and, consequently, they are repeatedly exposed to ionising radiation, which is hazardous to their long-term health. Furthermore, treatments attempt to improve both spinal and surface deformities, but internal indices do not describe the external asymmetry. The purpose of this study was to assess a commercial, optical 3D digitising system for the 3D reconstruction of the entire trunk for clinical assessment of external asymmetry. The resulting surface is a textured, high-density polygonal mesh. The accuracy assessment was based on repeated reconstructions of a manikin with markers fixed on it. The average normal distance between the reconstructed surfaces and the reference data (markers measured with CMM) was 1.1 ± 0.9 mm. PMID:15742714

  13. Accuracy Assessment of GPS Buoy Sea Level Measurements for Coastal Applications

    NASA Astrophysics Data System (ADS)

    Chiu, S.; Cheng, K.

    2008-12-01

    The GPS buoy in this study consists of a geodetic antenna on a compact floater, with the GPS receiver and power supply tethered to a boat. Coastal applications of GPS include monitoring of sea level and its change, calibration of satellite altimeters, modeling of hydrological or geophysical parameters, seafloor geodesy, and others. For these applications, understanding the overall data or model quality requires knowledge of the position accuracy of GPS buoys or GPS-equipped vessels. Newer GPS data processing techniques, e.g., Precise Point Positioning (PPP) and virtual reference stations (VRS), require a priori information obtained from a regional GPS network; while such a priori information can be obtained on land, it may not be available at sea. Hence, in this study, the GPS buoy was positioned with respect to an onshore GPS reference station using the traditional double-difference technique. Since the atmosphere decorrelates as the baseline (the distance between the buoy and the reference station) increases, the positioning accuracy consequently decreases. This study therefore aims to assess the buoy position accuracy as the baseline increases, in order to quantify the upper limit of sea level measured by the GPS buoy. A GPS buoy campaign was conducted by National Chung Cheng University in An Ping, Taiwan, with an 8-hour GPS buoy data collection. In addition, a GPS network containing 4 Continuous GPS (CGPS) stations in Taiwan was established to enable baselines of different lengths for buoy data processing. A vector relation from the network was utilized to find the correct ambiguities, which were applied to the long-baseline solution to eliminate the position error caused by incorrect ambiguities. After this procedure, a 3.6-cm discrepancy was found in the mean sea level solution between the long (~80 km) and the short (~1.5 km) baselines.
The discrepancy between a
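The double-difference technique used above works because differencing carrier-phase observations between two receivers and two satellites cancels both receiver and satellite clock terms. A toy sketch with invented observation values (cycles):

```python
def double_difference(phase, rx_a, rx_b, sat_i, sat_j):
    """phase[(receiver, satellite)] -> carrier-phase observation (cycles)."""
    return (phase[(rx_a, sat_i)] - phase[(rx_b, sat_i)]) \
         - (phase[(rx_a, sat_j)] - phase[(rx_b, sat_j)])

# Toy observations built as geometry + receiver clock + satellite clock:
geom = {("buoy", "G01"): 110.0, ("shore", "G01"): 112.5,
        ("buoy", "G07"): 95.0,  ("shore", "G07"): 96.0}
rx_clk = {"buoy": 3.2, "shore": -1.1}
sat_clk = {"G01": 0.7, "G07": -0.4}
obs = {(r, s): geom[(r, s)] + rx_clk[r] + sat_clk[s] for (r, s) in geom}

dd = double_difference(obs, "buoy", "shore", "G01", "G07")
dd_geom = double_difference(geom, "buoy", "shore", "G01", "G07")
print(dd, dd_geom)   # clock terms cancel: the two agree up to float rounding
```

What does not cancel is the differential atmospheric delay, which is why accuracy degrades as the buoy-to-shore baseline grows.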

  14. Diagnostic accuracy of refractometer and Brix refractometer to assess failure of passive transfer in calves: protocol for a systematic review and meta-analysis.

    PubMed

    Buczinski, S; Fecteau, G; Chigerwe, M; Vandeweerd, J M

    2016-06-01

    Calves are highly dependent on colostrum (and antibody) intake because they are born agammaglobulinemic. The transfer of passive immunity in calves can be assessed directly by measuring immunoglobulin G (IgG) or indirectly by refractometry or Brix refractometry; the latter are easier to perform routinely in the field. This paper presents a protocol for a systematic review and meta-analysis to assess the diagnostic accuracy of refractometry and Brix refractometry versus IgG measurement as the reference standard test. With this review protocol we aim to report refractometer and Brix refractometer accuracy in terms of sensitivity and specificity, and to quantify the impact of any study characteristic on test accuracy. PMID:27427188
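The sensitivity and specificity such a review pools are computed per study from a 2x2 table: a Brix cut-off flags failure of passive transfer (FPT), judged against the IgG reference. The cut-off, IgG threshold, and measurements below are illustrative assumptions, not results of the review.

```python
def sens_spec(brix, igg, brix_cutoff=8.4, igg_threshold=10.0):
    """FPT defined as serum IgG < igg_threshold g/L; test positive if Brix < cutoff."""
    tp = fn = tn = fp = 0
    for b, g in zip(brix, igg):
        fpt = g < igg_threshold          # reference-positive (true FPT)
        test_pos = b < brix_cutoff       # Brix test flags FPT
        if fpt and test_pos: tp += 1
        elif fpt: fn += 1
        elif test_pos: fp += 1
        else: tn += 1
    return tp / (tp + fn), tn / (tn + fp)

# Invented paired Brix (%) and serum IgG (g/L) measurements:
brix = [7.8, 9.1, 8.0, 8.9, 7.5, 9.4, 8.2, 8.6]
igg  = [6.0, 14.0, 9.0, 12.0, 5.5, 16.0, 11.0, 9.5]
se, sp = sens_spec(brix, igg)
print(f"sensitivity {se:.2f}, specificity {sp:.2f}")
```

Sliding the Brix cut-off trades sensitivity against specificity, which is why pooled estimates are reported per cut-off.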

  15. The Effects of Performance-Based Assessment Criteria on Student Performance and Self-Assessment Skills

    ERIC Educational Resources Information Center

    Fastre, Greet Mia Jos; van der Klink, Marcel R.; van Merrienboer, Jeroen J. G.

    2010-01-01

    This study investigated the effect of performance-based versus competence-based assessment criteria on task performance and self-assessment skills among 39 novice secondary vocational education students in the domain of nursing and care. In a performance-based assessment group students are provided with a preset list of performance-based…

  16. Performance Assessment in Language Testing

    ERIC Educational Resources Information Center

    Salmani Nodoushan, Mohammad Ali

    2008-01-01

    Over the past few decades, educators in general, and language teachers in specific, were more inclined towards using testing techniques that resembled real-life language performance. Unlike traditional paper-and-pencil language tests that required test-takers to attempt tests that were based on artificial and contrived language content,…

  17. Creating a Standard Set of Metrics to Assess Accuracy of Solar Forecasts: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Banunarayanan, V.; Brockway, A.; Marquis, M.; Haupt, S. E.; Brown, B.; Fowler, T.; Jensen, T.; Hamann, H.; Lu, S.; Hodge, B.; Zhang, J.; Florita, A.

    2013-12-01

    The U.S. Department of Energy (DOE) SunShot Initiative, launched in 2011, seeks to reduce the cost of solar energy systems by 75% from 2010 to 2020. In support of the SunShot Initiative, the DOE Office of Energy Efficiency and Renewable Energy (EERE) is partnering with the National Oceanic and Atmospheric Administration (NOAA) and solar energy stakeholders to improve solar forecasting. Through a funding opportunity announcement issued in April 2012, DOE is funding two teams, led by the National Center for Atmospheric Research (NCAR) and by IBM, to perform three key activities in order to improve solar forecasts. The teams will: (1) With DOE and NOAA's leadership and significant stakeholder input, develop a standardized set of metrics to evaluate forecast accuracy, and determine the baseline and target values for these metrics; (2) Conduct research that yields a transformational improvement in weather models and methods for forecasting solar irradiance and power; and (3) Incorporate solar forecasts into the system operations of the electric power grid, and evaluate the impact of forecast accuracy on the economics and reliability of operations using the defined, standard metrics. This paper will present preliminary results on the first activity: the development of a standardized set of metrics, baselines and target values. The results will include a proposed framework for metrics development, key categories of metrics, descriptions of each of the proposed set of specific metrics to measure forecast accuracy, feedback gathered from a range of stakeholders on the metrics, and processes to determine baselines and target values for each metric. The paper will also analyze the temporal and spatial resolutions under which these metrics would apply, and conclude with a summary of the work in progress on solar forecasting activities funded by DOE.
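Deterministic forecast-accuracy metric suites of this kind commonly include root-mean-square error, mean absolute error, and mean bias; the sketch below computes these three. The irradiance values are illustrative, and the actual DOE/NOAA metric set is not reproduced here.

```python
import math

def rmse(f, o):
    """Root-mean-square error between forecast f and observation o."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f, o)) / len(f))

def mae(f, o):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(f, o)) / len(f)

def mbe(f, o):
    """Mean bias error (positive = over-forecast)."""
    return sum(a - b for a, b in zip(f, o)) / len(f)

# Invented hourly global irradiance values (W/m^2):
forecast = [520.0, 610.0, 480.0, 700.0]
observed = [500.0, 640.0, 470.0, 690.0]
print(f"RMSE {rmse(forecast, observed):.1f}, "
      f"MAE {mae(forecast, observed):.1f}, MBE {mbe(forecast, observed):.1f}")
```

RMSE penalizes large misses more than MAE, while MBE exposes systematic over- or under-forecasting; a standard suite reports all three because no single number captures both spread and bias.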

  18. Accuracy Assessment of Direct Georeferencing for Photogrammetric Applications on Small Unmanned Aerial Platforms

    NASA Astrophysics Data System (ADS)

    Mian, O.; Lutes, J.; Lipa, G.; Hutton, J. J.; Gavelle, E.; Borghini, S.

    2016-03-01

    Microdrones md4-1000 quad-rotor VTOL UAV. The Sony A7R and each lens combination were focused and calibrated terrestrially using the Applanix camera calibration facility, and then integrated with the APX-15 GNSS-Inertial system using a custom mount specifically designed for UAV applications. The mount is constructed in such a way as to maintain the stability of both the interior orientation and IMU boresight calibration over shock and vibration, thus turning the Sony A7R into a metric imaging solution. In July and August 2015, Applanix and Avyon carried out a series of test flights of this system. The goal of these test flights was to assess the performance of the DMS APX-15 direct georeferencing system under various scenarios. Furthermore, an examination of how the DMS APX-15 can be used to produce accurate map products without the use of ground control points and with reduced sidelap was also carried out. Reducing the sidelap for survey missions performed by small UAVs can significantly increase the mapping productivity of these platforms. The area mapped during the first flight campaign was a 250m x 300m block and a 775m long railway corridor in a rural setting in Ontario, Canada. The second area mapped was a 450m long corridor over a dam known as Fryer Dam (over the Richelieu River in Quebec, Canada). Several ground control points were distributed within both test areas. The flight over the block area included 8 North-South lines and 1 cross strip flown at 80m AGL, resulting in a ~1cm GSD. The flight over the railway corridor included 2 North-South lines also flown at 80m AGL. Similarly, the flight over the dam corridor included 2 North-South lines flown at 50m AGL. The focus of this paper was to analyse the results obtained from the two corridors. Test results from both areas were processed using Direct Georeferencing techniques, and then compared for accuracy against the known positions of ground control points in each test area. 
The GNSS-Inertial data collected by the APX-15 was

  20. Transformational Leadership: Development and Performance Assessment

    ERIC Educational Resources Information Center

    Servais, Kristine A.

    2006-01-01

    Leadership is about transformation. It is the opportunity to transform people, places, and possibilities. The purpose of the study reported in this article was to identify means to assess leadership development and performance. Although leadership development and performance can be assessed, school systems often lack consistent methods,…

  1. Improving the assessment of ICESat water altimetry accuracy accounting for autocorrelation

    NASA Astrophysics Data System (ADS)

    Abdallah, Hani; Bailly, Jean-Stéphane; Baghdadi, Nicolas; Lemarquand, Nicolas

    2011-11-01

    Given that water resources are scarce and are strained by competing demands, it has become crucial to develop and improve techniques to observe the temporal and spatial variations in the inland water volume. Due to the lack of data and the heterogeneity of water level stations, remote sensing, and especially altimetry from space, appears as a complementary technique for water level monitoring. In addition to spatial resolution and sampling rates in space or time, one of the most relevant criteria for satellite altimetry on inland water is the accuracy of the elevation data. Here, the accuracy of the ICESat LIDAR altimetry product is assessed over the Great Lakes in North America. The accuracy assessment method used in this paper emphasizes autocorrelation in high temporal frequency ICESat measurements. It also considers uncertainties resulting from the in situ lake level reference data. A probabilistic upscaling process was developed. This process is based on several successive ICESat shots averaged in a spatial transect, accounting for autocorrelation between successive shots. The method also applies pre-processing of the ICESat data, with saturation correction of ICESat waveforms, spatial filtering to avoid measurement disturbance from the land-water transition effects on waveform saturation, and data selection to avoid trends in water elevations across space. Initially, this paper analyzes 237 collected ICESat transects, consistent with the available hydrometric ground stations for four of the Great Lakes. By adapting a geostatistical framework, a high frequency autocorrelation between successive shot elevation values was observed and then modeled for 45% of the 237 transects. The modeled autocorrelation was then used to estimate water elevations at the transect scale and the resulting uncertainty for the 117 transects without trend. This uncertainty was 8 times greater than the uncertainty usually computed when no temporal correlation is taken into account. 
This
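    The 8-fold inflation reported above is what autocorrelation does to the uncertainty of a transect mean. A minimal sketch, assuming an AR(1) correlation model with lag-one correlation ρ (an assumption made here for illustration; it is not the geostatistical model fitted in the study):

```python
def mean_variance_inflation(n, rho):
    """Ratio of Var(mean) of n AR(1)-correlated values to the i.i.d. case.

    Var(x̄) = (σ²/n) · [1 + (2/n) Σ_{k=1}^{n-1} (n−k) ρ^k]; the bracketed
    factor is the inflation returned here (1.0 when rho = 0).
    """
    s = sum((n - k) * rho ** k for k in range(1, n))
    return 1.0 + 2.0 * s / n

# With strong shot-to-shot correlation, the naive (independent-shots)
# standard error understates the true uncertainty severalfold:
inflation = mean_variance_inflation(n=100, rho=0.9)
```

    For 100 shots with ρ = 0.9 the variance inflation is roughly 17, i.e. the standard error is about 4 times the naive value, which is the same qualitative effect the abstract reports.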

  2. A simple device for high-precision head image registration: Preliminary performance and accuracy tests

    SciTech Connect

    Pallotta, Stefania

    2007-05-15

    The purpose of this paper is to present a new device for multimodal head study registration and to examine its performance in preliminary tests. The device consists of a system of eight markers fixed to mobile carbon pipes and bars which can be easily mounted on the patient's head using the ear canals and the nasal bridge. Four graduated scales fixed to the rigid support allow examiners to find the same device position on the patient's head during different acquisitions. The markers can be filled with appropriate substances for visualisation in computed tomography (CT), magnetic resonance, single photon emission computed tomography (SPECT) and positron emission tomography images. The device's rigidity and its position reproducibility were measured in 15 repeated CT acquisitions of the Alderson Rando anthropomorphic phantom and in two SPECT studies of a patient. The proposed system displays good rigidity and reproducibility characteristics. A relocation accuracy of less than 1.5 mm was found in more than 90% of the results. The registration parameters obtained using such a device were compared to those obtained using fiducial markers fixed on phantom and patient heads, resulting in differences of less than 1 deg. and 1 mm for rotation and translation parameters, respectively. Residual differences between fiducial marker coordinates in reference and in registered studies were less than 1 mm in more than 90% of the results, proving that the device performed as accurately as noninvasive stereotactic devices. Finally, an example of multimodal employment of the proposed device is reported.

  3. An accuracy assessment of different rigid body image registration methods and robotic couch positional corrections using a novel phantom

    SciTech Connect

    Arumugam, Sankar; Xing Aitang; Jameson, Michael G.; Holloway, Lois

    2013-03-15

    Purpose: Image guided radiotherapy (IGRT) using cone beam computed tomography (CBCT) images greatly reduces interfractional patient positional uncertainties. An understanding of uncertainties in the IGRT process itself is essential to ensure appropriate use of this technology. The purpose of this study was to develop a phantom capable of assessing the accuracy of IGRT hardware and software, including a 6 degrees of freedom patient positioning system, and to investigate the accuracy of the Elekta XVI system in combination with the HexaPOD robotic treatment couch top. Methods: The constructed phantom enabled verification of the three automatic rigid body registrations (gray value, bone, seed) available in the Elekta XVI software and includes an adjustable mount that introduces known rotational offsets to the phantom from its reference position. Repeated positioning of the phantom was undertaken to assess phantom rotational accuracy. Using this phantom, the accuracy of the XVI registration algorithms was assessed considering CBCT hardware factors and image resolution, together with the residual error in the overall image guidance process when positional corrections were performed through the HexaPOD couch system. Results: The phantom positioning was found to be within 0.04° (σ = 0.12°), 0.02° (σ = 0.13°), and -0.03° (σ = 0.06°) in the X, Y, and Z directions, respectively, enabling assessment of IGRT with a 6 degrees of freedom patient positioning system. The gray value registration algorithm showed the least error in calculated offsets, with a maximum mean difference of -0.2 mm (σ = 0.4 mm) in translational and -0.1° (σ = 0.1°) in rotational directions for all image resolutions. Bone and seed registration were found to be sensitive to CBCT image resolution. Seed registration was found to be most sensitive, demonstrating a maximum mean error of -0.3 mm (σ = 0.9 mm) and -1.4° (σ = 1.7°) in translational

  4. Assessment of Accuracy and Reliability in Acetabular Cup Placement Using an iPhone/iPad System.

    PubMed

    Kurosaka, Kenji; Fukunishi, Shigeo; Fukui, Tomokazu; Nishio, Shoji; Fujihara, Yuki; Okahisa, Shohei; Takeda, Yu; Daimon, Takashi; Yoshiya, Shinichi

    2016-07-01

    Implant positioning is one of the critical factors that influences postoperative outcome of total hip arthroplasty (THA). Malpositioning of the implant may lead to an increased risk of postoperative complications such as prosthetic impingement, dislocation, restricted range of motion, polyethylene wear, and loosening. In 2012, the intraoperative use of smartphone technology in THA for improved accuracy of acetabular cup placement was reported. The purpose of this study was to examine the accuracy of an iPhone/iPad-guided technique in positioning the acetabular cup in THA compared with the reference values obtained from the image-free navigation system in a cadaveric experiment. Five hips of 5 embalmed whole-body cadavers were used in the study. Seven orthopedic surgeons (4 residents and 3 senior hip surgeons) participated in the study. All of the surgeons examined each of the 5 hips 3 times. The target angle was 38°/19° for operative inclination/anteversion angles, which corresponded to radiographic inclination/anteversion angles of 40°/15°. The simultaneous assessment using the navigation system showed mean±SD radiographic alignment angles of 39.4°±2.6° and 16.4°±2.6° for inclination and anteversion, respectively. Assessment of cup positioning based on Lewinnek's safe zone criteria showed all of the procedures (n=105) achieved acceptable alignment within the safe zone. A comparison of the performances by resident and senior hip surgeons showed no significant difference between the groups (P=.74 for inclination and P=.81 for anteversion). The iPhone/iPad technique examined in this study could achieve acceptable performance in determining cup alignment in THA regardless of the surgeon's expertise. [Orthopedics. 2016; 39(4):e621-e626.]. PMID:27322169
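    The stated correspondence between the operative target (38°/19°) and the radiographic angles (40°/15°) is consistent with Murray's standard definitions of cup orientation. A sketch assuming those relations (an assumption made here; the abstract does not give its conversion formulas):

```python
import math

def operative_from_radiographic(ri_deg, ra_deg):
    """Convert radiographic inclination/anteversion (RI, RA) to operative
    angles, assuming Murray's definitions:
        sin(OI) = sin(RI) · cos(RA)
        tan(OA) = tan(RA) / cos(RI)
    """
    ri, ra = math.radians(ri_deg), math.radians(ra_deg)
    oi = math.degrees(math.asin(math.sin(ri) * math.cos(ra)))  # operative inclination
    oa = math.degrees(math.atan(math.tan(ra) / math.cos(ri)))  # operative anteversion
    return oi, oa

# Radiographic 40°/15° maps to roughly 38.4°/19.3° operative,
# matching the 38°/19° target quoted in the abstract.
oi, oa = operative_from_radiographic(40.0, 15.0)
```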

  6. Use of Selected Goodness-of-Fit Statistics to Assess the Accuracy of a Model of Henry Hagg Lake, Oregon

    NASA Astrophysics Data System (ADS)

    Rounds, S. A.; Sullivan, A. B.

    2004-12-01

    Assessing a model's ability to reproduce field data is a critical step in the modeling process. For any model, some method of determining goodness-of-fit to measured data is needed to aid in calibration and to evaluate model performance. Visualizations and graphical comparisons of model output are an excellent way to begin that assessment. At some point, however, model performance must be quantified. Goodness-of-fit statistics, including the mean error (ME), mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination, typically are used to measure model accuracy. Statistical tools such as the sign test or Wilcoxon test can be used to test for model bias. The runs test can detect phase errors in simulated time series. Each statistic is useful, but each has its limitations. None provides a complete quantification of model accuracy. In this study, a suite of goodness-of-fit statistics was applied to a model of Henry Hagg Lake in northwest Oregon. Hagg Lake is a man-made reservoir on Scoggins Creek, a tributary to the Tualatin River. Located on the west side of the Portland metropolitan area, the Tualatin Basin is home to more than 450,000 people. Stored water in Hagg Lake helps to meet the agricultural and municipal water needs of that population. Future water demands have caused water managers to plan for a potential expansion of Hagg Lake, doubling its storage to roughly 115,000 acre-feet. A model of the lake was constructed to evaluate the lake's water quality and estimate how that quality might change after raising the dam. The laterally averaged, two-dimensional U.S. Army Corps of Engineers model CE-QUAL-W2 was used to construct the Hagg Lake model. The model was calibrated for the years 2000 and 2001 and confirmed with data from 2002 and 2003; modeled parameters included water temperature, ammonia, nitrate, phosphorus, algae, zooplankton, and dissolved oxygen. Several goodness-of-fit statistics were used to quantify model accuracy and bias. Model
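    The statistics named above have compact textbook definitions. A minimal sketch (generic formulas; the coefficient of determination is computed here in its 1 − SSres/SStot form, which may differ from the exact convention used in the study):

```python
import math

def goodness_of_fit(sim, obs):
    """ME, MAE, RMSE, and coefficient of determination for paired series."""
    e = [s - o for s, o in zip(sim, obs)]
    n = len(e)
    me = sum(e) / n                                    # mean error (signed bias)
    mae = sum(abs(x) for x in e) / n                   # mean absolute error
    rmse = math.sqrt(sum(x * x for x in e) / n)        # root mean square error
    obar = sum(obs) / n
    ss_res = sum(x * x for x in e)
    ss_tot = sum((o - obar) ** 2 for o in obs)
    r2 = 1.0 - ss_res / ss_tot                         # coefficient of determination
    return {"ME": me, "MAE": mae, "RMSE": rmse, "R2": r2}

# Hypothetical simulated vs. observed water temperatures (°C)
stats = goodness_of_fit([12.1, 14.0, 15.8, 17.2], [12.0, 14.5, 15.5, 17.0])
```

    As the abstract notes, no single number here suffices: ME can mask compensating errors that MAE and RMSE expose, and none of the four detects phase errors.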

  7. A retrospective study to validate an intraoperative robotic classification system for assessing the accuracy of kirschner wire (K-wire) placements with postoperative computed tomography classification system for assessing the accuracy of pedicle screw placements

    PubMed Central

    Tsai, Tai-Hsin; Wu, Dong-Syuan; Su, Yu-Feng; Wu, Chieh-Hsin; Lin, Chih-Lung

    2016-01-01

    The purpose of this retrospective study is to validate an intraoperative robotic grading classification system for assessing the accuracy of Kirschner-wire (K-wire) placements against the postoperative computed tomography (CT)-based classification system for assessing the accuracy of pedicle screw placements. We conducted a retrospective review of prospectively collected data from 35 consecutive patients who underwent 176 robot-assisted pedicle screw instrumentations at Kaohsiung Medical University Hospital from September 2014 to November 2015. During the operation, we used a robotic grading classification system for verifying the intraoperative accuracy of K-wire placements. Three months after surgery, we used the common CT-based classification system to assess the postoperative accuracy of pedicle screw placements. The distributions of accuracy between the intraoperative robot-assisted and various postoperative CT-based classification systems were compared using kappa statistics of agreement. The intraoperative accuracies of K-wire placements before and after repositioning were classified as excellent (131/176, 74.4% and 133/176, 75.6%, respectively), satisfactory (36/176, 20.5% and 41/176, 23.3%, respectively), and malpositioned (9/176, 5.1% and 2/176, 1.1%, respectively), and the corresponding placements were then evaluated in the postoperative CT-based classification systems. No screw placements were evaluated as unacceptable under any of these systems. Kappa statistics revealed no significant differences between the proposed system and the aforementioned classification systems (P < 0.001). Our results revealed no significant differences between the intraoperative robotic grading system and various postoperative CT-based grading systems. The robotic grading classification system is a feasible method for evaluating the accuracy of K-wire placements. Using the intraoperative robot grading system to classify the accuracy of K-wire placements enables predicting the postoperative accuracy of

  8. A retrospective study to validate an intraoperative robotic classification system for assessing the accuracy of kirschner wire (K-wire) placements with postoperative computed tomography classification system for assessing the accuracy of pedicle screw placements.

    PubMed

    Tsai, Tai-Hsin; Wu, Dong-Syuan; Su, Yu-Feng; Wu, Chieh-Hsin; Lin, Chih-Lung

    2016-09-01

    The purpose of this retrospective study is to validate an intraoperative robotic grading classification system for assessing the accuracy of Kirschner-wire (K-wire) placements against the postoperative computed tomography (CT)-based classification system for assessing the accuracy of pedicle screw placements. We conducted a retrospective review of prospectively collected data from 35 consecutive patients who underwent 176 robot-assisted pedicle screw instrumentations at Kaohsiung Medical University Hospital from September 2014 to November 2015. During the operation, we used a robotic grading classification system for verifying the intraoperative accuracy of K-wire placements. Three months after surgery, we used the common CT-based classification system to assess the postoperative accuracy of pedicle screw placements. The distributions of accuracy between the intraoperative robot-assisted and various postoperative CT-based classification systems were compared using kappa statistics of agreement. The intraoperative accuracies of K-wire placements before and after repositioning were classified as excellent (131/176, 74.4% and 133/176, 75.6%, respectively), satisfactory (36/176, 20.5% and 41/176, 23.3%, respectively), and malpositioned (9/176, 5.1% and 2/176, 1.1%, respectively), and the corresponding placements were then evaluated in the postoperative CT-based classification systems. No screw placements were evaluated as unacceptable under any of these systems. Kappa statistics revealed no significant differences between the proposed system and the aforementioned classification systems (P < 0.001). Our results revealed no significant differences between the intraoperative robotic grading system and various postoperative CT-based grading systems. The robotic grading classification system is a feasible method for evaluating the accuracy of K-wire placements. Using the intraoperative robot grading system to classify the accuracy of K-wire placements enables predicting the postoperative accuracy of pedicle screw
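    The kappa statistics of agreement used in this study can be illustrated with Cohen's kappa computed from a confusion matrix of paired grades. The counts below are hypothetical and chosen only to resemble a 176-screw cohort; they are not the study's data:

```python
def cohens_kappa(matrix):
    """Cohen's kappa from a square confusion matrix of paired grade counts."""
    k = len(matrix)
    total = sum(sum(row) for row in matrix)
    po = sum(matrix[i][i] for i in range(k)) / total          # observed agreement
    row_m = [sum(row) / total for row in matrix]
    col_m = [sum(matrix[i][j] for i in range(k)) / total for j in range(k)]
    pe = sum(r * c for r, c in zip(row_m, col_m))             # chance agreement
    return (po - pe) / (1.0 - pe)

# Hypothetical agreement between intraoperative robot grades (rows:
# excellent/satisfactory/malpositioned) and CT-based grades (columns)
kappa = cohens_kappa([[120, 10, 1],
                      [8, 30, 2],
                      [0, 1, 4]])
```

    Kappa near 1 indicates agreement well beyond chance; values around 0.6-0.8 are conventionally read as substantial agreement.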

  9. Pareto-based evolutionary algorithms for the calculation of transformation parameters and accuracy assessment of historical maps

    NASA Astrophysics Data System (ADS)

    Manzano-Agugliaro, F.; San-Antonio-Gómez, C.; López, S.; Montoya, F. G.; Gil, C.

    2013-08-01

    When historical map data are compared with modern cartography, the old map coordinates must be transformed to the current system. However, historical data often exhibit heterogeneous quality. In calculating the transformation parameters between the historical and modern maps, it is often necessary to discard highly uncertain data. An optimal balance between the objectives of minimising the transformation error and eliminating as few points as possible can be achieved by generating a Pareto front of solutions using evolutionary genetic algorithms. The aim of this paper is to assess the performance of evolutionary algorithms in determining the accuracy of historical maps in regard to modern cartography. When applied to the 1787 Tomas Lopez map, the use of evolutionary algorithms reduces the linear error by 40% while eliminating only 2% of the data points. The main conclusion of this paper is that evolutionary algorithms provide a promising alternative for the transformation of historical map coordinates and determining the accuracy of historical maps in regard to modern cartography, particularly when the positional quality of the data points used cannot be assured.
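    The trade-off described above, transformation error versus number of discarded points, defines a two-objective Pareto front. A minimal sketch of the non-dominated filter such fronts rely on (the candidate values are hypothetical, and this is only the dominance test, not the evolutionary algorithm itself):

```python
def pareto_front(solutions):
    """Return the non-dominated subset for two objectives, both minimized
    (e.g. transformation error in metres, number of discarded points).
    Solutions tied on both objectives are treated as a single point."""
    front = []
    for a in solutions:
        dominated = any(
            b[0] <= a[0] and b[1] <= a[1] and b != a for b in solutions
        )
        if not dominated:
            front.append(a)
    return front

# Hypothetical (linear_error_m, points_removed) candidates
cands = [(120.0, 0), (80.0, 2), (75.0, 10), (85.0, 3), (70.0, 25)]
front = pareto_front(cands)  # (85.0, 3) is dominated by (80.0, 2)
```

    Picking a point such as (80.0, 2) from the front mirrors the paper's result: a large error reduction while discarding only a small fraction of the data.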

  10. Assessment of the labelling accuracy of spanish semipreserved anchovies products by FINS (forensically informative nucleotide sequencing).

    PubMed

    Velasco, Amaya; Aldrey, Anxela; Pérez-Martín, Ricardo I; Sotelo, Carmen G

    2016-06-01

    Anchovies have been traditionally captured and processed for human consumption for millennia. In the case of Spain, ripened and salted anchovies are a delicacy which, in some cases, can reach high commercial values. Although there have been a number of studies presenting DNA methodologies for the identification of anchovies, this is one of the first studies investigating the level of mislabelling in this kind of product in Europe. Sixty-three commercial semipreserved anchovy products were collected in different types of food markets in four Spanish cities to check labelling accuracy. Species determination in these commercial products was performed by sequencing two different cyt-b mitochondrial DNA fragments. Results revealed mislabelling levels higher than 15%, which the authors consider relatively high given the importance of the product. The most frequent substitute species was the Argentine anchovy, Engraulis anchoita, which can be interpreted as an economic fraud.

  11. Violence risk assessment and women: predictive accuracy of the HCR-20 in a civil psychiatric sample.

    PubMed

    Garcia-Mansilla, Alexandra; Rosenfeld, Barry; Cruise, Keith R

    2011-01-01

    Research to date has not adequately demonstrated whether the HCR-20 Violence Risk Assessment Scheme (HCR-20; Webster, Douglas, Eaves, & Hart, 1997), a structured violence risk assessment measure with a robust literature supporting its validity in male samples, is a valid indicator of violence risk in women. This study utilized data from the MacArthur Study of Mental Disorder and Violence to retrospectively score an abbreviated version of HCR-20 in 827 civil psychiatric patients. HCR-20 scores and predictive accuracy of community violence were compared for men and women. Results suggested that the HCR-20 is slightly, but not significantly, better for evaluating future risk for violence in men than in women, although the magnitude of the gender differences was small and was largely limited to historical factors. The results do not indicate that the HCR-20 needs to be tailored for use in women or that it should not be used in women, but they do highlight that the HCR-20 should be used cautiously and with full awareness of its potential limitations in women.

  12. Accuracy Assessment of GO Pro Hero 3 (black) Camera in Underwater Environment

    NASA Astrophysics Data System (ADS)

    Helmholz, P.; Long, J.; Munsie, T.; Belton, D.

    2016-06-01

    Modern digital cameras are increasing in quality whilst decreasing in size. In the last decade, a number of waterproof consumer digital cameras (action cameras) have become available, which often cost less than $500. A possible application of such action cameras is in the field of underwater photogrammetry, especially given that the change of medium to underwater can in turn counteract the distortions present. The goal of this paper is to investigate the suitability of such action cameras for underwater photogrammetric applications, focusing on the stability of the camera and the accuracy of the derived coordinates for possible photogrammetric applications. For this paper, a series of image sequences was captured in a water tank. A calibration frame was placed in the water tank, allowing the calibration of the camera and the validation of the measurements using check points. The accuracy assessment covered three test sets operating three GoPro sports cameras of the same model (Hero 3 black). The test sets included the handling of the camera in a controlled manner, where the camera was only dunked into the water tank using 7MP and 12MP resolution, and a rough handling, where the camera was shaken as well as being removed from the waterproof case using 12MP resolution. The tests showed that camera stability was given, with a maximum standard deviation of the camera constant σc of 0.0031 mm for 7MP (for an average c of 2.720 mm) and 0.0072 mm for 12MP (for an average c of 3.642 mm). For the 7MP test series, the residual test of the check points gave the largest rms value of only 0.450 mm and the largest maximal residual of only 2.5 mm. For the 12MP test series, the maximum rms value was 0.653 mm.

  13. Finite-volume versus streaming-based lattice Boltzmann algorithm for fluid-dynamics simulations: A one-to-one accuracy and performance study.

    PubMed

    Shrestha, Kalyan; Mompean, Gilmar; Calzavarini, Enrico

    2016-02-01

    A finite-volume (FV) discretization method for the lattice Boltzmann (LB) equation, which combines high accuracy with limited computational cost, is presented. In order to assess the performance of the FV method, we carry out a systematic comparison, focused on accuracy and computational performance, with the standard streaming lattice Boltzmann equation algorithm. In particular, we aim at clarifying whether and under which conditions the proposed algorithm, and more generally any FV algorithm, can be taken as the method of choice in fluid-dynamics LB simulations. For this reason the comparative analysis is further extended to the case of realistic flows, in particular thermally driven flows in turbulent conditions. We report the successful simulation of a high-Rayleigh-number convective flow performed by a lattice Boltzmann FV-based algorithm with wall grid refinement.

  14. The Impact of Self-Evaluation Instruction on Student Self-Evaluation, Music Performance, and Self-Evaluation Accuracy

    ERIC Educational Resources Information Center

    Hewitt, Michael P.

    2011-01-01

    The author sought to determine whether self-evaluation instruction had an impact on student self-evaluation, music performance, and self-evaluation accuracy of music performance among middle school instrumentalists. Participants (N = 211) were students at a private middle school located in a metropolitan area of a mid-Atlantic state. Students in…

  15. Prostate Localization on Daily Cone-Beam Computed Tomography Images: Accuracy Assessment of Similarity Metrics

    SciTech Connect

    Kim, Jinkoo; Hammoud, Rabih; Pradhan, Deepak; Zhong Hualiang; Jin, Ryan Y.; Movsas, Benjamin; Chetty, Indrin J.

    2010-07-15

    Purpose: To evaluate different similarity metrics (SM) using natural calcifications and observation-based measures to determine the most accurate prostate and seminal vesicle localization on daily cone-beam CT (CBCT) images. Methods and Materials: CBCT images of 29 patients were retrospectively analyzed; 14 patients with prostate calcifications (calcification data set) and 15 patients without calcifications (no-calcification data set). Three groups of test registrations were performed. Test 1: 70 CT/CBCT pairs from the calcification data set were registered using 17 SMs (6,580 registrations) and compared using the calcification mismatch error as an endpoint. Test 2: Using the four best SMs from Test 1, 75 CT/CBCT pairs in the no-calcification data set were registered (300 registrations). Accuracy of contour overlays was ranked visually. Test 3: For the best SM from Tests 1 and 2, accuracy was estimated using 356 CT/CBCT registrations. Additionally, target expansion margins were investigated for generating registration regions of interest. Results: Test 1: Incremental sign correlation (ISC), gradient correlation (GC), gradient difference (GD), and normalized cross correlation (NCC) showed the smallest errors (μ ± σ: 1.6 ± 0.9 mm to 2.9 ± 2.1 mm). Test 2: Two of the three reviewers ranked GC higher. Test 3: Using GC, 96% of registrations showed <3-mm error when calcifications were filtered. Errors were left/right: 0.1 ± 0.5 mm, anterior/posterior: 0.8 ± 1.0 mm, and superior/inferior: 0.5 ± 1.1 mm. The existence of calcifications increased the success rate to 97%. Expansion margins of 4-10 mm were equally successful. Conclusion: Gradient-based SMs were most accurate. Estimated error was found to be <3 mm (1.1 mm SD) in 96% of the registrations. Results suggest that the contour expansion margin should be no less than 4 mm.
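    Of the similarity metrics named above, normalized cross correlation (NCC) has the most compact definition. A minimal sketch on flattened 1-D intensity arrays (illustrative only; the XVI implementation details are not given in the abstract):

```python
import math

def ncc(a, b):
    """Normalized cross correlation of two equal-length intensity arrays."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    da = [x - ma for x in a]                    # zero-mean copies
    db = [y - mb for y in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den

# A uniform intensity shift leaves NCC unchanged (both patches correlate
# perfectly here), which is what makes intensity-based metrics usable
# across the CT-to-CBCT contrast difference.
same = ncc([1, 2, 3, 4], [11, 12, 13, 14])
```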

  16. Accuracy of task recall for epidemiological exposure assessment to construction noise

    PubMed Central

    Reeb-Whitaker, C; Seixas, N; Sheppard, L; Neitzel, R

    2004-01-01

    Aims: To validate the accuracy of construction worker recall of task and environment based information; and to evaluate the effect of task recall on estimates of noise exposure. Methods: A cohort of 25 construction workers recorded tasks daily and had dosimetry measurements weekly for six weeks. Worker recall of tasks reported on the daily activity cards was validated with research observations and compared directly to task recall at a six-month interview. Results: The mean LEQ noise exposure level (dBA) from dosimeter measurements was 89.9 (n = 61) and 83.3 (n = 47) for carpenters and electricians, respectively. The percentage time at tasks reported during the interview was compared to that calculated from daily activity cards; only 2/22 tasks were different at the nominal 5% significance level. The accuracy, based on bias and precision, of percentage time reported for tasks from the interview was 53–100% (median 91%). For carpenters, the noise estimates derived from activity cards (mean 91.9 dBA) were not different from those derived from the questionnaire (mean 91.7 dBA). This trend held for electricians as well. For all subjects, noise estimates derived from the activity card and the questionnaire were strongly correlated with dosimetry measurements. The average difference between the noise estimate derived from the questionnaire and dosimetry measurements was 2.0 dBA, and was independent of the actual exposure level. Conclusions: Six months after tasks were performed, construction workers were able to accurately recall the percentage time they spent at various tasks. Estimates of noise exposure based on long-term recall (questionnaire) were no different from estimates derived from daily activity cards and were strongly correlated with dosimetry measurements, overestimating the level on average by 2.0 dBA. PMID:14739379
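
Task-based noise estimates like those above combine per-task levels on an energy basis rather than by arithmetic averaging. Here is a minimal sketch of the standard time-weighted LEQ computation (a generic formula; the paper's exact estimation procedure is not given in the abstract):

```python
import math

def leq(levels_dba, durations):
    """Time-weighted equivalent continuous noise level (LEQ) in dBA.

    Per-task levels are averaged on an energy basis:
        LEQ = 10 * log10( sum(t_i * 10**(L_i / 10)) / sum(t_i) )
    so louder tasks dominate the estimate.
    """
    energy = sum(t * 10 ** (level / 10.0)
                 for level, t in zip(levels_dba, durations))
    return 10.0 * math.log10(energy / sum(durations))

# Half a shift at 95 dBA and half at 85 dBA averages to ~92.4 dBA,
# well above the 90 dBA arithmetic mean.
print(round(leq([95.0, 85.0], [4.0, 4.0]), 1))  # → 92.4
```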

  17. Assessing BMP Performance Using Microtox Toxicity Analysis

    EPA Science Inventory

    Best Management Practices (BMPs) have been shown to be effective in reducing runoff and pollutants from urban areas and thus provide a mechanism to improve downstream water quality. Currently, BMP performance regarding water quality improvement is assessed through measuring each...

  18. Assessing Vocal Performances Using Analytical Assessment: A Case Study

    ERIC Educational Resources Information Center

    Gynnild, Vidar

    2016-01-01

    This study investigated ways to improve the appraisal of vocal performances within a national academy of music. Since a criterion-based assessment framework had already been adopted, the conceptual foundation of an assessment rubric was used as a guide in an action research project. The group of teachers involved wanted to explore thinking…

  19. The Assessment of Performance in Science Project.

    ERIC Educational Resources Information Center

    Driver, Rosalind; Worsley, Christopher

    1979-01-01

    Described are national methods of assessing and monitoring the achievement in science of students aged 11, 13, and 16 years in England and Wales. The tasks of the Assessment of Performance Unit (APU), a unit within the Department of Education and Science, are also described. (HM)

  20. Personality, Assessment Methods and Academic Performance

    ERIC Educational Resources Information Center

    Furnham, Adrian; Nuygards, Sarah; Chamorro-Premuzic, Tomas

    2013-01-01

    This study examines the relationship between personality and two different academic performance (AP) assessment methods, namely exams and coursework. It aimed to examine whether the relationship between traits and AP was consistent across self-reported versus documented exam results, two different assessment techniques and across different…

  1. Towards an assessment of the accuracy of density functional theory for first principles simulations of water

    NASA Astrophysics Data System (ADS)

    Grossman, Jeffrey C.; Schwegler, Eric; Draeger, Erik W.; Gygi, François; Galli, Giulia

    2004-01-01

    A series of Car-Parrinello (CP) molecular dynamics simulations of water are presented, aimed at assessing the accuracy of density functional theory in describing the structural and dynamical properties of water at ambient conditions. We found negligible differences in structural properties obtained using the Perdew-Burke-Ernzerhof or the Becke-Lee-Yang-Parr exchange and correlation energy functionals; we also found that size effects, although not fully negligible when using 32 molecule cells, are rather small. In addition, we identified a wide range of values of the fictitious electronic mass (μ) entering the CP Lagrangian for which the electronic ground state is accurately described, yielding trajectories and average properties that are independent of the value chosen. However, care must be exercised not to carry out simulations outside this range, where structural properties may artificially depend on μ. In the case of an accurate description of the electronic ground state, and in the absence of proton quantum effects, we obtained an oxygen-oxygen correlation function that is overstructured compared to experiment, and a diffusion coefficient which is approximately ten times smaller.

  2. Cold pressor stress induces opposite effects on cardioceptive accuracy dependent on assessment paradigm.

    PubMed

    Schulz, André; Lass-Hennemann, Johanna; Sütterlin, Stefan; Schächinger, Hartmut; Vögele, Claus

    2013-04-01

    Interoception depends on visceral afferent neurotraffic and central control processes. Physiological arousal and organ activation provide the biochemical and mechanical basis for visceral afferent neurotraffic. Perception of visceral symptoms occurs when attention is directed toward body sensations. Clinical studies suggest that stress contributes to the generation of visceral symptoms. However, during stress exposure attention is normally shifted away from bodily signals. The net effects of stress on interoception therefore remain unclear. We investigated the impact of the cold pressor test or a control intervention (each n = 21) on three established laboratory paradigms for assessing cardioceptive accuracy (CA): in the Schandry paradigm, participants were asked to count heartbeats, while in the Whitehead tasks subjects were asked to rate whether a cardiac sensation appeared simultaneously with an auditory or visual stimulus. CA was increased by stress when attention was focused on visceral sensations (Schandry), while it decreased when attention was additionally directed toward external stimuli (visual Whitehead). Explanations for these results are offered in terms of internal versus external deployment of attention, as well as specific effects of the cold pressor on the cardiovascular system.
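
The Schandry heartbeat-counting task is usually scored with a simple accuracy index. The abstract does not give the paper's exact formula, so the widely used convention below is an assumption:

```python
def heartbeat_counting_accuracy(recorded, counted):
    """Cardioceptive accuracy for a heartbeat-counting (Schandry-type)
    task, using a common scoring convention (an assumption; the paper's
    exact formula is not stated in the abstract):

        CA = mean over intervals of 1 - |actual - counted| / actual

    1.0 means the participant counted every heartbeat exactly.
    """
    scores = [1.0 - abs(actual - count) / actual
              for actual, count in zip(recorded, counted)]
    return sum(scores) / len(scores)

# Three counting intervals: exact, undercount by 6, overcount by 4.
print(heartbeat_counting_accuracy([50, 60, 40], [50, 54, 44]))  # ≈ 0.93
```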

  3. The FES2014 tidal atlas, accuracy assessment for satellite altimetry and other geophysical applications

    NASA Astrophysics Data System (ADS)

    Lyard, Florent Henri; Carrère, Loren; Cancet, Mathilde; Boy, Jean-Paul; Gégout, Pascal; Lemoine, Jean-Michel

    2016-04-01

    The FES2014 tidal atlas (elaborated in a CNES-supported joint project involving the LEGOS laboratory, CLS and Noveltis) is the latest release in the FES atlas series. Based on finite element hydrodynamic modelling with data assimilation, the FES atlases are routinely improved by taking advantage of the increasing duration of satellite altimetry missions. However, the most remarkable improvement in the FES2014 atlas is the unprecedentedly low level of prior misfits (i.e. between the hydrodynamic simulations and data), typically less than 1.3 centimeters RMS for the ocean M2 tide. This makes the data assimilation step much more reliable and more consistent with the true tidal dynamics, especially in shelf and coastal seas, and diminishes the sensitivity of the accuracy to the observation distribution (extremely sparse or nonexistent in the high latitudes). The FES2014 atlas has been validated and assessed in various geophysical applications (satellite altimetry corrections, gravimetry, etc.), showing significant improvements compared to previous FES releases and other state-of-the-art tidal atlases (such as DTU10, GOT4.8, TPXO8).

  4. Accuracy assessment and automation of free energy calculations for drug design.

    PubMed

    Christ, Clara D; Fox, Thomas

    2014-01-27

    As the free energy of binding of a ligand to its target is one of the crucial optimization parameters in drug design, its accurate prediction is highly desirable. In the present study we have assessed the average accuracy of free energy calculations for a total of 92 ligands binding to five different targets. To make this study and future larger scale applications possible we automated the setup procedure. Starting from user defined binding modes, the procedure decides which ligands to connect via a perturbation based on maximum common substructure criteria and produces all necessary parameter files for free energy calculations in AMBER 11. For the systems investigated, errors due to insufficient sampling were found to be substantial in some cases whereas differences in estimators (thermodynamic integration (TI) versus multistate Bennett acceptance ratio (MBAR)) were found to be negligible. Analytical uncertainty estimates calculated from a single free energy calculation were found to be much smaller than the sample standard deviation obtained from two independent free energy calculations. Agreement with experiment was found to be system dependent ranging from excellent to mediocre (RMSE = [0.9, 8.2, 4.7, 5.7, 8.7] kJ/mol). When restricting analyses to free energy calculations with sample standard deviations below 1 kJ/mol agreement with experiment improved (RMSE = [0.8, 6.9, 1.8, 3.9, 5.6] kJ/mol).
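
Of the two estimators compared above, thermodynamic integration has the simpler form: ΔG = ∫₀¹ ⟨∂H/∂λ⟩ dλ. Here is a minimal sketch using trapezoidal quadrature over sampled λ windows, with hypothetical ensemble averages; production workflows such as the AMBER setup described above use their own sampling and quadrature machinery:

```python
def ti_free_energy(lambdas, dhdl_means):
    """Thermodynamic integration (TI) free-energy estimate:
    dG = integral_0^1 <dH/dlambda> dlambda, approximated by the
    trapezoidal rule over the sampled lambda windows.
    """
    total = 0.0
    for i in range(len(lambdas) - 1):
        width = lambdas[i + 1] - lambdas[i]
        total += 0.5 * (dhdl_means[i] + dhdl_means[i + 1]) * width
    return total

# Hypothetical ensemble averages of dH/dlambda (kJ/mol) at five windows;
# real runs would take these from the sampled trajectories.
print(ti_free_energy([0.0, 0.25, 0.5, 0.75, 1.0],
                     [12.0, 8.0, 5.0, 3.0, 2.0]))  # → 5.75
```

The sampling errors the paper highlights enter through the ⟨∂H/∂λ⟩ averages; repeating independent runs and comparing the resulting ΔG values, as the authors did, exposes uncertainty that a single-run analytical estimate misses.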

  5. Accuracy of Cameriere's cut-off value for third molar in assessing 18 years of age.

    PubMed

    De Luca, S; Biagi, R; Begnoni, G; Farronato, G; Cingolani, M; Merelli, V; Ferrante, L; Cameriere, R

    2014-02-01

    Due to increasingly numerous international migrations, estimating the age of unaccompanied minors is becoming of enormous significance for forensic professionals who are required to deliver expert opinions. The third molar tooth is one of the few anatomical sites available for estimating the age of individuals in late adolescence. This study verifies the accuracy of Cameriere's cut-off value of the third molar index (I3M) in assessing 18 years of age. For this purpose, a sample of orthopantomographs (OPTs) of 397 living subjects aged between 13 and 22 years (192 female and 205 male) was analyzed. Age distribution gradually decreases as I3M increases in both males and females. The results show that the sensitivity of the test was 86.6%, with a 95% confidence interval of (80.8%, 91.1%), and its specificity was 95.7%, with a 95% confidence interval of (92.1%, 98%). The proportion of correctly classified individuals was 91.4%. The estimated post-test probability was 95.6%, with a 95% confidence interval of (92%, 98%). Hence, the probability that a subject positive on the test (i.e., I3M < 0.08) was 18 years of age or older was 95.6%.
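
The reported sensitivity, specificity, and post-test probability all derive from a 2×2 classification table. A minimal sketch with hypothetical counts (illustrative only, not the paper's data):

```python
def diagnostic_summary(tp, fn, tn, fp):
    """Standard test-performance measures for a binary cut-off such as
    I3M < 0.08 marking a subject as an adult.

    Sensitivity = TP / (TP + FN), Specificity = TN / (TN + FP),
    post-test probability of a positive result = TP / (TP + FP).
    """
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fn + tn + fp),
        "post_test_probability": tp / (tp + fp),
    }

# Hypothetical counts for a 397-subject sample (not the paper's table).
summary = diagnostic_summary(tp=180, fn=28, tn=183, fp=6)
print(round(summary["sensitivity"], 3))  # → 0.865
```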

  6. Accuracy assessment of the large-scale dynamic ocean topography from TOPEX/POSEIDON altimetry

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Chambers, D. P.; Shum, C. K.; Eanes, R. J.; Ries, J. C.; Stewart, R. H.

    1994-01-01

    The quality of TOPEX/POSEIDON determinations of the global-scale dynamic ocean topography has been assessed by determining mean topography solutions for successive 10-day repeat cycles and by examining the temporal changes in the sea surface topography to identify known features. The assessment is based on the analysis of TOPEX altimeter data cycles 1 through 36. Important errors in the tide model used to correct the altimeter data have been identified. The errors were reduced significantly by use of a new tide model derived with the TOPEX/POSEIDON measurements. Maps of the global 1-year mean topography, produced using four of the most accurate marine geoid models, show expected features of the dynamic ocean topography, such as the known annual hemispherical sea surface rise and fall and the seasonal variability due to monsoon influence in the Indian Ocean. Changes in the sequence of 10-day topography maps show the development and propagation of an equatorial Kelvin wave in the Pacific beginning in December 1992 with a propagation velocity of approximately 3 m/s. The observations are consistent with observed changes in the equatorial trade winds, and with tide gauge and other in situ observations of the strengthening of the El Niño. Comparison of TOPEX-determined sea surface height at points near oceanic tide gauges shows agreement at the 4 cm root-mean-square (RMS) level over the tropical Pacific. The results show that the TOPEX altimeter data set can be used to map the ocean surface with a temporal resolution of 10 days and an accuracy consistent with traditional in situ methods for the determination of sea level variations.

  7. Assessing the performance of health technology assessment organizations: a framework.

    PubMed

    Lafortune, Louise; Farand, Lambert; Mondou, Isabelle; Sicotte, Claude; Battista, Renaldo

    2008-01-01

    In light of growing demands for public accountability, the broadening scope of activities of health technology assessment organizations (HTAOs) and their increasing role in decision making underscore the importance for them of demonstrating their performance. Based on Parsons' social action theory, we propose a conceptual model that includes four functions an organization needs to balance to perform well: (i) goal attainment, (ii) production, (iii) adaptation to the environment, and (iv) culture and values maintenance. From a review of the HTA literature, we identify specific dimensions pertaining to the four functions and show how they relate to performance. We compare our model with evaluations reported in the scientific and gray literature to confirm its capacity to accommodate various evaluation designs, contexts of evaluation, and organizational models and perspectives. Our findings reveal the dimensions of performance most often assessed, as well as other important ones that have hitherto remained unexplored. The model provides a flexible and theoretically grounded tool for assessing the performance of HTAOs.

  8. Performance assessment to enhance training effectiveness.

    SciTech Connect

    Stevens-Adams, Susan Marie; Gieseler, Charles J.; Basilico, Justin Derrick; Abbott, Robert G.; Forsythe, James Chris

    2010-09-01

    Training simulators have become increasingly popular tools for instructing humans on performance in complex environments. However, the question of how to provide individualized and scenario-specific assessment and feedback to students remains largely an open question. To maximize training efficiency, new technologies are required that assist instructors in providing individually relevant instruction. Sandia National Laboratories has shown the feasibility of automated performance assessment tools, such as the Sandia-developed Automated Expert Modeling and Student Evaluation (AEMASE) software, through proof-of-concept demonstrations, a pilot study, and an experiment. In the pilot study, the AEMASE system, which automatically assesses student performance based on observed examples of good and bad performance in a given domain, achieved a high degree of agreement with a human grader (89%) in assessing tactical air engagement scenarios. In more recent work, we found that AEMASE achieved a high degree of agreement with human graders (83-99%) for three Navy E-2 domain-relevant performance metrics. The current study provides a rigorous empirical evaluation of the enhanced training effectiveness achievable with this technology. In particular, we assessed whether giving students feedback based on automated metrics would enhance training effectiveness and improve student performance. We trained two groups of employees (differentiated by type of feedback) on a Navy E-2 simulator and assessed their performance on three domain-specific performance metrics. We found that students given feedback via the AEMASE-based debrief tool performed significantly better than students given only instructor feedback on two out of three metrics. Future work will focus on extending these developments for automated assessment of teamwork.

  9. Accuracy assessment, using stratified plurality sampling, of portions of a LANDSAT classification of the Arctic National Wildlife Refuge Coastal Plain

    NASA Technical Reports Server (NTRS)

    Card, Don H.; Strong, Laurence L.

    1989-01-01

    An application of a classification accuracy assessment procedure is described for a vegetation and land cover map prepared by digital image processing of LANDSAT multispectral scanner data. A statistical sampling procedure called Stratified Plurality Sampling was used to assess the accuracy of portions of a map of the Arctic National Wildlife Refuge coastal plain. Results are tabulated as percent correct classification overall as well as per category with associated confidence intervals. Although values of percent correct were disappointingly low for most categories, the study was useful in highlighting sources of classification error and demonstrating shortcomings of the plurality sampling method.

  10. Cluster Detection Tests in Spatial Epidemiology: A Global Indicator for Performance Assessment.

    PubMed

    Guttmann, Aline; Li, Xinran; Feschet, Fabien; Gaudart, Jean; Demongeot, Jacques; Boire, Jean-Yves; Ouchchane, Lemlih

    2015-01-01

    In cluster detection of disease, the use of local cluster detection tests (CDTs) is common. These methods aim both at locating likely clusters and at testing their statistical significance. New or improved CDTs are regularly proposed to epidemiologists and must be subjected to performance assessment. Because location accuracy has to be considered, performance assessment goes beyond the raw estimation of type I or II errors. As no consensus exists for performance evaluations, heterogeneous methods are used, and studies are therefore rarely comparable. A global indicator of performance, which assesses both spatial accuracy and usual power, would facilitate the exploration of CDT behaviour and help between-study comparisons. The Tanimoto coefficient (TC) is a well-known measure of similarity that can assess location accuracy, but only for one detected cluster, whereas in a simulation study performance is measured across many tests. From the TC, we propose two statistics, the averaged TC and the cumulated TC, as indicators able to provide a global overview of CDT performance for both usual power and location accuracy. We demonstrate the properties of these two indicators and the superiority of the cumulated TC for assessing performance. We used these indicators to conduct a systematic spatial assessment displayed through performance maps. PMID:26086911
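
The Tanimoto coefficient at the heart of the proposed indicators has a compact set-based form. A minimal sketch, treating clusters as sets of spatial units (the paper's averaging across simulations is shown only in its simplest form):

```python
def tanimoto(detected, true_cluster):
    """Tanimoto coefficient between a detected cluster and the true
    cluster, each given as a set of spatial units (e.g. area IDs):
    TC = |A & B| / |A | B|; 1.0 for exact agreement, 0.0 if disjoint.
    """
    detected, true_cluster = set(detected), set(true_cluster)
    union = detected | true_cluster
    if not union:
        return 1.0  # both empty: trivially identical
    return len(detected & true_cluster) / len(union)

def averaged_tc(coefficients):
    """Averaged TC over repeated simulated data sets, the simpler of
    the two global indicators proposed above (a minimal sketch)."""
    return sum(coefficients) / len(coefficients)

# Detected cluster recovers 3 of the 4 true units plus 1 false unit.
print(tanimoto({1, 2, 3, 9}, {1, 2, 3, 4}))  # → 0.6
```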

  12. Assessing the Accuracy of Sentinel-3 SLSTR Sea-Surface Temperature Retrievals Using High-Accuracy Infrared Radiometers on Ships of Opportunity

    NASA Astrophysics Data System (ADS)

    Minnett, P. J.; Izaguirre, M. A.; Szcszodrak, M.; Williams, E.; Reynolds, R. M.

    2015-12-01

    The assessment of errors and uncertainties in satellite-derived SSTs can be achieved by comparisons with independent measurements of skin SST of high accuracy. Such validation measurements are provided by well-calibrated infrared radiometers mounted on ships. The second generation of Marine-Atmospheric Emitted Radiance Interferometers (M-AERIs) have recently been developed and two are now deployed on cruise ships of Royal Caribbean Cruise Lines that operate in the Caribbean Sea, North Atlantic and Mediterranean Sea. In addition, two Infrared SST Autonomous Radiometers (ISARs) are mounted alternately on a vehicle transporter of NYK Lines that crosses the Pacific Ocean between Japan and the USA. Both M-AERIs and ISARs are self-calibrating radiometers having two internal blackbody cavities to provide at-sea calibration of the measured radiances, and the accuracy of the internal calibration is periodically determined by measurements of a NIST-traceable blackbody cavity in the laboratory. This provides SI-traceability for the at-sea measurements. It is anticipated that these sensors will be deployed during the next several years and will be available for the validation of the SLSTRs on Sentinel-3a and -3b.

  13. Physician Performance Assessment: Prevention of Cardiovascular Disease

    ERIC Educational Resources Information Center

    Lipner, Rebecca S.; Weng, Weifeng; Caverzagie, Kelly J.; Hess, Brian J.

    2013-01-01

    Given the rising burden of healthcare costs, both patients and healthcare purchasers are interested in discerning which physicians deliver quality care. We proposed a methodology to assess physician clinical performance in preventive cardiology care, and determined a benchmark for minimally acceptable performance. We used data on eight…

  14. Using Generalizability Theory to Examine the Accuracy and Validity of Large-Scale ESL Writing Assessment

    ERIC Educational Resources Information Center

    Huang, Jinyan

    2012-01-01

    Using generalizability (G-) theory, this study examined the accuracy and validity of the writing scores assigned to secondary school ESL students in the provincial English examinations in Canada. The major research question that guided this study was: Are there any differences between the accuracy and construct validity of the analytic scores…

  15. Estimating covariate-adjusted measures of diagnostic accuracy based on pooled biomarker assessments.

    PubMed

    McMahan, Christopher S; McLain, Alexander C; Gallagher, Colin M; Schisterman, Enrique F

    2016-07-01

    There is a need for epidemiological and medical researchers to identify new biomarkers (biological markers) that are useful in determining exposure levels and/or for the purposes of disease detection. Often this process is stunted by the high testing costs associated with evaluating new biomarkers. Traditionally, biomarker assessments are individually tested within a target population. Pooling, in which pools are formed by combining several individual specimens, has been proposed to help alleviate the testing costs. Methods for using pooled biomarker assessments to estimate discriminatory ability have been developed, but these procedures have failed to acknowledge confounding factors. In this paper, we propose a regression methodology based on pooled biomarker measurements that allows assessment of the discriminatory ability of a biomarker of interest. In particular, we develop covariate-adjusted estimators of the receiver operating characteristic curve, the area under the curve, and Youden's index. We establish the asymptotic properties of these estimators and develop inferential techniques that allow one to assess whether a biomarker is a good discriminator between cases and controls while controlling for confounders. The finite-sample performance of the proposed methodology is illustrated through simulation. We apply our methods to analyze myocardial infarction (MI) data, with the goal of determining whether the pro-inflammatory cytokine interleukin-6 is a good predictor of MI after controlling for the subjects' cholesterol levels. PMID:26927583
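
The quantities being adjusted here (AUC and Youden's index) have simple empirical forms. A plain, unadjusted sketch on hypothetical individual-level values; the paper's actual contribution, covariate adjustment for pooled measurements, is not reproduced:

```python
def empirical_auc_youden(cases, controls):
    """Empirical AUC and Youden's index for a biomarker measured on
    individual cases and controls (no pooling, no covariate adjustment).
    """
    # AUC equals the Mann-Whitney probability that a random case value
    # exceeds a random control value (ties counted as one half).
    wins = sum(1.0 if x > y else 0.5 if x == y else 0.0
               for x in cases for y in controls)
    auc = wins / (len(cases) * len(controls))

    # Youden's J = max over cut-offs of (sensitivity + specificity - 1),
    # scanning every observed value as a candidate cut-off.
    best_j = 0.0
    for cut in sorted(set(cases) | set(controls)):
        sens = sum(x >= cut for x in cases) / len(cases)
        spec = sum(y < cut for y in controls) / len(controls)
        best_j = max(best_j, sens + spec - 1.0)
    return auc, best_j

# Illustrative biomarker values (hypothetical, not the paper's data).
auc, j = empirical_auc_youden(cases=[3.1, 4.0, 5.2, 2.8],
                              controls=[1.0, 2.5, 3.0, 1.7])
print(auc, j)  # → 0.9375 0.75
```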

  16. Accuracy Assessments of Cloud Droplet Size Retrievals from Polarized Reflectance Measurements by the Research Scanning Polarimeter

    NASA Technical Reports Server (NTRS)

    Alexandrov, Mikhail Dmitrievic; Cairns, Brian; Emde, Claudia; Ackerman, Andrew S.; vanDiedenhove, Bastiaan

    2012-01-01

    We present an algorithm for the retrieval of cloud droplet size distribution parameters (effective radius and variance) from Research Scanning Polarimeter (RSP) measurements. The RSP is an airborne prototype for the Aerosol Polarimetry Sensor (APS), which was on board the NASA Glory satellite. This instrument measures both polarized and total reflectance in 9 spectral channels with central wavelengths ranging from 410 to 2260 nm. The cloud droplet size retrievals use the polarized reflectance in the scattering-angle range between 135° and 165°, where it exhibits the sharply defined structure known as the rain- or cloud-bow. The shape of the rainbow is determined mainly by the single-scattering properties of cloud particles. This significantly simplifies both forward modeling and inversions, while also substantially reducing uncertainties caused by aerosol loading and the possible presence of undetected clouds nearby. In this study we present an accuracy evaluation of our algorithm based on the results of sensitivity tests performed using realistic simulated cloud radiation fields.

  17. Estimating the Consistency and Accuracy of Classifications in a Standards-Referenced Assessment. CSE Technical Report 475.

    ERIC Educational Resources Information Center

    Young, Michael James; Yoon, Bokhee

    An important feature of recent large-scale performance assessments has been the reporting of pupil and school performance in terms of performance or proficiency categories. When an assessment uses such ordered categories as the primary means of reporting results, the natural way of reporting on the quality of the assessment is through the…

  18. Accuracy Assessment of Mobile Mapping Point Clouds Using the Existing Environment as Terrestrial Reference

    NASA Astrophysics Data System (ADS)

    Hofmann, S.; Brenner, C.

    2016-06-01

    Mobile mapping data is widely used in various applications, which makes it especially important for data users to get a statistically verified quality statement on the geometric accuracy of the acquired point clouds or their processed products. The accuracy of point clouds can be divided into an absolute and a relative quality, where the absolute quality describes the position of the point cloud in a world coordinate system such as WGS84 or UTM, whereas the relative accuracy describes the accuracy within the point cloud itself. Furthermore, the quality of processed products such as segmented features depends on the global accuracy of the point cloud but mainly on the quality of the processing steps. Several data sources with different characteristics and quality can be thought of as potential reference data, such as cadastral maps, orthophotos, artificial control objects, or terrestrial surveys using a total station. In this work a test field in a selected residential area was acquired as reference data in a terrestrial survey using a total station. In order to reach high accuracy, the stationing of the total station was based on a newly established geodetic network with a local accuracy of less than 3 mm. The global position of the network was determined using a long-duration GNSS survey reaching an accuracy of 8 mm. Based on this geodetic network, a 3D test field with facades and street profiles was measured with a total station, each point with a two-dimensional position and altitude. In addition, the surfaces of street light poles, traffic signs, and trees were acquired using the scanning mode of the total station. By comparing this reference data with the mobile mapping point clouds acquired in several measurement campaigns, a detailed quality statement on the accuracy of the point cloud data is made. Additionally, the advantages and disadvantages of the described reference data sources concerning availability, cost, accuracy, and applicability are discussed.

  19. Quantitative performance assessments for neuromagnetic imaging systems.

    PubMed

    Koga, Ryo; Hiyama, Ei; Matsumoto, Takuya; Sekihara, Kensuke

    2013-01-01

    We have developed a Monte-Carlo simulation method to assess the performance of neuromagnetic imaging systems using two kinds of performance metrics: A-prime metric and spatial resolution. We compute these performance metrics for virtual sensor systems having 80, 160, 320, and 640 sensors, and discuss how the system performance is improved, depending on the number of sensors. We also compute these metrics for existing whole-head MEG systems, MEGvision™ (Yokogawa Electric Corporation, Tokyo, Japan) that uses axial-gradiometer sensors, and TRIUX™ (Elekta Corporate, Stockholm, Sweden) that uses planar-gradiometer and magnetometer sensors. We discuss performance comparisons between these significantly different systems. PMID:24110711

  1. Accuracy of a Low-Cost Novel Computer-Vision Dynamic Movement Assessment: Potential Limitations and Future Directions

    NASA Astrophysics Data System (ADS)

    McGroarty, M.; Giblin, S.; Meldrum, D.; Wetterling, F.

    2016-04-01

    The aim of the study was to perform a preliminary validation of a low-cost markerless motion capture system (CAPTURE) against an industry gold standard (Vicon). Measurements of knee valgus and flexion during the performance of a countermovement jump (CMJ) were compared between CAPTURE and Vicon. After correction algorithms were applied to the raw CAPTURE data, acceptable levels of accuracy and precision were achieved. The knee flexion angle measured for three trials using CAPTURE deviated by -3.8° ± 3° (left) and 1.7° ± 2.8° (right) compared to Vicon. The findings suggest that low-cost markerless motion capture has the potential to provide an objective method for assessing lower-limb jump and landing mechanics in an applied sports setting. Furthermore, the outcome of the study warrants future research to examine more fully the potential implications of using low-cost markerless motion capture in the evaluation of dynamic movement for injury prevention.
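
Joint angles of the kind compared above reduce to angles between joint-to-joint vectors in a capture frame. A minimal 2D sketch (an illustrative geometry helper, not the CAPTURE or Vicon pipeline):

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b formed by points a-b-c, e.g.
    hip-knee-ankle coordinates from one motion-capture frame.
    """
    v1 = (a[0] - b[0], a[1] - b[1])  # vector from knee toward hip
    v2 = (c[0] - b[0], c[1] - b[1])  # vector from knee toward ankle
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

# A straight hip-knee-ankle line gives 180 degrees; flexion reduces it.
print(joint_angle((0.0, 2.0), (0.0, 1.0), (0.0, 0.0)))  # → 180.0
```

A flexion deviation like the reported -3.8° would be the per-frame difference between this angle as estimated from CAPTURE keypoints and from Vicon markers.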

  2. A flexible alternative to the Cox proportional hazards model for assessing the prognostic accuracy of hospice patient survival.

    PubMed

    Miladinovic, Branko; Kumar, Ambuj; Mhaskar, Rahul; Kim, Sehwan; Schonwetter, Ronald; Djulbegovic, Benjamin

    2012-01-01

Prognostic models are often used to estimate the length of patient survival. The Cox proportional hazards model has traditionally been applied to assess the accuracy of prognostic models. However, it may be suboptimal due to its inflexibility in modeling the baseline survival function and its behavior when the proportional hazards assumption is violated. The aim of this study was to use internal validation to compare the predictive power of the flexible Royston-Parmar family of survival functions with the Cox proportional hazards model. We applied the Palliative Performance Scale to a dataset of 590 hospice patients at the time of hospice admission. The retrospective data were obtained from the Lifepath Hospice and Palliative Care center in Hillsborough County, Florida, USA. The criteria used to evaluate and compare the models' predictive performance were the explained variation statistic R², the scaled Brier score, and the discrimination slope. The explained variation statistic demonstrated that, overall, the Royston-Parmar family of survival functions provided a better fit (R² = 0.298; 95% CI: 0.236-0.358) than the Cox model (R² = 0.156; 95% CI: 0.111-0.203). The scaled Brier scores and discrimination slopes were consistently higher under the Royston-Parmar model. Researchers involved in prognosticating patient survival are encouraged to consider the Royston-Parmar model as an alternative to the Cox model. PMID:23082220
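As an illustration of one of the evaluation criteria mentioned, the scaled Brier score can be computed from binary outcomes and predicted event probabilities. This is a minimal sketch with hypothetical data, not the study's actual implementation; survival-analysis versions additionally have to handle censoring:

```python
def brier_score(outcomes, predictions):
    """Mean squared error between binary outcomes and predicted probabilities."""
    return sum((o - p) ** 2 for o, p in zip(outcomes, predictions)) / len(outcomes)

def scaled_brier_score(outcomes, predictions):
    """1 - Brier/Brier_null, where the null model always predicts the event rate.

    0 means no improvement over the null model; 1 means perfect prediction.
    """
    event_rate = sum(outcomes) / len(outcomes)
    brier_null = brier_score(outcomes, [event_rate] * len(outcomes))
    return 1.0 - brier_score(outcomes, predictions) / brier_null

# Hypothetical data: event status (1 = died within the horizon) and a
# model's predicted event probabilities for eight patients
outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
predictions = [0.9, 0.2, 0.7, 0.8, 0.3, 0.1, 0.6, 0.4]
score = scaled_brier_score(outcomes, predictions)
```

A higher scaled Brier score, as reported for the Royston-Parmar model, indicates predictions that are both better calibrated and more discriminating than the null benchmark.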

  3. Designing a Multi-Objective Multi-Support Accuracy Assessment of the 2001 National Land Cover Data (NLCD 2001) of the Conterminous United States

    EPA Science Inventory

    The database design and diverse application of NLCD 2001 pose significant challenges for accuracy assessment because numerous objectives are of interest, including accuracy of land cover, percent urban imperviousness, percent tree canopy, land-cover composition, and net change. ...

  4. Assessing the Accuracy of the Tracer Dilution Method with Atmospheric Dispersion Modeling

    NASA Astrophysics Data System (ADS)

    Taylor, D.; Delkash, M.; Chow, F. K.; Imhoff, P. T.

    2015-12-01

Landfill methane emissions are difficult to estimate due to limited observations and data uncertainty. The mobile tracer dilution method is a widely used and cost-effective approach for predicting landfill methane emissions. The method uses a tracer gas released on the surface of the landfill and measures the concentrations of both methane and the tracer gas downwind. Mobile measurements are conducted with a gas analyzer mounted on a vehicle to capture transects of both gas plumes. The idea behind the method is that if the measurements are performed far enough downwind, the methane plume from the large area source of the landfill and the tracer plume from a small number of point sources will be sufficiently well-mixed to behave similarly, and the ratio between the concentrations will be a good estimate of the ratio between the two emissions rates. The mobile tracer dilution method is sensitive to different factors of the setup, such as placement of the tracer release locations and distance from the landfill to the downwind measurements, which have not been thoroughly examined. In this study, numerical modeling is used as an alternative to field measurements to study the sensitivity of the tracer dilution method and provide estimates of measurement accuracy. Using topography and wind conditions for an actual landfill, a landfill emissions rate is prescribed in the model and compared against the emissions rate predicted by application of the tracer dilution method. Two different methane emissions scenarios are simulated: homogeneous emissions over the entire surface of the landfill, and heterogeneous emissions with a hot spot containing 80% of the total emissions where the daily cover area is located. Numerical modeling of the tracer dilution method is a useful tool for evaluating the method without the expense and labor commitment of multiple field campaigns. Factors tested include number of tracers, distance between tracers, and distance from landfill to transect.
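The ratio relationship at the heart of the tracer dilution method can be sketched numerically. The function and transect data below are hypothetical, assuming background-corrected concentrations and a known tracer release rate:

```python
import numpy as np

def plume_integral(x, c):
    # Trapezoidal integral of concentration along the transect
    dx = np.diff(x)
    return float(np.sum(dx * (c[:-1] + c[1:]) / 2.0))

def tracer_dilution_emission_rate(x, c_methane, c_tracer, q_tracer):
    """Estimate the methane emission rate from downwind plume transects.

    Assumes both plumes are sufficiently well mixed that the ratio of
    plume-integrated concentrations equals the ratio of emission rates:
        Q_CH4 = Q_tracer * (integral of C_CH4) / (integral of C_tracer)
    """
    return q_tracer * plume_integral(x, c_methane) / plume_integral(x, c_tracer)

# Hypothetical transect: tracer released at 1.0 kg/h, methane plume
# perfectly co-dispersed with twice the tracer's strength
x = np.linspace(0.0, 200.0, 401)                 # distance along transect (m)
c_tracer = np.exp(-(((x - 100.0) / 30.0) ** 2))  # synthetic Gaussian tracer plume
c_methane = 2.0 * c_tracer
q_est = tracer_dilution_emission_rate(x, c_methane, c_tracer, q_tracer=1.0)
```

In field practice, how far this idealized estimate drifts from the true rate depends on how well the two plumes actually overlap, which is exactly the sensitivity the modeling study probes.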

  5. Bias-free double judgment accuracy during spatial attention cueing: performance enhancement from voluntary and involuntary attention.

    PubMed

    Pack, Weston; Klein, Stanley A; Carney, Thom

    2014-12-01

    Recent research has demonstrated that involuntary attention improves target identification accuracy for letters using non-predictive peripheral cues, helping to resolve some of the controversy over performance enhancement from involuntary attention. While various cueing studies have demonstrated that their reported cueing effects were not due to response bias to the cue, very few investigations have quantified the extent of any response bias or developed methods of removing bias from observed results in a double judgment accuracy task. We have devised a method to quantify and remove response bias to cued locations in a double judgment accuracy cueing task, revealing the true, unbiased performance enhancement from involuntary and voluntary attention. In a 7-alternative forced choice cueing task using backward masked stimuli to temporally constrain stimulus processing, non-predictive cueing increased target detection and discrimination at cued locations relative to uncued locations even after cue location bias had been corrected.

  6. Do students know what they know? Exploring the accuracy of students' self-assessments

    NASA Astrophysics Data System (ADS)

    Lindsey, Beth A.; Nagel, Megan L.

    2015-12-01

    We have conducted an investigation into how well students in introductory science classes (both physics and chemistry) are able to predict which questions they will or will not be able to answer correctly on an upcoming assessment. An examination of the data at the level of students' overall scores reveals results consistent with the Dunning-Kruger effect, in which low-performing students tend to overestimate their abilities, while high-performing students estimate their abilities more accurately. Similar results have been widely reported in the science education literature. Breaking results out by students' responses to individual questions, however, reveals that students of all ability levels have difficulty distinguishing questions which they are able to answer correctly from those that they are not able to answer correctly. These results have implications for the future study and reporting of students' metacognitive abilities.

  7. Accuracy of Panoramic Radiograph in Assessment of the Relationship Between Mandibular Canal and Impacted Third Molars

    PubMed Central

    Tantanapornkul, Weeraya; Mavin, Darika; Prapaiphittayakun, Jaruthai; Phipatboonyarat, Natnicha; Julphantong, Wanchanok

    2016-01-01

    Background: The relationship between impacted mandibular third molar and mandibular canal is important for removal of this tooth. Panoramic radiography is one of the commonly used diagnostic tools for evaluating the relationship of these two structures. Objectives: To evaluate the accuracy of panoramic radiographic findings in predicting direct contact between mandibular canal and impacted third molars on 3D digital images, and to define panoramic criterion in predicting direct contact between the two structures. Methods: Two observers examined panoramic radiographs of 178 patients (256 impacted mandibular third molars). Panoramic findings of interruption of mandibular canal wall, isolated or with darkening of third molar root, diversion of mandibular canal and narrowing of third molar root were evaluated for 3D digital radiography. Direct contact between mandibular canal and impacted third molars on 3D digital images was then correlated with panoramic findings. Panoramic criterion was also defined in predicting direct contact between the two structures. Results: Panoramic findings of interruption of mandibular canal wall, isolated or with darkening of third molar root were statistically significantly correlated with direct contact between mandibular canal and impacted third molars on 3D digital images (p < 0.005), and were defined as panoramic criteria in predicting direct contact between the two structures. Conclusion: Interruption of mandibular canal wall, isolated or with darkening of third molar root observed on panoramic radiographs were effective in predicting direct contact between mandibular canal and impacted third molars on 3D digital images. Panoramic radiography is one of the efficient diagnostic tools for pre-operative assessment of impacted mandibular third molars. PMID:27398105

  8. The accuracy of a patient or parent-administered bleeding assessment tool administered in a paediatric haematology clinic.

    PubMed

    Lang, A T; Sturm, M S; Koch, T; Walsh, M; Grooms, L P; O'Brien, S H

    2014-11-01

    Classifying and describing bleeding symptoms is essential in the diagnosis and management of patients with mild bleeding disorders (MBDs). There has been increased interest in the use of bleeding assessment tools (BATs) to more objectively quantify the presence and severity of bleeding symptoms. To date, the administration of BATs has been performed almost exclusively by clinicians; the accuracy of a parent-proxy BAT has not been studied. Our objective was to determine the accuracy of a parent-administered BAT by measuring the level of agreement between parent and clinician responses to the Condensed MCMDM-1VWD Bleeding Questionnaire. Our cross-sectional study included children 0-21 years presenting to a haematology clinic for initial evaluation of a suspected MBD or follow-up evaluation of a previously diagnosed MBD. The parent/caregiver completed a modified version of the BAT; the clinician separately completed the BAT through interview. The mean parent-report bleeding score (BS) was 6.09 (range: -2 to 25); the mean clinician report BS was 4.54 (range: -1 to 17). The mean percentage of agreement across all bleeding symptoms was 78% (mean κ = 0.40; Gwet's AC1 = 0.74). Eighty percent of the population had an abnormal BS (defined as ≥2) when rated by parents and 76% had an abnormal score when rated by clinicians (86% agreement, κ = 0.59, Gwet's AC1 = 0.79). While parents tended to over-report bleeding as compared to clinicians, overall, BSs were similar between groups. These results lend support for further study of a modified proxy-report BAT as a clinical and research tool.

  9. Assessment of the accuracy of coupled cluster perturbation theory for open-shell systems. II. Quadruples expansions

    NASA Astrophysics Data System (ADS)

    Eriksen, Janus J.; Matthews, Devin A.; Jørgensen, Poul; Gauss, Jürgen

    2016-05-01

    We extend our assessment of the potential of perturbative coupled cluster (CC) expansions for a test set of open-shell atoms and organic radicals to the description of quadruple excitations. Namely, the second- through sixth-order models of the recently proposed CCSDT(Q-n) quadruples series [J. J. Eriksen et al., J. Chem. Phys. 140, 064108 (2014)] are compared to the prominent CCSDT(Q) and ΛCCSDT(Q) models. From a comparison of the models in terms of their recovery of total CC singles, doubles, triples, and quadruples (CCSDTQ) energies, we find that the performance of the CCSDT(Q-n) models is independent of the reference used (unrestricted or restricted (open-shell) Hartree-Fock), in contrast to the CCSDT(Q) and ΛCCSDT(Q) models, for which the accuracy is strongly dependent on the spin of the molecular ground state. By further comparing the ability of the models to recover relative CCSDTQ total atomization energies, the discrepancy between them is found to be even more pronounced, stressing how a balanced description of both closed- and open-shell species—as found in the CCSDT(Q-n) models—is indeed of paramount importance if any perturbative CC model is to be of chemical relevance for high-accuracy applications. In particular, the third-order CCSDT(Q-3) model is found to offer an encouraging alternative to the existing choices of quadruples models used in modern computational thermochemistry, since the model is still only of moderate cost, albeit markedly more costly than, e.g., the CCSDT(Q) and ΛCCSDT(Q) models.

  10. Assessment of the accuracy of coupled cluster perturbation theory for open-shell systems. II. Quadruples expansions.

    PubMed

    Eriksen, Janus J; Matthews, Devin A; Jørgensen, Poul; Gauss, Jürgen

    2016-05-21

We extend our assessment of the potential of perturbative coupled cluster (CC) expansions for a test set of open-shell atoms and organic radicals to the description of quadruple excitations. Namely, the second- through sixth-order models of the recently proposed CCSDT(Q-n) quadruples series [J. J. Eriksen et al., J. Chem. Phys. 140, 064108 (2014)] are compared to the prominent CCSDT(Q) and ΛCCSDT(Q) models. From a comparison of the models in terms of their recovery of total CC singles, doubles, triples, and quadruples (CCSDTQ) energies, we find that the performance of the CCSDT(Q-n) models is independent of the reference used (unrestricted or restricted (open-shell) Hartree-Fock), in contrast to the CCSDT(Q) and ΛCCSDT(Q) models, for which the accuracy is strongly dependent on the spin of the molecular ground state. By further comparing the ability of the models to recover relative CCSDTQ total atomization energies, the discrepancy between them is found to be even more pronounced, stressing how a balanced description of both closed- and open-shell species—as found in the CCSDT(Q-n) models—is indeed of paramount importance if any perturbative CC model is to be of chemical relevance for high-accuracy applications. In particular, the third-order CCSDT(Q-3) model is found to offer an encouraging alternative to the existing choices of quadruples models used in modern computational thermochemistry, since the model is still only of moderate cost, albeit markedly more costly than, e.g., the CCSDT(Q) and ΛCCSDT(Q) models. PMID:27208932

  11. How Nonrecidivism Affects Predictive Accuracy: Evidence from a Cross-Validation of the Ontario Domestic Assault Risk Assessment (ODARA)

    ERIC Educational Resources Information Center

    Hilton, N. Zoe; Harris, Grant T.

    2009-01-01

    Prediction effect sizes such as ROC area are important for demonstrating a risk assessment's generalizability and utility. How a study defines recidivism might affect predictive accuracy. Nonrecidivism is problematic when predicting specialized violence (e.g., domestic violence). The present study cross-validates the ability of the Ontario…

  12. A TECHNIQUE FOR ASSESSING THE ACCURACY OF SUB-PIXEL IMPERVIOUS SURFACE ESTIMATES DERIVED FROM LANDSAT TM IMAGERY

    EPA Science Inventory

We developed a technique for assessing the accuracy of sub-pixel derived estimates of impervious surface extracted from LANDSAT TM imagery. We utilized spatially coincident sub-pixel derived impervious surface estimates, high-resolution planimetric GIS data, vector-to-r...

  13. Diagnostic Accuracy of Computer-Aided Assessment of Intranodal Vascularity in Distinguishing Different Causes of Cervical Lymphadenopathy.

    PubMed

    Ying, Michael; Cheng, Sammy C H; Ahuja, Anil T

    2016-08-01

    Ultrasound is useful in assessing cervical lymphadenopathy. Advancement of computer science technology allows accurate and reliable assessment of medical images. The aim of the study described here was to evaluate the diagnostic accuracy of computer-aided assessment of the intranodal vascularity index (VI) in differentiating the various common causes of cervical lymphadenopathy. Power Doppler sonograms of 347 patients (155 with metastasis, 23 with lymphoma, 44 with tuberculous lymphadenitis, 125 reactive) with palpable cervical lymph nodes were reviewed. Ultrasound images of cervical nodes were evaluated, and the intranodal VI was quantified using a customized computer program. The diagnostic accuracy of using the intranodal VI to distinguish different disease groups was evaluated and compared. Metastatic and lymphomatous lymph nodes tend to be more vascular than tuberculous and reactive lymph nodes. The intranodal VI had the highest diagnostic accuracy in distinguishing metastatic and tuberculous nodes with a sensitivity of 80%, specificity of 73%, positive predictive value of 91%, negative predictive value of 51% and overall accuracy of 68% when a cutoff VI of 22% was used. Computer-aided assessment provides an objective and quantitative way to evaluate intranodal vascularity. The intranodal VI is a useful parameter in distinguishing certain causes of cervical lymphadenopathy and is particularly useful in differentiating metastatic and tuberculous lymph nodes. However, it has limited value in distinguishing lymphomatous nodes from metastatic and reactive nodes.
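The reported diagnostic metrics follow from a 2×2 confusion matrix obtained by thresholding the vascularity index. Below is a minimal sketch with hypothetical VI values and ground truth; only the 22% cutoff comes from the abstract:

```python
def diagnostic_metrics(vi_values, is_metastatic, cutoff=22.0):
    """Classify nodes as metastatic when VI > cutoff, then compute
    sensitivity, specificity, PPV, NPV, and overall accuracy."""
    tp = fp = tn = fn = 0
    for vi, positive in zip(vi_values, is_metastatic):
        predicted_positive = vi > cutoff
        if predicted_positive and positive:
            tp += 1
        elif predicted_positive and not positive:
            fp += 1
        elif not predicted_positive and not positive:
            tn += 1
        else:
            fn += 1
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical intranodal VI values (%) and ground truth (True = metastatic)
vi = [35.0, 28.0, 10.0, 40.0, 18.0, 25.0, 8.0, 30.0]
truth = [True, True, False, True, False, False, False, True]
metrics = diagnostic_metrics(vi, truth)
```

Sweeping the cutoff and recomputing these metrics is how an optimal threshold such as the 22% VI reported above would be selected.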

  14. An improved multivariate analytical method to assess the accuracy of acoustic sediment classification maps.

    NASA Astrophysics Data System (ADS)

    Biondo, M.; Bartholomä, A.

    2014-12-01

High-resolution hydroacoustic methods have been successfully employed for the detailed classification of sedimentary habitats. The fine-scale mapping of very heterogeneous, patchy sedimentary facies, and the compound effect of multiple non-linear physical processes on the acoustic signal, cause the classification of backscatter images to be subject to a great level of uncertainty. Standard procedures for assessing the accuracy of acoustic classification maps are not yet established. This study applies different statistical techniques to automatically classified acoustic images with the aim of (i) quantifying the ability of backscatter to resolve grain size distributions, (ii) understanding complex patterns influenced by factors other than grain size variations, and (iii) designing innovative, repeatable statistical procedures to spatially assess classification uncertainties. A high-frequency (450 kHz) sidescan sonar survey, carried out in 2012 in the Jade Bay (German North Sea), a shallow upper-mesotidal inlet, allowed 100 km2 of surficial sediment to be mapped with a resolution and coverage never before acquired in the area. The backscatter mosaic was ground-truthed using a large dataset of sediment grab sample information (2009-2011). Multivariate procedures were employed to model the relationship between acoustic descriptors and granulometric variables in order to evaluate the correctness of acoustic class allocation and sediment group separation. Complex patterns in the acoustic signal appeared to be controlled by the combined effect of surface roughness, sorting, and mean grain size variations. The area is dominated by silt and fine sand in very mixed compositions; in this fine-grained matrix, the percentage of gravel proved to be the prevailing factor affecting backscatter variability. In the absence of coarse material, sorting mostly affected the ability to detect gradual but significant changes in seabed types. Misclassification due to temporal discrepancies

  15. 20 CFR 404.1645 - How and when we determine whether the performance accuracy standard is met.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false How and when we determine whether the performance accuracy standard is met. 404.1645 Section 404.1645 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Determinations of...

  16. Accuracy assessment of planimetric large-scale map data for decision-making

    NASA Astrophysics Data System (ADS)

    Doskocz, Adam

    2016-06-01

This paper presents decision-making risk estimation based on planimetric large-scale map data, i.e., data sets or databases used to create planimetric maps at scales of 1:5,000 or larger. The studies were conducted on four sets of large-scale map data. Errors in the map data were used to assess the risk of decisions about the localization of objects, e.g. in land-use planning for the realization of investments. An analysis was performed on a large statistical sample of shift vectors of control points, which were identified with the position errors of these points (the errors of the map data). In this paper, empirical cumulative distribution function models for decision-making risk assessment were established. The established models of the empirical cumulative distribution functions of shift vectors of control points are polynomial equations. The degree of agreement between each polynomial and the empirical data was evaluated using the convergence coefficient and the indicator of the model's mean relative compatibility. The application of an empirical cumulative distribution function allows estimation of the probability of occurrence of position errors of points in a database. The estimated decision-making risk is represented by the probability of the errors of points stored in the database.
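The empirical-CDF idea described above can be sketched as follows; the shift-vector lengths are hypothetical, and the risk of exceeding a positional tolerance is read off as 1 − F(tolerance):

```python
def empirical_cdf(sample):
    """Return F(x) = fraction of observations <= x for a 1-D sample."""
    ordered = sorted(sample)
    n = len(ordered)
    def cdf(x):
        # Count of observations <= x (a bisect search would also work)
        return sum(1 for v in ordered if v <= x) / n
    return cdf

# Hypothetical shift-vector lengths (m) between control points and map data
errors = [0.05, 0.08, 0.10, 0.12, 0.15, 0.20, 0.22, 0.30, 0.35, 0.50]
cdf = empirical_cdf(errors)
risk = 1.0 - cdf(0.30)  # probability a point error exceeds a 0.30 m tolerance
```

The paper fits polynomial models to such empirical CDFs; the raw step function above is the quantity those polynomials approximate.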

  17. Accuracy of forced oscillation technique to assess lung function in geriatric COPD population

    PubMed Central

    Tse, Hoi Nam; Tseng, Cee Zhung Steven; Wong, King Ying; Yee, Kwok Sang; Ng, Lai Yun

    2016-01-01

Introduction: Performing lung function tests in geriatric patients has never been an easy task. With well-established evidence indicating impaired small airway function and air trapping in patients with geriatric COPD, utilizing forced oscillation technique (FOT) as a supplementary tool may aid in the assessment of lung function in this population. Aims: To study the use of FOT in the assessment of airflow limitation and air trapping in geriatric COPD patients. Study design: A cross-sectional study in a public hospital in Hong Kong. ClinicalTrials.gov ID: NCT01553812. Methods: Geriatric patients who had spirometry-diagnosed COPD were recruited, with both FOT and plethysmography performed. "Resistance" and "reactance" FOT parameters were compared to plethysmography for the assessment of air trapping and airflow limitation. Results: In total, 158 COPD subjects with a mean age of 71.9±0.7 years and a forced expiratory volume in 1 second of 53.4%±1.7% predicted were recruited. FOT values had a good correlation (r=0.4–0.7) with spirometric data. In general, X values (reactance) were better than R values (resistance), showing a higher correlation with spirometric data in airflow limitation (r=0.07–0.49 vs 0.61–0.67), small airway function (r=0.05–0.48 vs 0.56–0.65), and lung volume (r=0.12–0.29 vs 0.43–0.49). In addition, resonance frequency (Fres) and frequency dependence (FDep) could well identify the severe type (forced expiratory volume in 1 second <50% predicted) of COPD with high sensitivity (0.76, 0.71) and specificity (0.72, 0.64) (area under the curve: 0.8 and 0.77, respectively). Moreover, X values could stratify different severities of air trapping, while R values could not. Conclusion: FOT may act as a simple and accurate tool in the assessment of severity of airflow limitation, small and central airway function, and air trapping in patients with geriatric COPD who have difficulties performing conventional lung function tests. Moreover, reactance

  18. Judgment of Learning, Monitoring Accuracy, and Student Performance in the Classroom Context

    ERIC Educational Resources Information Center

    Cao, Li; Nietfeld, John L.

    2005-01-01

    As a key component in self-regulated learning, the ability to accurately judge the status of learning enables students to become strategic and effective in the learning process. Weekly monitoring exercises were used to improve college students' (N = 94) accuracy of judgment of learning over a 14-week educational psychology course. A time series…

  19. Completion Rates and Accuracy of Performance Under Fixed and Variable Token Exchange Periods.

    ERIC Educational Resources Information Center

    McLaughlin, T. F.; Malaby, J. E.

    This research investigated the effects of employing fixed, variable, and extended token exchange periods for back-ups on the completion and accuracy of daily assignments for a total fifth and sixth-grade class. The results indicated that, in general, a higher percentage of assignments was completed when the number of days between point exchanges…

  20. Development, preliminary usability and accuracy testing of the EBMT 'eGVHD App' to support GvHD assessment according to NIH criteria-a proof of concept.

    PubMed

    Schoemans, H; Goris, K; Durm, R V; Vanhoof, J; Wolff, D; Greinix, H; Pavletic, S; Lee, S J; Maertens, J; Geest, S D; Dobbels, F; Duarte, R F

    2016-08-01

The EBMT Complications and Quality of Life Working Party has developed a computer-based algorithm, the 'eGVHD App', using a user-centered design process. Accuracy was tested using a quasi-experimental crossover design with four expert-reviewed case vignettes in a convenience sample of 28 clinical professionals. Perceived usefulness was evaluated with the technology acceptance model (TAM) and user satisfaction with the Post-Study System Usability Questionnaire (PSSUQ). User experience was positive, with a median of 6 TAM points (interquartile range: 1) and favorable median total and subscale PSSUQ scores. The initial standard-practice assessment of the vignettes yielded 65% correct results for diagnosis and 45% for scoring. The 'eGVHD App' significantly increased diagnostic and scoring accuracy to 93% (+28%) and 88% (+43%), respectively (both P<0.05). The same trend was observed in the repeated analysis of case 2: accuracy improved by using the App (+31% for diagnosis and +39% for scoring), whereas performance tended to decrease once the App was taken away. The 'eGVHD App' could dramatically improve the quality of care and research, as it increased the performance of the whole user group by about 30% at the first assessment and showed a trend toward improvement of individual performance on repeated case evaluation. PMID:27042834

  1. Enabling performance skills: Assessment in engineering education

    NASA Astrophysics Data System (ADS)

    Ferrone, Jenny Kristina

Current reform in engineering education is part of a national trend emphasizing student learning as well as accountability in instruction. Assessing student performance to demonstrate accountability has become a necessity in academia. Under newly adopted criteria proposed by the Accreditation Board for Engineering and Technology (ABET), undergraduates are expected to demonstrate proficiency in outcomes considered essential for graduating engineers. The case study was designed as a formative evaluation of freshman engineering students to assess the perceived effectiveness of performance skills in a design laboratory environment. The mixed methodology used both quantitative and qualitative approaches to assess students' performance skills and congruency among the respondents, based on individual, team, and faculty perceptions of team effectiveness in three ABET areas: Communication Skills, Design Skills, and Teamwork. The findings of the research were used to address future use of the assessment tool and process. The results of the study found statistically significant differences in perceptions of Teamwork Skills (p < .05). When groups composed of students and professors were compared, professors were less likely to perceive students' teaming skills as effective. The study indicated the need to: (1) improve non-technical performance skills, such as teamwork, among freshman engineering students; (2) incorporate feedback into the learning process; (3) strengthen the assessment process with a follow-up plan that specifically targets performance skill deficiencies; and (4) integrate the assessment instrument and practice with ongoing curriculum development. The findings generated by this study provide engineering departments engaged in assessment activity an opportunity to reflect on, refine, and develop their programs. It also extends research on ABET competencies of engineering students in an under-investigated topic of factors correlated with team

  2. Accuracy of qualitative analysis for assessment of skilled baseball pitching technique.

    PubMed

    Nicholls, Rochelle; Fleisig, Glenn; Elliott, Bruce; Lyman, Stephen; Osinski, Edmund

    2003-07-01

    Baseball pitching must be performed with correct technique if injuries are to be avoided and performance maximized. High-speed video analysis is accepted as the most accurate and objective method for evaluation of baseball pitching mechanics. The aim of this research was to develop an equivalent qualitative analysis method for use with standard video equipment. A qualitative analysis protocol (QAP) was developed for 24 kinematic variables identified as important to pitching performance. Twenty male baseball pitchers were videotaped using 60 Hz camcorders, and their technique evaluated using the QAP, by two independent raters. Each pitcher was also assessed using a 6-camera 200 Hz Motion Analysis system (MAS). Four QAP variables (22%) showed significant similarity with MAS results. Inter-rater reliability showed agreement on 33% of QAP variables. It was concluded that a complete and accurate profile of an athlete's pitching mechanics cannot be made using the QAP in its current form, but it is possible such simple forms of biomechanical analysis could yield accurate results before 3-D methods become obligatory. PMID:14737929

  3. Accuracy of qualitative analysis for assessment of skilled baseball pitching technique.

    PubMed

    Nicholls, Rochelle; Fleisig, Glenn; Elliott, Bruce; Lyman, Stephen; Osinski, Edmund

    2003-07-01

    Baseball pitching must be performed with correct technique if injuries are to be avoided and performance maximized. High-speed video analysis is accepted as the most accurate and objective method for evaluation of baseball pitching mechanics. The aim of this research was to develop an equivalent qualitative analysis method for use with standard video equipment. A qualitative analysis protocol (QAP) was developed for 24 kinematic variables identified as important to pitching performance. Twenty male baseball pitchers were videotaped using 60 Hz camcorders, and their technique evaluated using the QAP, by two independent raters. Each pitcher was also assessed using a 6-camera 200 Hz Motion Analysis system (MAS). Four QAP variables (22%) showed significant similarity with MAS results. Inter-rater reliability showed agreement on 33% of QAP variables. It was concluded that a complete and accurate profile of an athlete's pitching mechanics cannot be made using the QAP in its current form, but it is possible such simple forms of biomechanical analysis could yield accurate results before 3-D methods become obligatory.

  4. Accuracy of field methods in assessing body fat in collegiate baseball players.

    PubMed

    Loenneke, Jeremy P; Wray, Mandy E; Wilson, Jacob M; Barnes, Jeremy T; Kearney, Monica L; Pujol, Thomas J

    2013-01-01

    When assessing the fitness levels of athletes, body composition is usually estimated, as it may play a role in athletic performance. Therefore, the purpose of this study was to determine the validity of bioelectrical impedance analysis (BIA) and skinfold (SKF) methods compared with dual-energy X-ray absorptiometry (DXA) for estimating percent body fat (%BF) in Division 1 collegiate baseball players (n = 35). The results of this study indicate that the field methods investigated were not valid compared with DXA for estimating %BF. In conclusion, this study does not support the use of the TBF-350, HBF-306, HBF-500, or SKF thickness for estimating %BF in collegiate baseball players. The reliability of these BIA devices remains unknown; therefore, it is currently uncertain if they may be used to track changes over time.

  5. Performance, accuracy, and Web server for evolutionary placement of short sequence reads under maximum likelihood.

    PubMed

    Berger, Simon A; Krompass, Denis; Stamatakis, Alexandros

    2011-05-01

    We present an evolutionary placement algorithm (EPA) and a Web server for the rapid assignment of sequence fragments (short reads) to edges of a given phylogenetic tree under the maximum-likelihood model. The accuracy of the algorithm is evaluated on several real-world data sets and compared with placement by pair-wise sequence comparison, using edit distances and BLAST. We introduce a slow and accurate as well as a fast and less accurate placement algorithm. For the slow algorithm, we develop additional heuristic techniques that yield almost the same run times as the fast version with only a small loss of accuracy. When those additional heuristics are employed, the run time of the more accurate algorithm is comparable with that of a simple BLAST search for data sets with a high number of short query sequences. Moreover, the accuracy of the EPA is significantly higher, in particular when the sample of taxa in the reference topology is sparse or inadequate. Our algorithm, which has been integrated into RAxML, therefore provides an equally fast but more accurate alternative to BLAST for tree-based inference of the evolutionary origin and composition of short sequence reads. We are also actively developing a Web server that offers a freely available service for computing read placements on trees using the EPA.

  6. A simple test to assess the static and dynamic accuracy of an inertial sensors system for human movement analysis.

    PubMed

    Cutti, Andrea Giovanni; Giovanardi, Andrea; Rocchi, Laura; Davalli, Angelo

    2006-01-01

    In the present study we introduced a simple test to assess the orientation error of an inertial sensors system for human movement analysis, both in static and dynamic conditions. In particular, the test was intended to quantify the sensitivity of the orientation error to the direction and velocity of rotation. The test procedure was performed on an Xsens acquisition system of five MT9B sensors, and revealed that the system orientation error, expressed by Euler angle decomposition, was sensitive both to direction and to velocity, being higher for fast movements: for mean rotation velocities of 180 degrees/s and 360 degrees/s, the worst-case orientation error was 5.4 degrees and 11.6 degrees, respectively. The test can therefore be suggested as a useful tool to verify the user-specific system accuracy without requiring any special equipment. In addition, the test provides further error information concerning the direction and velocity of the movement, which is not supplied by the producer, since it depends on the specific field of application. PMID:17946728

  7. Accuracy and uncertainty assessment on geostatistical simulation of soil salinity in a coastal farmland using auxiliary variable.

    PubMed

    Yao, R J; Yang, J S; Shao, H B

    2013-06-01

    Understanding the spatial distribution of soil salinity aids farmers and researchers in identifying areas in the field where special management practices are required. Apparent electrical conductivity, measured by electromagnetic induction instruments in a fairly quick manner, has been widely used to estimate spatial soil salinity. However, methods used for this purpose are mostly a series of interpolation algorithms. In this study, sequential Gaussian simulation (SGS) and sequential Gaussian co-simulation (SGCS) algorithms were applied for assessing the prediction accuracy and uncertainty of soil salinity with apparent electrical conductivity as an auxiliary variable. Results showed that the spatial patterns of soil salinity generated by the SGS and SGCS algorithms were consistent with the measured values. The profile distribution of soil salinity was characterized by an increase with depth, with medium salinization (ECe 4-8 dS/m) as the predominant salinization class. The SGCS algorithm outperformed the SGS algorithm, with smaller root mean square error according to the generated realizations. In addition, the SGCS algorithm had larger proportions of true values falling within probability intervals and a narrower range of probability intervals than the SGS algorithm. We concluded that the SGCS algorithm had better performance in modeling local uncertainty and propagating spatial uncertainty. The inclusion of the auxiliary variable contributed to prediction capability and uncertainty modeling when using a densely sampled auxiliary variable as the covariate to predict the sparse target variable.
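    A standard way to judge how well a simulation algorithm models local uncertainty, as this study does, is to check the coverage of its probability intervals: for a well-calibrated model, about a fraction p of the true values should fall inside the symmetric p-probability interval of the local predictive distribution. A hedged sketch assuming Gaussian predictive distributions and fully simulated data (not the study's ECe measurements):

```python
import numpy as np
from statistics import NormalDist

def coverage(true_vals, means, sds, p):
    """Fraction of true values inside the symmetric p-probability interval of
    each local Gaussian predictive distribution; ~p when well calibrated."""
    z = NormalDist().inv_cdf((1.0 + p) / 2.0)
    t, m, s = map(np.asarray, (true_vals, means, sds))
    return float(np.mean(np.abs(t - m) <= z * s))

rng = np.random.default_rng(0)
m = rng.uniform(2.0, 10.0, 5000)     # simulated local predictions (e.g. ECe, dS/m)
s = rng.uniform(0.5, 1.5, 5000)      # local predictive standard deviations
truth = rng.normal(m, s)             # "true" values consistent with the model
print(round(coverage(truth, m, s, 0.9), 3))  # close to 0.9 for a calibrated model
```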

  8. Use of measurement uncertainty analysis to assess accuracy of carbon mass balance closure for a cellulase production process.

    PubMed

    Schell, Daniel J; Sáez, Juan Carlos; Hamilton, Jenny; Tholudur, Arun; McMillan, James D

    2002-01-01

    Closing carbon mass balances is a critical and necessary step for verifying the performance of any conversion process. We developed a methodology for calculating carbon mass balance closures for a cellulase production process and then applied measurement uncertainty analysis to calculate 95% confidence limits to assess the accuracy of the results. Cellulase production experiments were conducted in 7-L fermentors using Trichoderma reesei grown on pure cellulose (Solka-floc), glucose, or lactose. All input and output carbon-containing streams were measured and carbon dioxide in the exhaust gas was quantified using a mass spectrometer. On Solka-floc, carbon mass balances ranged from 90 to 100% closure for the first 48 h but increased to 101 to 135% closure from 72 h to the end of the cultivation at 168 h. Carbon mass balance closures for soluble sugar substrates ranged from 92 to 127% over the entire course of the cultivations. The 95% confidence intervals (CIs) for carbon mass balance closure were typically +/-11 to 12 percentage points after 48 h of cultivation. Many of the carbon mass balance results did not bracket 100% closure within the 95% CIs. These results suggest that measurement problems with the experimental or analytical methods may exist. This work shows that uncertainty analysis can be a useful diagnostic tool for identifying measurement problems in complex biochemical systems.
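    Closure is the ratio of measured output carbon to input carbon, and first-order propagation of the per-stream measurement uncertainties yields an approximate 95% confidence interval. A minimal sketch under the assumption of independent, normally distributed stream uncertainties; all masses below are invented, not the cultivation data:

```python
import math

def closure_with_ci(inputs, outputs):
    """Carbon mass balance closure (%) with a propagated ~95% CI.

    `inputs`/`outputs`: lists of (carbon_mass_g, std_uncertainty_g) for each
    independently measured stream. First-order propagation; 95% CI ~ 1.96 sigma.
    """
    c_in = sum(m for m, _ in inputs)
    c_out = sum(m for m, _ in outputs)
    u_in = math.sqrt(sum(u**2 for _, u in inputs))
    u_out = math.sqrt(sum(u**2 for _, u in outputs))
    closure = 100.0 * c_out / c_in
    # relative uncertainties add in quadrature for a ratio of independent sums
    u_closure = closure * math.sqrt((u_in / c_in) ** 2 + (u_out / c_out) ** 2)
    return closure, 1.96 * u_closure

# Illustrative (fabricated) streams: substrate + base feed in; cells, protein, CO2 out
closure, ci = closure_with_ci(
    inputs=[(100.0, 3.0), (5.0, 0.2)],
    outputs=[(60.0, 2.0), (20.0, 1.5), (18.0, 1.0)],
)
print(f"closure = {closure:.1f}% +/- {ci:.1f} (95% CI)")
```

    The diagnostic logic of the paper follows directly: if the CI around a closure value does not bracket 100%, some stream is likely being mismeasured.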

  9. Accuracy Assessment of Underwater Photogrammetric Three Dimensional Modelling for Coral Reefs

    NASA Astrophysics Data System (ADS)

    Guo, T.; Capra, A.; Troyer, M.; Gruen, A.; Brooks, A. J.; Hench, J. L.; Schmitt, R. J.; Holbrook, S. J.; Dubbini, M.

    2016-06-01

    Recent advances in automation of photogrammetric 3D modelling software packages have stimulated interest in reconstructing highly accurate 3D object geometry in unconventional environments such as underwater utilizing simple and low-cost camera systems. The accuracy of underwater 3D modelling is affected by more parameters than in single media cases. This study is part of a larger project on 3D measurements of temporal change of coral cover in tropical waters. It compares the accuracies of 3D point clouds generated from images acquired with a system camera mounted in an underwater housing and with the popular GoPro cameras, respectively. A precisely measured calibration frame was placed in the target scene in order to provide accurate control information and also quantify the errors of the modelling procedure. In addition, several objects (cinder blocks) with various shapes were arranged in the air and underwater, and 3D point clouds were generated by automated image matching. These were further used to examine the relative accuracy of the point cloud generation by comparing the point clouds of the individual objects with the objects measured by the system camera in air (the best possible values). Given a working distance of about 1.5 m, the GoPro camera can achieve a relative accuracy of 1.3 mm in air and 2.0 mm in water. The system camera achieved an accuracy of 1.8 mm in water, which meets our requirements for coral measurement in this system.

  10. Dehydration: physiology, assessment, and performance effects.

    PubMed

    Cheuvront, Samuel N; Kenefick, Robert W

    2014-01-01

    This article provides a comprehensive review of dehydration assessment and presents a unique evaluation of the dehydration and performance literature. The importance of osmolality and volume is emphasized when discussing the physiology, assessment, and performance effects of dehydration. The underappreciated physiologic distinction between a loss of hypo-osmotic body water (intracellular dehydration) and an iso-osmotic loss of body water (extracellular dehydration) is presented and argued as the single most essential aspect of dehydration assessment. The importance of diagnostic and biological variation analyses to dehydration assessment methods is reviewed, and their use in gauging the true potential of any dehydration assessment method is highlighted. The necessity of establishing proper baselines is discussed, as is the magnitude of dehydration required to elicit reliable and detectable osmotic or volume-mediated compensatory physiologic responses. The discussion of physiologic responses further helps inform and explain our analysis of the literature suggesting a ≥ 2% dehydration threshold for impaired endurance exercise performance mediated by volume loss. In contrast, no clear threshold or plausible mechanism(s) support the marginal, but potentially important, impairment in strength and power observed with dehydration. Similarly, the potential for dehydration to impair cognition appears small and related primarily to distraction or discomfort. The impact of dehydration on any particular sport skill or task is therefore likely dependent upon the makeup of the task itself (e.g., endurance, strength, cognitive, and motor skill).

  11. A Litmus Test for Performance Assessment.

    ERIC Educational Resources Information Center

    Finson, Kevin D.; Beaver, John B.

    1992-01-01

    Presents 10 guidelines for developing performance-based assessment items. Presents a sample activity developed from the guidelines. The activity tests students' ability to observe, classify, and infer, using red and blue litmus paper, a pH-range finder, vinegar, ammonia, an unknown solution, distilled water, and paper towels. (PR)

  12. Self-Assessed Intelligence and Academic Performance

    ERIC Educational Resources Information Center

    Chamorro-Premuzic, Tomas; Furnham, Adrian

    2006-01-01

    This paper reports the results of a two-year longitudinal study of the relationship between self-assessed intelligence (SAI) and academic performance (AP) in a sample of 184 British undergraduate students. Results showed significant correlations between SAI (both before and after taking an IQ test) and academic exam marks obtained two years later,…

  13. Assessing Performance When the Stakes are High.

    ERIC Educational Resources Information Center

    Crawford, William R.

    This paper is concerned with measuring achievement levels of medical students. Precise tools are needed to assess the readiness of an individual to practice. The basic question then becomes, what can this candidate do, at a given time, under given circumstances. Given the definition of the circumstances, and the candidate's performance, the…

  14. The Confidence-Accuracy Relationship in Diagnostic Assessment: The Case of the Potential Difference in Parallel Electric Circuits

    ERIC Educational Resources Information Center

    Saglam, Murat

    2015-01-01

    This study explored the relationship between accuracy of and confidence in performance of 114 prospective primary school teachers in answering diagnostic questions on potential difference in parallel electric circuits. The participants were required to indicate their confidence in their answers for each question. Bias and calibration indices were…

  15. Accuracy of CBCT images in the assessment of buccal marginal alveolar peri-implant defects: effect of field of view

    PubMed Central

    Murat, S; Kılıç, C; Yüksel, S; Avsever, H; Farman, A; Scarfe, W C

    2014-01-01

    Objectives: To investigate the reliability and accuracy of cone beam CT (CBCT) images obtained at different fields of view in detecting and quantifying simulated buccal marginal alveolar peri-implant defects. Methods: Simulated buccal defects were prepared in 69 implants inserted into cadaver mandibles. CBCT images at three different fields of view were acquired: 40 × 40, 60 × 60 and 100 × 100 mm. The presence or absence of defects was assessed on three sets of images using a five-point scale by three observers. Observers also measured the depth, width and volume of defects on CBCT images, which were compared with physical measurements. The kappa value was calculated to assess intra- and interobserver agreement. Six-way repeated analysis of variance was used to evaluate treatment effects on the diagnosis. Pairwise comparisons of median true-positive and true-negative rates were calculated by the χ2 test. Pearson's correlation coefficient was used to determine the relationship between measurements. Significance level was set as p < 0.05. Results: All observers had excellent intra-observer agreement. Defect status (p < 0.001) and defect size (p < 0.001) factors were statistically significant. Pairwise interactions were found between defect status and defect size (p = 0.001). No differences between median true-positive or true-negative values were found between CBCT field of views (p > 0.05). Significant correlations were found between physical and CBCT measurements (p < 0.001). Conclusions: All CBCT images performed similarly for the detection of simulated buccal marginal alveolar peri-implant defects. Depth, width and volume measurements of the defects from various CBCT images correlated highly with physical measurements. PMID:24645965
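    The kappa statistic used here for intra- and interobserver agreement corrects raw percent agreement for the agreement expected by chance from the raters' marginal category frequencies. A minimal two-rater sketch; the ratings below are fabricated for illustration, not the study's observations:

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters over nominal categories."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    po = np.mean(r1 == r2)                                        # observed agreement
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)   # chance agreement
    return (po - pe) / (1.0 - pe)

# Illustrative ratings (1 = defect present, 0 = absent) for 10 implants
obs_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
obs_b = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
print(round(cohens_kappa(obs_a, obs_b), 3))
```

    The same formula extends to the study's five-point scale, since it only depends on category-by-category marginal proportions.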

  16. Radioactive Waste Management Complex performance assessment: Draft

    SciTech Connect

    Case, M.J.; Maheras, S.J.; McKenzie-Carter, M.A.; Sussman, M.E.; Voilleque, P.

    1990-06-01

    A radiological performance assessment of the Radioactive Waste Management Complex at the Idaho National Engineering Laboratory was conducted to demonstrate compliance with appropriate radiological criteria of the US Department of Energy and the US Environmental Protection Agency for protection of the general public. The calculations involved modeling the transport of radionuclides from buried waste, to surface soil and subsurface media, and eventually to members of the general public via air, ground water, and food chain pathways. Projections of doses were made for both offsite receptors and individuals intruding onto the site after closure. In addition, uncertainty analyses were performed. Results of calculations made using nominal data indicate that the radiological doses will be below appropriate radiological criteria throughout operations and after closure of the facility. Recommendations were made for future performance assessment calculations.

  17. Accuracy of audio computer-assisted self-interviewing (ACASI) and self-administered questionnaires for the assessment of sexual behavior.

    PubMed

    Morrison-Beedy, Dianne; Carey, Michael P; Tu, Xin

    2006-09-01

    This study examined the accuracy of two retrospective methods and assessment intervals for recall of sexual behavior and assessed predictors of recall accuracy. Using a 2 (mode: audio computer-assisted self-interview [ACASI] vs. self-administered questionnaire [SAQ]) × 2 (frequency: monthly vs. quarterly) design, young women (N = 102) were randomly assigned to one of four conditions. Participants completed baseline measures, monitored their behavior with a daily diary, and returned monthly (or quarterly) for assessments. A mixed pattern of accuracy between the four assessment methods was identified. Monthly assessments yielded more accurate recall for protected and unprotected vaginal sex, but quarterly assessments yielded more accurate recall for unprotected oral sex. Mode differences were not strong, and hypothesized predictors of accuracy tended not to be associated with recall accuracy. Choice of assessment mode and frequency should be based upon the research question(s), population, resources, and context in which data collection will occur. PMID:16721506

  18. Interrater Reliability Estimators Commonly Used in Scoring Language Assessments: A Monte Carlo Investigation of Estimator Accuracy

    ERIC Educational Resources Information Center

    Morgan, Grant B.; Zhu, Min; Johnson, Robert L.; Hodge, Kari J.

    2014-01-01

    Common estimators of interrater reliability include Pearson product-moment correlation coefficients, Spearman rank-order correlations, and the generalizability coefficient. The purpose of this study was to examine the accuracy of estimators of interrater reliability when varying the true reliability, number of scale categories, and number of…
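    The accuracy of such estimators can be probed with a small Monte Carlo experiment: simulate two raters whose scores share a common true score, so the population interrater correlation is known, then see how well the sample Pearson coefficient recovers it. A hedged sketch of that idea; the simulation design below is illustrative, not the authors' study conditions:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_pearson(true_rel, n_examinees, n_reps):
    """Mean Pearson r between two simulated raters whose scores share a common
    true score; the population interrater correlation equals `true_rel`."""
    rs = []
    for _ in range(n_reps):
        t = rng.normal(0.0, np.sqrt(true_rel), n_examinees)            # shared true score
        r1 = t + rng.normal(0.0, np.sqrt(1 - true_rel), n_examinees)   # rater 1 error
        r2 = t + rng.normal(0.0, np.sqrt(1 - true_rel), n_examinees)   # rater 2 error
        rs.append(np.corrcoef(r1, r2)[0, 1])
    return float(np.mean(rs))

est = simulate_pearson(true_rel=0.8, n_examinees=50, n_reps=2000)
print(round(est, 3))  # close to 0.8, with slight small-sample bias
```

    Repeating this while varying the true reliability, the number of scale categories (by discretizing the scores), and the number of examinees is exactly the kind of manipulation a Monte Carlo study of estimator accuracy performs.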

  19. Comparative analysis of Worldview-2 and Landsat 8 for coastal saltmarsh mapping accuracy assessment

    NASA Astrophysics Data System (ADS)

    Rasel, Sikdar M. M.; Chang, Hsing-Chung; Diti, Israt Jahan; Ralph, Tim; Saintilan, Neil

    2016-05-01

    Coastal saltmarshes and their constituent components and processes are of scientific interest due to their ecological functions and services. However, the heterogeneity and seasonal dynamics of the coastal wetland system make it challenging to map saltmarshes with remotely sensed data. This study selected four important saltmarsh species (Phragmites australis, Sporobolus virginicus, Ficinia nodosa and Schoenoplectus sp.) as well as a mangrove and a pine tree species (Avicennia and Casuarina sp., respectively). High spatial resolution Worldview-2 data and coarse spatial resolution Landsat 8 imagery were selected for this study. Among the selected vegetation types, some patches were fragmented and close to the spatial resolution of the Worldview-2 data, while others were larger than the 30 m resolution of the Landsat 8 data. This study aims to test the effectiveness of different classifiers for imagery with various spatial and spectral resolutions. Three classification algorithms, Maximum Likelihood Classifier (MLC), Support Vector Machine (SVM) and Artificial Neural Network (ANN), were tested and their mapping accuracies compared on the results derived from both satellite images. For the Worldview-2 data, SVM gave the highest overall accuracy (92.12%, kappa = 0.90), followed by ANN (90.82%, kappa = 0.89) and MLC (90.55%, kappa = 0.88). For the Landsat 8 data, MLC (82.04%) showed the highest classification accuracy compared with SVM (77.31%) and ANN (75.23%). The producer's accuracies of the classification results are also presented in the paper.
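    Overall accuracy, kappa, and producer's accuracy are all derived from the classification's confusion matrix. A minimal sketch of those computations; the 3-class matrix below is invented for illustration and is not the Worldview-2 or Landsat 8 result:

```python
import numpy as np

def accuracy_metrics(cm):
    """Thematic-map accuracy from a confusion matrix (rows = reference,
    columns = classified). Returns overall accuracy, kappa, and the
    producer's accuracy for each class."""
    cm = np.asarray(cm, float)
    n = cm.sum()
    overall = np.trace(cm) / n
    # chance agreement from the row/column marginals
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    kappa = (overall - pe) / (1.0 - pe)
    producers = np.diag(cm) / cm.sum(axis=1)   # 1 - omission error per class
    return overall, kappa, producers

# Illustrative 3-class matrix (e.g. three saltmarsh species)
cm = [[50, 3, 2],
      [4, 45, 6],
      [1, 5, 44]]
overall, kappa, producers = accuracy_metrics(cm)
print(f"overall = {overall:.3f}, kappa = {kappa:.3f}")
```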

  20. Applying Signal-Detection Theory to the Study of Observer Accuracy and Bias in Behavioral Assessment

    ERIC Educational Resources Information Center

    Lerman, Dorothea C.; Tetreault, Allison; Hovanetz, Alyson; Bellaci, Emily; Miller, Jonathan; Karp, Hilary; Mahmood, Angela; Strobel, Maggie; Mullen, Shelley; Keyl, Alice; Toupard, Alexis

    2010-01-01

    We evaluated the feasibility and utility of a laboratory model for examining observer accuracy within the framework of signal-detection theory (SDT). Sixty-one individuals collected data on aggression while viewing videotaped segments of simulated teacher-child interactions. The purpose of Experiment 1 was to determine if brief feedback and…

  1. Portable device to assess dynamic accuracy of global positioning systems (GPS) receivers used in agricultural aircraft

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A device was designed to test the dynamic accuracy of Global Positioning System (GPS) receivers used in aerial vehicles. The system works by directing a sun-reflected light beam from the ground to the aircraft using mirrors. A photodetector is placed pointing downward from the aircraft and circuitry...

  2. ESA ExoMars: Pre-launch PanCam Geometric Modeling and Accuracy Assessment

    NASA Astrophysics Data System (ADS)

    Li, D.; Li, R.; Yilmaz, A.

    2014-08-01

    ExoMars is the flagship mission of the European Space Agency (ESA) Aurora Programme. The mobile scientific platform, or rover, will carry a drill and a suite of instruments dedicated to exobiology and geochemistry research. As the ExoMars rover is designed to travel kilometres over the Martian surface, high-precision rover localization and topographic mapping will be critical for traverse path planning and safe planetary surface operations. For such purposes, the ExoMars rover Panoramic Camera system (PanCam) will acquire images that are processed into an imagery network providing vision information for photogrammetric algorithms to localize the rover and generate 3-D mapping products. Since the design of the ExoMars PanCam will influence localization and mapping accuracy, quantitative error analysis of the PanCam design will improve scientists' awareness of the achievable level of accuracy, and enable the PanCam design team to optimize its design to achieve the highest possible level of localization and mapping accuracy. Based on photogrammetric principles and uncertainty propagation theory, we have developed a method to theoretically analyze how mapping and localization accuracy would be affected by various factors, such as length of stereo hard-baseline, focal length, and pixel size, etc.
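    For a standard stereo pair, first-order uncertainty propagation gives the familiar depth-error relation sigma_Z = Z^2 * sigma_d / (f * B), where sigma_d is the disparity measurement uncertainty, f the focal length, and B the stereo baseline. A hedged sketch with hypothetical camera parameters, not the actual ExoMars PanCam specification:

```python
def depth_uncertainty_mm(z_mm, baseline_mm, focal_mm, pixel_mm, disparity_sigma_px=0.3):
    """First-order propagated depth uncertainty for a stereo pair:
    sigma_Z = Z^2 * sigma_d / (f * B), with disparity noise given in pixels."""
    sigma_d_mm = disparity_sigma_px * pixel_mm
    return (z_mm ** 2) * sigma_d_mm / (focal_mm * baseline_mm)

# Hypothetical PanCam-like geometry: 500 mm baseline, 12 mm focal length,
# 7.4 um pixels, target at 5 m
print(round(depth_uncertainty_mm(5000.0, 500.0, 12.0, 0.0074), 2))
```

    The quadratic growth with range Z is why baseline length and focal length are the dominant design levers in the accuracy analysis described above.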

  3. Assessing the Accuracy and Consistency of Language Proficiency Classification under Competing Measurement Models

    ERIC Educational Resources Information Center

    Zhang, Bo

    2010-01-01

    This article investigates how measurement models and statistical procedures can be applied to estimate the accuracy of proficiency classification in language testing. The paper starts with a concise introduction of four measurement models: the classical test theory (CTT) model, the dichotomous item response theory (IRT) model, the testlet response…

  4. Accuracy, Confidence, and Calibration: How Young Children and Adults Assess Credibility

    ERIC Educational Resources Information Center

    Tenney, Elizabeth R.; Small, Jenna E.; Kondrad, Robyn L.; Jaswal, Vikram K.; Spellman, Barbara A.

    2011-01-01

    Do children and adults use the same cues to judge whether someone is a reliable source of information? In 4 experiments, we investigated whether children (ages 5 and 6) and adults used information regarding accuracy, confidence, and calibration (i.e., how well an informant's confidence predicts the likelihood of being correct) to judge informants'…

  5. Assessing the accuracy of the Second Military Survey for the Doren Landslide (Vorarlberg, Austria)

    NASA Astrophysics Data System (ADS)

    Zámolyi, András.; Székely, Balázs; Biszak, Sándor

    2010-05-01

    Reconstruction of the early and long-term evolution of landslide areas is especially important for determining the proportion of anthropogenic influence on the evolution of the region affected by mass movements. The recent geologic and geomorphological setting of the prominent Doren landslide in Vorarlberg (Western Austria) has been studied extensively by various research groups and civil engineering companies. Civil aerial imaging of the area dates back to the 1950's. Modern monitoring techniques include aerial imaging as well as airborne and terrestrial laser scanning (LiDAR), providing an almost yearly assessment of the changing geomorphology of the area. However, initiation of the landslide most probably occurred earlier than the application of these methods, since there is evidence that the landslide was already active in the 1930's. For studying the initial phase of landslide formation, one possibility is to draw on information recorded in historic photographs or historic maps. In this case study we integrated topographic information from the map sheets of the Second Military Survey of the Habsburg Empire, conducted in Vorarlberg during the years 1816-1821 (Kretschmer et al., 2004), into a comprehensive GIS. The region of interest around the Doren landslide was georeferenced using the method of Timár et al. (2006), refined by Molnár (2009), thus providing geodetically correct positioning and the possibility of matching the topographic features from the historic map with features recognized in the LiDAR DTM. The landslide of Doren is clearly visible in the historic map. Additionally, prominent geomorphologic features such as morphological scarps, rills and gullies, mass movement lobes and the course of the Weißach rivulet can be matched. Not only can the shape and character of these elements be recognized and matched, but the positional accuracy is also adequate for geomorphological studies. Since the settlement structure is very stable in the

  6. Viewing the hand prior to movement improves accuracy of pointing performed toward the unseen contralateral hand.

    PubMed

    Desmurget, M; Rossetti, Y; Jordan, M; Meckler, C; Prablanc, C

    1997-06-01

    It is now well established that the accuracy of pointing movements to visual targets is worse in the full open loop condition (FOL; the hand is never visible) than in the static closed loop condition (SCL; the hand is only visible in static position prior to movement onset). In order to account for this result, it is generally accepted that viewing the hand in static position (SCL) improves the movement planning process by allowing a better encoding of the initial state of the motor apparatus. Interestingly, this widespread interpretation has recently been challenged by several studies suggesting that the effect of viewing the upper limb at rest might be explained in terms of the simultaneous vision of the hand and target. This result is supported by recent studies showing that goal-directed movements involve different types of planning (egocentric versus allocentric) depending on whether the hand and target are seen simultaneously or not before movement onset. The main aim of the present study was to test whether or not the accuracy improvement observed when the hand is visible before movement onset is related, at least partially, to a better encoding of the initial state of the upper limb. To address this question, we studied experimental conditions in which subjects were instructed to point with their right index finger toward their unseen left index finger. In that situation (proprioceptive pointing), the hand and target are never visible simultaneously, and an improvement of movement accuracy in SCL with respect to FOL may only be explained by a better encoding of the initial state of the moving limb when vision is present. The results of this experiment showed that both the systematic and the variable errors were significantly lower in the SCL than in the FOL condition. This suggests: (1) that the effect of viewing the static hand prior to motion does not only depend on the simultaneous vision of the goal and the effector during movement planning; (2) that
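    The systematic (constant) and variable errors analyzed in pointing studies are conventionally computed from the mean movement endpoint and the scatter of endpoints about that mean. A minimal 2-D sketch with fabricated endpoints, not the experiment's data:

```python
import numpy as np

def pointing_errors(endpoints, target):
    """Systematic (constant) error = distance from the mean endpoint to the
    target; variable error = RMS distance of endpoints from their own mean."""
    pts = np.asarray(endpoints, float)
    tgt = np.asarray(target, float)
    mean_pt = pts.mean(axis=0)
    constant = np.linalg.norm(mean_pt - tgt)
    variable = np.sqrt(np.mean(np.sum((pts - mean_pt) ** 2, axis=1)))
    return constant, variable

# Fabricated 2-D endpoints (cm) for four trials toward a target at the origin
trials = [(1.2, 0.4), (0.8, -0.2), (1.5, 0.1), (1.1, 0.5)]
ce, ve = pointing_errors(trials, (0.0, 0.0))
print(f"constant error = {ce:.2f} cm, variable error = {ve:.2f} cm")
```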

  7. Assessing the accuracy of software predictions of mammalian and microbial metabolites

    EPA Science Inventory

    New chemical development and hazard assessments benefit from accurate predictions of mammalian and microbial metabolites. Fourteen biotransformation libraries encoded in eight software packages that predict metabolite structures were assessed for their sensitivity (proportion of ...

  8. Computational Tools to Assess Turbine Biological Performance

    SciTech Connect

    Richmond, Marshall C.; Serkowski, John A.; Rakowski, Cynthia L.; Strickler, Brad; Weisbeck, Molly; Dotson, Curtis L.

    2014-07-24

    Public Utility District No. 2 of Grant County (GCPUD) operates the Priest Rapids Dam (PRD), a hydroelectric facility on the Columbia River in Washington State. The dam contains 10 Kaplan-type turbine units that are now more than 50 years old. Plans are underway to refit these aging turbines with new runners. The Columbia River at PRD is a migratory pathway for several species of juvenile and adult salmonids, so passage of fish through the dam is a major consideration when upgrading the turbines. In this paper, a method for turbine biological performance assessment (BioPA) is demonstrated. Using this method, a suite of biological performance indicators is computed based on simulated data from a CFD model of a proposed turbine design. Each performance indicator is a measure of the probability of exposure to a certain dose of an injury mechanism. Using known relationships between the dose of an injury mechanism and frequency of injury (dose–response) from laboratory or field studies, the likelihood of fish injury for a turbine design can be computed from the performance indicator. By comparing the values of the indicators from proposed designs, the engineer can identify the more-promising alternatives. We present an application of the BioPA method for baseline risk assessment calculations for the existing Kaplan turbines at PRD that will be used as the minimum biological performance that a proposed new design must achieve.
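    The core of a calculation like BioPA, averaging a dose-response relationship over the distribution of exposure doses sampled from the CFD model, can be sketched compactly. Everything below (the logistic dose-response curve and the lognormal dose sample) is a hypothetical stand-in, not the published curves or the PRD simulation data:

```python
import numpy as np

rng = np.random.default_rng(7)

def injury_likelihood(doses, dose_response):
    """Expected injury frequency for a simulated population of fish 'particles':
    average the dose-response probability over the sampled exposure doses."""
    return float(np.mean(dose_response(np.asarray(doses))))

def dr(dose):
    """Hypothetical dose-response for a shear-like injury mechanism:
    logistic in log10(dose)."""
    return 1.0 / (1.0 + np.exp(-(np.log10(dose) - 3.0) * 4.0))

# Hypothetical exposure-dose sample, as if drawn from CFD streamtraces
doses = rng.lognormal(mean=np.log(400.0), sigma=0.5, size=10000)
print(round(injury_likelihood(doses, dr), 3))
```

    Comparing this expected injury frequency across candidate runner designs is what allows the engineer to rank alternatives, as described above.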

  9. An assessment of accuracy, error, and conflict with support values from genome-scale phylogenetic data.

    PubMed

    Taylor, Derek J; Piel, William H

    2004-08-01

    Despite the importance of molecular phylogenetics, few of its assumptions have been tested with real data. It is commonly assumed that nonparametric bootstrap values are an underestimate of the actual support, Bayesian posterior probabilities are an overestimate of the actual support, and among-gene phylogenetic conflict is low. We directly tested these assumptions by using a well-supported yeast reference tree. We found that bootstrap values were not significantly different from accuracy. Bayesian support values were, however, significant overestimates of accuracy but still had low false-positive error rates (0% to 2.8%) at the highest values (>99%). Although we found evidence for a branch-length bias contributing to conflict, there was little evidence for widespread, strongly supported among-gene conflict from bootstraps. The results demonstrate that caution is warranted concerning conclusions of conflict based on the assumption of underestimation for support values in real data. PMID:15140947
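    Nonparametric bootstrap support is the proportion of replicates, each built by resampling alignment columns with replacement, in which a grouping of interest is recovered. A toy sketch that uses a nearest-neighbour distance criterion in place of full tree inference; the three-taxon 0/1 character matrix is invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_support(chars, n_boot=1000):
    """Proportion of column-resampled replicates in which taxa 0 and 1 are
    each other's nearest neighbours (a stand-in for recovering the (0,1)
    grouping with a distance criterion)."""
    chars = np.asarray(chars)
    n_cols = chars.shape[1]
    hits = 0
    for _ in range(n_boot):
        cols = rng.integers(0, n_cols, n_cols)   # resample columns with replacement
        b = chars[:, cols]
        d01 = np.mean(b[0] != b[1])
        d02 = np.mean(b[0] != b[2])
        d12 = np.mean(b[1] != b[2])
        hits += d01 < min(d02, d12)
    return hits / n_boot

# Toy 3-taxon character matrix in which taxa 0 and 1 are clearly similar
chars = [[0, 1, 1, 0, 1, 0, 0, 1, 1, 0],
         [0, 1, 1, 0, 1, 0, 1, 1, 1, 0],
         [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]]
print(bootstrap_support(chars))
```

    In a real analysis the inner step would be a full tree search per replicate; the resampling logic and the support proportion are the same.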

  10. Speech variability effects on recognition accuracy associated with concurrent task performance by pilots

    NASA Technical Reports Server (NTRS)

    Simpson, C. A.

    1985-01-01

    In the present study of the responses of pairs of pilots to aircraft warning classification tasks using an isolated word, speaker-dependent speech recognition system, the induced stress was manipulated by means of different scoring procedures for the classification task and by the inclusion of a competitive manual control task. Both speech patterns and recognition accuracy were analyzed, and recognition errors were recorded by type for an isolated word speaker-dependent system and by an offline technique for a connected word speaker-dependent system. While errors increased with task loading for the isolated word system, no such effect of task loading was found for the connected word system.

  11. Accuracy in Student Self-Assessment: Directions and Cautions for Research

    ERIC Educational Resources Information Center

    Brown, Gavin T. L.; Andrade, Heidi L.; Chen, Fei

    2015-01-01

    Student self-assessment is a central component of current conceptions of formative and classroom assessment. The research on self-assessment has focused on its efficacy in promoting both academic achievement and self-regulated learning, with little concern for issues of validity. Because reliability of testing is considered a sine qua non for the…

  12. Accuracy assessment of high frequency 3D ultrasound for digital impression-taking of prepared teeth

    NASA Astrophysics Data System (ADS)

    Heger, Stefan; Vollborn, Thorsten; Tinschert, Joachim; Wolfart, Stefan; Radermacher, Klaus

    2013-03-01

Silicone-based impression-taking of prepared teeth followed by plaster casting is well established but potentially unreliable, error-prone and inefficient, particularly in combination with emerging techniques like computer-aided design and manufacturing (CAD/CAM) of dental prostheses. Intra-oral optical scanners for digital impression-taking have been introduced, but some drawbacks still exist. Because optical waves can hardly penetrate liquids or soft tissues, sub-gingival preparations still need to be uncovered invasively prior to scanning. High-frequency ultrasound (HFUS) based micro-scanning has recently been investigated as an alternative to optical intra-oral scanning. Ultrasound is less sensitive to oral fluids and in principle able to penetrate gingiva without invasive exposure of sub-gingival preparations. Nevertheless, the spatial resolution as well as the digitization accuracy of an ultrasound-based micro-scanning system remain critical parameters, because the ultrasound wavelength in water-like media such as gingiva is typically larger than that of optical waves. In this contribution, the in-vitro accuracy of ultrasound-based micro-scanning for tooth geometry reconstruction is investigated and compared to its extra-oral optical counterpart. In order to increase the spatial resolution of the system, 2nd-harmonic frequencies from a mechanically driven focused single-element transducer were separated, and corresponding 3D surface models were calculated for both the fundamental and the 2nd harmonic. Measurements on phantoms, model teeth and human teeth were carried out to evaluate spatial resolution and surface detection accuracy. Comparison of optical and ultrasound digital impression-taking indicates that, in terms of accuracy, ultrasound-based tooth digitization can be an alternative to optical impression-taking.

  13. Accuracy of ELISA detection methods for gluten and reference materials: a realistic assessment.

    PubMed

    Diaz-Amigo, Carmen; Popping, Bert

    2013-06-19

    The determination of prolamins by ELISA and subsequent conversion of the resulting concentration to gluten content in food appears to be a comparatively simple and straightforward process with which many laboratories have years-long experience. At the end of the process, a value of gluten, expressed in mg/kg or ppm, is obtained. This value often is the basis for the decision if a product can be labeled gluten-free or not. On the basis of currently available scientific information, the accuracy of the obtained values with commonly used commercial ELISA kits has to be questioned. Although recently several multilaboratory studies have been conducted in an attempt to emphasize and ensure the accuracy of the results, data suggest that it was the precision of these assays, not the accuracy, that was confirmed because some of the underlying assumptions for calculating the gluten content lack scientific data support as well as appropriate reference materials for comparison. This paper discusses the issues of gluten determination and quantification with respect to antibody specificity, extraction procedures, reference materials, and their commutability.

  14. Investigating General Chemistry Students' Metacognitive Monitoring of Their Exam Performance by Measuring Postdiction Accuracies over Time

    ERIC Educational Resources Information Center

    Hawker, Morgan J.; Dysleski, Lisa; Rickey, Dawn

    2016-01-01

    Metacognitive monitoring of one's own understanding plays a key role in learning. An aspect of metacognitive monitoring can be measured by comparing a student's prediction or postdiction of performance (a judgment made before or after completing the relevant task) with the student's actual performance. In this study, we investigated students'…

  15. Performance Rating Accuracy Improvement through Changes in Individual and System Characteristics.

    ERIC Educational Resources Information Center

    Kavanagh, Michael J.

    Although the quest for better measurement of individual job performance has generated considerable empirical research in industrial and organizational psychology, the feeling persists that a good job is not really being done in measuring job performance. This research project investigated the effects of differences in both individual and systems…

  16. Complexity, Accuracy, Fluency and Lexis in Task-Based Performance: A Synthesis of the Ealing Research

    ERIC Educational Resources Information Center

    Skehan, Peter; Foster, Pauline

    2012-01-01

    This chapter will present a research synthesis of a series of studies, termed here the Ealing research. The studies use the same general framework to conceptualise tasks and task performance, enabling easier comparability. The different studies, although each is self-contained, build into a wider picture of task performance. The major point of…

  17. Performance viewing and editing in ASSESS Outsider

    SciTech Connect

    Snell, M.K.; Key, B.; Bingham, B.

    1993-07-01

The Analytic System and Software for Evaluation of Safeguards and Security (ASSESS) Facility module records site information in the path elements and areas of an Adversary Sequence Diagram. The ASSESS Outsider evaluation module takes this information and first calculates performance values describing how much detection and delay is assigned at each path element and then uses the performance values to determine most-vulnerable paths. This paper discusses new Outsider capabilities that allow the user to view how elements are being defeated and to modify some of these values in Outsider. Outsider now displays how different path element segments are defeated and contrasts the probability of detection for alternate methods of defeating a door (e.g., the lock or the door face itself). The user can also override element segment delays and detection probabilities directly during analysis in Outsider. These capabilities allow users to compare element performance and to verify correct path element performance for all elements, not just those on the most-vulnerable path, as is currently the case. Improvements or reductions in protection can be easily checked without creating a new set of facility files.

  18. Performance assessment of radioactive waste repositories.

    PubMed

    Campbell, J E; Cranwell, R M

    1988-03-18

    The current plans for permanent disposal of radioactive waste call for its emplacement in deep underground repositories mined from geologically stable rock formations. The U.S. Nuclear Regulatory Commission and U.S. Environmental Protection Agency have established regulations setting repository performance standards for periods of up to 10,000 years after disposal. Compliance with these regulations will be based on a performance assessment that includes (i) identification and evaluation of the likelihood of all significant processes and events that could affect a repository, (ii) examination of the effects of these processes and events on the performance of a repository, and (iii) estimation of the releases of radionuclides, including the associated uncertainties, caused by these processes and events. These estimates are incorporated into a probability distribution function showing the likelihood of exceeding radionuclide release limits specified by regulations. PMID:3279510
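The probability-of-exceedance estimate described above, one point of the complementary cumulative distribution function built from repeated scenario simulations, can be sketched as follows (hypothetical release values, not regulatory data):

```python
def exceedance_probability(releases, limit):
    """Empirical probability that a simulated radionuclide release exceeds a
    regulatory limit: one point of the complementary CDF of the release
    distribution produced by repeated scenario simulations."""
    return sum(1 for r in releases if r > limit) / len(releases)

# Hypothetical normalized releases from ten scenario simulations; the limit
# is expressed in the same normalized units.
releases = [0.02, 0.10, 0.35, 0.60, 1.20, 0.05, 0.90, 2.50, 0.01, 0.40]
p_exceed = exceedance_probability(releases, 1.0)
```

Evaluating this function over a grid of limits traces out the full distribution function that compliance is judged against.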

  19. Assessing the impacts of precipitation bias on distributed hydrologic model calibration and prediction accuracy

    NASA Astrophysics Data System (ADS)

    Looper, Jonathan P.; Vieux, Baxter E.; Moreno, Maria A.

    2012-02-01

Physics-based distributed (PBD) hydrologic models predict runoff throughout a basin using the laws of conservation of mass and momentum, and benefit from more accurate and representative precipitation input. V flo™ is a gridded distributed hydrologic model that predicts runoff and continuously updates soil moisture. As a participating model in the second Distributed Model Intercomparison Project (DMIP2), V flo™ is applied to the Illinois and Blue River basins in Oklahoma. Model parameters are derived from geospatial data for initial setup, and then adjusted to reproduce the observed flow under continuous time-series simulations and on an event basis. Simulation results demonstrate that certain runoff events are governed by saturation excess processes, while in others, infiltration-rate excess processes dominate. Streamflow prediction accuracy is enhanced when multi-sensor precipitation estimates (MPE) are bias corrected through re-analysis of the MPE provided in the DMIP2 experiment, resulting in gauge-corrected precipitation estimates (GCPE). Model calibration identified a set of parameters that minimized objective functions for errors in runoff volume and instantaneous discharge. Simulated streamflow for the Blue and Illinois River basins has Nash-Sutcliffe efficiency coefficients of 0.61 and 0.68, respectively, for the 1996-2002 period using GCPE. The streamflow prediction accuracy improves by 74% in terms of Nash-Sutcliffe efficiency when GCPE is used during the calibration period. Without model calibration, excellent agreement between hourly simulated and observed discharge is obtained for the Illinois, whereas in the Blue River, adjustment of parameters affecting both saturation and infiltration-rate excess processes was necessary. During the 1996-2002 period, GCPE input was more important than model calibration for the Blue River, while model calibration proved more important for the Illinois River. During the verification period (2002
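The Nash-Sutcliffe efficiency used to score the streamflow simulations above can be computed with a few lines (hypothetical discharge series, not the DMIP2 data):

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations about
    their mean. NSE = 1 is a perfect fit; NSE = 0 means the model does no
    better than always predicting the observed mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / ss_tot

# Hypothetical hourly discharges (m^3/s):
obs = [10.0, 12.0, 30.0, 55.0, 40.0, 22.0, 15.0]
sim = [11.0, 13.0, 26.0, 50.0, 43.0, 20.0, 16.0]
nse = nash_sutcliffe(obs, sim)
```

Values of 0.61-0.68, as reported for the Blue and Illinois basins, indicate that the model explains well over half of the observed variance about the mean.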

  20. Rigorous Training of Dogs Leads to High Accuracy in Human Scent Matching-To-Sample Performance.

    PubMed

    Marchal, Sophie; Bregeras, Olivier; Puaux, Didier; Gervais, Rémi; Ferry, Barbara

    2016-01-01

Human scent identification is based on a matching-to-sample task in which trained dogs are required to compare a scent sample collected from an object found at a crime scene to that of a suspect. Based on dogs' greater olfactory ability to detect and process odours, this method has been used in forensic investigations to identify the odour of a suspect at a crime scene. The excellent reliability and reproducibility of the method largely depend on rigor in dog training. The present study describes the various steps of training that lead to high sensitivity scores, with dogs matching samples with 90% efficiency when the complexity of the scent presented in the sample is similar to that presented in the lineups, and specificity reaching a ceiling, with no false alarms in human scent matching-to-sample tasks. This high level of accuracy ensures reliable results in judicial human scent identification tests. Also, our data should convince law enforcement authorities to use these results as official forensic evidence when dogs are trained appropriately. PMID:26863620

  3. Accuracy assessment of the ERP prediction method based on analysis of 100-year ERP series

    NASA Astrophysics Data System (ADS)

    Malkin, Z.; Tissen, V. M.

    2012-12-01

A new method has been developed at the Siberian Research Institute of Metrology (SNIIM) for highly accurate prediction of UT1 and Pole motion (PM). In this study, a detailed comparison was made of real-time UT1 predictions made in 2006-2011 and PM predictions made in 2009-2011 using the SNIIM method with simultaneous predictions computed at the International Earth Rotation and Reference Systems Service (IERS), USNO. The results show that the proposed method provides better accuracy at different prediction lengths.

  4. Methods in Use for Sensitivity Analysis, Uncertainty Evaluation, and Target Accuracy Assessment

    SciTech Connect

    G. Palmiotti; M. Salvatores; G. Aliberti

    2007-10-01

Sensitivity coefficients can be used for different objectives such as uncertainty estimates, design optimization, determination of target accuracy requirements, adjustment of input parameters, and evaluations of the representativity of an experiment with respect to a reference design configuration. In this paper the theory, based on the adjoint approach, that is implemented in the ERANOS fast reactor code system is presented, along with some unique tools and features related to specific types of problems, as is the case for nuclide transmutation, reactivity loss during the cycle, decay heat, the neutron source associated with fuel fabrication, and experiment representativity.

  5. Assessment of the accuracy of infrared and electromagnetic navigation using an industrial robot: Which factors are influencing the accuracy of navigation?

    PubMed

    Liodakis, Emmanouil; Chu, Kongfai; Westphal, Ralf; Krettek, Christian; Citak, Musa; Gosling, Thomas; Kenawey, Mohamed

    2011-10-01

Our objectives were to detect factors that influence the accuracy of surgical navigation (magnitude of deformity, plane of deformity, position of the navigation bases) and to compare the accuracy of infrared with electromagnetic navigation. Human cadaveric femora were used. A robot connected to a computer moved one of the bony fragments in a desired direction. The bases of the infrared navigation system (BrainLab) and the receivers of the electromagnetic device (Fastrak-Polhemus) were attached to the proximal and distal parts of the bone. For the first part of the study, deformities were classified into eight groups (e.g., 0° to 5°). For the second part, the bases were initially placed near the osteotomy and then far away. The mean absolute differences between both navigation system measurements and the robotic angles were significantly affected by the magnitude of angulation, with better accuracy for smaller angulations (p < 0.001). The accuracy of infrared navigation was significantly better in the frontal and sagittal planes. Changing the position of the navigation bases near and far away from the deformity apex had no significant effect on the accuracy of infrared navigation; however, it influenced the accuracy of electromagnetic navigation in the frontal plane (p < 0.001). In conclusion, the use of infrared navigation systems for correction of small-angulation deformities in the frontal or sagittal plane provides the most accurate results, irrespective of the positioning of the navigation bases.

  6. Phase segmentation of X-ray computer tomography rock images using machine learning techniques: an accuracy and performance study

    NASA Astrophysics Data System (ADS)

    Chauhan, Swarup; Rühaak, Wolfram; Anbergen, Hauke; Kabdenov, Alen; Freise, Marcus; Wille, Thorsten; Sass, Ingo

    2016-07-01

Performance and accuracy of machine learning techniques to segment rock grains, matrix and pore voxels from a 3-D volume of X-ray tomographic (XCT) grayscale rock images was evaluated. The segmentation and classification capability of unsupervised (k-means, fuzzy c-means, self-organized maps), supervised (artificial neural networks, least-squares support vector machines) and ensemble classifiers (bagging and boosting) were tested using XCT images of andesite volcanic rock, Berea sandstone, Rotliegend sandstone and a synthetic sample. The averaged porosity obtained for andesite (15.8 ± 2.5 %), Berea sandstone (16.3 ± 2.6 %), Rotliegend sandstone (13.4 ± 7.4 %) and the synthetic sample (48.3 ± 13.3 %) is in very good agreement with the respective laboratory measurement data and varies by a factor of 0.2. The k-means algorithm is the fastest of all machine learning algorithms, whereas a least-squares support vector machine is the most computationally expensive. The metrics entropy, purity, mean square root error, receiver operating characteristic curve and 10 K-fold cross-validation were used to determine the accuracy of unsupervised, supervised and ensemble classifier techniques. In general, the accuracy was found to be largely affected by the feature vector selection scheme. As it is always a trade-off between performance and accuracy, it is difficult to isolate one particular machine learning algorithm which is best suited for the complex phase segmentation problem. Therefore, our investigation provides parameters that can help in selecting the appropriate machine learning techniques for phase segmentation.
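The fastest of the algorithms compared above, k-means, reduces to a short loop when applied to scalar grayscale voxel values; a minimal sketch with hypothetical intensities (three modes standing in for pores, matrix and grains):

```python
def kmeans_1d(values, centroids, iterations=20):
    """Plain k-means on scalar grayscale values: assign each value to the
    nearest centroid, then recompute each centroid as its cluster mean."""
    centroids = list(centroids)
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for v in values:
            idx = min(range(len(centroids)), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        # Keep an empty cluster's centroid unchanged
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    labels = [min(range(len(centroids)), key=lambda i: abs(v - centroids[i]))
              for v in values]
    return centroids, labels

# Hypothetical voxel intensities: pores ~30, matrix ~120, grains ~220.
voxels = [28, 31, 35, 118, 122, 125, 210, 225, 230]
cents, labels = kmeans_1d(voxels, [0, 128, 255])
porosity = labels.count(0) / len(labels)   # fraction of pore-labeled voxels
```

Real XCT segmentation operates on richer feature vectors per voxel, which is exactly why the abstract finds feature selection to dominate accuracy.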

  7. Guidance for performing preliminary assessments under CERCLA

    SciTech Connect

    1991-09-01

EPA headquarters and a national site assessment workgroup produced this guidance for Regional, State, and contractor staff who manage or perform preliminary assessments (PAs). EPA has focused this guidance on the types of sites and site conditions most commonly encountered. The PA approach described in this guidance is generally applicable to a wide variety of sites. However, because of the variability among sites, the amount of information available, and the level of investigative effort required, it is not possible to provide guidance that is equally applicable to all sites. PA investigators should recognize this and be aware that variation from this guidance may be necessary for some sites, particularly for PAs performed at Federal facilities, PAs conducted under EPA's Environmental Priorities Initiative (EPI), and PAs at sites that have previously been extensively investigated by EPA or others. The purpose of this guidance is to provide instructions for conducting a PA and reporting results. This guidance discusses the information required to evaluate a site and how to obtain it, how to score a site, and reporting requirements. This document also provides guidelines and instruction on PA evaluation, scoring, and the use of standard PA scoresheets. The overall goal of this guidance is to assist PA investigators in conducting high-quality assessments that result in correct site screening or further action recommendations on a nationally consistent basis.

  8. Assessing the accuracy of the International Classification of Diseases codes to identify abusive head trauma: a feasibility study

    PubMed Central

    Berger, Rachel P; Parks, Sharyn; Fromkin, Janet; Rubin, Pamela; Pecora, Peter J

    2016-01-01

    Objective To assess the accuracy of an International Classification of Diseases (ICD) code-based operational case definition for abusive head trauma (AHT). Methods Subjects were children <5 years of age evaluated for AHT by a hospital-based Child Protection Team (CPT) at a tertiary care paediatric hospital with a completely electronic medical record (EMR) system. Subjects were designated as non-AHT traumatic brain injury (TBI) or AHT based on whether the CPT determined that the injuries were due to AHT. The sensitivity and specificity of the ICD-based definition were calculated. Results There were 223 children evaluated for AHT: 117 AHT and 106 non-AHT TBI. The sensitivity and specificity of the ICD-based operational case definition were 92% (95% CI 85.8 to 96.2) and 96% (95% CI 92.3 to 99.7), respectively. All errors in sensitivity and three of the four specificity errors were due to coder error; one specificity error was a physician error. Conclusions In a paediatric tertiary care hospital with an EMR system, the accuracy of an ICD-based case definition for AHT was high. Additional studies are needed to assess the accuracy of this definition in all types of hospitals in which children with AHT are cared for. PMID:24167034
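The sensitivity and specificity reported above are simple proportions of the confusion counts, with 95% confidence intervals obtainable from the normal approximation; a sketch with hypothetical counts chosen only to match the study's group sizes (117 AHT, 106 non-AHT TBI):

```python
import math

def sens_spec_with_ci(tp, fn, tn, fp, z=1.96):
    """Sensitivity and specificity with normal-approximation 95% CIs.
    sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    def prop_ci(successes, total):
        p = successes / total
        half = z * math.sqrt(p * (1 - p) / total)
        return p, max(0.0, p - half), min(1.0, p + half)
    return prop_ci(tp, tp + fn), prop_ci(tn, tn + fp)

# Hypothetical counts (not the study's actual cell values):
(sens, sens_lo, sens_hi), (spec, spec_lo, spec_hi) = sens_spec_with_ci(
    tp=108, fn=9, tn=102, fp=4)
```

The study's CIs are somewhat wider than this crude approximation would suggest; exact (Clopper-Pearson) or Wilson intervals are usually preferred at proportions this close to 1.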

  9. An Automated Grass-Based Procedure to Assess the Geometrical Accuracy of the Openstreetmap Paris Road Network

    NASA Astrophysics Data System (ADS)

    Brovelli, M. A.; Minghini, M.; Molinari, M. E.

    2016-06-01

OpenStreetMap (OSM) is the largest spatial database of the world. One of the most frequently occurring geospatial elements within this database is the road network, whose quality is crucial for applications such as routing and navigation. Several methods have been proposed for the assessment of OSM road network quality; however, they are often tightly coupled to the characteristics of the authoritative dataset involved in the comparison. This makes it hard to replicate and extend these methods. This study relies on an automated procedure which was recently developed for comparing OSM with any road network dataset. It is based on three Python modules for the open source GRASS GIS software and provides measures of OSM road network spatial accuracy and completeness. Provided that the user is familiar with the authoritative dataset used, he can adjust the values of the parameters involved thanks to the flexibility of the procedure. The method is applied to assess the quality of the Paris OSM road network dataset through a comparison against the French official dataset provided by the French National Institute of Geographic and Forest Information (IGN). The results show that the Paris OSM road network has both a high completeness and spatial accuracy. It has a greater length than the IGN road network, and is found to be suitable for applications requiring spatial accuracies up to 5-6 m. Also, the results confirm the flexibility of the procedure for supporting users in carrying out their own comparisons between OSM and reference road datasets.

  10. Assessment of Classification Accuracies of SENTINEL-2 and LANDSAT-8 Data for Land Cover / Use Mapping

    NASA Astrophysics Data System (ADS)

    Hale Topaloğlu, Raziye; Sertel, Elif; Musaoğlu, Nebiye

    2016-06-01

This study aims to compare the classification accuracies of land cover/use maps created from Sentinel-2 and Landsat-8 data. The Istanbul metropolitan city of Turkey, with a population of around 14 million and varied landscape characteristics, was selected as the study area. Water, forest, agricultural areas, grasslands, transport network, urban, airport-industrial units and barren land-mine land cover/use classes adapted from the CORINE nomenclature were used as the main land cover/use classes to identify. To fulfil the aims of this research, recently acquired Sentinel-2 (08/02/2016) and Landsat-8 (22/02/2016) images of Istanbul were obtained, and image pre-processing steps like atmospheric and geometric correction were employed. Both Sentinel-2 and Landsat-8 images were resampled to 30 m pixel size after geometric correction, and similar spectral bands for both satellites were selected to create a similar base for these multi-sensor data. Maximum Likelihood (MLC) and Support Vector Machine (SVM) supervised classification methods were applied to both data sets to accurately identify eight different land cover/use classes. An error matrix was created using the same reference points for the Sentinel-2 and Landsat-8 classifications. After the classification, accuracy results were compared to find out the best approach for creating a current land cover/use map of the region. The results of the MLC and SVM classification methods were compared for both images.
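The error-matrix comparison described above boils down to two standard statistics, overall accuracy and Cohen's kappa; a minimal sketch with a hypothetical two-class matrix (the study uses eight classes, but the formulas are identical):

```python
def accuracy_metrics(matrix):
    """Overall accuracy and Cohen's kappa from a confusion (error) matrix;
    rows = reference classes, columns = mapped classes."""
    n = len(matrix)
    total = sum(sum(row) for row in matrix)
    overall = sum(matrix[i][i] for i in range(n)) / total
    # Chance agreement expected from the row/column marginals
    expected = sum(sum(matrix[i]) * sum(row[i] for row in matrix)
                   for i in range(n)) / total ** 2
    return overall, (overall - expected) / (1 - expected)

# Hypothetical error matrix built from shared reference points:
matrix = [[50, 5],
          [10, 35]]
overall, kappa = accuracy_metrics(matrix)
```

Using the same reference points for both sensors, as the study does, is what makes the resulting accuracies directly comparable.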

  11. Accuracy Assessment of Geostationary-Earth-Orbit with Simplified Perturbations Models

    NASA Astrophysics Data System (ADS)

    Ma, Lihua; Xu, Xiaojun; Pang, Feng

    2016-06-01

A two-line element set (TLE) is a data format encoding the orbital elements of an Earth-orbiting object for a given epoch. Using a suitable prediction formula, the motion state of the object can be obtained at any time. The TLE data representation is specific to the simplified perturbations models, so any algorithm using a TLE as a data source must implement one of these models to correctly compute the state at a specific time. Accurate adjustment of the antenna direction at the earth station is the key to satellite communications. With the TLE set, topocentric elevation and azimuth direction angles can be calculated. The accuracy of the perturbations models directly affects communication signal quality; therefore, quantifying the error variations of the satellite orbits is meaningful. In this paper, the authors investigate the accuracy of the Geostationary Earth Orbit (GEO) with simplified perturbations models. The coordinate residuals of the simplified perturbations models reported here can serve as a reference for engineers predicting satellite orbits with TLEs.
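Once a sub-satellite longitude has been obtained (in practice by propagating the TLE with a simplified perturbations model such as SGP4), the topocentric look angles follow from spherical geometry. A sketch for the idealized case of a GEO satellite sitting exactly on the equator; the station coordinates and satellite longitude below are hypothetical:

```python
import math

R_E = 6378.137      # Earth equatorial radius, km
R_GEO = 42164.0     # geostationary orbit radius, km

def geo_look_angles(lat_deg, lon_deg, sat_lon_deg):
    """Elevation and azimuth (degrees) from a ground station to an ideal
    equatorial GEO satellite at longitude sat_lon_deg."""
    lat = math.radians(lat_deg)
    dlon = math.radians(lon_deg - sat_lon_deg)   # station minus satellite
    cos_g = math.cos(lat) * math.cos(dlon)       # central angle to sub-point
    sin_g = math.sqrt(1.0 - cos_g * cos_g)
    el = math.degrees(math.atan2(cos_g - R_E / R_GEO, sin_g))
    az = 180.0 + math.degrees(math.atan2(math.tan(dlon), math.sin(lat)))
    return el, az % 360.0

# Hypothetical station at 40°N, 116°E pointing at a satellite at 110.5°E:
el, az = geo_look_angles(40.0, 116.0, 110.5)
```

A real TLE-based pointing chain would replace the ideal equatorial sub-point with the propagated position, which is exactly where the model residuals studied in the paper enter.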

  12. Methodology issues concerning the accuracy of kinematic data collection and analysis using the ariel performance analysis system

    NASA Technical Reports Server (NTRS)

    Wilmington, R. P.; Klute, Glenn K. (Editor); Carroll, Amy E. (Editor); Stuart, Mark A. (Editor); Poliner, Jeff (Editor); Rajulu, Sudhakar (Editor); Stanush, Julie (Editor)

    1992-01-01

    Kinematics, the study of motion exclusive of the influences of mass and force, is one of the primary methods used for the analysis of human biomechanical systems as well as other types of mechanical systems. The Anthropometry and Biomechanics Laboratory (ABL) in the Crew Interface Analysis section of the Man-Systems Division performs both human body kinematics as well as mechanical system kinematics using the Ariel Performance Analysis System (APAS). The APAS supports both analysis of analog signals (e.g. force plate data collection) as well as digitization and analysis of video data. The current evaluations address several methodology issues concerning the accuracy of the kinematic data collection and analysis used in the ABL. This document describes a series of evaluations performed to gain quantitative data pertaining to position and constant angular velocity movements under several operating conditions. Two-dimensional as well as three-dimensional data collection and analyses were completed in a controlled laboratory environment using typical hardware setups. In addition, an evaluation was performed to evaluate the accuracy impact due to a single axis camera offset. Segment length and positional data exhibited errors within 3 percent when using three-dimensional analysis and yielded errors within 8 percent through two-dimensional analysis (Direct Linear Software). Peak angular velocities displayed errors within 6 percent through three-dimensional analyses and exhibited errors of 12 percent when using two-dimensional analysis (Direct Linear Software). The specific results from this series of evaluations and their impacts on the methodology issues of kinematic data collection and analyses are presented in detail. The accuracy levels observed in these evaluations are also presented.

  13. Measurement accuracy and Cerenkov removal for high performance, high spatial resolution scintillation dosimetry

    SciTech Connect

    Archambault, Louis; Beddar, A. Sam; Gingras, Luc

    2006-01-15

With highly conformal radiation therapy techniques such as intensity-modulated radiation therapy, radiosurgery, and tomotherapy becoming more common in clinical practice, the use of these narrow beams requires a higher level of precision in quality assurance and dosimetry. Plastic scintillators with their water equivalence, energy independence, and dose rate linearity have been shown to possess excellent qualities that suit the most complex and demanding radiation therapy treatment plans. The primary disadvantage of plastic scintillators is the presence of Cerenkov radiation generated in the light guide, which results in an undesired stem effect. Several techniques have been proposed to minimize this effect. In this study, we compared three such techniques--background subtraction, simple filtering, and chromatic removal--in terms of reproducibility and dose accuracy as gauges of their ability to remove the Cerenkov stem effect from the dose signal. The dosimeter used in this study comprised a 6-mm³ plastic scintillating fiber probe, an optical fiber, and a color charge-coupled device camera. The whole system was shown to be linear and the total light collected by the camera was reproducible to within 0.31% for a 5-s integration time. Background subtraction and chromatic removal were both found to be suitable for precise dose evaluation, with average absolute dose discrepancies of 0.52% and 0.67%, respectively, from ion chamber values. Background subtraction required two optical fibers, but chromatic removal used only one, thereby preventing possible measurement artifacts when a strong dose gradient was perpendicular to the optical fiber. Our findings showed that a plastic scintillation dosimeter could be made free of the effect of Cerenkov radiation.
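Chromatic removal works because each color channel measures a different linear combination of scintillation and Cerenkov light, so two channels suffice to solve for the scintillation component. A sketch with hypothetical calibration coefficients and measurements (the actual calibration comes from dedicated reference irradiations):

```python
def solve_2x2(a, b, c, d, m1, m2):
    """Solve [[a, b], [c, d]] @ [s, cer] = [m1, m2] by Cramer's rule."""
    det = a * d - b * c
    return (m1 * d - b * m2) / det, (a * m2 - m1 * c) / det

# Hypothetical calibration: each channel's relative sensitivity to
# scintillation light and to Cerenkov light.
blue_s, blue_c = 1.00, 0.80
green_s, green_c = 0.45, 1.00

# A measurement contaminated by Cerenkov light generated in the fiber:
blue, green = 1.40, 0.95
s, cer = solve_2x2(blue_s, blue_c, green_s, green_c, blue, green)
```

This is why a single fiber suffices for chromatic removal, whereas background subtraction needs a second, scintillator-free fiber to sample the Cerenkov signal directly.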

  14. Hidden Markov model and nuisance attribute projection based bearing performance degradation assessment

    NASA Astrophysics Data System (ADS)

    Jiang, Huiming; Chen, Jin; Dong, Guangming

    2016-05-01

    Hidden Markov model (HMM) has been widely applied in bearing performance degradation assessment. As a machine learning-based model, its accuracy, subsequently, is dependent on the sensitivity of the features used to estimate the degradation performance of bearings. It's a big challenge to extract effective features which are not influenced by other qualities or attributes uncorrelated with the bearing degradation condition. In this paper, a bearing performance degradation assessment method based on HMM and nuisance attribute projection (NAP) is proposed. NAP can filter out the effect of nuisance attributes in feature space through projection. The new feature space projected by NAP is more sensitive to bearing health changes and barely influenced by other interferences occurring in operation condition. To verify the effectiveness of the proposed method, two different experimental databases are utilized. The results show that the combination of HMM and NAP can effectively improve the accuracy and robustness of the bearing performance degradation assessment system.
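At its core, NAP removes the component of each feature vector that lies along directions associated with nuisance attributes, leaving a projected space more sensitive to the quantity of interest. A minimal single-direction sketch with hypothetical 2-D features (a real NAP implementation estimates the nuisance subspace from labeled data):

```python
import math

def nap_project(features, nuisance):
    """Nuisance attribute projection onto the complement of one nuisance
    direction: x' = x - (x . u) u, with u the unit-normalized direction."""
    norm = math.sqrt(sum(c * c for c in nuisance))
    u = [c / norm for c in nuisance]
    projected = []
    for x in features:
        dot = sum(a * b for a, b in zip(x, u))
        projected.append([a - dot * b for a, b in zip(x, u)])
    return projected

# Hypothetical features where the second axis carries a nuisance attribute
# (e.g. load-condition variation unrelated to bearing health):
feats = [[1.0, 3.0], [2.0, -1.5]]
clean = nap_project(feats, [0.0, 1.0])
```

The projected features would then be the observations fed to the HMM for degradation assessment.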

  15. Using Covariance Analysis to Assess Pointing Performance

    NASA Technical Reports Server (NTRS)

    Bayard, David; Kang, Bryan

    2009-01-01

    A Pointing Covariance Analysis Tool (PCAT) has been developed for evaluating the expected performance of the pointing control system for NASA's Space Interferometry Mission (SIM). The SIM pointing control system is very complex, consisting of multiple feedback and feedforward loops, and operating with multiple latencies and data rates. The SIM pointing problem is particularly challenging due to the effects of thermomechanical drifts in concert with the long camera exposures needed to image dim stars. Other pointing error sources include sensor noises, mechanical vibrations, and errors in the feedforward signals. PCAT models the effects of finite camera exposures and all other error sources using linear system elements. This allows the pointing analysis to be performed using linear covariance analysis. PCAT propagates the error covariance using a Lyapunov equation associated with time-varying discrete and continuous-time system matrices. Unlike Monte Carlo analysis, which could involve thousands of computational runs for a single assessment, the PCAT analysis performs the same assessment in a single run. This capability facilitates the analysis of parametric studies, design trades, and "what-if" scenarios for quickly evaluating and optimizing the control system architecture and design.
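    The covariance-propagation idea can be illustrated with the discrete-time Lyapunov recursion. The sketch below uses a hypothetical two-state error model, not SIM's actual dynamics:

```python
import numpy as np

# Linear covariance analysis sketch: instead of thousands of Monte Carlo
# runs, propagate the error covariance directly through the discrete-time
# Lyapunov recursion P[k+1] = A P[k] A^T + Q.
def propagate_covariance(A, Q, P0, steps):
    P = P0.copy()
    for _ in range(steps):
        P = A @ P @ A.T + Q
    return P

# hypothetical 2-state pointing-error model (drift + rate), stable dynamics
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
Q = np.diag([1e-4, 1e-5])        # process noise, e.g. vibration inputs
P0 = np.zeros((2, 2))
P = propagate_covariance(A, Q, P0, steps=500)
print(np.sqrt(np.diag(P)))       # 1-sigma pointing errors at steady state
```

    Because the dynamics are stable, the recursion converges to the fixed point of the discrete Lyapunov equation, giving the steady-state error budget in one deterministic pass.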

  16. Technology integration performance assessment using lean principles in health care.

    PubMed

    Rico, Florentino; Yalcin, Ali; Eikman, Edward A

    2015-01-01

    This study assesses the impact of an automated infusion system (AIS) integration at a positron emission tomography (PET) center based on "lean thinking" principles. The authors propose a systematic measurement system that evaluates improvement in terms of the "8 wastes." This adaptation to the health care context consisted of performance measurement before and after integration of AIS in terms of time, utilization of resources, amount of materials wasted/saved, system variability, distances traveled, and worker strain. The authors' observations indicate that AIS stands to be very effective in a busy PET department, such as the one in Moffitt Cancer Center, owing to its accuracy, pace, and reliability, especially after the necessary adjustments are made to reduce or eliminate the source of errors. This integration must be accompanied by a process reengineering exercise to realize the full potential of AIS in reducing waste and improving patient care and worker satisfaction. PMID:24878516

  17. Pulsed Lidar Performance/Technical Maturity Assessment

    NASA Technical Reports Server (NTRS)

    Gimmestad, Gary G.; West, Leanne L.; Wood, Jack W.; Frehlich, Rod

    2004-01-01

    This report describes the results of investigations performed by the Georgia Tech Research Institute (GTRI) and the National Center for Atmospheric Research (NCAR) under a task entitled 'Pulsed Lidar Performance/Technical Maturity Assessment' funded by the Crew Systems Branch of the Airborne Systems Competency at the NASA Langley Research Center. The investigations included two tasks, 1.1(a) and 1.1(b). The tasks discussed in this report are in support of the NASA Virtual Airspace Modeling and Simulation (VAMS) program and are designed to evaluate a pulsed lidar that will be required for active wake vortex avoidance solutions. The Coherent Technologies, Inc. (CTI) WindTracer LIDAR is an eye-safe, 2-micron, coherent, pulsed Doppler lidar with wake tracking capability. The actual performance of the WindTracer system was to be quantified. In addition, the sensor performance has been assessed and modeled, and the models have been included in simulation efforts. The WindTracer LIDAR was purchased by the Federal Aviation Administration (FAA) for use in near-term field data collection efforts as part of a joint NASA/FAA wake vortex research program. In the joint research program, a minimum common wake and weather data collection platform will be defined. NASA Langley will use the field data to support wake model development and operational concept investigation in support of the VAMS project, where the ultimate goal is to improve airport capacity and safety. Task 1.1(a), performed by NCAR in Boulder, Colorado, analyzed the lidar system to determine its performance and capabilities based on results from simulated lidar data with analytic wake vortex models provided by NASA, which were then compared to the vendor's claims for the operational specifications of the lidar. Task 1.1(a) is described in Section 3, including the vortex model, lidar parameters and simulations, and results for both detection and tracking of wake vortices generated by Boeing 737s and 747s. Task 1

  18. Assessing the accuracy and repeatability of automated photogrammetrically generated digital surface models from unmanned aerial system imagery

    NASA Astrophysics Data System (ADS)

    Chavis, Christopher

    Using commercial digital cameras in conjunction with Unmanned Aerial Systems (UAS) to generate 3-D Digital Surface Models (DSMs) and orthomosaics is emerging as a cost-effective alternative to Light Detection and Ranging (LiDAR). Powerful software applications such as Pix4D and APS can automate the generation of DSM and orthomosaic products from a handful of inputs. However, the accuracy of these models is relatively untested. The objectives of this study were to generate multiple DSM and orthomosaic pairs of the same area using Pix4D and APS from flights of imagery collected with a lightweight UAS. The accuracy of each individual DSM was assessed in addition to the consistency of the method to model one location over a period of time. Finally, this study determined if the DSMs automatically generated using lightweight UAS and commercial digital cameras could be used for detecting changes in elevation and at what scale. Accuracy was determined by comparing DSMs to a series of reference points collected with survey grade GPS. Other GPS points were also used as control points to georeference the products within Pix4D and APS. The effectiveness of the products for change detection was assessed through image differencing and observance of artificially induced, known elevation changes. The vertical accuracy with the optimal data and model is ≈ 25 cm and the highest consistency over repeat flights is a standard deviation of ≈ 5 cm. Elevation change detection based on such UAS imagery and DSM models should be viable for detecting infrastructure change in urban or suburban environments with little dense canopy vegetation.
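    The accuracy statistics quoted in studies of this kind (systematic bias, repeatability, and RMSE against survey-grade GPS checkpoints) can be sketched as follows; the checkpoint values below are illustrative only, not the study's data:

```python
import numpy as np

# DSM vertical-accuracy sketch: compare modeled elevations against
# survey-grade GPS checkpoints and report bias, standard deviation,
# and RMSE, the standard vertical-accuracy statistics.
def vertical_accuracy(dsm_elev, gps_elev):
    err = np.asarray(dsm_elev) - np.asarray(gps_elev)
    return {
        "mean_error_m": err.mean(),           # systematic bias
        "std_m": err.std(ddof=1),             # spread (repeatability)
        "rmse_m": np.sqrt((err ** 2).mean()), # overall vertical accuracy
    }

# hypothetical checkpoint comparison (elevations in meters)
dsm = [101.10, 99.85, 100.30, 102.05, 98.70]
gps = [100.90, 99.90, 100.10, 101.80, 98.60]
stats = vertical_accuracy(dsm, gps)
print({k: round(v, 3) for k, v in stats.items()})
```

    Separating the mean error from the spread matters for change detection: a constant bias cancels when differencing two DSMs, while the random component sets the smallest elevation change that can be reliably detected.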

  19. Assessment of accuracy of adopted centre of mass corrections for the Etalon geodetic satellites

    NASA Astrophysics Data System (ADS)

    Appleby, Graham; Dunn, Peter; Otsubo, Toshimichi; Rodriguez, Jose

    2016-04-01

    Accurate centre-of-mass corrections are key parameters in the analysis of satellite laser ranging observations. In order to meet current accuracy requirements, the vector from the reflection point of a laser retroreflector array to the centre of mass of the orbiting spacecraft must be known with mm-level accuracy. In general, the centre-of-mass correction will be dependent on the characteristics of the target (geometry, construction materials, type of retroreflectors), the hardware employed by the tracking station (laser system, detector type), the intensity of the returned laser pulses, and the post-processing strategy employed to reduce the observations [1]. For the geodetic targets used by the ILRS to produce the SLR contribution to the ITRF, the LAGEOS and Etalon satellite pairs, there are centre-of-mass correction tables available for each tracking station [2]. These values are based on theoretical considerations, empirical determination of the optical response functions of each satellite, and knowledge of the tracking technology and return intensity employed [1]. Here we present results that put into question the accuracy of some of the current values for the centre-of-mass corrections of the Etalon satellites. We have computed weekly reference frame solutions using LAGEOS and Etalon observations for the period 1996-2014, estimating range bias parameters for each satellite type along with station coordinates. Analysis of the range bias time series reveals an unexplained, cm-level positive bias for the Etalon satellites in the case of most stations operating at high energy return levels. The time series of tracking stations that have undergone a transition from different modes of operation provide the evidence pointing to an inadequate centre-of-mass modelling. [1] Otsubo, T., and G.M. Appleby, System-dependent centre-of-mass correction for spherical geodetic satellites, J Geophys. Res., 108(B4), 2201, 2003 [2] Appleby, G.M., and T. Otsubo, Centre of Mass

  20. Accuracy assessment of photogrammetric digital elevation models generated for the Schultz Fire burn area

    NASA Astrophysics Data System (ADS)

    Muise, Danna K.

    This paper evaluates the accuracy of two digital photogrammetric software programs (ERDAS Imagine LPS and PCI Geomatica OrthoEngine) with respect to high-resolution terrain modeling in a complex topographic setting affected by fire and flooding. The site investigated is the 2010 Schultz Fire burn area, situated on the eastern edge of the San Francisco Peaks approximately 10 km northeast of Flagstaff, Arizona. Here, the fire coupled with monsoon rains typical of northern Arizona drastically altered the terrain of the steep mountainous slopes and residential areas below the burn area. To quantify these changes, high resolution (1 m and 3 m) digital elevation models (DEMs) were generated of the burn area using color stereoscopic aerial photographs taken at a scale of approximately 1:12000. Using a combination of pre-marked and post-marked ground control points (GCPs), I first used ERDAS Imagine LPS to generate a 3 m DEM covering 8365 ha of the affected area. This data was then compared to a reference DEM (USGS 10 m) to evaluate the accuracy of the resultant DEM. Findings were then divided into blunders (errors) and bias (slight differences) and further analyzed to determine if different factors (elevation, slope, aspect and burn severity) affected the accuracy of the DEM. Results indicated that both blunders and bias increased with an increase in slope, elevation and burn severity. It was also found that southern facing slopes contained the highest amount of bias while northern facing slopes contained the highest proportion of blunders. Further investigations compared a 1 m DEM generated using ERDAS Imagine LPS with a 1 m DEM generated using PCI Geomatica OrthoEngine for a specific region of the burn area. This area was limited to the overlap of two images due to OrthoEngine requiring at least three GCPs to be located in the overlap of the imagery. Results indicated that although LPS produced a less accurate DEM, it was much more flexible than OrthoEngine. 

  1. Effects of Familiarity with a Melody Prior to Instruction on Children's Piano Performance Accuracy

    ERIC Educational Resources Information Center

    Frewen, Katherine Goins

    2010-01-01

    The main purpose of this study was to examine the effects of familiarity with the sound of a melody on children's performance of the melody. Children in kindergarten through fourth grade (N = 97) with no previous formal instrumental instruction were taught to play a four-measure melody on a keyboard during an individual instruction session. Before…

  2. Monitoring Rater Performance over Time: A Framework for Detecting Differential Accuracy and Differential Scale Category Use

    ERIC Educational Resources Information Center

    Myford, Carol M.; Wolfe, Edward W.

    2009-01-01

    In this study, we describe a framework for monitoring rater performance over time. We present several statistical indices to identify raters whose standards drift and explain how to use those indices operationally. To illustrate the use of the framework, we analyzed rating data from the 2002 Advanced Placement English Literature and Composition…

  3. Measurement issues in assessing employee performance: A generalizability theory approach

    SciTech Connect

    Stephenson, B.O.

    1996-08-01

    Increasingly, organizations are assessing employee performance through the use of rating instruments employed in the context of varied data collection strategies. For example, the focus may be on obtaining multiple perspectives regarding employee performance (360° evaluation). From the standpoint of evaluating managers, upward assessments and "peer to peer" evaluations are perhaps two of the more common examples of such a multiple-perspective approach. Unfortunately, it is probably fair to say that the increased interest in and use of such data collection strategies has not been accompanied by a corresponding interest in addressing both validity and reliability concerns that have traditionally been associated with other forms of employee assessment (e.g., testing, assessment centers, structured interviews). As a consequence, many organizations may be basing decisions upon information collected under less than ideal measurement conditions. To the extent that such conditions produce unreliable measurements, the process may be both dysfunctional to the organization and/or unfair to the individual(s) being evaluated. Conversely, the establishment of reliable and valid measurement processes may in itself support the utilization of results in pursuit of organizational goals and enhance the credibility of the measurement process (see McEvoy (1990), who found the acceptance of subordinate ratings to be related to perceived accuracy and fairness of the measurement process). The present paper discusses a recent "peer to peer" evaluation conducted in our organization. The intent is to focus on the design of the study and present a Generalizability Theory (GT) approach to assessing the overall quality of the data collection strategy, along with suggestions for improving future designs. 9 refs., 3 tabs.
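    For a fully crossed persons × raters design with one rating per cell, the G-study variance components follow directly from the ANOVA mean squares. A minimal Python sketch with hypothetical ratings (not the paper's data):

```python
import numpy as np

# Generalizability-theory sketch for a persons x raters crossed design
# (one rating per cell): estimate variance components from the ANOVA
# mean squares, then compute the relative G (dependability) coefficient.
def g_study(scores):
    Y = np.asarray(scores, dtype=float)   # n_persons x n_raters ratings
    n_p, n_r = Y.shape
    grand = Y.mean()
    p_means = Y.mean(axis=1)
    r_means = Y.mean(axis=0)
    ms_p = n_r * ((p_means - grand) ** 2).sum() / (n_p - 1)
    ms_r = n_p * ((r_means - grand) ** 2).sum() / (n_r - 1)
    resid = Y - p_means[:, None] - r_means[None, :] + grand
    ms_pr = (resid ** 2).sum() / ((n_p - 1) * (n_r - 1))
    var_pr = ms_pr                                 # interaction + error
    var_p = max((ms_p - ms_pr) / n_r, 0.0)         # true person variance
    g_rel = var_p / (var_p + var_pr / n_r)         # relative G coefficient
    return var_p, var_pr, g_rel

# hypothetical ratings: 4 employees scored by 3 peers on a 1-7 scale
ratings = [[5, 6, 5],
           [3, 3, 4],
           [6, 7, 6],
           [4, 4, 5]]
var_p, var_pr, g = g_study(ratings)
print(round(g, 3))   # share of observed-score variance due to persons
```

    A high G coefficient indicates that the mean over the available raters orders employees reliably; the same variance components also support decision studies that ask how many raters would be needed to reach a target level of dependability.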

  4. A SUB-PIXEL ACCURACY ASSESSMENT FRAMEWORK FOR DETERMINING LANDSAT TM DERIVED IMPERVIOUS SURFACE ESTIMATES.

    EPA Science Inventory

    The amount of impervious surface in a watershed is a landscape indicator integrating a number of concurrent interactions that influence a watershed's hydrology. Remote sensing data and techniques are viable tools to assess anthropogenic impervious surfaces. However a fundamental ...

  5. Assessment of surgical wounds in the home health patient: definitions and accuracy with OASIS-C.

    PubMed

    Trexler, Rhonda A

    2011-10-01

    The number of surgical patients receiving home care continues to grow as hospitals discharge patients sooner. Home health clinicians must gain knowledge of the wound healing stages and surgical wound classification to collect accurate data in the Outcome and Assessment Information Set-C (OASIS-C). This article provides the information clinicians need to accurately assess surgical wounds and implement best practices for improving surgical wounds in the home health patient.

  6. Mathematical accuracy of Aztec land surveys assessed from records in the Codex Vergara.

    PubMed

    Jorge, María del Carmen; Williams, Barbara J; Garza-Hume, C E; Olvera, Arturo

    2011-09-13

    Land surveying in ancient states is documented not only for Eurasia but also for the Americas, amply attested by two Acolhua-Aztec pictorial manuscripts from the Valley of Mexico. The Codex Vergara and the Códice de Santa María Asunción consist of hundreds of drawings of agricultural fields that uniquely record surface areas as well as perimeter measurements. A previous study of the Codex Vergara examines how Acolhua-Aztecs determined field area by reconstructing their calculation procedures. Here we evaluate the accuracy of their area values using modern mathematics. The findings verify the overall mathematical validity of the codex records. Three-quarters of the areas are within 5% of the maximum possible value, and 85% are within 10%, which compares well with reported errors by Western surveyors that postdate Aztec-Acolhua work by several centuries. PMID:21876138
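    The "maximum possible value" against which the recorded areas are judged is the area of the cyclic quadrilateral with the recorded side lengths, given by Brahmagupta's formula. A short Python sketch with a hypothetical field record (not a codex entry):

```python
import math

# Maximum possible area of a quadrilateral with given side lengths:
# the cyclic quadrilateral, via Brahmagupta's formula
#   A_max = sqrt((s-a)(s-b)(s-c)(s-d)),  s = semiperimeter.
def max_quadrilateral_area(a, b, c, d):
    s = (a + b + c + d) / 2
    return math.sqrt((s - a) * (s - b) * (s - c) * (s - d))

# hypothetical field: side lengths in linear units, recorded area
sides = (20, 30, 20, 30)
recorded_area = 590.0
max_area = max_quadrilateral_area(*sides)   # 600 for a 20 x 30 rectangle
shortfall = (max_area - recorded_area) / max_area
print(f"recorded area is {shortfall:.1%} below the maximum possible")
```

    A recorded area within a few percent of this bound is consistent with a nearly cyclic (e.g., close to rectangular) field and an accurate survey, which is the comparison the study makes across hundreds of codex records.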

  7. Mathematical accuracy of Aztec land surveys assessed from records in the Codex Vergara

    PubMed Central

    Williams, Barbara J.; Garza-Hume, C. E.; Olvera, Arturo

    2011-01-01

    Land surveying in ancient states is documented not only for Eurasia but also for the Americas, amply attested by two Acolhua–Aztec pictorial manuscripts from the Valley of Mexico. The Codex Vergara and the Códice de Santa María Asunción consist of hundreds of drawings of agricultural fields that uniquely record surface areas as well as perimeter measurements. A previous study of the Codex Vergara examines how Acolhua–Aztecs determined field area by reconstructing their calculation procedures. Here we evaluate the accuracy of their area values using modern mathematics. The findings verify the overall mathematical validity of the codex records. Three-quarters of the areas are within 5% of the maximum possible value, and 85% are within 10%, which compares well with reported errors by Western surveyors that postdate Aztec–Acolhua work by several centuries. PMID:21876138

  8. Assessment of Required Accuracy of Digital Elevation Data for Hydrologic Modeling

    NASA Technical Reports Server (NTRS)

    Kenward, T.; Lettenmaier, D. P.

    1997-01-01

    The effect of the vertical accuracy of Digital Elevation Models (DEMs) on hydrologic models is evaluated by comparing three DEMs and the resulting hydrologic model predictions applied to a 7.2 sq km USDA-ARS watershed at Mahantango Creek, PA. The high-resolution (5 m) DEM was resampled to a 30 m resolution using a method that constrained the spatial structure of the elevations to be comparable with the USGS and SIR-C DEMs. The resulting 30 m DEM was used as the reference product for subsequent comparisons. Spatial fields of directly derived quantities, such as elevation differences, slope, and contributing area, were compared to the reference product, as were hydrologic model output fields derived using each of the three DEMs at the common 30 m spatial resolution.

  9. Exploring Proficiency-Based vs. Performance-Based Items with Elicited Imitation Assessment

    ERIC Educational Resources Information Center

    Cox, Troy L.; Bown, Jennifer; Burdis, Jacob

    2015-01-01

    This study investigates the effect of proficiency- vs. performance-based elicited imitation (EI) assessment. EI requires test-takers to repeat sentences in the target language. The accuracy at which test-takers are able to repeat sentences highly correlates with test-takers' language proficiency. However, in EI, the factors that render an item…

  10. Building Confidence in LLW Performance Assessments - 13386

    SciTech Connect

    Rustick, Joseph H.; Kosson, David S.; Krahn, Steven L.; Clarke, James H.

    2013-07-01

    The performance assessment process and incorporated input assumptions for four active and one planned DOE disposal sites were analyzed using a systems approach. The sites selected were the Savannah River E-Area Slit and Engineered Trenches, Hanford Integrated Disposal Facility, Idaho Radioactive Waste Management Complex, Oak Ridge Environmental Management Waste Management Facility, and Nevada National Security Site Area 5. Each disposal facility evaluation incorporated three overall system components: (1) site characteristics (climate, geology, geochemistry, etc.), (2) waste properties (waste form and package), and (3) engineered barrier designs (cover system, liner system). Site conceptual models were also analyzed to identify the main risk drivers and risk insights controlling performance for each disposal facility. (authors)

  11. The short- to medium-term predictive accuracy of static and dynamic risk assessment measures in a secure forensic hospital.

    PubMed

    Chu, Chi Meng; Thomas, Stuart D M; Ogloff, James R P; Daffern, Michael

    2013-04-01

    Although violence risk assessment knowledge and practice has advanced over the past few decades, it remains practically difficult to decide which measures clinicians should use to assess and make decisions about the violence potential of individuals on an ongoing basis, particularly in the short to medium term. Within this context, this study sought to compare the predictive accuracy of dynamic risk assessment measures for violence with static risk assessment measures over the short term (up to 1 month) and medium term (up to 6 months) in a forensic psychiatric inpatient setting. Results showed that dynamic measures were generally more accurate than static measures for short- to medium-term predictions of inpatient aggression. These findings highlight the necessity of using risk assessment measures that are sensitive to important clinical risk state variables to improve the short- to medium-term prediction of aggression within the forensic inpatient setting. Such knowledge can assist with the development of more accurate and efficient risk assessment procedures, including the selection of appropriate risk assessment instruments to manage and prevent the violence of offenders with mental illnesses during inpatient treatment.
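    Predictive accuracy in studies of this kind is commonly summarized by the area under the ROC curve (AUC): the probability that a randomly chosen aggressive patient received a higher risk score than a randomly chosen non-aggressive one. A minimal sketch using the Mann-Whitney formulation, with illustrative scores (not the study's data):

```python
# AUC via the Mann-Whitney U statistic: count the fraction of
# (positive, negative) pairs ranked correctly, with ties worth 0.5.
def auc(scores_pos, scores_neg):
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# hypothetical risk scores for aggressive vs. non-aggressive patients
aggressive     = [7, 9, 6, 8]
non_aggressive = [3, 5, 6, 2, 4]
print(auc(aggressive, non_aggressive))   # 0.975
```

    Comparing the AUCs of dynamic versus static instruments over 1-month and 6-month windows is the kind of analysis that supports the study's conclusion that dynamic measures discriminate better in the short to medium term.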

  12. Performance advantages of dynamically tuned gyroscopes in high accuracy spacecraft pointing and stabilization applications

    NASA Technical Reports Server (NTRS)

    Irvine, R.; Van Alstine, R.

    1979-01-01

    The paper compares and describes the advantages of dry tuned gyros over floated gyros for space applications. Attention is given to describing the Teledyne SDG-5 gyro and the second-generation NASA Standard Dry Rotor Inertial Reference Unit (DRIRU II). Certain tests which were conducted to evaluate the SDG-5 and DRIRU II for specific mission requirements are outlined, and their results are compared with published test results on other gyro types. Performance advantages are highlighted.

  13. Performance assessment of compressive sensing imaging

    NASA Astrophysics Data System (ADS)

    Du Bosq, Todd W.; Haefner, David P.; Preece, Bradley L.

    2014-05-01

    Compressive sensing (CS) can potentially form an image of equivalent quality to a large format, megapixel array, using a smaller number of individual measurements. This has the potential to provide smaller, cheaper, and lower bandwidth imaging systems. To properly assess the value of such systems, it is necessary to fully characterize the image quality, including artifacts, sensitivity to noise, and CS limitations. Full resolution imagery of an eight tracked vehicle target set at range was used as an input for simulated single-pixel CS camera measurements. The CS algorithm then reconstructs images from the simulated single-pixel CS camera for various levels of compression and noise. For comparison, a traditional camera was also simulated setting the number of pixels equal to the number of CS measurements in each case. Human perception experiments were performed to determine the identification performance within the trade space. The performance of the nonlinear CS camera was modeled with the Night Vision Integrated Performance Model (NVIPM) by mapping the nonlinear degradations to an equivalent linear shift invariant model. Finally, the limitations of compressive sensing modeling will be discussed.
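    The single-pixel measurement-and-reconstruction principle described above can be sketched with random projections and orthogonal matching pursuit. This illustrates generic CS recovery under assumed parameters, not the specific reconstruction algorithm or NVIPM modeling used in the study:

```python
import numpy as np

# Single-pixel CS sketch: take M random projections of a K-sparse scene
# (M << N) and reconstruct it with orthogonal matching pursuit (OMP).
def omp(Phi, y, k):
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))    # best-matching column
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
N, M, K = 100, 50, 3                          # scene size, measurements, sparsity
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.normal(size=K)
Phi = rng.normal(size=(M, N)) / np.sqrt(M)    # random measurement masks
y = Phi @ x_true                              # M compressive measurements
x_hat = omp(Phi, y, K)
print(np.linalg.norm(x_hat - x_true))         # near-exact recovery (noiseless)
```

    Real single-pixel cameras add noise and use structured (e.g., binary DMD) masks rather than Gaussian ones, which is why the study characterizes artifacts and noise sensitivity rather than assuming ideal recovery.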

  14. Consideration of environmental change in performance assessments.

    PubMed

    Pinedo, P; Thorne, M; Egan, M; Calvez, M; Kautsky, U

    2005-01-01

    Depending on the particular circumstances in which a post-closure performance assessment of a radioactive waste repository is made, it may be appropriate to follow simple or more complex approaches in characterising the biosphere. Several different Example Reference Biospheres were explored in BIOMASS Theme 1 to address a range of issues that arise. Here, consideration is given to Example Reference Biospheres relevant to representing the implications of changes that may occur within the biosphere system during the period over which releases of radionuclides from a disposal facility might take place. Mechanisms of change considered include those extrinsic and intrinsic to the system of interest. An overall methodology for incorporating environmental change into assessments is proposed. This includes screening of primary mechanisms of change; identification of possible time sequences of change; development of a coherent description of the regional landscape response for each time sequence; integration of source term and geosphere-biosphere interface information; identification and description of one or more time series of assessment biospheres; and evaluation of the advantages and disadvantages of simulating the effects of sequences of biosphere systems and the transitions between them, or of defining a set of biosphere systems to be represented individually in a non-sequential analysis. The usefulness of the methodology is explored in two site-specific examples and one generic example. PMID:16198459

  15. Pig performance characteristics in corrosion assessment

    SciTech Connect

    Vieth, P.; Rust, S.W.; Johnson, E.; Cox, M.

    1996-09-01

    Alyeska Pipeline Service Company (APSC) operates the Trans Alaska Pipeline System (TAPS) for transporting crude oil 800 miles from Prudhoe Bay to Valdez. Approximately 420 miles of the pipeline is above ground and 380 miles is below ground. In-line inspection results have indicated external corrosion on portions of the below-ground pipe. APSC uses periodic in-line inspections to identify, monitor, and remediate the corrosion. Results of these surveys are used to determine the presence and magnitude of corrosion by sensing a signal (either MFL or UT) produced by the metal loss anomalies. An ideal tool would be able to detect all corrosion regardless of size, assess the actual corrosion with no measurement errors, and produce no false corrosion indications. Real in-line inspection tools exhibit varying capabilities to detect, measure, and assess corrosion on an operating pipeline. It is essential for the pipeline operator to know how reliable each tool is in order to respond in a manner which prevents a failure from excessive metal loss. Rigorous analysis of three of Alyeska's more recent in-line surveys has provided the essential performance measures to facilitate a satisfactory response plan. These performance measures were evaluated by comparing measurements of the actual corrosion (obtained from 314 excavations) to results provided by three pig runs selected for presentation in this paper.

  16. Performance assessment task team progress report

    SciTech Connect

    Wood, D.E.; Curl, R.U.; Armstrong, D.R.; Cook, J.R.; Dolenc, M.R.; Kocher, D.C.; Owens, K.W.; Regnier, E.P.; Roles, G.W.; Seitz, R.R.

    1994-05-01

    The U.S. Department of Energy (DOE) Headquarters EM-35 established a Performance Assessment Task Team (referred to as the Team) to integrate the activities of the sites that are preparing performance assessments (PAs) for disposal of new low-level waste, as required by Chapter III of DOE Order 5820.2A, "Low-Level Waste Management". The intent of the Team is to achieve a degree of consistency among these PAs as the analyses proceed at the disposal sites. The Team's purpose is to recommend policy and guidance to the DOE on issues that impact the PAs, including release scenarios and parameters, so that the approaches are as consistent as possible across the DOE complex. The Team has identified issues requiring attention and developed discussion papers for those issues. Some issues have been completed, and the recommendations are provided in this document. Other issues are still being discussed, and the status summaries are provided in this document. A major initiative was to establish a subteam to develop a set of test scenarios and parameters for benchmarking codes in use at the various sites. The activities of the Team are reported here through December 1993.

  17. Assessment of Geometrical Accuracy of Multimodal Images Used for Treatment Planning in Stereotactic Radiotherapy and Radiosurgery: CT, MRI and PET

    SciTech Connect

    Garcia-Garduno, O. A.; Larraga-Gutierrez, J. M.; Celis, M. A.; Suarez-Campos, J. J.; Rodriguez-Villafuerte, M.; Martinez-Davalos, A.

    2006-09-08

    An acrylic phantom was designed and constructed to assess the geometrical accuracy of CT, MRI and PET images for stereotactic radiotherapy (SRT) and radiosurgery (SRS) applications. The phantom was suited for each image modality with a specific tracer and compared with CT images to measure the radial deviation between the reference marks in the phantom. It was found that for MRI the maximum mean deviation is 1.9 ± 0.2 mm compared to 2.4 ± 0.3 mm reported for PET. These results will be used for margin outlining in SRS and SRT treatment planning.

  18. The Impact of Performance Level Misclassification on the Accuracy and Precision of Percent at Performance Level Measures

    ERIC Educational Resources Information Center

    Betebenner, Damian W.; Shang, Yi; Xiang, Yun; Zhao, Yan; Yue, Xiaohui

    2008-01-01

    No Child Left Behind (NCLB) performance mandates, embedded within state accountability systems, focus school AYP (adequate yearly progress) compliance squarely on the percentage of students at or above proficient. The singular importance of this quantity for decision-making purposes has initiated extensive research into percent proficient as a…

  19. Envisat Ocean Altimetry Performance Assessment and Cross-calibration

    PubMed Central

    Faugere, Yannice; Dorandeu, Joël; Lefevre, Fabien; Picot, Nicolas; Femenias, Pierre

    2006-01-01

    Nearly three years of Envisat altimetric observations over ocean are available in Geophysical Data Record (GDR) products. The quality assessment of these data is routinely performed at the CLS Space Oceanography Division within the framework of the CNES Segment Sol Altimétrie et Orbitographie (SSALTO) and ESA French Processing and Archiving Center (F-PAC) activities. This paper presents the main results in terms of Envisat data quality: verification of data availability and validity, monitoring of the most relevant altimeter (ocean1 retracking) and radiometer parameters, and assessment of the Envisat altimeter system performance. This includes a cross-calibration analysis of Envisat data with Jason-1, ERS-2 and T/P. Envisat data show good general quality. A good orbit quality and a low level of noise allow Envisat to reach the high level of accuracy of other precise missions such as T/P and Jason-1. Some issues raised in this paper, such as the gravity-induced orbit errors, will be solved in the next version of the GDR products. Others, such as the Envisat Mean Sea Level in the first year, still need further investigation.

  20. Assessment of the accuracy of plasma shape reconstruction by the Cauchy condition surface method in JT-60SA

    SciTech Connect

    Miyata, Y.; Suzuki, T.; Takechi, M.; Urano, H.; Ide, S.

    2015-07-15

    For the purpose of stable plasma equilibrium control and detailed analysis, it is essential to reconstruct an accurate plasma boundary on the poloidal cross section in tokamak devices. The Cauchy condition surface (CCS) method is a numerical approach for calculating the spatial distribution of the magnetic flux outside a hypothetical surface and reconstructing the plasma boundary from the magnetic measurements located outside the plasma. The accuracy of the plasma shape reconstruction has been assessed by comparing the CCS method and an equilibrium calculation in JT-60SA with a high elongation and triangularity of plasma shape. The CCS, on which both Dirichlet and Neumann conditions are unknown, is defined as a hypothetical surface located inside the real plasma region. The accuracy of the plasma shape reconstruction is sensitive to the CCS free parameters such as the number of unknown parameters and the shape in JT-60SA. It is found that the optimum number of unknown parameters and the size of the CCS that minimizes errors in the reconstructed plasma shape are in proportion to the plasma size. Furthermore, it is shown that the accuracy of the plasma shape reconstruction is greatly improved using the optimum number of unknown parameters and shape of the CCS, and the reachable reconstruction errors in plasma shape and locations of strike points are within the target ranges in JT-60SA.

  1. Assessing the accuracy of auralizations computed using a hybrid geometrical-acoustics and wave-acoustics method

    NASA Astrophysics Data System (ADS)

    Summers, Jason E.; Takahashi, Kengo; Shimizu, Yasushi; Yamakawa, Takashi

    2001-05-01

    When based on geometrical acoustics, computational models used for auralization of auditorium sound fields are physically inaccurate at low frequencies. To increase accuracy while keeping computation tractable, hybrid methods using computational wave acoustics at low frequencies have been proposed and implemented in small enclosures such as simplified models of car cabins [Granier et al., J. Audio Eng. Soc. 44, 835-849 (1996)]. The present work extends such an approach to an actual 2400-m3 auditorium using the boundary-element method for frequencies below 100 Hz. The effect of including wave-acoustics at low frequencies is assessed by comparing the predictions of the hybrid model with those of the geometrical-acoustics model and comparing both with measurements. Conventional room-acoustical metrics are used together with new methods based on two-dimensional distance measures applied to time-frequency representations of impulse responses. Despite in situ measurements of boundary impedance, uncertainties in input parameters limit the accuracy of the computed results at low frequencies. However, aural perception ultimately defines the required accuracy of computational models. An algorithmic method for making such evaluations is proposed based on correlating listening-test results with distance measures between time-frequency representations derived from auditory models of the ear-brain system. Preliminary results are presented.

  2. Do Students Know What They Know? Exploring the Accuracy of Students' Self-Assessments

    ERIC Educational Resources Information Center

    Lindsey, Beth A.; Nagel, Megan L.

    2015-01-01

    We have conducted an investigation into how well students in introductory science classes (both physics and chemistry) are able to predict which questions they will or will not be able to answer correctly on an upcoming assessment. An examination of the data at the level of students' overall scores reveals results consistent with the…

  3. Disease severity estimates - effects of rater accuracy and assessment methods for comparing treatments

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Assessment of disease is fundamental to the discipline of plant pathology, and estimates of severity are often made visually. However, it is established that visual estimates can be inaccurate and unreliable. In this study estimates of Septoria leaf blotch on leaves of winter wheat from non-treated ...

  4. Regulatory Complementarity and the Speed-Accuracy Balance in Group Performance

    PubMed Central

    Mauro, Romina; Pierro, Antonio; Mannetti, Lucia; Higgins, E. Tory; Kruglanski, Arie W.

    2013-01-01

    In this research, we varied the composition of 4-member groups. One third of the groups consisted exclusively of “locomotors,” individuals predominantly oriented toward action. Another third of the groups consisted exclusively of “assessors,” individuals predominantly oriented toward evaluation. The final third of the groups consisted of a mix of locomotors and assessors. We found that the groups containing only locomotors were faster than the groups containing only assessors, and the groups containing only assessors were more accurate than the groups containing only locomotors. The groups containing a mix of assessors and locomotors were as fast as the groups containing only locomotors and as accurate as the groups containing only assessors. These results echo findings at the individual level of analysis, and suggest that the testing and action components of operating systems independently contribute to performance both intra- and interpersonally. PMID:19470125

  5. Image intensifier distortion correction for fluoroscopic RSA: the need for independent accuracy assessment.

    PubMed

    Kedgley, Angela E; Fox, Anne-Marie V; Jenkyn, Thomas R

    2012-01-01

    Fluoroscopic images suffer from multiple modes of image distortion. Therefore, the purpose of this study was to compare the effects of correction using a range of two-dimensional polynomials and a global approach. The primary measure of interest was the average error in the distances between four beads of an accuracy phantom, as measured using RSA. Secondary measures of interest were the root mean squared errors of the fit of the chosen polynomial to the grid of beads used for correction, and the errors in the corrected distances between the points of the grid in a second position. Based upon the two-dimensional measures, a polynomial of order three in the axis of correction and two in the perpendicular axis was preferred. However, based upon the RSA reconstruction, a polynomial of order three in the axis of correction and one in the perpendicular axis was preferred. The use of a calibration frame for these three-dimensional applications most likely tempers the effects of distortion. This study suggests that distortion correction should be validated for each of its applications with an independent "gold standard" phantom.
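
The polynomial correction compared in this abstract can be sketched as an ordinary least-squares fit of a two-dimensional polynomial to a bead grid. The helper below is a minimal illustration with synthetic data; the function names, the toy cubic distortion, and the grid are all assumptions, not the study's actual setup:

```python
import numpy as np

def fit_2d_polynomial(x, y, target, deg_x, deg_y):
    # Least-squares fit of target ~ sum_ij c_ij * x**i * y**j
    # (hypothetical helper, not the study's code)
    terms = [x**i * y**j for i in range(deg_x + 1) for j in range(deg_y + 1)]
    coeffs, *_ = np.linalg.lstsq(np.column_stack(terms), target, rcond=None)
    return coeffs

def apply_polynomial(x, y, coeffs, deg_x, deg_y):
    terms = [x**i * y**j for i in range(deg_x + 1) for j in range(deg_y + 1)]
    return np.column_stack(terms) @ coeffs

# Synthetic bead grid with a toy cubic distortion along the x axis
gx, gy = np.meshgrid(np.linspace(-1, 1, 10), np.linspace(-1, 1, 10))
x_true, y_true = gx.ravel(), gy.ravel()
x_distorted = x_true + 0.05 * x_true**3

# Order 3 in the axis of correction, order 1 in the perpendicular axis
c = fit_2d_polynomial(x_distorted, y_true, x_true, 3, 1)
x_corrected = apply_polynomial(x_distorted, y_true, c, 3, 1)

rmse_raw = np.sqrt(np.mean((x_distorted - x_true) ** 2))
rmse = np.sqrt(np.mean((x_corrected - x_true) ** 2))
```

Fitting the correction and evaluating the residual on the same grid mirrors the study's secondary measure (root mean squared error of the polynomial fit to the correction grid).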

  6. Accuracy assessment of building point clouds automatically generated from iPhone images

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R.

    2014-06-01

    Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as input. We register such an automatically generated point cloud to a TLS point cloud of the same object to discuss the accuracy, advantages and limitations of iPhone-generated point clouds. For the chosen showcase, we classified 1.23% of the iPhone point cloud points as outliers and calculated the mean of the point-to-point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. The mean (μ) and standard deviation (σ) of the roughness histograms are (μ1 = 0.44 m, σ1 = 0.071 m) and (μ2 = 0.025 m, σ2 = 0.037 m) for the iPhone and TLS point clouds, respectively. Our experimental results indicate possible usage of the proposed automatic 3D model generation framework for 3D urban map updating, fusion, detail enhancement, and quick, real-time change detection. However, further insight is needed into the circumstances required to guarantee successful point cloud generation from smartphone images.
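
The point-to-point comparison against a TLS reference can be sketched with a brute-force nearest-neighbor distance computation. Everything below is synthetic: the clouds, the noise level, and the simple threshold-based outlier rule are assumptions that only illustrate the kind of metric reported:

```python
import numpy as np

def cloud_to_cloud_distances(source, reference):
    # For each source point, distance to the nearest reference point.
    # Brute force O(N*M); a k-d tree would be used for real clouds.
    diff = source[:, None, :] - reference[None, :, :]
    return np.linalg.norm(diff, axis=2).min(axis=1)

rng = np.random.default_rng(0)
tls = rng.uniform(0.0, 10.0, size=(500, 3))               # stand-in TLS scan (m)
phone = tls[:200] + rng.normal(0.0, 0.1, size=(200, 3))   # noisy "iPhone" cloud

d = cloud_to_cloud_distances(phone, tls)
outliers = d > 3.0 * d.mean()        # one simple outlier rule (an assumption)
outlier_pct = 100.0 * outliers.mean()
mean_distance = d[~outliers].mean()
```

The study's reported figures (1.23% outliers, 0.11 m mean distance) correspond to `outlier_pct` and `mean_distance` here, computed against a registered reference cloud.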

  7. Assessment of the accuracy of density functional theory for first principles simulations of water

    NASA Astrophysics Data System (ADS)

    Grossman, J. C.; Schwegler, E.; Draeger, E.; Gygi, F.; Galli, G.

    2004-03-01

    We present a series of Car-Parrinello (CP) molecular dynamics simulations in order to better understand the accuracy of density functional theory for the calculation of the properties of water [1]. Through 10 separate ab initio simulations, each for 20 ps of ``production'' time, a number of approximations are tested by varying the density functional employed, the fictitious electron mass, μ, in the CP Lagrangian, the system size, and the ionic mass, M (we considered both H_2O and D_2O). We present the impact of these approximations on properties such as the radial distribution function [g(r)], structure factor [S(k)], diffusion coefficient and dipole moment. Our results show that structural properties may artificially depend on μ, and that in the case of an accurate description of the electronic ground state, and in the absence of proton quantum effects, we obtained an oxygen-oxygen correlation function that is over-structured compared to experiment, and a diffusion coefficient approximately 10 times smaller than measured. ^1 J.C. Grossman et al., J. Chem. Phys. (in press, 2004).
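
The radial distribution function g(r) mentioned among the analyzed properties can be estimated from a configuration by histogramming pair distances and normalizing by ideal-gas shell counts. The sketch below assumes a cubic periodic box and uniform random positions (so g ≈ 1), purely as an illustration of the estimator, not of the paper's CP trajectories:

```python
import numpy as np

def radial_distribution(positions, box, r_max, n_bins=50):
    # Pair-distance histogram normalized by ideal-gas shell counts,
    # using the minimum-image convention for a cubic periodic box.
    n = len(positions)
    rho = n / box**3
    d = positions[:, None, :] - positions[None, :, :]
    d -= box * np.round(d / box)                       # minimum image
    r = np.sqrt((d**2).sum(axis=-1))[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(r, bins=n_bins, range=(0.0, r_max))
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    g = hist / (shell_vol * rho * n / 2.0)             # per-pair normalization
    return 0.5 * (edges[1:] + edges[:-1]), g

rng = np.random.default_rng(2)
ideal_gas = rng.uniform(0.0, 10.0, size=(200, 3))      # uncorrelated positions
r_mid, g = radial_distribution(ideal_gas, box=10.0, r_max=4.0)
```

For uncorrelated positions g(r) fluctuates around 1; for liquid water the oxygen-oxygen g(r) shows the peak structure the paper compares against experiment.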

  8. Assessing inter-sensor variability and sensible heat flux derivation accuracy for a large aperture scintillometer.

    PubMed

    Rambikur, Evan H; Chávez, José L

    2014-01-01

    The accuracy in determining sensible heat flux (H) of three Kipp and Zonen large aperture scintillometers (LAS) was evaluated with reference to an eddy covariance (EC) system over relatively flat and uniform grassland near Timpas (CO, USA). Other tests have revealed inherent variability between Kipp and Zonen LAS units and bias to overestimate H. Average H fluxes were compared between LAS units and between LAS and EC. Despite good correlation, inter-LAS biases in H were found between 6% and 13% in terms of the linear regression slope. Physical misalignment was observed to result in increased scatter and bias between H solutions of a well-aligned and poorly-aligned LAS unit. Comparison of LAS and EC H showed little bias for one LAS unit, while the other two units overestimated EC H by more than 10%. A detector alignment issue may have caused the inter-LAS variability, supported by the observation in this study of differing power requirements between LAS units. It is possible that the LAS physical misalignment may have caused edge-of-beam signal noise as well as vulnerability to signal noise from wind-induced vibrations, both having an impact on the solution of H. In addition, there were some uncertainties in the solutions of H from the LAS and EC instruments, including lack of energy balance closure with the EC unit. However, the results obtained do not show clear evidence of inherent bias for the Kipp and Zonen LAS to overestimate H as found in other studies.

  9. Accuracy of Cameriere's third molar maturity index in assessing legal adulthood on Serbian population.

    PubMed

    Zelic, Ksenija; Galic, Ivan; Nedeljkovic, Nenad; Jakovljevic, Aleksandar; Milosevic, Olga; Djuric, Marija; Cameriere, Roberto

    2016-02-01

    At the moment, a large number of asylum seekers from the Middle East are passing through Serbia. Most of them do not have identification documents. Also, the past wars in the Balkan region have left many unidentified victims and missing persons. From a legal point of view, it is crucial to determine whether a person is a minor or an adult (≥18 years of age). In recent years, methods based on the third molar development have been used for this purpose. The present article aims to verify the third molar maturity index (I3M) based on the correlation between the chronological age and normalized measures of the open apices and height of the third mandibular molar. The sample consisted of 598 panoramic radiographs (290 males and 299 females) from 13 to 24 years of age. The cut-off value of I3M=0.08 was used to discriminate adults and minors. The results demonstrated high sensitivity (0.96, 0.86) and specificity (0.94, 0.98) in males and females, respectively. The proportion of correctly classified individuals was 0.95 in males and 0.91 in females. In conclusion, the suggested value of I3M=0.08 can be used on the Serbian population with high accuracy. PMID:26773223
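
The reported sensitivity and specificity follow from applying the I3M < 0.08 decision rule and counting classification outcomes. A minimal sketch with invented ages and index values (not the study's sample); the helper name is hypothetical:

```python
import numpy as np

def adult_classification_metrics(i3m, age, cutoff=0.08, adult_age=18):
    # Sensitivity/specificity of the rule "I3M < cutoff implies adult".
    # In this index, a more mature third molar yields a smaller I3M.
    predicted_adult = i3m < cutoff
    actual_adult = age >= adult_age
    tp = np.sum(predicted_adult & actual_adult)
    tn = np.sum(~predicted_adult & ~actual_adult)
    fp = np.sum(predicted_adult & ~actual_adult)
    fn = np.sum(~predicted_adult & actual_adult)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(age)
    return sensitivity, specificity, accuracy

# Toy data, not the study's radiographs
age = np.array([15, 16, 17, 18, 19, 21, 23, 17, 20, 16])
i3m = np.array([0.9, 0.5, 0.2, 0.07, 0.05, 0.0, 0.0, 0.06, 0.04, 0.3])
sens, spec, acc = adult_classification_metrics(i3m, age)
```

The study's "proportion of correctly classified individuals" corresponds to `acc` here, computed separately for each sex.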

  11. Assessing diagnostic accuracy of Haemoglobin Colour Scale in real-life setting.

    PubMed

    Shah, Pankaj P; Desai, Shrey A; Modi, Dhiren K; Shah, Shobha P

    2014-03-01

    The study was undertaken to determine the diagnostic accuracy of the Haemoglobin Colour Scale (HCS) in the hands of village-based community health workers (CHWs) in a real-life community setting in India. Participants (501 women) were randomly selected from 8 villages belonging to a project area of SEWA-Rural, a voluntary organization located in India. After receiving a brief training, CHWs and a research assistant obtained haemoglobin readings using HCS and HemoCue (reference), respectively. Sensitivity, specificity, positive and negative predictive values, and likelihood ratios were calculated, and a Bland-Altman plot was constructed. Mean haemoglobin values using HCS and HemoCue were 11.02 g/dL (CI 10.9-11.2) and 11.07 g/dL (CI 10.9-11.2), respectively. The mean difference between haemoglobin readings was 0.95 g/dL. Sensitivity of HCS was 0.74 (CI 0.65-0.81) and 0.84 (CI 0.8-0.87), whereas specificity was 0.84 (CI 0.51-0.98) and 0.99 (CI 0.97-0.99), using haemoglobin cutoff limits of 10 g/dL and 7 g/dL, respectively. CHWs can accurately diagnose severe and moderately severe anaemia by using HCS in real-life field conditions after a brief training.

  12. Assessment of Completeness and Positional Accuracy of Linear Features in Volunteered Geographic Information (VGI)

    NASA Astrophysics Data System (ADS)

    Eshghi, M.; Alesheikh, A. A.

    2015-12-01

    Recent advances in spatial data collection technologies and online services have dramatically increased the contribution of ordinary people to producing, sharing, and using geographic information. The collection of spatial data by citizens, and its dissemination on the internet, has led to a huge source of spatial data termed Volunteered Geographic Information (VGI) by Mike Goodchild. Although VGI has produced previously unavailable data assets and enriched existing ones, its quality can be highly variable and open to challenge. This presents several problems to potential end users who are concerned about the validation and quality assurance of the collected data. Almost all existing research assesses the quality of VGI data either by (a) comparing the VGI data with accurate official data, or (b), in cases where there is no access to authoritative data, by looking for alternative ways to determine the quality of the VGI data. This paper attempts to develop a useful method toward this goal: the positional accuracy of linear features in the OpenStreetMap (OSM) data of Tehran, Iran has been analyzed.

  13. 43 CFR 3836.10 - Performing assessment work.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 43 Public Lands: Interior 2 2013-10-01 2013-10-01 false Performing assessment work. 3836.10... MANAGEMENT, DEPARTMENT OF THE INTERIOR MINERALS MANAGEMENT (3000) ANNUAL ASSESSMENT WORK REQUIREMENTS FOR MINING CLAIMS Performing Assessment Work § 3836.10 Performing assessment work....

  14. 43 CFR 3836.10 - Performing assessment work.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 43 Public Lands: Interior 2 2012-10-01 2012-10-01 false Performing assessment work. 3836.10... MANAGEMENT, DEPARTMENT OF THE INTERIOR MINERALS MANAGEMENT (3000) ANNUAL ASSESSMENT WORK REQUIREMENTS FOR MINING CLAIMS Performing Assessment Work § 3836.10 Performing assessment work....

  15. 43 CFR 3836.10 - Performing assessment work.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 43 Public Lands: Interior 2 2014-10-01 2014-10-01 false Performing assessment work. 3836.10... MANAGEMENT, DEPARTMENT OF THE INTERIOR MINERALS MANAGEMENT (3000) ANNUAL ASSESSMENT WORK REQUIREMENTS FOR MINING CLAIMS Performing Assessment Work § 3836.10 Performing assessment work....

  16. 43 CFR 3836.10 - Performing assessment work.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 43 Public Lands: Interior 2 2011-10-01 2011-10-01 false Performing assessment work. 3836.10... MANAGEMENT, DEPARTMENT OF THE INTERIOR MINERALS MANAGEMENT (3000) ANNUAL ASSESSMENT WORK REQUIREMENTS FOR MINING CLAIMS Performing Assessment Work § 3836.10 Performing assessment work....

  17. Improving the performance of E-beam 2nd writing in mask alignment accuracy and pattern faultless for CPL technology

    NASA Astrophysics Data System (ADS)

    Lee, Booky; Hung, Richard; Lin, Orson; Wu, Yuan-Hsun; Kozuma, Makoto; Shih, Chiang-Lin; Hsu, Michael; Hsu, Stephen D.

    2005-01-01

    Chromeless phase lithography (CPL) is a potential technology for low-k1 optical imaging. With CPL, the local transmission rate can be controlled to obtain optimized through-pitch imaging performance. CPL uses a zebra pattern to manipulate the local pattern transmission as a tri-tone structure in mask manufacturing, which requires a second-level (2nd) writing step to create the zebra pattern. The zebra pattern must be small enough not to print, and the 2nd writing overlay accuracy must be kept within 40 nm. This requirement is a challenge for the E-beam 2nd writing function. The focus of this paper is how to improve the overlay accuracy and obtain a precise pattern that forms an accurate pattern transmission. Several steps were taken to accomplish this. To check for possible contamination of the E-beam chamber by the conductive layer coating, we monitored the particle count in the chamber before and after loading and unloading the coated blank. The conductivity of the conductive layer was checked, and the charging effect was eliminated by optimizing the film thickness. The dimensions of the alignment mark were also optimized through experimentation. Finally, we checked the remaining photoresist to ensure a sufficient process window in the etching process. To verify the performance of the process we examined 3D SEM pictures, and we used AIMS to demonstrate the resolution improvement capability of CPL compared to the traditional binary mask and halftone mask methods. The achieved overlay accuracy and process provide a promising approach for NGL reticle manufacturing with CPL technology.

  18. Performance-based assessment of reconstructed images

    SciTech Connect

    Hanson, Kenneth

    2009-01-01

    During the early 90s, I engaged in a productive and enjoyable collaboration with Robert Wagner and his colleague, Kyle Myers. We explored the ramifications of the principle that the quality of an image should be assessed on the basis of how well it facilitates the performance of appropriate visual tasks. We applied this principle to algorithms used to reconstruct scenes from incomplete and/or noisy projection data. For binary visual tasks, we used both the conventional disk detection and a new challenging task, inspired by the Rayleigh resolution criterion, of deciding whether an object was a blurred version of two dots or a bar. The results of human and machine observer tests were summarized with the detectability index based on the area under the ROC curve. We investigated a variety of reconstruction algorithms, including ART, with and without a nonnegativity constraint, and the MEMSYS3 algorithm. We concluded that performance of the Rayleigh task was optimized when the strength of the prior was near MEMSYS's default 'classic' value for both human and machine observers. A notable result was that the most-often-used metric of rms error in the reconstruction was not necessarily indicative of the value of a reconstructed image for the purpose of performing visual tasks.
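
The detectability index based on the area under the ROC curve can be computed from observer rating data via the Mann-Whitney statistic. A sketch with simulated Gaussian ratings (an assumption, not the study's observer data), using `NormalDist` from the standard library for the inverse normal CDF:

```python
import numpy as np
from statistics import NormalDist

def auc_mann_whitney(signal, noise):
    # Area under the ROC curve as the probability that a randomly chosen
    # signal-present rating exceeds a signal-absent one (ties count half).
    s = np.asarray(signal)[:, None]
    n = np.asarray(noise)[None, :]
    return float(np.mean(s > n) + 0.5 * np.mean(s == n))

def detectability_index(auc):
    # d_A = sqrt(2) * Phi^{-1}(AUC), a common AUC-based summary index
    return float(np.sqrt(2.0) * NormalDist().inv_cdf(auc))

rng = np.random.default_rng(1)
noise_ratings = rng.normal(0.0, 1.0, 2000)    # "object absent" trials
signal_ratings = rng.normal(1.0, 1.0, 2000)   # "object present" trials

auc = auc_mann_whitney(signal_ratings, noise_ratings)
d_a = detectability_index(auc)
```

For unit-variance Gaussian ratings separated by one standard deviation, the true AUC is about 0.76 and the index is about 1.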

  19. Assessing liner performance using on-farm milk meters.

    PubMed

    Penry, J F; Leonardi, S; Upton, J; Thompson, P D; Reinemann, D J

    2016-08-01

    The primary objective of this study was to quantify and compare the interactive effects of liner compression, milking vacuum level, and pulsation settings on average milk flow rates for liners representing the range of liner compression of commercial liners. A secondary objective was to evaluate a methodology for assessing liner performance that can be applied on commercial dairy farms. Eight different liner types were assessed using 9 different combinations of milking system vacuum and pulsation settings applied to a herd of 80 cows, with vacuum and pulsation conditions changed daily for 36 d using a central composite experimental design. Liner response surfaces were created for the explanatory variables milking system vacuum (Vsystem) and pulsator ratio (PR) and the response variable average milk flow rate (AMF = total yield/total cups-on time), expressed as a fraction of the within-cow average flow rate for all treatments (average milk flow rate fraction, AMFf). Response surfaces were also created for between-liner comparisons at standardized conditions of claw vacuum and milk ratio (the fraction of the pulsation cycle during which milk is flowing). The highest AMFf was observed at the highest levels of Vsystem, PR, and overpressure. All liners showed an increase in AMF as milking conditions were changed from low to high standardized conditions of claw vacuum and milk ratio. Differences in AMF between liners were smallest at the most gentle milking conditions (low Vsystem and low milk ratio), and these between-liner differences increased as liner overpressure increased. Differences were noted in the vacuum drop between Vsystem and claw vacuum depending on the liner venting system, with short-milk-tube vented liners having a greater vacuum drop than mouthpiece-chamber vented liners. The accuracy of liner performance assessment in commercial parlors fitted with milk meters can be improved by using a central composite experimental design with a repeated center point treatment.

  20. Accuracy of total ozone retrieval from NOAA SBUV/2 measurements: Impact of instrument performance

    SciTech Connect

    Ahmad, Z.; Deland, M.T.; Cebula, R.P.; Weiss, H.; Wellemeyer, C.G.; Planet, W.G.; Lienesch, J.H.; Bowman, H.D.; Miller, A.J.; Nagatani, R.M. |

    1994-11-01

    The National Oceanic and Atmospheric Administration/National Environmental Satellite Data and Information Service (NOAA/NESDIS) has been collecting and evaluating the solar backscattered ultraviolet (SBUV/2) instrument data from NOAA 9 and NOAA 11 spacecraft since March 1985. Over 5 years (March 1985 to October 1990) of NOAA 9 (version 5.0) and over 4 years (January 1989 to June 1993) of NOAA 11 (version 6.0) reprocessed data are now available to the scientific community to study geophysical phenomena involving ozone. This paper examines the impact of the instrument performance on total ozone retrieval from the two instruments. We estimate that at the end of October 1990 the total postlaunch error for NOAA 9 due to instrument alone is -2.2%. A significant fraction of this error (-1.9%) is due to diffuser degradation which is not accounted for in the version 5 reprocessing. The estimate for NOAA 11 total postlaunch instrument error, at the end of June 1993, is -0.4%.

  1. Assessing Inter-Sensor Variability and Sensible Heat Flux Derivation Accuracy for a Large Aperture Scintillometer

    PubMed Central

    Rambikur, Evan H.; Chávez, José L.

    2014-01-01

    The accuracy in determining sensible heat flux (H) of three Kipp and Zonen large aperture scintillometers (LAS) was evaluated with reference to an eddy covariance (EC) system over relatively flat and uniform grassland near Timpas (CO, USA). Other tests have revealed inherent variability between Kipp and Zonen LAS units and bias to overestimate H. Average H fluxes were compared between LAS units and between LAS and EC. Despite good correlation, inter-LAS biases in H were found between 6% and 13% in terms of the linear regression slope. Physical misalignment was observed to result in increased scatter and bias between H solutions of a well-aligned and poorly-aligned LAS unit. Comparison of LAS and EC H showed little bias for one LAS unit, while the other two units overestimated EC H by more than 10%. A detector alignment issue may have caused the inter-LAS variability, supported by the observation in this study of differing power requirements between LAS units. It is possible that the LAS physical misalignment may have caused edge-of-beam signal noise as well as vulnerability to signal noise from wind-induced vibrations, both having an impact on the solution of H. In addition, there were some uncertainties in the solutions of H from the LAS and EC instruments, including lack of energy balance closure with the EC unit. However, the results obtained do not show clear evidence of inherent bias for the Kipp and Zonen LAS to overestimate H as found in other studies. PMID:24473285

  2. In vitro assessment of the accuracy of extraoral periapical radiography in root length determination

    PubMed Central

    Nazeer, Muhammad Rizwan; Khan, Farhan Raza; Rahman, Munawwar

    2016-01-01

    Objective: To determine the accuracy of extraoral periapical radiography in obtaining root length by comparing it with radiographs obtained from the standard intraoral approach and an extended-distance intraoral approach. Materials and Methods: It was an in vitro, comparative study conducted at the dental clinics of Aga Khan University Hospital. ERC exemption was obtained for this work, ref number 3407Sur-ERC-14. We included premolars and molars of a standard phantom head mounted with metal and radiopaque teeth. Radiographs were exposed using three approaches: standard intraoral, extended-length intraoral, and extraoral. Since the unit of analysis was the individual root, we had a total of 24 images. The images were stored in VixWin software. Root lengths were determined using the scale function of the measuring tool built into the software. Data were analyzed using SPSS version 19.0 and GraphPad software. The Pearson correlation coefficient and the Bland-Altman test were applied to determine whether the tooth length readings obtained from the three approaches were correlated. P = 0.05 was taken as the threshold for statistical significance. Results: The correlation between standard intraoral and extended intraoral was 0.97; the correlation between standard intraoral and extraoral was 0.82, while the correlation between extended intraoral and extraoral was 0.76. The results of the Bland-Altman test showed that the average discrepancy between these methods is not large enough to be considered significant. Conclusions: It appears that the extraoral radiographic method can be used for root length determination in subjects where intraoral radiography is not possible. PMID:27011737
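
The two statistics used here, the Pearson correlation and the Bland-Altman comparison, can be sketched as follows; the root-length values are invented for illustration and the helper name is an assumption:

```python
import numpy as np

def bland_altman(a, b):
    # Bias (mean difference) and 95% limits of agreement between two methods.
    diff = np.asarray(a) - np.asarray(b)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Invented root lengths (mm) from two radiographic approaches
intraoral = np.array([19.8, 21.2, 18.5, 20.1, 22.0, 19.0, 20.7, 21.5])
extraoral = np.array([20.1, 21.0, 18.9, 19.8, 22.4, 19.3, 20.5, 21.9])

r = np.corrcoef(intraoral, extraoral)[0, 1]        # Pearson correlation
bias, (lower, upper) = bland_altman(intraoral, extraoral)
```

Agreement is judged not by the correlation alone but by whether the bias and limits of agreement are clinically acceptable, which is the point of the Bland-Altman analysis.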

  3. Accuracy and quality assessment of 454 GS-FLX Titanium pyrosequencing

    PubMed Central

    2011-01-01

    Background The rapid evolution of 454 GS-FLX sequencing technology has not been accompanied by a reassessment of the quality and accuracy of the sequences obtained. Current strategies for decision-making and error-correction are based on an initial analysis by Huse et al. in 2007 for the older GS20 system, based on experimental sequences. We analyze here the quality of 454 sequencing data and identify factors playing a role in sequencing error, through the use of an extensive dataset for Roche control DNA fragments. Results We obtained a mean error rate for 454 sequences of 1.07%. More importantly, the error rate is not randomly distributed; it occasionally rose to more than 50% in certain positions, and its distribution was linked to several experimental variables. The main factors related to error are the presence of homopolymers, position in the sequence, size of the sequence and spatial localization in PT plates for insertion and deletion errors. These factors can be described by considering seven variables. No single variable can account for the error rate distribution, but most of the variation is explained by the combination of all seven variables. Conclusions The pattern identified here calls for the use of internal controls and, when available (e.g. when sequencing amplicons), error-correcting base callers to correct for errors. For shotgun libraries, the use of both sequencing primers and deep coverage, combined with the use of random sequencing primer sites, should partly compensate for even high error rates, although it may prove more difficult than previously thought to distinguish between low-frequency alleles and errors. PMID:21592414

  4. Accuracy and feasibility of video analysis for assessing hamstring flexibility and validity of the sit-and-reach test.

    PubMed

    Mier, Constance M

    2011-12-01

    The accuracy of video analysis of the passive straight-leg raise test (PSLR) and the validity of the sit-and-reach test (SR) were tested in 60 men and women. Computer software measured static hip-joint flexion accurately. High within-session reliability of the PSLR was demonstrated (R > .97). Test-retest (separate days) reliability for the SR was high in men (R = .97) and women (R = .98), and moderate for the PSLR in men (R = .79) and women (R = .89). SR validity (PSLR as criterion) was higher in women (Day 1, r = .69; Day 2, r = .81) than men (Day 1, r = .64; Day 2, r = .66). In conclusion, video analysis is accurate and feasible for assessing static joint angles, the PSLR and SR tests are very reliable methods for assessing flexibility, and the validity of the SR for hamstring flexibility was moderate in women and low in men.

  5. Performing Probabilistic Risk Assessment Through RAVEN

    SciTech Connect

    A. Alfonsi; C. Rabiti; D. Mandelli; J. Cogliati; R. Kinoshita

    2013-06-01

    The Reactor Analysis and Virtual control ENviroment (RAVEN) code is a software tool that acts as the control logic driver and post-processing engine for the newly developed thermal-hydraulic code RELAP-7. RAVEN is now a multi-purpose Probabilistic Risk Assessment (PRA) software framework that provides several functionalities: (1) deriving and actuating the control logic required to simulate the plant control system and operator actions (guided procedures), allowing on-line monitoring/controlling in the phase space; (2) performing both Monte Carlo sampling of randomly distributed events and Dynamic Event Tree based analysis; and (3) facilitating input/output handling through a Graphical User Interface (GUI) and a post-processing data mining module.

  6. Accuracy of dual energy X-ray absorptiometry (DXA) in assessing carcass composition from different pig populations.

    PubMed

    Soladoye, O P; López Campos, Ó; Aalhus, J L; Gariépy, C; Shand, P; Juárez, M

    2016-11-01

    The accuracy of dual energy X-ray absorptiometry (DXA) in assessing carcass composition from pigs with diverse characteristics was examined in the present study. A total of 648 pigs from three different sire breeds, two sexes, two slaughter weights and three different diets were employed. DXA estimations were used to predict the dissected/chemical yield of lean and fat for carcass sides and primal cuts. The accuracy of the predictions was assessed based on the coefficient of determination (R²) and residual standard deviation (RSD). The linear relationships for dissected fat and lean for all the primal cuts and carcass sides were high (R² > 0.94, P < 0.01), with low RSD (< 1.9%). Relationships between DXA and chemical fat and lean of pork bellies were also high (R² > 0.94, P < 0.01), with RSD < 2.9%. These linear relationships remained high over the full range of variation in the pig population, except for sire breed, where the coefficient of determination decreased when carcasses were classified by this variable. PMID:27395824
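
The two accuracy measures used here, R² and RSD, can be computed from predicted and reference compositions as sketched below. The values are invented, and the choice of two fitted parameters (slope and intercept) for the degrees of freedom is an illustrative assumption:

```python
import numpy as np

def r2_and_rsd(y_ref, y_pred, n_params=2):
    # Coefficient of determination and residual standard deviation;
    # n - n_params degrees of freedom for a slope+intercept calibration.
    y_ref, y_pred = np.asarray(y_ref), np.asarray(y_pred)
    ss_res = np.sum((y_ref - y_pred) ** 2)
    ss_tot = np.sum((y_ref - y_ref.mean()) ** 2)
    return 1.0 - ss_res / ss_tot, np.sqrt(ss_res / (len(y_ref) - n_params))

# Invented lean percentages: DXA estimate vs. dissection reference
dxa = np.array([55.2, 60.1, 58.4, 62.3, 57.0, 61.5, 59.2, 56.8])
dissected = dxa + np.array([0.5, -0.4, 0.3, -0.2, 0.6, -0.5, 0.1, -0.3])

slope, intercept = np.polyfit(dxa, dissected, 1)   # linear calibration
r2, rsd = r2_and_rsd(dissected, slope * dxa + intercept)
```

A high R² with a low RSD, as reported in the abstract, indicates that the linear calibration explains most of the between-carcass variation while leaving small residuals.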

  7. Cascade impactor (CI) mensuration--an assessment of the accuracy and precision of commercially available optical measurement systems.

    PubMed

    Chambers, Frank; Ali, Aziz; Mitchell, Jolyon; Shelton, Christopher; Nichols, Steve

    2010-03-01

    Multi-stage cascade impactors (CIs) are the preferred measurement technique for characterizing the aerodynamic particle size distribution of an inhalable aerosol. Stage mensuration is the recommended pharmacopeial method for monitoring CI "fitness for purpose" within a GxP environment. The Impactor Sub-Team of the European Pharmaceutical Aerosol Group has undertaken an inter-laboratory study to assess both the precision and accuracy of a range of makes and models of instruments currently used for optical inspection of impactor stages. Measurement of two Andersen 8-stage 'non-viable' cascade impactor "reference" stages that were representative of jet sizes for this instrument type (stages 2 and 7) confirmed that all instruments evaluated were capable of reproducible jet measurement, with the overall capability being within the current pharmacopeial stage specifications for both stages. In the assessment of absolute accuracy, small but consistent differences (ca. 0.6% of the certified value) were observed between 'dots' and 'spots' of a calibrated chromium-plated reticule, most likely the result of the treatment of partially lit pixels along the circumference of this calibration standard. Measurements of three certified ring gauges, the smallest having a nominal diameter of 1.0 mm, were consistent with the observation that the treatment of partially illuminated pixels at the periphery of the projected image can result in undersizing. However, the bias was less than 1% of the certified diameter. The optical inspection instruments evaluated are fully capable of confirming cascade impactor suitability in accordance with pharmacopeial practice.

  8. Integration of small run-of-river and solar power: The hydrological regime prediction/assessment accuracy

    NASA Astrophysics Data System (ADS)

    Francois, Baptiste; Creutin, Jean-Dominique; Hingray, Benoit; Zoccatelli, Davide

    2014-05-01

    We analyzed how water discharge prediction accuracy controls the quality of assessments of run-of-river and solar power interaction. We especially sought to determine in which hydro-meteorological contexts a simple water discharge prediction method can produce a pertinent assessment of run-of-river and solar power interaction. We considered three degrees of complexity in estimating water discharges: (i) model-based estimation using parameters calibrated over the watershed, (ii) model-based estimation using parameters from a nearby watershed, and (iii) a scaling law. This work was performed for a set of watersheds along a climate transect running from the Alpine crests to the Veneto plains in the north-eastern part of Italy, where observed run-of-river power generation presents different degrees of complementarity with solar power. The work presented is part of the FP7 project COMPLEX (Knowledge based climate mitigation systems for a low carbon economy; http://www.complex.ac.uk/).

  9. Assessing weight perception accuracy to promote weight loss among U.S. female adolescents: A secondary analysis

    PubMed Central

    2010-01-01

    Background Overweight and obesity have become a global epidemic. The prevalence of overweight and obesity among U.S. adolescents has almost tripled in the last 30 years. Results from recent systematic reviews demonstrate that no single intervention or strategy successfully assists overweight or obese adolescents in losing weight. An understanding of the factors that influence healthy weight-loss behaviors among overweight and obese female adolescents informs effective, multi-component weight-loss interventions. There is limited evidence demonstrating associations between demographic variables, body-mass index, and weight perception among female adolescents trying to lose weight. There is also a lack of previous studies examining the association of the accuracy of female adolescents' weight perception with their efforts to lose weight. This study therefore examined the associations of body-mass index, weight perception, and weight-perception accuracy with trying to lose weight and engaging in exercise as a weight-loss method among a representative sample of U.S. female adolescents. Methods A nonexperimental, descriptive, comparative secondary analysis was conducted using data from Wave II (1996) of the National Longitudinal Study of Adolescent Health (Add Health). Data representative of U.S. female adolescents (N = 2216) were analyzed using STATA statistical software. Descriptive statistics and survey-weighted logistic regression were performed to determine whether the demographic and independent variables (body-mass index, weight perception, and weight-perception accuracy) were associated with trying to lose weight and engaging in exercise as a weight-loss method. Results Age, Black or African American race, body-mass index, weight perception, and weight-perception accuracy were consistently associated with the likelihood of trying to lose weight among U.S. female adolescents. Age, body-mass index, weight perception, and weight-perception accuracy were

  10. Validation of selected analytical methods using accuracy profiles to assess the impact of a Tobacco Heating System on indoor air quality.

    PubMed

    Mottier, Nicolas; Tharin, Manuel; Cluse, Camille; Crudo, Jean-René; Lueso, María Gómez; Goujon-Ginglinger, Catherine G; Jaquier, Anne; Mitova, Maya I; Rouget, Emmanuel G R; Schaller, Mathieu; Solioz, Jennifer

    2016-09-01

    Studies in environmentally controlled rooms have been used over the years to assess the impact of environmental tobacco smoke on indoor air quality. As new tobacco products are developed, it is important to determine their impact on air quality when used indoors. Before such an assessment can take place it is essential that the analytical methods used to assess indoor air quality are validated and shown to be fit for their intended purpose. Consequently, for this assessment, an environmentally controlled room was built and seven analytical methods, representing eighteen analytes, were validated. The validations were carried out with smoking machines using a matrix-based approach applying the accuracy profile procedure. The performances of the methods were compared for all three matrices under investigation: background air samples, the environmental aerosol of Tobacco Heating System THS 2.2, a heat-not-burn tobacco product developed by Philip Morris International, and the environmental tobacco smoke of a cigarette. The environmental aerosol generated by the THS 2.2 device did not have any appreciable impact on the performances of the methods. The comparison between the background and THS 2.2 environmental aerosol samples generated by smoking machines showed that only five compounds were higher when THS 2.2 was used in the environmentally controlled room. Regarding environmental tobacco smoke from cigarettes, the yields of all analytes were clearly above those obtained with the other two air sample types. PMID:27343591

  12. Assessment of VIIRS radiometric performance using vicarious calibration sites

    NASA Astrophysics Data System (ADS)

    Uprety, Sirish; Cao, Changyong; Blonski, Slawomir; Wang, Wenhui

    2014-09-01

    Radiometric performance of satellite instruments needs to be monitored regularly to determine whether there is any drift in the instrument response over time, despite best-effort calibration. If a drift occurs, it needs to be characterized in order to keep the radiometric accuracy and stability well within the specification. Instrument gain change over time can be validated independently using many techniques, such as stable earth targets (desert, ocean, and snow sites), inter-comparison with other well-calibrated radiometers (using SNO and SNO-x), deep convective clouds (DCC), lunar observations, or other methods. This study focuses on using vicarious calibration sites for the assessment of the radiometric performance of the Suomi National Polar-Orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) reflective solar bands. The calibration stability is primarily analyzed by developing top-of-atmosphere (TOA) reflectance time series over these sites. In addition, the radiometric bias relative to AQUA MODIS is estimated over these calibration sites and analyzed. The radiometric bias is quantified in terms of observed and spectral bias. The spectral characterization and bias analysis will be performed using hyperspectral measurements and radiative transfer models such as MODTRAN.
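    The inter-comparison against a reference instrument reduces, in its simplest form, to a mean relative bias over matched observations; a sketch with invented reflectance values:

```python
from statistics import fmean

def relative_bias_pct(target, reference):
    """Mean relative radiometric bias (%) of one instrument's TOA
    reflectances against a reference instrument over a stable site."""
    return 100.0 * fmean((t - r) / r for t, r in zip(target, reference))

# hypothetical matched VIIRS/MODIS TOA reflectances over a desert site
viirs = [0.305, 0.312, 0.298]
modis = [0.300, 0.310, 0.295]
bias = relative_bias_pct(viirs, modis)
```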

  13. Effect of training, education, professional experience, and need for cognition on accuracy of exposure assessment decision-making.

    PubMed

    Vadali, Monika; Ramachandran, Gurumurthy; Banerjee, Sudipto

    2012-04-01

    Results are presented from a study that investigated the effect of characteristics of occupational hygienists relating to educational and professional experience and task-specific experience on the accuracy of occupational exposure judgments. A total of 49 occupational hygienists from six companies participated in the study and 22 tasks were evaluated. Participating companies provided monitoring data on specific tasks. Information on nine educational and professional experience determinants (e.g. educational background, years of occupational hygiene and exposure assessment experience, professional certifications, statistical training and experience, and the 'need for cognition (NFC)', which is a measure of an individual's motivation for thinking) and four task-specific determinants was also collected from each occupational hygienist. Hygienists had a wide range of educational and professional backgrounds for tasks across a range of industries with different workplace and task characteristics. The American Industrial Hygiene Association exposure assessment strategy was used to make exposure judgments on the probability of the 95th percentile of the underlying exposure distribution being located in one of four exposure categories relative to the occupational exposure limit. After reviewing all available job/task/chemical information, hygienists were asked to provide their judgment in probabilistic terms. Both qualitative (judgments without monitoring data) and quantitative judgments (judgments with monitoring data) were recorded. Ninety-three qualitative judgments and 2142 quantitative judgments were obtained. Data interpretation training, with simple rules of thumb for estimating the 95th percentiles of lognormal distributions, was provided to all hygienists. A data interpretation test (DIT) was also administered and judgments were elicited before and after training. General linear models and cumulative logit models were used to analyze the relationship between
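    The "rules of thumb for estimating the 95th percentiles of lognormal distributions" mentioned above are commonly of the form X95 = GM x GSD^1.645; a sketch with hypothetical monitoring data and illustrative category cutoffs (the exact AIHA band definitions are not reproduced here):

```python
def lognormal_x95(gm, gsd):
    """95th percentile of a lognormal exposure distribution from its
    geometric mean (GM) and geometric standard deviation (GSD):
    X95 = GM * GSD**z95, with z95 ~ 1.645."""
    return gm * gsd ** 1.645

def exposure_category(x95, oel):
    """Category of X95 relative to the OEL (illustrative cutoffs only)."""
    ratio = x95 / oel
    if ratio < 0.1:
        return 1   # highly controlled
    if ratio < 0.5:
        return 2   # well controlled
    if ratio < 1.0:
        return 3   # controlled
    return 4       # poorly controlled

x95 = lognormal_x95(gm=0.12, gsd=2.5)   # mg/m3, hypothetical data
cat = exposure_category(x95, oel=1.0)
```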

  14. Validating the Accuracy of Reaction Time Assessment on Computer-Based Tablet Devices.

    PubMed

    Schatz, Philip; Ybarra, Vincent; Leitner, Donald

    2015-08-01

    Computer-based assessment has evolved to tablet-based devices. Despite the availability of tablets and "apps," there is limited research validating their use. We documented timing delays between stimulus presentation and (simulated) touch response on iOS devices (3rd- and 4th-generation Apple iPads) and Android devices (Kindle Fire, Google Nexus, Samsung Galaxy) at response intervals of 100, 250, 500, and 1,000 milliseconds (ms). Results showed significantly greater timing error on Google Nexus and Samsung tablets (81-97 ms), than Kindle Fire and Apple iPads (27-33 ms). Within Apple devices, iOS 7 obtained significantly lower timing error than iOS 6. Simple reaction time (RT) trials (250 ms) on tablet devices represent 12% to 40% error (30-100 ms), depending on the device, which decreases considerably for choice RT trials (3-5% error at 1,000 ms). Results raise implications for using the same device for serial clinical assessment of RT using tablets, as well as the need for calibration of software and hardware. PMID:25612627
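    The 12% to 40% figure quoted above follows directly from dividing the device timing error by the nominal response interval:

```python
def timing_error_pct(error_ms, interval_ms):
    """Timing error as a percentage of the nominal response interval."""
    return 100.0 * error_ms / interval_ms

low = timing_error_pct(30, 250)    # best devices on a 250-ms simple-RT trial
high = timing_error_pct(100, 250)  # worst devices on the same trial
```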

  16. Display performance assessment and certification at NIDL

    NASA Astrophysics Data System (ADS)

    Enstrom, Ronald E.; Grote, Michael D.; Brill, Michael H.

    2000-08-01

    The National Information Display Laboratory (NIDL) is chartered to ensure the most efficient and cost-effective transfer of display technologies to government applications. To assure high quality in displays acquired by the government, the NIDL conducts systematic measurement of candidate displays, guided by standards of metrology and performance. The NIDL also initiates and promulgates such standards, through bodies such as ISO, VESA, ANSI, PIMA, and IEC. This paper discusses three aspects of the quality-assurance program, which correspond to successive steps in monitor verification: (1) set up the monitor so it performs as well as possible; (2) measure it carefully in dimensions such as grayscale, color, and resolution; and (3) compare the measurements against acceptance criteria that are stringent but achievable. In each of these stages, objective measurements are supplemented (and sometimes replaced) by allowing humans to assess a variety of test patterns (some designed by NIDL). Subjective and objective tests each have advantages: human vision is the ultimate arbiter of display quality, but objective measurements are more standardizable than the judgements of individual observers. The best of both worlds would be a metric based on an objective model of human vision. Toward this goal, the NIDL has applied a vision model to display-quality problems (NIIRS prediction and impacts of screen reflection).

  17. Accuracy Assessment for PPP by Comparing Various Online PPP Service Solutions with Bernese 5.2 Network Solution

    NASA Astrophysics Data System (ADS)

    Ozgur Uygur, Sureyya; Aydin, Cuneyt; Demir, Deniz Oz; Cetin, Seda; Dogan, Ugur

    2016-04-01

    The GNSS precise point positioning (PPP) technique is frequently used for geodetic applications such as the monitoring of reference stations and the estimation of tropospheric parameters. This technique uses undifferenced GNSS observations along with IGS products to reach a high level of positioning accuracy. The accuracy level depends on the quality of the GNSS data as well as the length of the observation session and the quality of the external data products. With the PPP technique, it is possible to reach the desired positioning accuracy in the reference frame of the satellite coordinates using data from a single GNSS receiver. PPP is provided to users by scientific GNSS processing software packages (such as GIPSY from NASA-JPL and the Bernese Processing Software from AIUB) as well as by several online PPP services. The related services are Auto-GIPSY, provided by the JPL California Institute of Technology; CSRS-PPP, provided by Natural Resources Canada; GAPS, provided by the University of New Brunswick; and Magic-PPP, provided by GMV. In this study, we assess the accuracy of PPP by comparing the solutions from the online PPP services with Bernese 5.2 network solutions. Seven days (DoY 256-262 in 2015) of GNSS observations with 24-hour session durations, collected at a set of 14 stations on the CORS-TR network in Turkey, were processed in static mode using the above-mentioned PPP services. The averages of the daily coordinates from the Bernese 5.2 static network solution, tied to 12 IGS stations, were taken as the true coordinates. Our results indicate that the distributions of the north, east, and up daily position differences are characterized by means and RMS of 1.9±0.5, 2.1±0.7, 4.7±2.1 mm for CSRS; 1.6±0.6, 1.4±0.8, 5.5±3.9 mm for Auto-GIPSY; 3.0±0.8, 3.0±1.2, 6.0±3.2 mm for Magic GNSS; and 2.1±1.3, 2.8±1.7, 5.0±2.3 mm for GAPS, with respect to the Bernese 5.2 network solution. Keywords: PPP, Online GNSS Service, Bernese, Accuracy
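    The mean±RMS summaries reported above can be reproduced for any coordinate component; a sketch with invented daily north-component differences (mm), not the study's values:

```python
import math

def mean_and_rms(diffs_mm):
    """Mean and RMS about the mean of daily position differences,
    reported as mean±RMS as in PPP-vs-network comparisons."""
    n = len(diffs_mm)
    mean = sum(diffs_mm) / n
    rms = math.sqrt(sum((d - mean) ** 2 for d in diffs_mm) / n)
    return mean, rms

north_diffs = [1.4, 2.3, 1.7, 2.6, 1.5, 2.1, 1.7]  # mm, one week, illustrative
m, r = mean_and_rms(north_diffs)
```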

  18. Malignant mesothelioma, airborne asbestos, and the need for accuracy in chrysotile risk assessments.

    PubMed

    Meisenkothen, Christopher

    2013-01-01

    A man diagnosed with pleural mesothelioma sought legal representation with the author's law firm. He worked 33 years in a wire and cable factory in the northeastern United States (Connecticut) that exclusively used chrysotile asbestos in its manufacturing process. This is the first report of mesothelioma arising from employees of this factory. This report provides additional support for the proposition that chrysotile asbestos can cause malignant mesothelioma in humans. If chrysotile risk assessments are to be accurate, then the literature should contain an accurate accounting of all mesotheliomas alleged to be caused by chrysotile asbestos. This is important not just for public health professionals but also for individuals and companies involved in litigation over asbestos-related diseases. If reports such as these remain unknown, it is probable that cases of mesothelioma among chrysotile-exposed cohorts would go unrecognized and chrysotile-using factories would be incorrectly cited as having no mesotheliomas among their employees.

  19. Accuracy Assessment of Three-dimensional Surface Reconstructions of In vivo Teeth from Cone-beam Computed Tomography

    PubMed Central

    Sang, Yan-Hui; Hu, Hong-Cheng; Lu, Song-He; Wu, Yu-Wei; Li, Wei-Ran; Tang, Zhi-Hui

    2016-01-01

    Background: The accuracy of three-dimensional (3D) reconstructions from cone-beam computed tomography (CBCT) has been particularly important in dentistry, which will affect the effectiveness of diagnosis, treatment plan, and outcome in clinical practice. The aims of this study were to assess the linear, volumetric, and geometric accuracy of 3D reconstructions from CBCT and to investigate the influence of voxel size and CBCT system on the reconstructions results. Methods: Fifty teeth from 18 orthodontic patients were assigned to three groups as NewTom VG 0.15 mm group (NewTom VG; voxel size: 0.15 mm; n = 17), NewTom VG 0.30 mm group (NewTom VG; voxel size: 0.30 mm; n = 16), and VATECH DCTPRO 0.30 mm group (VATECH DCTPRO; voxel size: 0.30 mm; n = 17). The 3D reconstruction models of the teeth were segmented from CBCT data manually using Mimics 18.0 (Materialise Dental, Leuven, Belgium), and the extracted teeth were scanned by 3Shape optical scanner (3Shape A/S, Denmark). Linear and volumetric deviations were separately assessed by comparing the length and volume of the 3D reconstruction model with physical measurement by paired t-test. Geometric deviations were assessed by the root mean square value of the imposed 3D reconstruction and optical models by one-sample t-test. To assess the influence of voxel size and CBCT system on 3D reconstruction, analysis of variance (ANOVA) was used (α = 0.05). Results: The linear, volumetric, and geometric deviations were −0.03 ± 0.48 mm, −5.4 ± 2.8%, and 0.117 ± 0.018 mm for NewTom VG 0.15 mm group; −0.45 ± 0.42 mm, −4.5 ± 3.4%, and 0.116 ± 0.014 mm for NewTom VG 0.30 mm group; and −0.93 ± 0.40 mm, −4.8 ± 5.1%, and 0.194 ± 0.117 mm for VATECH DCTPRO 0.30 mm group, respectively. There were statistically significant differences between groups in terms of linear measurement (P < 0.001), but no significant difference in terms of volumetric measurement (P = 0.774). No statistically significant difference were
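    The paired t-test used for the linear deviations can be written with the standard library alone; the tooth-length data below are invented for illustration, not the study's measurements:

```python
import math
import statistics

def paired_t(x, y):
    """Paired t-statistic and degrees of freedom for matched measurements,
    e.g. 3D-reconstruction length vs. physical measurement per tooth."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    t = statistics.fmean(d) / (statistics.stdev(d) / math.sqrt(n))
    return t, n - 1

recon  = [10.1, 11.3, 9.8, 10.6, 11.0]   # mm, hypothetical CBCT lengths
actual = [10.5, 11.6, 10.3, 11.0, 11.4]  # mm, hypothetical physical lengths
t_stat, dof = paired_t(recon, actual)
```

    A strongly negative t-statistic here would indicate systematic undersizing of the reconstructions, consistent in spirit with the negative linear deviations reported.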

  20. Performance Assessment of Two GPS Receivers on Space Shuttle

    NASA Technical Reports Server (NTRS)

    Schroeder, Christine A.; Schutz, Bob E.

    1996-01-01

    Space Shuttle STS-69 was launched on September 7, 1995, carrying the Wake Shield Facility (WSF-02) among its payloads. The mission included two GPS receivers: a Collins 3M receiver onboard the Endeavour and an Osborne flight TurboRogue, known as the TurboStar, onboard the WSF-02. Two of the WSF-02 GPS Experiment objectives were to: (1) assess the ability to use GPS in a relative satellite positioning mode using the receivers on Endeavour and WSF-02; and (2) assess the performance of the receivers to support high-precision orbit determination at the 400-km altitude. Three ground tests of the receivers were conducted in order to characterize the respective receivers. The analysis of the tests utilized the double-differencing technique. A similar test in orbit was conducted during STS-69 while the WSF-02 was held by the Endeavour robot arm for a one-hour period. In these tests, biases of up to 140 m were observed in the double-differenced pseudorange measurements that do not cancel in double differencing. These biases appear to originate in the Collins receiver, but their effect can be mitigated by including measurement bias parameters to accommodate them in an estimation process. An additional test was conducted in which the orbit of the combined Endeavour/WSF-02 was determined independently with each receiver. These one-hour arcs were based on forming double differences with 13 TurboRogue receivers in the global IGS network and estimating pseudorange biases for the Collins. Various analyses suggest the TurboStar overall orbit accuracy is about one to two meters for this period, based on double-differenced phase residuals of 34 cm. These residuals indicate the level of unmodeled forces on Endeavour produced by gravitational and nongravitational effects. The rms differences between the two independently determined orbits are better than 10 meters, thereby demonstrating the accuracy of the Collins-determined orbit at this level as well as the
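    Double differencing combines two receivers and two satellites so that common clock and bias terms cancel; the sketch below (toy pseudoranges in meters) shows a receiver clock error cancelling exactly, which is why a bias that survives double differencing, as reported for the Collins unit, points to a receiver-internal effect:

```python
def double_difference(p_a_i, p_a_j, p_b_i, p_b_j):
    """Double-differenced pseudorange: the between-receiver (A, B)
    difference of the between-satellite (i, j) single differences."""
    return (p_a_i - p_b_i) - (p_a_j - p_b_j)

# toy pseudoranges; adding a common clock offset to receiver A cancels out
dd_clean = double_difference(20_000_000.0, 21_500_000.0,
                             20_000_300.0, 21_500_250.0)
dd_clock = double_difference(20_000_000.0 + 123.4, 21_500_000.0 + 123.4,
                             20_000_300.0, 21_500_250.0)
```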

  1. Assessment of relative accuracy in the determination of organic matter concentrations in aquatic systems

    USGS Publications Warehouse

    Aiken, G.; Kaplan, L.A.; Weishaar, J.

    2002-01-01

    Accurate determinations of total (TOC), dissolved (DOC) and particulate (POC) organic carbon concentrations are critical for understanding the geochemical, environmental, and ecological roles of aquatic organic matter. Of particular significance for the drinking water industry, TOC measurements are the basis for compliance with US EPA regulations. The results of an interlaboratory comparison designed to identify problems associated with the determination of organic matter concentrations in drinking water supplies are presented. The study involved 31 laboratories and a variety of commercially available analytical instruments. All participating laboratories performed well on samples of potassium hydrogen phthalate (KHP), a compound commonly used as a standard in carbon analysis. However, problems associated with the oxidation of difficult-to-oxidize compounds, such as dodecylbenzene sulfonic acid and caffeine, were noted. Humic substances posed fewer problems for analysts. Particulate organic matter (POM) in the form of polystyrene beads, freeze-dried bacteria and pulverized leaf material was the most difficult for all analysts, with a wide range of performances reported. The POM results indicate that the methods surveyed in this study are inappropriate for the accurate determination of POC and TOC concentrations. Finally, several analysts had difficulty in efficiently separating inorganic carbon from KHP solutions, thereby biasing DOC results.

  2. An assessment of coefficient accuracy in linear regression models with spatially varying coefficients

    NASA Astrophysics Data System (ADS)

    Wheeler, David C.; Calder, Catherine A.

    2007-06-01

    The realization in the statistical and geographical sciences that a relationship between an explanatory variable and a response variable in a linear regression model is not always constant across a study area has led to the development of regression models that allow for spatially varying coefficients. Two competing models of this type are geographically weighted regression (GWR) and Bayesian regression models with spatially varying coefficient processes (SVCP). In the application of these spatially varying coefficient models, marginal inference on the regression coefficient spatial processes is typically of primary interest. In light of this fact, there is a need to assess the validity of such marginal inferences, since these inferences may be misleading in the presence of explanatory variable collinearity. In this paper, we present the results of a simulation study designed to evaluate the sensitivity of the spatially varying coefficients in the competing models to various levels of collinearity. The simulation study results show that the Bayesian regression model produces more accurate inferences on the regression coefficients than does GWR. In addition, the Bayesian regression model is overall fairly robust in terms of marginal coefficient inference to moderate levels of collinearity, and degrades less substantially than GWR with strong collinearity.
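    GWR's local fit is, at each location, a weighted least-squares regression with a distance-decay kernel; a minimal sketch for one location with an intercept and a single covariate (synthetic, perfectly linear data, so the local fit recovers the global coefficients):

```python
import math

def gwr_fit_at(u, coords, x, y, bandwidth):
    """Geographically weighted regression at location u: weighted least
    squares with a Gaussian kernel on distance (intercept + slope only)."""
    w = [math.exp(-0.5 * ((u[0] - cx) ** 2 + (u[1] - cy) ** 2) / bandwidth ** 2)
         for cx, cy in coords]
    sw   = sum(w)
    swx  = sum(wi * xi for wi, xi in zip(w, x))
    swy  = sum(wi * yi for wi, yi in zip(w, y))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    slope = (sw * swxy - swx * swy) / (sw * swxx - swx * swx)
    intercept = (swy - slope * swx) / sw
    return intercept, slope

coords = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 5.0, 7.0]          # y = 1 + 2x exactly
b0, b1 = gwr_fit_at((0.0, 0.0), coords, x, y, bandwidth=1.0)
```

    With collinear covariates the local weighted design matrix becomes near-singular, which is exactly the instability the simulation study probes.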

  3. Human papillomavirus testing by self-sampling: assessment of accuracy in an unsupervised clinical setting

    PubMed Central

    Szarewski, Anne; Cadman, Louise; Mallett, Susan; Austin, Janet; Londesborough, Philip; Waller, Jo; Wardle, Jane; Altman, Douglas G; Cuzick, Jack

    2007-01-01

    Objectives: To compare the performance and acceptability of unsupervised self-sampling with clinician sampling for high-risk human papillomavirus (HPV) types for the first time in a UK screening setting. Setting: Nine hundred and twenty women, from two demographically different centres, attending for routine cervical smear testing. Methods: Women performed an unsupervised HPV self-test. Immediately afterwards, a doctor or nurse took an HPV test and cervical smear. Women with an abnormality on any test were offered colposcopy. Results: Twenty-one high-grade and 39 low-grade cervical intraepithelial neoplasias (CINs) were detected. The sensitivity for high-grade disease (CIN2+) for the self HPV test was 81% (95% confidence interval [CI] 60–92), clinician HPV test 100% (95% CI 85–100), cytology 81% (95% CI 60–92). The sensitivity of both HPV tests to detect high- and low-grade cervical neoplasia was much higher than that of cytology (self-test 77% [95% CI 65–86], clinician test 80% [95% CI 68–88], cytology 48% [95% CI 36–61]). For both high-grade alone, and high and low grades together, the specificity was significantly higher for cytology (greater than 95%) than either HPV test (between 82% and 87%). The self-test proved highly acceptable to women and they reported that the instructions were easy to understand irrespective of educational level. Conclusions: Our results suggest that it would be reasonable to offer HPV self-testing to women who are reluctant to attend for cervical smears. This approach should now be directly evaluated among women who have been non-attenders in a cervical screening programme. PMID:17362570
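    Sensitivity figures with confidence intervals like those above can be reproduced with a Wilson score interval; assuming the self-test's 81% corresponds to 17 of the 21 high-grade lesions (an inference, not stated in the abstract):

```python
import math

def sensitivity_with_ci(tp, fn, z=1.96):
    """Sensitivity with a Wilson score 95% confidence interval."""
    n = tp + fn
    p = tp / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return p, centre - half, centre + half

sens, lo, hi = sensitivity_with_ci(17, 4)   # 17/21 detected -> ~81% (60-92)
```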

  4. Ground-based differential absorption lidar for water-vapor profiling: assessment of accuracy, resolution, and meteorological applications.

    PubMed

    Wulfmeyer, V; Bösenberg, J

    1998-06-20

    The accuracy and the resolution of water-vapor measurements by use of the ground-based differential absorption lidar (DIAL) system of the Max-Planck-Institute (MPI) are determined. A theoretical analysis, intercomparisons with radiosondes, and measurements in high-altitude clouds allow the conclusion that, with the MPI DIAL system, water-vapor measurements with a systematic error of <5% in the whole troposphere can be performed. Special emphasis is laid on the outstanding daytime and nighttime performance of the DIAL system in the lower troposphere. With a time resolution of 1 min, the statistical error varies between 0.05 g/m³ in the near range (75-m vertical resolution) and, depending on the meteorological conditions, approximately 0.25 g/m³ at 2 km (150-m vertical resolution). When the eddy correlation method is applied, this accuracy and resolution are sufficient to determine water-vapor flux profiles in the convective boundary layer with a statistical error of <10% in each data point up to approximately 1700 m. The results have contributed to the fact that the DIAL method has finally won recognition as an excellent tool for tropospheric research, in particular for boundary layer research and as a calibration standard for radiosondes and satellites. PMID:18273352
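    The eddy correlation (eddy covariance) method mentioned above computes the kinematic flux as the mean product of the fluctuations of vertical wind and humidity about their time means; a toy sketch with invented values:

```python
import statistics

def eddy_covariance_flux(w, q):
    """Kinematic water-vapor flux by the eddy-correlation method:
    mean of w'q', the product of fluctuations about the time means."""
    wbar = statistics.fmean(w)
    qbar = statistics.fmean(q)
    return statistics.fmean((wi - wbar) * (qi - qbar) for wi, qi in zip(w, q))

w = [0.5, -0.3, 0.2, -0.4]   # m/s, vertical wind fluctuations (toy)
q = [8.2, 7.9, 8.1, 7.8]     # g/m3, absolute humidity (toy)
flux = eddy_covariance_flux(w, q)
```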

  6. Performance enhancement of low-cost, high-accuracy, state estimation for vehicle collision prevention system using ANFIS

    NASA Astrophysics Data System (ADS)

    Saadeddin, Kamal; Abdel-Hafez, Mamoun F.; Jaradat, Mohammad A.; Jarrah, Mohammad Amin

    2013-12-01

    In this paper, a low-cost navigation system that fuses the measurements of the inertial navigation system (INS) and the global positioning system (GPS) receiver is developed. First, the system's dynamics are obtained based on a vehicle's kinematic model. Second, the INS and GPS measurements are fused using an extended Kalman filter (EKF) approach. Subsequently, an artificial intelligence based approach for the fusion of INS/GPS measurements is developed based on an Input-Delayed Adaptive Neuro-Fuzzy Inference System (IDANFIS). Experimental tests are conducted to demonstrate the performance of the two sensor fusion approaches. It is found that the use of the proposed IDANFIS approach achieves a reduction in the integration development time and an improvement in the estimation accuracy of the vehicle's position and velocity compared to the EKF based approach.
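    A minimal 1-D sketch of the EKF-style INS/GPS fusion described above. The paper's filter is multi-dimensional and built on a full vehicle kinematic model, so the state layout, noise values, and measurement setup here are illustrative assumptions:

    ```python
    # State x = [position, velocity]; the INS supplies acceleration for the
    # prediction step, the GPS supplies a noisy position fix for correction.
    def kf_step(x, P, accel, z_gps, dt=0.1, q=0.05, r=4.0):
        # Predict with the measured acceleration (F = [[1, dt], [0, 1]]).
        px = x[0] + x[1] * dt + 0.5 * accel * dt * dt
        vx = x[1] + accel * dt
        # Covariance propagation F P F^T plus diagonal process noise q.
        P00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q
        P01 = P[0][1] + dt * P[1][1]
        P10 = P[1][0] + dt * P[1][1]
        P11 = P[1][1] + q
        # Correct with the GPS position fix (H = [1, 0], variance r).
        S = P00 + r
        K0, K1 = P00 / S, P10 / S
        innov = z_gps - px
        x_new = [px + K0 * innov, vx + K1 * innov]
        P_new = [[(1 - K0) * P00, (1 - K0) * P01],
                 [P10 - K1 * P00, P11 - K1 * P01]]
        return x_new, P_new

    # Vehicle accelerating at 1 m/s^2; GPS fixes are unbiased but noisy.
    x, P = [0.0, 0.0], [[10.0, 0.0], [0.0, 10.0]]
    truth_p, truth_v = 0.0, 0.0
    for k in range(100):
        truth_p += truth_v * 0.1 + 0.5 * 1.0 * 0.01
        truth_v += 1.0 * 0.1
        gps = truth_p + (0.5 if k % 2 else -0.5)  # deterministic "noise"
        x, P = kf_step(x, P, 1.0, gps)
    ```

    The IDANFIS approach in the paper replaces this hand-derived filter with a trained neuro-fuzzy mapping, which is where the reported reduction in integration development time comes from.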

  7. Accuracy and Utility of Deformable Image Registration in ⁶⁸Ga 4D PET/CT Assessment of Pulmonary Perfusion Changes During and After Lung Radiation Therapy

    SciTech Connect

    Hardcastle, Nicholas; Hofman, Michael S.; Hicks, Rodney J.; Callahan, Jason; Kron, Tomas; MacManus, Michael P.; Ball, David L.; Jackson, Price; Siva, Shankar

    2015-09-01

    Purpose: Measuring changes in lung perfusion resulting from radiation therapy dose requires registration of the functional imaging to the radiation therapy treatment planning scan. This study investigates registration accuracy and utility for positron emission tomography (PET)/computed tomography (CT) perfusion imaging in radiation therapy for non–small cell lung cancer. Methods: ⁶⁸Ga 4-dimensional PET/CT ventilation-perfusion imaging was performed before, during, and after radiation therapy for 5 patients. Rigid registration and deformable image registration (DIR) using B-splines and Demons algorithms were performed with the CT data to obtain a deformation map between the functional images and planning CT. Contour propagation accuracy and correspondence of anatomic features were used to assess registration accuracy. The Wilcoxon signed-rank test was used to determine statistical significance. Changes in lung perfusion resulting from radiation therapy dose were calculated for each registration method for each patient and averaged over all patients. Results: With B-splines/Demons DIR, the median distance to agreement between lung contours decreased modestly by 0.9/1.1 mm, 1.3/1.6 mm, and 1.3/1.6 mm for pretreatment, midtreatment, and posttreatment (P<.01 for all), and the median Dice score between lung contours improved by 0.04/0.04, 0.05/0.05, and 0.05/0.05 for pretreatment, midtreatment, and posttreatment (P<.001 for all). The distance between anatomic features decreased with DIR by a median of 2.5 mm and 2.8 mm for the pretreatment and midtreatment time points, respectively (P=.001), and by 1.4 mm for posttreatment (P>.2). Poorer posttreatment results were likely caused by posttreatment pneumonitis and tumor regression. Up to 80% standardized uptake value loss in perfusion scans was observed. There was limited change in the loss in lung perfusion between registration methods; however, Demons resulted in larger interpatient variation compared with rigid and B-splines registration.

  8. Constraining OCT with Knowledge of Device Design Enables High Accuracy Hemodynamic Assessment of Endovascular Implants

    PubMed Central

    Brown, Jonathan; Lopes, Augusto C.; Kunio, Mie; Kolachalama, Vijaya B.; Edelman, Elazer R.

    2016-01-01

    Background Stacking cross-sectional intravascular images permits three-dimensional rendering of endovascular implants, yet introduces between-frame uncertainties that limit characterization of device placement and the hemodynamic microenvironment. In a porcine coronary stent model, we demonstrate enhanced OCT reconstruction with preservation of between-frame features through fusion with angiography and a priori knowledge of stent design. Methods and Results Strut positions were extracted from sequential OCT frames. Reconstruction with standard interpolation generated discontinuous stent structures. By computationally constraining interpolation to known stent skeletons fitted to 3D ‘clouds’ of OCT-Angio-derived struts, implant anatomy was resolved, accurately rendering features from implant diameter and curvature (n = 1 vessels, r2 = 0.91, 0.90, respectively) to individual strut-wall configurations (average displacement error ~15 μm). This framework facilitated hemodynamic simulation (n = 1 vessel), showing the critical importance of accurate anatomic rendering in characterizing both quantitative and basic qualitative flow patterns. Discontinuities with standard approaches systematically introduced noise and bias, poorly capturing regional flow effects. In contrast, the enhanced method preserved multi-scale (local strut to regional stent) flow interactions, demonstrating the impact of regional contexts in defining the hemodynamic consequence of local deployment errors. Conclusion Fusion of planar angiography and knowledge of device design permits enhanced OCT image analysis of in situ tissue-device interactions. Given emerging interests in simulation-derived hemodynamic assessment as surrogate measures of biological risk, such fused modalities offer a new window into patient-specific implant environments. PMID:26906566

  9. Usefulness of the jump-and-reach test in assessment of vertical jump performance.

    PubMed

    Menzel, Hans-Joachim; Chagas, Mauro H; Szmuchrowski, Leszek A; Araujo, Silvia R; Campos, Carlos E; Giannetti, Marcus R

    2010-02-01

    The objective was to estimate the reliability and criterion-related validity of the Jump-and-Reach Test for the assessment of squat, countermovement, and drop jump performance of 32 male Brazilian professional volleyball players. Performance of squat, countermovement, and drop jumps with different dropping heights was assessed with both the Jump-and-Reach Test and flight-time measurement, and the two methods were compared across the different jump trials. The very high reliability coefficients of both assessment methods and the lower correlation coefficients between their scores indicate high consistency within each method but only moderate covariation, meaning that they measure partly different constructs. As a consequence, the Jump-and-Reach Test has good ecological validity in situations where reaching height during the flight phase is critical for performance (e.g., basketball and volleyball) but only limited accuracy for the assessment of vertical impulse production with different jump techniques and conditions.
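    The flight-time measurement referenced above converts flight time t to jump height via ballistic motion: takeoff speed v gives t = 2v/g, so the peak height is h = v²/(2g) = g·t²/8 (assuming equal takeoff and landing posture). A small sketch:

    ```python
    G = 9.81  # gravitational acceleration, m/s^2

    def jump_height_from_flight_time(t):
        """Vertical jump height (m) from measured flight time t (s)."""
        return G * t * t / 8.0

    # A 0.6 s flight corresponds to a jump of roughly 44 cm.
    h = jump_height_from_flight_time(0.6)
    ```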

  10. Accuracy assessment of land cover dynamic in hill land on integration of DEM data and TM image

    NASA Astrophysics Data System (ADS)

    Li, Yunmei; Wang, Xin; Wang, Qiao; Wu, Chuanqing; Huang, Jiazhu

    2010-04-01

    To accurately assess the area of land cover in hill land, we integrated DEM data and remote sensing imagery in the Lihe River Valley, China. First, the DEM data were incorporated into a decision tree to increase the accuracy of land cover classification. Second, a slope correction model was built to convert projected area to surface area using the DEM data. Finally, the area of each land cover class was calculated, and land cover dynamics in the Lihe River Valley were analyzed from 1998 to 2003. The results show that the area of forestland increased by more than 10% under the slope correction model, which indicates that area correction is very important in hill land, and that the classification accuracy, especially for forestland and garden plots, is enhanced by integrating DEM data and can exceed 85%. The indexes of land use extent were 266.2 in 1998, 273.1 in 2001, and 276.7 in 2003; the change rates of land use extent were 2.59 during 1998-2001 and 1.34 during 2001-2003.
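    The slope correction described above amounts to dividing each pixel's projected (map) area by the cosine of its DEM-derived slope angle. A sketch (function name and numbers are illustrative, not the paper's model):

    ```python
    import math

    def surface_area(projected_area_m2, slope_deg):
        """True surface area of a pixel from its projected area and slope."""
        return projected_area_m2 / math.cos(math.radians(slope_deg))

    # A 30 m x 30 m TM pixel on a 25-degree hillside: ~993 m^2, about 10%
    # more than its projected area, consistent with the size of the
    # forestland-area increase reported in the abstract.
    a = surface_area(900.0, 25.0)
    ```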

  11. Assessing the prediction accuracy of cure in the Cox proportional hazards cure model: an application to breast cancer data.

    PubMed

    Asano, Junichi; Hirakawa, Akihiro; Hamada, Chikuma

    2014-01-01

    A cure rate model is a survival model incorporating the cure rate, under the assumption that the population contains both uncured and cured individuals. It is a powerful statistical tool for prognostic studies, especially in cancer, and the cure rate is important for making treatment decisions in clinical practice. The proportional hazards (PH) cure model can predict the cure rate for each patient; it contains a logistic regression component for the cure rate and a Cox regression component to estimate the hazard for uncured patients. A measure for quantifying the predictive accuracy of the cure rate estimated by the Cox PH cure model is required, as there has been a lack of previous research in this area. We applied the Cox PH cure model to breast cancer data; however, the area under the receiver operating characteristic curve (AUC) could not be estimated because many patients were censored. In this study, we used imputation-based AUCs to assess the predictive accuracy of the cure rate from the PH cure model and examined their precision using simulation studies. The results demonstrated that the imputation-based AUCs were estimable and their biases were negligibly small in many cases, even though the ordinary AUC could not be estimated. Additionally, we introduced a bias-correction method for the imputation-based AUCs and found that the bias-corrected estimate successfully compensated for the overestimation in the simulation studies. We also illustrated the estimation of the imputation-based AUCs using the breast cancer data.

  12. Topographic accuracy assessment of bare earth lidar-derived unstructured meshes

    NASA Astrophysics Data System (ADS)

    Bilskie, Matthew V.; Hagen, Scott C.

    2013-02-01

    This study is focused on the integration of bare earth lidar (Light Detection and Ranging) data into unstructured (triangular) finite element meshes and the implications for simulating storm surge inundation using a shallow water equations model. A methodology is developed to compute the root mean square error (RMSE) and the 95th percentile of vertical elevation errors using four different interpolation methods (linear, inverse distance weighted, natural neighbor, and cell averaging) to resample bare earth lidar and lidar-derived digital elevation models (DEMs) onto unstructured meshes at different resolutions. The results are consolidated into a table of optimal interpolation methods that minimize the vertical elevation error of an unstructured mesh for a given mesh node density. The cell averaging method performed most accurately when DEM grid cells within 0.25 times the ratio of local element size to DEM cell size were averaged. The methodology is applied to simulate inundation extent and maximum water levels in southern Mississippi due to Hurricane Katrina, which illustrates that local changes in topography, such as adjusting element size and interpolation method, drastically alter simulated storm surge both locally and non-locally. The methods and results presented have utility and implications for any modeling application that uses bare earth lidar.
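    The two error metrics used in the study, RMSE and the 95th percentile of vertical elevation error, can be sketched as follows (the nearest-rank percentile convention and the sample errors are assumptions for illustration):

    ```python
    import math

    def rmse(errors):
        """Root mean square of vertical elevation errors (m)."""
        return math.sqrt(sum(e * e for e in errors) / len(errors))

    def percentile_95(errors):
        """95th percentile of absolute elevation error (nearest-rank)."""
        ranked = sorted(abs(e) for e in errors)
        k = max(0, math.ceil(0.95 * len(ranked)) - 1)
        return ranked[k]

    # Mesh-node elevation minus lidar ground truth (m), made-up values:
    errors = [0.1, -0.2, 0.05, 0.3, -0.15, 0.25, -0.05, 0.4, 0.0, -0.1]
    r = rmse(errors)
    p95 = percentile_95(errors)
    ```

    In the study these two numbers would be computed per interpolation method and mesh resolution, and the method minimizing them tabulated for each node density.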

  13. A geostatistical methodology to assess the accuracy of unsaturated flow models

    SciTech Connect

    Smoot, J.L.; Williams, R.E.

    1996-04-01

    The Pacific Northwest National Laboratory (PNNL) has developed a Hydrologic Evaluation Methodology (HEM) to assist the U.S. Nuclear Regulatory Commission in evaluating the potential that infiltrating meteoric water will produce leachate at commercial low-level radioactive waste disposal sites. Two key issues are raised in the HEM: (1) evaluation of mathematical models that predict facility performance, and (2) estimation of the uncertainty associated with these mathematical model predictions. The technical objective of this research is to adapt geostatistical tools commonly used for model parameter estimation to the problem of estimating the spatial distribution of the dependent variable to be calculated by the model. To fulfill this objective, a database describing the spatiotemporal movement of water injected into unsaturated sediments at the Hanford Site in Washington State was used to develop a new method for evaluating mathematical model predictions. Measured water content data were interpolated geostatistically to a 16 x 16 x 36 grid at several time intervals. Then a mathematical model was used to predict water content at the same grid locations at the selected times. Node-by-node comparison of the mathematical model predictions with the geostatistically interpolated values was conducted. The method facilitates a complete accounting and categorization of model error at every node. The comparison suggests that model results generally are within measurement error. The worst model error occurs in silt lenses and is in excess of measurement error.
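    The node-by-node comparison described above can be sketched as a simple error accounting that flags grid nodes whose model-minus-interpolated difference exceeds measurement error (threshold and data values are illustrative, not from the study):

    ```python
    def categorize_errors(predicted, interpolated, measurement_error=0.02):
        """Split node indices into those within and beyond measurement error."""
        within, exceeds = [], []
        for i, (p, m) in enumerate(zip(predicted, interpolated)):
            (within if abs(p - m) <= measurement_error else exceeds).append(i)
        return within, exceeds

    # Volumetric water content at four grid nodes (made-up values):
    pred = [0.10, 0.12, 0.30, 0.25]   # model predictions
    meas = [0.11, 0.12, 0.24, 0.26]   # geostatistical interpolation
    within, exceeds = categorize_errors(pred, meas)  # node 2 flagged
    ```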

  14. Assessing the accuracy of the isotropic periodic sum method through Madelung energy computation

    NASA Astrophysics Data System (ADS)

    Ojeda-May, Pedro; Pu, Jingzhi

    2014-04-01

    We tested the isotropic periodic sum (IPS) method for computing Madelung energies of ionic crystals. The performance of the method, both in its nonpolar (IPSn) and polar (IPSp) forms, was compared with that of the zero-charge and Wolf potentials [D. Wolf, P. Keblinski, S. R. Phillpot, and J. Eggebrecht, J. Chem. Phys. 110, 8254 (1999)]. The results show that the IPSn and IPSp methods converge the Madelung energy to its reference value with an average deviation of ~10⁻⁴ and ~10⁻⁷ energy units, respectively, for a cutoff range of 18-24a (a/2 being the nearest-neighbor ion separation). However, minor oscillations were detected for the IPS methods when deviations of the computed Madelung energies were plotted on a logarithmic scale as a function of the cutoff distance. To remove such oscillations, we introduced a modified IPSn potential in which both the local-region and long-range electrostatic terms are damped, in analogy to the Wolf potential. With the damped-IPSn potential, a smoother convergence was achieved. In addition, we observed a better agreement between the damped-IPSn and IPSp methods, which suggests that damping the IPSn potential is in effect similar to adding a screening potential in IPSp.
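    Not the IPS method itself, but an illustration of the convergence problem it addresses: the direct Madelung sum for a 1-D chain of alternating unit charges oscillates slowly toward its limit 2 ln 2 as the cutoff grows, which is why damped or periodic-sum schemes are used instead:

    ```python
    import math

    def madelung_1d(n_terms):
        """Direct (truncated) Madelung sum for a 1-D alternating chain:
        M(N) = 2 * sum_{k=1..N} (-1)^(k+1) / k, limit 2*ln(2)."""
        return 2.0 * sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

    exact = 2.0 * math.log(2.0)
    m_small = madelung_1d(10)     # noticeably off the limit
    m_large = madelung_1d(10000)  # much closer, but still oscillating
    ```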

  15. Assessing the accuracy of some popular DFT methods for computing harmonic vibrational frequencies of water clusters

    NASA Astrophysics Data System (ADS)

    Howard, J. Coleman; Enyard, Jordan D.; Tschumper, Gregory S.

    2015-12-01

    A wide range of density functional theory (DFT) methods (37 altogether), including pure, hybrid, range-separated hybrid, double-hybrid, and dispersion-corrected functionals, have been employed to compute the harmonic vibrational frequencies of eight small water clusters ranging in size from the dimer to four different isomers of the hexamer. These computed harmonic frequencies have been carefully compared to recently published benchmark values that are expected to be very close to the CCSD(T) complete basis set limit. Of the DFT methods examined here, ωB97 and ωB97X are the most consistently accurate, deviating from the reference values by less than 20 cm⁻¹ on average and never more than 60 cm⁻¹. The performance of double-hybrid methods including B2PLYP and mPW2-PLYP is only slightly better than more economical approaches, such as the M06-L pure functional and the M06-2X hybrid functional. Additionally, dispersion corrections offer very little improvement in computed frequencies.

  16. A multicentre evaluation of the accuracy and performance of IP-10 for the diagnosis of infection with M. tuberculosis.

    PubMed

    Ruhwald, Morten; Dominguez, Jose; Latorre, Irene; Losi, Monica; Richeldi, Luca; Pasticci, Maria Bruna; Mazzolla, Rosanna; Goletti, Delia; Butera, Ornella; Bruchfeld, Judith; Gaines, Hans; Gerogianni, Irini; Tuuminen, Tamara; Ferrara, Giovanni; Eugen-Olsen, Jesper; Ravn, Pernille

    2011-05-01

    IP-10 has potential as a diagnostic marker for infection with Mycobacterium tuberculosis, with comparable accuracy to the QuantiFERON-TB Gold In-Tube test (QFT-IT). The aims were to assess the sensitivity and specificity of IP-10, and to evaluate the impact of co-morbidity on IP-10 and QFT-IT. 168 cases with active TB, 101 healthy controls and 175 non-TB patients were included. IP-10 and IFN-γ were measured in plasma of QFT-IT stimulated whole blood and analyzed using previously determined algorithms. A subgroup of 48 patients and 70 healthy controls was tested in parallel with T-SPOT.TB. IP-10 and QFT-IT had comparable accuracy: sensitivity was 81% and 84% with a specificity of 97% and 100%, respectively. Combining IP-10 and QFT-IT improved sensitivity to 87% (p < 0.0005), with a specificity of 97%. T-SPOT.TB was more sensitive than QFT-IT, but not IP-10. Among non-TB patients IP-10 had a higher rate of positive responders (35% vs 27%, p < 0.02), and for both tests a positive response was associated with relevant risk factors. IFN-γ but not IP-10 responses to mitogen stimulation were reduced in patients with TB and non-TB infection. This study confirms and validates previous findings and adds substance to IP-10 as a novel diagnostic marker for infection with M. tuberculosis. IP-10 appeared less influenced by infections other than TB; further studies are needed to test the clinical impact of these findings.

  17. An epidemiologic critique of current microbial risk assessment practices: the importance of prevalence and test accuracy data.

    PubMed

    Gardner, Ian A

    2004-09-01

    Data deficiencies are impeding the development and validation of microbial risk assessment models. One such deficiency is the failure to adjust test-based (apparent) prevalence estimates to true prevalence estimates by correcting for the imperfect accuracy of tests that are used. Such adjustments will facilitate comparability of data from different populations and from the same population over time as tests change and the unbiased quantification of effects of mitigation strategies. True prevalence can be estimated from apparent prevalence using frequentist and Bayesian methods, but the latter are more flexible and can incorporate uncertainty in test accuracy and prior prevalence data. Both approaches can be used for single or multiple populations, but the Bayesian approach can better deal with clustered data, inferences for rare events, and uncertainty in multiple variables. Examples of prevalence inferences based on results of Salmonella culture are presented. The opportunity to adjust test-based prevalence estimates is predicated on the availability of sensitivity and specificity estimates. These estimates can be obtained from studies using archived gold standard (reference) samples, by screening with the new test and follow-up of test-positive and test-negative samples with a gold standard test, and by use of latent class methods, which make no assumptions about the true status of each sampling unit. Latent class analysis can be done with maximum likelihood and Bayesian methods, and an example of their use in the evaluation of tests for Toxoplasma gondii in pigs is presented. Guidelines are proposed for more transparent incorporation of test data into microbial risk assessments.
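    The apparent-to-true prevalence adjustment discussed above is commonly performed with the Rogan-Gladen estimator, true = (apparent + Sp - 1) / (Se + Sp - 1). A frequentist sketch (the Bayesian variants mentioned in the abstract instead place priors on Se, Sp, and prevalence; the numbers here are illustrative, not from the Salmonella example):

    ```python
    def true_prevalence(apparent, sensitivity, specificity):
        """Rogan-Gladen correction of a test-based prevalence estimate,
        truncated to the admissible range [0, 1]."""
        tp = (apparent + specificity - 1.0) / (sensitivity + specificity - 1.0)
        return min(1.0, max(0.0, tp))

    # 12% of samples test positive with an imperfect culture method:
    p = true_prevalence(0.12, sensitivity=0.80, specificity=0.98)
    # With Se = 0.80 and Sp = 0.98, the corrected prevalence is ~12.8%.
    ```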

  18. Deriving bio-equivalents from in vitro bioassays: assessment of existing uncertainties and strategies to improve accuracy and reporting.

    PubMed

    Wagner, Martin; Vermeirssen, Etiënne L M; Buchinger, Sebastian; Behr, Maximilian; Magdeburg, Axel; Oehlmann, Jörg

    2013-08-01

    Bio-equivalents (e.g., 17β-estradiol or dioxin equivalents) are commonly employed to quantify the in vitro effects of complex human or environmental samples. However, there is no generally accepted data analysis strategy for estimating and reporting bio-equivalents. Therefore, the aims of the present study are to 1) identify common mathematical models for the derivation of bio-equivalents from the literature, 2) assess the ability of those models to correctly predict bio-equivalents, and 3) propose measures to reduce uncertainty in their calculation and reporting. We compiled a database of 234 publications that report bio-equivalents. From the database, we extracted 3 data analysis strategies commonly used to estimate bio-equivalents. These models are based on linear or nonlinear interpolation, and the comparison of effect concentrations (ECx). To assess their accuracy, we employed simulated data sets in different scenarios. The results indicate that all models lead to a considerable misestimation of bio-equivalents if certain mathematical assumptions (e.g., goodness of fit, parallelism of dose-response curves) are violated. However, nonlinear interpolation is most suitable to predict bio-equivalents from single-point estimates. Regardless of the model, subsequent linear extrapolation of bio-equivalents generates additional inaccuracy if the prerequisite of parallel dose-response curves is not met. When all these factors are taken into consideration, it becomes clear that data analysis introduces considerable uncertainty in the derived bio-equivalents. To improve accuracy and transparency of bio-equivalents, we propose a novel data analysis strategy and a checklist for reporting Minimum Information about Bio-equivalent ESTimates (MIBEST).
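    The interpolation-based derivation of bio-equivalents can be sketched as reading a sample's response off the standard's dose-response data to obtain the equivalent standard concentration. This uses linear interpolation between bracketing points; the standard-curve data are invented for illustration, and a real curve is sigmoidal, which is exactly the source of misestimation the study quantifies:

    ```python
    def bioequivalent(sample_response, std_concs, std_responses):
        """Interpolate the standard concentration producing sample_response.
        std_concs/std_responses must be sorted by increasing response."""
        pairs = list(zip(std_concs, std_responses))
        for (c0, r0), (c1, r1) in zip(pairs, pairs[1:]):
            if r0 <= sample_response <= r1:
                f = (sample_response - r0) / (r1 - r0)
                return c0 + f * (c1 - c0)
        raise ValueError("response outside calibrated range")

    # 17beta-estradiol standard curve (pg/mL vs. relative response):
    concs = [1.0, 3.0, 10.0, 30.0]
    resps = [0.10, 0.30, 0.60, 0.90]
    eeq = bioequivalent(0.45, concs, resps)  # between 3 and 10 pg/mL
    ```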

  19. High-Capacity Communications from Martian Distances Part 4: Assessment of Spacecraft Pointing Accuracy Capabilities Required For Large Ka-Band Reflector Antennas

    NASA Technical Reports Server (NTRS)

    Hodges, Richard E.; Sands, O. Scott; Huang, John; Bassily, Samir

    2006-01-01

    Improved surface accuracy for deployable reflectors has brought with it the possibility of Ka-band reflector antennas with extents on the order of 1000 wavelengths. Such antennas are being considered for high-rate data delivery from planetary distances. To maintain losses at reasonable levels requires a sufficiently capable Attitude Determination and Control System (ADCS) onboard the spacecraft. This paper provides an assessment of currently available ADCS strategies and performance levels. In addition to other issues, specific factors considered include: (1) use of "beaconless" or open loop tracking versus use of a beacon on the Earth side of the link, and (2) selection of fine pointing strategy (body-fixed/spacecraft pointing, reflector pointing or various forms of electronic beam steering). Capabilities of recent spacecraft are discussed.

  20. A diagnostic suite to assess NWP performance

    NASA Astrophysics Data System (ADS)

    Koh, T.-Y.; Wang, S.; Bhatt, B. C.

    2012-07-01

    A suite of numerical weather prediction (NWP) verification diagnostics applicable to both scalar and vector variables is developed, highlighting the normalization and successive decomposition of model errors. The normalized root-mean square error (NRMSE) is broken down into contributions from the normalized bias (NBias) and the normalized pattern error (NPE). The square of NPE, or the normalized error variance α, is further analyzed into phase and amplitude errors, measured respectively by the correlation and the variance similarity. The variance similarity diagnostic is introduced to verify variability, e.g. under different climates. While centered RMSE can be reduced by under-prediction of variability in the model, α penalizes over- and under-prediction of variability equally. The error decomposition diagram, the correlation-similarity diagram and the anisotropy diagram are introduced. The correlation-similarity diagram was compared with the Taylor diagram: it has the advantage of analyzing the normalized error variance geometrically into contributions from the correlation and variance similarity. Normalization of the error metrics removes the dependence on the inherent variability of a variable and allows comparison among quantities of different physical units and from different regions and seasons. This method was used to assess the Coupled Ocean/Atmospheric Mesoscale Prediction System (COAMPS). The NWP performance degrades progressively from the midlatitudes through the sub-tropics to the tropics. But similar cold and moist biases are noted, and position and timing errors are the main cause of pattern errors. Although the suite of metrics is applied to NWP verification here, it is generally applicable as diagnostics for differences between two data sets.
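    Under the assumption that the normalization is by the combined variance s_m² + s_o², the decomposition described above satisfies the exact identity α = 1 - ρη, where ρ is the correlation (phase error) and η = 2·s_m·s_o/(s_m² + s_o²) is the variance similarity (amplitude error). A numerical check with made-up model and observation series:

    ```python
    import math

    def stats(xs):
        """Population mean and variance of a sequence."""
        n = len(xs)
        mean = sum(xs) / n
        var = sum((x - mean) ** 2 for x in xs) / n
        return mean, var

    def decompose(model, obs):
        mm, vm = stats(model)
        mo, vo = stats(obs)
        cov = sum((m - mm) * (o - mo) for m, o in zip(model, obs)) / len(obs)
        rho = cov / math.sqrt(vm * vo)               # phase (correlation)
        eta = 2.0 * math.sqrt(vm * vo) / (vm + vo)   # variance similarity
        nbias = (mm - mo) / math.sqrt(vm + vo)       # normalized bias
        # alpha: variance of the error, normalized by the combined variance.
        alpha = stats([m - o for m, o in zip(model, obs)])[1] / (vm + vo)
        return alpha, rho, eta, nbias

    model = [1.0, 2.5, 2.0, 4.1, 3.2]
    obs = [1.2, 2.0, 2.6, 3.8, 3.5]
    alpha, rho, eta, nbias = decompose(model, obs)
    # eta <= 1 with equality only when the variances match, so over- and
    # under-predicted variability are penalized symmetrically.
    ```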

  1. Computer-aided placement of endosseous oral implants in patients after ablative tumour surgery: assessment of accuracy.

    PubMed

    Wagner, Arne; Wanschitz, Felix; Birkfellner, Wolfgang; Zauza, Konstantin; Klug, Clemens; Schicho, Kurt; Kainberger, Franz; Czerny, Christian; Bergmann, Helmar; Ewers, Rolf

    2003-06-01

    The objective of this study was to evaluate the feasibility and accuracy of a novel surgical computer-aided navigation system for the placement of endosseous implants in patients after ablative tumour surgery. Pre-operative planning was performed by developing a prosthetic concept and modifying the implant position according to surgical requirements after high-resolution computed tomography (HRCT) scans with VISIT, a surgical planning and navigation software package developed at the Vienna General Hospital. The pre-operative plan was transferred to the patients intraoperatively using the surgical navigation software and optical tracking technology. The patients were HRCT-scanned again to compare the position of the implants with the pre-operative plan on reformatted CT slices, after matching of the pre- and post-operative data sets using the mutual-information technique. A total of 32 implants were evaluated. The mean deviation was 1.1 mm (range: 0-3.5 mm). The mean angular deviation of the implants was 6.4 degrees (range: 0.4-17.4 degrees; variance: 13.3 degrees). The results demonstrate that adequate accuracy in placing endosseous oral implants can be delivered to patients even in the most difficult implantologic situations.

  2. Electrode replacement does not affect classification accuracy in dual-session use of a passive brain-computer interface for assessing cognitive workload

    PubMed Central

    Estepp, Justin R.; Christensen, James C.

    2015-01-01

    The passive brain-computer interface (pBCI) framework has been shown to be a very promising construct for assessing cognitive and affective state in both individuals and teams. There is a growing body of work that focuses on solving the challenges of transitioning pBCI systems from the research laboratory environment to practical, everyday use. An interesting issue is what impact methodological variability may have on the ability to reliably identify (neuro)physiological patterns that are useful for state assessment. This work aimed at quantifying the effects of methodological variability in a pBCI design for detecting changes in cognitive workload. Specific focus was directed toward the effects of replacing electrodes over dual sessions (thus inducing changes in placement, electromechanical properties, and/or impedance between the electrode and skin surface) on the accuracy of several machine learning approaches in a binary classification problem. In investigating these methodological variables, it was determined that the removal and replacement of the electrode suite between sessions does not impact the accuracy of a number of learning approaches when trained on one session and tested on a second. This finding was confirmed by comparing to a control group for which the electrode suite was not replaced between sessions. This result suggests that sensors (both neurological and peripheral) may be removed and replaced over the course of many interactions with a pBCI system without affecting its performance. Future work on multi-session and multi-day pBCI system use should seek to replicate this (lack of) effect between sessions in other tasks, temporal time courses, and data analytic approaches while also focusing on non-stationarity and variable classification performance due to intrinsic factors. PMID:25805963

  3. NREL Evaluates the Thermal Performance of Uninsulated Walls to Improve the Accuracy of Building Energy Simulation Tools (Fact Sheet)

    SciTech Connect

    Not Available

    2012-01-01

    This technical highlight describes NREL research to develop models of uninsulated wall assemblies that help to improve the accuracy of building energy simulation tools when modeling potential energy savings in older homes. Researchers at the National Renewable Energy Laboratory (NREL) have developed models for evaluating the thermal performance of walls in existing homes that will improve the accuracy of building energy simulation tools when predicting potential energy savings of existing homes. Uninsulated walls are typical in older homes where the wall cavities were not insulated during construction or where the insulating material has settled. Accurate calculation of heat transfer through building enclosures will help determine the benefit of energy efficiency upgrades in order to reduce energy consumption in older American homes. NREL performed detailed computational fluid dynamics (CFD) analysis to quantify the energy loss/gain through the walls and to visualize different airflow regimes within the uninsulated cavities. The effects of ambient outdoor temperature, radiative properties of building materials, and insulation level were investigated. The study showed that multi-dimensional airflows occur in walls with uninsulated cavities and that the thermal resistance is a function of the outdoor temperature - an effect not accounted for in existing building energy simulation tools. The study quantified the difference between CFD prediction and the approach currently used in building energy simulation tools over a wide range of conditions. For example, researchers found that CFD predicted lower heating loads and slightly higher cooling loads. Implementation of CFD results into building energy simulation tools such as DOE2 and EnergyPlus will likely reduce the predicted heating load of homes. Researchers also determined that a small air gap in a partially insulated cavity can lead to a significant reduction in thermal resistance. For instance, a 4-in. tall air gap

  4. Scalable Methods for Uncertainty Quantification, Data Assimilation and Target Accuracy Assessment for Multi-Physics Advanced Simulation of Light Water Reactors

    NASA Astrophysics Data System (ADS)

    Khuwaileh, Bassam

    High fidelity simulation of nuclear reactors entails large scale applications characterized by high dimensionality and tremendous complexity, where various physics models are integrated in the form of coupled models (e.g., neutronics with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors, achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing Uncertainty Quantification (UQ), Data Assimilation (DA), and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts in adaptive core simulation and reduced order modeling algorithms and extends them toward coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved by identifying the important/influential degrees of freedom (DoF) via subspace analysis, such that the required analysis can be recast in terms of the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. The reduced subspace is then used to solve realistic, large scale forward (UQ) and inverse (DA and TAA) problems. Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL

  5. Positional Accuracy Assessment of the Openstreetmap Buildings Layer Through Automatic Homologous Pairs Detection: the Method and a Case Study

    NASA Astrophysics Data System (ADS)

    Brovelli, M. A.; Minghini, M.; Molinari, M. E.; Zamboni, G.

    2016-06-01

    OpenStreetMap (OSM) is currently the largest openly licensed collection of geospatial data. As OSM is increasingly exploited in a variety of applications, research has paid great attention to the assessment of its quality. This work focuses on assessing the quality of OSM buildings. While most of the studies available in the literature are limited to the evaluation of OSM building completeness, this work proposes an original approach to assess the positional accuracy of OSM buildings based on comparison with a reference dataset. The comparison relies on a quasi-automated detection of homologous pairs in the two datasets. Based on the homologous pairs found, warping algorithms such as affine transformations and multi-resolution splines can be applied to the OSM buildings to generate a new version having an optimal local match to the reference layer. A quality assessment of the OSM buildings of Milan Municipality (Northern Italy), covering an area of about 180 km2, is then presented. After computing some measures of completeness, the algorithm based on homologous points is run using the building layer of the official vector cartography of Milan Municipality as the reference dataset. Approximately 100,000 homologous points are found, which show a systematic translation of about 0.4 m in both the X and Y directions and a mean distance of about 0.8 m between the datasets. Besides its efficiency and high degree of automation, the algorithm generates a warped version of the OSM buildings which, by construction having a closer match to the reference buildings, can eventually be integrated into the OSM database.
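
    The affine-warping idea can be sketched as a least-squares fit over matched point pairs. The coordinates, noise model and "true" warp below are fabricated for illustration; the paper's actual pipeline (including its multi-resolution splines) is not reproduced here:

```python
import numpy as np

# Hedged sketch: given homologous point pairs between an OSM layer and a
# reference layer, fit an affine transformation by linear least squares.

rng = np.random.default_rng(1)
osm = rng.uniform(0.0, 1000.0, size=(100, 2))    # hypothetical OSM vertices

# Simulated reference layer: a tiny rotation/scale plus a ~0.4 m systematic
# translation (as reported in the study), with 5 cm measurement noise.
theta, scale = np.deg2rad(0.05), 1.0002
R = scale * np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
ref = osm @ R.T + np.array([0.4, 0.4]) + rng.normal(0.0, 0.05, size=osm.shape)

# Least-squares affine fit: [X, Y] = [x, y, 1] @ P with P a 3x2 matrix.
design = np.hstack([osm, np.ones((len(osm), 1))])
P, *_ = np.linalg.lstsq(design, ref, rcond=None)

warped = design @ P                               # OSM points after warping
resid = np.linalg.norm(warped - ref, axis=1)
print(resid.mean())                               # ~ measurement-noise level
```

    With enough well-distributed pairs, the residual mean after warping drops to the level of the measurement noise, which is the sense in which the warped layer "matches" the reference.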

  6. Accuracy assessment on the analysis of unbound drug in plasma by comparing traditional centrifugal ultrafiltration with hollow fiber centrifugal ultrafiltration and application in pharmacokinetic study.

    PubMed

    Zhang, Lin; Zhang, Zhi-Qing; Dong, Wei-Chong; Jing, Shao-Jun; Zhang, Jin-Feng; Jiang, Ye

    2013-11-29

    In the present study, an accuracy assessment of the analysis of unbound drug in plasma was made by comparing traditional centrifugal ultrafiltration (CF-UF) with hollow fiber centrifugal ultrafiltration (HFCF-UF). We used metformin (MET) as a model drug and studied the influence of centrifugation time, plasma condition and the number of freeze-thaw cycles on the ultrafiltrate volume and the related effect on the measurement of MET. Our results demonstrated that ultrafiltrate volume was a crucial factor influencing the measurement accuracy of unbound drug in plasma. For traditional CF-UF, the ultrafiltrate volume cannot be well controlled due to a series of factors. Compared with traditional CF-UF, the ultrafiltrate volume in HFCF-UF can be easily controlled by the inner capacity of the U-shaped hollow fiber inserted into the sample under sufficient centrifugal force and centrifugation time, which contributes to a more accurate measurement. Moreover, the developed HFCF-UF method was successfully applied to real plasma samples and exhibited several advantages, including high precision, an extremely low detection limit and excellent recovery. The HFCF-UF method offers highly satisfactory performance in addition to being simple and fast in pretreatment, characteristics consistent with the practicability requirements of current scientific research.

  7. Implementing Performance Assessment: Promises, Problems, and Challenges.

    ERIC Educational Resources Information Center

    Kane, Michael B., Ed.; Mitchell, Ruth, Ed.

    The chapters in this collection contribute to the debate about the value and usefulness of radically different kinds of assessments in the U.S. educational system by considering and expanding on the theoretical underpinnings of reports and speculation. The chapters are: (1) "Assessment Reform: Promises and Challenges" (Nidhi Khattri and David…

  8. Building-In Quality Rather than Assessing Quality Afterwards: A Technological Solution to Ensuring Computational Accuracy in Learning Materials

    ERIC Educational Resources Information Center

    Dunn, Peter

    2008-01-01

    Quality encompasses a very broad range of ideas in learning materials, yet the accuracy of the content is often overlooked as a measure of quality. Various aspects of accuracy are briefly considered, and the issue of computational accuracy is then considered further. When learning materials are produced containing the results of mathematical…

  9. WebRASP: a server for computing energy scores to assess the accuracy and stability of RNA 3D structures

    PubMed Central

    Norambuena, Tomas; Cares, Jorge F.; Capriotti, Emidio; Melo, Francisco

    2013-01-01

    Summary: The understanding of the biological role of RNA molecules has changed. Although it is widely accepted that RNAs play important regulatory roles without necessarily coding for proteins, the functions of many of these non-coding RNAs are unknown. Thus, determining or modeling the 3D structure of RNA molecules as well as assessing their accuracy and stability has become of great importance for characterizing their functional activity. Here, we introduce a new web application, WebRASP, that uses knowledge-based potentials for scoring RNA structures based on distance-dependent pairwise atomic interactions. This web server allows the users to upload a structure in PDB format, select several options to visualize the structure and calculate the energy profile. The server contains online help, tutorials and links to other related resources. We believe this server will be a useful tool for predicting and assessing the quality of RNA 3D structures. Availability and implementation: The web server is available at http://melolab.org/webrasp. It has been tested on the most popular web browsers and requires Java plugin for Jmol visualization. Contact: fmelo@bio.puc.cl PMID:23929030

  10. Accuracy of the third molar index for assessing the legal majority of 18 years in Turkish population.

    PubMed

    Gulsahi, Ayse; De Luca, Stefano; Cehreli, S Burcak; Tirali, R Ebru; Cameriere, Roberto

    2016-09-01

    In the last few years, forced and unregistered child marriage has increased widely in Turkey. The aim of this study was to test the accuracy of the cut-off value of 0.08 for the third molar index (I3M) in assessing the legal adult age of 18 years. Digital panoramic images of 293 Turkish children and young adults (165 girls and 128 boys), aged between 14 and 22 years, were analysed. Age distribution gradually decreases as I3M increases in both girls and boys. For girls, the sensitivity was 85.9% (95% CI 77.1-92.8%) and the specificity was 100%. The proportion of correctly classified individuals was 92.7%. For boys, the sensitivity was 94.6% (95% CI 88.1-99.8%) and the specificity was 100%. The proportion of correctly classified individuals was 97.6%. The cut-off value of 0.08 is a useful method for assessing whether or not a subject is older than 18 years of age.
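
    The reported figures follow directly from a cut-off classification. A toy illustration (fabricated data, not the study's 293 radiographs) of how sensitivity, specificity and the proportion correctly classified arise from the I3M < 0.08 decision rule:

```python
# Toy sketch of cut-off classification metrics; the I3M values and ages
# below are invented for illustration only.

def classify_adult(i3m_values, cutoff=0.08):
    """Predict 'adult' (>= 18 years) when the third molar index is below cutoff."""
    return [v < cutoff for v in i3m_values]

def diagnostic_metrics(predicted, actual_adult):
    tp = sum(p and a for p, a in zip(predicted, actual_adult))
    tn = sum(not p and not a for p, a in zip(predicted, actual_adult))
    fp = sum(p and not a for p, a in zip(predicted, actual_adult))
    fn = sum(not p and a for p, a in zip(predicted, actual_adult))
    sensitivity = tp / (tp + fn)      # adults correctly called adult
    specificity = tn / (tn + fp)      # minors correctly called minor
    accuracy = (tp + tn) / len(predicted)
    return sensitivity, specificity, accuracy

# Four adults (one with an immature third molar, hence missed) and two minors.
i3m = [0.02, 0.05, 0.12, 0.04, 0.30, 0.25]
adult = [True, True, True, True, False, False]

sens, spec, acc = diagnostic_metrics(classify_adult(i3m), adult)
print(sens, spec, acc)  # → 0.75 1.0 0.833...
```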

  12. Performance and Accuracy of Lightweight and Low-Cost GPS Data Loggers According to Antenna Positions, Fix Intervals, Habitats and Animal Movements.

    PubMed

    Forin-Wiart, Marie-Amélie; Hubert, Pauline; Sirguey, Pascal; Poulle, Marie-Lazarine

    2015-01-01

    Recently developed low-cost Global Positioning System (GPS) data loggers are promising tools for wildlife research because of their affordability for low-budget projects and ability to simultaneously track a greater number of individuals compared with expensive built-in wildlife GPS. However, the reliability of these devices must be carefully examined because they were not developed to track wildlife. This study aimed to assess the performance and accuracy of commercially available GPS data loggers for the first time using the same methods applied to test built-in wildlife GPS. The effects of antenna position, fix interval and habitat on the fix-success rate (FSR) and location error (LE) of CatLog data loggers were investigated in stationary tests, whereas the effects of animal movements on these errors were investigated in motion tests. The units operated well and presented consistent performance and accuracy over time in stationary tests, and the FSR was good for all antenna positions and fix intervals. However, the LE was affected by the GPS antenna and fix interval. Furthermore, completely or partially obstructed habitats reduced the FSR by up to 80% in households and increased the LE. Movement across habitats had no effect on the FSR, whereas forest habitat influenced the LE. Finally, the mean FSR (0.90 ± 0.26) and LE (15.4 ± 10.1 m) values from low-cost GPS data loggers were comparable to those of built-in wildlife GPS collars (71.6% of fixes with LE < 10 m for motion tests), thus confirming their suitability for use in wildlife studies.

  13. Performance and Accuracy of Lightweight and Low-Cost GPS Data Loggers According to Antenna Positions, Fix Intervals, Habitats and Animal Movements

    PubMed Central

    Forin-Wiart, Marie-Amélie; Hubert, Pauline; Sirguey, Pascal; Poulle, Marie-Lazarine

    2015-01-01

    Recently developed low-cost Global Positioning System (GPS) data loggers are promising tools for wildlife research because of their affordability for low-budget projects and ability to simultaneously track a greater number of individuals compared with expensive built-in wildlife GPS. However, the reliability of these devices must be carefully examined because they were not developed to track wildlife. This study aimed to assess the performance and accuracy of commercially available GPS data loggers for the first time using the same methods applied to test built-in wildlife GPS. The effects of antenna position, fix interval and habitat on the fix-success rate (FSR) and location error (LE) of CatLog data loggers were investigated in stationary tests, whereas the effects of animal movements on these errors were investigated in motion tests. The units operated well and presented consistent performance and accuracy over time in stationary tests, and the FSR was good for all antenna positions and fix intervals. However, the LE was affected by the GPS antenna and fix interval. Furthermore, completely or partially obstructed habitats reduced the FSR by up to 80% in households and increased the LE. Movement across habitats had no effect on the FSR, whereas forest habitat influenced the LE. Finally, the mean FSR (0.90 ± 0.26) and LE (15.4 ± 10.1 m) values from low-cost GPS data loggers were comparable to those of built-in wildlife GPS collars (71.6% of fixes with LE < 10 m for motion tests), thus confirming their suitability for use in wildlife studies. PMID:26086958
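
    The two performance metrics used above can be computed as follows. The coordinates here are invented for illustration; real analyses would use projected GPS fixes:

```python
import math

# Illustrative computation of fix-success rate (FSR), the share of scheduled
# fixes that returned a position, and location error (LE), the distance from
# a fix to the known true position. Data are made up.

def location_error_m(fix, truth):
    """Planar distance in metres between a fix and the true position,
    both given as (easting, northing) in a projected coordinate system."""
    return math.hypot(fix[0] - truth[0], fix[1] - truth[1])

truth = (500000.0, 4649776.0)                 # made-up UTM-style coordinates
fixes = [(500004.0, 4649779.0),               # acquired fix, 5 m off
         (500012.0, 4649771.0),               # acquired fix, 13 m off
         None,                                # scheduled fix that failed
         (500001.0, 4649774.0)]               # acquired fix, ~2.2 m off

acquired = [f for f in fixes if f is not None]
fsr = len(acquired) / len(fixes)
mean_le = sum(location_error_m(f, truth) for f in acquired) / len(acquired)
print(fsr, mean_le)  # → 0.75 and ~6.75 m
```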

  14. AN ACCURACY ASSESSMENT OF 1992 LANDSAT-MSS DERIVED LAND COVER FOR THE UPPER SAN PEDRO WATERSHED (U.S./MEXICO)

    EPA Science Inventory

    The utility of Digital Orthophoto Quads (DOQS) in assessing the classification accuracy of land cover derived from Landsat MSS data was investigated. Initially, the suitability of DOQs in distinguishing between different land cover classes was assessed using high-resolution airbo...

  15. Exploring the Utility of a Virtual Performance Assessment

    ERIC Educational Resources Information Center

    Clarke-Midura, Jody; Code, Jillianne; Zap, Nick; Dede, Chris

    2011-01-01

    With funding from the Institute of Education Sciences (IES), the Virtual Performance Assessment project at the Harvard Graduate School of Education is developing and studying the feasibility of immersive virtual performance assessments (VPAs) to assess scientific inquiry of middle school students as a standardized component of an accountability…

  16. Assessment in Performance-Based Secondary Music Classes

    ERIC Educational Resources Information Center

    Pellegrino, Kristen; Conway, Colleen M.; Russell, Joshua A.

    2015-01-01

    After sharing research findings about grading and assessment practices in secondary music ensemble classes, we offer examples of commonly used assessment tools (ratings scale, checklist, rubric) for the performance ensemble. Then, we explore the various purposes of assessment in performance-based music courses: (1) to meet state, national, and…

  17. Integration of Mobile AR Technology in Performance Assessment

    ERIC Educational Resources Information Center

    Kuo-Hung, Chao; Kuo-En, Chang; Chung-Hsien, Lan; Kinshuk; Yao-Ting, Sung

    2016-01-01

    This study was aimed at exploring how to use augmented reality (AR) technology to enhance the effect of performance assessment (PA). A mobile AR performance assessment system (MARPAS) was developed by integrating AR technology to reduce the limitations in observation and assessment during PA. This system includes three modules: Authentication, AR…

  18. [CONTROVERSIES REGARDING THE ACCURACY AND LIMITATIONS OF FROZEN SECTION IN THYROID PATHOLOGY: AN EVIDENCE-BASED ASSESSMENT].

    PubMed

    Stanciu-Pop, C; Pop, F C; Thiry, A; Scagnol, I; Maweja, S; Hamoir, E; Beckers, A; Meurisse, M; Grosu, F; Delvenne, Ph

    2015-12-01

    Palpable thyroid nodules are present clinically in 4-7% of the population and their prevalence increases to 50%-67% when using high-resolution neck ultrasonography. By contrast, thyroid carcinoma (TC) represents only 5-20% of these nodules, which underlines the need for an appropriate approach to avoid unnecessary surgery. Frozen section (FS) has been used for more than 40 years in thyroid surgery to establish the diagnosis of malignancy. However, a controversy persists regarding the accuracy of FS, and its place in thyroid pathology has changed with the emergence of fine-needle aspiration (FNA). A PubMed Medline and SpringerLink search was made covering the period from January 2000 to June 2012 to assess the accuracy of FS, its limitations and its indications for the diagnosis of thyroid nodules. Twenty publications encompassing 8,567 subjects were included in our study. The average prevalence of TC among thyroid nodules in the analyzed studies was 15.5%. The ability of FS to detect cancer, expressed by its sensitivity (Ss), was 67.5%. More than two thirds of the authors considered FS useful exclusively in the presence of doubtful FNA and for guiding the surgical extension in cases confirmed as malignant by FNA; however, only 33% accepted FS as a routine examination for the management of thyroid nodules. The influence of FS on the surgical reintervention rate in nodular thyroid pathology was considered to be negligible by most studies, whereas 31% of the authors thought that FS has a favorable benefit by decreasing the number of surgical re-interventions. In conclusion, the role of FS in thyroid pathology evolved from a mandatory component of thyroid surgery to an optional examination after a pre-operative FNA cytology. The accuracy of FS seems to provide no sufficient additional benefit and most experts support its use only in the presence of equivocal or suspicious cytological features, for guiding the surgical extension in cases confirmed as malignant by FNA and for the

  19. Flight assessment of the onboard propulsion system model for the Performance Seeking Control algorithm on an F-15 aircraft

    NASA Technical Reports Server (NTRS)

    Orme, John S.; Schkolnik, Gerard S.

    1995-01-01

    Performance Seeking Control (PSC), an onboard, adaptive, real-time optimization algorithm, relies upon an onboard propulsion system model. Flight results illustrated propulsion system performance improvements as calculated by the model. These improvements were subject to uncertainty arising from modeling error. Thus to quantify uncertainty in the PSC performance improvements, modeling accuracy must be assessed. A flight test approach to verify PSC-predicted increases in thrust (FNP) and absolute levels of fan stall margin is developed and applied to flight test data. Application of the excess thrust technique shows that increases of FNP agree to within 3 percent of full-scale measurements for most conditions. Accuracy to these levels is significant because uncertainty bands may now be applied to the performance improvements provided by PSC. Assessment of PSC fan stall margin modeling accuracy was completed with analysis of in-flight stall tests. Results indicate that the model overestimates the stall margin by between 5 to 10 percent. Because PSC achieves performance gains by using available stall margin, this overestimation may represent performance improvements to be recovered with increased modeling accuracy. Assessment of thrust and stall margin modeling accuracy provides a critical piece for a comprehensive understanding of PSC's capabilities and limitations.

  20. Vertical Accuracy Assessment of 30-M Resolution Alos, Aster, and Srtm Global Dems Over Northeastern Mindanao, Philippines

    NASA Astrophysics Data System (ADS)

    Santillan, J. R.; Makinano-Santillan, M.

    2016-06-01

    The ALOS World 3D - 30 m (AW3D30), ASTER Global DEM Version 2 (GDEM2), and SRTM-30 m are Digital Elevation Models (DEMs) that have been made available to the general public free of charge. An important feature of these DEMs is their unprecedented horizontal resolution of 30 m and almost global coverage. The very recent release of these DEMs, particularly AW3D30 and SRTM-30 m, provides opportunities for localized assessments of DEM quality and accuracy to verify their suitability for a wide range of applications in hydrology, geomorphology, archaeology, and many others. In this study, we conducted a vertical accuracy assessment of these DEMs by comparing their elevations with those of 274 control points scattered over various sites in northeastern Mindanao, Philippines. The elevations of these control points (referred to Mean Sea Level, MSL) were obtained through 3rd order differential levelling using a high precision digital level, and their horizontal positions were measured using a global positioning system (GPS) receiver. These control points are representative of five (5) land-cover classes, namely brushland (45 points), built-up (32), cultivated areas (97), dense vegetation (74), and grassland (26). Results showed that AW3D30 has the lowest Root Mean Square Error (RMSE) of 5.68 m, followed by SRTM-30 m (RMSE = 8.28 m) and ASTER GDEM2 (RMSE = 11.98 m). While all three DEMs overestimated the true ground elevations, the mean and standard deviations of the elevation differences were found to be lower in AW3D30 than in SRTM-30 m and ASTER GDEM2. The superiority of AW3D30 over the other two DEMs was also found to be consistent across different land-cover types, with AW3D30's RMSEs ranging from 4.29 m (built-up) to 6.75 m (dense vegetation). For SRTM-30 m, the RMSE ranges from 5.91 m (built-up) to 10.42 m (brushland); for ASTER
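
    The vertical accuracy statistics quoted above (mean error, its standard deviation, and RMSE of DEM-minus-reference differences) can be computed as below, on fabricated numbers rather than the study's 274 control points:

```python
import numpy as np

# Sketch of DEM vertical accuracy statistics against surveyed check points.
# Elevation values are invented for illustration.

def vertical_accuracy(dem_z, ref_z):
    diff = np.asarray(dem_z, dtype=float) - np.asarray(ref_z, dtype=float)
    return {
        "mean_error_m": float(diff.mean()),   # positive => DEM overestimates
        "std_error_m": float(diff.std(ddof=1)),
        "rmse_m": float(np.sqrt(np.mean(diff ** 2))),
    }

ref_z = [12.0, 35.5, 50.2, 8.7, 101.3]    # levelled check points (m, MSL)
dem_z = [14.1, 39.0, 55.8, 12.2, 109.9]   # hypothetical DEM samples (m)

stats = vertical_accuracy(dem_z, ref_z)
print(stats)  # all-positive differences, i.e. a DEM that overestimates
```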

  1. 40 CFR 194.34 - Results of performance assessments.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS Compliance Certification and Re-certification Containment Requirements § 194.34 Results of performance assessments. (a) The results of performance...

  2. A Short History of Performance Assessment: Lessons Learned.

    ERIC Educational Resources Information Center

    Madaus, George F.; O'Dwyer, Laura M.

    1999-01-01

    Places performance assessment in the context of high-stakes uses, describes underlying technologies, and outlines the history of performance testing from 210 B.C.E. to the present. Historical issues of fairness, efficiency, cost, and infrastructure influence contemporary efforts to use performance assessments in large-scale, high-stakes testing…

  3. OSLD energy response performance and dose accuracy at 24 - 1250 keV: Comparison with TLD-100H and TLD-100

    SciTech Connect

    Kadir, A. B. A.; Priharti, W.; Samat, S. B.; Dolah, M. T.

    2013-11-27

    OSLD was evaluated in terms of energy response and accuracy of the measured dose in comparison with TLD-100H and TLD-100. The OSLD showed a better energy response performance for H{sub p}(10), whereas for H{sub p}(0.07), TLD-100H is superior to the others. The OSLD dose accuracy is comparable with that of the other two dosimeters since it fulfilled the requirement of the ICRP trumpet graph analysis.

  4. Accuracy of the actuator disc-RANS approach for predicting the performance and wake of tidal turbines.

    PubMed

    Batten, W M J; Harrison, M E; Bahaj, A S

    2013-02-28

    The actuator disc-RANS model has been widely used in wind and tidal energy to predict the wake of a horizontal axis turbine. The model is appropriate where large-scale effects of the turbine on a flow are of interest, for example, when considering environmental impacts or arrays of devices. However, the accuracy of the model for simulating the wake of tidal stream turbines has not been demonstrated, and flow predictions presented in the literature for similar modelled scenarios vary significantly. This paper compares the results of the actuator disc-RANS model, in which the turbine forces are derived using a blade-element approach, with experimental data measured in the wake of a scaled turbine. It also compares the results with those of a simpler uniform actuator disc model. The comparisons show that the model is accurate and can predict up to 94 per cent of the variation in the experimental velocity data measured on the centreline of the wake, demonstrating that the actuator disc-RANS model is an accurate approach for modelling a turbine wake and a conservative approach for predicting performance and loads. It can therefore be applied to similar scenarios with confidence.

  6. Statistical assessment of speech system performance

    NASA Technical Reports Server (NTRS)

    Moshier, Stephen L.

    1977-01-01

    Methods for the normalization of performance tests results of speech recognition systems are presented. Technological accomplishments in speech recognition systems, as well as planned research activities are described.

  7. Performance assessment of 3D surface imaging technique for medical imaging applications

    NASA Astrophysics Data System (ADS)

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

    Recent developments in optical 3D surface imaging technologies provide better ways to digitize a 3D surface and its motion in real time. The non-invasive 3D surface imaging approach has great potential for many medical imaging applications, such as motion monitoring in radiotherapy and pre/post evaluation in plastic surgery and dermatology, to name a few. Various commercial 3D surface imaging systems have appeared on the market, differing in dimensions, speed and accuracy. For clinical applications, accuracy, reproducibility and robustness across widely heterogeneous skin colors, tones, textures, shape properties and ambient lighting are crucial. Until now, no systematic approach for evaluating the performance of different 3D surface imaging systems has existed. In this paper, we present a systematic approach for assessing the performance of 3D surface imaging systems for medical applications. We use this approach to examine a new real-time surface imaging system we developed, dubbed the "Neo3D Camera", for image-guided radiotherapy (IGRT). The assessments include accuracy, field of view, coverage, repeatability, speed and sensitivity to environment, texture and color.

  8. High-resolution terrain and landcover mapping with a lightweight, semi-autonomous, remotely-piloted aircraft (RPA): a case study and accuracy assessment

    NASA Astrophysics Data System (ADS)

    Hugenholtz, C.; Whitehead, K.; Moorman, B.; Brown, O.; Hamilton, T.; Barchyn, T.; Riddell, K.; LeClair, A.

    2012-04-01

    Remotely-piloted aircraft (RPA) have evolved into a viable research tool for a range of Earth science applications. Significant technological advances driven by military and surveillance programs have steadily become mainstream and affordable. Thus, RPA technology has the potential to reinvigorate various aspects of geomorphological research, especially at the landform scale. In this presentation we will report results and experiences using a lightweight, semi-autonomous RPA for high-resolution terrain and landcover mapping. The goal was to test the accuracy of the photogrammetrically-derived terrain model and assess the overall performance of the RPA system for landform characterization. The test site comprised an area of semi-vegetated sand dunes in the Canadian Prairies. The RPA survey was conducted with a RQ-84Z AreoHawk (Hawkeye UAV Ltd) and a low-cost digital camera. During the survey the RPA acquired images semi-autonomously with the aid of proprietary mission planning software developed by Accuas Inc. A total of 44 GCPs were used in the block adjustment to create the terrain model, while an additional 400 independent GPS check points were used for accuracy assessment. The 1 m resolution terrain model developed with Trimble's INPHO photogrammetric software was compared to the independent check points, yielding a RMS error comparable to airborne LiDAR data. The resulting orthophoto mosaic had a resolution of 0.1 m, revealing a number of geomorphic features beyond the resolution of airborne and QuickBird imagery. Overall, this case study highlights the potential of RPA technology for resolving terrain and landcover attributes at the landform scale. We believe one of the most significant and emerging applications of RPA in geomorphology is their potential to quantify rates of landform erosion/deposition in an affordable and flexible manner, allowing investigators to reduce the gap between recorded and natural morphodynamics.

  9. Short Term Survival after Admission for Heart Failure in Sweden: Applying Multilevel Analyses of Discriminatory Accuracy to Evaluate Institutional Performance

    PubMed Central

    Ghith, Nermin; Wagner, Philippe; Frølich, Anne; Merlo, Juan

    2016-01-01

    Background Hospital performance is frequently evaluated by analyzing differences between hospital averages in some quality indicators. The results are often expressed as quality charts of hospital variance (e.g., league tables, funnel plots). However, those analyses seldom consider patient heterogeneity around averages, which is of fundamental relevance for a correct evaluation. Therefore, we apply an innovative methodology based on measures of components of variance and discriminatory accuracy to analyze 30-day mortality after hospital discharge with a diagnosis of Heart Failure (HF) in Sweden. Methods We analyzed 36,943 patients aged 45–80 treated in 565 wards at 71 hospitals during 2007–2009. We applied single and multilevel logistic regression analyses to calculate the odds ratios and the area under the receiver-operating characteristic curve (AUC). We evaluated general hospital and ward effects by quantifying the intra-class correlation coefficient (ICC) and the increment in the AUC obtained by adding random effects in a multilevel regression analysis (MLRA). Finally, the odds ratios (ORs) for specific ward and hospital characteristics were interpreted jointly with the proportional change in variance (PCV) and the proportion of ORs in the opposite direction (POOR). Findings Overall, the average 30-day mortality was 9%. Using only patient information on age and previous hospitalizations for different diseases we obtained an AUC = 0.727. This value was almost unchanged when adding sex and country of birth as well as the hospital and ward levels. Average mortality was higher in small wards and municipal hospitals, but the POOR values were 15% and 16% respectively. Conclusions Swedish wards and hospitals in general performed homogeneously well, resulting in a low 30-day mortality rate after HF. In our study, knowledge of a patient's previous hospitalizations was the best predictor of 30-day mortality, and this information did not improve by knowing the sex and country
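
    The discriminatory-accuracy measure used above, the AUC, can be computed as the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case. The risks below are fabricated, not the Swedish registry data:

```python
# Toy sketch of the AUC as a pairwise rank statistic: the probability that a
# case (death within 30 days) is assigned a higher predicted risk than a
# non-case, with ties counted as 0.5. Data are invented for illustration.

def auc(risks, outcomes):
    cases = [r for r, y in zip(risks, outcomes) if y == 1]
    controls = [r for r, y in zip(risks, outcomes) if y == 0]
    wins = 0.0
    for c in cases:
        for k in controls:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(cases) * len(controls))

# Hypothetical predicted 30-day mortality risks and observed outcomes.
risks = [0.05, 0.40, 0.10, 0.80, 0.50, 0.07]
outcomes = [0, 1, 0, 1, 0, 0]
print(auc(risks, outcomes))  # → 0.875
```

    An AUC of 0.727, as reported above, means a randomly chosen death had a higher predicted risk than a randomly chosen survivor about 73% of the time.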

  10. Multidimensional analysis of suction feeding performance in fishes: fluid speed, acceleration, strike accuracy and the ingested volume of water.

    PubMed

    Higham, Timothy E; Day, Steven W; Wainwright, Peter C

    2006-07-01

    Suction feeding fish draw prey into the mouth using a flow field that they generate external to the head. In this paper we present a multidimensional perspective on suction feeding performance that we illustrate in a comparative analysis of suction feeding ability in two members of Centrarchidae, the largemouth bass (Micropterus salmoides) and bluegill sunfish (Lepomis macrochirus). We present the first direct measurements of maximum fluid speed capacity, and we use this to calculate local fluid acceleration and volumetric flow rate. We also calculated the ingested volume and a novel metric of strike accuracy. In addition, we quantified for each species the effects of gape magnitude, time to peak gape, and swimming speed on features of the ingested volume of water. Digital particle image velocimetry (DPIV) and high-speed video were used to measure the flow in front of the mouths of three fish from each species in conjunction with a vertical laser sheet positioned on the mid-sagittal plane of the fish. From this we quantified the maximum fluid speed (in the earthbound and fish's frame of reference), acceleration and ingested volume. Our method for determining strike accuracy involved quantifying the location of the prey relative to the center of the parcel of ingested water. Bluegill sunfish generated higher fluid speeds in the earthbound frame of reference, accelerated the fluid faster, and were more accurate than largemouth bass. However, largemouth bass ingested a larger volume of water and generated a higher volumetric flow rate than bluegill sunfish. In addition, because largemouth bass swam faster during prey capture, they generated higher fluid speeds in the fish's frame of reference. Thus, while bluegill can exert higher drag forces on stationary prey items, largemouth bass more quickly close the distance between themselves and prey. The ingested volume and volumetric flow rate significantly increased as gape increased for both species, while time to peak

  11. Binding Free Energy Calculations for Lead Optimization: Assessment of Their Accuracy in an Industrial Drug Design Context.

    PubMed

    Homeyer, Nadine; Stoll, Friederike; Hillisch, Alexander; Gohlke, Holger

    2014-08-12

    Correctly ranking compounds according to their computed relative binding affinities will be of great value for decision making in the lead optimization phase of industrial drug discovery. However, the performance of existing computationally demanding binding free energy calculation methods in this context is largely unknown. We analyzed the performance of the molecular mechanics continuum solvent, the linear interaction energy (LIE), and the thermodynamic integration (TI) approach for three sets of compounds from industrial lead optimization projects. The data sets pose challenges typical for this early stage of drug discovery. None of the methods was sufficiently predictive when applied out of the box without considering these challenges. Detailed investigations of failures revealed critical points that are essential for good binding free energy predictions. When data set-specific features were considered accordingly, predictions valuable for lead optimization could be obtained for all approaches but LIE. Our findings lead to clear recommendations for when to use which of the above approaches. Our findings also stress the important role of expert knowledge in this process, not least for estimating the accuracy of prediction results by TI, using indicators such as the size and chemical structure of exchanged groups and the statistical error in the predictions. Such knowledge will be invaluable when it comes to the question which of the TI results can be trusted for decision making.
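The thermodynamic integration (TI) approach mentioned above estimates a free energy difference by integrating the ensemble average ⟨∂H/∂λ⟩ over the coupling parameter λ. The following is a minimal schematic sketch of that final integration step only, not the authors' pipeline; the λ windows and ⟨∂H/∂λ⟩ averages are hypothetical numbers for illustration.

```python
# Schematic thermodynamic-integration (TI) free energy estimate:
# trapezoidal integration of <dH/dlambda> over the coupling parameter.
# The lambda windows and averages below are illustrative, not study data.

def ti_free_energy(lambdas, dhdl_means):
    """Trapezoidal rule applied to <dH/dlambda> sampled at each lambda window."""
    dg = 0.0
    for i in range(len(lambdas) - 1):
        dg += 0.5 * (dhdl_means[i] + dhdl_means[i + 1]) * (lambdas[i + 1] - lambdas[i])
    return dg

lambdas = [0.0, 0.25, 0.5, 0.75, 1.0]   # coupling-parameter windows (hypothetical)
dhdl    = [12.0, 8.5, 5.0, 2.5, 1.0]    # <dH/dlambda> in kcal/mol (hypothetical)

print(ti_free_energy(lambdas, dhdl))    # free energy difference in kcal/mol
```

In practice each ⟨∂H/∂λ⟩ value comes from a separate simulation, and the statistical error the authors use as a quality indicator is estimated per window before propagation through this sum.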

  12. Assessing children's competency to take the oath in court: The influence of question type on children's accuracy.

    PubMed

    Evans, Angela D; Lyon, Thomas D

    2012-06-01

    This study examined children's accuracy in response to truth-lie competency questions asked in court. The participants included 164 child witnesses in criminal child sexual abuse cases tried in Los Angeles County over a 5-year period (1997-2001) and 154 child witnesses quoted in the U.S. state and federal appellate cases over a 35-year period (1974-2008). The results revealed that judges virtually never found children incompetent to testify, but children exhibited substantial variability in their performance based on question-type. Definition questions, about the meaning of the truth and lies, were the most difficult largely due to errors in response to "Do you know" questions. Questions about the consequences of lying were more difficult than questions evaluating the morality of lying. Children exhibited high rates of error in response to questions about whether they had ever told a lie. Attorneys rarely asked children hypothetical questions in a form that has been found to facilitate performance. Defense attorneys asked a higher proportion of the more difficult question types than prosecutors. The findings suggest that children's truth-lie competency is underestimated by courtroom questioning and support growing doubts about the utility of the competency requirements.

  13. 10 CFR 63.114 - Requirements for performance assessment.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Technical Criteria Postclosure Performance Assessment § 63..., hydrology, and geochemistry (including disruptive processes and events) of the Yucca Mountain site, and...

  14. 10 CFR 63.114 - Requirements for performance assessment.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Technical Criteria Postclosure Performance Assessment § 63..., hydrology, and geochemistry (including disruptive processes and events) of the Yucca Mountain site, and...

  15. 10 CFR 63.114 - Requirements for performance assessment.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Technical Criteria Postclosure Performance Assessment § 63..., hydrology, and geochemistry (including disruptive processes and events) of the Yucca Mountain site, and...

  16. Design Rationale for a Complex Performance Assessment

    ERIC Educational Resources Information Center

    Williamson, David M.; Bauer, Malcolm; Steinberg, Linda S.; Mislevy, Robert J.; Behrens, John T.; DeMark, Sarah F.

    2004-01-01

    In computer-based interactive environments meant to support learning, students must bring a wide range of relevant knowledge, skills, and abilities to bear jointly as they solve meaningful problems in a learning domain. To function effectively as an assessment, a computer system must additionally be able to evoke and interpret observable evidence…

  17. Assessing Individual Performance in the College Band

    ERIC Educational Resources Information Center

    Reimer, Mark U.

    2009-01-01

    Semester assessment of college wind band members is an issue that conductors would probably agree falls within their academic freedom. Institutions may award as little as no credit or even a percentage of a credit for ensemble participation, although the time and effort required of the students and their conductor is undoubtedly equivalent to, or…

  18. Practical session assessments in human anatomy: Weightings and performance.

    PubMed

    McDonald, Aaron C; Chan, Siew-Pang; Schuijers, Johannes A

    2016-07-01

    Assessment weighting within a given module can be a motivating factor for students when deciding on their commitment level and time given to study a specific topic. In this study, an analysis of assessment performances of second year anatomy students was performed over four years to determine if (1) students performed better when a higher weighting was given to a set of practical session assessments and (2) whether an improved performance in the practical session assessments had a carry-over effect on other assessment tasks within that anatomy module and/or other anatomy modules that follow. Results showed that increasing the weighting of practical session assessments improved the average mark in that assessment and also improved the percentage of students passing that assessment. Further, it significantly improved performance in the written end-semester examination within the same module and had a carry-over effect on the anatomy module taught in the next teaching period, as students performed better in subsequent practical session assessments as well as subsequent end-semester examinations. It was concluded that the weighting of assessments had significant influences on a student's performance in that, and subsequent, assessments. It is postulated that practical session assessments, designed to develop deep learning skills in anatomy, improved efficacy in student performance in assessments undertaken in that and subsequent anatomy modules when the weighting of these assessments was greater. These deep learning skills were also transferable to other methods of assessing anatomy. Anat Sci Educ 9: 330-336. © 2015 American Association of Anatomists.

  19. Accuracy Assessment for the U.S. Geological Survey Regional Land-Cover Mapping Program: New York and New Jersey Region

    USGS Publications Warehouse

    Zhu, Zhi-Liang; Yang, Limin; Stehman, Stephen V.; Czaplewski, Raymond L.

    2000-01-01

    The U.S. Geological Survey, in cooperation with other government and private organizations, is producing a conterminous U.S. land-cover map using Landsat Thematic Mapper 30-meter data for the Federal regions designated by the U.S. Environmental Protection Agency. Accuracy assessment is to be conducted for each Federal region to estimate overall and class-specific accuracies. In Region 2, consisting of New York and New Jersey, the accuracy assessment was completed for 15 land-cover and land-use classes, using interpreted 1:40,000-scale aerial photographs as reference data. The methodology used for Region 2 features a two-stage, geographically stratified approach, with a general sample of all classes (1,033 sample sites) and a separate sample for rare classes (294 sample sites). A confidence index was recorded for each land-cover interpretation on the 1:40,000-scale aerial photography. The estimated overall accuracy for Region 2 was 63 percent (standard error 1.4 percent) using all sample sites, and 75.2 percent (standard error 1.5 percent) using only reference sites with a high-confidence index. User's and producer's accuracies for the general sample and user's accuracy for the sample of rare classes, as well as variances for the estimated accuracy parameters, were also reported. Narrowly defined land-use classes and heterogeneous conditions of land cover are the major causes of misclassification errors. Recommendations for modifying the accuracy assessment methodology for use in the other nine Federal regions are provided.
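The overall, user's, and producer's accuracies reported in assessments like this one all come from a confusion matrix of mapped class versus reference class. A minimal sketch, using a small hypothetical 3-class matrix rather than the Region 2 data:

```python
# Accuracy measures from a confusion matrix
# (rows = mapped class, columns = reference class).
# The counts are hypothetical, for illustration only.

confusion = [
    [50, 3, 2],
    [4, 40, 6],
    [1, 5, 30],
]
n_classes = len(confusion)

total = sum(sum(row) for row in confusion)
correct = sum(confusion[i][i] for i in range(n_classes))

overall_accuracy = correct / total  # fraction of all samples mapped correctly

# User's accuracy: of the sites mapped as class i, how many truly are class i?
users = [confusion[i][i] / sum(confusion[i]) for i in range(n_classes)]

# Producer's accuracy: of the reference sites of class i, how many were mapped as i?
producers = [
    confusion[i][i] / sum(row[i] for row in confusion)
    for i in range(n_classes)
]

print(overall_accuracy, users, producers)
```

The standard errors quoted in the abstract additionally account for the two-stage stratified sampling design, which this unweighted sketch does not model.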

  20. Mass evolution of Mediterranean, Black, Red, and Caspian Seas from GRACE and altimetry: accuracy assessment and solution calibration

    NASA Astrophysics Data System (ADS)

    Loomis, B. D.; Luthcke, S. B.

    2016-09-01

    We present new measurements of mass evolution for the Mediterranean, Black, Red, and Caspian Seas as determined by the NASA Goddard Space Flight Center (GSFC) GRACE time-variable global gravity mascon solutions. These new solutions are compared to sea surface altimetry measurements of sea level anomalies with steric corrections applied. To assess their accuracy, the GRACE- and altimetry-derived solutions are applied to the set of forward models used by GSFC for processing the GRACE Level-1B datasets, with the resulting inter-satellite range-acceleration residuals providing a useful metric for analyzing solution quality. We also present a differential correction strategy to calibrate the time series of mass change for each of the seas by establishing the strong linear relationship between differences in the forward modeled mass and the corresponding range-acceleration residuals between the two solutions. These calibrated time series of mass change are directly determined from the range-acceleration residuals, effectively providing regionally-tuned GRACE solutions without the need to form and invert normal equations. Finally, the calibrated GRACE time series are discussed and combined with the steric-corrected sea level anomalies to provide new measurements of the unmodeled steric variability for each of the seas over the span of the GRACE observation record. We apply ensemble empirical mode decomposition (EEMD) to adaptively sort the mass and steric components of sea level anomalies into seasonal, non-seasonal, and long-term temporal scales.

  1. Accuracy Assessment of a Canal-Tunnel 3d Model by Comparing Photogrammetry and Laserscanning Recording Techniques

    NASA Astrophysics Data System (ADS)

    Charbonnier, P.; Chavant, P.; Foucher, P.; Muzet, V.; Prybyla, D.; Perrin, T.; Grussenmeyer, P.; Guillemin, S.

    2013-07-01

    With recent developments in the field of technology and computer science, conventional methods are being supplanted by laser scanning and digital photogrammetry. These two different surveying techniques generate 3-D models of real-world objects or structures. In this paper, we consider the application of terrestrial laser scanning (TLS) and photogrammetry to the surveying of canal tunnels. The inspection of such structures requires time, safe access, specific processing and professional operators. Therefore, a French partnership proposes to develop dedicated equipment based on image processing for visual inspection of canal tunnels. A 3D model of the vault and side walls of the tunnel is constructed from images recorded onboard a boat moving inside the tunnel. To assess the accuracy of this photogrammetric model (PM), a reference model is built using static TLS. Here we address the problem of comparing the resulting point clouds. Difficulties arise because of the highly differentiated acquisition processes, which result in very different point densities. We propose a new tool, designed to compare differences between pairs of point clouds or surfaces (triangulated meshes). Moreover, dealing with huge datasets requires the implementation of appropriate structures and algorithms. Several techniques are presented: point-to-point, cloud-to-cloud and cloud-to-mesh. In addition, farthest-point resampling, an octree structure and the Hausdorff distance are adopted and described. Experimental results are shown for a 475 m long canal tunnel located in France.
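The Hausdorff distance named in the abstract measures the worst-case disagreement between two point sets: for each point in one cloud, take the distance to its nearest neighbor in the other, and keep the maximum. A brute-force sketch on toy clouds (real tunnel-scale datasets need the octree-style spatial indexing the paper describes):

```python
# Point-to-point comparison of two point clouds via the Hausdorff distance.
# Brute-force O(n*m) sketch; toy coordinates, illustrative only.

import math

def directed_hausdorff(a, b):
    """Max over points of a of the distance to the nearest point of b."""
    return max(min(math.dist(p, q) for q in b) for p in a)

def hausdorff(a, b):
    """Symmetric Hausdorff distance: worst-case nearest-neighbor gap."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

cloud_a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
cloud_b = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.5)]

print(hausdorff(cloud_a, cloud_b))  # 0.5
```

Because one outlier dominates the maximum, cloud-to-cloud comparisons often also report mean or quantile nearest-neighbor distances alongside this worst-case value.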

  2. Multinomial tree models for assessing the status of the reference in studies of the accuracy of tools for binary classification

    PubMed Central

    Botella, Juan; Huang, Huiling; Suero, Manuel

    2013-01-01

    Studies that evaluate the accuracy of binary classification tools are needed. Such studies provide 2 × 2 cross-classifications of test outcomes and the categories according to an unquestionable reference (or gold standard). However, sometimes a suboptimal reliability reference is employed. Several methods have been proposed to deal with studies where the observations are cross-classified with an imperfect reference. These methods require that the status of the reference, as a gold standard or as an imperfect reference, is known. In this paper a procedure for determining whether it is appropriate to maintain the assumption that the reference is a gold standard or an imperfect reference, is proposed. This procedure fits two nested multinomial tree models, and assesses and compares their absolute and incremental fit. Its implementation requires the availability of the results of several independent studies. These should be carried out using similar designs to provide frequencies of cross-classification between a test and the reference under investigation. The procedure is applied in two examples with real data. PMID:24106484

  3. Multinomial tree models for assessing the status of the reference in studies of the accuracy of tools for binary classification.

    PubMed

    Botella, Juan; Huang, Huiling; Suero, Manuel

    2013-01-01

    Studies that evaluate the accuracy of binary classification tools are needed. Such studies provide 2 × 2 cross-classifications of test outcomes and the categories according to an unquestionable reference (or gold standard). However, sometimes a suboptimal reliability reference is employed. Several methods have been proposed to deal with studies where the observations are cross-classified with an imperfect reference. These methods require that the status of the reference, as a gold standard or as an imperfect reference, is known. In this paper a procedure for determining whether it is appropriate to maintain the assumption that the reference is a gold standard or an imperfect reference, is proposed. This procedure fits two nested multinomial tree models, and assesses and compares their absolute and incremental fit. Its implementation requires the availability of the results of several independent studies. These should be carried out using similar designs to provide frequencies of cross-classification between a test and the reference under investigation. The procedure is applied in two examples with real data.

  4. Does diagnosis affect the predictive accuracy of risk assessment tools for juvenile offenders: Conduct Disorder and Attention Deficit Hyperactivity Disorder.

    PubMed

    Khanna, Dinesh; Shaw, Jenny; Dolan, Mairead; Lennox, Charlotte

    2014-10-01

    Studies have suggested an increased risk of criminality in juveniles if they suffer from co-morbid Attention Deficit Hyperactivity Disorder (ADHD) along with Conduct Disorder. The Structured Assessment of Violence Risk in Youth (SAVRY), the Psychopathy Checklist Youth Version (PCL:YV), and Youth Level of Service/Case Management Inventory (YLS/CMI) have been shown to be good predictors of violent and non-violent re-offending. The aim was to compare the accuracy of these tools to predict violent and non-violent re-offending in young people with co-morbid ADHD and Conduct Disorder and Conduct Disorder only. The sample included 109 White-British adolescent males in secure settings. Results revealed no significant differences between the groups for re-offending. SAVRY factors had better predictive values than PCL:YV or YLS/CMI. Tools generally had better predictive values for the Conduct Disorder only group than the co-morbid group. Possible reasons for these findings have been discussed along with limitations of the study. PMID:25173178

  5. Measures of Diagnostic Accuracy: Basic Definitions

    PubMed Central

    Šimundić, Ana-Maria

    2009-01-01

    Diagnostic accuracy relates to the ability of a test to discriminate between the target condition and health. This discriminative potential can be quantified by measures of diagnostic accuracy such as sensitivity and specificity, predictive values, likelihood ratios, the area under the ROC curve, Youden's index and the diagnostic odds ratio. Different measures of diagnostic accuracy relate to different aspects of the diagnostic procedure: while some measures are used to assess the discriminative property of the test, others are used to assess its predictive ability. Measures of diagnostic accuracy are not fixed indicators of test performance; some are very sensitive to the disease prevalence, while others are sensitive to the spectrum and definition of the disease. Furthermore, measures of diagnostic accuracy are extremely sensitive to the design of the study. Studies not meeting strict methodological standards usually over- or underestimate the indicators of test performance and limit the applicability of the results of the study. The STARD initiative was a very important step toward improving the quality of reporting of studies of diagnostic accuracy. The STARD statement should be included in the Instructions to Authors by scientific journals, and authors should be encouraged to use the checklist whenever reporting their studies on diagnostic accuracy. Such efforts could make a substantial difference in the quality of reporting of studies of diagnostic accuracy and provide the best possible evidence for patient care. This brief review outlines some basic definitions and characteristics of the measures of diagnostic accuracy.
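All of the 2×2-table measures the review defines follow directly from the counts of true/false positives and negatives. A minimal sketch with hypothetical counts:

```python
# Basic diagnostic accuracy measures from a 2x2 table
# (test outcome vs. true disease status). Counts are hypothetical.

tp, fp, fn, tn = 80, 10, 20, 90

sensitivity = tp / (tp + fn)              # true positive rate
specificity = tn / (tn + fp)              # true negative rate
ppv = tp / (tp + fp)                      # positive predictive value
npv = tn / (tn + fn)                      # negative predictive value
youden_j = sensitivity + specificity - 1  # Youden's index
lr_pos = sensitivity / (1 - specificity)  # positive likelihood ratio
dor = (tp * tn) / (fp * fn)               # diagnostic odds ratio

print(sensitivity, specificity, ppv, npv, youden_j, lr_pos, dor)
```

As the review stresses, sensitivity, specificity, the likelihood ratios, and Youden's index are prevalence-independent, whereas the predictive values (PPV, NPV) change with disease prevalence and cannot be transferred between populations.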

  6. Technical Highlight: NREL Evaluates the Thermal Performance of Uninsulated Walls to Improve the Accuracy of Building Energy Simulation Tools

    SciTech Connect

    Ridouane, E.H.

    2012-01-01

    This technical highlight describes NREL research to develop models of uninsulated wall assemblies that help to improve the accuracy of building energy simulation tools when modeling potential energy savings in older homes.

  7. Assessing the accuracy of hyperspectral and multispectral satellite imagery for categorical and quantitative mapping of salinity stress in sugarcane fields

    NASA Astrophysics Data System (ADS)

    Hamzeh, Saeid; Naseri, Abd Ali; AlaviPanah, Seyed Kazem; Bartholomeus, Harm; Herold, Martin

    2016-10-01

    This study evaluates the feasibility of hyperspectral and multispectral satellite imagery for categorical and quantitative mapping of salinity stress in sugarcane fields located in the southwest of Iran. For this purpose a Hyperion image acquired on September 2, 2010 and a Landsat7 ETM+ image acquired on September 7, 2010 were used as hyperspectral and multispectral satellite imagery. Field data including soil salinity in the sugarcane root zone were collected at 191 locations in 25 fields during September 2010. In the first section of the paper, based on the yield potential of sugarcane as influenced by different soil salinity levels provided by FAO, soil salinity was classified into three classes, low salinity (1.7-3.4 dS/m), moderate salinity (3.5-5.9 dS/m) and high salinity (6-9.5 dS/m), by applying different classification methods including Support Vector Machine (SVM), Spectral Angle Mapper (SAM), Minimum Distance (MD) and Maximum Likelihood (ML) to the Hyperion and Landsat images. In the second part of the paper the performance of nine vegetation indices (eight indices from the literature and a newly developed index in this study) extracted from Hyperion and Landsat data was evaluated for quantitative mapping of salinity stress. The experimental results indicated that for categorical classification of salinity stress, Landsat data resulted in a higher overall accuracy (OA) and Kappa coefficient (KC) than Hyperion, of which the MD classifier using all bands or PCA (1-5) as input performed best with an overall accuracy and kappa coefficient of 84.84% and 0.77 respectively. Conversely, for the quantitative estimation of salinity stress, Hyperion outperformed Landsat. In this case, the salinity and water stress index (SWSI) gave the best prediction of salinity stress with an R2 of 0.68 and RMSE of 1.15 dS/m for Hyperion, followed by Landsat data with an R2 and RMSE of 0.56 and 1.75 dS/m respectively. It was concluded that categorical mapping of salinity stress is the best option
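The Kappa coefficient reported alongside overall accuracy corrects agreement for the amount expected by chance given the class totals. A sketch of Cohen's kappa from a small hypothetical 3-class confusion matrix (not the study's salinity data):

```python
# Cohen's kappa coefficient from a confusion matrix
# (rows = classified salinity class, columns = reference class).
# The counts are hypothetical, for illustration only.

confusion = [
    [30, 5, 2],
    [4, 25, 6],
    [3, 4, 21],
]
k = len(confusion)

n = sum(sum(row) for row in confusion)
observed = sum(confusion[i][i] for i in range(k)) / n  # overall accuracy

row_totals = [sum(row) for row in confusion]
col_totals = [sum(row[j] for row in confusion) for j in range(k)]

# Chance agreement expected from the marginal class proportions.
expected = sum(row_totals[i] * col_totals[i] for i in range(k)) / (n * n)

kappa = (observed - expected) / (1 - expected)
print(observed, kappa)
```

Kappa of 1 means perfect agreement and 0 means agreement no better than chance, which is why it is quoted next to overall accuracy in results like the 84.84% / 0.77 pair above.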

  8. Technical Basis for Assessing Uranium Bioremediation Performance

    SciTech Connect

    PE Long; SB Yabusaki; PD Meyer; CJ Murray; AL N’Guessan

    2008-04-01

    In situ bioremediation of uranium holds significant promise for effective stabilization of U(VI) from groundwater at reduced cost compared to conventional pump and treat. This promise is unlikely to be realized unless researchers and practitioners successfully predict and demonstrate the long-term effectiveness of uranium bioremediation protocols. Field research to date has focused on both proof of principle and a mechanistic level of understanding. Current practice typically involves an engineering approach using proprietary amendments that focuses mainly on monitoring U(VI) concentration for a limited time period. Given the complexity of uranium biogeochemistry and uranium secondary minerals, and the lack of documented case studies, a systematic monitoring approach using multiple performance indicators is needed. This document provides an overview of uranium bioremediation, summarizes design considerations, and identifies and prioritizes field performance indicators for the application of uranium bioremediation. The performance indicators provided as part of this document are based on current biogeochemical understanding of uranium and will enable practitioners to monitor the performance of their system and make a strong case to clients, regulators, and the public that the future performance of the system can be assured and changes in performance addressed as needed. The performance indicators established by this document and the information gained by using these indicators do add to the cost of uranium bioremediation. However, they are vital to the long-term success of the application of uranium bioremediation and provide significant assurance that regulatory goals will be met. The document also emphasizes the need for systematic development of key information from bench-scale tests and pilot-scale tests prior to full-scale implementation.

  9. Assessment of Breast Specimens With or Without Calcifications in Diagnosing Malignant and Atypia for Mammographic Breast Microcalcifications Without Mass: A STARD-Compliant Diagnostic Accuracy Article.

    PubMed

    Cheung, Yun-Chung; Juan, Yu-Hsiang; Ueng, Shir-Hwa; Lo, Yung-Feng; Huang, Pei-Chin; Lin, Yu-Ching; Chen, Shin-Cheh

    2015-10-01

    Presence of microcalcifications within the specimens frequently signifies a successful attempt of stereotactic vacuum-assisted breast biopsy (VABB) in obtaining a pathologic diagnosis of the breast microcalcifications. In this study, the authors aimed to assess and compare the accuracy and consistency of calcified or noncalcified specimens obtained from the same sampling sites on isolated microcalcifications without mass in diagnosing high-risk and malignant lesions. To the best of our knowledge, an individual case-based prospective comparison has not been reported. With the approval of the institutional review board of our hospital (Chang Gung Memorial Hospital), the authors retrospectively reviewed all clinical cases of stereotactic VABBs on isolated breast microcalcifications without mass from our database. The authors included for analysis those having either surgery performed or clinical follow-up of at least 3 years. All the obtained specimens with or without calcification were identified using specimen radiographs and separately submitted for pathologic evaluation. The concordance of diagnosis was assessed for both atypia and malignant lesions. A total of 390 stereotactic VABB procedures (1206 calcified and 1456 noncalcified specimens) were collected and reviewed. The concordance rates between calcified and noncalcified specimens were low for atypia and malignant microcalcifications (44.44% in flat epithelial atypia, 46.51% in atypical ductal hyperplasia, 55.73% in ductal carcinoma in situ, and 71.42% in invasive ductal carcinoma). The discordance in VABB diagnoses indicated that 41.33% of malignant lesions would be misdiagnosed by noncalcified specimens. Furthermore, calcified specimens showed higher diagnostic accuracy for breast cancer as compared with the noncalcified specimens (91.54% versus 69.49%, respectively). The evaluation of both noncalcified specimens and calcified specimens did not show improvement of diagnostic accuracy as compared with

  10. Task-Based Variability in Children's Singing Accuracy

    ERIC Educational Resources Information Center

    Nichols, Bryan E.

    2013-01-01

    The purpose of this study was to explore task-based variability in children's singing accuracy performance. The research questions were: Does children's singing accuracy vary based on the nature of the singing assessment employed? Is there a hierarchy of difficulty and discrimination ability among singing assessment tasks? What is the…

  11. Accuracy assessment of NOGGIN Plus and MALÅ RAMAC X3M single channel ground penetrating RADAR (GPR) for underground utility mapping

    NASA Astrophysics Data System (ADS)

    Sazali Hashim, Mas; Nizam Saip, Saiful; Hani, Nurfauziah; Pradhan, Biswajeet; Abdullahi, Saleh

    2016-06-01

    Ground Penetrating Radar (GPR) has become a popular device for investigating underground utilities in recent years. GPR analyses the type and position of utility objects. However, the performance accuracy of GPR models is an important issue that should be considered. This study conducts an accuracy analysis of two single-channel GPR models, NOGGIN PLUS and MALÅ RAMAC X3M, focusing on the basic principles of single-channel GPR, accuracy analysis, and the calibration methods implemented on GPR. Survey work was performed to identify the more accurate instrument for detecting underground utility objects. In addition, a data analysis was carried out to compare the two single-channel GPR models. This study provides guidelines to assist surveyors in selecting suitable instruments for particular applications, especially utility mapping, in terms of accuracy.

  12. OMPS Limb Profiler Instrument Performance Assessment

    NASA Technical Reports Server (NTRS)

    Jaross, Glen R.; Bhartia, Pawan K.; Chen, Grace; Kowitt, Mark; Haken, Michael; Chen, Zhong; Xu, Philippe; Warner, Jeremy; Kelly, Thomas

    2014-01-01

    Following the successful launch of the Ozone Mapping and Profiler Suite (OMPS) aboard the Suomi National Polar-orbiting Partnership (SNPP) spacecraft, the NASA OMPS Limb team began an evaluation of instrument and data product performance. The focus of this paper is the instrument performance in relation to the original design criteria. Performance that is closer to expectations increases the likelihood that limb scatter measurements by SNPP OMPS and successor instruments can form the basis for accurate long-term monitoring of ozone vertical profiles. The team finds that the Limb instrument operates mostly as designed and basic performance meets or exceeds the original design criteria. Internally scattered stray light and sensor pointing knowledge are two design challenges with the potential to seriously degrade performance. A thorough prelaunch characterization of stray light supports software corrections that are accurate to within 1% in radiances up to 60 km for the wavelengths used in deriving ozone. Residual stray light errors at 1000 nm, which is useful in retrievals of stratospheric aerosols, currently exceed 10%. Height registration errors in the range of 1 km to 2 km have been observed that cannot be fully explained by known error sources. An unexpected thermal sensitivity of the sensor also causes wavelengths and pointing to shift each orbit in the northern hemisphere. Spectral shifts of as much as 0.5 nm in the ultraviolet and 5 nm in the visible, and up to 0.3 km shifts in registered height, must be corrected in ground processing.

  13. Group 3: Performance evaluation and assessment

    NASA Technical Reports Server (NTRS)

    Frink, A.

    1981-01-01

    Line-oriented flight training provides a unique learning experience and an opportunity to look at aspects of performance that other types of training do not provide. Areas such as crew coordination, resource management, leadership, and so forth, can be readily evaluated in such a format. While individual performance is of the utmost importance, crew performance deserves equal emphasis; therefore, these areas should be carefully observed by the instructors as an area for discussion in the same way that individual performance is observed. To be effective, it must be accepted by the crew members, and administered by the instructors, as pure training: learning through experience. To keep open minds and to benefit most from the experience, both in the doing and in the follow-on discussion, it is essential that it be entered into with a feeling of freedom, openness, and enthusiasm. Reserve or defensiveness arising from concern about failure will inhibit participation.

  14. A general strategy for performing temperature-programming in high performance liquid chromatography--further improvements in the accuracy of retention time predictions of segmented temperature gradients.

    PubMed

    Wiese, Steffen; Teutenberg, Thorsten; Schmidt, Torsten C

    2012-01-27

    In the present work it is shown that the linear elution strength (LES) model, which was adapted from temperature-programming gas chromatography (GC), can also be employed for systematic method development in high-temperature liquid chromatography (HT-HPLC). The ability to predict isothermal retention times based on temperature-gradient as well as isothermal input data was investigated. For a small temperature interval of ΔT=40°C, both approaches result in very similar predictions. Average relative errors of predicted retention times of 2.7% and 1.9% were observed for simulations based on isothermal and temperature-gradient measurements, respectively. Concurrently, it was investigated whether the accuracy of retention time predictions of segmented temperature gradients can be further improved by temperature-dependent calculation of the parameter S(T) of the LES relationship. It was found that the accuracy of retention time predictions of multi-step temperature gradients can be improved to around 1.5% if S(T) is also calculated in a temperature-dependent manner. The adjusted experimental design, making use of four temperature-gradient measurements, was applied to systematic method development for selected food additives by high-temperature liquid chromatography. Method development was performed within a temperature interval from 40°C to 180°C using water as the mobile phase. Two separation methods were established in which the selected food additives were baseline separated. In addition, good agreement between simulation and experiment was observed: the average relative error of predicted retention times of complex segmented temperature gradients was less than 5%. Finally, a set of recommendations to assist the practitioner during systematic method development in high-temperature liquid chromatography was established.

  15. Performance Evaluation and Requirements Assessment for Gravity Gradient Referenced Navigation.

    PubMed

    Lee, Jisun; Kwon, Jay Hyoun; Yu, Myeongjong

    2015-01-01

    In this study, simulation tests for gravity gradient referenced navigation (GGRN) are conducted to verify the effects of various factors such as database (DB) and sensor errors, flight altitude, DB resolution, initial errors, and measurement update rates on the navigation performance. Based on the simulation results, requirements for GGRN are established for position determination with certain target accuracies. It is found that DB and sensor errors and flight altitude have strong effects on the navigation performance. In particular, a DB and sensor with accuracies of 0.1 E and 0.01 E, respectively, are required to determine the position more accurately than or at a level similar to the navigation performance of terrain referenced navigation (TRN). In most cases, the horizontal position error of GGRN is less than 100 m. However, the navigation performance of GGRN is similar to or worse than that of a pure inertial navigation system when the DB and sensor errors are 3 E or 5 E each and the flight altitude is 3000 m. Considering that the accuracy of currently available gradiometers is about 3 E or 5 E, GGRN does not show much advantage over TRN at present. However, GGRN is expected to exhibit much better performance in the near future when accurate DBs and gravity gradiometers are available. PMID:26184212
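
    The sensitivity to DB and sensor noise can be illustrated with a toy one-dimensional map-matching experiment. The synthetic gradient map, noise levels, and least-squares matcher below are invented for illustration and are far simpler than the paper's GGRN simulation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic 1-D gravity-gradient map along the flight track (Eötvös units)
    x = np.arange(0.0, 10_000.0, 10.0)  # position grid, metres
    db = (20.0 * np.sin(2 * np.pi * x / 3000.0)
          + 5.0 * np.sin(2 * np.pi * x / 700.0))

    def match_position(true_x, noise_E, window=50):
        """Estimate position by sliding a noisy measured snippet along the DB
        and picking the least-squares best fit."""
        i = int(true_x // 10)
        meas = db[i:i + window] + rng.normal(0.0, noise_E, window)
        errs = [float(np.sum((db[j:j + window] - meas) ** 2))
                for j in range(len(db) - window)]
        return x[int(np.argmin(errs))]

    # Mean absolute position error over repeated trials,
    # for a quiet sensor and a noisy one
    mean_err = {}
    for noise in (0.1, 3.0):
        trials = [abs(match_position(4000.0, noise) - 4000.0) for _ in range(50)]
        mean_err[noise] = float(np.mean(trials))
        print(f"noise {noise} E -> mean position error {mean_err[noise]:.0f} m")
    ```

    Here DB error and sensor error enter the same way (additive noise on the measured profile), and the qualitative behaviour mirrors the abstract's finding: matching stays tight at 0.1 E but degrades markedly at the 3 E level of current gradiometers.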

  17. An Empirical Study of a Solo Performance Assessment Model

    ERIC Educational Resources Information Center

    Russell, Brian E.

    2015-01-01

    The purpose of this study was to test a hypothesized model of solo music performance assessment. Specifically, this study investigates the influence of technique and musical expression on perceptions of overall performance quality. The Aural Musical Performance Quality (AMPQ) measure was created to measure overall performance quality, technique,…

  18. Diesel fuel detergent additive performance and assessment

    SciTech Connect

    Vincent, M.W.; Papachristos, M.J.; Williams, D.; Burton, J.

    1994-10-01

    Diesel fuel detergent additives are increasingly linked with high quality automotive diesel fuels. Both in Europe and in the USA, field problems associated with fuel injector coking or fouling have been experienced. In Europe indirect injection (IDI) light duty engines used in passenger cars were affected, while in the USA, a direct injection (DI) engine in heavy duty truck applications experienced field problems. In both cases, a fuel additive detergent performance test has evolved using an engine linked with the original field problem, although engine design modifications employed by the manufacturers have ensured improved operation in service. Increasing awareness of the potential for injector nozzle coking to cause deterioration in engine performance is coupled with a need to meet ever more stringent exhaust emissions legislation. These two requirements indicate that the use of detergency additives will continue to be associated with high quality diesel fuels. The paper examines detergency performance evaluated in a range of IDI and DI engines and correlates performance in the two most widely recognised test engines, namely the Peugeot 1.9 litre IDI, and Cummins L10 DI engines. 17 refs., 18 figs., 5 tabs.

  19. Assessing Basic Skill Performance in Appalachian Kentucky.

    ERIC Educational Resources Information Center

    DeYoung, Alan J.; Vaught, Charles

    Basic skill performance levels of third-, fifth-, seventh-, and tenth-grade students attending schools in the Appalachian School Districts of Kentucky are reported and discussed. School district scores on the reading, language and mathematics subtests of the Comprehensive Test of Basic Skills clearly show that children in most Appalachian school…

  20. Assessment beyond Performance: Phenomenography in Educational Evaluation

    ERIC Educational Resources Information Center

    Micari, Marina; Light, Gregory; Calkins, Susanna; Streitwieser, Bernhard

    2007-01-01

    Increasing calls for accountability in education have promoted improvements in quantitative evaluation approaches that measure student performance; however, this has often been to the detriment of qualitative approaches, reducing the richness of educational evaluation as an enterprise. In this article the authors assert that it is not merely…