Science.gov

Sample records for accuracy assessments performed

  1. Numerical accuracy assessment

    NASA Astrophysics Data System (ADS)

    Boerstoel, J. W.

    1988-12-01

A framework is provided for numerical accuracy assessment. The purpose of numerical flow simulations is formulated. This formulation concerns the classes of aeronautical configurations (boundaries), the desired flow physics (flow equations and their properties), the classes of flow conditions on flow boundaries (boundary conditions), and the initial flow conditions. Next, accuracy and economical performance requirements are defined; the final numerical flow simulation results of interest should have a guaranteed accuracy and be produced at an acceptable FLOP price. Within this context, the validation of numerical processes with respect to the well-known topics of consistency, stability, and convergence under mesh refinement must be done by numerical experimentation, because theory gives only partial answers. This requires careful design of test cases for numerical experimentation. Finally, the results of a few recent evaluation exercises of numerical experiments with a large number of codes on a few test cases are summarized.

  2. How could the replica method improve accuracy of performance assessment of channel coding?

    NASA Astrophysics Data System (ADS)

    Kabashima, Yoshiyuki

    2009-12-01

We explore the relation between the techniques of statistical mechanics and information theory for assessing the performance of channel coding. We base our study on a framework developed by Gallager in IEEE Trans. Inform. Theory IT-11, 3 (1965), where the minimum decoding error probability is upper-bounded by an average of a generalized Chernoff bound over a code ensemble. We show that the resulting bound in this framework can be directly assessed by the replica method, developed in the statistical mechanics of disordered systems, whereas Gallager's original methodology requires a further replacement by another bound utilizing Jensen's inequality. Our approach associates a seemingly ad hoc restriction on an adjustable parameter for optimizing the bound with a phase transition between two replica symmetric solutions, and can improve the accuracy of performance assessments of general code ensembles, including low-density parity-check codes, although its mathematical justification is still open.

  3. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...

  4. Assessing the accuracy and performance of implicit solvent models for drug molecules: conformational ensemble approaches.

    PubMed

    Kolář, Michal; Fanfrlík, Jindřich; Lepšík, Martin; Forti, Flavio; Luque, F Javier; Hobza, Pavel

    2013-05-16

The accuracy and performance of implicit solvent methods for solvation free energy calculations were assessed on a set of 20 neutral drug molecules. Molecular dynamics (MD) provided ensembles of conformations in water and water-saturated octanol. The solvation free energies were calculated by popular implicit solvent models based on quantum mechanical (QM) electronic densities (COSMO-RS, MST, SMD) as well as on molecular mechanical (MM) point-charge models (GB, PB). The performance of the implicit models was tested by comparison with experimental water-octanol transfer free energies (ΔG(ow)) using single- and multiconformation approaches. MD simulations revealed difficulties in a priori estimation of the flexibility features of the solutes from simple structural descriptors, such as the number of rotatable bonds. Increasing accuracy of the calculated ΔG(ow) was observed in the following order: GB1 ~ PB < GB7 ≪ MST < SMD ~ COSMO-RS, with a clear distinction between MM- and QM-based models, although for the set excluding the three largest molecules, the differences among COSMO-RS, MST, and SMD were negligible. It was shown that the single-conformation approach applied to crystal geometries provides a rather accurate estimate of ΔG(ow) for rigid molecules yet fails completely for flexible ones. The multiconformation approaches improved the performance, but only when the deformation contribution was ignored. For large-scale calculations on small molecules, a recent GB model, GB7, provided a reasonable accuracy/speed ratio. In conclusion, the study contributes to the understanding of solvation free energy calculations for physical and medicinal chemistry applications.

  5. Accuracy assessment: The statistical approach to performance evaluation in LACIE. [Great Plains corridor, United States

    NASA Technical Reports Server (NTRS)

    Houston, A. G.; Feiveson, A. H.; Chhikara, R. S.; Hsu, E. M. (Principal Investigator)

    1979-01-01

    A statistical methodology was developed to check the accuracy of the products of the experimental operations throughout crop growth and to determine whether the procedures are adequate to accomplish the desired accuracy and reliability goals. It has allowed the identification and isolation of key problems in wheat area yield estimation, some of which have been corrected and some of which remain to be resolved. The major unresolved problem in accuracy assessment is that of precisely estimating the bias of the LACIE production estimator. Topics covered include: (1) evaluation techniques; (2) variance and bias estimation for the wheat production estimate; (3) the 90/90 evaluation; (4) comparison of the LACIE estimate with reference standards; and (5) first and second order error source investigations.
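The 90/90 criterion mentioned in item (3) requires at least a 90 percent probability that the production estimate falls within 10 percent of the true value. A simplified empirical reading of that criterion can be sketched as follows; the estimates and true value below are invented for illustration, not LACIE data:

```python
def meets_90_90(estimates, truth, tol=0.10, prob=0.90):
    """LACIE-style 90/90 check (simplified): do at least `prob` of the
    estimates fall within `tol` relative error of the true value?"""
    hits = sum(abs(e - truth) / truth <= tol for e in estimates)
    return hits / len(estimates) >= prob

# Ten invented production estimates against a true value of 100:
# only one (111) misses the 10% band, so 9/10 = 90% -- criterion met.
print(meets_90_90([95, 102, 108, 99, 101, 111, 97, 104, 100, 93], 100))  # True
```

The real LACIE evaluation is a confidence statement about the estimator's distribution rather than a hit rate over repeated estimates, so this is only a rough empirical analogue.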

  6. Assessment of the relationship between lesion segmentation accuracy and computer-aided diagnosis scheme performance

    NASA Astrophysics Data System (ADS)

    Zheng, Bin; Pu, Jiantao; Park, Sang Cheol; Zuley, Margarita; Gur, David

    2008-03-01

In this study we randomly selected 250 malignant and 250 benign mass regions as a training dataset. The boundary contours of these regions were manually identified and marked. Twelve image features were computed for each region. An artificial neural network (ANN) was trained as a classifier. To select a specific testing dataset, we applied a topographic multi-layer region growth algorithm to detect boundary contours of 1,903 mass regions in an initial pool of testing regions. All processed regions were sorted based on a size difference ratio between manual and automated segmentation. We selected a testing dataset involving 250 malignant and 250 benign mass regions with larger size difference ratios. Using the area under the ROC curve (A_Z value) as the performance index, we investigated the relationship between the accuracy of mass segmentation and the performance of a computer-aided diagnosis (CAD) scheme. CAD performance degrades as the size difference ratio increases. We then developed and tested a hybrid region growth algorithm that combined topographic region growth with an active contour approach. In this hybrid algorithm, the boundary contour detected by topographic region growth is used as the initial contour of the active contour algorithm. The algorithm iteratively searches for the optimal region boundaries. A CAD likelihood score of the grown region being a true-positive mass is computed in each iteration. The region growth is automatically terminated once the first maximum CAD score is reached. This hybrid region growth algorithm reduces the size difference ratios between the automatically and manually segmented areas to less than ±15% for all testing regions, and the testing A_Z value increases from 0.63 to 0.90. The results indicate that CAD performance depends heavily on the accuracy of mass segmentation. To achieve robust CAD performance, reducing lesion segmentation error is important.
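The A_Z performance index used here is the area under the ROC curve, which can be estimated nonparametrically as the probability that a randomly chosen malignant region receives a higher CAD score than a randomly chosen benign one (the Mann-Whitney statistic). A minimal sketch with illustrative scores, not the study's data:

```python
def auc(scores, labels):
    """Nonparametric AUC: fraction of (positive, negative) pairs in which
    the positive score exceeds the negative one (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative CAD likelihood scores (1 = malignant, 0 = benign)
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1, 1, 0, 1, 0, 1, 0, 0]
print(auc(scores, labels))  # 0.8125
```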

  7. Increased Throwing Accuracy Improves Children's Catching Performance in a Ball-Catching Task from the Movement Assessment Battery (MABC-2).

    PubMed

    Dirksen, Tim; De Lussanet, Marc H E; Zentgraf, Karen; Slupinski, Lena; Wagner, Heiko

    2016-01-01

The Movement Assessment Battery for Children (MABC-2) is a functional test for identifying deficits in the motor performance of children. The test contains a ball-catching task that requires the children to catch a self-thrown ball with one hand. As the task can be executed with a variety of different catching strategies, it is assumed that task success can also vary considerably. Even though it is not clear whether the performance merely depends on the catching skills or also to some extent on the throwing skills, the MABC-2 takes into account only the movement outcome. Therefore, the purpose of the current study was to examine (1) to what extent throwing accuracy has an effect on the children's catching performance and (2) to what extent throwing accuracy influences their choice of catching strategy. In line with the test manual, the children's catching performance was quantified on the basis of the number of correctly caught balls. The throwing accuracy and the catching strategy were quantified by applying a kinematic analysis to the ball's trajectory and the hand movements. Based on linear regression analyses, we then investigated the relation between throwing accuracy, catching performance and catching strategy. The results show that increased throwing accuracy is significantly correlated with increased catching performance. Moreover, a higher throwing accuracy is significantly correlated with a longer duration of the hand on the ball's parabola, which indicates that throwing the ball more accurately could enable the children to effectively reduce the requirements on temporal precision. As the children's catching performance and their choice of catching strategy in the ball-catching task of the MABC-2 are substantially determined by their throwing accuracy, the test evaluation should not be based on the movement outcome alone, but should also take into account the children's throwing performance.
Our findings could be of particular value for the

  8. Increased Throwing Accuracy Improves Children's Catching Performance in a Ball-Catching Task from the Movement Assessment Battery (MABC-2)

    PubMed Central

    Dirksen, Tim; De Lussanet, Marc H. E.; Zentgraf, Karen; Slupinski, Lena; Wagner, Heiko

    2016-01-01

The Movement Assessment Battery for Children (MABC-2) is a functional test for identifying deficits in the motor performance of children. The test contains a ball-catching task that requires the children to catch a self-thrown ball with one hand. As the task can be executed with a variety of different catching strategies, it is assumed that task success can also vary considerably. Even though it is not clear whether the performance merely depends on the catching skills or also to some extent on the throwing skills, the MABC-2 takes into account only the movement outcome. Therefore, the purpose of the current study was to examine (1) to what extent throwing accuracy has an effect on the children's catching performance and (2) to what extent throwing accuracy influences their choice of catching strategy. In line with the test manual, the children's catching performance was quantified on the basis of the number of correctly caught balls. The throwing accuracy and the catching strategy were quantified by applying a kinematic analysis to the ball's trajectory and the hand movements. Based on linear regression analyses, we then investigated the relation between throwing accuracy, catching performance and catching strategy. The results show that increased throwing accuracy is significantly correlated with increased catching performance. Moreover, a higher throwing accuracy is significantly correlated with a longer duration of the hand on the ball's parabola, which indicates that throwing the ball more accurately could enable the children to effectively reduce the requirements on temporal precision. As the children's catching performance and their choice of catching strategy in the ball-catching task of the MABC-2 are substantially determined by their throwing accuracy, the test evaluation should not be based on the movement outcome alone, but should also take into account the children's throwing performance.
Our findings could be of particular value for the

  9. An Accuracy--Response Time Capacity Assessment Function that Measures Performance against Standard Parallel Predictions

    ERIC Educational Resources Information Center

    Townsend, James T.; Altieri, Nicholas

    2012-01-01

    Measures of human efficiency under increases in mental workload or attentional limitations are vital in studying human perception, cognition, and action. Assays of efficiency as workload changes have typically been confined to either reaction times (RTs) or accuracy alone. Within the realm of RTs, a nonparametric measure called the "workload…

  10. Task and Observer Skill Factors in Accuracy of Assessment of Performance

    DTIC Science & Technology

    1977-04-01

BEHAVIORAL and SOCIAL SCIENCES, 1300 Wilson Boulevard, Arlington, Virginia 22209. Approved for public release; distribution unlimited. ... On the basis of these data, a ... approach has been taken to questions of accuracy of clinical judgements (Sarbin, Taft, & Bailey, 1960; Bieri, Atkins, Briar, Leaman

  11. Future dedicated Venus-SGG flight mission: Accuracy assessment and performance analysis

    NASA Astrophysics Data System (ADS)

    Zheng, Wei; Hsu, Houtse; Zhong, Min; Yun, Meijuan

    2016-01-01

This study concentrates principally on the systematic requirements analysis for the future dedicated Venus-SGG (spacecraft gravity gradiometry) flight mission in China, with respect to the matching measurement accuracies of the spacecraft-based scientific instruments and the orbital parameters of the spacecraft. Firstly, we created and verified single and combined analytical error models of the cumulative Venusian geoid height as influenced by the gravity gradient error of the spacecraft-borne atom-interferometer gravity gradiometer (AIGG) and by the orbital position and velocity errors tracked by the deep space network (DSN) on Earth. Secondly, weighing the advantages and disadvantages of the electrostatically suspended gravity gradiometer, the superconducting gravity gradiometer and the AIGG, the ultra-high-precision spacecraft-borne AIGG is well placed to make a significant contribution to globally mapping the Venusian gravitational field and modeling the geoid with unprecedented accuracy and spatial resolution. Finally, the future dedicated Venus-SGG spacecraft should adopt the optimal matching accuracy indices of 3 × 10⁻¹³/s² in gravity gradient, 10 m in orbital position and 8 × 10⁻⁴ m/s in orbital velocity, and the preferred orbital parameters comprising an orbital altitude of 300 ± 50 km, an observation time of 60 months and a sampling interval of 1 s.

  12. A Comparative Analysis of Diagnostic Accuracy of Focused Assessment With Sonography for Trauma Performed by Emergency Medicine and Radiology Residents

    PubMed Central

    Zamani, Majid; Masoumi, Babak; Esmailian, Mehrdad; Habibi, Amin; Khazaei, Mehdi; Mohammadi Esfahani, Mohammad

    2015-01-01

Background: Focused assessment with sonography in trauma (FAST) is a method for prompt detection of abdominal free fluid in patients with abdominal trauma. Objectives: This study was conducted to compare the diagnostic accuracy of FAST performed by emergency medicine residents (EMRs) and radiology residents (RRs) in detecting peritoneal free fluid. Patients and Methods: Patients triaged in the emergency department with blunt abdominal trauma, high-energy trauma, and multiple traumas underwent a FAST examination by EMRs and RRs using the same techniques to obtain the standard views. Ultrasound findings for free fluid in the peritoneal cavity for each patient (positive/negative) were compared with the results of computed tomography, operative exploration, or observation as the final outcome. Results: A total of 138 patients were included in the final analysis. Good diagnostic agreement was noted between the results of FAST scans performed by EMRs and RRs (κ = 0.701, P < 0.001), between the results of EMR-performed FAST and the final outcome (κ = 0.830, P < 0.001), and between the results of RR-performed FAST and the final outcome (κ = 0.795, P < 0.001). No significant differences were noted between EMR- and RR-performed FASTs regarding sensitivity (84.6% vs 84.6%), specificity (98.4% vs 97.6%), positive predictive value (84.6% vs 84.6%), and negative predictive value (98.4% vs 98.4%). Conclusions: Trained EMRs, like their fellow RRs, can perform FAST scans with high diagnostic value in patients with blunt abdominal trauma. PMID:26756009
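The sensitivity, specificity, predictive values, and Cohen's kappa reported above all follow from a 2×2 table. As an illustration, the counts below (TP = 11, FP = 2, FN = 2, TN = 123) are an assumed reconstruction that happens to reproduce the EMR-versus-outcome figures; the paper's raw table is not shown here:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic accuracy measures plus Cohen's kappa."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)            # sensitivity (recall)
    spec = tn / (tn + fp)            # specificity
    ppv = tp / (tp + fp)             # positive predictive value
    npv = tn / (tn + fn)             # negative predictive value
    po = (tp + tn) / n               # observed agreement
    # chance agreement: row totals x column totals over n^2
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (po - pe) / (1 - pe)
    return sens, spec, ppv, npv, kappa

# Hypothetical counts consistent with the EMR-vs-final-outcome results
print(diagnostic_metrics(tp=11, fp=2, fn=2, tn=123))
```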

  13. Classification Accuracy of Oral Reading Fluency and Maze in Predicting Performance on Large-Scale Reading Assessments

    ERIC Educational Resources Information Center

    Decker, Dawn M.; Hixson, Michael D.; Shaw, Amber; Johnson, Gloria

    2014-01-01

    The purpose of this study was to examine whether using a multiple-measure framework yielded better classification accuracy than oral reading fluency (ORF) or maze alone in predicting pass/fail rates for middle-school students on a large-scale reading assessment. Participants were 178 students in Grades 7 and 8 from a Midwestern school district.…

  14. PLÉIADES Project: Assessment of Georeferencing Accuracy, Image Quality, Pansharpening Performance and DSM/DTM Quality

    NASA Astrophysics Data System (ADS)

    Topan, Hüseyin; Cam, Ali; Özendi, Mustafa; Oruç, Murat; Jacobsen, Karsten; Taşkanat, Talha

    2016-06-01

Pléiades 1A and 1B are twin optical satellites of the Optical and Radar Federated Earth Observation (ORFEO) program jointly run by France and Italy. They are the first European satellites with sub-meter resolution. Airbus DS (formerly Astrium Geo) runs the MyGIC (formerly Pléiades Users Group) program to validate Pléiades images worldwide for various application purposes. The authors conduct three projects: one within this program, the second supported by the BEU Scientific Research Project Program, and the third supported by TÜBİTAK. These projects investigate georeferencing accuracy, image quality, pansharpening performance and Digital Surface Model/Digital Terrain Model (DSM/DTM) quality. For these purposes, triplet panchromatic (50 cm Ground Sampling Distance (GSD)) and VNIR (2 m GSD) Pléiades 1A images were investigated over the Zonguldak test site (Turkey), which is urbanised, mountainous and covered by dense forest. The georeferencing accuracy was estimated with a standard deviation in X and Y (SX, SY) in the range of 0.45 m by bias-corrected Rational Polynomial Coefficient (RPC) orientation, using ~170 Ground Control Points (GCPs). 3D standard deviations of ±0.44 m in X, ±0.51 m in Y, and ±1.82 m in Z have been reached in spite of the very narrow angle of convergence by bias-corrected RPC orientation. The image quality was also investigated with respect to effective resolution, Signal to Noise Ratio (SNR) and blur coefficient. The effective resolution was estimated with a factor slightly below 1.0, meaning that the image quality corresponds to the nominal resolution of 50 cm. The blur coefficients were between 0.39 and 0.46 for the triplet panchromatic images, indicating satisfying image quality. SNR is in the range of other comparable spaceborne images, which may be caused by de-noising of Pléiades images. The pansharpened images were generated by various methods, and are validated by most common statistical

  15. Assessment of the dosimetric accuracies of CATPhan 504 and CIRS 062 using kV-CBCT for performing direct calculations.

    PubMed

    Annkah, James Kwame; Rosenberg, Ivan; Hindocha, Naina; Moinuddin, Syed Ali; Ricketts, Kate; Adeyemi, Abiodun; Royle, Gary

    2014-07-01

The dosimetric accuracies of CATPhan 504 and CIRS 062 were evaluated using the kV-CBCT of a Varian TrueBeam linac and the Eclipse TPS. The assessment used the kV-CBCT as a standalone tool for dosimetric calculations toward adaptive replanning. Dosimetric calculations were made without altering the HU-ED curves of the planning computed tomography (CT) scanner used by the Eclipse TPS. All computations used the images and dataset from kV-CBCT while maintaining the HU-ED calibration curve of the planning CT (pCT), assuming the pCT was used for the initial treatment plan. Results showed that the CIRS phantom produces doses within ±5% of the CT-based plan while CATPhan 504 produces a variation of ±14% of the CT-based plan.

  16. Assessment of the dosimetric accuracies of CATPhan 504 and CIRS 062 using kV-CBCT for performing direct calculations

    PubMed Central

    Annkah, James Kwame; Rosenberg, Ivan; Hindocha, Naina; Moinuddin, Syed Ali; Ricketts, Kate; Adeyemi, Abiodun; Royle, Gary

    2014-01-01

The dosimetric accuracies of CATPhan 504 and CIRS 062 were evaluated using the kV-CBCT of a Varian TrueBeam linac and the Eclipse TPS. The assessment used the kV-CBCT as a standalone tool for dosimetric calculations toward adaptive replanning. Dosimetric calculations were made without altering the HU-ED curves of the planning computed tomography (CT) scanner used by the Eclipse TPS. All computations used the images and dataset from kV-CBCT while maintaining the HU-ED calibration curve of the planning CT (pCT), assuming the pCT was used for the initial treatment plan. Results showed that the CIRS phantom produces doses within ±5% of the CT-based plan while CATPhan 504 produces a variation of ±14% of the CT-based plan. PMID:25190991

  17. When Does Choice of Accuracy Measure Alter Imputation Accuracy Assessments?

    PubMed Central

    Ramnarine, Shelina; Zhang, Juan; Chen, Li-Shiun; Culverhouse, Robert; Duan, Weimin; Hancock, Dana B.; Hartz, Sarah M.; Johnson, Eric O.; Olfson, Emily; Schwantes-An, Tae-Hwi; Saccone, Nancy L.

    2015-01-01

    Imputation, the process of inferring genotypes for untyped variants, is used to identify and refine genetic association findings. Inaccuracies in imputed data can distort the observed association between variants and a disease. Many statistics are used to assess accuracy; some compare imputed to genotyped data and others are calculated without reference to true genotypes. Prior work has shown that the Imputation Quality Score (IQS), which is based on Cohen’s kappa statistic and compares imputed genotype probabilities to true genotypes, appropriately adjusts for chance agreement; however, it is not commonly used. To identify differences in accuracy assessment, we compared IQS with concordance rate, squared correlation, and accuracy measures built into imputation programs. Genotypes from the 1000 Genomes reference populations (AFR N = 246 and EUR N = 379) were masked to match the typed single nucleotide polymorphism (SNP) coverage of several SNP arrays and were imputed with BEAGLE 3.3.2 and IMPUTE2 in regions associated with smoking behaviors. Additional masking and imputation was conducted for sequenced subjects from the Collaborative Genetic Study of Nicotine Dependence and the Genetic Study of Nicotine Dependence in African Americans (N = 1,481 African Americans and N = 1,480 European Americans). Our results offer further evidence that concordance rate inflates accuracy estimates, particularly for rare and low frequency variants. For common variants, squared correlation, BEAGLE R2, IMPUTE2 INFO, and IQS produce similar assessments of imputation accuracy. However, for rare and low frequency variants, compared to IQS, the other statistics tend to be more liberal in their assessment of accuracy. IQS is important to consider when evaluating imputation accuracy, particularly for rare and low frequency variants. PMID:26458263
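The inflation of concordance rate for rare variants is easy to see numerically: when one genotype dominates, naive agreement is high even for an uninformative imputation, while a kappa-style statistic (the idea underlying IQS, which additionally weights by imputed genotype probabilities) corrects for that chance agreement. A toy sketch with invented hard calls, not the study's data:

```python
def concordance(true, imputed):
    """Naive agreement rate between true and imputed genotype calls."""
    return sum(t == i for t, i in zip(true, imputed)) / len(true)

def kappa(true, imputed):
    """Cohen's kappa: agreement corrected for chance agreement."""
    n = len(true)
    po = concordance(true, imputed)
    cats = set(true) | set(imputed)
    pe = sum((sum(t == c for t in true) / n) * (sum(i == c for i in imputed) / n)
             for c in cats)
    return (po - pe) / (1 - pe)

# Rare variant: 98 major-homozygote calls, 2 carriers. An imputation that
# always calls the major genotype still scores 98% concordance ...
true = [0] * 98 + [1] * 2
imputed = [0] * 100
print(concordance(true, imputed))  # 0.98
print(kappa(true, imputed))        # 0.0 -- no information beyond chance
```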

  18. Ground Truth Sampling and LANDSAT Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Robinson, J. W.; Gunther, F. J.; Campbell, W. J.

    1982-01-01

It is noted that the key factor in any accuracy assessment of remote sensing data is the method used for determining the ground truth, independent of the remote sensing data itself. The sampling and accuracy procedures developed for a nuclear power plant siting study are described. The purpose of the sampling procedure was to provide data for developing supervised classifications for two study sites and for assessing the accuracy of that and of the other procedures used. The purpose of the accuracy assessment was to allow comparison of the cost and accuracy of the various classification procedures as applied to various data types.

  19. Arizona Vegetation Resource Inventory (AVRI) accuracy assessment

    USGS Publications Warehouse

    Szajgin, John; Pettinger, L.R.; Linden, D.S.; Ohlen, D.O.

    1982-01-01

A quantitative accuracy assessment was performed for the vegetation classification map produced as part of the Arizona Vegetation Resource Inventory (AVRI) project. This project was a cooperative effort between the Bureau of Land Management (BLM) and the Earth Resources Observation Systems (EROS) Data Center. The objective of the accuracy assessment was to estimate (with a precision of ±10 percent at the 90 percent confidence level) the commission error in each of the eight level II hierarchical vegetation cover types. A stratified two-phase (double) cluster sample was used. Phase I consisted of 160 photointerpreted plots representing clusters of Landsat pixels, and phase II consisted of ground data collection at 80 of the phase I cluster sites. Ground data were used to refine the phase I error estimates by means of a linear regression model. The classified image was stratified by assigning each 15-pixel cluster to the stratum corresponding to the dominant cover type within each cluster. This method is known as stratified plurality sampling. Overall error was estimated to be 36 percent with a standard error of 2 percent. Estimated error for individual vegetation classes ranged from a low of 10 ±6 percent for evergreen woodland to 81 ±7 percent for cropland and pasture. Total cost of the accuracy assessment was $106,950 for the one-million-hectare study area. The combination of the stratified plurality sampling (SPS) method of sample allocation with double sampling provided the desired estimates within the required precision levels. The overall accuracy results confirmed that highly accurate digital classification of vegetation is difficult to perform in semiarid environments, due largely to the sparse vegetation cover. Nevertheless, these techniques show promise for providing more accurate information than is presently available for many BLM-administered lands.

  20. Skinfold Assessment: Accuracy and Application

    ERIC Educational Resources Information Center

    Ball, Stephen; Swan, Pamela D.; Altena, Thomas S.

    2006-01-01

    Although not perfect, skinfolds (SK), or the measurement of fat under the skin, remains the most popular and practical method available to assess body composition on a large scale (Kuczmarski, Flegal, Campbell, & Johnson, 1994). Even for practitioners who have been using SK for years and are highly proficient at locating the correct anatomical…

  1. Charts of operational process specifications ("OPSpecs charts") for assessing the precision, accuracy, and quality control needed to satisfy proficiency testing performance criteria.

    PubMed

    Westgard, J O

    1992-07-01

    "Operational process specifications" have been derived from an analytical quality-planning model to assess the precision, accuracy, and quality control (QC) needed to satisfy Proficiency Testing (PT) criteria. These routine operating specifications are presented in the form of an "OPSpecs chart," which describes the operational limits for imprecision and inaccuracy when a desired level of quality assurance is provided by a specific QC procedure. OPSpecs charts can be used to compare the operational limits for different QC procedures and to select a QC procedure that is appropriate for the precision and accuracy of a specific measurement procedure. To select a QC procedure, one plots the inaccuracy and imprecision observed for a measurement procedure on the OPSpecs chart to define the current operating point, which is then compared with the operational limits of candidate QC procedures. Any QC procedure whose operational limits are greater than the measurement procedure's operating point will provide a known assurance, with the percent chance specified by the OPSpecs chart, that critical analytical errors will be detected. OPSpecs charts for a 10% PT criterion are presented to illustrate the selection of QC procedures for measurement procedures with different amounts of imprecision and inaccuracy. Normalized OPSpecs charts are presented to permit a more general assessment of the analytical performance required with commonly used QC procedures.

  2. Alaska national hydrography dataset positional accuracy assessment study

    USGS Publications Warehouse

    Arundel, Samantha; Yamamoto, Kristina H.; Constance, Eric; Mantey, Kim; Vinyard-Houx, Jeremy

    2013-01-01

Initial visual assessments showed a wide range in the quality of fit between features in the NHD and these new image sources. No statistical analysis has been performed to actually quantify accuracy. Determining absolute accuracy is cost prohibitive (independent, well-defined test points must be collected), but quantitative analysis of relative positional error is feasible.

  3. A fast RCS accuracy assessment method for passive radar calibrators

    NASA Astrophysics Data System (ADS)

    Zhou, Yongsheng; Li, Chuanrong; Tang, Lingli; Ma, Lingling; Liu, Qi

    2016-10-01

In microwave radar radiometric calibration, the corner reflector acts as the standard reference target, but its structure is usually deformed during transportation and installation, or deformed by wind and gravity while permanently installed outdoors, which decreases the RCS accuracy and therefore the radiometric calibration accuracy. A fast RCS accuracy measurement method based on a 3-D measuring instrument and RCS simulation was proposed in this paper for tracking the characteristic variation of the corner reflector. In the first step, an RCS simulation algorithm was selected and its simulation accuracy assessed. In the second step, the 3-D measuring instrument was selected and its measuring accuracy evaluated. Once the accuracy of the selected RCS simulation algorithm and 3-D measuring instrument was adequate for the RCS accuracy assessment, the 3-D structure of the corner reflector was obtained by the 3-D measuring instrument, and the RCSs of the obtained 3-D structure and the corresponding ideal structure were calculated based on the selected RCS simulation algorithm. The final RCS accuracy was the absolute difference of the two RCS calculation results. The advantage of the proposed method is that it can be applied outdoors easily, avoiding the correlation among the plate edge length error, plate orthogonality error, and plate curvature error. The accuracy of this method is higher than that of the method using the distortion equation. At the end of the paper, a measurement example is presented to show the performance of the proposed method.

  4. Assessment of the Thematic Accuracy of Land Cover Maps

    NASA Astrophysics Data System (ADS)

    Höhle, J.

    2015-08-01

    Several land cover maps are generated from aerial imagery and assessed by different approaches. The test site is an urban area in Europe for which six classes (`building', `hedge and bush', `grass', `road and parking lot', `tree', `wall and car port') had to be derived. Two classification methods were applied (`Decision Tree' and `Support Vector Machine') using only two attributes (height above ground and normalized difference vegetation index), both of which are derived from the images. The assessment of the thematic accuracy applied a stratified design and was based on accuracy measures such as user's and producer's accuracy and the kappa coefficient. In addition, confidence intervals were computed for several accuracy measures. The achieved accuracies and confidence intervals are thoroughly analysed and recommendations are derived from the experience gained. Reliable reference values are obtained using stereovision, false-colour image pairs, and positioning to the checkpoints with 3D coordinates. The influence of the training areas on the results is studied. Cross validation has been tested with a few reference points in order to derive approximate accuracy measures. The two classification methods perform equally for five classes. Trees are classified with much better accuracy and a smaller confidence interval by means of the decision tree method. Buildings are classified by both methods with an accuracy of 99% (95% CI: 95%-100%) using independent 3D checkpoints. The average width of the confidence interval of the six classes was 14% of the user's accuracy.
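The accuracy measures named above (overall, user's and producer's accuracy, kappa coefficient, and a confidence interval) can all be derived from a confusion matrix. The counts below are hypothetical, not the paper's data, and the confidence interval uses a simple normal approximation.

```python
# Rows = map (classified) labels, columns = reference labels.
# Hypothetical counts for three classes.
cm = [[95,  2,  3],
      [ 4, 88,  8],
      [ 1, 10, 89]]

k = len(cm)
n = sum(sum(row) for row in cm)
diag = [cm[i][i] for i in range(k)]
row_tot = [sum(row) for row in cm]
col_tot = [sum(cm[i][j] for i in range(k)) for j in range(k)]

overall = sum(diag) / n                              # overall accuracy
users = [diag[i] / row_tot[i] for i in range(k)]     # user's accuracy
producers = [diag[j] / col_tot[j] for j in range(k)] # producer's accuracy

# Cohen's kappa corrects overall accuracy for chance agreement
pe = sum(r * c for r, c in zip(row_tot, col_tot)) / n**2
kappa = (overall - pe) / (1 - pe)

# Approximate 95% confidence half-width for the first user's accuracy
p, m = users[0], row_tot[0]
half_width = 1.96 * (p * (1 - p) / m) ** 0.5
print(overall, kappa, p, half_width)
```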

  5. Evaluating the Accuracy of Pharmacy Students' Self-Assessment Skills

    PubMed Central

    Gregory, Paul A. M.

    2007-01-01

    Objectives To evaluate the accuracy of self-assessment skills of senior-level bachelor of science pharmacy students. Methods A method proposed by Kruger and Dunning involving comparisons of pharmacy students' self-assessment with weighted average assessments of peers, standardized patients, and pharmacist-instructors was used. Results Eighty students participated in the study. Differences between self-assessment and external assessments were found across all performance quartiles. These differences were particularly large and significant in the third and fourth (lowest) quartiles and particularly marked in the areas of empathy, and logic/focus/coherence of interviewing. Conclusions The quality and accuracy of pharmacy students' self-assessment skills were not as strong as expected, particularly given recent efforts to include self-assessment in the curriculum. Further work is necessary to ensure this important practice competency and life skill is at the level expected for professional practice and continuous professional development. PMID:17998986

  6. Positional Accuracy Assessment of Googleearth in Riyadh

    NASA Astrophysics Data System (ADS)

    Farah, Ashraf; Algarni, Dafer

    2014-06-01

    Google Earth is a virtual globe, map, and geographical information program provided by Google. It maps the Earth by superimposing images obtained from satellite imagery and aerial photography onto a 3D globe. With millions of users all around the globe, GoogleEarth® has become the ultimate source of spatial data and information for private and public decision-support systems, besides many types and forms of social interaction. Many users, mostly in developing countries, also use it for surveying applications, which raises questions about the positional accuracy of the Google Earth program. This research presents a small-scale assessment study of the positional accuracy of GoogleEarth® imagery in Riyadh, capital of the Kingdom of Saudi Arabia (KSA). The results show that the RMSE of the GoogleEarth imagery is 2.18 m and 1.51 m for the horizontal and height coordinates respectively.
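An RMSE figure of this kind is straightforward to reproduce from checkpoint residuals; the offsets below are invented for illustration, not the study's measurements.

```python
import math

def rmse(errors):
    """Root-mean-square error of a list of residuals."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Invented offsets (metres) between GPS-surveyed checkpoints and the
# same points digitised from the imagery
dE = [1.2, -0.8, 2.1, -1.5, 0.9]   # easting errors
dN = [0.7, -1.9, 1.1, 0.4, -1.3]   # northing errors
dH = [1.0, -1.6, 0.8, 2.0, -1.2]   # height errors

horizontal = [math.hypot(e, n) for e, n in zip(dE, dN)]
print(rmse(horizontal), rmse(dH))
```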

  7. Accuracy Assessment of Altimeter Derived Geostrophic Velocities

    NASA Astrophysics Data System (ADS)

    Leben, R. R.; Powell, B. S.; Born, G. H.; Guinasso, N. L.

    2002-12-01

    Along-track sea surface height anomaly gradients are proportional to cross-track geostrophic velocity anomalies, allowing satellite altimetry to provide much-needed observations of changes in the geostrophic component of surface ocean currents. Often, surface height gradients are computed from altimeter data archives that have been corrected to give the most accurate absolute sea level, a practice that may unnecessarily increase the error in the cross-track velocity anomalies and thereby require excessive smoothing to mitigate noise. Because differentiation along track acts as a high-pass filter, many of the path-length corrections applied to altimeter data for absolute height accuracy are unnecessary for the corresponding gradient calculations. We report on a study to investigate appropriate altimetric corrections and processing techniques for improving geostrophic velocity accuracy. Accuracy is assessed by comparing cross-track current measurements from two moorings placed along the descending TOPEX/POSEIDON ground track number 52 in the Gulf of Mexico to the corresponding altimeter velocity estimates. The buoys are deployed and maintained by the Texas Automated Buoy System (TABS) under Interagency Contracts with Texas A&M University. The buoys telemeter observations in near real-time via satellite to the TABS station located at the Geochemical and Environmental Research Group (GERG) at Texas A&M. Buoy M is located in shelf waters of 57 m depth, with a second, Buoy N, 38 km away on the shelf break at 105 m depth. Buoy N has been operational since the beginning of 2002 and has a current meter at 2 m depth providing in situ measurements of surface velocities coincident with Jason and TOPEX/POSEIDON altimeter overflights. This allows one of the first detailed comparisons of shallow-water, near-surface current meter time series to coincident altimetry.
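A minimal sketch of the underlying geostrophy: the cross-track velocity anomaly is the along-track height gradient scaled by g/f, where f is the Coriolis parameter. The height difference, spacing, and latitude below are invented for illustration.

```python
import math

G = 9.81           # gravitational acceleration, m/s^2
OMEGA = 7.2921e-5  # Earth's rotation rate, rad/s

def cross_track_velocity(h1_m, h2_m, separation_m, lat_deg):
    """Cross-track geostrophic velocity anomaly (m/s) from two
    along-track sea surface height anomalies."""
    f = 2.0 * OMEGA * math.sin(math.radians(lat_deg))  # Coriolis parameter
    return (G / f) * (h2_m - h1_m) / separation_m

# Invented example: a 5 cm height change over 7 km along track at 28 N
print(cross_track_velocity(0.00, 0.05, 7000.0, 28.0))
```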

  8. Accuracy of quantitative visual soil assessment

    NASA Astrophysics Data System (ADS)

    van Leeuwen, Maricke; Heuvelink, Gerard; Stoorvogel, Jetse; Wallinga, Jakob; de Boer, Imke; van Dam, Jos; van Essen, Everhard; Moolenaar, Simon; Verhoeven, Frank; Stoof, Cathelijne

    2016-04-01

    Visual soil assessment (VSA) is a method to assess soil quality visually while standing in the field. VSA is increasingly used by farmers, farm organisations and companies, because it is rapid and cost-effective, and because looking at soil provides understanding about soil functioning. VSA is often regarded as subjective, so there is a need to verify it. Also, many VSAs have not been fine-tuned for contrasting soil types, which could lead to wrong interpretation of soil quality and soil functioning when contrasting sites are compared to each other. We wanted to assess the accuracy of VSA while taking soil type into account. The first objective was to test whether quantitative visual field observations, which form the basis of many VSAs, could be validated with standardized field or laboratory measurements. The second objective was to assess whether quantitative visual field observations are reproducible when used by observers with contrasting backgrounds. For the validation study, we made quantitative visual observations at 26 cattle farms. Farms were located on sand, clay and peat soils in the North Friesian Woodlands, the Netherlands. The quantitative visual observations evaluated were grass cover, number of biopores, number of roots, soil colour, soil structure, number of earthworms, number of gley mottles and soil compaction. Linear regression analysis showed that four out of eight quantitative visual observations could be well validated with standardized field or laboratory measurements: grass cover with classified images of surface cover; number of roots with root dry weight; amount of large structure elements with mean weight diameter; and soil colour with soil organic matter content. Correlation coefficients were greater than 0.3, and half of the correlations were significant. 
For the reproducibility study, a group of 9 soil scientists and 7
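The validation step described above can be sketched as a simple correlation between a visual score and a laboratory measurement; the paired values below are invented for illustration.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired scores: visual root count vs. root dry weight (g)
visual = [3, 5, 2, 8, 6, 4, 7]
lab = [0.9, 1.4, 0.7, 2.2, 1.5, 1.1, 1.9]
print(pearson_r(visual, lab))
```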

  9. Classification accuracy of actuarial risk assessment instruments.

    PubMed

    Neller, Daniel J; Frederick, Richard I

    2013-01-01

    Users of commonly employed actuarial risk assessment instruments (ARAIs) hope to generate numerical probability statements about risk; however, ARAI manuals often do not explicitly report data that are essential for understanding the classification accuracy of the instruments. In addition, ARAI manuals often contain data that have the potential for misinterpretation. The authors of the present article address the accurate generation of probability statements. First, they illustrate how reporting numerical probability statements based on proportions rather than predictive values can mislead users of ARAIs. Next, they report essential test characteristics that, to date, have gone largely unreported in ARAI manuals. They then discuss a graphing method that can enhance the practice of clinicians who communicate risk via numerical probability statements. After reviewing several strategies for selecting optimal cut-off scores, the authors show how the graphing method can be used to estimate positive predictive values for each cut-off score of commonly used ARAIs, across all possible base rates. They also show how the graphing method can be used to estimate base rates of violent recidivism in local samples.
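The core point, that raw proportions and predictive values diverge as the base rate changes, can be sketched with Bayes' rule. The sensitivity and specificity below are hypothetical, not taken from any ARAI manual.

```python
def positive_predictive_value(sensitivity, specificity, base_rate):
    """PPV for a given cut-off score and local base rate (Bayes' rule)."""
    tp = sensitivity * base_rate                 # true-positive mass
    fp = (1.0 - specificity) * (1.0 - base_rate) # false-positive mass
    return tp / (tp + fp)

# The same cut-off yields very different PPVs at different base rates
for base in (0.1, 0.3, 0.5):
    print(base, positive_predictive_value(0.70, 0.80, base))
```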

  10. A Framework for the Objective Assessment of Registration Accuracy

    PubMed Central

    Simonetti, Flavio; Foroni, Roberto Israel

    2014-01-01

    Validation and accuracy assessment are the main bottlenecks preventing the adoption of image processing algorithms in clinical practice. In the classical approach, a posteriori analysis is performed through objective metrics. In this work, a different approach based on Petri nets is proposed. The basic idea consists in predicting the accuracy of a given pipeline based on the identification and characterization of the sources of inaccuracy. The concept is demonstrated on a case study: intrasubject rigid and affine registration of magnetic resonance images. Both synthetic and real data are considered. While synthetic data allow benchmarking of the performance with respect to the ground truth, real data make it possible to assess the robustness of the methodology in real contexts as well as to determine the suitability of using synthetic data in the training phase. Results revealed a higher correlation and a lower dispersion among the metrics for simulated data, while the opposite trend was observed for pathological data. Results show that the proposed model not only provides good prediction performance but also leads to the optimization of the end-to-end chain in terms of accuracy and robustness, setting the ground for its generalization to different and more complex scenarios. PMID:24659997

  11. Accuracy Assessment for AG500, Electromagnetic Articulograph

    ERIC Educational Resources Information Center

    Yunusova, Yana; Green, Jordan R.; Mefferd, Antje

    2009-01-01

    Purpose: The goal of this article was to evaluate the accuracy and reliability of the AG500 (Carstens Medizinelectronik, Lenglern, Germany), an electromagnetic device developed recently to register articulatory movements in three dimensions. This technology seems to have unprecedented capabilities to provide rich information about time-varying…

  12. Accuracy of Surgery Clerkship Performance Raters.

    ERIC Educational Resources Information Center

    Littlefield, John H.; And Others

    1991-01-01

    Interrater reliability in numerical ratings of clerkship performance (n=1,482 students) in five surgery programs was studied. Raters were classified as accurate or moderately or significantly stringent or lenient. Results indicate that increasing the proportion of accurate raters would substantially improve the precision of class rankings. (MSE)

  13. Accuracy assessment of NLCD 2006 land cover and impervious surface

    USGS Publications Warehouse

    Wickham, James D.; Stehman, Stephen V.; Gass, Leila; Dewitz, Jon; Fry, Joyce A.; Wade, Timothy G.

    2013-01-01

    Release of NLCD 2006 provides the first wall-to-wall land-cover change database for the conterminous United States from Landsat Thematic Mapper (TM) data. Accuracy assessment of NLCD 2006 focused on four primary products: 2001 land cover, 2006 land cover, land-cover change between 2001 and 2006, and impervious surface change between 2001 and 2006. The accuracy assessment was conducted by selecting a stratified random sample of pixels with the reference classification interpreted from multi-temporal high resolution digital imagery. The NLCD Level II (16 classes) overall accuracies for the 2001 and 2006 land cover were 79% and 78%, respectively, with Level II user's accuracies exceeding 80% for water, high density urban, all upland forest classes, shrubland, and cropland for both dates. Level I (8 classes) accuracies were 85% for NLCD 2001 and 84% for NLCD 2006. The high overall and user's accuracies for the individual dates translated into high user's accuracies for the 2001–2006 change reporting themes water gain and loss, forest loss, urban gain, and the no-change reporting themes for water, urban, forest, and agriculture. The main factor limiting higher accuracies for the change reporting themes appeared to be difficulty in distinguishing the context of grass. We discuss the need for more research on land-cover change accuracy assessment.
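Under a stratified random sample like the one described, overall accuracy is estimated as an area-weighted sum of per-stratum agreement rates rather than a raw proportion of sampled pixels. The strata and counts below are hypothetical, not the NLCD assessment data.

```python
# Stratified estimator of overall accuracy: sum over strata of W_h * p_h,
# where W_h is the stratum's share of map area and p_h is the sample
# agreement rate inside the stratum. Numbers are invented.
strata = [
    # (area share W_h, sampled pixels n_h, pixels where map == reference)
    (0.55, 200, 176),   # e.g. forest
    (0.30, 200, 158),   # e.g. agriculture
    (0.15, 200, 130),   # e.g. urban
]

overall = sum(w * (agree / n) for w, n, agree in strata)
print(round(overall, 4))
```

Note how this differs from the unweighted pooled rate (464/600 here): the weights keep rare-but-oversampled strata from dominating the estimate.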

  14. Evaluating the Effect of Learning Style and Student Background on Self-Assessment Accuracy

    ERIC Educational Resources Information Center

    Alaoutinen, Satu

    2012-01-01

    This study evaluates a new taxonomy-based self-assessment scale and examines factors that affect assessment accuracy and course performance. The scale is based on Bloom's Revised Taxonomy and is evaluated by comparing students' self-assessment results with course performance in a programming course. Correlation has been used to reveal possible…

  15. Risk Assessment of Head Start Children with the Brigance K&1 Screen: Differential Performance by Sex, Age, and Predictive Accuracy for Early School Achievement and Special Education Placement.

    ERIC Educational Resources Information Center

    Mantzicopoulos, Panayota

    1999-01-01

    Examined differences in performance as well as reliability and validity indices for 256 Head Start children screened with Brigance K&1 screen. Found high overall test consistency, but considerable variability across subscales. Classification analyses established that the Brigance was not completely accurate in predicting early school…

  16. Thematic Accuracy Assessment of the 2011 National Land ...

    EPA Pesticide Factsheets

    Accuracy assessment is a standard protocol of National Land Cover Database (NLCD) mapping. Here we report agreement statistics between map and reference labels for NLCD 2011, which includes land cover for ca. 2001, ca. 2006, and ca. 2011. The two main objectives were assessment of agreement between map and reference labels for the three, single-date NLCD land cover products at Level II and Level I of the classification hierarchy, and agreement for 17 land cover change reporting themes based on Level I classes (e.g., forest loss; forest gain; forest, no change) for three change periods (2001–2006, 2006–2011, and 2001–2011). The single-date overall accuracies were 82%, 83%, and 83% at Level II and 88%, 89%, and 89% at Level I for 2011, 2006, and 2001, respectively. Many class-specific user's accuracies met or exceeded a previously established nominal accuracy benchmark of 85%. Overall accuracies for 2006 and 2001 land cover components of NLCD 2011 were approximately 4% higher (at Level II and Level I) than the overall accuracies for the same components of NLCD 2006. The high Level I overall, user's, and producer's accuracies for the single-date eras in NLCD 2011 did not translate into high class-specific user's and producer's accuracies for many of the 17 change reporting themes. User's accuracies were high for the no change reporting themes, commonly exceeding 85%, but were typically much lower for the reporting themes that represented change. Only forest l

  17. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Carter, Delano R.; Todirita, Monica; Kronenwetter, Jeffrey; Chu, Donald

    2016-01-01

    The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. Error comes both from outside the magnetometers, e.g. spacecraft fields and misalignments, as well as inside, e.g. zero offset and scale factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. Both behave as expected, and their accuracy predictions agree within 30%. With the proposed calibration regimen, both suggest that the GOES-R magnetometer subsystem will meet its accuracy requirements.
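A minimal sketch of how a Monte Carlo run can corroborate a covariance-style (analytic) prediction of a "mean plus 3 sigma" accuracy metric for one axis. The 0.3 nT bias and 0.4 nT Gaussian noise are an invented error budget, not the GOES-R one.

```python
import random
import statistics

# Single-axis error model: fixed bias plus zero-mean Gaussian noise
random.seed(1)
BIAS_NT, SIGMA_NT = 0.3, 0.4

# Monte Carlo estimate of the |mean| + 3*sigma accuracy metric
errors = [BIAS_NT + random.gauss(0.0, SIGMA_NT) for _ in range(100_000)]
mc_metric = abs(statistics.fmean(errors)) + 3.0 * statistics.stdev(errors)

# Analytic (covariance-style) prediction of the same metric
analytic = abs(BIAS_NT) + 3.0 * SIGMA_NT

print(mc_metric, analytic)  # the two estimates should agree closely
```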

  18. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Kronenwetter, Jeffrey; Carter, Delano R.; Todirita, Monica; Chu, Donald

    2016-01-01

    The GOES-R magnetometer accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. To achieve this, the sensor itself has better than 1 nT accuracy. Because zero offset and scale factor drift over time, it is also necessary to perform annual calibration maneuvers. To predict performance, we used covariance analysis and attempted to corroborate it with simulations. Although not perfect, the two generally agree and show the expected behaviors. With the annual calibration regimen, these predictions suggest that the magnetometers will meet their accuracy requirements.

  19. Assessing the Accuracy of Ancestral Protein Reconstruction Methods

    PubMed Central

    Williams, Paul D; Pollock, David D; Blackburne, Benjamin P; Goldstein, Richard A

    2006-01-01

    The phylogenetic inference of ancestral protein sequences is a powerful technique for the study of molecular evolution, but any conclusions drawn from such studies are only as good as the accuracy of the reconstruction method. Every inference method leads to errors in the ancestral protein sequence, resulting in potentially misleading estimates of the ancestral protein's properties. To assess the accuracy of ancestral protein reconstruction methods, we performed computational population evolution simulations featuring near-neutral evolution under purifying selection, speciation, and divergence using an off-lattice protein model where fitness depends on the ability to be stable in a specified target structure. We were thus able to compare the thermodynamic properties of the true ancestral sequences with the properties of “ancestral sequences” inferred by maximum parsimony, maximum likelihood, and Bayesian methods. Surprisingly, we found that methods such as maximum parsimony and maximum likelihood that reconstruct a “best guess” amino acid at each position overestimate thermostability, while a Bayesian method that sometimes chooses less-probable residues from the posterior probability distribution does not. Maximum likelihood and maximum parsimony apparently tend to eliminate variants at a position that are slightly detrimental to structural stability simply because such detrimental variants are less frequent. Other properties of ancestral proteins might be similarly overestimated. This suggests that ancestral reconstruction studies require greater care to come to credible conclusions regarding functional evolution. Inferred functional patterns that mimic reconstruction bias should be reevaluated. PMID:16789817

  20. 20 CFR 404.1643 - Performance accuracy standard.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 2 2013-04-01 2013-04-01 false Performance accuracy standard. 404.1643 Section 404.1643 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND... stepping stones to progress towards our targeted level of performance. (d) Threshold levels. The...

  1. 20 CFR 404.1643 - Performance accuracy standard.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Performance accuracy standard. 404.1643 Section 404.1643 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND... stepping stones to progress towards our targeted level of performance. (d) Threshold levels. The...

  2. 20 CFR 404.1643 - Performance accuracy standard.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 2 2014-04-01 2014-04-01 false Performance accuracy standard. 404.1643 Section 404.1643 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND... stepping stones to progress towards our targeted level of performance. (d) Threshold levels. The...

  3. 20 CFR 404.1643 - Performance accuracy standard.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 2 2012-04-01 2012-04-01 false Performance accuracy standard. 404.1643 Section 404.1643 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND... stepping stones to progress towards our targeted level of performance. (d) Threshold levels. The...

  4. Wind Forecast Accuracy and PADS Performance Assessment

    DTIC Science & Technology

    2007-05-01

    Therefore a class of models was developed in ASTRAL software that accurately simulates generic navigation logic with simplified guidance and control...on the systems (1st-order dynamics with 3 DoF, as opposed to the 2nd-order 6-DoF dynamics model also available in ASTRAL), and are highly...Assuming the system guidance is effective, the ASTRAL models allow simulating a mission plan with an "expected" wind forecast, then simulating an airdrop

  5. Assessing and ensuring GOES-R magnetometer accuracy

    NASA Astrophysics Data System (ADS)

    Carter, Delano; Todirita, Monica; Kronenwetter, Jeffrey; Dahya, Melissa; Chu, Donald

    2016-05-01

    The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma error per axis. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma error per axis. Error comes both from outside the magnetometers, e.g. spacecraft fields and misalignments, as well as inside, e.g. zero offset and scale factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. With the proposed calibration regimen, both suggest that the magnetometer subsystem will meet its accuracy requirements.

  6. Performance Assessment: Lessons from Performers

    ERIC Educational Resources Information Center

    Parkes, Kelly A.

    2010-01-01

    The performing arts studio is a highly complex learning setting, and assessing student outcomes relative to reliable and valid standards has presented challenges to this teaching and learning method. Building from the general international higher education literature, this article illustrates details, processes, and solutions, drawing on…

  7. 20 CFR 416.1043 - Performance accuracy standard.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Performance accuracy standard. 416.1043 Section 416.1043 Employees' Benefits SOCIAL SECURITY ADMINISTRATION SUPPLEMENTAL SECURITY INCOME FOR THE... have been in the file but was not included, even though its inclusion does not change the result in...

  8. 20 CFR 404.1643 - Performance accuracy standard.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Performance accuracy standard. 404.1643 Section 404.1643 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND... have been in the file but was not included, even though its inclusion does not change the result in...

  9. Impulsivity and Speed-Accuracy Strategies in Intelligence Test Performance.

    ERIC Educational Resources Information Center

    Phillips, Louise H.; Rabbitt, Patrick M. A.

    1995-01-01

    Whether relations between intelligence test performance and information processing measures depend on individual differences in speed-accuracy preferences rather than capacity limitations and whether the impact of strategic variables changes with increasing age or extraversion was studied with 83 adults ages 50 to 79 years. Results are discussed…

  10. Modelling Second Language Performance: Integrating Complexity, Accuracy, Fluency, and Lexis

    ERIC Educational Resources Information Center

    Skehan, Peter

    2009-01-01

    Complexity, accuracy, and fluency have proved useful measures of second language performance. The present article will re-examine these measures themselves, arguing that fluency needs to be rethought if it is to be measured effectively, and that the three general measures need to be supplemented by measures of lexical use. Building upon this…

  11. On the accuracy assessment of Laplacian models in MPS

    NASA Astrophysics Data System (ADS)

    Ng, K. C.; Hwang, Y. H.; Sheu, T. W. H.

    2014-10-01

    From the basis of the Gauss divergence theorem applied on a circular control volume that was put forward by Isshiki (2011) in deriving the MPS-based differential operators, a more general Laplacian model is deduced in the current work, which involves the proposal of an altered kernel function. The Laplacians of several functions are evaluated, and the accuracies of various MPS Laplacian models in solving the Poisson equation subjected to both Dirichlet and Neumann boundary conditions are assessed. For regular grids, the Laplacian model with smaller N is generally more accurate, owing to the reduction of the leading errors due to the higher-order derivatives appearing in the modified equation. For irregular grids, an optimal N value exists that ensures better global accuracy, and this optimal value increases for cases employing highly irregular grids. Finally, the accuracies of these MPS Laplacian models are assessed in an incompressible flow problem.
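For reference, a minimal sketch of the standard MPS Laplacian model (with the common kernel w(r) = re/r - 1), not the altered kernel proposed in the paper. On a regular 2-D particle arrangement a symmetric neighbourhood reproduces the Laplacian of a quadratic exactly; the grid spacing and support radius are arbitrary choices for the example.

```python
import math

def mps_laplacian(phi, pts, i, re):
    """Standard MPS Laplacian model: lap(phi)_i ~ (2d / (lam * n0)) *
    sum_j (phi_j - phi_i) * w(r_ij), with kernel w(r) = re/r - 1."""
    d = len(pts[0])
    n0 = lam = acc = 0.0
    for j, pj in enumerate(pts):
        if j == i:
            continue
        r = math.dist(pts[i], pj)
        if r >= re:
            continue
        w = re / r - 1.0
        n0 += w
        lam += w * r * r          # accumulates w * r^2; normalised below
        acc += w * (phi[j] - phi[i])
    lam /= n0
    return 2.0 * d / (lam * n0) * acc

# Regular 2-D grid of particles; for phi = x^2 + y^2 the true Laplacian
# is 4, and a symmetric neighbourhood reproduces it exactly
h = 0.1
pts = [(x * h, y * h) for x in range(-3, 4) for y in range(-3, 4)]
phi = [x * x + y * y for x, y in pts]
center = pts.index((0.0, 0.0))
print(mps_laplacian(phi, pts, center, re=2.1 * h))
```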

  12. ASSESSING ACCURACY OF NET CHANGE DERIVED FROM LAND COVER MAPS

    EPA Science Inventory

    Net change derived from land-cover maps provides important descriptive information for environmental monitoring and is often used as an input or explanatory variable in environmental models. The sampling design and analysis for assessing net change accuracy differ from traditio...

  13. Estimating Classification Consistency and Accuracy for Cognitive Diagnostic Assessment

    ERIC Educational Resources Information Center

    Cui, Ying; Gierl, Mark J.; Chang, Hua-Hua

    2012-01-01

    This article introduces procedures for the computation and asymptotic statistical inference for classification consistency and accuracy indices specifically designed for cognitive diagnostic assessments. The new classification indices can be used as important indicators of the reliability and validity of classification results produced by…

  14. [Navigation in implantology: Accuracy assessment regarding the literature].

    PubMed

    Barrak, Ibrahim Ádám; Varga, Endre; Piffko, József

    2016-06-01

    Our objective was to assess the literature regarding the accuracy of the different static guided systems. An electronic literature search yielded 661 articles. After reviewing 139 articles, the authors chose 52 articles for full-text evaluation; 24 studies involved accuracy measurements. Fourteen of the selected references were clinical and ten were in vitro (model or cadaver). Variance analysis (Tukey's post-hoc test; p < 0.05) was conducted to summarize the selected publications. Across the 2819 reported results, the average mean error at the entry point was 0.98 mm. At the level of the apex the average deviation was 1.29 mm, while the mean angular deviation was 3.96 degrees. A significant difference could be observed between the two methods of implant placement (partially and fully guided sequences) in terms of deviation at the entry point, apex and angular deviation. Different levels of quality and quantity of evidence were available for assessing the accuracy of the different computer-assisted implant placement systems. The rapidly evolving field of digital dentistry and new developments will further improve the accuracy of guided implant placement. In the interest of being able to draw dependable conclusions, and for further evaluation of the parameters used for accuracy measurements, randomized, controlled single- or multi-center clinical trials are necessary.
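The three reported quantities, entry-point deviation, apex deviation and angular deviation, can be computed directly from planned and placed implant coordinates; the coordinates below are invented for illustration.

```python
import math

def implant_deviations(planned_entry, planned_apex, placed_entry, placed_apex):
    """Entry-point deviation (mm), apex deviation (mm) and angular
    deviation (degrees) between planned and placed implant axes."""
    def axis(entry, apex):
        # Unit vector pointing from the entry point toward the apex
        v = [b - a for a, b in zip(entry, apex)]
        norm = math.sqrt(sum(c * c for c in v))
        return [c / norm for c in v]

    u = axis(planned_entry, planned_apex)
    w = axis(placed_entry, placed_apex)
    cosang = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, w))))
    return (math.dist(planned_entry, placed_entry),
            math.dist(planned_apex, placed_apex),
            math.degrees(math.acos(cosang)))

# Hypothetical coordinates (mm) from a plan and a postoperative scan
print(implant_deviations((0, 0, 0), (0, 0, -10),
                         (0.8, 0.4, 0.1), (1.5, 0.9, -9.8)))
```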

  15. Examining rating quality in writing assessment: rater agreement, error, and accuracy.

    PubMed

    Wind, Stefanie A; Engelhard, George

    2012-01-01

    The use of performance assessments in which human raters evaluate student achievement has become increasingly prevalent in high-stakes assessment systems such as those associated with recent policy initiatives (e.g., Race to the Top). In this study, indices of rating quality are compared between two measurement perspectives. Within the context of a large-scale writing assessment, this study focuses on the alignment between indices of rater agreement, error, and accuracy based on traditional and Rasch measurement theory perspectives. Major empirical findings suggest that Rasch-based indices of model-data fit for ratings provide information about raters that is comparable to direct measures of accuracy. The use of easily obtained approximations of direct accuracy measures holds significant implications for monitoring rating quality in large-scale rater-mediated performance assessments.

  16. Clinical assessment of intraarterial blood gas monitor accuracy

    NASA Astrophysics Data System (ADS)

    Aziz, Salim; Spiess, R.; Roby, Paul; Kenny, Margaret

    1993-08-01

    The accuracy of intraarterial blood gas monitoring (IABGM) devices is challenging to assess under routine clinical conditions. When comparing discrete measurements by blood gas analyzer (BGA) to IABGM values, it is important that the BGA determinations (reference method) be as accurate as possible. In vitro decay of gas tensions caused by delay in BGA analysis is particularly problematic for specimens with high arterial oxygen tension (PaO2) values. Clinical instability of blood gases in the acutely ill patient may cause disagreement between BGA and IABGM values because of IABGM response time lag, particularly in the measurement of arterial blood carbon dioxide tension (PaCO2). We recommend that clinical assessments of IABGM accuracy by comparison with BGA use multiple bedside BGA instruments, and that blood sampling only occur during periods when IABGM values appear stable.

  17. Survey methods for assessing land cover map accuracy

    USGS Publications Warehouse

    Nusser, S.M.; Klaas, E.E.

    2003-01-01

    The increasing availability of digital photographic materials has fueled efforts by agencies and organizations to generate land cover maps for states, regions, and the United States as a whole. Regardless of the information sources and classification methods used, land cover maps are subject to numerous sources of error. In order to understand the quality of the information contained in these maps, it is desirable to generate statistically valid estimates of accuracy rates describing misclassification errors. We explored a full sample survey framework for creating accuracy assessment study designs that balance statistical and operational considerations in relation to study objectives for a regional assessment of GAP land cover maps. We focused not only on appropriate sample designs and estimation approaches, but on aspects of the data collection process, such as gaining cooperation of land owners and using pixel clusters as an observation unit. The approach was tested in a pilot study to assess the accuracy of Iowa GAP land cover maps. A stratified two-stage cluster sampling design addressed sample size requirements for land covers and the need for geographic spread while minimizing operational effort. Recruitment methods used for private land owners yielded high response rates, minimizing a source of nonresponse error. Collecting data for a 9-pixel cluster centered on the sampled pixel was simple to implement, and provided better information on rarer vegetation classes as well as substantial gains in precision relative to observing data at a single pixel.
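    The misclassification-error estimates described above are conventionally derived from an error (confusion) matrix. A minimal sketch, using a hypothetical 2-class matrix rather than the Iowa GAP data, computes overall, user's (commission), and producer's (omission) accuracy:

```python
# Hypothetical 2-class error matrix: rows = map class, cols = reference class
matrix = [[45, 5],
          [10, 40]]

n_total = sum(sum(row) for row in matrix)
overall_accuracy = sum(matrix[i][i] for i in range(len(matrix))) / n_total

# User's accuracy (commission error complement): correct / row total
users = [matrix[i][i] / sum(matrix[i]) for i in range(len(matrix))]

# Producer's accuracy (omission error complement): correct / column total
cols = [sum(row[j] for row in matrix) for j in range(len(matrix))]
producers = [matrix[j][j] / cols[j] for j in range(len(matrix))]
```

Under a stratified or cluster design, each cell would additionally be weighted by its inclusion probability before these ratios are formed; the unweighted version is shown only to make the three accuracy notions concrete.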

  18. Assessing Team Performance.

    ERIC Educational Resources Information Center

    Trimble, Susan; Rottier, Jerry

    Interdisciplinary middle school level teams capitalize on the idea that the whole is greater than the sum of its parts. Administrators and team members can maximize the advantages of teamwork using team assessments to increase the benefits for students, teachers, and the school environment. Assessing team performance can lead to high performing…

  19. Standardized accuracy assessment of the calypso wireless transponder tracking system

    NASA Astrophysics Data System (ADS)

    Franz, A. M.; Schmitt, D.; Seitel, A.; Chatrasingh, M.; Echner, G.; Oelfke, U.; Nill, S.; Birkfellner, W.; Maier-Hein, L.

    2014-11-01

    Electromagnetic (EM) tracking allows localization of small EM sensors in a magnetic field of known geometry without line-of-sight. However, this technique requires a cable connection to the tracked object. A wireless alternative based on magnetic fields, referred to as transponder tracking, has been proposed by several authors. Although most transponder tracking systems are still at an early stage of development and not yet ready for clinical use, Varian Medical Systems Inc. (Palo Alto, California, USA) offers the Calypso system for tumor tracking in radiation therapy, which includes transponder technology. However, it had not previously been used for computer-assisted interventions (CAI) in general, nor assessed for accuracy in a standardized manner. In this study, we apply the standardized assessment protocol presented by Hummel et al (2005 Med. Phys. 32 2371-9) to the Calypso system for the first time. The results show that transponder tracking with the Calypso system provides precision and accuracy below 1 mm in ideal clinical environments, which is comparable with other EM tracking systems. As with other systems, tracking accuracy was affected by metallic distortion, which led to errors of up to 3.2 mm. The potential of wireless transponder tracking technology for use in many future CAI applications can be regarded as extremely high.

  20. Accuracy assessment of fluoroscopy-transesophageal echocardiography registration

    NASA Astrophysics Data System (ADS)

    Lang, Pencilla; Seslija, Petar; Bainbridge, Daniel; Guiraudon, Gerard M.; Jones, Doug L.; Chu, Michael W.; Holdsworth, David W.; Peters, Terry M.

    2011-03-01

    This study assesses the accuracy of a new transesophageal (TEE) ultrasound (US) fluoroscopy registration technique designed to guide percutaneous aortic valve replacement. In this minimally invasive procedure, a valve is inserted into the aortic annulus via a catheter. Navigation and positioning of the valve is guided primarily by intra-operative fluoroscopy. Poor anatomical visualization of the aortic root region can result in incorrect positioning, leading to heart valve embolization, obstruction of the coronary ostia and acute kidney injury. The use of TEE US images to augment intra-operative fluoroscopy provides significant improvements to image-guidance. Registration is achieved using an image-based TEE probe tracking technique and US calibration. TEE probe tracking is accomplished using a single-perspective pose estimation algorithm. Pose estimation from a single image allows registration to be achieved using only images collected in standard OR workflow. Accuracy of this registration technique is assessed using three models: a point target phantom, a cadaveric porcine heart with implanted fiducials, and in-vivo porcine images. Results demonstrate that registration can be achieved with an RMS error of less than 1.5 mm, which is within the clinical accuracy requirement of 5 mm. US-fluoroscopy registration based on single-perspective pose estimation demonstrates promise as a method for providing guidance to percutaneous aortic valve replacement procedures. Future work will focus on real-time implementation and a visualization system that can be used in the operating room.

  1. Performance and Accuracy of LAPACK's Symmetric Tridiagonal Eigensolvers

    SciTech Connect

    Demmel, Jim W.; Marques, Osni A.; Parlett, Beresford N.; Vomel, Christof

    2007-04-19

    We compare four algorithms from the latest LAPACK 3.1 release for computing eigenpairs of a symmetric tridiagonal matrix. These include QR iteration, bisection and inverse iteration (BI), the Divide-and-Conquer method (DC), and the method of Multiple Relatively Robust Representations (MR). Our evaluation considers speed and accuracy when computing all eigenpairs, and additionally subset computations. Using a variety of carefully selected test problems, our study spans a variety of today's computer architectures. Our conclusions can be summarized as follows. (1) DC and MR are generally much faster than QR and BI on large matrices. (2) MR almost always does the fewest floating point operations, but at a lower MFlop rate than all the other algorithms. (3) The exact performance of MR and DC strongly depends on the matrix at hand. (4) DC and QR are the most accurate algorithms, with observed accuracy O(√n ε), where ε is the machine precision; the accuracy of BI and MR is generally O(n ε). (5) MR is preferable to BI for subset computations.
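    The accuracy metrics in conclusion (4) — residual norm and loss of orthogonality relative to machine precision — are easy to measure for any symmetric eigensolver. A sketch (not from the paper, and using NumPy's `eigh` rather than the individual LAPACK drivers) on a random symmetric tridiagonal matrix:

```python
import numpy as np

# Build a random symmetric tridiagonal test matrix
rng = np.random.default_rng(0)
n = 50
d = rng.standard_normal(n)        # diagonal entries
e = rng.standard_normal(n - 1)    # off-diagonal entries
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

w, V = np.linalg.eigh(T)          # eigenvalues w, eigenvectors as columns of V

# Scaled residual ||T V - V diag(w)|| / ||T|| and orthogonality loss
# ||V^T V - I||; both should be a modest multiple of n * machine epsilon.
residual = np.linalg.norm(T @ V - V * w) / np.linalg.norm(T)
orthogonality = np.linalg.norm(V.T @ V - np.eye(n))
```

Comparing these two norms across drivers (and across matrix classes) is essentially how the O(√n ε) versus O(n ε) distinction above is observed in practice.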

  2. The Social Accuracy Model of Interpersonal Perception: Assessing Individual Differences in Perceptive and Expressive Accuracy

    ERIC Educational Resources Information Center

    Biesanz, Jeremy C.

    2010-01-01

    The social accuracy model of interpersonal perception (SAM) is a componential model that estimates perceiver and target effects of different components of accuracy across traits simultaneously. For instance, Jane may be generally accurate in her perceptions of others and thus high in "perceptive accuracy"--the extent to which a particular…

  3. Accuracy Assessment of a Uav-Based Landslide Monitoring System

    NASA Astrophysics Data System (ADS)

    Peppa, M. V.; Mills, J. P.; Moore, P.; Miller, P. E.; Chambers, J. E.

    2016-06-01

    Landslides are hazardous events with often disastrous consequences. Monitoring landslides with observations of high spatio-temporal resolution can help mitigate such hazards. Mini unmanned aerial vehicles (UAVs) complemented by structure-from-motion (SfM) photogrammetry and modern per-pixel image matching algorithms can deliver a time-series of landslide elevation models in an automated and inexpensive way. This research investigates the potential of a mini UAV, equipped with a Panasonic Lumix DMC-LX5 compact camera, to provide surface deformations at acceptable levels of accuracy for landslide assessment. The study adopts a self-calibrating bundle adjustment-SfM pipeline using ground control points (GCPs). It evaluates misalignment biases and unresolved systematic errors that are transferred through the SfM process into the derived elevation models. To cross-validate the research outputs, results are compared to benchmark observations obtained by standard surveying techniques. The data is collected with 6 cm ground sample distance (GSD) and is shown to achieve planimetric and vertical accuracy of a few centimetres at independent check points (ICPs). The co-registration error of the generated elevation models is also examined in areas of stable terrain. Through this error assessment, the study estimates that the vertical sensitivity to real terrain change of the tested landslide is equal to 9 cm.
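    The planimetric and vertical accuracy figures quoted at independent check points are root-mean-square errors over coordinate residuals. A minimal sketch with invented residuals (metres), not the study's survey data:

```python
import math

# Hypothetical residuals (m) at independent check points (ICPs):
# (dE, dN, dH) = easting, northing, height differences vs. benchmark survey
residuals = [(0.02, -0.01, 0.04), (-0.03, 0.02, -0.05),
             (0.01, 0.03, 0.06), (-0.02, -0.02, -0.03)]

def rmse(values):
    """Root-mean-square of a list of residuals."""
    return math.sqrt(sum(v * v for v in values) / len(values))

# Planimetric RMSE uses the 2D horizontal distance at each ICP
planimetric_rmse = rmse([math.hypot(de, dn) for de, dn, _ in residuals])
vertical_rmse = rmse([dh for _, _, dh in residuals])
```

The study's "vertical sensitivity to real terrain change" additionally folds in the co-registration error between epochs, so it is larger than the single-epoch vertical RMSE at ICPs.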

  4. Diagnostic accuracy assessment of cytopathological examination of feline sporotrichosis.

    PubMed

    Jessica, N; Sonia, R L; Rodrigo, C; Isabella, D F; Tânia, M P; Jeferson, C; Anna, B F; Sandro, A

    2015-11-01

    Sporotrichosis is an implantation mycosis caused by pathogenic species of the Sporothrix schenckii complex that affects humans and animals, especially cats. Its main forms of zoonotic transmission include scratching, biting and/or contact with the exudate from lesions of sick cats. In Brazil, an epidemic involving humans, dogs and cats has been ongoing since 1998. The definitive diagnosis of sporotrichosis is obtained by isolation of the fungus in culture; however, the result can take up to four weeks, which may delay the beginning of antifungal treatment in some cases. Cytopathological examination is often used in feline sporotrichosis diagnosis, but its accuracy parameters have not yet been established. The aim of this study was to evaluate the accuracy and reliability of cytopathological examination in the diagnosis of feline sporotrichosis. The present study included 244 cats from the metropolitan region of Rio de Janeiro, mostly males of reproductive age with three or more lesions at non-adjacent anatomical sites. To evaluate inter-observer reliability, two different observers performed the microscopic examination of the slides blindly. Test sensitivity was 84.9%. The positive predictive value, negative predictive value, positive likelihood ratio, negative likelihood ratio and accuracy were 86.0%, 24.4%, 2.02, 0.26 and 82.8%, respectively. The reliability between the two observers was considered substantial. We conclude that cytopathological examination is a sensitive, rapid and practical method for feline sporotrichosis diagnosis in outbreaks of this mycosis.
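    All of the reported diagnostic indices derive from a 2x2 table of test result versus culture (the reference standard). A sketch with hypothetical counts, not the study's data:

```python
# Hypothetical 2x2 diagnostic table (counts), not the study's data:
tp, fp, fn, tn = 45, 5, 10, 40   # true/false positives, false/true negatives

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                        # positive predictive value
npv = tn / (tn + fn)                        # negative predictive value
lr_pos = sensitivity / (1 - specificity)    # positive likelihood ratio
lr_neg = (1 - sensitivity) / specificity    # negative likelihood ratio
accuracy = (tp + tn) / (tp + fp + fn + tn)
```

Note that PPV and NPV depend on disease prevalence in the sampled population, which is why an outbreak sample (high prevalence) can yield a high PPV alongside a low NPV, as in the study above.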

  5. Assessing Scientific Performance.

    ERIC Educational Resources Information Center

    Weiner, John M.; And Others

    1984-01-01

    A method for assessing scientific performance based on relationships displayed numerically in published documents is proposed and illustrated using published documents in pediatric oncology for the period 1979-1982. Contributions of a major clinical investigations group, the Childrens Cancer Study Group, are analyzed. Twenty-nine references are…

  6. Accuracy assessment of gridded precipitation datasets in the Himalayas

    NASA Astrophysics Data System (ADS)

    Khan, A.

    2015-12-01

    Accurate precipitation data are vital for hydro-climatic modelling and water resources assessments. Based on mass balance calculations and Turc-Budyko analysis, this study investigates the accuracy of twelve widely used gridded precipitation datasets for sub-basins in the Upper Indus Basin (UIB) in the Himalayas-Karakoram-Hindukush (HKH) region. These datasets are: 1) Global Precipitation Climatology Project (GPCP), 2) Climate Prediction Centre (CPC) Merged Analysis of Precipitation (CMAP), 3) NCEP/NCAR, 4) Global Precipitation Climatology Centre (GPCC), 5) Climatic Research Unit (CRU), 6) Asian Precipitation Highly Resolved Observational Data Integration Towards Evaluation of Water Resources (APHRODITE), 7) Tropical Rainfall Measuring Mission (TRMM), 8) European Reanalysis (ERA) interim data, 9) PRINCETON, 10) European Reanalysis-40 (ERA-40), 11) Willmott and Matsuura, and 12) WATCH Forcing Data based on ERA-Interim (WFDEI). Precipitation accuracy and consistency were assessed by a physical mass balance involving the sum of annual measured flow, estimated actual evapotranspiration (average of 4 datasets), estimated glacier mass balance melt contribution (average of 4 datasets), and groundwater recharge (average of 3 datasets), during 1999-2010. The mass balance assessment was complemented by non-dimensional Turc-Budyko analysis, where annual precipitation, measured flow and potential evapotranspiration (average of 5 datasets) data were used for the same period. Both analyses suggest that all tested precipitation datasets significantly underestimate precipitation in the Karakoram sub-basins. For the Hindukush and Himalayan sub-basins most datasets underestimate precipitation, except ERA-Interim and ERA-40. The analysis indicates that for this large region, with complicated terrain features and stark spatial precipitation gradients, the reanalysis datasets have better consistency with flow measurements than datasets derived from records of only sparsely distributed climatic…
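    The mass-balance check described above reduces to simple annual bookkeeping: a dataset's precipitation is compared against the precipitation implied by measured flow plus the other balance terms. A toy closure computation with assumed values (mm/yr), not the UIB numbers:

```python
# Toy annual water-balance closure check (mm/yr); all values are assumed.
# Implied precipitation: P ≈ Q + ET + recharge − glacier_melt
# (glacier melt adds to measured flow without being precipitation that year).
precip_dataset = 650.0    # gridded dataset estimate
flow = 700.0              # measured annual runoff Q
et = 250.0                # actual evapotranspiration (ensemble mean)
recharge = 30.0           # groundwater recharge (ensemble mean)
glacier_melt = 180.0      # glacier mass-balance melt contribution

implied_precip = flow + et + recharge - glacier_melt
closure_error = precip_dataset - implied_precip   # negative => underestimate
```

A strongly negative closure error across sub-basins is exactly the signature of the dataset-wide underestimation reported for the Karakoram.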

  7. OLEM Performance Assessment Information

    EPA Pesticide Factsheets

    This asset includes a variety of data sets that measure the performance of Office of Land and Emergency Management (OLEM) programs in support of the Office of the Chief Financial Officer's Annual Commitment System (ACS) and Performance Evaluation Reporting System (PERS). Information is drawn from OLEM's ACRES, RCRAInfo, CERCLIS/SEMS, ICIS, and LUST4 systems, as well as input manually by authorized individuals in OLEM's program offices. Information is reviewed by OLEM program staff prior to being pushed to ACS and entered into PERS. This data asset also pulls in certain performance information input directly by Regional Office staff into ACS. Information is managed by the Performance Assessment Tool (PAT) and displayed in the PAT Dashboard. Information in this asset includes: --Government Performance and Results Act (GPRA) of 1993: Measures reported for the Innovations, Partnerships and Communications Office (IPCO), the Office of Brownfields and Land Revitalization (OBLR), the Office of Emergency Management (OEM), the Office of Resource Conservation and Recovery (ORCR), the Office of Superfund Remediation and Technology Innovation (OSRTI), and the Office of Underground Storage Tanks (OUST). --Performance and Environmental Results System (PERS): Includes OLEM's information on performance results and baselines for the EPA Annual Plan and Budget. --Key Performance Indicators: OLEM has identified five KPIs that are tracked annually. --Integrated Cleanup Initiative: A pilot pe

  8. Accuracy assessment of EPA protocol gases purchased in 1991

    SciTech Connect

    Coppedge, E.A.; Logan, T.J.; Midgett, M.R.; Shores, R.C.; Messner, M.J.

    1992-12-01

    The U.S. Environmental Protection Agency (EPA) has established quality assurance procedures for air pollution measurement systems that are intended to reduce the uncertainty in environmental measurements. The compressed gas standards of the program are used for calibration and audits of continuous emission monitoring systems. EPA's regulations require that the certified values for these standards be traceable to National Institute of Standards and Technology (NIST) Standard Reference Materials or to NIST/EPA-approved Certified Reference Materials via either of two traceability protocols. The manufacturer assessment was conducted to: (1) document the accuracy of the compressed gas standards' certified concentrations; and (2) ensure that the compressed gas standards' written certification reports met the documentation requirements of the protocol. All available sources were contacted and the following gas mixtures were acquired: (1) 300-ppm SO2 and 400-ppm NO in N2; and (2) 1500-ppm SO2 and 900-ppm NO in N2.

  9. Evaluating the effect of learning style and student background on self-assessment accuracy

    NASA Astrophysics Data System (ADS)

    Alaoutinen, Satu

    2012-06-01

    This study evaluates a new taxonomy-based self-assessment scale and examines factors that affect assessment accuracy and course performance. The scale is based on Bloom's Revised Taxonomy and is evaluated by comparing students' self-assessment results with course performance in a programming course. Correlation has been used to reveal possible connections between student background information and both self-assessment and course performance. The results show that students can place their knowledge along the taxonomy-based scale quite well, and the scale seems to fit engineering students' learning style. Advanced students assess themselves more accurately than novices. The results also show that reflective students were better at programming than active students. The scale used in this study gives a more objective picture of students' knowledge than general scales, and with modifications it can be used in classes other than programming.
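    The correlational analysis described above can be sketched with a Pearson coefficient over paired observations. The data below are invented for illustration (self-assessment on a 6-level taxonomy scale versus an exam score), not the study's measurements:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between paired observations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: self-assessed taxonomy level (1-6) vs. exam score (0-100)
self_assessment = [2, 3, 3, 4, 5, 6, 4, 2]
exam_score = [55, 60, 58, 70, 82, 90, 68, 50]
r = pearson_r(self_assessment, exam_score)
```

A high positive r would indicate that students' placement of their own knowledge on the scale tracks their actual course performance, which is the sense in which the scale's accuracy is evaluated.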

  10. Accuracy assessment of a surface electromyogram decomposition system in human first dorsal interosseus muscle

    NASA Astrophysics Data System (ADS)

    Hu, Xiaogang; Rymer, William Z.; Suresh, Nina L.

    2014-04-01

    Objective. The aim of this study is to assess the accuracy of a surface electromyogram (sEMG) motor unit (MU) decomposition algorithm during low levels of muscle contraction. Approach. A two-source method was used to verify the accuracy of the sEMG decomposition system, by utilizing simultaneous intramuscular and surface EMG recordings from the human first dorsal interosseous muscle recorded during isometric trapezoidal force contractions. Spike trains from each recording type were decomposed independently utilizing two different algorithms, EMGlab and dEMG decomposition algorithms. The degree of agreement of the decomposed spike timings was assessed for three different segments of the EMG signals, corresponding to specified regions in the force task. A regression analysis was performed to examine whether certain properties of the sEMG and force signal can predict the decomposition accuracy. Main results. The average accuracy of successful decomposition among the 119 MUs that were common to both intramuscular and surface records was approximately 95%, and the accuracy was comparable between the different segments of the sEMG signals (i.e., force ramp-up versus steady state force versus combined). The regression function between the accuracy and properties of sEMG and force signals revealed that the signal-to-noise ratio of the action potential and stability in the action potential records were significant predictors of the surface decomposition accuracy. Significance. The outcomes of our study confirm the accuracy of the sEMG decomposition algorithm during low muscle contraction levels and provide confidence in the overall validity of the surface dEMG decomposition algorithm.
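    The two-source accuracy measure used above rests on matching spike timings between the intramuscular and surface decompositions. A minimal sketch, with an assumed ±2 ms matching tolerance and invented spike times (the study's actual matching procedure and tolerance may differ):

```python
def spike_agreement(train_a, train_b, tol=2.0):
    """Fraction of spikes common to two trains, matched within ±tol ms."""
    b_unmatched = sorted(train_b)
    common = 0
    for t in sorted(train_a):
        for i, u in enumerate(b_unmatched):
            if abs(t - u) <= tol:
                common += 1
                del b_unmatched[i]   # each spike can match at most once
                break
    a_only = len(train_a) - common
    b_only = len(b_unmatched)
    return common / (common + a_only + b_only)

# Hypothetical spike times (ms) for one motor unit from each recording type
intramuscular = [10.0, 50.0, 90.0, 130.0]
surface = [11.0, 51.5, 128.5, 170.0]
agreement = spike_agreement(intramuscular, surface)
```

Averaging this agreement over all motor units common to both recordings yields the kind of per-segment accuracy percentage reported in the study.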

  11. After Detection: The Improved Accuracy of Lung Cancer Assessment Using Radiologic Computer-aided Diagnosis

    PubMed Central

    Amir, Guy J.; Lehmann, Harold P.

    2015-01-01

    Rationale and Objectives The aim of this study was to evaluate the improved accuracy of radiologic assessment of lung cancer afforded by computer-aided diagnosis (CADx). Materials and Methods Inclusion/exclusion criteria were formulated, and a systematic inquiry of research databases was conducted. Following title and abstract review, an in-depth review of 149 surviving articles was performed, with accepted articles undergoing a Quality Assessment of Diagnostic Accuracy Studies (QUADAS)-based quality review and data abstraction. Results A total of 14 articles, representing 1868 scans, passed the review. Increases in the receiver operating characteristic (ROC) area under the curve to .8 or higher were seen in all nine studies that reported it, except for one that employed subspecialized radiologists. Conclusions This systematic review demonstrated improved accuracy of lung cancer assessment using CADx over manual review, in eight high-quality observer-performance studies. The improved accuracy afforded by radiologic lung-CADx suggests the need to explore its use in screening and regular clinical workflow. PMID:26616209

  12. Inertial Measures of Motion for Clinical Biomechanics: Comparative Assessment of Accuracy under Controlled Conditions – Changes in Accuracy over Time

    PubMed Central

    Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian

    2015-01-01

    Background Interest in 3D inertial motion tracking devices (AHRS) has been growing rapidly within the biomechanical community. Although the convenience of such tracking devices seems to open a whole new world of possibilities for evaluation in clinical biomechanics, their limitations have not been extensively documented. The objectives of this study are: 1) to assess the change in absolute and relative accuracy of multiple units of 3 commercially available AHRS over time; and 2) to identify different sources of errors affecting AHRS accuracy and to document how they may affect the measurements over time. Methods This study used an instrumented Gimbal table on which AHRS modules were carefully attached and put through a series of velocity-controlled sustained motions, including 2-minute motion trials (2MT) and 12-minute multiple dynamic phases motion trials (12MDP). Absolute accuracy was assessed by comparing the AHRS orientation measurements to those of an optical gold standard. Relative accuracy was evaluated using the variation in relative orientation between modules during the trials. Findings Both absolute and relative accuracy decreased over time during 2MT. 12MDP trials showed a significant decrease in accuracy over multiple phases, but accuracy could be enhanced significantly by resetting the reference point and/or compensating for the initial inertial frame estimation reference for each phase. Interpretation The variation in AHRS accuracy observed between the different systems and over time can be attributed in part to the dynamic estimation error, but also, and foremost, to the ability of AHRS units to locate the same inertial frame. Conclusions Mean accuracies obtained under the Gimbal table's sustained conditions of motion suggest that AHRS are promising tools for clinical mobility assessment under constrained conditions of use. However, improvements in magnetic compensation and alignment between AHRS modules are desirable in order for AHRS to reach their…
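    Orientation accuracy in studies like this is typically reported as the angle of the rotation separating an AHRS estimate from the gold-standard orientation. A sketch of that angular-error metric using unit quaternions (an illustration of the metric only, not the study's processing pipeline):

```python
import math

def quat_angle_deg(q1, q2):
    """Angle (degrees) of the rotation taking orientation q1 onto q2.

    Quaternions are (w, x, y, z) unit quaternions; the absolute value of
    the dot product makes the result invariant to the q/-q sign ambiguity.
    """
    dot = abs(sum(a * b for a, b in zip(q1, q2)))
    dot = min(1.0, dot)                # guard against rounding past 1.0
    return math.degrees(2.0 * math.acos(dot))

identity = (1.0, 0.0, 0.0, 0.0)
# 90 degree rotation about the z axis
rot90_z = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
error_deg = quat_angle_deg(identity, rot90_z)
```

Absolute accuracy compares each module against the optical reference this way; relative accuracy applies the same angle metric to the orientation of one module expressed in another module's frame.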

  13. Empathic accuracy for happiness in the daily lives of older couples: Fluid cognitive performance predicts pattern accuracy among men.

    PubMed

    Hülür, Gizem; Hoppmann, Christiane A; Rauers, Antje; Schade, Hannah; Ram, Nilam; Gerstorf, Denis

    2016-08-01

    Correctly identifying others' emotional states is a central cognitive component of empathy. We examined the role of fluid cognitive performance in empathic accuracy for happiness in the daily lives of 86 older couples (mean relationship length = 45 years; mean age = 75 years) on up to 42 occasions over 7 consecutive days. Men performing better on the Digit Symbol test were more accurate in identifying the ups and downs of their partner's happiness. A similar association was not found for women. We discuss the potential role of fluid cognitive performance and other individual, partner, and situation characteristics for empathic accuracy.

  14. Assessment of the accuracy and stability of ENSN sensors responses

    NASA Astrophysics Data System (ADS)

    Nofal, Hamed; Mohamed, Omar; Mohanna, Mahmoud; El-Gabry, Mohamed

    2015-06-01

    The Egyptian National Seismic Network (ENSN) is an advanced scientific tool used to investigate earth structure and seismic activity in Egypt. One of the main tasks of the ENSN engineering team is to keep the accuracy and stability of its high-performance seismic instruments as close as possible to the standards used in international seismic networks. To achieve this, the seismometers are routinely calibrated. One of the final outcomes of the calibration process is a set of the actual poles and zeros of each seismometer. Because of the strategic importance of the High Dam, we present in this paper the results of calibrating the broadband (BB) Trillium-40 (40 s) seismometers. From these sets we computed both amplitude and phase responses, as well as their deviations from the nominal responses of this particular seismometer type. The computed deviation of this sub-network is then statistically analyzed to obtain an overall estimate of the accuracy of measurements recorded by it. Such analysis might also reveal stations that are far from the international standards. This test will be carried out regularly, at intervals of several months, to find out how stable the seismometer response is. As a result, the values of the magnitude and phase errors are confined between 0% and 2% for about 90% of the calibrated seismometers. The average magnitude error was found to be 5% from the nominal, and the average phase error 4%. To eliminate any possible error in the measured data, the measured (true) poles and zeros are used in the response files to replace the nominal values.

  15. Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.

    2012-01-01

    Description of NASA Airborne Topographic Mapper (ATM) lidar calibration procedures, including analysis of the accuracy and consistency of various ATM instrument parameters and the resulting influence on topographic elevation measurements. The accuracy of ATM elevation measurements from a nominal operating altitude of 500 to 750 m above the ice surface was found to be: horizontal accuracy 74 cm, horizontal precision 14 cm, vertical accuracy 6.6 cm, vertical precision 3 cm.

  16. Performance Testing using Silicon Devices - Analysis of Accuracy: Preprint

    SciTech Connect

    Sengupta, M.; Gotseff, P.; Myers, D.; Stoffel, T.

    2012-06-01

    Accurately determining PV module performance in the field requires accurate measurements of the solar irradiance reaching the PV panel (i.e., plane-of-array (POA) irradiance) with known measurement uncertainty. Pyranometers are commonly based on thermopile or silicon photodiode detectors. Silicon detectors, including PV reference cells, are an attractive choice for reasons that include faster time response (10 μs) than thermopile detectors (1 s to 5 s), lower cost, and lower maintenance. The main drawback of silicon detectors is their limited spectral response. Therefore, to determine broadband POA solar irradiance, a pyranometer calibration factor that converts the narrowband response to broadband is required. Normally this calibration factor is a single number determined under clear-sky conditions with respect to a broadband reference radiometer. The pyranometer is then used across various scenarios, including varying airmass, panel orientation and atmospheric conditions. This would not be an issue if all irradiance wavelengths that form the broadband spectrum responded uniformly to atmospheric constituents. Unfortunately, the scattering and absorption signature varies widely with wavelength, and the calibration factor for the silicon photodiode pyranometer is not appropriate for other conditions. This paper reviews the issues that arise from the use of silicon detectors for PV performance measurement in the field, based on measurements from a group of pyranometers mounted on a 1-axis solar tracker. We also present a comparison of simultaneous spectral and broadband measurements from silicon and thermopile detectors, and estimated measurement errors when using silicon devices for both array performance and resource assessment.
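    The spectral-mismatch problem described above can be made concrete with a deliberately crude two-band model (all numbers assumed, for illustration only): a calibration factor derived under a clear-sky spectrum is applied to a cloud-shifted spectrum, producing a systematic broadband error.

```python
# Toy two-band illustration (assumed numbers) of why a clear-sky calibration
# factor for a silicon detector misestimates broadband irradiance under clouds.
# Band 1: wavelengths the silicon detector responds to; band 2: outside it.
clear_band1, clear_band2 = 600.0, 400.0   # W/m^2 in each band, clear sky

# Calibration: scale the narrowband signal up to the clear-sky broadband total
cal_factor = (clear_band1 + clear_band2) / clear_band1

# Under clouds the spectral balance shifts, but the same factor is applied
cloudy_band1, cloudy_band2 = 300.0, 300.0
true_broadband = cloudy_band1 + cloudy_band2
estimated = cloudy_band1 * cal_factor
relative_error = (estimated - true_broadband) / true_broadband
```

Because the cloudy spectrum carries relatively more energy outside the detector's response band than the clear-sky spectrum did, the fixed calibration factor under-reports the broadband irradiance; the real spectral dependence is continuous, but the sign and mechanism of the error are the same.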

  17. Mapping with Small UAS: A Point Cloud Accuracy Assessment

    NASA Astrophysics Data System (ADS)

    Toth, Charles; Jozkow, Grzegorz; Grejner-Brzezinska, Dorota

    2015-12-01

    Interest in using inexpensive Unmanned Aerial System (UAS) technology for topographic mapping has recently increased significantly. Small UAS platforms equipped with consumer-grade cameras can easily acquire high-resolution aerial imagery, allowing for dense point cloud generation, followed by surface model creation and orthophoto production. In contrast to conventional airborne mapping systems, UAS has limited ground coverage due to low flying height and limited flying time, yet it offers an attractive alternative to high-performance airborne systems, as the cost of the sensors and platform, and the flight logistics, is relatively low. In addition, UAS is better suited for small-area data acquisitions and for acquiring data in difficult-to-access areas, such as urban canyons or densely built-up environments. The main question with respect to the use of UAS is whether the inexpensive consumer sensors installed in UAS platforms can provide geospatial data quality comparable to that provided by conventional systems. This study aims at a performance evaluation of the current practice of UAS-based topographic mapping by reviewing the practical aspects of sensor configuration, georeferencing and point cloud generation, including comparisons between sensor types and processing tools. The main objective is to provide accuracy characterization and practical information for selecting and using UAS solutions in general mapping applications. The analysis is based on statistical evaluation as well as visual examination of experimental data acquired by a Bergen octocopter with three different image sensor configurations, including a GoPro HERO3+ Black Edition, a Nikon D800 DSLR and a Velodyne HDL-32. In addition, georeferencing data of varying quality were acquired and evaluated. The optical imagery was processed using three commercial point cloud generation tools. Comparing point clouds created by active and passive sensors by using different quality sensors, and finally…

  18. The Effects of Caffeine on Arousal, Response Time, Accuracy, and Performance in Division I Collegiate Fencers.

    PubMed

    Doyle, Taylor P; Lutz, Rafer S; Pellegrino, Joseph K; Sanders, David J; Arent, Shawn M

    2016-11-01

    Doyle, TP, Lutz, RS, Pellegrino, JK, Sanders, DJ, and Arent, SM. The effects of caffeine on arousal, response time, accuracy, and performance in Division I collegiate fencers. J Strength Cond Res 30(11): 3228-3235, 2016-Caffeine has displayed ergogenic effects on aerobic performance. However, sports requiring precision and quick reaction may also be impacted by central nervous system arousal because of caffeine consumption. The purpose of this study was to assess the effects of caffeine on arousal, response time (RT), and accuracy during a simulated fencing practice. Using a randomized, within-subjects, placebo-controlled, double-blind design, Division I male and female college fencers (N = 13; 69.1 ± 3.5 kg) were administered caffeine doses of 0, 1.5, 3.0, 4.5, 6.0, or 7.5 mg·kg⁻¹ on separate testing days. Performance was assessed via RT and accuracy on a 4-choice reaction task. A total of 25 trials were performed each day, using a random 2- to 8-s delay between trials. Arousal was assessed using the activation-deactivation adjective check list. Results of repeated-measures multivariate analysis of variance revealed a significant dose effect (p = 0.02) on performance. Follow-up analyses indicated this was due to a significant effect for RT (p = 0.03), with the dose-response curve exhibiting a quadratic relationship. Response time was significantly faster (p < 0.01) for the 1.5, 3.0, and 6.0 mg·kg⁻¹ conditions than for the placebo condition. Results also indicated a significant dose effect for composite RT + accuracy performance (p < 0.01). The dose-response curve was again quadratic, with performance beginning to deteriorate at 7.5 mg·kg⁻¹. Energetic arousal, tiredness, tension, and calmness all changed significantly as a function of caffeine dose (p ≤ 0.05). Based on these results, caffeine improves RT and overall performance in fencers, particularly as doses increase up to 4.5-6.0 mg·kg⁻¹. Above this level, performance begins to deteriorate, consistent with an…
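    A quadratic dose-response relationship of the kind reported above implies an optimal dose at the parabola's vertex. The sketch below fits a quadratic to made-up mean performance scores (not the study's data; the peak location is chosen arbitrarily for the example):

```python
import numpy as np

# Made-up mean composite performance scores at each caffeine dose (mg/kg)
doses = np.array([0.0, 1.5, 3.0, 4.5, 6.0, 7.5])
scores = np.array([0.875, 6.5, 9.875, 11.0, 9.875, 6.5])

# Fit scores ≈ a*d^2 + b*d + c; a < 0 gives an inverted-U dose-response
a, b, c = np.polyfit(doses, scores, 2)
peak_dose = -b / (2 * a)   # vertex: dose at which fitted performance peaks
```

With real data the fit would not be exact, but the vertex of the fitted parabola is the natural summary of where performance stops improving and begins to deteriorate.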

  19. Assessment Of Accuracies Of Remote-Sensing Maps

    NASA Technical Reports Server (NTRS)

    Card, Don H.; Strong, Laurence L.

    1992-01-01

    Report describes study of accuracies of classifications of picture elements in map derived by digital processing of Landsat-multispectral-scanner imagery of coastal plain of Arctic National Wildlife Refuge. Accuracies of portions of map analyzed with help of statistical sampling procedure called "stratified plurality sampling", in which all picture elements in given cluster classified in stratum to which plurality of them belong.
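The "stratified plurality sampling" rule described above assigns every picture element in a cluster to the stratum of the cluster's plurality class. A minimal sketch (class names are invented for illustration):

```python
from collections import Counter

def plurality_stratum(cluster_labels):
    """Assign a cluster of pixel class labels to the stratum of its
    plurality (most frequent) class -- a sketch of the stratified
    plurality sampling idea, not the study's implementation."""
    counts = Counter(cluster_labels)
    stratum, _ = counts.most_common(1)[0]
    return stratum

# Example: a 3x3 pixel cluster with mixed (hypothetical) classes.
cluster = ["tundra", "water", "tundra", "tundra", "shrub",
           "tundra", "water", "tundra", "shrub"]
assigned = plurality_stratum(cluster)  # "tundra" holds the plurality (5 of 9)
```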

  20. Biological Performance Assessment

    SciTech Connect

    2013-07-09

    The BioPA provides turbine designers with a set of tools that can be used to assess biological risks of turbines during the design phase, before expensive construction begins. The toolset can also be used to assess existing installations under a variety of operating conditions, supplementing data obtained through expensive field testing. The BioPA uses computational fluid dynamics (CFD) simulations of a turbine design to quantify the exposure of passing fish to a set of known injury mechanisms. By appropriate sampling of the fluid domain, the BioPA assigns exposure probabilities to each mechanism. The exposure probabilities are combined with dose-response data from laboratory stress studies of fish to produce a set of biological BioPA Scores. These metrics provide an objective measure that can be used to compare competing turbines or to refine a new design. The BioPA process can be performed during the turbine design phase and is considerably less expensive than prototype-scale field testing.

  1. Assessment of sensor performance

    NASA Astrophysics Data System (ADS)

    Waldmann, C.; Tamburri, M.; Prien, R. D.; Fietzek, P.

    2010-02-01

    There is an international commitment to develop a comprehensive, coordinated and sustained ocean observation system. However, a foundation for any observing, monitoring or research effort is effective and reliable in situ sensor technologies that accurately measure key environmental parameters. Ultimately, the data used for modelling efforts, management decisions and rapid responses to ocean hazards are only as good as the instruments that collect them. There is also a compelling need to develop and incorporate new or novel technologies to improve all aspects of existing observing systems and meet various emerging challenges. Assessment of Sensor Performance was a cross-cutting issues session at the international OceanSensors08 workshop in Warnemünde, Germany, which also has penetrated some of the papers published as a result of the workshop (Denuault, 2009; Kröger et al., 2009; Zielinski et al., 2009). The discussions were focused on how best to classify and validate the instruments required for effective and reliable ocean observations and research. The following is a summary of the discussions and conclusions drawn from this workshop, which specifically addresses the characterisation of sensor systems, technology readiness levels, verification of sensor performance and quality management of sensor systems.

  2. Assessing the accuracy of self-reported self-talk

    PubMed Central

    Brinthaupt, Thomas M.; Benson, Scott A.; Kang, Minsoo; Moore, Zaver D.

    2015-01-01

    As with most kinds of inner experience, it is difficult to assess actual self-talk frequency beyond self-reports, given the often hidden and subjective nature of the phenomenon. The Self-Talk Scale (STS; Brinthaupt et al., 2009) is a self-report measure of self-talk frequency that has been shown to possess acceptable reliability and validity. However, no research using the STS has examined the accuracy of respondents’ self-reports. In the present paper, we report a series of studies directly examining the measurement of self-talk frequency and functions using the STS. The studies examine ways to validate self-reported self-talk by (1) comparing STS responses from 6 weeks earlier to recent experiences that might precipitate self-talk, (2) using experience sampling methods to determine whether STS scores are related to recent reports of self-talk over a period of a week, and (3) comparing self-reported STS scores to those provided by a significant other who rated the target on the STS. Results showed that (1) overall self-talk scores, particularly self-critical and self-reinforcing self-talk, were significantly related to reports of context-specific self-talk; (2) high STS scorers reported talking to themselves significantly more often during recent events compared to low STS scorers, and, contrary to expectations, (3) friends reported less agreement than strangers in their self-other self-talk ratings. Implications of the results for the validity of the STS and for measuring self-talk are presented. PMID:25999887

  3. Accuracy Assessment of Coastal Topography Derived from Uav Images

    NASA Astrophysics Data System (ADS)

    Long, N.; Millescamps, B.; Pouget, F.; Dumon, A.; Lachaussée, N.; Bertin, X.

    2016-06-01

To monitor coastal environments, an Unmanned Aerial Vehicle (UAV) is a low-cost and easy-to-use solution enabling data acquisition with high temporal frequency and spatial resolution. Compared to Light Detection And Ranging (LiDAR) or Terrestrial Laser Scanning (TLS), this solution produces a Digital Surface Model (DSM) with similar accuracy. To evaluate DSM accuracy in a coastal environment, a campaign was carried out with a flying wing (eBee) combined with a digital camera. Using the Photoscan software and a photogrammetric process (Structure From Motion algorithm), a DSM and an orthomosaic were produced. The DSM accuracy was estimated by comparison with GNSS surveys. Two parameters were tested: the influence of the methodology (number and distribution of Ground Control Points, GCPs) and the influence of spatial image resolution (4.6 cm vs 2 cm). The results show that this solution can reproduce the topography of a coastal area with high vertical accuracy (< 10 cm). Georeferencing the DSM requires a homogeneous distribution and a large number of GCPs. The accuracy is correlated with the number of GCPs (using 19 GCPs instead of 10 reduced the difference by 4 cm); the required accuracy should depend on the research question. Last, in this particular environment, the presence of very small water surfaces on the sand bank prevented the accuracy from improving when the spatial resolution of the images was decreased.
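Vertical accuracies such as the "< 10 cm" figure above are typically quoted as the RMSE of DSM heights against GNSS checkpoints. A minimal sketch with toy numbers (not the campaign's data):

```python
import math

def vertical_rmse(dsm_heights, gnss_heights):
    """Root-mean-square vertical error of DSM heights against GNSS
    checkpoints -- the standard accuracy summary; an illustrative
    helper, not the authors' processing pipeline."""
    diffs = [d - g for d, g in zip(dsm_heights, gnss_heights)]
    return math.sqrt(sum(e * e for e in diffs) / len(diffs))

# Hypothetical checkpoint heights (metres).
dsm = [2.05, 1.98, 2.52, 3.01, 2.74]
gnss = [2.00, 2.00, 2.50, 3.00, 2.70]
rmse = vertical_rmse(dsm, gnss)  # about 3 cm for these toy values
```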

  4. Assessment of the Accuracy of Close Distance Photogrammetric JRC Data

    NASA Astrophysics Data System (ADS)

    Kim, Dong Hyun; Poropat, George; Gratchev, Ivan; Balasubramaniam, Arumugam

    2016-11-01

    By using close range photogrammetry, this article investigates the accuracy of the photogrammetric estimation of rock joint roughness coefficients (JRC), a measure of the degree of roughness of rock joint surfaces. This methodology has proven to be convenient both in laboratory and in site conditions. However, the accuracy and precision of roughness profiles obtained from photogrammetric 3D images have not been properly established due to the variances caused by factors such as measurement errors and systematic errors in photogrammetry. In this study, the influences of camera-to-object distance, focal length and profile orientation on the accuracy of JRC values are investigated using several photogrammetry field surveys. Directional photogrammetric JRC data are compared with data derived from the measured profiles, so as to determine their accuracy. The extent of the accuracy of JRC values was examined based on the error models which were previously developed from laboratory tests and revised for better estimation in this study. The results show that high-resolution 3D images (point interval ≤1 mm) can reduce the JRC errors obtained from field photogrammetric surveys. Using the high-resolution images, the photogrammetric JRC values in the range of high oblique camera angles are highly consistent with the revised error models. Therefore, the analysis indicates that the revised error models facilitate the verification of the accuracy of photogrammetric JRC values.

  5. Changes in Memory Prediction Accuracy: Age and Performance Effects

    ERIC Educational Resources Information Center

    Pearman, Ann; Trujillo, Amanda

    2013-01-01

    Memory performance predictions are subjective estimates of possible memory task performance. The purpose of this study was to examine possible factors related to changes in word list performance predictions made by younger and older adults. Factors included memory self-efficacy, actual performance, and perceptions of performance. The current study…

  6. Georgia's Teacher Performance Assessment

    ERIC Educational Resources Information Center

    Fenton, Anne Marie; Wetherington, Pamela

    2016-01-01

    Like most states, Georgia until recently depended on an assessment of content knowledge to award teaching licenses, along with a licensure recommendation from candidates' educator preparation programs. While the content assessment reflected candidates' grasp of subject matter, licensure decisions did not hinge on direct, statewide assessment of…

  7. Bootstrap performance profiles in stochastic algorithms assessment

    SciTech Connect

    Costa, Lino; Espírito Santo, Isabel A.C.P.; Oliveira, Pedro

    2015-03-10

    Optimization with stochastic algorithms has become a relevant research field. Due to its stochastic nature, its assessment is not straightforward and involves integrating accuracy and precision. Performance profiles for the mean do not show the trade-off between accuracy and precision, and parametric stochastic profiles require strong distributional assumptions and are limited to the mean performance for a large number of runs. In this work, bootstrap performance profiles are used to compare stochastic algorithms for different statistics. This technique allows the estimation of the sampling distribution of almost any statistic even with small samples. Multiple comparison profiles are presented for more than two algorithms. The advantages and drawbacks of each assessment methodology are discussed.
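The core bootstrap step, estimating the sampling distribution of a statistic from a small set of solver runs by resampling with replacement, can be sketched as follows. This is a simplified illustration of the bootstrap idea, not the paper's profile construction.

```python
import random

def bootstrap_statistic(samples, stat, n_boot=2000, rng=None):
    """Estimate the sampling distribution of `stat` by resampling the
    observed run results with replacement (the bootstrap); returns the
    sorted bootstrap replicates."""
    rng = rng or random.Random(0)
    n = len(samples)
    return sorted(stat([rng.choice(samples) for _ in range(n)])
                  for _ in range(n_boot))

# Final objective values from 30 hypothetical runs of a stochastic solver.
random.seed(42)
runs = [random.gauss(10.0, 1.5) for _ in range(30)]
dist = bootstrap_statistic(runs, stat=lambda xs: sum(xs) / len(xs))

# 95% percentile interval for the mean performance.
lo, hi = dist[int(0.025 * len(dist))], dist[int(0.975 * len(dist))]
```

The same machinery works for the median, best-of-k, or any other statistic, which is the flexibility the abstract emphasizes.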

  8. Pixels, Blocks of Pixels, and Polygons: Choosing a Spatial Unit for Thematic Accuracy Assessment

    EPA Science Inventory

    Pixels, polygons, and blocks of pixels are all potentially viable spatial assessment units for conducting an accuracy assessment. We develop a statistical population-based framework to examine how the spatial unit chosen affects the outcome of an accuracy assessment. The populati...

  9. Accuracy of Students' Self-Assessment and Their Beliefs about Its Utility

    ERIC Educational Resources Information Center

    Lew, Magdeleine D. N.; Alwis, W. A. M.; Schmidt, Henk G.

    2010-01-01

    The purpose of the two studies presented here was to evaluate the accuracy of students' self-assessment ability, to examine whether this ability improves over time and to investigate whether self-assessment is more accurate if students believe that it contributes to improving learning. To that end, the accuracy of the self-assessments of 3588…

  10. Does it Make a Difference? Investigating the Assessment Accuracy of Teacher Tutors and Student Tutors

    ERIC Educational Resources Information Center

    Herppich, Stephanie; Wittwer, Jorg; Nuckles, Matthias; Renkl, Alexander

    2013-01-01

    Tutors often have difficulty with accurately assessing a tutee's understanding. However, little is known about whether the professional expertise of tutors influences their assessment accuracy. In this study, the authors examined the accuracy with which 21 teacher tutors and 25 student tutors assessed a tutee's understanding of the human…

  11. Accuracy of virtual models in the assessment of maxillary defects

    PubMed Central

    Kurşun, Şebnem; Kılıç, Cenk; Özen, Tuncer

    2015-01-01

    Purpose This study aimed to assess the reliability of measurements performed on three-dimensional (3D) virtual models of maxillary defects obtained using cone-beam computed tomography (CBCT) and 3D optical scanning. Materials and Methods Mechanical cavities simulating maxillary defects were prepared on the hard palate of nine cadavers. Images were obtained using a CBCT unit at three different fields-of-views (FOVs) and voxel sizes: 1) 60×60 mm FOV, 0.125 mm3 (FOV60); 2) 80×80 mm FOV, 0.160 mm3 (FOV80); and 3) 100×100 mm FOV, 0.250 mm3 (FOV100). Superimposition of the images was performed using software called VRMesh Design. Automated volume measurements were conducted, and differences between surfaces were demonstrated. Silicon impressions obtained from the defects were also scanned with a 3D optical scanner. Virtual models obtained using VRMesh Design were compared with impressions obtained by scanning silicon models. Gold standard volumes of the impression models were then compared with CBCT and 3D scanner measurements. Further, the general linear model was used, and the significance was set to p=0.05. Results A comparison of the results obtained by the observers and methods revealed the p values to be smaller than 0.05, suggesting that the measurement variations were caused by both methods and observers along with the different cadaver specimens used. Further, the 3D scanner measurements were closer to the gold standard measurements when compared to the CBCT measurements. Conclusion In the assessment of artificially created maxillary defects, the 3D scanner measurements were more accurate than the CBCT measurements. PMID:25793180

  12. Accuracy of Estrogen and Progesterone Receptor Assessment in Core Needle Biopsy Specimens of Breast Cancer

    PubMed Central

    Omranipour, Ramesh; Alipour, Sadaf; Hadji, Maryam; Fereidooni, Forouzandeh; Jahanzad, Issa; Bagheri, Khojasteh

    2013-01-01

Background Diagnosis of breast cancer is completed through core needle biopsy (CNB) of the tumors, but there is controversy over the accuracy of hormone receptor results from CNB specimens. Objectives We undertook this study to compare the results of hormone receptor assessment in CNB and surgical samples in our patients. Patients and Methods Hormone receptor status was determined in CNB and surgical samples from breast cancer patients whose CNB and operation had been performed in this institute from 2009 to 2011 and who had not undergone neoadjuvant chemotherapy. Results Of approximately 350 patients, 60 met all the criteria for entering the study. The mean age was 49.8 years. Considering a confidence interval (CI) of 95%, the sensitivity of ER and PR assessment in CNB was 92.9% and 81%, respectively, and the specificity of both was 100%. The accuracy of CNB was 98% for ER and 93% for PR. Conclusions Our results confirm the acceptable accuracy of ER assessment on CNB. The subject needs further investigation in developing countries, where omission of the test in surgical samples can be cost- and time-saving. PMID:24349751
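Sensitivity, specificity, and accuracy as reported above come from a 2×2 comparison of CNB results against the surgical reference standard. A sketch with invented toy data (not the study's 60 cases):

```python
def diagnostic_metrics(reference, test):
    """Sensitivity, specificity, and accuracy of a test (e.g. ER status
    on a core needle biopsy) against a reference standard (the surgical
    specimen). Labels are booleans: True = receptor-positive."""
    tp = sum(r and t for r, t in zip(reference, test))
    tn = sum((not r) and (not t) for r, t in zip(reference, test))
    fp = sum((not r) and t for r, t in zip(reference, test))
    fn = sum(r and (not t) for r, t in zip(reference, test))
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / len(reference)

# Illustrative toy cohort: 14 receptor-positive, 6 negative on surgery;
# CNB misses one positive.
surgical = [True] * 14 + [False] * 6
cnb = [True] * 13 + [False] * 7
sens, spec, acc = diagnostic_metrics(surgical, cnb)
```

With this toy cohort the sensitivity works out to 13/14 ≈ 92.9% and the specificity to 100%, matching the shape of the figures quoted above.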

  13. Assessment of relative accuracy of AHN-2 laser scanning data using planar features.

    PubMed

    van der Sande, Corné; Soudarissanane, Sylvie; Khoshelham, Kourosh

    2010-01-01

    AHN-2 is the second part of the Actueel Hoogtebestand Nederland project, which concerns the acquisition of high-resolution altimetry data over the entire Netherlands using airborne laser scanning. The accuracy assessment of laser altimetry data usually relies on comparing corresponding tie elements, often points or lines, in the overlapping strips. This paper proposes a new approach to strip adjustment and accuracy assessment of AHN-2 data by using planar features. In the proposed approach a transformation is estimated between two overlapping strips by minimizing the distances between points in one strip and their corresponding planes in the other. The planes and the corresponding points are extracted in an automated segmentation process. The point-to-plane distances are used as observables in an estimation model, whereby the parameters of a transformation between the two strips and their associated quality measures are estimated. We demonstrate the performance of the method for the accuracy assessment of the AHN-2 dataset over Zeeland province of The Netherlands. The results show vertical offsets of up to 4 cm between the overlapping strips, and horizontal offsets ranging from 2 cm to 34 cm.
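The point-to-plane estimation idea can be sketched in a reduced form that recovers only a vertical offset between two strips (the paper estimates a fuller transformation with quality measures); all numbers below are synthetic.

```python
import numpy as np

# Planes extracted from strip A, written as n . x = d with unit normal n.
rng = np.random.default_rng(1)
true_dz = 0.04  # a 4 cm vertical offset, of the magnitude reported above

normals = rng.normal(size=(50, 3))
normals[:, 2] = np.abs(normals[:, 2]) + 0.5   # mostly-horizontal surfaces
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
points_on_planes = rng.uniform(-10, 10, size=(50, 3))
d = np.einsum("ij,ij->i", normals, points_on_planes)

# Corresponding points from strip B: on the planes but shifted down by
# true_dz, plus small measurement noise.
pts = points_on_planes.copy()
pts[:, 2] -= true_dz
resid = np.einsum("ij,ij->i", normals, pts) - d + rng.normal(0, 0.002, 50)

# Least squares for dz: minimize sum over planes of (resid + n_z * dz)^2.
nz = normals[:, 2]
dz_hat = -np.sum(nz * resid) / np.sum(nz * nz)
```

The point-to-plane distances serve as observables, exactly as in the abstract; here the single-parameter normal equation recovers the simulated offset to within the noise level.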

  14. Comparative Accuracy Assessment of Global Land Cover Datasets Using Existing Reference Data

    NASA Astrophysics Data System (ADS)

    Tsendbazar, N. E.; de Bruin, S.; Mora, B.; Herold, M.

    2014-12-01

Land cover is a key variable to monitor the impact of human and natural processes on the biosphere. As one of the Essential Climate Variables, land cover observations are used for climate models and several other applications. Remote sensing technologies have enabled the generation of several global land cover (GLC) products that are based on different data sources and methods (e.g. legends). Moreover, the reported map accuracies result from varying validation strategies. Such differences make the comparison of the GLC products challenging and create confusion when selecting suitable datasets for different applications. This study aims to conduct a comparative accuracy assessment of GLC datasets (LC-CCI 2005, MODIS 2005, and Globcover 2005) using the Globcover 2005 reference data, which can represent the thematic differences of these GLC maps. This GLC reference dataset provides LCCS classifier information for 3 main land cover types for each sample plot. The LCCS classifier information was translated according to the legends of the GLC maps analysed. The preliminary analysis revealed some challenges in translating LCCS classifiers, arising from missing classifier information, differences in class definitions between the legends, and the absence of class proportions for the main land cover types. To overcome these issues, we consolidated the entire reference dataset (i.e., 3857 samples distributed at the global scale). Then the GLC maps and the reference dataset were harmonized into 13 general classes to perform the comparative accuracy assessments. To help users select suitable GLC dataset(s) for their application, we conducted the map accuracy assessments considering different users' perspectives: climate modelling, biodiversity assessments, agriculture monitoring, and map producers.
This communication will present the method and the results of this study and provide a set of recommendations to the GLC map producers and users with the aim to facilitate the use of GLC maps.
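Map accuracies of this kind are summarized by the overall accuracy of a confusion matrix of reference versus mapped labels. A minimal sketch with invented classes (not the study's 13-class harmonization):

```python
def overall_accuracy(reference, mapped, classes):
    """Overall accuracy from a confusion matrix of reference vs mapped
    land-cover labels -- the standard map-accuracy summary; a sketch,
    not the study's code."""
    cm = {(r, m): 0 for r in classes for m in classes}
    for r, m in zip(reference, mapped):
        cm[(r, m)] += 1
    correct = sum(cm[(c, c)] for c in classes)
    return correct / len(reference)

# Toy sample plots with hypothetical harmonized classes.
ref = ["forest", "crop", "urban", "forest", "water", "crop"]
mapped = ["forest", "crop", "crop", "forest", "water", "urban"]
oa = overall_accuracy(ref, mapped, {"forest", "crop", "urban", "water"})
```

Per-class (user's/producer's) accuracies come from the same matrix by normalizing its rows or columns, which is how perspective-specific assessments like those above are built.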

  15. ASSESSING THE ACCURACY OF NATIONAL LAND COVER DATASET AREA ESTIMATES AT MULTIPLE SPATIAL EXTENTS

    EPA Science Inventory

    Site specific accuracy assessments provide fine-scale evaluation of the thematic accuracy of land use/land cover (LULC) datasets; however, they provide little insight into LULC accuracy across varying spatial extents. Additionally, LULC data are typically used to describe lands...

  16. Constraint on Absolute Accuracy of Metacomprehension Assessments: The Anchoring and Adjustment Model vs. the Standards Model

    ERIC Educational Resources Information Center

    Kwon, Heekyung

    2011-01-01

    The objective of this study is to provide a systematic account of three typical phenomena surrounding absolute accuracy of metacomprehension assessments: (1) the absolute accuracy of predictions is typically quite low; (2) there exist individual differences in absolute accuracy of predictions as a function of reading skill; and (3) postdictions…

  17. Bilingual Language Assessment: A Meta-Analysis of Diagnostic Accuracy

    ERIC Educational Resources Information Center

    Dollaghan, Christine A.; Horner, Elizabeth A.

    2011-01-01

    Purpose: To describe quality indicators for appraising studies of diagnostic accuracy and to report a meta-analysis of measures for diagnosing language impairment (LI) in bilingual Spanish-English U.S. children. Method: The authors searched electronically and by hand to locate peer-reviewed English-language publications meeting inclusion criteria;…

  18. Assessing genomic selection prediction accuracy in a dynamic barley breeding

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection is a method to improve quantitative traits in crops and livestock by estimating breeding values of selection candidates using phenotype and genome-wide marker data sets. Prediction accuracy has been evaluated through simulation and cross-validation, however validation based on prog...

  19. Assessing accuracy in citizen science-based plant phenology monitoring

    NASA Astrophysics Data System (ADS)

    Fuccillo, Kerissa K.; Crimmins, Theresa M.; de Rivera, Catherine E.; Elder, Timothy S.

    2015-07-01

    In the USA, thousands of volunteers are engaged in tracking plant and animal phenology through a variety of citizen science programs for the purpose of amassing spatially and temporally comprehensive datasets useful to scientists and resource managers. The quality of these observations and their suitability for scientific analysis, however, remains largely unevaluated. We aimed to evaluate the accuracy of plant phenology observations collected by citizen scientist volunteers following protocols designed by the USA National Phenology Network (USA-NPN). Phenology observations made by volunteers receiving several hours of formal training were compared to those collected independently by a professional ecologist. Approximately 11,000 observations were recorded by 28 volunteers over the course of one field season. Volunteers consistently identified phenophases correctly (91 % overall) for the 19 species observed. Volunteers demonstrated greatest overall accuracy identifying unfolded leaves, ripe fruits, and open flowers. Transitional accuracy decreased for some species/phenophase combinations (70 % average), and accuracy varied significantly by phenophase and species ( p < 0.0001). Volunteers who submitted fewer observations over the period of study did not exhibit a higher error rate than those who submitted more total observations. Overall, these results suggest that volunteers with limited training can provide reliable observations when following explicit, standardized protocols. Future studies should investigate different observation models (i.e., group/individual, online/in-person training) over subsequent seasons with multiple expert comparisons to further substantiate the ability of these monitoring programs to supply accurate broadscale datasets capable of answering pressing ecological questions about global change.
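The headline figure above (91% overall) is simple percent agreement between volunteer and expert records. A sketch with toy observations (not USA-NPN data or code):

```python
def percent_agreement(volunteer, expert):
    """Share of phenophase status records where a volunteer matches the
    expert observer -- the simple accuracy measure used in studies like
    the one above; an illustrative helper."""
    matches = sum(v == e for v, e in zip(volunteer, expert))
    return 100.0 * matches / len(expert)

# Toy records: phenophase status (y = present, n = absent) for ten visits.
expert_obs = ["y", "y", "n", "y", "n", "n", "y", "y", "n", "y"]
volunteer_obs = ["y", "y", "n", "y", "y", "n", "y", "y", "n", "y"]
acc = percent_agreement(volunteer_obs, expert_obs)  # 9 of 10 match
```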

  20. Sleep restriction and serving accuracy in performance tennis players, and effects of caffeine.

    PubMed

    Reyner, L A; Horne, J A

    2013-08-15

Athletes often lose sleep on the night before a competition. Whilst it is unlikely that sleep loss will impair sports relying mostly on strength and endurance, little is known about potential effects on sports involving psychomotor performance necessitating judgement and accuracy rather than speed, as in tennis, for example, and where caffeine is 'permitted'. Two studies were undertaken on the effect of sleep restriction (5 h of sleep, a 33% reduction) versus normal sleep on serving accuracy in semi-professional tennis players. Testing (14:00 h-16:00 h) comprised 40 serves into a 1.8 m × 1.1 m 'service box' diagonally, over the net. Study 1 (8 male; 8 female) was within-subjects, counterbalanced (normal versus sleep restriction). Study 2 (6 male; 6 female; different subjects) comprised three conditions (Latin square), identical to Study 1 except for an extra sleep-restriction condition with 80 mg caffeine versus placebo in a sugar-free drink, given (double blind) 30 min before testing. Both studies showed significant impairments to serving accuracy after sleep restriction. Caffeine at this dose had no beneficial effect. Study 1 also assessed gender differences, with women significantly poorer under all conditions, and non-significant indications that women were more impaired by sleep restriction (also seen in Study 2). We conclude that adequate sleep is essential for best performance of this type of skill in tennis players and that caffeine is no substitute for 'lost sleep'.

  1. Positioning accuracy assessment for the 4GEO/5IGSO/2MEO constellation of COMPASS

    NASA Astrophysics Data System (ADS)

    Zhou, ShanShi; Cao, YueLing; Zhou, JianHua; Hu, XiaoGong; Tang, ChengPan; Liu, Li; Guo, Rui; He, Feng; Chen, JunPing; Wu, Bin

    2012-12-01

    Determined to become a new member of the well-established GNSS family, COMPASS (or BeiDou-2) is developing its capabilities to provide high accuracy positioning services. Two positioning modes are investigated in this study to assess the positioning accuracy of COMPASS' 4GEO/5IGSO/2MEO constellation. Precise Point Positioning (PPP) for geodetic users and real-time positioning for common navigation users are utilized. To evaluate PPP accuracy, coordinate time series repeatability and discrepancies with GPS' precise positioning are computed. Experiments show that COMPASS PPP repeatability for the east, north and up components of a receiver within mainland China is better than 2 cm, 2 cm and 5 cm, respectively. Apparent systematic offsets of several centimeters exist between COMPASS precise positioning and GPS precise positioning, indicating errors remaining in the treatments of COMPASS measurement and dynamic models and reference frame differences existing between two systems. For common positioning users, COMPASS provides both open and authorized services with rapid differential corrections and integrity information available to authorized users. Our assessment shows that in open service positioning accuracy of dual-frequency and single-frequency users is about 5 m and 6 m (RMS), respectively, which may be improved to about 3 m and 4 m (RMS) with the addition of differential corrections. Less accurate Signal In Space User Ranging Error (SIS URE) and Geometric Dilution of Precision (GDOP) contribute to the relatively inferior accuracy of COMPASS as compared to GPS. Since the deployment of the remaining 1 GEO and 2 MEO is not able to significantly improve GDOP, the performance gap could only be overcome either by the use of differential corrections or improvement of the SIS URE, or both.
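Coordinate repeatability as used above is the RMS scatter of a daily position time series about its mean, computed per component. A minimal sketch with hypothetical values:

```python
import math

def repeatability(series):
    """RMS scatter about the mean of a daily coordinate time series --
    the usual per-component repeatability metric; an illustrative
    helper, not the authors' processing chain."""
    mean = sum(series) / len(series)
    return math.sqrt(sum((x - mean) ** 2 for x in series) / len(series))

# Hypothetical daily east-component PPP solutions (metres) for one station.
east = [0.012, 0.009, 0.011, 0.014, 0.010, 0.008, 0.013]
r = repeatability(east)  # 2 mm scatter for these toy values
```

Sub-2-cm repeatability in the horizontal components, as quoted above, corresponds to `r < 0.02` for the east and north series.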

  2. Accuracy assessment of seven global land cover datasets over China

    NASA Astrophysics Data System (ADS)

    Yang, Yongke; Xiao, Pengfeng; Feng, Xuezhi; Li, Haixing

    2017-03-01

Land cover (LC) is a vital foundation of Earth science. Several global LC datasets have arisen through the efforts of many scientific communities. To provide guidelines for data usage over China, nine LC maps from seven global LC datasets (IGBP DISCover, UMD, GLC, MCD12Q1, GLCNMO, CCI-LC, and GlobeLand30) were evaluated in this study. First, we compared their similarities and discrepancies in both area and spatial patterns, and analysed their inherent relations to data sources and classification schemes and methods. Next, five sets of validation sample units (VSUs) were collected to calculate their accuracy quantitatively. Further, we built a spatial analysis model and depicted their spatial variation in accuracy based on the five sets of VSUs. The results show that there are evident discrepancies among these LC maps in both area and spatial patterns. For LC maps produced by different institutes, GLC 2000 and CCI-LC 2000 have the highest overall spatial agreement (53.8%). For LC maps produced by the same institutes, the overall spatial agreement of CCI-LC 2000 and 2010, and of MCD12Q1 2001 and 2010, reaches up to 99.8% and 73.2%, respectively; however, more effort is still needed before these LC maps can be used as time-series inputs to models, since both CCI-LC and MCD12Q1 fail to represent the rapid changes of several key LC classes in the early 21st century, in particular urban and built-up, snow and ice, water bodies, and permanent wetlands. With the highest spatial resolution, the overall accuracy of GlobeLand30 2010 is 82.39%. For the other six LC datasets with coarse resolution, CCI-LC 2010/2000 has the highest overall accuracy, followed by MCD12Q1 2010/2001, GLC 2000, GLCNMO 2008, IGBP DISCover, and UMD in turn. Although all maps exhibit high accuracy in homogeneous regions, local accuracies in other regions are quite different, particularly in the Farming-Pastoral Zone of North China, the mountains of Northeast China, and the Southeast Hills. Special…

  3. [Assessment of precision and accuracy of digital surface photogrammetry with the DSP 400 system].

    PubMed

    Krimmel, M; Kluba, S; Dietz, K; Reinert, S

    2005-03-01

    The objective of the present study was to evaluate the precision and accuracy of facial anthropometric measurements obtained through digital 3-D surface photogrammetry with the DSP 400 system in comparison to traditional 2-D photogrammetry. Fifty plaster casts of cleft infants were imaged and 21 standard anthropometric measurements were obtained. For precision assessment the measurements were performed twice in a subsample. Accuracy was determined by comparison of direct measurements and indirect 2-D and 3-D image measurements. Precision of digital surface photogrammetry was almost as good as direct anthropometry and clearly better than 2-D photogrammetry. Measurements derived from 3-D images showed better congruence to direct measurements than from 2-D photos. Digital surface photogrammetry with the DSP 400 system is sufficiently precise and accurate for craniofacial anthropometric examinations.

  4. Assessing accuracy in citizen science-based plant phenology monitoring.

    PubMed

    Fuccillo, Kerissa K; Crimmins, Theresa M; de Rivera, Catherine E; Elder, Timothy S

    2015-07-01

    In the USA, thousands of volunteers are engaged in tracking plant and animal phenology through a variety of citizen science programs for the purpose of amassing spatially and temporally comprehensive datasets useful to scientists and resource managers. The quality of these observations and their suitability for scientific analysis, however, remains largely unevaluated. We aimed to evaluate the accuracy of plant phenology observations collected by citizen scientist volunteers following protocols designed by the USA National Phenology Network (USA-NPN). Phenology observations made by volunteers receiving several hours of formal training were compared to those collected independently by a professional ecologist. Approximately 11,000 observations were recorded by 28 volunteers over the course of one field season. Volunteers consistently identified phenophases correctly (91% overall) for the 19 species observed. Volunteers demonstrated greatest overall accuracy identifying unfolded leaves, ripe fruits, and open flowers. Transitional accuracy decreased for some species/phenophase combinations (70% average), and accuracy varied significantly by phenophase and species (p < 0.0001). Volunteers who submitted fewer observations over the period of study did not exhibit a higher error rate than those who submitted more total observations. Overall, these results suggest that volunteers with limited training can provide reliable observations when following explicit, standardized protocols. Future studies should investigate different observation models (i.e., group/individual, online/in-person training) over subsequent seasons with multiple expert comparisons to further substantiate the ability of these monitoring programs to supply accurate broadscale datasets capable of answering pressing ecological questions about global change.

  5. 12 CFR 620.3 - Accuracy of reports and assessment of internal control over financial reporting.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Accuracy of reports and assessment of internal control over financial reporting. 620.3 Section 620.3 Banks and Banking FARM CREDIT ADMINISTRATION FARM CREDIT SYSTEM DISCLOSURE TO SHAREHOLDERS General § 620.3 Accuracy of reports and assessment of...

  6. Accuracy assessment of modeling architectural structures and details using terrestrial laser scanning

    NASA Astrophysics Data System (ADS)

    Kedzierski, M.; Walczykowski, P.; Orych, A.; Czarnecka, P.

    2015-08-01

    One of the most important aspects when performing architectural documentation of cultural heritage structures is the accuracy of both the data and the products which are generated from these data: documentation in the form of 3D models or vector drawings. The paper describes an assessment of the accuracy of modelling data acquired using a terrestrial phase scanner in relation to the density of a point cloud representing the surface of different types of construction materials typical for cultural heritage structures. This analysis includes the impact of the scanning geometry: the incidence angle of the laser beam and the scanning distance. For the purposes of this research, a test field consisting of samples of different types of construction materials (brick, wood, plastic, plaster, a ceramic tile, sheet metal) was built. The study involved conducting measurements at different angles and from a range of distances for chosen scanning densities. Data, acquired in the form of point clouds, were then filtered and modelled. An accuracy assessment of the 3D model was conducted by fitting it to the point cloud. The reflection intensity of each type of material was also analyzed, to determine which construction materials have the highest and which the lowest reflectance coefficients, and in turn how this variable changes for different scanning parameters. Additionally, measurements were taken of a fragment of a building in order to compare the results obtained under laboratory conditions with those obtained in the field.

  7. Simply Performance Assessment

    ERIC Educational Resources Information Center

    McLaughlin, Cheryl A.; McLaughlin, Felecia C.; Pringle, Rose M.

    2013-01-01

    This article presents the experiences of Miss Felecia McLaughlin, a fourth-grade teacher from the island of Jamaica who used the model proposed by Bass et al. (2009) to assess conceptual understanding of four of the six types of simple machines while encouraging collaboration through the creation of learning teams. Students had an opportunity to…

  8. Assessing the accuracy and reproducibility of modality independent elastography in a murine model of breast cancer

    PubMed Central

    Weis, Jared A.; Flint, Katelyn M.; Sanchez, Violeta; Yankeelov, Thomas E.; Miga, Michael I.

    2015-01-01

    Abstract. Cancer progression has been linked to mechanics. Therefore, there has been recent interest in developing noninvasive imaging tools for cancer assessment that are sensitive to changes in tissue mechanical properties. We have developed one such method, modality independent elastography (MIE), that estimates the relative elastic properties of tissue by fitting anatomical image volumes acquired before and after the application of compression to biomechanical models. The aim of this study was to assess the accuracy and reproducibility of the method using phantoms and a murine breast cancer model. Magnetic resonance imaging data were acquired, and the MIE method was used to estimate relative volumetric stiffness. Accuracy was assessed using phantom data by comparing to gold-standard mechanical testing of elasticity ratios. Validation error was <12%. Reproducibility analysis was performed on animal data, and within-subject coefficients of variation ranged from 2 to 13% at the bulk level and 32% at the voxel level. To our knowledge, this is the first study to assess the reproducibility of an elasticity imaging metric in a preclinical cancer model. Our results suggest that the MIE method can reproducibly generate accurate estimates of the relative mechanical stiffness and provide guidance on the degree of change needed in order to declare biological changes rather than experimental error in future therapeutic studies. PMID:26158120

  9. Gender differences in structured risk assessment: comparing the accuracy of five instruments.

    PubMed

    Coid, Jeremy; Yang, Min; Ullrich, Simone; Zhang, Tianqiang; Sizmur, Steve; Roberts, Colin; Farrington, David P; Rogers, Robert D

    2009-04-01

    Structured risk assessment should guide clinical risk management, but it is uncertain which instrument has the highest predictive accuracy among men and women. In the present study, the authors compared the Psychopathy Checklist-Revised (PCL-R; R. D. Hare, 1991, 2003); the Historical, Clinical, Risk Management-20 (HCR-20; C. D. Webster, K. S. Douglas, D. Eaves, & S. D. Hart, 1997); the Risk Matrix 2000-Violence (RM2000[V]; D. Thornton et al., 2003); the Violence Risk Appraisal Guide (VRAG; V. L. Quinsey, G. T. Harris, M. E. Rice, & C. A. Cormier, 1998); the Offender Group Reconviction Scale (OGRS; J. B. Copas & P. Marshall, 1998; R. Taylor, 1999); and the total previous convictions among prisoners, prospectively assessed prerelease. The authors compared predischarge measures with subsequent offending and ranked the instruments using multivariate regression. Most instruments demonstrated significant but moderate predictive ability. The OGRS ranked highest for violence among men, and the PCL-R and HCR-20 H subscale ranked highest for violence among women. The OGRS and total previous acquisitive convictions demonstrated greatest accuracy in predicting acquisitive offending among men and women. Actuarial instruments requiring no training to administer performed as well as personality assessment and structured risk assessment and were superior among men for violence.

  10. Supporting Reform through Performance Assessment.

    ERIC Educational Resources Information Center

    Kitchen, Richard; Cherrington, April; Gates, Joanne; Hitchings, Judith; Majka, Maria; Merk, Michael; Trubow, George

    2002-01-01

    Describes the impact of a performance assessment project on six teachers' teaching at Borel Middle School in the San Mateo/Foster City School District in California. Reports positive gains in student performance on the tasks over three years. (YDS)

  11. An accuracy assessment of Magellan Very Long Baseline Interferometry (VLBI)

    NASA Technical Reports Server (NTRS)

    Engelhardt, D. B.; Kronschnabl, G. R.; Border, J. S.

    1990-01-01

    Very Long Baseline Interferometry (VLBI) measurements of the Magellan spacecraft's angular position and velocity were made from July through September 1989, during the spacecraft's heliocentric flight to Venus. The purpose of this data acquisition and reduction was to verify this data type for operational use before Magellan is inserted into Venus orbit in August 1990. The accuracy of these measurements is shown to be within 20 nanoradians in angular position and within 5 picoradians/sec in angular velocity. The media effects and their calibrations are quantified; the wet fluctuating troposphere is the dominant source of measurement error for angular velocity. The charged-particle effect is completely calibrated with S- and X-band dual-frequency calibrations. Increasing the accuracy of the Earth platform model parameters, by using VLBI-derived tracking station locations consistent with the planetary ephemeris frame and by including high-frequency Earth tidal terms in the Earth rotation model, added a few nanoradians of improvement to the angular position measurements. Angular velocity measurements were insensitive to these Earth platform modelling improvements.

  12. Assessing expected accuracy of probe vehicle travel time reports

    SciTech Connect

    Hellinga, B.; Fu, L.

    1999-12-01

    The use of probe vehicles to provide estimates of link travel times has been suggested as a means of obtaining travel times within signalized networks for use in advanced travel information systems. Past research in the literature has reached contradictory conclusions regarding the expected accuracy of these probe-based estimates, and consequently has estimated different levels of market penetration of probe vehicles required to sustain accurate data within an advanced traveler information system. This paper examines the effect of sampling bias on the accuracy of the probe estimates. An analytical expression is derived on the basis of queuing theory to prove that bias in arrival time distributions and/or in the proportion of probes associated with each link departure turning movement will lead to a systematic bias in the sample estimate of the mean delay. Subsequently, the potential for and impact of sampling bias on a signalized link is examined by simulating an arterial corridor. The analytical derivation and the simulation analysis show that the reliability of probe-based average link travel times is highly affected by sampling bias. Furthermore, this analysis shows that the contradictory conclusions of previous research are directly related to the presence or absence of sample bias.
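    The sampling-bias mechanism described above can be illustrated with a small Monte Carlo sketch. All delay distributions and probe shares below are invented for illustration; they are not the paper's data or its queuing-theoretic derivation:

```python
import random

random.seed(42)

# Hypothetical signalized link: through vehicles experience short delays,
# left-turning vehicles long ones.  The fleet is 80% through / 20% left-turn,
# but the probe fleet is assumed (for illustration) to be split 50/50,
# overrepresenting the high-delay turning movement.
N = 100_000
fleet = [random.gauss(20, 5) if random.random() < 0.8 else random.gauss(60, 10)
         for _ in range(N)]
probes = [random.gauss(20, 5) if random.random() < 0.5 else random.gauss(60, 10)
          for _ in range(N)]

true_mean = sum(fleet) / N    # population mean delay, ~0.8*20 + 0.2*60 = 28 s
probe_mean = sum(probes) / N  # biased probe estimate, ~0.5*20 + 0.5*60 = 40 s
print(f"true mean delay  = {true_mean:.1f} s")
print(f"probe mean delay = {probe_mean:.1f} s (systematic sampling bias)")
```

Because the probes over-sample the high-delay movement, the probe mean is biased high no matter how many probes report, which is the qualitative effect the analytical derivation formalizes.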

  13. Human Performance Assessment Methods

    DTIC Science & Technology

    1989-05-01

    …has already addressed the problems raised by the study of human performance in operational settings, but it has very often proved impossible to… The Group's mission was to develop a standardized battery of tests and to seek out and create a structure for the exchange of data. The… was to compile and publish an International Directory of human performance research teams. This publication, though far from…

  14. Accuracy assessment of a mobile terrestrial lidar survey at Padre Island National Seashore

    USGS Publications Warehouse

    Lim, Samsung; Thatcher, Cindy A.; Brock, John C.; Kimbrow, Dustin R.; Danielson, Jeffrey J.; Reynolds, B.J.

    2013-01-01

    The higher point density and mobility of terrestrial laser scanning (light detection and ranging (lidar)) is desired when extremely detailed elevation data are needed for mapping vertically orientated complex features such as levees, dunes, and cliffs, or when highly accurate data are needed for monitoring geomorphic changes. Mobile terrestrial lidar scanners have the capability for rapid data collection on a larger spatial scale compared with tripod-based terrestrial lidar, but few studies have examined the accuracy of this relatively new mapping technology. For this reason, we conducted a field test at Padre Island National Seashore of a mobile lidar scanner mounted on a sport utility vehicle and integrated with a position and orientation system. The purpose of the study was to assess the vertical and horizontal accuracy of data collected by the mobile terrestrial lidar system, which is georeferenced to the Universal Transverse Mercator coordinate system and the North American Vertical Datum of 1988. To accomplish the study objectives, independent elevation data were collected by conducting a high-accuracy global positioning system survey to establish the coordinates and elevations of 12 targets spaced throughout the 12 km transect. These independent ground control data were compared to the lidar scanner-derived elevations to quantify the accuracy of the mobile lidar system. The performance of the mobile lidar system was also tested at various vehicle speeds and scan density settings (e.g. field of view and linear point spacing) to estimate the optimal parameters for desired point density. After adjustment of the lever arm parameters, the final point cloud accuracy was 0.060 m (east), 0.095 m (north), and 0.053 m (height). The very high density of the resulting point cloud was sufficient to map fine-scale topographic features, such as the complex shape of the sand dunes.
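    The accuracy check described above compares lidar-derived coordinates against independently surveyed GPS control targets. A minimal sketch of the component-wise RMSE computation follows; the residual values are invented, not the study's data (the study itself reports 0.060 m east, 0.095 m north, 0.053 m height):

```python
import math

# Hypothetical residuals (metres) between mobile-lidar-derived and
# GPS-surveyed coordinates at a handful of ground-control targets.
residuals = {
    "east":   [0.05, -0.07, 0.06, -0.04, 0.08, -0.06],
    "north":  [0.10, -0.09, 0.08, -0.11, 0.09, -0.10],
    "height": [0.04, -0.06, 0.05, -0.05, 0.06, -0.05],
}

def rmse(errors):
    """Root-mean-square error of a list of residuals."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

for component, errs in residuals.items():
    print(f"{component:>6}: RMSE = {rmse(errs):.3f} m")
```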

  15. Performance Assessment Sampler: A Workbook.

    ERIC Educational Resources Information Center

    Educational Testing Service, Princeton, NJ. Policy Information Center.

    Performance assessment, constructed-response, and authentic assessment are topics of current interest in educational testing and reform. This workbook presents the following articles on educational assessment to highlight work being done in this area: (1) "Aquarium Problem and Teacher Guidelines: New Standards Project" (University of…

  16. Performance Assessment in the Arts.

    ERIC Educational Resources Information Center

    Clark, Robin E.

    2002-01-01

    Explains that evaluating student work in the arts must be balanced between student engagement in creation and quality of the finished product or performance, noting that art teachers must be ready with creative and effective means of evaluation. The paper defines performance assessment, examines the many forms of assessment used in art education,…

  17. In control: systematic assessment of microarray performance.

    PubMed

    van Bakel, Harm; Holstege, Frank C P

    2004-10-01

    Expression profiling using DNA microarrays is a powerful technique that is widely used in the life sciences. How reliable are microarray-derived measurements? The assessment of performance is challenging because of the complicated nature of microarray experiments and the many different technology platforms. There is a mounting call for standards to be introduced, and this review addresses some of the issues that are involved. Two important characteristics of performance are accuracy and precision. The assessment of these factors can be either for the purpose of technology optimization or for the evaluation of individual microarray hybridizations. Microarray performance has been evaluated by at least four approaches in the past. Here, we argue that external RNA controls offer the most versatile system for determining performance and describe how such standards could be implemented. Other uses of external controls are discussed, along with the importance of probe sequence availability and the quantification of labelled material.

  18. A Multilaboratory Comparison of Calibration Accuracy and the Performance of External References in Analytical Ultracentrifugation

    PubMed Central

    Zhao, Huaying; Ghirlando, Rodolfo; Alfonso, Carlos; Arisaka, Fumio; Attali, Ilan; Bain, David L.; Bakhtina, Marina M.; Becker, Donald F.; Bedwell, Gregory J.; Bekdemir, Ahmet; Besong, Tabot M. D.; Birck, Catherine; Brautigam, Chad A.; Brennerman, William; Byron, Olwyn; Bzowska, Agnieszka; Chaires, Jonathan B.; Chaton, Catherine T.; Cölfen, Helmut; Connaghan, Keith D.; Crowley, Kimberly A.; Curth, Ute; Daviter, Tina; Dean, William L.; Díez, Ana I.; Ebel, Christine; Eckert, Debra M.; Eisele, Leslie E.; Eisenstein, Edward; England, Patrick; Escalante, Carlos; Fagan, Jeffrey A.; Fairman, Robert; Finn, Ron M.; Fischle, Wolfgang; de la Torre, José García; Gor, Jayesh; Gustafsson, Henning; Hall, Damien; Harding, Stephen E.; Cifre, José G. Hernández; Herr, Andrew B.; Howell, Elizabeth E.; Isaac, Richard S.; Jao, Shu-Chuan; Jose, Davis; Kim, Soon-Jong; Kokona, Bashkim; Kornblatt, Jack A.; Kosek, Dalibor; Krayukhina, Elena; Krzizike, Daniel; Kusznir, Eric A.; Kwon, Hyewon; Larson, Adam; Laue, Thomas M.; Le Roy, Aline; Leech, Andrew P.; Lilie, Hauke; Luger, Karolin; Luque-Ortega, Juan R.; Ma, Jia; May, Carrie A.; Maynard, Ernest L.; Modrak-Wojcik, Anna; Mok, Yee-Foong; Mücke, Norbert; Nagel-Steger, Luitgard; Narlikar, Geeta J.; Noda, Masanori; Nourse, Amanda; Obsil, Tomas; Park, Chad K.; Park, Jin-Ku; Pawelek, Peter D.; Perdue, Erby E.; Perkins, Stephen J.; Perugini, Matthew A.; Peterson, Craig L.; Peverelli, Martin G.; Piszczek, Grzegorz; Prag, Gali; Prevelige, Peter E.; Raynal, Bertrand D. E.; Rezabkova, Lenka; Richter, Klaus; Ringel, Alison E.; Rosenberg, Rose; Rowe, Arthur J.; Rufer, Arne C.; Scott, David J.; Seravalli, Javier G.; Solovyova, Alexandra S.; Song, Renjie; Staunton, David; Stoddard, Caitlin; Stott, Katherine; Strauss, Holger M.; Streicher, Werner W.; Sumida, John P.; Swygert, Sarah G.; Szczepanowski, Roman H.; Tessmer, Ingrid; Toth, Ronald T.; Tripathy, Ashutosh; Uchiyama, Susumu; Uebel, Stephan F. 
W.; Unzai, Satoru; Gruber, Anna Vitlin; von Hippel, Peter H.; Wandrey, Christine; Wang, Szu-Huan; Weitzel, Steven E.; Wielgus-Kutrowska, Beata; Wolberger, Cynthia; Wolff, Martin; Wright, Edward; Wu, Yu-Sung; Wubben, Jacinta M.; Schuck, Peter

    2015-01-01

    Analytical ultracentrifugation (AUC) is a first principles based method to determine absolute sedimentation coefficients and buoyant molar masses of macromolecules and their complexes, reporting on their size and shape in free solution. The purpose of this multi-laboratory study was to establish the precision and accuracy of basic data dimensions in AUC and validate previously proposed calibration techniques. Three kits of AUC cell assemblies containing radial and temperature calibration tools and a bovine serum albumin (BSA) reference sample were shared among 67 laboratories, generating 129 comprehensive data sets. These allowed for an assessment of many parameters of instrument performance, including accuracy of the reported scan time after the start of centrifugation, the accuracy of the temperature calibration, and the accuracy of the radial magnification. The range of sedimentation coefficients obtained for BSA monomer in different instruments and using different optical systems was from 3.655 S to 4.949 S, with a mean and standard deviation of (4.304 ± 0.188) S (4.4%). After the combined application of correction factors derived from the external calibration references for elapsed time, scan velocity, temperature, and radial magnification, the range of s-values was reduced 7-fold with a mean of 4.325 S and a 6-fold reduced standard deviation of ± 0.030 S (0.7%). In addition, the large data set provided an opportunity to determine the instrument-to-instrument variation of the absolute radial positions reported in the scan files, the precision of photometric or refractometric signal magnitudes, and the precision of the calculated apparent molar mass of BSA monomer and the fraction of BSA dimers. These results highlight the necessity and effectiveness of independent calibration of basic AUC data dimensions for reliable quantitative studies. PMID:25997164

  19. A multilaboratory comparison of calibration accuracy and the performance of external references in analytical ultracentrifugation.

    PubMed

    Zhao, Huaying; Ghirlando, Rodolfo; Alfonso, Carlos; Arisaka, Fumio; Attali, Ilan; Bain, David L; Bakhtina, Marina M; Becker, Donald F; Bedwell, Gregory J; Bekdemir, Ahmet; Besong, Tabot M D; Birck, Catherine; Brautigam, Chad A; Brennerman, William; Byron, Olwyn; Bzowska, Agnieszka; Chaires, Jonathan B; Chaton, Catherine T; Cölfen, Helmut; Connaghan, Keith D; Crowley, Kimberly A; Curth, Ute; Daviter, Tina; Dean, William L; Díez, Ana I; Ebel, Christine; Eckert, Debra M; Eisele, Leslie E; Eisenstein, Edward; England, Patrick; Escalante, Carlos; Fagan, Jeffrey A; Fairman, Robert; Finn, Ron M; Fischle, Wolfgang; de la Torre, José García; Gor, Jayesh; Gustafsson, Henning; Hall, Damien; Harding, Stephen E; Cifre, José G Hernández; Herr, Andrew B; Howell, Elizabeth E; Isaac, Richard S; Jao, Shu-Chuan; Jose, Davis; Kim, Soon-Jong; Kokona, Bashkim; Kornblatt, Jack A; Kosek, Dalibor; Krayukhina, Elena; Krzizike, Daniel; Kusznir, Eric A; Kwon, Hyewon; Larson, Adam; Laue, Thomas M; Le Roy, Aline; Leech, Andrew P; Lilie, Hauke; Luger, Karolin; Luque-Ortega, Juan R; Ma, Jia; May, Carrie A; Maynard, Ernest L; Modrak-Wojcik, Anna; Mok, Yee-Foong; Mücke, Norbert; Nagel-Steger, Luitgard; Narlikar, Geeta J; Noda, Masanori; Nourse, Amanda; Obsil, Tomas; Park, Chad K; Park, Jin-Ku; Pawelek, Peter D; Perdue, Erby E; Perkins, Stephen J; Perugini, Matthew A; Peterson, Craig L; Peverelli, Martin G; Piszczek, Grzegorz; Prag, Gali; Prevelige, Peter E; Raynal, Bertrand D E; Rezabkova, Lenka; Richter, Klaus; Ringel, Alison E; Rosenberg, Rose; Rowe, Arthur J; Rufer, Arne C; Scott, David J; Seravalli, Javier G; Solovyova, Alexandra S; Song, Renjie; Staunton, David; Stoddard, Caitlin; Stott, Katherine; Strauss, Holger M; Streicher, Werner W; Sumida, John P; Swygert, Sarah G; Szczepanowski, Roman H; Tessmer, Ingrid; Toth, Ronald T; Tripathy, Ashutosh; Uchiyama, Susumu; Uebel, Stephan F W; Unzai, Satoru; Gruber, Anna Vitlin; von Hippel, Peter H; Wandrey, Christine; Wang, Szu-Huan; Weitzel, 
Steven E; Wielgus-Kutrowska, Beata; Wolberger, Cynthia; Wolff, Martin; Wright, Edward; Wu, Yu-Sung; Wubben, Jacinta M; Schuck, Peter

    2015-01-01

    Analytical ultracentrifugation (AUC) is a first principles based method to determine absolute sedimentation coefficients and buoyant molar masses of macromolecules and their complexes, reporting on their size and shape in free solution. The purpose of this multi-laboratory study was to establish the precision and accuracy of basic data dimensions in AUC and validate previously proposed calibration techniques. Three kits of AUC cell assemblies containing radial and temperature calibration tools and a bovine serum albumin (BSA) reference sample were shared among 67 laboratories, generating 129 comprehensive data sets. These allowed for an assessment of many parameters of instrument performance, including accuracy of the reported scan time after the start of centrifugation, the accuracy of the temperature calibration, and the accuracy of the radial magnification. The range of sedimentation coefficients obtained for BSA monomer in different instruments and using different optical systems was from 3.655 S to 4.949 S, with a mean and standard deviation of (4.304 ± 0.188) S (4.4%). After the combined application of correction factors derived from the external calibration references for elapsed time, scan velocity, temperature, and radial magnification, the range of s-values was reduced 7-fold with a mean of 4.325 S and a 6-fold reduced standard deviation of ± 0.030 S (0.7%). In addition, the large data set provided an opportunity to determine the instrument-to-instrument variation of the absolute radial positions reported in the scan files, the precision of photometric or refractometric signal magnitudes, and the precision of the calculated apparent molar mass of BSA monomer and the fraction of BSA dimers. These results highlight the necessity and effectiveness of independent calibration of basic AUC data dimensions for reliable quantitative studies.
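    A minimal sketch of how such external calibration corrections combine, assuming a simple multiplicative model; the factor names and values below are invented for illustration and are not the study's actual correction factors:

```python
# Illustrative only: the study derives instrument-specific correction factors
# from external references (elapsed time, scan velocity, temperature, radial
# magnification) and applies them to the experimental sedimentation
# coefficient.  Here they are modelled as simple multiplicative factors.
def corrected_s(s_exp, f_time=1.0, f_scan_velocity=1.0,
                f_temperature=1.0, f_radial=1.0):
    """Apply multiplicative calibration corrections to a raw s-value (in S)."""
    return s_exp * f_time * f_scan_velocity * f_temperature * f_radial

# A raw BSA-monomer s-value of 4.10 S with hypothetical corrections:
s = corrected_s(4.10, f_time=1.02, f_temperature=1.015, f_radial=1.01)
print(f"corrected s = {s:.3f} S")
```

Each factor individually shifts the s-value by a few percent, which is the order of the instrument-to-instrument spread (4.4%) the study reduced to 0.7% after correction.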

  20. Accuracy of genomic prediction of purebreds for crossbred performance in pigs.

    PubMed

    Hidalgo, A M; Bastiaansen, J W M; Lopes, M S; Calus, M P L; de Koning, D J

    2016-12-01

    In pig breeding, as the final product is a crossbred (CB) animal, the goal is to increase CB performance. This goal requires different strategies for the implementation of genomic selection from what is currently implemented in, for example, dairy cattle breeding. A good strategy is to estimate marker effects on the basis of CB performance and subsequently use them to select purebred (PB) breeding animals. The objective of our study was to assess empirically the predictive ability (accuracy) of direct genomic values of PB animals for CB performance across two traits using CB and PB genomic and phenotypic data. We studied three scenarios in which genetic merit was predicted within each population, and four scenarios in which PB genetic merit for CB performance was predicted from either CB or PB training data. Accuracy of prediction of PB genetic merit for CB performance based on CB training data ranged from 0.23 to 0.27 for gestation length (GLE), whereas it ranged from 0.11 to 0.22 for total number of piglets born (TNB). When based on PB training data, it ranged from 0.35 to 0.55 for GLE and from 0.30 to 0.40 for TNB. Our results showed that it is possible to predict PB genetic merit for CB performance using CB training data, but predictive ability was lower than with PB training data. This result is mainly due to the structure of our data: a small-to-moderate CB training data set, a low relationship between the CB training and PB validation populations, and a high genetic correlation (0.94 for GLE and 0.90 for TNB) between the studied traits in PB and CB individuals, which favours selection on the basis of PB data.
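    Predictive ability of the kind reported above is typically computed as the Pearson correlation between direct genomic values and phenotypes in a validation set. A sketch on simulated data follows; all names, variances and sample sizes are illustrative, not the study's:

```python
import random

random.seed(1)

# Simulated validation set: a latent genetic merit, a noisy genomic
# prediction (DGV) of it, and a noisy phenotype (CB performance).
n = 2000
true_merit = [random.gauss(0, 1) for _ in range(n)]
dgv = [0.4 * g + random.gauss(0, 0.9) for g in true_merit]
phenotype = [g + random.gauss(0, 2) for g in true_merit]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

print(f"predictive ability (accuracy) = {pearson(dgv, phenotype):.2f}")
```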

  1. Precision and accuracy of visual foliar injury assessments

    SciTech Connect

    Gumpertz, M.L.; Tingey, D.T.; Hogsett, W.E.

    1982-07-01

    The study compared three measures of foliar injury: (i) mean percent leaf area injured over all leaves on the plant, (ii) mean percent leaf area injured of the three most injured leaves, and (iii) the proportion of injured leaves to the total number of leaves. For the first measure, the variation caused by reader biases and day-to-day variation was compared with the innate plant-to-plant variation. Bean (Phaseolus vulgaris 'Pinto'), pea (Pisum sativum 'Little Marvel'), radish (Raphanus sativus 'Cherry Belle'), and spinach (Spinacia oleracea 'Northland') plants were exposed to either 3 µL L⁻¹ SO₂ or 0.3 µL L⁻¹ ozone for 2 h. Three leaf readers visually assessed the percent injury on every leaf of each plant, while a fourth reader used a transparent grid to make an unbiased assessment for each plant. The mean leaf area injured of the three most injured leaves was highly correlated with that of all leaves on the plant only if the three most injured leaves were <100% injured. The proportion of leaves injured was not highly correlated with percent leaf area injured of all leaves on the plant for any species in this study. The largest source of variation in visual assessments was plant-to-plant variation, which ranged from 44 to 97% of the total variance, followed by variation among readers (0-32% of the variance). Except for radish exposed to ozone, day-to-day variation accounted for <18% of the total. Reader bias in the assessment of ozone injury was significant but could be adjusted for each reader by a simple linear regression (R² = 0.89-0.91) of the visual assessments against the grid assessments.
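    The reader-bias adjustment described above can be sketched as an ordinary least-squares regression of one reader's visual scores on the unbiased grid scores, with the fitted line inverted to correct new visual scores. The data below are invented for illustration:

```python
grid =   [5.0, 12.0, 20.0, 35.0, 50.0, 70.0]   # unbiased grid assessment (%)
visual = [9.0, 18.0, 27.0, 44.0, 58.0, 80.0]   # one reader's visual scores (%)

def linfit(x, y):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

a, b = linfit(grid, visual)

def adjusted(v):
    """Map a visual score back onto the grid scale by inverting the fit."""
    return (v - b) / a

print(f"visual = {a:.2f} * grid + {b:.2f}")
print(f"a visual score of 40% adjusts to {adjusted(40):.1f}%")
```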

  2. Performance of the COROT CCD for high-accuracy photometry

    NASA Astrophysics Data System (ADS)

    Bernardi, P.; Lapeyrere, V.; Buey, J.-T.; Parisot, J.; Schmidt, R.; Leruyet, B.; Tiphène, D.; Gilard, O.; Rolland, G.

    2004-01-01

    The focal plane of the COROT instrument is made of four CCDs, two dedicated to asteroseismology and two dedicated to the detection of telluric planets. The detectors are provided by E2V (4280 series), each having 2k×4k pixels. They work in AIMO and frame-transfer mode, at a working temperature of -40°C. As the COROT photometer will have to detect fluctuations expressed in ppm (parts per million), a specific calibration of the whole photometric chain has to be achieved. Moreover, we have ten CCDs available and therefore need to calibrate them in order to select four CCDs for the flight focal plane. A specific test bench is dedicated to the calibration of these CCDs, and five have already been tested. The main characteristics of interest are: cosmetics (black and white pixels or columns): zero white pixels, fewer than 6 black columns; pixel response non-uniformity (PRNU) versus wavelength: 1% rms at 650 nm; gain versus temperature: -900 ppm K⁻¹ relative fluctuation; absolute quantum efficiency: 95% max at 650 nm; quantum efficiency versus temperature: 2000 ppm K⁻¹ relative fluctuation at 650 nm; full-well capacity: 85 to 100 ke⁻; dark current at -40°C: <0.5 e⁻ pixel⁻¹ s⁻¹. Comparing the measured characteristics to those provided by E2V, we first check that the CCDs meet specifications. Then, when all CCDs are tested, we will select the best CCDs for each scientific program (asteroseismology or planet finding). Specific tools have been developed to compare the performances of the detectors using the images acquired on the test bench and the shapes of the PSF for the two scientific programs. The detectors have also been tested under irradiation; the results strongly depend on the specific orbit of COROT (irradiation dose and related particles) and have to be compared to the global performances of the instrument.

  3. In vivo estimation of the glenohumeral joint centre by functional methods: accuracy and repeatability assessment.

    PubMed

    Lempereur, Mathieu; Leboeuf, Fabien; Brochard, Sylvain; Rousset, Jean; Burdin, Valérie; Rémy-Néris, Olivier

    2010-01-19

    Several algorithms have been proposed for determining the centre of rotation of ball joints. These algorithms have mostly been used to locate the hip joint centre; few studies have focused on determining the glenohumeral joint centre, and none have assessed the accuracy and repeatability of functional methods for it. This paper aims at evaluating the accuracy and the repeatability with which the glenohumeral joint rotation centre (GHRC) can be estimated in vivo by functional methods. The reference joint centre is the glenohumeral anatomical centre obtained by medical imaging. Five functional methods were tested: the algorithm of Gamage and Lasenby (2002), bias-compensated (Halvorsen, 2003), symmetrical centre of rotation estimation (Ehrig et al., 2006), the normalization method (Chang and Pollard, 2007), and the helical axis (Woltring et al., 1985). The glenohumeral anatomical centre (GHAC) was deduced from a fit of the humeral head. Four subjects performed three cycles of three different movements (flexion/extension, abduction/adduction and circumduction). For each test, the location of the glenohumeral joint centre was estimated by the five methods. Analyses focused on the 3D location, the repeatability of the location, and the accuracy, computed as the Euclidean distance between the estimated GHRC and the GHAC. For all methods, the repeatability error was less than 8.25 mm. This study showed that there are significant differences between the five functional methods. The smallest distance between the estimated joint centre and the centre of the humeral head was obtained with the method of Gamage and Lasenby (2002).
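    Functional methods of this family reduce, in their simplest algebraic form, to fitting a sphere to marker trajectories about a fixed centre of rotation. The sketch below is a generic linear least-squares sphere fit on synthetic noise-free data, not the exact Gamage and Lasenby formulation or the paper's pipeline:

```python
import math
import random

# Synthetic, noise-free marker positions on a sphere about a known centre.
random.seed(0)
centre_true, r_true = (1.0, 2.0, 3.0), 0.15
points = []
for _ in range(200):
    v = [random.gauss(0, 1) for _ in range(3)]
    norm = math.sqrt(sum(x * x for x in v))
    points.append(tuple(c + r_true * x / norm
                        for c, x in zip(centre_true, v)))

# |p - c|^2 = r^2 rearranges to |p|^2 = 2 p.c + u with u = r^2 - |c|^2,
# which is linear in (c, u): rows [2px 2py 2pz 1], right-hand side |p|^2.
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        piv = max(range(i, n), key=lambda k: abs(M[k][i]))
        M[i], M[piv] = M[piv], M[i]
        for k in range(i + 1, n):
            f = M[k][i] / M[i][i]
            for j in range(i, n + 1):
                M[k][j] -= f * M[i][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

rows = [[2 * p[0], 2 * p[1], 2 * p[2], 1.0] for p in points]
rhs = [sum(x * x for x in p) for p in points]
# Normal equations A^T A x = A^T b, small enough to form explicitly here.
AtA = [[sum(row[i] * row[j] for row in rows) for j in range(4)] for i in range(4)]
Atb = [sum(row[i] * v for row, v in zip(rows, rhs)) for i in range(4)]
cx, cy, cz, u = solve(AtA, Atb)
r = math.sqrt(u + cx * cx + cy * cy + cz * cz)
print(f"fitted centre = ({cx:.3f}, {cy:.3f}, {cz:.3f}), radius = {r:.3f}")
```

On noise-free data the fit recovers the true centre and radius; the methods compared in the paper differ mainly in how they handle noise and marker-cluster motion.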

  4. Inter-comparison and accuracy assessment of TRMM 3B42 products over Turkey

    NASA Astrophysics Data System (ADS)

    Amjad, Muhammad; Yilmaz, M. Tugrul

    2016-04-01

    Accurate estimation of precipitation, especially over complex topography, is impeded by many factors that depend on the platform from which it is acquired. Satellites have the advantage of providing spatially and temporally continuous and consistent datasets. However, utilizing satellite precipitation data in various applications requires its uncertainty to be estimated robustly. In this study, the accuracy of two Tropical Rainfall Measuring Mission (Version 3B42) products, TRMM 3B42 V6 and TRMM 3B42 V7, is assessed by inter-comparing their monthly time series against ground observations obtained at 256 stations in Turkey. Errors are further analyzed for their seasonal and climate-dependent variability. Both the V6 and V7 products show better performance during summers than winters. The V6 product has a dry bias over drier regions and the V7 product has a wet bias over wetter regions of the country. Moreover, the rainfall-measuring accuracies of both versions are much lower along coastal regions and at lower altitudes. Overall, the statistics of the monthly products confirm that the V7 product is an improvement over V6. (This study was supported by TUBITAK fund # 114Y676.)
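    Monthly satellite-gauge inter-comparisons of this kind typically report bias and RMSE of the satellite product against the station series. A minimal sketch with invented values (not the study's data):

```python
import math

# Hypothetical co-located monthly precipitation totals (mm/month).
gauge     = [55.0, 40.0, 80.0, 120.0, 20.0, 10.0]   # station observations
satellite = [48.0, 35.0, 90.0, 110.0, 25.0,  8.0]   # satellite pixel values

n = len(gauge)
bias = sum(s - g for s, g in zip(satellite, gauge)) / n   # negative => dry bias
rmse = math.sqrt(sum((s - g) ** 2 for s, g in zip(satellite, gauge)) / n)
print(f"bias = {bias:+.1f} mm/month, RMSE = {rmse:.1f} mm/month")
```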

  5. Accuracy assessment of CKC high-density surface EMG decomposition in biceps femoris muscle

    NASA Astrophysics Data System (ADS)

    Marateb, H. R.; McGill, K. C.; Holobar, A.; Lateva, Z. C.; Mansourian, M.; Merletti, R.

    2011-10-01

    The aim of this study was to assess the accuracy of the convolution kernel compensation (CKC) method in decomposing high-density surface EMG (HDsEMG) signals from the pennate biceps femoris long-head muscle. Although the CKC method has already been thoroughly assessed in parallel-fibered muscles, there are several factors that could hinder its performance in pennate muscles. Namely, HDsEMG signals from pennate and parallel-fibered muscles differ considerably in terms of the number of detectable motor units (MUs) and the spatial distribution of the motor-unit action potentials (MUAPs). In this study, monopolar surface EMG signals were recorded from five normal subjects during low-force voluntary isometric contractions using a 92-channel electrode grid with 8 mm inter-electrode distances. Intramuscular EMG (iEMG) signals were recorded concurrently using monopolar needles. The HDsEMG and iEMG signals were independently decomposed into MUAP trains, and the iEMG results were verified using a rigorous a posteriori statistical analysis. HDsEMG decomposition identified from 2 to 30 MUAP trains per contraction. 3 ± 2 of these trains were also reliably detected by iEMG decomposition. The measured CKC decomposition accuracy of these common trains over a selected 10 s interval was 91.5 ± 5.8%. The other trains were not assessed. The significant factors that affected CKC decomposition accuracy were the number of HDsEMG channels that were free of technical artifact and the distinguishability of the MUAPs in the HDsEMG signal (P < 0.05). These results show that the CKC method reliably identifies at least a subset of MUAP trains in HDsEMG signals from low force contractions in pennate muscles.

  6. Improving the accuracy of weight status assessment in infancy research.

    PubMed

    Dixon, Wallace E; Dalton, William T; Berry, Sarah M; Carroll, Vincent A

    2014-08-01

    Both researchers and primary care providers vary in their methods for assessing weight status in infants. The purpose of the present investigation was to compare standing-height-derived to recumbent-length-derived weight-for-length standardized (WLZ) scores, using the WHO growth curves, in a convenience sample of infants who visited the lab at 18 and 21 months of age. Fifty-eight primarily White, middle-class infants (25 girls) from a semi-rural region of southern Appalachia visited the lab at 18 months, with 45 infants returning 3 months later. We found that recumbent-length-derived WLZ scores were significantly higher at 18 months than the corresponding standing-height-derived WLZ scores. We also found that recumbent-length-derived WLZ scores, but not those derived from standing-height measures, decreased significantly from 18 to 21 months. Although these differential results are attributable to the WHO database data entry syntax, which automatically corrects standing-height measurements by adding 0.7 cm, they suggest that researchers should proceed cautiously when using standing-height-derived measures to calculate infant BMI z-scores. Our results suggest that, for practical purposes, standing-height measurements may be preferred, so long as they are entered into the WHO database as recumbent-length measurements. We also encourage basic-science infancy researchers to include BMI assessments as part of their routine assessment protocols, to serve as potential outcome measures for other basic-science variables of theoretical interest.
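The height-to-length adjustment described in this abstract can be made concrete with a short sketch (a hypothetical illustration, not the study's code; the +0.7 cm offset is the correction the WHO data entry syntax applies, and the function name is our own):

```python
# Hypothetical sketch of the adjustment described above: WHO growth-standard
# data entry treats entered values as recumbent length, and standing-height
# measurements are corrected by adding 0.7 cm before z-scores are computed.

def height_to_length_equivalent(standing_height_cm: float) -> float:
    """Convert a standing-height measurement (cm) to its recumbent-length
    equivalent on the WHO scale (assumed fixed +0.7 cm offset)."""
    return standing_height_cm + 0.7

# An 18-month-old measured at 80.0 cm standing is scored as if the child
# measured 80.7 cm lying down.
print(height_to_length_equivalent(80.0))
```

Entering standing heights without this conversion therefore deflates WLZ scores, which is consistent with the lower standing-height-derived scores the authors report.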

  7. Assessing the accuracy of quantitative molecular microbial profiling.

    PubMed

    O'Sullivan, Denise M; Laver, Thomas; Temisak, Sasithon; Redshaw, Nicholas; Harris, Kathryn A; Foy, Carole A; Studholme, David J; Huggett, Jim F

    2014-11-21

    The application of high-throughput sequencing in profiling microbial communities is providing an unprecedented ability to investigate microbiomes. Such studies typically apply one of two methods: amplicon sequencing using PCR to target a conserved orthologous sequence (typically the 16S ribosomal RNA gene) or whole (meta)genome sequencing (WGS). Both methods have been used to catalog the microbial taxa present in a sample and quantify their respective abundances. However, a comparison of the inherent precision or bias of the different sequencing approaches has not been performed. We previously developed a metagenomic control material (MCM) to investigate error when performing different sequencing strategies. Amplicon sequencing using four different primer strategies and two 16S rRNA regions was examined (Roche 454 Junior) and compared to WGS (Illumina HiSeq). All sequencing methods generally performed comparably and in good agreement with organism specific digital PCR (dPCR); WGS notably demonstrated very high precision. Where discrepancies between relative abundances occurred they tended to differ by less than twofold. Our findings suggest that when alternative sequencing approaches are used for microbial molecular profiling they can perform with good reproducibility, but care should be taken when comparing small differences between distinct methods. This work provides a foundation for future work comparing relative differences between samples and the impact of extraction methods. We also highlight the value of control materials when conducting microbial profiling studies to benchmark methods and set appropriate thresholds.

  8. Assessing the Accuracy of Quantitative Molecular Microbial Profiling

    PubMed Central

    O’Sullivan, Denise M.; Laver, Thomas; Temisak, Sasithon; Redshaw, Nicholas; Harris, Kathryn A.; Foy, Carole A.; Studholme, David J.; Huggett, Jim F.

    2014-01-01

    The application of high-throughput sequencing in profiling microbial communities is providing an unprecedented ability to investigate microbiomes. Such studies typically apply one of two methods: amplicon sequencing using PCR to target a conserved orthologous sequence (typically the 16S ribosomal RNA gene) or whole (meta)genome sequencing (WGS). Both methods have been used to catalog the microbial taxa present in a sample and quantify their respective abundances. However, a comparison of the inherent precision or bias of the different sequencing approaches has not been performed. We previously developed a metagenomic control material (MCM) to investigate error when performing different sequencing strategies. Amplicon sequencing using four different primer strategies and two 16S rRNA regions was examined (Roche 454 Junior) and compared to WGS (Illumina HiSeq). All sequencing methods generally performed comparably and in good agreement with organism specific digital PCR (dPCR); WGS notably demonstrated very high precision. Where discrepancies between relative abundances occurred they tended to differ by less than twofold. Our findings suggest that when alternative sequencing approaches are used for microbial molecular profiling they can perform with good reproducibility, but care should be taken when comparing small differences between distinct methods. This work provides a foundation for future work comparing relative differences between samples and the impact of extraction methods. We also highlight the value of control materials when conducting microbial profiling studies to benchmark methods and set appropriate thresholds. PMID:25421243

  9. Attribute-Level and Pattern-Level Classification Consistency and Accuracy Indices for Cognitive Diagnostic Assessment

    ERIC Educational Resources Information Center

    Wang, Wenyi; Song, Lihong; Chen, Ping; Meng, Yaru; Ding, Shuliang

    2015-01-01

    Classification consistency and accuracy are viewed as important indicators for evaluating the reliability and validity of classification results in cognitive diagnostic assessment (CDA). Pattern-level classification consistency and accuracy indices were introduced by Cui, Gierl, and Chang. However, the indices at the attribute level have not yet…

  10. Accuracy of Optimized Branched Algorithms to Assess Activity-Specific PAEE

    PubMed Central

    Edwards, Andy G.; Hill, James O.; Byrnes, William C.; Browning, Raymond C.

    2009-01-01

    PURPOSE To assess the activity-specific accuracy achievable by branched algorithm (BA) analysis of simulated daily-living physical activity energy expenditure (PAEE) within a sedentary population. METHODS Sedentary men (n=8) and women (n=8) first performed a treadmill calibration protocol, during which heart rate (HR), accelerometry (ACC), and PAEE were measured in 1-minute epochs. From these data, HR-PAEE and ACC-PAEE regressions were constructed and used in each of six analytic models to predict PAEE from ACC and HR data collected during a subsequent simulated daily-living protocol. Criterion PAEE was measured during both protocols via indirect calorimetry. The accuracy achieved by each model was assessed by the root mean square of the difference between model-predicted daily-living PAEE and the criterion daily-living PAEE (expressed here as % of mean daily-living PAEE). RESULTS Across the range of activities, an unconstrained post hoc optimized branched algorithm best predicted criterion PAEE. Estimates using individual calibration were generally more accurate than those using group calibration (14% vs. 16% error, respectively). These analyses also performed well within each of the six daily-living activities, but systematic errors appeared for several of those activities, which may be explained by an inability of the algorithm to simultaneously accommodate a heterogeneous range of activities. Analyses of mean square error by subject and activity suggest that optimization involving minimization of RMS for total daily-living PAEE is associated with decreased error between subjects but increased error between activities. CONCLUSION The performance of post hoc optimized branched algorithms may be limited by heterogeneity in the daily-living activities being performed. PMID:19952842

  11. Teaching Students to Perform Assessments.

    ERIC Educational Resources Information Center

    Franklin, Cynthia; Jordan, Catheleen

    1992-01-01

    This article presents an integrative skills assessment approach for teaching students of social work to perform assessments. The approach is based on technical eclecticism and combines three practice models: (1) psychosocial, (2) cognitive-behavioral, and (3) systems. Specific teaching strategies as well as shortcomings of this approach are…

  12. Assessment of RFID Read Accuracy for ISS Water Kit

    NASA Technical Reports Server (NTRS)

    Chu, Andrew

    2011-01-01

    The Space Life Sciences Directorate/Medical Informatics and Health Care Systems Branch (SD4) is assessing the benefits of Radio Frequency Identification (RFID) technology for tracking items flown onboard the International Space Station (ISS). As an initial study, the Avionic Systems Division Electromagnetic Systems Branch (EV4) is collaborating with SD4 to affix RFID tags to a water kit supplied by SD4 and to study the read success rate of the tagged items. The tagged water kit inside a Cargo Transfer Bag (CTB) was inventoried using three different RFID technologies: the Johnson Space Center Building 14 Wireless Habitat Test Bed RFID portal, an RFID hand-held reader being targeted for use on board the ISS, and an RFID enclosure designed and prototyped by EV4.

  13. Shuttle radar topography mission accuracy assessment and evaluation for hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Mercuri, Pablo Alberto

    Digital Elevation Models (DEMs) are increasingly used even in low relief landscapes for multiple mapping applications and modeling approaches such as surface hydrology, flood risk mapping, agricultural suitability, and generation of topographic attributes. The National Aeronautics and Space Administration (NASA) has produced a nearly global database of highly accurate elevation data, the Shuttle Radar Topography Mission (SRTM) DEM. The main goals of this thesis were to investigate quality issues of SRTM, provide measures of vertical accuracy with emphasis on low relief areas, and to analyze the performance for the generation of physical boundaries and streams for watershed modeling and characterization. The absolute and relative accuracy of the two SRTM resolutions, at 1 and 3 arc-seconds, were investigated to generate information that can be used as a reference in areas with similar characteristics in other regions of the world. The absolute accuracy was obtained from accurate point estimates using the best available federal geodetic network in Indiana. The SRTM root mean square error for this area of the Midwest US surpassed data specifications. It was on the order of 2 meters for the 1 arc-second resolution in flat areas of the Midwest US. Estimates of error were smaller for the global coverage 3 arc-second data with very similar results obtained in the flat plains in Argentina. In addition to calculating the vertical accuracy, the impacts of physiography and terrain attributes, like slope, on the error magnitude were studied. The assessment also included analysis of the effects of land cover on vertical accuracy. Measures of local variability were described to identify the adjacency effects produced by surface features in the SRTM DEM, like forests and manmade features near the geodetic point. 
Spatial relationships among the bare-earth National Elevation Data and SRTM were also analyzed to assess the relative accuracy that was 2.33 meters in terms of the total
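As background to the error figures quoted in this record, the vertical accuracy of a DEM is commonly summarized as the root mean square error (RMSE) of DEM-minus-benchmark elevation differences at geodetic control points. The following sketch uses invented sample values (not the thesis data) with errors on the order of the ~2 m reported for SRTM 1 arc-second data over flat terrain:

```python
# Illustrative RMSE computation for DEM vertical accuracy assessment.
import math

def vertical_rmse(dem_elev, benchmark_elev):
    """RMSE of paired DEM and ground-truth elevations (same units)."""
    diffs = [d - b for d, b in zip(dem_elev, benchmark_elev)]
    return math.sqrt(sum(e * e for e in diffs) / len(diffs))

# Hypothetical control points (elevations in meters):
dem = [251.8, 198.4, 305.1, 220.0]   # DEM-sampled elevations
gps = [250.0, 196.1, 303.2, 221.5]   # geodetic benchmark elevations
print(round(vertical_rmse(dem, gps), 2))  # → 1.9
```

In practice the differences would also be binned by slope, land cover, or physiography, as the thesis does, to see how each factor inflates the error.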

  14. 12 CFR 630.5 - Accuracy of reports and assessment of internal control over financial reporting.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... CREDIT SYSTEM General § 630.5 Accuracy of reports and assessment of internal control over financial... assessment of internal control over financial reporting. (1) Annual reports must include a report by the Funding Corporation's management assessing the effectiveness of the internal control over...

  15. Point Cloud Derived from Video Frames: Accuracy Assessment in Relation to Terrestrial Laser Scanning and Digital Camera Data

    NASA Astrophysics Data System (ADS)

    Delis, P.; Zacharek, M.; Wierzbicki, D.; Grochala, A.

    2017-02-01

    The use of image sequences in the form of video frames recorded on data storage is very useful, especially when working with large and complex structures. Two cameras were used in this study: a Sony NEX-5N (for the test object) and a Sony NEX-VG10 E (for the historic building). In both cases, a Sony α f = 16 mm fixed-focus wide-angle lens was used. Single frames with sufficient overlap were selected from the video sequence using an equation for automatic frame selection. In order to improve the quality of the generated point clouds, each video frame underwent histogram equalization and image sharpening. Point clouds were generated from the video frames using the SGM-like image matching algorithm. The accuracy assessment was based on two reference point clouds: the first from terrestrial laser scanning and the second generated from images acquired with a high-resolution camera, the NIKON D800. The research showed that the highest accuracies are obtained for point clouds generated from video frames to which high-pass filtration and histogram equalization had been applied. The studies also showed that, to obtain a point cloud density comparable to TLS, the overlap between subsequent video frames must be 85% or more. Based on the point cloud generated from video data, a parametric 3D model can be generated. This type of 3D model can be used in HBIM construction.

  16. Conformity assessment of the measurement accuracy in testing laboratories using a software application

    NASA Astrophysics Data System (ADS)

    Diniţă, A.

    2017-02-01

    This article presents a method for assessing the accuracy of the measurements obtained in different tests conducted in laboratories by implementing the interlaboratory comparison method (the organization, performance and evaluation of measurements of tests on the same or similar items by two or more laboratories under predetermined conditions). The program (an independent software application), developed by the author and described in this paper, analyses the measurement accuracy and performance of a testing laboratory by comparing the results obtained from different tests using the modified Youden diagram, which helps identify the different types of errors that can occur in measurement, according to ISO 13528:2015, Statistical methods for use in proficiency testing by interlaboratory comparison. A case study is presented in which the chemical composition of identical samples was determined by five different laboratories. The Youden diagram obtained from this case study was used to identify errors in the laboratory testing equipment. This paper was accepted for publication in the Proceedings after a double peer-review process but was not presented at the ROTRIB'16 Conference.
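The two-sample idea behind the Youden diagram can be sketched as follows (a minimal, hypothetical illustration of the classic plot, not the author's modified version or software; all numbers are invented). Each laboratory reports results for two similar samples; displacement along the 45-degree line of the plot suggests systematic error, while scatter perpendicular to it suggests random error:

```python
# Decompose each lab's (sample A, sample B) result pair into systematic and
# random components relative to the consensus medians, as in a Youden plot.

def youden_components(x, y, x_med, y_med):
    dx, dy = x - x_med, y - y_med
    systematic = (dx + dy) / 2 ** 0.5   # projection onto the 45-degree line
    random_part = (dx - dy) / 2 ** 0.5  # perpendicular distance from it
    return systematic, random_part

# Hypothetical chemical-composition results (%, five labs, two samples):
labs = {"L1": (0.42, 0.44), "L2": (0.45, 0.46), "L3": (0.41, 0.40),
        "L4": (0.52, 0.53), "L5": (0.43, 0.38)}
xs = sorted(v[0] for v in labs.values())
ys = sorted(v[1] for v in labs.values())
x_med, y_med = xs[2], ys[2]  # medians of the five values
for lab, (x, y) in labs.items():
    s, r = youden_components(x, y, x_med, y_med)
    print(f"{lab}: systematic={s:+.3f}, random={r:+.3f}")
```

In this toy data, L4 shows a large systematic component (it reads high on both samples, e.g. a calibration offset), while L5 shows a mainly random component.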

  17. The Eye Phone Study: reliability and accuracy of assessing Snellen visual acuity using smartphone technology

    PubMed Central

    Perera, C; Chakrabarti, R; Islam, F M A; Crowston, J

    2015-01-01

    Purpose Smartphone-based Snellen visual acuity charts have become popular; however, their accuracy has not been established. This study aimed to evaluate the equivalence of a smartphone-based visual acuity chart and a standard 6-m Snellen visual acuity (6SVA) chart. Methods First, a review of available Snellen chart applications for the iPhone was performed to determine the most accurate application based on optotype size. Subsequently, a prospective comparative study was performed by measuring conventional 6SVA and then iPhone visual acuity using the ‘Snellen' application on an Apple iPhone 4. Results Eleven applications were identified, with the accuracy of optotype size ranging from 4.4% to 39.9%. Eighty-eight patients from general medical and surgical wards in a tertiary hospital took part in the second part of the study. The mean difference in logMAR visual acuity between the two charts was 0.02 logMAR (95% limit of agreement −0.332, 0.372 logMAR). The largest mean difference in logMAR acuity was noted in the subgroup of patients with 6SVA worse than 6/18 (n=5), who had a mean difference of two Snellen visual acuity lines between the charts (0.276 logMAR). Conclusion We did not identify a Snellen visual acuity app at the time of the study that could predict a patient's standard Snellen visual acuity to within one line. There was considerable variability in the optotype accuracy of apps. Further validation is required for the assessment of acuity in patients with severe vision impairment. PMID:25931170
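As background to the units used in this abstract: a Snellen fraction converts to logMAR as the base-10 logarithm of its reciprocal decimal acuity, and one chart line corresponds to a 0.1 logMAR step. A generic sketch (not code from the study):

```python
# Convert Snellen fractions to logMAR; one chart line = 0.1 logMAR.
import math

def snellen_to_logmar(test_distance: float, letter_distance: float) -> float:
    """logMAR for a Snellen fraction, e.g. 6/18 -> log10(18/6)."""
    return math.log10(letter_distance / test_distance)

print(round(snellen_to_logmar(6, 6), 3))   # 6/6 vision is 0.0 logMAR
print(round(snellen_to_logmar(6, 18), 3))  # 6/18 is about 0.477 logMAR
# The 0.276 logMAR mean difference reported above is 0.276 / 0.1,
# i.e. roughly two to three chart lines.
```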

  18. An evaluation of the accuracy and performance of lightweight GPS collars in a suburban environment.

    PubMed

    Adams, Amy L; Dickinson, Katharine J M; Robertson, Bruce C; van Heezik, Yolanda

    2013-01-01

    The recent development of lightweight GPS collars has enabled medium-to-small sized animals to be tracked via GPS telemetry. Evaluation of the performance and accuracy of GPS collars has largely been confined to devices designed for large animals deployed in natural environments. This study aimed to assess the performance of lightweight GPS collars within a suburban environment, which may differ from natural environments in ways that are relevant to satellite signal acquisition. We assessed the effects of vegetation complexity, sky availability (percentage of clear sky not obstructed by natural or artificial features of the environment), proximity to buildings, and satellite geometry on fix success rate (FSR) and location error (LE) for lightweight GPS collars within a suburban environment. Sky availability had the largest effect on FSR, while LE was influenced by sky availability, vegetation complexity, and HDOP (Horizontal Dilution of Precision). Despite the complexity and modified nature of suburban areas, values for FSR (mean = 90.6%) and LE (mean = 30.1 m) obtained within the suburban environment are comparable to those from previous evaluations of GPS collars designed for larger animals and used within less built-up environments. Due to the fine-scale patchiness of habitat within urban environments, it is recommended that resource selection methods that are not reliant on buffer sizes be utilised for selection studies.

  19. Integrated three-dimensional digital assessment of accuracy of anterior tooth movement using clear aligners

    PubMed Central

    Zhang, Xiao-Juan; He, Li; Tian, Jie; Bai, Yu-Xing; Li, Song

    2015-01-01

    Objective To assess the accuracy of anterior tooth movement using clear aligners in integrated three-dimensional digital models. Methods Cone-beam computed tomography was performed before and after treatment with clear aligners in 32 patients. Plaster casts were laser-scanned for virtual setup and aligner fabrication. Differences between the predicted and achieved root and crown positions of the anterior teeth were compared on superimposed maxillofacial digital images and virtual models and analyzed by Student's t-test. Results The mean discrepancies in maxillary and mandibular crown positions were 0.376 ± 0.041 mm and 0.398 ± 0.037 mm, respectively. Maxillary and mandibular root positions differed by 2.062 ± 0.128 mm and 1.941 ± 0.154 mm, respectively. Conclusions The crowns, but not the roots, of anterior teeth can be moved to their designated positions using clear aligners, because these appliances produce tooth movement by tilting. PMID:26629473

  20. The SCoRE residual: a quality index to assess the accuracy of joint estimations.

    PubMed

    Ehrig, Rainald M; Heller, Markus O; Kratzenstein, Stefan; Duda, Georg N; Trepczynski, Adam; Taylor, William R

    2011-04-29

    The determination of an accurate centre of rotation (CoR) from skin markers is essential for the assessment of abnormal gait patterns in clinical gait analysis. Despite the many functional approaches to estimating CoRs, no non-invasive analytical determination of the error in the reconstructed joint location is currently available. The purpose of this study was therefore to verify the residual of the symmetrical centre of rotation estimation (SCoRE) as a reliable indirect measure of the error of the computed joint centre. To evaluate the SCoRE residual, numerical simulations were performed to evaluate CoR estimations at different ranges of joint motion. A statistical model was developed and used to determine the theoretical relationships among the SCoRE residual, the magnitude of the skin marker artefact, the corrections to the marker positions, and the error of the CoR estimations relative to the known centre of rotation. We found that the equation err = 0.5 r(s) provides a reliable relationship between the CoR error, err, and the scaled SCoRE residual, r(s), provided that any skin marker artefact is first minimised using the optimal common shape technique (OCST). Measurements on six healthy volunteers showed a reduction of the SCoRE residual from 11 to below 6 mm and therefore demonstrated the consistency of the theoretical considerations and numerical simulations with the in vivo data. This study also demonstrates the significant benefit of the OCST for reducing skin marker artefact and thus for predicting the accuracy of determining joint centre positions in functional gait analysis. For the first time, this understanding of the SCoRE residual allows a measure of error in the non-invasive assessment of joint centres. This measure now enables a rapid assessment of the accuracy of the CoR as well as an estimation of the reproducibility and repeatability of skeletal motion patterns.

  1. Observer performance using virtual pathology slides: impact of LCD color reproduction accuracy.

    PubMed

    Krupinski, Elizabeth A; Silverstein, Louis D; Hashmi, Syed F; Graham, Anna R; Weinstein, Ronald S; Roehrig, Hans

    2012-12-01

    The use of color LCDs in medical imaging is growing as more clinical specialties use digital images as a resource in diagnosis and treatment decisions. Telemedicine applications such as telepathology, teledermatology, and teleophthalmology rely heavily on color images. However, standard methods for calibrating, characterizing, and profiling color displays do not exist, resulting in inconsistent presentation. To address this, we developed a calibration, characterization, and profiling protocol for color-critical medical imaging applications. Physical characterization of displays calibrated with and without the protocol revealed high color reproduction accuracy with the protocol. The present study assessed the impact of this protocol on observer performance. A set of 250 breast biopsy virtual slide regions of interest (half malignant, half benign) were shown to six pathologists, once using the calibration protocol and once using the same display in its "native" off-the-shelf uncalibrated state. Diagnostic accuracy and time to render a decision were measured. In terms of ROC performance, Az (area under the curve) calibrated = 0.8570 and Az uncalibrated = 0.8488. No statistically significant difference (p = 0.4112) was observed. In terms of interpretation speed, mean calibrated = 4.895 s and mean uncalibrated = 6.304 s, a statistically significant difference (p = 0.0460). Early results suggest a slight advantage diagnostically for a properly calibrated and color-managed display and a significant potential advantage in terms of improved workflow. Future work should be conducted using different types of color images that may be more dependent on accurate color rendering and a wider range of LCDs with varying characteristics.

  2. Influence of LCD color reproduction accuracy on observer performance using virtual pathology slides

    NASA Astrophysics Data System (ADS)

    Krupinski, Elizabeth A.; Silverstein, Louis D.; Hashmi, Syed F.; Graham, Anna R.; Weinstein, Ronald S.; Roehrig, Hans

    2012-02-01

    The use of color LCDs in medical imaging is growing as more clinical specialties use digital images as a resource in diagnosis and treatment decisions. Telemedicine applications such as telepathology, teledermatology and teleophthalmology rely heavily on color images. However, standard methods for calibrating, characterizing and profiling color displays do not exist, resulting in inconsistent presentation. To address this, we developed a calibration, characterization and profiling protocol for color-critical medical imaging applications. Physical characterization of displays calibrated with and without the protocol revealed high color reproduction accuracy with the protocol. The present study assessed the impact of this protocol on observer performance. A set of 250 breast biopsy virtual slide regions of interest (half malignant, half benign) were shown to six pathologists, once using the calibration protocol and once using the same display in its "native" off-the-shelf uncalibrated state. Diagnostic accuracy and time to render a decision were measured. In terms of ROC performance, Az (area under the curve) calibrated = 0.8640; uncalibrated = 0.8558. No statistically significant difference (p = 0.2719) was observed. In terms of interpretation speed, mean calibrated = 4.895 sec and mean uncalibrated = 6.304 sec, a statistically significant difference (p = 0.0460). Early results suggest a slight advantage diagnostically for a properly calibrated and color-managed display and a significant potential advantage in terms of improved workflow. Future work should be conducted using different types of color images that may be more dependent on accurate color rendering and a wider range of LCDs with varying characteristics.

  3. Comparative assessment of thematic accuracy of GLC maps for specific applications using existing reference data

    NASA Astrophysics Data System (ADS)

    Tsendbazar, N. E.; de Bruin, S.; Mora, B.; Schouten, L.; Herold, M.

    2016-02-01

    Serving as inputs to various applications and models, current global land cover (GLC) maps are based on different data sources and methods. Therefore, comparing GLC maps is challenging. Statistical comparison of GLC maps is further complicated by the lack of a reference dataset that is suitable for validating multiple maps. This study utilizes the existing Globcover-2005 reference dataset to compare the thematic accuracies of three GLC maps for the year 2005 (Globcover, LC-CCI and MODIS). We translated and reinterpreted the LCCS (land cover classification system) classifier information of the reference dataset into the different map legends. The three maps were evaluated for a variety of applications, i.e., general circulation models, dynamic global vegetation models, agriculture assessments, carbon estimation and biodiversity assessments, using weighted accuracy assessment. Based on the impact of land cover confusions on the overall weighted accuracy of the GLC maps, we identified map improvement priorities. Overall accuracies were 70.8 ± 1.4%, 71.4 ± 1.3%, and 61.3 ± 1.5% for LC-CCI, MODIS, and Globcover, respectively. Weighted accuracy assessments produced increased overall accuracies (80-93%), since not all class confusion errors are important for specific applications. As a common denominator for all applications, the classes mixed trees, shrubs, grasses, and cropland were identified as improvement priorities. The results demonstrate the necessity of accounting for dissimilarities in the importance of map classification errors for different user applications. To determine the fitness for use of GLC maps, the accuracy of GLC maps should be assessed per application; there is no single-figure accuracy estimate expressing map fitness for all purposes.
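The weighted accuracy assessment described in this record can be illustrated with a small sketch (class names, counts, and weights are invented for the example; this is not the study's data). The idea is that overall accuracy is recomputed with weights expressing how much credit each class confusion deserves for a given application, so confusions that do not matter for that application stop counting as full errors:

```python
# Weighted overall accuracy from a confusion matrix.
# Rows = map labels, columns = reference labels.

def weighted_accuracy(confusion, weights):
    """weights[i][j] = agreement credit (0..1) when map says class i
    and the reference says class j; diagonal is 1."""
    total = sum(sum(row) for row in confusion)
    agree = sum(weights[i][j] * confusion[i][j]
                for i in range(len(confusion))
                for j in range(len(confusion)))
    return agree / total

# Illustrative classes: tree, shrub, crop (counts invented).
conf = [[50, 10, 5],
        [8, 40, 2],
        [4, 6, 75]]
w_strict = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]        # conventional accuracy
w_app = [[1, 0.8, 0], [0.8, 1, 0], [0, 0, 1]]       # tree/shrub mix tolerated
print(round(weighted_accuracy(conf, w_strict), 3))  # → 0.825
print(round(weighted_accuracy(conf, w_app), 3))     # → 0.897
```

This mirrors the pattern in the abstract: weighting raises the overall figure because some confusions are harmless for the application at hand.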

  4. Reproducibility and accuracy of optic nerve sheath diameter assessment using ultrasound compared to magnetic resonance imaging

    PubMed Central

    2013-01-01

    Background Quantification of the optic nerve sheath diameter (ONSD) by transbulbar sonography is a promising non-invasive technique for the detection of altered intracranial pressure. In order to establish this method as a follow-up tool in diseases with intracranial hyper- or hypotension, scan-rescan reproducibility and accuracy need to be systematically investigated. Methods The right ONSDs of 15 healthy volunteers (mean age 24.5 ± 0.8 years) were measured by both transbulbar sonography (9–3 MHz) and 3-Tesla MRI (half-Fourier acquisition single-shot turbo spin-echo sequences, HASTE) at 3 and 5 mm behind the papilla. All volunteers underwent repeated ultrasound and MRI examinations in order to assess scan-rescan reproducibility and accuracy. Moreover, inter- and intra-observer variabilities were calculated for both techniques. Results Scan-rescan reproducibility was robust for ONSD quantification by sonography and MRI at both depths (r > 0.75, p ≤ 0.001, mean differences < 2%). Comparing ultrasound- and MRI-derived ONSD values, we found acceptable agreement between both methods for measurements at a depth of 3 mm (r = 0.72, p = 0.002, mean difference < 5%). Further analyses revealed good inter- and intra-observer reliability for sonographic measurements 3 mm behind the papilla and for MRI at 3 and 5 mm (r > 0.82, p < 0.001, mean differences < 5%). Conclusions Sonographic ONSD quantification 3 mm behind the papilla can be performed with good reproducibility, measurement accuracy and observer agreement. Thus, our findings emphasize the feasibility of this technique as a non-invasive bedside tool for longitudinal ONSD measurements. PMID:24289136

  5. A laboratory assessment of the measurement accuracy of weighing type rainfall intensity gauges

    NASA Astrophysics Data System (ADS)

    Colli, M.; Chan, P. W.; Lanza, L. G.; La Barbera, P.

    2012-04-01

    In recent years the WMO Commission for Instruments and Methods of Observation (CIMO) has fostered noticeable advancements in the accuracy of precipitation measurement by providing recommendations on the standardization of equipment and exposure, instrument calibration and data correction, as a consequence of various comparative campaigns involving manufacturers and national meteorological services from the participating countries (Lanza et al., 2005; Vuerich et al., 2009). Extreme-event analysis has proven to be highly affected by the accuracy of on-site rainfall intensity (RI) measurements (see e.g. Molini et al., 2004), and the time resolution of the available RI series certainly constitutes another key factor in constructing hyetographs that are representative of real rain events. The OTT Pluvio2 weighing gauge (WG) and the GEONOR T-200 vibrating-wire precipitation gauge demonstrated very good performance in previous constant-flow-rate calibration efforts (Lanza et al., 2005). Although WGs do provide better performance than more traditional tipping-bucket rain gauges (TBRs) under continuous and constant reference intensity, dynamic effects seem to affect the accuracy of WG measurements under real-world, time-varying rainfall conditions (Vuerich et al., 2009). The most relevant of these is the response time of the acquisition system and the resulting systematic delay of the instrument in assessing the exact weight of the bin containing the cumulated precipitation. This delay assumes a relevant role when high-resolution rain intensity time series are sought from the instrument, as is the case in many hydrologic and meteo-climatic applications. This work reports a laboratory evaluation of the accuracy of Pluvio2 and T-200 rainfall intensity measurements. Tests are carried out by simulating different artificial precipitation events, namely non-stationary rainfall intensity, using a highly accurate dynamic rainfall generator. Time series measured by an Ogawa drop counter (DC) at a field test site

  6. Thematic Accuracy Assessment of the 2011 National Land Cover Database (NLCD)

    EPA Science Inventory

    Accuracy assessment is a standard protocol of National Land Cover Database (NLCD) mapping. Here we report agreement statistics between map and reference labels for NLCD 2011, which includes land cover for ca. 2001, ca. 2006, and ca. 2011. The two main objectives were assessment o...

  7. Beyond the Correlation Coefficient in Studies of Self-Assessment Accuracy: Commentary on Zell & Krizan (2014).

    PubMed

    Dunning, David; Helzer, Erik G

    2014-03-01

    Zell and Krizan (2014, this issue) provide a comprehensive yet incomplete portrait of the factors influencing accurate self-assessment. This is no fault of their own. Much work on self-accuracy focuses on the correlation coefficient as the measure of accuracy, but it is not the only way self-accuracy can be measured. As such, its use can provide an incomplete and potentially misleading story. We urge researchers to explore measures of bias as well as correlation, because there are indirect hints that each responds to a different psychological dynamic. We further entreat researchers to develop other creative measures of accuracy and not to forget that self-accuracy may come not only from personal knowledge but also from insight about human nature more generally.
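The commentary's distinction between correlation and bias can be made concrete with toy numbers: if every rater overestimates by the same amount, correlation is perfect while bias is large.

```python
# Toy illustration (invented numbers): high correlation between self-ratings
# and actual scores can coexist with large systematic bias.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

actual = [50, 60, 70, 80, 90]
self_rated = [a + 15 for a in actual]  # everyone overestimates by 15 points

r = pearson_r(actual, self_rated)                                     # 1.0
bias = sum(s - a for s, a in zip(self_rated, actual)) / len(actual)   # +15.0
```

A study reporting only r would call these self-assessments perfectly accurate; the bias measure tells the other half of the story.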

  8. Accuracy assessment of the integration of GNSS and a MEMS IMU in a terrestrial platform.

    PubMed

    Madeira, Sergio; Yan, Wenlin; Bastos, Luísa; Gonçalves, José A

    2014-11-04

    MEMS Inertial Measurement Units are available at low cost and can replace expensive units in mobile mapping platforms that require direct georeferencing. This is achieved through integration with GNSS measurements, which yields a continuous positioning solution and orientation angles. This paper presents an assessment of the accuracy of a system that integrates GNSS and a MEMS IMU in a terrestrial platform. We describe the methodology used and the tests performed, in which the accuracy of the position and orientation parameters was assessed using an independent photogrammetric technique employing cameras that are part of the mobile mapping system developed by the authors. Results for the accuracy of attitude angles and coordinates show that accuracies better than a decimeter in position, and under a degree in the angles, can be achieved even though the terrestrial platform operates in less than favorable environments.

  9. Thematic accuracy assessment of the 2011 National Land Cover Database (NLCD)

    USGS Publications Warehouse

    Wickham, James; Stehman, Stephen V.; Gass, Leila; Dewitz, Jon; Sorenson, Daniel G.; Granneman, Brian J.; Poss, Richard V.; Baer, Lori Anne

    2017-01-01

    Accuracy assessment is a standard protocol of National Land Cover Database (NLCD) mapping. Here we report agreement statistics between map and reference labels for NLCD 2011, which includes land cover for ca. 2001, ca. 2006, and ca. 2011. The two main objectives were assessment of agreement between map and reference labels for the three, single-date NLCD land cover products at Level II and Level I of the classification hierarchy, and agreement for 17 land cover change reporting themes based on Level I classes (e.g., forest loss; forest gain; forest, no change) for three change periods (2001–2006, 2006–2011, and 2001–2011). The single-date overall accuracies were 82%, 83%, and 83% at Level II and 88%, 89%, and 89% at Level I for 2011, 2006, and 2001, respectively. Many class-specific user's accuracies met or exceeded a previously established nominal accuracy benchmark of 85%. Overall accuracies for 2006 and 2001 land cover components of NLCD 2011 were approximately 4% higher (at Level II and Level I) than the overall accuracies for the same components of NLCD 2006. The high Level I overall, user's, and producer's accuracies for the single-date eras in NLCD 2011 did not translate into high class-specific user's and producer's accuracies for many of the 17 change reporting themes. User's accuracies were high for the no change reporting themes, commonly exceeding 85%, but were typically much lower for the reporting themes that represented change. Only forest loss, forest gain, and urban gain had user's accuracies that exceeded 70%. Lower user's accuracies for the other change reporting themes may be attributable to the difficulty in determining the context of grass (e.g., open urban, grassland, agriculture) and between the components of the forest-shrubland-grassland gradient at either the mapping phase, reference label assignment phase, or both. NLCD 2011 user's accuracies for forest loss, forest gain, and urban gain compare favorably with results from other
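The overall, user's, and producer's accuracies reported in this abstract are standard functions of an error (confusion) matrix of map labels against reference labels. A minimal sketch, using illustrative 3-class counts rather than NLCD data:

```python
# Agreement statistics from an error matrix: rows = map labels,
# columns = reference labels. Counts below are invented for illustration.

def accuracy_stats(matrix):
    n = sum(sum(row) for row in matrix)
    k = len(matrix)
    overall = sum(matrix[i][i] for i in range(k)) / n
    # user's accuracy: correct fraction of each map class (row-wise)
    users = [matrix[i][i] / sum(matrix[i]) for i in range(k)]
    # producer's accuracy: correct fraction of each reference class (column-wise)
    producers = [matrix[i][i] / sum(row[i] for row in matrix) for i in range(k)]
    return overall, users, producers

matrix = [[80, 10, 10],
          [5, 90, 5],
          [10, 5, 85]]
overall, users, producers = accuracy_stats(matrix)  # overall = 0.85
```

In practice these estimates are weighted by the sampling design rather than computed from raw counts, but the matrix arithmetic is the same.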

  10. Relationships between Individual Differences and Accuracy in Rating Air Force Jet Engine Mechanic Performance

    DTIC Science & Technology

    1989-08-01

    may be enhanced to some extent (Thornton & Zorich, 1980). However, training costs may still be reduced by selecting for training those candidates with...ratee behaviors correctly noted by the rater (Thornton & Zorich, 1980). Several studies have investigated evaluation accuracy. For example, Borman (1977...correlations: Uses in assessing rater reliability. Psychological Bulletin, 86, 420-428. Thornton, G., & Zorich, S. (1980). Training to improve

  11. Assessment of the Accuracy of Pharmacy Students’ Compounded Solutions Using Vapor Pressure Osmometry

    PubMed Central

    McPherson, Timothy B.

    2013-01-01

    Objective. To assess the effectiveness of using a vapor pressure osmometer to measure the accuracy of pharmacy students’ compounding skills. Design. Students calculated the theoretical osmotic pressure (mmol/kg) of a solution as a pre-laboratory exercise, compared their calculations with actual values, and then attempted to determine the cause of any errors found. Assessment. After the introduction of the vapor pressure osmometer, the first-time pass rate for solution compounding has varied from 85% to 100%. Approximately 85% of students surveyed reported that the instrument was valuable as a teaching tool because it objectively assessed their work and provided immediate formative assessment. Conclusions. This simple technique of measuring compounding accuracy using a vapor pressure osmometer allowed students to see the importance of quality control and assessment in practice for both pharmacists and technicians. PMID:23610476
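The pre-laboratory calculation described here — a theoretical value in mmol of particles per kg of solvent — can be sketched for a simple ionic solution. Normal saline is used as an assumed example; it is not taken from the article:

```python
# Theoretical osmolality (mmol of osmotically active particles per kg of
# water) for an ideal, fully dissociating solute. Illustrative values only.

def theoretical_osmolality(grams_per_kg_water, molar_mass, particles_per_formula):
    """mmol of particles per kg of water, assuming complete dissociation."""
    mmol_solute = grams_per_kg_water / molar_mass * 1000.0
    return mmol_solute * particles_per_formula

# 0.9% w/v NaCl approximated as 9 g per kg of water; NaCl -> Na+ + Cl- (2 particles)
osm = theoretical_osmolality(9.0, 58.44, 2)  # ~308 mmol/kg
```

Comparing such a calculated value against the osmometer reading is what lets students localize a compounding or calculation error.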

  12. Assessment of the accuracy of pharmacy students' compounded solutions using vapor pressure osmometry.

    PubMed

    Kolling, William M; McPherson, Timothy B

    2013-04-12

    OBJECTIVE. To assess the effectiveness of using a vapor pressure osmometer to measure the accuracy of pharmacy students' compounding skills. DESIGN. Students calculated the theoretical osmotic pressure (mmol/kg) of a solution as a pre-laboratory exercise, compared their calculations with actual values, and then attempted to determine the cause of any errors found. ASSESSMENT. After the introduction of the vapor pressure osmometer, the first-time pass rate for solution compounding has varied from 85% to 100%. Approximately 85% of students surveyed reported that the instrument was valuable as a teaching tool because it objectively assessed their work and provided immediate formative assessment. CONCLUSIONS. This simple technique of measuring compounding accuracy using a vapor pressure osmometer allowed students to see the importance of quality control and assessment in practice for both pharmacists and technicians.

  13. A priori evaluation of two-stage cluster sampling for accuracy assessment of large-area land-cover maps

    USGS Publications Warehouse

    Wickham, J.D.; Stehman, S.V.; Smith, J.H.; Wade, T.G.; Yang, L.

    2004-01-01

    Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, within-cluster correlation may reduce the precision of the accuracy estimates. The detailed population information needed to quantify a priori the effect of within-cluster correlation on precision is typically unavailable. Consequently, a convenient, practical approach to evaluate the likely performance of a two-stage cluster sample is needed. We describe such an a priori evaluation protocol focusing on the spatial distribution of the sample by land-cover class across different cluster sizes and on the costs of different sampling options, including options not imposing clustering. This protocol also assesses the two-stage design's adequacy for estimating the precision of accuracy estimates for rare land-cover classes. We illustrate the approach using two large-area, regional accuracy assessments from the National Land-Cover Data (NLCD), and describe how the a priori evaluation was used as a decision-making tool when implementing the NLCD design.

  14. Quality Assessment of Comparative Diagnostic Accuracy Studies: Our Experience Using a Modified Version of the QUADAS-2 Tool

    ERIC Educational Resources Information Center

    Wade, Ros; Corbett, Mark; Eastwood, Alison

    2013-01-01

    Assessing the quality of included studies is a vital step in undertaking a systematic review. The recently revised Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool (QUADAS-2), which is the only validated quality assessment tool for diagnostic accuracy studies, does not include specific criteria for assessing comparative studies. As…

  15. Validation of performance assessment models

    SciTech Connect

    Bergeron, M.P.; Kincaid, C.T.

    1991-11-01

    The purpose of model validation in a low-level waste site performance assessment is to increase confidence in predictions of the migration and fate of future releases from the wastes. Unlike the process of computer code verification, model validation is a site-specific process that requires site-specific data. This paper provides an overview of the topic of model validation and describes the general approaches, strategies, and limitations of model validation being considered by various researchers concerned with the subject.

  16. Preliminary melter performance assessment report

    SciTech Connect

    Elliott, M.L.; Eyler, L.L.; Mahoney, L.A.; Cooper, M.F.; Whitney, L.D.; Shafer, P.J.

    1994-08-01

    The Melter Performance Assessment activity, a component of the Pacific Northwest Laboratory`s (PNL) Vitrification Technology Development (PVTD) effort, was designed to determine the impact of noble metals on the operational life of the reference Hanford Waste Vitrification Plant (HWVP) melter. The melter performance assessment consisted of several activities, including a literature review of all work done with noble metals in glass, gradient furnace testing to study the behavior of noble metals during the melting process, research-scale and engineering-scale melter testing to evaluate effects of noble metals on melter operation, and computer modeling that used the experimental data to predict effects of noble metals on the full-scale melter. Feed used in these tests simulated neutralized current acid waste (NCAW) feed. This report summarizes the results of the melter performance assessment and predicts the lifetime of the HWVP melter. It should be noted that this work was conducted before the recent Tri-Party Agreement changes, so the reference melter referred to here is the Defense Waste Processing Facility (DWPF) melter design.

  17. Keystroke dynamics and timing: accuracy, precision and difference between hands in pianist's performance.

    PubMed

    Minetti, Alberto E; Ardigò, Luca P; McKee, Tom

    2007-01-01

    A commercially available acoustic grand piano, originally provided with keystroke speed sensors, is proposed as a standard instrument to quantitatively assess the technical side of a pianist's performance, after the mechanical characteristics of the keyboard have been measured. We found a positional dependence of the relationship between the applied force and the resulting downstroke speed (i.e. treble keys descend fastest) due to the different hammer and hammer-shaft masses to be accelerated. When this effect was removed by custom software, the ability of 14 pianists was analysed in terms of variability in stroke intervals and keystroke speeds. C-major scales played by separate hands at different imposed tempos and at 5 subjectively chosen graded force levels were analysed to gain insight into the achieved neuromuscular control. Accuracy and precision of time intervals and descent velocity of keystrokes were obtained by processing the generated MIDI files. The results quantitatively show: the difference between hands, the trade-off between force range and tempo and between time-interval precision and tempo, the lower precision of descent speed associated with 'soft' playing, etc. These results reflect well-established physiological and motor control characteristics of our movement system. Apart from revealing fundamental aspects of pianism, the proposed method could be used as a standard tool for ergonomic (e.g. the mechanical work and power of playing), didactic and rehabilitation monitoring of pianists.

  18. Assessing map accuracy in a remotely sensed, ecoregion-scale cover map

    USGS Publications Warehouse

    Edwards, T.C.; Moisen, G.G.; Cutler, D.R.

    1998-01-01

    Landscape- and ecoregion-based conservation efforts increasingly use a spatial component to organize data for analysis and interpretation. A challenge particular to remotely sensed cover maps generated from these efforts is how best to assess the accuracy of the cover maps, especially when they can exceed 1000 s/km2 in size. Here we develop and describe a methodological approach for assessing the accuracy of large-area cover maps, using as a test case the 21.9 million ha cover map developed for Utah Gap Analysis. As part of our design process, we first reviewed the effect of intracluster correlation and a simple cost function on the relative efficiency of cluster sample designs to simple random designs. Our design ultimately combined clustered and subsampled field data stratified by ecological modeling unit and accessibility (hereafter a mixed design). We next outline estimation formulas for simple map accuracy measures under our mixed design and report results for eight major cover types and the three ecoregions mapped as part of the Utah Gap Analysis. Overall accuracy of the map was 83.2% (SE=1.4). Within ecoregions, accuracy ranged from 78.9% to 85.0%. Accuracy by cover type varied, ranging from a low of 50.4% for barren to a high of 90.6% for man modified. In addition, we examined gains in efficiency of our mixed design compared with a simple random sample approach. In regard to precision, our mixed design was more precise than a simple random design, given fixed sample costs. We close with a discussion of the logistical constraints facing attempts to assess the accuracy of large-area, remotely sensed cover maps.

  19. Performance Assessment Institute-NV

    SciTech Connect

    Lombardo, Joesph

    2012-12-31

    The National Supercomputing Center for Energy and the Environment’s intention is to purchase a multi-purpose computer cluster in support of the Performance Assessment Institute (PA Institute). The PA Institute will serve as a research consortium located in Las Vegas Nevada with membership that includes: national laboratories, universities, industry partners, and domestic and international governments. This center will provide a one-of-a-kind centralized facility for the accumulation of information for use by Institutions of Higher Learning, the U.S. Government, Regulatory Agencies, and approved users. This initiative will enhance and extend High Performance Computing (HPC) resources in Nevada to support critical national and international needs in "scientific confirmation". The PA Institute will be promoted as the leading Modeling, Learning and Research Center worldwide. The program proposes to utilize the existing supercomputing capabilities and alliances of the University of Nevada Las Vegas as a base, and to extend these resources and capabilities through a collaborative relationship with its membership. The PA Institute will provide an academic setting for interactive sharing, learning, mentoring and monitoring of multi-disciplinary performance assessment and performance confirmation information. The role of the PA Institute is to facilitate research, knowledge-increase, and knowledge-sharing among users.

  20. Accuracy of Specific BIVA for the Assessment of Body Composition in the United States Population

    PubMed Central

    Buffa, Roberto; Saragat, Bruno; Cabras, Stefano; Rinaldi, Andrea C.; Marini, Elisabetta

    2013-01-01

    Background Bioelectrical impedance vector analysis (BIVA) is a technique for the assessment of hydration and nutritional status, used in clinical practice. Specific BIVA is an analytical variant, recently proposed for the Italian elderly population, that adjusts bioelectrical values for body geometry. Objective Evaluating the accuracy of specific BIVA in the adult U.S. population, compared to the ‘classic’ BIVA procedure, using DXA as the reference technique, in order to obtain an interpretative model of body composition. Design A cross-sectional sample of 1590 adult individuals (836 men and 754 women, 21–49 years old) derived from the NHANES 2003–2004 was considered. Classic and specific BIVA were applied. The sensitivity and specificity in recognizing individuals below the 5th and above the 95th percentiles of percent fat (FMDXA%) and extracellular/intracellular water (ECW/ICW) ratio were evaluated by receiver operating characteristic (ROC) curves. Classic and specific BIVA results were compared by a probit multiple-regression. Results Specific BIVA was significantly more accurate than classic BIVA in evaluating FMDXA% (ROC areas: 0.84–0.92 and 0.49–0.61 respectively; p = 0.002). The evaluation of ECW/ICW was accurate (ROC areas between 0.83 and 0.96) and similarly performed by the two procedures (p = 0.829). The accuracy of specific BIVA was similar in the two sexes (p = 0.144) and in FMDXA% and ECW/ICW (p = 0.869). Conclusions Specific BIVA proved to be an accurate technique. The tolerance ellipses of specific BIVA can be used for evaluating FM% and ECW/ICW in the U.S. adult population. PMID:23484033
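The ROC-based evaluation used in this record reduces to two computations: sensitivity/specificity at a chosen cutoff, and the ROC area, which equals the Mann-Whitney probability that a randomly chosen positive case scores above a randomly chosen negative case. A sketch with toy scores (not BIVA or DXA data):

```python
# Sensitivity/specificity at one cutoff, and ROC area via the
# Mann-Whitney statistic. Labels: 1 = condition present, 0 = absent.

def sens_spec(scores, labels, cutoff):
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= cutoff)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < cutoff)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < cutoff)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

def roc_area(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An ROC area of 0.5 means the score cannot distinguish the groups at all (as with classic BIVA's 0.49-0.61 range for FMDXA%), while values near 1 indicate near-perfect separation.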

  1. Gender Differences in Structured Risk Assessment: Comparing the Accuracy of Five Instruments

    ERIC Educational Resources Information Center

    Coid, Jeremy; Yang, Min; Ullrich, Simone; Zhang, Tianqiang; Sizmur, Steve; Roberts, Colin; Farrington, David P.; Rogers, Robert D.

    2009-01-01

    Structured risk assessment should guide clinical risk management, but it is uncertain which instrument has the highest predictive accuracy among men and women. In the present study, the authors compared the Psychopathy Checklist-Revised (PCL-R; R. D. Hare, 1991, 2003); the Historical, Clinical, Risk Management-20 (HCR-20; C. D. Webster, K. S.…

  2. 12 CFR 620.3 - Accuracy of reports and assessment of internal control over financial reporting.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... in accordance with all applicable statutory or regulatory requirements, and (3) The information is true, accurate, and complete to the best of signatories' knowledge and belief. (d) Management... CREDIT SYSTEM DISCLOSURE TO SHAREHOLDERS General § 620.3 Accuracy of reports and assessment of...

  3. Assessing the Accuracy of MODIS-NDVI Derived Land-Cover Across the Great Lakes Basin

    EPA Science Inventory

    This research describes the accuracy assessment process for a land-cover dataset developed for the Great Lakes Basin (GLB). This land-cover dataset was developed from the 2007 MODIS Normalized Difference Vegetation Index (NDVI) 16-day composite (MOD13Q) 250 m time-series data. Tr...

  4. The Word Writing CAFE: Assessing Student Writing for Complexity, Accuracy, and Fluency

    ERIC Educational Resources Information Center

    Leal, Dorothy J.

    2005-01-01

    The Word Writing CAFE is a new assessment tool designed for teachers to evaluate objectively students' word-writing ability for fluency, accuracy, and complexity. It is designed to be given to the whole class at one time. This article describes the development of the CAFE and provides directions for administering and scoring it. The author also…

  5. A PIXEL COMPOSITION-BASED REFERENCE DATA SET FOR THEMATIC ACCURACY ASSESSMENT

    EPA Science Inventory

    Developing reference data sets for accuracy assessment of land-cover classifications derived from coarse spatial resolution sensors such as MODIS can be difficult due to the large resolution differences between the image data and available reference data sources. Ideally, the spa...

  6. Assessing the Accuracy of Classwide Direct Observation Methods: Two Analyses Using Simulated and Naturalistic Data

    ERIC Educational Resources Information Center

    Dart, Evan H.; Radley, Keith C.; Briesch, Amy M.; Furlow, Christopher M.; Cavell, Hannah J.; Briesch, Amy M.

    2016-01-01

    Two studies investigated the accuracy of eight different interval-based group observation methods that are commonly used to assess the effects of classwide interventions. In Study 1, a Microsoft Visual Basic program was created to simulate a large set of observational data. Binary data were randomly generated at the student level to represent…

  7. Modifications to the accuracy assessment analysis routine MLTCRP to produce an output file

    NASA Technical Reports Server (NTRS)

    Carnes, J. G.

    1978-01-01

    Modifications are described that were made to the analysis program MLTCRP in the accuracy assessment software system to produce a disk output file. The output files produced by this modified program are used to aggregate data for regions greater than a single segment.

  8. Assessing Observer Accuracy in Continuous Recording of Rate and Duration: Three Algorithms Compared

    ERIC Educational Resources Information Center

    Mudford, Oliver C.; Martin, Neil T.; Hui, Jasmine K. Y.; Taylor, Sarah Ann

    2009-01-01

    The three algorithms most frequently selected by behavior-analytic researchers to compute interobserver agreement with continuous recording were used to assess the accuracy of data recorded from video samples on handheld computers by 12 observers. Rate and duration of responding were recorded for three samples each. Data files were compared with…

  9. Accuracy of Revised and Traditional Parallel Analyses for Assessing Dimensionality with Binary Data

    ERIC Educational Resources Information Center

    Green, Samuel B.; Redell, Nickalus; Thompson, Marilyn S.; Levy, Roy

    2016-01-01

    Parallel analysis (PA) is a useful empirical tool for assessing the number of factors in exploratory factor analysis. On conceptual and empirical grounds, we argue for a revision to PA that makes it more consistent with hypothesis testing. Using Monte Carlo methods, we evaluated the relative accuracy of the revised PA (R-PA) and traditional PA…
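Traditional PA, the baseline against which the revised procedure is compared, retains factors whose observed correlation-matrix eigenvalues exceed the corresponding mean eigenvalues from random data of the same size. A minimal sketch of the traditional version (the R-PA modifications to the comparison baseline are not reproduced here):

```python
# Traditional parallel analysis: compare observed eigenvalues against the
# mean eigenvalues of simulated random normal data of the same n x p shape.
import numpy as np

def parallel_analysis(data, n_sims=200, seed=0):
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.zeros(p)
    for _ in range(n_sims):
        sim = rng.standard_normal((n, p))
        rand += np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    rand /= n_sims
    return int(np.sum(obs > rand))  # number of factors retained
```

For data generated from a single strong common factor, this procedure retains exactly one factor.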

  10. In the Right Ballpark? Assessing the Accuracy of Net Price Calculators

    ERIC Educational Resources Information Center

    Anthony, Aaron M.; Page, Lindsay C.; Seldin, Abigail

    2016-01-01

    Large differences often exist between a college's sticker price and net price after accounting for financial aid. Net price calculators (NPCs) were designed to help students more accurately estimate their actual costs to attend a given college. This study assesses the accuracy of information provided by net price calculators. Specifically, we…

  11. Assessment of Integrated Nozzle Performance

    NASA Technical Reports Server (NTRS)

    Lambert, H. H.; Mizukami, M.

    1999-01-01

    This presentation highlights the activities that researchers at the NASA Lewis Research Center (LeRC) have been and will be involved in to assess integrated nozzle performance. Three different test activities are discussed. First, the results of the Propulsion Airframe Integration for High Speed Research 1 (PAIHSR1) study are presented. The PAIHSR1 experiment was conducted in the LeRC 9 ft x 15 ft wind tunnel from December 1991 to January 1992. Second, an overview of the proposed Mixer/ejector Inlet Distortion Study (MIDIS-E) is presented. The objective of MIDIS-E is to assess the effects of applying discrete disturbances to the ejector inlet flow on the acoustic and aero-performance of a mixer/ejector nozzle. Finally, an overview of the High-Lift Engine Aero-acoustic Technology (HEAT) test is presented. The HEAT test is a cooperative effort between the propulsion system and high-lift device research communities to assess wing/nozzle integration effects. The experiment is scheduled for FY94 in the NASA Ames Research Center (ARC) 40 ft x 80 ft Low Speed Wind Tunnel (LSWT).

  12. Parallel Reaction Monitoring: A Targeted Experiment Performed Using High Resolution and High Mass Accuracy Mass Spectrometry

    PubMed Central

    Rauniyar, Navin

    2015-01-01

    The parallel reaction monitoring (PRM) assay has emerged as an alternative method of targeted quantification. The PRM assay is performed in a high resolution and high mass accuracy mode on a mass spectrometer. This review presents the features that make PRM a highly specific and selective method for targeted quantification using quadrupole-Orbitrap hybrid instruments. In addition, this review discusses the label-based and label-free methods of quantification that can be performed with the targeted approach. PMID:26633379

  13. Salt site performance assessment activities

    SciTech Connect

    Kircher, J.F.; Gupta, S.K.

    1983-01-01

    During this year the tools (codes) for performance assessments of potential salt sites were tentatively selected and documented; the emphasis has shifted from code development to applications. During this period prior to detailed characterization of a salt site, the focus is on bounding calculations and sensitivity analyses with the data available. The development and application of improved methods for sensitivity and uncertainty analysis is a focus of the coming year's activities and the subject of a following paper in these proceedings. Although the assessments to date are preliminary and based on admittedly scant data, the results indicate that suitable salt sites can be identified and repository subsystems designed which will meet the established criteria for protecting the health and safety of the public. 36 references, 5 figures, 2 tables.

  14. Accuracy Assessment and Correction of Vaisala RS92 Radiosonde Water Vapor Measurements

    NASA Technical Reports Server (NTRS)

    Whiteman, David N.; Miloshevich, Larry M.; Vomel, Holger; Leblanc, Thierry

    2008-01-01

    Relative humidity (RH) measurements from Vaisala RS92 radiosondes are widely used in both research and operational applications, although the measurement accuracy is not well characterized as a function of its known dependences on height, RH, and time of day (or solar altitude angle). This study characterizes RS92 mean bias error as a function of its dependences by comparing simultaneous measurements from RS92 radiosondes and from three reference instruments of known accuracy. The cryogenic frostpoint hygrometer (CFH) gives the RS92 accuracy above the 700 mb level; the ARM microwave radiometer gives the RS92 accuracy in the lower troposphere; and the ARM SurTHref system gives the RS92 accuracy at the surface using 6 RH probes with NIST-traceable calibrations. These RS92 assessments are combined using the principle of Consensus Referencing to yield a detailed estimate of RS92 accuracy from the surface to the lowermost stratosphere. An empirical bias correction is derived to remove the mean bias error, yielding corrected RS92 measurements whose mean accuracy is estimated to be +/-3% of the measured RH value for nighttime soundings and +/-4% for daytime soundings, plus an RH offset uncertainty of +/-0.5%RH that is significant for dry conditions. The accuracy of individual RS92 soundings is further characterized by the 1-sigma "production variability," estimated to be +/-1.5% of the measured RH value. The daytime bias correction should not be applied to cloudy daytime soundings, because clouds affect the solar radiation error in a complicated and uncharacterized way.

  15. Self-Confidence and Performance Goal Orientation Interactively Predict Performance in a Reasoning Test with Accuracy Feedback

    ERIC Educational Resources Information Center

    Beckmann, Nadin; Beckmann, Jens F.; Elliott, Julian G.

    2009-01-01

    This study takes an individual-differences perspective on performance feedback effects in psychometric testing. A total of 105 students in a mainstream secondary school in North East England undertook a cognitive ability test on two occasions. In one condition, students received item-specific accuracy feedback while in the other (standard…

  16. Accuracy assessment of airborne photogrammetrically derived high-resolution digital elevation models in a high mountain environment

    NASA Astrophysics Data System (ADS)

    Müller, Johann; Gärtner-Roer, Isabelle; Thee, Patrick; Ginzler, Christian

    2014-12-01

    High-resolution digital elevation models (DEMs) generated by airborne remote sensing are frequently used to analyze landform structures (monotemporal) and geomorphological processes (multitemporal) in remote areas or areas of extreme terrain. In order to assess and quantify such structures and processes it is necessary to know the absolute accuracy of the available DEMs. This study assesses the absolute vertical accuracy of DEMs generated by the High Resolution Stereo Camera-Airborne (HRSC-A), the Leica Airborne Digital Sensors 40/80 (ADS40 and ADS80) and the analogue camera system RC30. The study area is located in the Turtmann valley, Valais, Switzerland, a glacially and periglacially formed hanging valley stretching from 2400 m to 3300 m a.s.l. The photogrammetrically derived DEMs are evaluated against geodetic field measurements and an airborne laser scan (ALS). Traditional and robust global and local accuracy measures are used to describe the vertical quality of the DEMs, which show a non-Gaussian distribution of errors. The results show that all four sensor systems produce DEMs with similar accuracy despite their different setups and generations. The ADS40 and ADS80 (both with a ground sampling distance of 0.50 m) generate the most accurate DEMs in complex high mountain areas, with an RMSE of 0.8 m and an NMAD of 0.6 m. They also show the highest accuracy relative to flying height (0.14‰). The pushbroom scanning system HRSC-A produces an RMSE of 1.03 m and an NMAD of 0.83 m (0.21‰ of the flying height and 10 times the ground sampling distance). The analogue camera system RC30 produces DEMs with a vertical accuracy of 1.30 m RMSE and 0.83 m NMAD (0.17‰ of the flying height and two times the ground sampling distance). It is also shown that the performance of the DEMs strongly depends on the inclination of the terrain. The RMSE of areas with an inclination <40° is better than 1 m. In more inclined areas the error and outlier occurrence
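The two vertical-accuracy measures reported in this record are directly computable from a sample of elevation errors: RMSE, which is sensitive to outliers, and NMAD, a robust alternative that scales the median absolute deviation to be comparable to a Gaussian standard deviation. A short sketch with invented error values:

```python
# RMSE and NMAD of elevation errors (DEM minus reference heights, in metres).
import statistics

def rmse(errors):
    return (sum(e * e for e in errors) / len(errors)) ** 0.5

def nmad(errors):
    med = statistics.median(errors)
    # 1.4826 rescales the MAD to match sigma for normally distributed errors
    return 1.4826 * statistics.median(abs(e - med) for e in errors)

errors = [0.1, -0.2, 0.15, -0.1, 5.0]  # one gross outlier
```

For this toy sample the single outlier inflates the RMSE to over 2 m while the NMAD stays below 0.3 m, which is why studies of non-Gaussian DEM errors report both.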

  17. Increasing accuracy in the assessment of motion sickness: A construct methodology

    NASA Technical Reports Server (NTRS)

    Stout, Cynthia S.; Cowings, Patricia S.

    1993-01-01

    The purpose is to introduce a new methodology that should improve the accuracy of the assessment of motion sickness. This construct methodology utilizes both subjective reports of motion sickness and objective measures of physiological correlates to assess motion sickness. Current techniques and methods used in the framework of a construct methodology are inadequate. Current assessment techniques for diagnosing motion sickness and space motion sickness are reviewed, and attention is called to the problems with the current methods. Further, principles of psychophysiology that when applied will probably resolve some of these problems are described in detail.

  18. A diagnostic tool for determining the quality of accuracy validation. Assessing the method for determination of nitrate in drinking water.

    PubMed

    Escuder-Gilabert, L; Bonet-Domingo, E; Medina-Hernández, M J; Sagrado, S

    2007-01-01

    Realistic internal validation of a method implies performing validation experiments under intermediate-precision conditions. The validation results can be organized in an X (Nr × Ns) (replicates × runs) data matrix, analysis of which enables assessment of the accuracy of the method. By means of Monte Carlo simulation, the uncertainty in the estimates of bias and precision can be assessed. A bivariate plot is presented for assessing whether the uncertainty intervals for the bias (E ± U(E)) and the intermediate precision (RSDi ± U(RSDi)) fall within prefixed limits (requirements for the method). As a case study, a method for determining the concentration of nitrate in drinking water at the official level set by the 98/83/EC Directive is assessed by means of the proposed plot.
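
    The Monte Carlo idea can be illustrated with a toy simulation. The function names, the two-level (between-run plus within-run) noise model, and the use of the total relative standard deviation as a stand-in for an ANOVA-based intermediate precision are all assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def bias_and_rsdi(X, true_value):
    """Bias (%) and a simplified intermediate-precision RSD (%) from an
    Nr x Ns (replicates x runs) validation matrix."""
    mean = X.mean()
    bias = 100.0 * (mean - true_value) / true_value
    # simplification: total variation across replicates and runs
    rsdi = 100.0 * X.std(ddof=1) / mean
    return bias, rsdi

def mc_uncertainty(nr, ns, true_value, sd_run, sd_rep, n_sim=2000):
    """Monte Carlo 95% uncertainty intervals for bias and RSDi, simulating
    run-to-run and replicate noise around a known true concentration."""
    biases, rsdis = [], []
    for _ in range(n_sim):
        run_effect = rng.normal(0.0, sd_run, size=(1, ns))
        X = true_value + run_effect + rng.normal(0.0, sd_rep, size=(nr, ns))
        b, r = bias_and_rsdi(X, true_value)
        biases.append(b)
        rsdis.append(r)
    return np.percentile(biases, [2.5, 97.5]), np.percentile(rsdis, [2.5, 97.5])
```

Plotting the bias interval against the RSDi interval, with the prefixed limits as a rectangle, reproduces the bivariate diagnostic described above.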

  19. Assessment of the genomic prediction accuracy for feed efficiency traits in meat-type chickens

    PubMed Central

    Wang, Jie; Ma, Jie; Shu, Dingming; Lund, Mogens Sandø; Su, Guosheng; Qu, Hao

    2017-01-01

    Feed represents the major cost of chicken production. Selection for improving feed utilization is a feasible way to reduce feed cost and greenhouse gas emissions. The objectives of this study were to investigate the efficiency of genomic prediction for feed conversion ratio (FCR), residual feed intake (RFI), average daily gain (ADG) and average daily feed intake (ADFI) and to assess the impact of selection for feed efficiency traits FCR and RFI on eviscerating percentage (EP), breast muscle percentage (BMP) and leg muscle percentage (LMP) in meat-type chickens. Genomic prediction was assessed using a 4-fold cross-validation for two validation scenarios. The first scenario was a random family sampling validation (CVF), and the second scenario was a random individual sampling validation (CVR). Variance components were estimated based on the genomic relationship built with single nucleotide polymorphism markers. Genomic estimated breeding values (GEBV) were predicted using a genomic best linear unbiased prediction model. The accuracies of GEBV were evaluated in two ways: the correlation between GEBV and corrected phenotypic value divided by the square root of heritability, i.e., the correlation-based accuracy, and model-based theoretical accuracy. Breeding values were also predicted using a conventional pedigree-based best linear unbiased prediction model in order to compare accuracies of genomic and conventional predictions. The heritability estimates of FCR and RFI were 0.29 and 0.50, respectively. The heritability estimates of ADG, ADFI, EP, BMP and LMP ranged from 0.34 to 0.53. In the CVF scenario, the correlation-based accuracy and the theoretical accuracy of genomic prediction for FCR were slightly higher than those for RFI. The correlation-based accuracies for FCR, RFI, ADG and ADFI were 0.360, 0.284, 0.574 and 0.520, respectively, and the model-based theoretical accuracies were 0.420, 0.414, 0.401 and 0.382, respectively. 
In the CVR scenario, the correlation
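
    The correlation-based accuracy described above has a direct expression; `gebv`, `corrected_phenotype` and `h2` are hypothetical inputs standing in for the study's validation data:

```python
import numpy as np

def correlation_based_accuracy(gebv, corrected_phenotype, h2):
    """Accuracy of genomic prediction in a validation set: correlation
    between GEBV and corrected phenotypes, divided by sqrt(heritability)."""
    r = np.corrcoef(gebv, corrected_phenotype)[0, 1]
    return r / np.sqrt(h2)
```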

  20. A SVD-based method to assess the uniqueness and accuracy of SPECT geometrical calibration.

    PubMed

    Ma, Tianyu; Yao, Rutao; Shao, Yiping; Zhou, Rong

    2009-12-01

    Geometrical calibration is critical to obtaining high-resolution, artifact-free reconstructed images for SPECT and CT systems. Most published calibration methods use an analytical approach to determine the uniqueness condition for a specific calibration problem, and the calibration accuracy is often evaluated through empirical studies. In this work, we present a general method to assess both the uniqueness and the quantitative accuracy of a calibration. The method uses a singular value decomposition (SVD) based approach to analyze the Jacobian matrix from a least-squares cost function for the calibration. With this method, the uniqueness of the calibration can be identified by assessing the nonsingularity of the Jacobian matrix, and the estimation accuracy of the calibration parameters can be quantified by analyzing the SVD components. A direct application of this method is that the efficacy of a calibration configuration can be quantitatively evaluated by choosing a figure of merit, e.g., the minimum number of projection samplings required to achieve the desired calibration accuracy. The proposed method was validated with a slit-slat SPECT system through numerical simulation studies and experimental measurements with point sources and an ultra-micro hot-rod phantom. The calibration accuracy predicted by the numerical studies was confirmed by the experimental point source calibrations at approximately 0.1 mm, for both the center of rotation (COR) estimation of a rotation stage and the slit aperture position (SAP) estimation of a slit-slat collimator, using an optimized system calibration protocol. The reconstructed images of a hot-rod phantom showed satisfactory spatial resolution with a proper calibration, and visible resolution degradation with an artificially introduced 0.3 mm COR estimation error. The proposed method can be applied to other SPECT and CT imaging systems to analyze calibration method assessment and calibration protocol
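
    The core of the SVD assessment can be sketched as follows. The Jacobian `J` (derivatives of the least-squares residuals with respect to the calibration parameters) is assumed given; the noise level and rank tolerance are illustrative:

```python
import numpy as np

def calibration_assessment(J, sigma_noise=1.0, tol=1e-10):
    """Assess a calibration via SVD of the least-squares Jacobian J.

    - Uniqueness: J must be full column rank (no near-zero singular values).
    - Accuracy: parameter covariance follows from the inverted singular
      values, Cov(p) ~= sigma^2 * V diag(1/s^2) V^T.
    """
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    unique = bool(s.min() > tol * s.max())
    if not unique:
        return unique, None           # calibration parameters not identifiable
    cov = sigma_noise ** 2 * (Vt.T * (1.0 / s ** 2)) @ Vt
    return unique, np.sqrt(np.diag(cov))  # 1-sigma accuracy per parameter
```

Sweeping the number of simulated projection samplings and watching the predicted parameter sigmas is one way to realize the figure-of-merit idea mentioned above.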

  1. The suitability of common metrics for assessing parotid and larynx autosegmentation accuracy.

    PubMed

    Beasley, William J; McWilliam, Alan; Aitkenhead, Adam; Mackay, Ranald I; Rowbottom, Carl G

    2016-03-08

    Contouring structures in the head and neck is time-consuming, and automatic segmentation is an important part of an adaptive radiotherapy workflow. Geometric accuracy of automatic segmentation algorithms has been widely reported, but there is no consensus as to which metrics provide clinically meaningful results. This study investigated whether geometric accuracy (as quantified by several commonly used metrics) was associated with dosimetric differences for the parotid and larynx, comparing automatically generated contours against manually drawn ground truth contours. This enabled the suitability of different commonly used metrics to be assessed for measuring automatic segmentation accuracy of the parotid and larynx. Parotid and larynx structures for 10 head and neck patients were outlined by five clinicians to create ground truth structures. An automatic segmentation algorithm was used to create automatically generated normal structures, which were then used to create volumetric-modulated arc therapy plans. The mean doses to the automatically generated structures were compared with those of the corresponding ground truth structures, and the relative difference in mean dose was calculated for each structure. It was found that this difference did not correlate with the geometric accuracy provided by several metrics, notably the Dice similarity coefficient, which is a commonly used measure of spatial overlap. Surface-based metrics provided stronger correlation and are, therefore, more suitable for assessing automatic segmentation of the parotid and larynx.
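
    For reference, the overlap metric discussed above (Dice) and a generic surface-based alternative can be sketched with NumPy/SciPy. The mean-surface-distance formulation here is a common one, not necessarily the exact surface metric used in the study:

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

def mean_surface_distance(a, b, spacing=1.0):
    """Symmetric mean surface distance between two binary masks
    (a surface-based metric of the kind the study favours)."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    surf_a = a ^ ndimage.binary_erosion(a)   # boundary voxels of a
    surf_b = b ^ ndimage.binary_erosion(b)   # boundary voxels of b
    dist_to_b = ndimage.distance_transform_edt(~surf_b) * spacing
    dist_to_a = ndimage.distance_transform_edt(~surf_a) * spacing
    d = np.concatenate([dist_to_b[surf_a], dist_to_a[surf_b]])
    return d.mean()
```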

  2. A novel method for assessing the 3-D orientation accuracy of inertial/magnetic sensors.

    PubMed

    Faber, Gert S; Chang, Chien-Chi; Rizun, Peter; Dennerlein, Jack T

    2013-10-18

    A novel method for assessing the accuracy of inertial/magnetic sensors is presented. The method, referred to as the "residual matrix" method, is advantageous because it decouples the sensor's error with respect to Earth's gravity vector (attitude residual error: pitch and roll) from the sensor's error with respect to magnetic north (heading residual error), while remaining insensitive to singularity problems when the second Euler rotation is close to ±90°. As a demonstration, the accuracy of an inertial/magnetic sensor mounted to a participant's forearm was evaluated during a reaching task in a laboratory. Sensor orientation was measured internally (by the inertial/magnetic sensor) and externally using an optoelectronic measurement system with a marker cluster rigidly attached to the sensor's enclosure. Roll, pitch and heading residuals were calculated using the proposed novel method, as well as using a common orientation assessment method where the residuals are defined as the difference between the Euler angles measured by the inertial sensor and those measured by the optoelectronic system. Using the proposed residual matrix method, the roll and pitch residuals remained less than 1° and, as expected, no statistically significant difference between these two measures of attitude accuracy was found; the heading residuals were significantly larger than the attitude residuals but remained below 2°. Using the direct Euler angle comparison method, the residuals were in general larger due to singularity issues, and the expected significant difference between inertial/magnetic sensor attitude and heading accuracy was not present.
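
    The residual-matrix idea can be sketched as a rotation-matrix decomposition. This is an illustrative twist-swing style split of the residual rotation into a tilt (attitude) and a rotation about the vertical (heading); it is not necessarily the authors' exact formulation:

```python
import numpy as np

def residual_angles(R_ref, R_est):
    """Decompose an orientation residual into attitude (tilt w.r.t. the
    gravity vector) and heading (rotation about the vertical) components,
    avoiding Euler-angle singularities."""
    R = R_ref.T @ R_est                       # residual rotation matrix
    # attitude residual: angle between estimated and reference vertical axes
    tilt = np.degrees(np.arccos(np.clip(R[2, 2], -1.0, 1.0)))
    # heading residual: twist of the residual rotation about the vertical
    heading = np.degrees(np.arctan2(R[1, 0] - R[0, 1], R[0, 0] + R[1, 1]))
    return tilt, heading
```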

  3. Assessing the impact of measurement frequency on accuracy and uncertainty of water quality data

    NASA Astrophysics Data System (ADS)

    Helm, Björn; Schiffner, Stefanie; Krebs, Peter

    2014-05-01

    Physico-chemical water quality is a major criterion in evaluating the ecological state of a river water body. Physical and chemical water properties are measured to assess the river state, identify prevalent pressures and develop mitigating measures. Routinely, water quality is assessed based on weekly to quarterly grab samples. The increasing availability of online sensor data measured at high frequency allows for an enhanced understanding of emission and transport dynamics, as well as the identification of typical and critical states. In this study we present a systematic approach to assess the impact of measurement frequency on the accuracy and uncertainty of derived aggregate indicators of environmental quality. High-frequency (10 min⁻¹ and 15 min⁻¹) measurements of water temperature, pH, turbidity, electric conductivity and concentrations of dissolved oxygen, nitrate, ammonia and phosphate are assessed in resampling experiments. The data were collected at 14 sites in eastern and northern Germany, representing catchments between 40 km² and 140 000 km² of varying properties. Resampling is performed to create series of hourly to quarterly frequency, including special restrictions such as sampling during working hours or discharge compensation. Statistical properties and their confidence intervals are determined in a bootstrapping procedure and evaluated along a gradient of sampling frequency. For all variables, the range of the aggregate indicators in the bootstrapping realizations increases greatly with decreasing sampling frequency. Mean values of electric conductivity, pH and water temperature obtained at monthly frequency differ on average by less than five percent from the original data. Mean dissolved oxygen, nitrate and phosphate showed less than 15 % bias at most stations. Ammonia and turbidity are most sensitive to reduced sampling frequency, with up to 30 % average and 250 % maximum bias at monthly sampling frequency.
A systematic bias is recognized
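
    The resample-then-bootstrap procedure can be sketched as follows; `step` expresses the thinning from the high-frequency record to the lower test frequency, and all names are illustrative rather than the study's implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

def subsampled_mean_ci(series, step, n_boot=1000, ci=95):
    """Bootstrap confidence interval of the mean for a high-frequency
    series thinned to a lower sampling frequency (keep every step-th
    observation), mimicking sparser grab sampling."""
    sub = np.asarray(series, float)[::step]
    boots = [rng.choice(sub, size=sub.size, replace=True).mean()
             for _ in range(n_boot)]
    lo, hi = np.percentile(boots, [(100 - ci) / 2, 100 - (100 - ci) / 2])
    return sub.mean(), (lo, hi)
```

Evaluating the interval width along a gradient of `step` values reproduces the frequency-vs-uncertainty analysis described above.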

  4. Calibration of ground-based microwave radiometers - Accuracy assessment and recommendations for network users

    NASA Astrophysics Data System (ADS)

    Pospichal, Bernhard; Küchler, Nils; Löhnert, Ulrich; Crewell, Susanne; Czekala, Harald; Güldner, Jürgen

    2016-04-01

    Ground-based microwave radiometers (MWR) are becoming widely used in atmospheric remote sensing and are starting to be routinely operated by national weather services and other institutions. However, common standards for the calibration of these radiometers and detailed knowledge of their error characteristics are needed in order to assimilate the data into models. Intercomparisons of calibrations by different MWRs have rarely been done. Therefore, two calibration experiments were performed in Lindenberg (2014) and Meckenheim (2015) in the frame of TOPROF (COST Action ES1303) in order to assess uncertainties and differences between various instruments. In addition, a series of experiments was performed in Oklahoma in autumn 2014. The focus was on the performance of the two main instrument types currently used operationally: the MP-Profiler series by Radiometrics Corporation and the HATPRO series by Radiometer Physics GmbH (RPG). Both instrument types operate in two frequency bands, one along the 22 GHz water vapour line, the other on the lower wing of the 60 GHz oxygen absorption complex. The goal was to establish protocols for providing quality-controlled (QC) MWR data and their uncertainties. To this end, standardized calibration procedures for MWR were developed and recommendations for radiometer users were compiled. We focus here mainly on data types, integration times and optimal settings for calibration intervals, both for absolute (liquid nitrogen, tipping curve) and relative (hot load, noise diode) calibrations. Besides the recommendations for ground-based MWR operators, we will present methods to determine the accuracy of the calibration as well as means for automatic data quality control. In addition, some results from the intercomparison of different radiometers will be discussed.

  5. Seismic Network Performance Estimation: Comparing Predictions of Magnitude of Completeness and Location Accuracy to Observations from an Earthquake Catalogue

    NASA Astrophysics Data System (ADS)

    Spriggs, N.; Greig, D. W.; Ackerley, N. J.

    2014-12-01

    The design of seismic networks for the monitoring of induced seismicity is of critical importance. The recent introduction of regulations in various locations around the world (with more upcoming) has created a need for a priori confirmation that certain performance standards are met. We develop a tool to assess two key measures of network performance without an earthquake catalogue: magnitude of completeness and location accuracy. Site noise measurements are taken at existing seismic stations or as part of a noise survey. We then interpolate between measured values to determine a noise map for the entire region. The site noise is then summed with the instrument noise to determine the effective station noise at each of the proposed station locations. Location accuracy is evaluated by generating a covariance matrix that represents the error ellipsoid from the travel time derivatives (Peters and Crosson, 1972). To determine the magnitude of completeness we assume isotropic radiation and mandate a minimum signal to noise ratio for detection. For every gridpoint, we compute the Brune spectra for synthetic events and iterate to determine the smallest magnitude event that can be detected by at least four stations. We apply this methodology to an example network. We predict the magnitude of completeness and the location accuracy and compare the predicted values to observed values generated from the existing earthquake catalogue for the network. We discuss the effects of hypothetical station additions and removals on network performance to simulate network expansions and station failures. The ability to predict hypothetical station performance allows for the optimization of seismic network design and enables prediction of network performance even for a purely hypothetical seismic network. This allows the operators of networks for induced seismicity monitoring to be confident that performance criteria are met from day one of operations.
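
    The detection criterion (a minimum signal-to-noise ratio at a minimum number of stations) can be illustrated with a toy amplitude model. The 1/r geometrical-spreading decay and the magnitude grid below are simplified stand-ins for the Brune-spectrum calculation the authors actually perform:

```python
import numpy as np

def magnitude_of_completeness(dists_km, station_noise, snr_min=3.0,
                              n_required=4):
    """Smallest magnitude on a coarse grid detectable at >= n_required
    stations, under a toy isotropic amplitude model A = 10**M / r
    (geometrical spreading only; no attenuation or source spectrum)."""
    dists = np.asarray(dists_km, float)
    noise = np.asarray(station_noise, float)
    for m in np.arange(-2.0, 5.0, 0.1):
        amp = 10.0 ** m / dists
        detected = (amp / noise) >= snr_min
        if detected.sum() >= n_required:
            return round(m, 1)
    return None  # no magnitude on the grid is detectable
```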

  6. Initial Performance Assessment of CALIOP

    NASA Technical Reports Server (NTRS)

    Winker, David; Hunt, Bill; McGill, Matthew

    2007-01-01

    The Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP, pronounced the same as "calliope") is a spaceborne two-wavelength polarizatio n lidar that has been acquiring global data since June 2006. CALIOP p rovides high resolution vertical profiles of clouds and aerosols, and has been designed with a very large linear dynamic range to encompas s the full range of signal returns from aerosols and clouds. CALIOP is the primary instrument carried by the Cloud-Aerosol Lidar and Infrar ed Pathfinder Satellite Observations (CALIPSO) satellite, which was l aunched on April, 28 2006. CALIPSO was developed within the framework of a collaboration between NASA and the French space agency, CNES. I nitial data analysis and validation intercomparisons indicate the qua lity of data from CALIOP meets or exceeds expectations. This paper presents a description of the CALIPSO mission, the CALIOP instrument, an d an initial assessment of on-orbit measurement performance.

  7. Performance Assessment and Geometric Calibration of RESOURCESAT-2

    NASA Astrophysics Data System (ADS)

    Radhadevi, P. V.; Solanki, S. S.; Akilan, A.; Jyothi, M. V.; Nagasubramanian, V.

    2016-06-01

    Resourcesat-2 (RS-2) has successfully completed five years of operations in its orbit. This satellite has multi-resolution and multi-spectral capabilities in a single platform. A continuous and autonomous co-registration, geo-location and radiometric calibration of image data from different sensors with widely varying view angles and resolution was one of the challenges of RS-2 data processing. On-orbit geometric performance of RS-2 sensors has been widely assessed and calibrated during the initial phase operations. Since then, as an ongoing activity, various geometric performance data are being generated periodically. This is performed with sites of dense ground control points (GCPs). These parameters are correlated to the direct geo-location accuracy of the RS-2 sensors and are monitored and validated to maintain the performance. This paper brings out the geometric accuracy assessment, calibration and validation done for about 500 datasets of RS-2. The objectives of this study are to ensure the best absolute and relative location accuracy of different cameras, location performance with payload steering and co-registration of multiple bands. This is done using a viewing geometry model, given ephemeris and attitude data, precise camera geometry and datum transformation. In the model, the forward and reverse transformations between the coordinate systems associated with the focal plane, payload, body, orbit and ground are rigorously and explicitly defined. System level tests using comparisons to ground check points have validated the operational geo-location accuracy performance and the stability of the calibration parameters.

  8. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies.

    PubMed

    Whiting, Penny F; Rutjes, Anne W S; Westwood, Marie E; Mallett, Susan; Deeks, Jonathan J; Reitsma, Johannes B; Leeflang, Mariska M G; Sterne, Jonathan A C; Bossuyt, Patrick M M

    2011-10-18

    In 2003, the QUADAS tool for systematic reviews of diagnostic accuracy studies was developed. Experience, anecdotal reports, and feedback suggested areas for improvement; therefore, QUADAS-2 was developed. This tool comprises 4 domains: patient selection, index test, reference standard, and flow and timing. Each domain is assessed in terms of risk of bias, and the first 3 domains are also assessed in terms of concerns regarding applicability. Signalling questions are included to help judge risk of bias. The QUADAS-2 tool is applied in 4 phases: summarize the review question, tailor the tool and produce review-specific guidance, construct a flow diagram for the primary study, and judge bias and applicability. This tool will allow for more transparent rating of bias and applicability of primary diagnostic accuracy studies.

  9. Analytic Guided-Search Model of Human Performance Accuracy in Target- Localization Search Tasks

    NASA Technical Reports Server (NTRS)

    Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.

    2000-01-01

    Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.

  10. Accuracy assessment of single and double difference models for the single epoch GPS compass

    NASA Astrophysics Data System (ADS)

    Chen, Wantong; Qin, Honglei; Zhang, Yanzhong; Jin, Tian

    2012-02-01

    The single epoch GPS compass is an important field of study, since it is a valuable technique for the orientation estimation of vehicles and it can guarantee total independence from carrier phase slips in practical applications. To achieve highly accurate angular estimates, the unknown integer ambiguities of the carrier phase observables need to be resolved. Past research has focused on ambiguity resolution for a single epoch; however, accuracy is another significant concern for many challenging applications. In this contribution, the accuracy is evaluated for the non-common clock and the common clock receiver schemes. We focus on three scenarios for either scheme: single difference model vs. double difference model, single frequency model vs. multiple frequency model, and optimal linear combinations vs. traditional triple-frequency least squares. We deduce the short baseline precision for a number of different available models and analyze the difference in accuracy between those models. Compared with the single or double difference model of the non-common clock scheme, the single difference model of the common clock scheme can greatly reduce the vertical component error of the baseline vector, which results in higher elevation accuracy. The least squares estimator can also reduce the error of the fixed baseline vector with the aid of multi-frequency observations, thereby improving the attitude accuracy. In essence, the "accuracy improvement" is attributed to the difference in accuracy between different models, not a real improvement for any specific model. If all noise levels of GPS triple frequency carrier phase are assumed the same in units of cycles, it can be proved that the optimal linear combination approach is equivalent to the traditional triple-frequency least squares, no matter which scheme is utilized. Both simulations and actual experiments have been performed to verify the correctness of the theoretical analysis.

  11. Standardizing the protocol for hemispherical photographs: accuracy assessment of binarization algorithms.

    PubMed

    Glatthorn, Jonas; Beckschäfer, Philip

    2014-01-01

    Hemispherical photography is a well-established method to optically assess ecological parameters related to plant canopies, e.g. ground-level light regimes and the distribution of foliage within the crown space. Interpreting hemispherical photographs involves classifying pixels as either sky or vegetation. A wide range of automatic thresholding or binarization algorithms exists to classify the photographs. This variety in methodology hampers the ability to compare results across studies. To identify an optimal threshold selection method, this study assessed the accuracy of seven binarization methods implemented in software currently available for the processing of hemispherical photographs. To do so, binarizations obtained by the algorithms were compared to reference data generated through a manual binarization of a stratified random selection of pixels. This approach was adopted from the accuracy assessment of map classifications known from remote sensing studies. Percentage correct (Pc) and kappa-statistics (K) were calculated. The accuracy of the algorithms was assessed for photographs taken with automatic exposure settings (auto-exposure) and photographs taken with settings which avoid overexposure (histogram-exposure). In addition, gap fraction values derived from hemispherical photographs were compared with estimates derived from the manually classified reference pixels. All tested algorithms were shown to be sensitive to overexposure. Three of the algorithms showed an accuracy which was high enough to be recommended for the processing of histogram-exposed hemispherical photographs: "Minimum" (Pc 98.8%; K 0.952), "Edge Detection" (Pc 98.1%; K 0.950), and "Minimum Histogram" (Pc 98.1%; K 0.947). The Minimum algorithm overestimated gap fraction least of all (11%). The overestimations by the algorithms Edge Detection (63%) and Minimum Histogram (67%) were considerably larger. For the remaining four evaluated algorithms (IsoData, Maximum Entropy, MinError, and Otsu
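
    The two agreement statistics used in the assessment have simple closed forms; a minimal sketch for binary sky/vegetation masks, with Cohen's kappa correcting the raw agreement for chance:

```python
import numpy as np

def accuracy_measures(reference, classified):
    """Percentage correct (Pc) and Cohen's kappa (K) for a binary
    classification against manually labelled reference pixels."""
    ref = np.asarray(reference, bool).ravel()
    cls = np.asarray(classified, bool).ravel()
    pc = np.mean(ref == cls)                      # observed agreement
    # expected agreement by chance, from the marginal class proportions
    p_ref, p_cls = ref.mean(), cls.mean()
    pe = p_ref * p_cls + (1 - p_ref) * (1 - p_cls)
    kappa = (pc - pe) / (1 - pe)
    return 100.0 * pc, kappa
```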

  12. Comparison of c-space and p-space particle tracing schemes on high-performance computers: accuracy and performance

    NASA Astrophysics Data System (ADS)

    Schäfer, F.; Breuer, M.

    2002-06-01

    This paper presents a comparison of four different particle tracing schemes that were integrated into a parallel multiblock flow simulation program within the framework of a co-visualization approach. One p-space and three different c-space particle tracing schemes are described in detail. With respect to application on high-performance computers, parallelization and vectorization of the particle tracing schemes are discussed. The accuracy and the performance of the particle tracing schemes are analyzed extensively on the basis of several test cases. The accuracy with respect to an analytically prescribed and a numerically calculated velocity field is investigated, the latter in order to take into account the contribution of the flow solver's error to the overall error of the particle traces. Performance measurements on both scalar and vector computers are discussed. With respect to practical CFD applications and the required performance, especially on vector computers, a newly developed, improved c-space scheme is shown to be comparable to or better than the investigated p-space scheme. In terms of accuracy, the new c-space scheme is considerably more advantageous than traditional c-space methods. Finally, an application to a direct numerical simulation of a turbulent channel flow is presented.
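
    A p-space trace of the kind compared here can be sketched with classical fourth-order Runge-Kutta integration; the analytic `velocity` callback stands in for interpolation from the flow solver's mesh (c-space schemes instead integrate in the computational coordinates of the mesh):

```python
import numpy as np

def trace_particle(velocity, x0, dt, n_steps):
    """Physical-space (p-space) particle trace with classical RK4.

    velocity: callable mapping a position array to a velocity array.
    Returns the particle path as an (n_steps + 1, dim) array.
    """
    x = np.asarray(x0, float)
    path = [x.copy()]
    for _ in range(n_steps):
        k1 = velocity(x)
        k2 = velocity(x + 0.5 * dt * k1)
        k3 = velocity(x + 0.5 * dt * k2)
        k4 = velocity(x + dt * k3)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(x.copy())
    return np.array(path)
```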

  13. A Method for Assessing the Accuracy of a Photogrammetry System for Precision Deployable Structures

    NASA Technical Reports Server (NTRS)

    Moore, Ashley

    2005-01-01

    The measurement techniques used to validate analytical models of large deployable structures are an integral part of the technology development process and must be precise and accurate. Photogrammetry and videogrammetry are viable, accurate, and unobtrusive methods for measuring such large structures. Photogrammetry uses software to determine the three-dimensional position of a target from camera images. Videogrammetry is based on the same principle, except that a series of timed images is analyzed. This work addresses the accuracy of a digital photogrammetry system used for measurement of large, deployable space structures at JPL. First, photogrammetry tests are performed on a precision space truss test article, and the images are processed using Photomodeler software. The accuracy of the Photomodeler results is determined through comparison with measurements of the test article taken by an external testing group using the VSTARS photogrammetry system. These two measurements are then compared with Australis photogrammetry software that simulates a measurement test to predict its accuracy. The software is then used to study how particular factors, such as camera resolution and placement, affect the system accuracy, to help design the setup for the videogrammetry system that will offer the highest level of accuracy for measurement of deploying structures.

  14. Accuracy assessment of topographic mapping using UAV image integrated with satellite images

    NASA Astrophysics Data System (ADS)

    Azmi, S. M.; Ahmad, Baharin; Ahmad, Anuar

    2014-02-01

    Unmanned Aerial Vehicles (UAVs) are extensively applied in various fields such as military applications, archaeology, agriculture and scientific research. This study focuses on topographic mapping and map updating. The UAV is an alternative way to ease the process of acquiring data, with low manufacturing and operating costs and simple operation. Furthermore, UAV images will be integrated with QuickBird images that are used as base maps. The objective of this study is to assess and compare the accuracy of topographic mapping using UAV images integrated with aerial photographs and satellite images. The main purpose of using UAV images is to replace cloud-covered areas, which commonly exist in aerial photographs and satellite images, and to update topographic maps. Meanwhile, spatial resolution, pixel size, scale, geometric accuracy and correction, image quality and information content are important requirements for the generation of a topographic map from these kinds of data. In this study, ground control points (GCPs) and check points (CPs) were established using the real-time kinematic Global Positioning System (RTK-GPS) technique. Two types of analysis are carried out in this study: quantitative and qualitative assessment. Quantitative assessment is carried out by calculating the root mean square error (RMSE). The outputs of this study include a topographic map and an orthophoto. From this study, the accuracy of the UAV image is ±0.460 m. In conclusion, UAV images have the potential to be used for updating topographic maps.

  15. Accuracy of pattern detection methods in the performance of golf putting.

    PubMed

    Couceiro, Micael S; Dias, Gonçalo; Mendes, Rui; Araújo, Duarte

    2013-01-01

    The authors present a comparison of the classification accuracy of five pattern detection methods in the performance of golf putting. The position of the golf club was detected using a computer vision technique, followed by the Darwinian particle swarm optimization estimation algorithm, to obtain a kinematical model of each trial. The estimated model parameters were subsequently used as samples for five classification algorithms: (a) linear discriminant analysis, (b) quadratic discriminant analysis, (c) naive Bayes with normal distribution, (d) naive Bayes with kernel smoothing density estimate, and (e) least squares support vector machines. Beyond testing the performance of each classification method, it was also possible to identify a putting signature characterizing each golf player. It may be concluded that these methods can be applied to the study of coordination and motor control in putting performance, allowing for the analysis of intra- and interpersonal variability of motor behavior in performance contexts.

  16. Procedural Documentation and Accuracy Assessment of Bathymetric Maps and Area/Capacity Tables for Small Reservoirs

    USGS Publications Warehouse

    Wilson, Gary L.; Richards, Joseph M.

    2006-01-01

    Because of the increasing use and importance of lakes for water supply to communities, a repeatable and reliable procedure to determine lake bathymetry and capacity is needed. A method to determine the accuracy of the procedure will help ensure proper collection and use of the data and resulting products. It is important to clearly define the intended products and desired accuracy before conducting the bathymetric survey to ensure proper data collection. A survey-grade echo sounder and differential global positioning system receivers were used to collect water-depth and position data in December 2003 at Sugar Creek Lake near Moberly, Missouri. Data were collected along planned transects, with an additional set of quality-assurance data collected for use in accuracy computations. All collected data were imported into a geographic information system database. A bathymetric surface model, contour map, and area/capacity tables were created from the geographic information system database. An accuracy assessment was completed on the collected data, bathymetric surface model, area/capacity table, and contour map products. Using established vertical accuracy standards, the accuracy of the collected data, bathymetric surface model, and contour map product was 0.67 foot, 0.91 foot, and 1.51 feet at the 95 percent confidence level. By comparing results from different transect intervals with the quality-assurance transect data, it was determined that a transect interval of 1 percent of the longitudinal length of Sugar Creek Lake produced nearly as good results as a 0.5 percent transect interval for the bathymetric surface model, area/capacity table, and contour map products.
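Vertical accuracy "at the 95 percent confidence level" as quoted above conventionally (e.g. under the NSSDA standard) means the vertical RMSE scaled by 1.96. A sketch with hypothetical residuals:

```python
import math

# Hypothetical depth residuals (feet) between the surface model and QA transect points.
dz = [0.3, -0.5, 0.2, 0.6, -0.4, 0.1]

rmse_z = math.sqrt(sum(d * d for d in dz) / len(dz))
accuracy_95 = 1.9600 * rmse_z  # vertical accuracy at the 95 percent confidence level
print(round(accuracy_95, 2))  # → 0.76
```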

  17. New technology in dietary assessment: a review of digital methods in improving food record accuracy.

    PubMed

    Stumbo, Phyllis J

    2013-02-01

    Methods for conducting dietary assessment in the United States date back to the early twentieth century. Assessment methods have encompassed dietary records, written and spoken dietary recalls, pencil-and-paper FFQs and, more recently, computer and internet applications. Emerging innovations use camera and mobile telephone technology to capture food and meal images. This paper describes six projects sponsored by the United States National Institutes of Health that use digital methods to improve food records, and two mobile phone applications that use crowdsourcing. The techniques under development show promise for improving the accuracy of food records.

  18. Results of 17 Independent Geopositional Accuracy Assessments of Earth Satellite Corporation's GeoCover Landsat Thematic Mapper Imagery. Geopositional Accuracy Validation of Orthorectified Landsat TM Imagery: Northeast Asia

    NASA Technical Reports Server (NTRS)

    Smith, Charles M.

    2003-01-01

    This report provides results of an independent assessment of the geopositional accuracy of the Earth Satellite (EarthSat) Corporation's GeoCover, Orthorectified Landsat Thematic Mapper (TM) imagery over Northeast Asia. This imagery was purchased through NASA's Earth Science Enterprise (ESE) Scientific Data Purchase (SDP) program.

  19. Assessing accuracy and precision for field and laboratory data: a perspective in ecosystem restoration

    USGS Publications Warehouse

    Stapanian, Martin A.; Lewis, Timothy E; Palmer, Craig J.; Middlebrook Amos, Molly

    2016-01-01

    Unlike most laboratory studies, rigorous quality assurance/quality control (QA/QC) procedures may be lacking in ecosystem restoration (“ecorestoration”) projects, despite legislative mandates in the United States. This is due, in part, to ecorestoration specialists making the false assumption that some types of data (e.g. discrete variables such as species identification and abundance classes) are not subject to evaluations of data quality. Moreover, emergent behavior manifested by complex, adapting, and nonlinear organizations responsible for monitoring the success of ecorestoration projects tend to unconsciously minimize disorder, QA/QC being an activity perceived as creating disorder. We discuss similarities and differences in assessing precision and accuracy for field and laboratory data. Although the concepts for assessing precision and accuracy of ecorestoration field data are conceptually the same as laboratory data, the manner in which these data quality attributes are assessed is different. From a sample analysis perspective, a field crew is comparable to a laboratory instrument that requires regular “recalibration,” with results obtained by experts at the same plot treated as laboratory calibration standards. Unlike laboratory standards and reference materials, the “true” value for many field variables is commonly unknown. In the laboratory, specific QA/QC samples assess error for each aspect of the measurement process, whereas field revisits assess precision and accuracy of the entire data collection process following initial calibration. Rigorous QA/QC data in an ecorestoration project are essential for evaluating the success of a project, and they provide the only objective “legacy” of the dataset for potential legal challenges and future uses.

  20. 3D Surface Reconstruction of Plant Seeds by Volume Carving: Performance and Accuracies

    PubMed Central

    Roussel, Johanna; Geiger, Felix; Fischbach, Andreas; Jahnke, Siegfried; Scharr, Hanno

    2016-01-01

    We describe a method for 3D reconstruction of plant seed surfaces, focusing on small seeds with diameters as small as 200 μm. The method considers robotized systems allowing single seed handling in order to rotate a single seed in front of a camera. Even though such systems feature high position repeatability, at sub-millimeter object scales, camera pose variations have to be compensated. We do this by robustly estimating the tool center point from each acquired image. 3D reconstruction can then be performed by a simple shape-from-silhouette approach. In experiments we investigate runtimes, theoretically achievable accuracy, experimentally achieved accuracy, and show as a proof of principle that the proposed method is well sufficient for 3D seed phenotyping purposes. PMID:27375628

  1. Accuracy Assessment of Crown Delineation Methods for the Individual Trees Using LIDAR Data

    NASA Astrophysics Data System (ADS)

    Chang, K. T.; Lin, C.; Lin, Y. C.; Liu, J. K.

    2016-06-01

    Forest canopy density and height are used as variables in a number of environmental applications, including the estimation of biomass, forest extent and condition, and biodiversity. Airborne Light Detection and Ranging (LiDAR) is very useful for estimating forest canopy parameters from the generated canopy height models (CHMs). The purpose of this work is to introduce an algorithm to delineate crown parameters, e.g. tree height and crown radii, from rasterized CHMs, and to assess the accuracy of the extracted volumetric parameters of single trees against manual measurements on corresponding aerial photo pairs. A LiDAR dataset of a golf course acquired by a Leica ALS70-HP is used in this study. First, CHMs are generated with two algorithms: a traditional one that subtracts a digital elevation model (DEM) from a digital surface model (DSM), and a pit-free approach. Then two algorithms, a multilevel morphological active-contour (MMAC) and a variable window filter (VWF), are implemented for individual tree delineation. Finally, the results of the two automatic estimation methods are evaluated against manually measured stand-level parameters, i.e. tree height and crown diameter. The CHM generated by simple subtraction is full of empty pixels ("pits") that strongly affect subsequent individual tree delineation. The experimental results indicate that after the pit-free process more individual trees can be extracted and tree crown shapes become more complete in the CHM data.

  2. Assessing Differential Item Functioning in Performance Tests.

    ERIC Educational Resources Information Center

    Zwick, Rebecca; And Others

    Although the belief has been expressed that performance assessments are intrinsically more fair than multiple-choice measures, some forms of performance assessment may in fact be more likely than conventional tests to tap construct-irrelevant factors. As performance assessment grows in popularity, it will be increasingly important to monitor the…

  3. AREST. Performance Assessment HLRW Repository

    SciTech Connect

    Engel, D.W.; Liebetrau, A.M.; Apted, M.J.

    1990-01-01

    AREST was developed to provide a quantitative assessment of the performance of the high-level radioactive waste engineered barrier system and its individual components for a number of repository designs and geologic settings. AREST is a source-term model which simulates releases from the engineered barrier system (EBS) into the natural barriers of the geosphere. It consists of three component models and five process models that describe the post-emplacement environment of a waste package, all combined within a probabilistic framework. The three component models are: a waste package containment (WPC) model that simulates the corrosion degradation processes which eventually result in waste package containment failure; a waste package release (WPR) model that calculates the rates of radionuclide release from the failed waste package; and an engineered system release (ESR) model that controls the flow of information among all AREST components and process models and, in particular, combines release output from the WPR model with failure times from the WPC model to produce estimates of total release. The process models describe the thermal, geochemical, hydrological, mechanical, and radiation processes that determine the waste package environment. AREST provides a menu-based editor that allows the user to change the input data prior to execution.

  4. Behavior model for performance assessment.

    SciTech Connect

    Brown-VanHoozer, S. A.

    1999-07-23

    Every individual channels information differently, based on a preference for one sensory modality or representational system (visual, auditory, or kinesthetic), the one we tend to favor most (our primary representational system, or PRS). Some of us access and store information primarily visually, some auditorily, and others kinesthetically (through feel and touch); this in turn establishes our information-processing patterns and strategies and our external-to-internal (and subsequently vice versa) experiential language representation. Because of these different ways of channeling information, each of us responds differently to a task: in the way we gather and process external information (input), in our response time (process), and in the outcome (behavior). Traditional human models of decision making and response time focus on perception, cognitive, and motor systems stimulated and influenced by the three sensory modalities: visual, auditory, and kinesthetic. These are the building blocks to knowing how someone is thinking, and being aware of what is taking place, and how to ask questions, is essential in assessing performance and reducing human errors. Existing models give predictions based on time values or response times for a particular event, which may be summed and averaged for a generalization of behavior. However, without a basic understanding of how behavior is predicated on a decision-making strategy process, predictive models are inefficient in their analysis of the means by which behavior was generated: what is seen is only the end result.

  5. Assessment of accuracy of CFD simulations through quantification of a numerical dissipation rate

    NASA Astrophysics Data System (ADS)

    Domaradzki, J. A.; Sun, G.; Xiang, X.; Chen, K. K.

    2016-11-01

    The accuracy of CFD simulations is typically assessed through a time-consuming process of multiple runs and comparisons with available benchmark data. We propose that the accuracy can be assessed in the course of actual runs using a simpler method based on a numerical dissipation rate which is computed at each time step for arbitrary sub-domains using only information provided by the code in question (Schranner et al., 2015; Castiglioni and Domaradzki, 2015). Here, the method has been applied to analyze numerical simulation results obtained using OpenFOAM software for flow around a sphere at a Reynolds number of 1000. Different mesh resolutions were used in the simulations. For the coarsest mesh, the ratio of the numerical dissipation to the viscous dissipation downstream of the sphere varies from 4.5% immediately behind the sphere to 22% further away. For the finest mesh this ratio varies from 0.4% behind the sphere to 6% further away. The large numerical dissipation in the former case is a direct indicator that the simulation results are inaccurate, e.g., the predicted Strouhal number is 16% lower than the benchmark. Low numerical dissipation in the latter case is an indicator of acceptable accuracy, with the Strouhal number in the simulations matching the benchmark. Supported by NSF.

  6. Numerical simulation of turbulence flow in a Kaplan turbine -Evaluation on turbine performance prediction accuracy-

    NASA Astrophysics Data System (ADS)

    Ko, P.; Kurosawa, S.

    2014-03-01

    The understanding and accurate prediction of flow behaviour related to cavitation and pressure fluctuation in a Kaplan turbine are important for design work that enhances turbine performance, including extending the operating life span and improving turbine efficiency. In this paper, a high-accuracy turbine and cavitation performance prediction method based on the entire flow passage of a Kaplan turbine is presented and evaluated. The two-phase flow field is predicted by solving the Reynolds-averaged Navier-Stokes equations with a volume-of-fluid method tracking the free surface, combined with a Reynolds stress model. The growth and collapse of cavitation bubbles are modelled by the modified Rayleigh-Plesset equation. The prediction accuracy is evaluated by comparison with model test results for an Ns 400 Kaplan model turbine. The experimentally measured data, including turbine efficiency, cavitation performance, and pressure fluctuation, are accurately predicted. Furthermore, cavitation occurrence on the runner blade surface and its influence on the hydraulic loss of the flow passage are discussed. The evaluated prediction method for turbine flow and performance is introduced to facilitate future design and research work on Kaplan-type turbines.

  7. Quantitative Assessment of the Accuracy of Constitutive Laws for Plasticity with an Emphasis on Cyclic Deformation

    DTIC Science & Technology

    1993-04-01

    ...new law, the B-L law. The experimental database is constructed from a series of constant amplitude and random amplitude strain controlled cyclic... description of the experimental instrumentation is given in Appendix I. The cyclic plasticity experiments were performed under strain control at room... instrumentation is present and control accuracy is not as good, the increments or difference of strain at two adjacent sampling intervals should be...

  8. Immediate Feedback on Accuracy and Performance: The Effects of Wireless Technology on Food Safety Tracking at a Distribution Center

    ERIC Educational Resources Information Center

    Goomas, David T.

    2012-01-01

    The effects of wireless ring scanners, which provided immediate auditory and visual feedback, were evaluated to increase the performance and accuracy of order selectors at a meat distribution center. The scanners not only increased performance and accuracy compared to paper pick sheets, but were also instrumental in immediate and accurate data…

  9. Theory and methods for accuracy assessment of thematic maps using fuzzy sets

    SciTech Connect

    Gopal, S.; Woodcock, C. )

    1994-02-01

    The use of fuzzy sets in map accuracy assessment expands the amount of information that can be provided regarding the nature, frequency, magnitude, and source of errors in a thematic map. The need for using fuzzy sets arises from the observation that all map locations do not fit unambiguously in a single map category. Fuzzy sets allow for varying levels of set membership for multiple map categories. A linguistic measurement scale allows the kinds of comments commonly made during map evaluations to be used to quantify map accuracy. Four tables result from the use of fuzzy functions, and when taken together they provide more information than traditional confusion matrices. The use of a hypothetical dataset helps illustrate the benefits of the new methods. It is hoped that the enhanced ability to evaluate maps resulting from the use of fuzzy sets will improve our understanding of uncertainty in maps and facilitate improved error modeling. 40 refs.
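A minimal sketch of the kind of fuzzy evaluation described, assuming a 1-5 linguistic scale and hypothetical site scores (the MAX and RIGHT operators below follow the paper's general idea, not its exact tables):

```python
# Hypothetical reference sites: map label plus expert linguistic scores
# (1-5 scale, 5 = "absolutely right") for each candidate class.
sites = [
    {"map": "forest", "scores": {"forest": 5, "shrub": 2, "water": 1}},
    {"map": "shrub",  "scores": {"forest": 4, "shrub": 3, "water": 1}},
    {"map": "water",  "scores": {"forest": 1, "shrub": 1, "water": 5}},
]

def max_accuracy(sites):
    """Fraction of sites where the map label received the highest expert score."""
    return sum(max(s["scores"], key=s["scores"].get) == s["map"]
               for s in sites) / len(sites)

def right_accuracy(sites, threshold=3):
    """Fraction of sites where the map label scores at least `threshold`."""
    return sum(s["scores"][s["map"]] >= threshold for s in sites) / len(sites)

print(max_accuracy(sites), right_accuracy(sites))
```

The gap between the two measures (here one site is "acceptable" but not the best match) is exactly the extra information a fuzzy evaluation adds over a conventional confusion matrix.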

  10. An accuracy assessment of Cartesian-mesh approaches for the Euler equations

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1995-01-01

    A critical assessment of the accuracy of Cartesian-mesh approaches for steady, transonic solutions of the Euler equations of gas dynamics is made. An exact solution of the Euler equations (Ringleb's flow) is used not only to infer the order of the truncation error of the Cartesian-mesh approaches, but also to compare the magnitude of the discrete error directly to that obtained with a structured mesh approach. Uniformly and adaptively refined solutions using a Cartesian-mesh approach are obtained and compared to each other and to uniformly refined structured mesh results. The effect of cell merging is investigated as well as the use of two different K-exact reconstruction procedures. The solution methodology of the schemes is explained and tabulated results are presented to compare the solution accuracies.

  11. Evaluation of precision and accuracy assessment of different 3-D surface imaging systems for biomedical purposes.

    PubMed

    Eder, Maximilian; Brockmann, Gernot; Zimmermann, Alexander; Papadopoulos, Moschos A; Schwenzer-Zimmerer, Katja; Zeilhofer, Hans Florian; Sader, Robert; Papadopulos, Nikolaos A; Kovacs, Laszlo

    2013-04-01

    Three-dimensional (3-D) surface imaging has gained clinical acceptance, especially in the field of cranio-maxillo-facial and plastic, reconstructive, and aesthetic surgery. Six scanners based on different scanning principles (Minolta Vivid 910®, Polhemus FastSCAN™, GFM PRIMOS®, GFM TopoCAM®, Steinbichler Comet® Vario Zoom 250, 3dMD DSP 400®) were used to measure five sheep skulls of different sizes. In three areas with varying anatomical complexity (areas, 1 = high; 2 = moderate; 3 = low), 56 distances between 20 landmarks were defined on each skull. Manual measurement (MM), coordinate measuring machine (CMM) measurements and computer tomography (CT) measurements were used to define a reference method for further precision and accuracy evaluation of the different 3-D scanning systems. MM showed high correlation to CMM and CT measurements (both r = 0.987; p < 0.001) and served as the reference method. TopoCAM®, Comet® and Vivid 910® showed the highest measurement precision over all areas of complexity; Vivid 910®, Comet® and DSP 400® demonstrated the highest accuracy over all areas, with Vivid 910® being most accurate in areas 1 and 3, and DSP 400® most accurate in area 2. In accordance with the measured distance length, most 3-D devices present higher measurement precision and accuracy for large distances and lower precision and accuracy for short distances. In general, higher degrees of complexity are associated with lower 3-D assessment accuracy, suggesting that for optimal results, different types of scanners should be applied to specific clinical applications and medical problems according to their construction designs and characteristics.

  12. Exploring Writing Accuracy and Writing Complexity as Predictors of High-Stakes State Assessments

    ERIC Educational Resources Information Center

    Edman, Ellie Whitner

    2012-01-01

    The advent of No Child Left Behind led to increased teacher accountability for student performance and placed strict sanctions in place for failure to meet a certain level of performance each year. With instructional time at a premium, it is imperative that educators have brief academic assessments that accurately predict performance on…

  13. [Assessment of overall spatial accuracy in image guided stereotactic body radiotherapy using a spine registration method].

    PubMed

    Nakazawa, Hisato; Uchiyama, Yukio; Komori, Masataka; Hayashi, Naoki

    2014-06-01

    Stereotactic body radiotherapy (SBRT) for lung and liver tumors is always performed under image guidance, a technique used to confirm the accuracy of setup positioning by fusing planning digitally reconstructed radiographs with X-ray, fluoroscopic, or computed tomography (CT) images, using bony structures, tumor shadows, or metallic markers as landmarks. The Japanese SBRT guidelines state that bony spinal structures should be used as the main landmarks for patient setup. In this study, we used the Novalis system as a linear accelerator for SBRT of lung and liver tumors. The current study compared the differences between spine registration and target registration and calculated total spatial accuracy including setup uncertainty derived from our image registration results and the geometric uncertainty of the Novalis system. We were able to evaluate clearly whether overall spatial accuracy is achieved within a setup margin (SM) for planning target volume (PTV) in treatment planning. After being granted approval by the Hospital and University Ethics Committee, we retrospectively analyzed eleven patients with lung tumor and seven patients with liver tumor. The results showed the total spatial accuracy to be within a tolerable range for SM of treatment planning. We therefore regard our method to be suitable for image fusion involving 2-dimensional X-ray images during the treatment planning stage of SBRT for lung and liver tumors.

  14. Diffraction based overlay metrology: accuracy and performance on front end stack

    NASA Astrophysics Data System (ADS)

    Leray, Philippe; Cheng, Shaunee; Kandel, Daniel; Adel, Michael; Marchelli, Anat; Vakshtein, Irina; Vasconi, Mauro; Salski, Bartlomiej

    2008-03-01

    The overlay metrology budget is typically 1/10 of the overlay control budget, resulting in overlay metrology total measurement uncertainty requirements of 0.57 nm for the most challenging use cases of the 32nm technology generation. Theoretical considerations show that overlay technology based on differential signal scatterometry (SCOL™) has inherent advantages, which will allow it to achieve the 32nm technology generation requirements and go beyond them. In this work we present results of an experimental and theoretical study of SCOL. We present experimental results, comparing this technology with standard imaging overlay metrology. In particular, we present performance results, such as precision and tool-induced shift, for different target designs. The response to a large range of induced misalignment is also shown. SCOL performance on these targets for a real stack is reported. We also show results of simulations of the expected accuracy and performance associated with a variety of scatterometry overlay target designs. The simulations were carried out on several stacks including FEOL and BEOL materials. The inherent limitations and possible improvements of the SCOL technology are discussed. We show that with the appropriate target design and algorithms, scatterometry overlay achieves the accuracy required for future technology generations.

  15. Performance and Diagnostic Accuracy of a Urine-Based Human Papillomavirus Assay in a Referral Population.

    PubMed

    Cuzick, Jack M; Cadman, Louise; Ahmad, Amar Sabri; Ho, Linda; Terry, George; Kleeman, Michelle; Lyons, Deirdre; Austin, Janet; Stoler, Mark H; Vibat, Cecile Rose T; Dockter, Janel; Robbins, David; Billings, Paul R; Erlander, Mark G

    2017-02-21

    Background: HPV testing from clinician-collected cervical and self-collected cervico-vaginal samples is more sensitive for detecting CIN2+/CIN3+ than cytology-based screening, stimulating interest in HPV testing from urine. The objective was to determine the performance of the Trovagene HPV test for the detection of CIN2+ from urine and PreservCyt cervical samples. Methods: Women referred for colposcopy at St Mary's Hospital London, following abnormal cytology, were recruited to this diagnostic accuracy study by convenience sampling (September 2011 to April 2013). 501 paired urine and cervical samples were collected.

  16. Multinational assessment of accuracy of equations for predicting risk of kidney failure: a meta-analysis

    PubMed Central

    Tangri, Navdeep; Grams, Morgan E.; Levey, Andrew S.; Coresh, Josef; Appel, Lawrence; Astor, Brad C.; Chodick, Gabriel; Collins, Allan J.; Djurdjev, Ognjenka; Elley, C. Raina; Evans, Marie; Garg, Amit X.; Hallan, Stein I.; Inker, Lesley; Ito, Sadayoshi; Jee, Sun Ha; Kovesdy, Csaba P.; Kronenberg, Florian; Lambers Heerspink, Hiddo J.; Marks, Angharad; Nadkarni, Girish N.; Navaneethan, Sankar D.; Nelson, Robert G.; Titze, Stephanie; Sarnak, Mark J.; Stengel, Benedicte; Woodward, Mark; Iseki, Kunitoshi

    2016-01-01

    Importance: Identifying patients at risk of chronic kidney disease (CKD) progression may facilitate more optimal nephrology care. Kidney failure risk equations (KFREs) were previously developed and validated in two Canadian cohorts. Validation in other regions and in CKD populations not under the care of a nephrologist is needed. Objective: To evaluate the accuracy of the KFREs across different geographic regions and patient populations through individual-participant data meta-analysis. Data Sources: Thirty-one cohorts, including 721,357 participants with CKD Stages 3–5 in over 30 countries spanning 4 continents, were studied. These cohorts collected data from 1982 through 2014. Study Selection: Cohorts participating in the CKD Prognosis Consortium with data on end-stage renal disease. Data Extraction and Synthesis: Data were obtained and statistical analyses were performed between July 2012 and June 2015. Using the risk factors from the original KFREs, cohort-specific hazard ratios were estimated and combined in meta-analysis to form new "pooled" KFREs. Original and pooled equation performance was compared, and the need for regional calibration factors was assessed. Main Outcome and Measure: Kidney failure (treatment by dialysis or kidney transplantation). Results: During a median follow-up of 4 years, 23,829 cases of kidney failure were observed. The original KFREs achieved excellent discrimination (ability to differentiate those who developed kidney failure from those who did not) across all cohorts (overall C statistic, 0.90 (95% CI 0.89–0.92) at 2 years and 0.88 (95% CI 0.86–0.90) at 5 years); discrimination in subgroups by age, race, and diabetes status was similar. There was no improvement with the pooled equations. Calibration (the difference between observed and predicted risk) was adequate in North American cohorts, but the original KFREs overestimated risk in some non-North American cohorts. Addition of a calibration factor that lowered the baseline
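The C statistic reported above is the probability that a randomly chosen patient who progressed to kidney failure received a higher predicted risk than one who did not. For a binary outcome it can be computed pairwise; a small sketch with hypothetical risk scores:

```python
from itertools import product

def c_statistic(event_scores, nonevent_scores):
    """Probability that an event case outranks a non-event case; ties count half."""
    pairs = list(product(event_scores, nonevent_scores))
    wins = sum(1.0 if e > n else 0.5 if e == n else 0.0 for e, n in pairs)
    return wins / len(pairs)

# Hypothetical predicted 2-year risks from a risk equation.
events = [0.62, 0.47, 0.90]       # patients who reached kidney failure
non_events = [0.15, 0.40, 0.47]   # patients who did not
print(c_statistic(events, non_events))
```

A value of 0.5 means no discrimination; 1.0 means perfect ranking, so the 0.90 reported for the KFREs indicates excellent discrimination.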

  17. 24 CFR 115.206 - Performance assessments; Performance standards.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Performance assessments; Performance standards. 115.206 Section 115.206 Housing and Urban Development Regulations Relating to Housing... AGENCIES Certification of Substantially Equivalent Agencies § 115.206 Performance assessments;...

  18. 24 CFR 115.206 - Performance assessments; Performance standards.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 1 2011-04-01 2011-04-01 false Performance assessments; Performance standards. 115.206 Section 115.206 Housing and Urban Development Regulations Relating to Housing... AGENCIES Certification of Substantially Equivalent Agencies § 115.206 Performance assessments;...

  19. Performance Assessment in Real Time.

    ERIC Educational Resources Information Center

    Kimball, Chip; Cone, Tom

    2002-01-01

    Two school districts in Washington State are seeing fundamental shifts in the learning process by using technology as a tool to support data-centric instruction. An effective student-assessment strategy includes comprehensive information about student progress, analysis of assessment information so specific instructional strategies can be…

  20. Proposed Testing to Assess the Accuracy of Glass-To-Metal Seal Stress Analyses.

    SciTech Connect

    Chambers, Robert S.; Emery, John M; Tandon, Rajan; Antoun, Bonnie R.; Stavig, Mark E.; Newton, Clay S.; Gibson, Cory S; Bencoe, Denise N.

    2014-09-01

    The material characterization tests conducted on 304L VAR stainless steel and Schott 8061 glass have provided higher-fidelity data for calibration of material models used in glass-to-metal (GTM) seal analyses. Specifically, a thermo-multi-linear elastic plastic (thermo-MLEP) material model has been defined for SS304L, and the Simplified Potential Energy Clock nonlinear viscoelastic model has been calibrated for the S8061 glass. To assess the accuracy of finite element stress analyses of GTM seals, a suite of tests is proposed to provide data for comparison to model predictions.

  1. Accuracy assessment of the global ionospheric model over the Southern Ocean based on dynamic observation

    NASA Astrophysics Data System (ADS)

    Luo, Xiaowen; Xu, Huajun; Li, Zishen; Zhang, Tao; Gao, Jinyao; Shen, Zhongyan; Yang, Chunguo; Wu, Ziyin

    2017-02-01

    The global ionospheric model based on reference stations of the Global Navigation Satellite System (GNSS) of the International GNSS Service is presently the most commonly used global ionosphere product. Comprehensively analyzing and evaluating the accuracy and reliability of the model is very important for the reasonable use of this kind of ionospheric product. In terms of receiver station deployment, this work differs from the traditional performance evaluation of the global ionospheric model based on observation data from ground-based static reference stations. A preliminary evaluation and analysis of the global ionospheric model was conducted with dynamic observation data collected across different latitudes over the southern oceans. The validation results showed that the accuracy of the global ionospheric model over the southern oceans is about 5 TECu, deviating from the measured ionospheric TEC by about -0.6 TECu.

  2. Assessing external cause of injury coding accuracy for transport injury hospitalizations.

    PubMed

    Bowman, Stephen M; Aitken, Mary E

    2011-01-01

    External cause of injury codes (E codes) capture circumstances surrounding injuries. While hospital discharge data are primarily collected for administrative/billing purposes, these data are secondarily used for injury surveillance. We assessed the accuracy and completeness of hospital discharge data for transport-related crashes using trauma registry data as the gold standard. We identified mechanisms of injury with significant disagreement and developed recommendations to improve the accuracy of E codes in administrative data. Overall, we linked 2,192 (99.9 percent) of the 2,195 discharge records to trauma registry records. General mechanism categories showed good agreement, with 84.7 percent of records coded consistently between registry and discharge data (Kappa 0.762, p < .001). However, agreement was lower for specific categories (e.g., ATV crashes), with discharge records capturing only 70.4 percent of cases identified in trauma registry records. Efforts should focus on systematically improving E-code accuracy and detail through training, education, and informatics such as automated data linkages to trauma registries.
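The percent agreement and kappa quoted above come from cross-tabulating registry codes against discharge codes; kappa corrects raw agreement for chance. A sketch with a hypothetical 2x2 table:

```python
def cohens_kappa(matrix):
    """Cohen's kappa for a square agreement matrix (rows: registry, cols: discharge)."""
    n = sum(sum(row) for row in matrix)
    p_observed = sum(matrix[i][i] for i in range(len(matrix))) / n
    row_tot = [sum(row) for row in matrix]
    col_tot = [sum(col) for col in zip(*matrix)]
    p_expected = sum(r * c for r, c in zip(row_tot, col_tot)) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical cross-tabulation: motor-vehicle vs. other transport mechanism.
table = [[40, 10],
         [5, 45]]
print(cohens_kappa(table))  # → 0.7
```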

  3. Application of a Monte Carlo accuracy assessment tool to TDRS and GPS

    NASA Technical Reports Server (NTRS)

    Pavloff, Michael S.

    1994-01-01

    In support of a NASA study on the application of radio interferometry to satellite orbit determination, MITRE developed a simulation tool for assessing interferometric tracking accuracy. Initially, the tool was applied to the problem of determining optimal interferometric station siting for orbit determination of the Tracking and Data Relay Satellite (TDRS). Subsequently, the Orbit Determination Accuracy Estimator (ODAE) was expanded to model the general batch maximum likelihood orbit determination algorithms of the Goddard Trajectory Determination System (GTDS) with measurement types including not only group and phase delay from radio interferometry, but also range, range rate, angular measurements, and satellite-to-satellite measurements. The user of ODAE specifies the statistical properties of error sources, including inherent observable imprecision, atmospheric delays, station location uncertainty, and measurement biases. Upon Monte Carlo simulation of the orbit determination process, ODAE calculates the statistical properties of the error in the satellite state vector and any other parameters for which a solution was obtained in the orbit determination. This paper presents results from ODAE application to two different problems: (1) determination of optimal geometry for interferometric tracking of TDRS, and (2) expected orbit determination accuracy for Global Positioning System (GPS) tracking of low-earth orbit (LEO) satellites. Conclusions about optimal ground station locations for TDRS orbit determination by radio interferometry are presented, and the feasibility of GPS-based tracking for IRIDIUM, a LEO mobile satellite communications (MOBILSATCOM) system, is demonstrated.
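    ODAE's core idea is to perturb the measurements according to the user-specified error statistics, re-run the batch estimator, and collect statistics of the resulting state errors. A minimal sketch on a toy linear measurement model (the geometry matrix, noise level, and trial count are hypothetical stand-ins, not GTDS internals):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linearized measurement model y = H x + noise: x is a
# 2-element "state", H maps it to 4 observations (a stand-in for the
# range/range-rate geometry a real orbit determiner would linearize).
H = np.array([[1.0, 0.5], [0.3, 1.0], [1.0, -0.4], [0.6, 0.9]])
x_true = np.array([2.0, -1.0])
sigma = 0.05  # 1-sigma measurement noise

errors = []
for _ in range(2000):  # Monte Carlo trials
    y = H @ x_true + rng.normal(0.0, sigma, size=4)
    x_hat, *_ = np.linalg.lstsq(H, y, rcond=None)  # batch estimate
    errors.append(x_hat - x_true)
errors = np.array(errors)

# The empirical state-error covariance should match the analytic
# least-squares covariance sigma^2 (H^T H)^-1
emp_cov = np.cov(errors.T)
ana_cov = sigma**2 * np.linalg.inv(H.T @ H)
print(np.round(emp_cov, 6))
```

In the linear-Gaussian case the Monte Carlo covariance just reproduces the analytic one; the simulation approach earns its keep when biases, non-Gaussian errors, or nonlinear dynamics make the analytic covariance unavailable.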

  4. Assessing and improving the spatial accuracy in MEG source localization by depth-weighted minimum-norm estimates.

    PubMed

    Lin, Fa-Hsuan; Witzel, Thomas; Ahlfors, Seppo P; Stufflebeam, Steven M; Belliveau, John W; Hämäläinen, Matti S

    2006-05-15

    Cerebral currents responsible for the extra-cranially recorded magnetoencephalography (MEG) data can be estimated by applying a suitable source model. A popular choice is the distributed minimum-norm estimate (MNE) which minimizes the l2-norm of the estimated current. Under the l2-norm constraint, the current estimate is related to the measurements by a linear inverse operator. However, the MNE has a bias towards superficial sources, which can be reduced by applying depth weighting. We studied the effect of depth weighting in MNE using a shift metric. We assessed the localization performance of the depth-weighted MNE as well as depth-weighted noise-normalized MNE solutions under different cortical orientation constraints, source space densities, and signal-to-noise ratios (SNRs) in multiple subjects. We found that MNE with depth weighting parameter between 0.6 and 0.8 showed improved localization accuracy, reducing the mean displacement error from 12 mm to 7 mm. The noise-normalized MNE was insensitive to depth weighting. A similar investigation of EEG data indicated that depth weighting parameter between 2.0 and 5.0 resulted in an improved localization accuracy. The application of depth weighting to auditory and somatosensory experimental data illustrated the beneficial effect of depth weighting on the accuracy of spatiotemporal mapping of neuronal sources.
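    The depth-weighted MNE described above replaces the identity source prior with a diagonal matrix whose entries scale as the lead-field column norms raised to the power -2p. A toy numerical sketch (the sensor count, depths, and lead field are all hypothetical; with this random geometry, depth weighting typically raises the relative amplitude assigned to a deep source):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward problem: 10 sensors, 30 candidate sources, with deeper
# sources producing weaker sensor signals (all numbers hypothetical).
n_sens, n_src = 10, 30
depth = np.linspace(1.0, 3.0, n_src)
A = rng.normal(size=(n_sens, n_src)) / depth  # lead-field matrix

def mne_inverse(A, y, p=0.0, lam=1e-3):
    """Depth-weighted minimum-norm estimate.
    p = 0 gives the plain MNE; the study favors p in [0.6, 0.8]."""
    w = np.linalg.norm(A, axis=0) ** (-2.0 * p)   # per-source depth weight
    G = (A * w) @ A.T + lam * np.eye(A.shape[0])  # A R A^T + lam I
    return w * (A.T @ np.linalg.solve(G, y))      # R A^T G^-1 y

# Noise-free field of a single deep source; compare how much of the
# estimate lands on it with and without depth weighting
y = 5.0 * A[:, -1]
for p in (0.0, 0.7):
    x = mne_inverse(A, y, p=p)
    print(p, float(np.abs(x[-1]) / np.abs(x).max()))
```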

  5. Estimating Orientation Using Magnetic and Inertial Sensors and Different Sensor Fusion Approaches: Accuracy Assessment in Manual and Locomotion Tasks

    PubMed Central

    Bergamini, Elena; Ligorio, Gabriele; Summa, Aurora; Vannozzi, Giuseppe; Cappozzo, Aurelio; Sabatini, Angelo Maria

    2014-01-01

    Magnetic and inertial measurement units are an emerging technology to obtain 3D orientation of body segments in human movement analysis. In this respect, sensor fusion is used to limit the drift errors resulting from the gyroscope data integration by exploiting accelerometer and magnetic aiding sensors. The present study aims at investigating the effectiveness of sensor fusion methods under different experimental conditions. Manual and locomotion tasks, differing in time duration, measurement volume, presence/absence of static phases, and out-of-plane movements, were performed by six subjects, and recorded by one unit located on the forearm or the lower trunk, respectively. Two sensor fusion methods, representative of the stochastic (Extended Kalman Filter) and complementary (Non-linear observer) filtering, were selected, and their accuracy was assessed in terms of attitude (pitch and roll angles) and heading (yaw angle) errors using stereophotogrammetric data as a reference. The sensor fusion approaches provided significantly more accurate results than gyroscope data integration. Accuracy improved mostly for heading and when the movement exhibited stationary phases, evenly distributed 3D rotations, it occurred in a small volume, and its duration was greater than approximately 20 s. These results were independent from the specific sensor fusion method used. Practice guidelines for improving the outcome accuracy are provided. PMID:25302810
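    A complementary filter of the kind evaluated here blends high-pass-filtered gyroscope integration with low-pass-filtered accelerometer attitude. A minimal one-axis sketch (the bias, gain, and signals are hypothetical, and the study's non-linear observer is considerably more elaborate):

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """One-axis attitude fusion: integrate the gyro to track fast motion,
    and lean on the accelerometer-derived angle to cancel slow gyro drift."""
    angle = accel_angles[0]
    fused = []
    for rate, acc in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc
        fused.append(angle)
    return fused

# Hypothetical stationary segment: true angle 0.3 rad, gyro reads a pure
# 0.05 rad/s bias, accelerometer reads the true angle (noise-free for clarity).
n, dt, true_angle = 2000, 0.01, 0.3
gyro = [0.05] * n
accel = [true_angle] * n
fused = complementary_filter(gyro, accel, dt)
drifted = true_angle + sum(g * dt for g in gyro)  # raw gyro integration
print(round(fused[-1], 4), round(drifted, 4))
```

Over the 20 s segment the raw integration drifts by a full radian, while the fused estimate settles near the true angle: the drift-limiting behavior the abstract attributes to accelerometer/magnetometer aiding.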

  6. The diagnostic accuracy of pharmacological stress echocardiography for the assessment of coronary artery disease: a meta-analysis

    PubMed Central

    Picano, Eugenio; Molinaro, Sabrina; Pasanisi, Emilio

    2008-01-01

    Background Recent American Heart Association/American College of Cardiology guidelines state that "dobutamine stress echo has substantially higher sensitivity than vasodilator stress echo for detection of coronary artery stenosis" while the European Society of Cardiology guidelines and the European Association of Echocardiography recommendations conclude that "the two tests have very similar applications". Who is right? Aim To evaluate the diagnostic accuracy of dobutamine versus dipyridamole stress echocardiography through an evidence-based approach. Methods From a PubMed search, we identified all papers with coronary angiographic verification and head-to-head comparison of dobutamine stress echo (40 mcg/kg/min ± atropine) versus dipyridamole stress echo performed with state-of-the-art protocols (either 0.84 mg/kg in 10' plus atropine, or 0.84 mg/kg in 6' without atropine). A total of 5 papers were found. A pooled-weight meta-analysis was performed. Results The 5 analyzed papers recruited 435 patients, 299 with and 136 without angiographically assessed coronary artery disease (quantitatively assessed stenosis > 50%). Dipyridamole and dobutamine showed similar accuracy (87%, 95% confidence intervals, CI, 83–90, vs. 84%, CI, 80–88, p = 0.48), sensitivity (85%, CI 80–89, vs. 86%, CI 78–91, p = 0.81) and specificity (89%, CI 82–94 vs. 86%, CI 75–89, p = 0.15). Conclusion When state-of-the-art protocols are considered, dipyridamole and dobutamine stress echo have similar accuracy, specificity and, most importantly, sensitivity for detection of CAD. European recommendations concluding that "dobutamine and vasodilators (at appropriately high doses) are equally potent ischemic stressors for inducing wall motion abnormalities in presence of a critical coronary artery stenosis" are evidence-based. PMID:18565214
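    A pooled-weight meta-analysis of proportions can be sketched as an inverse-variance weighted average across studies (the per-study counts below are hypothetical, not the five papers' data, and real meta-analyses usually pool on a transformed, e.g. logit, scale):

```python
import math

def pooled_proportion(events_totals):
    """Fixed-effect (inverse-variance) pooled proportion with a 95% CI.
    Each element of `events_totals` is (events, total) for one study."""
    weights, estimates = [], []
    for events, total in events_totals:
        p = events / total
        var = p * (1 - p) / total     # binomial variance of the proportion
        weights.append(1 / var)       # inverse-variance study weight
        estimates.append(p)
    w_sum = sum(weights)
    pooled = sum(w * p for w, p in zip(weights, estimates)) / w_sum
    se = math.sqrt(1 / w_sum)
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-study sensitivity counts (true positives / diseased)
studies = [(52, 60), (70, 80), (45, 55), (80, 94), (9, 10)]
pooled, ci = pooled_proportion(studies)
print(round(pooled, 3), tuple(round(c, 3) for c in ci))
```

Larger studies (smaller variance) dominate the pooled estimate, which is how a handful of head-to-head comparisons can yield the narrow confidence intervals quoted in the abstract.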

  7. The influence of sampling interval on the accuracy of trail impact assessment

    USGS Publications Warehouse

    Leung, Y.-F.; Marion, J.L.

    1999-01-01

    Trail impact assessment and monitoring (IA&M) programs have been growing in importance and application in recreation resource management at protected areas. Census-based and sampling-based approaches have been developed in such programs, with systematic point sampling being the most common survey design. This paper examines the influence of sampling interval on the accuracy of estimates for selected trail impact problems. A complete census of four impact types on 70 trails in Great Smoky Mountains National Park was utilized as the base data set for the analyses. The census data were resampled at increasing intervals to create a series of simulated point data sets. Estimates of frequency of occurrence and lineal extent for the four impact types were compared with the census data set. The responses of accuracy loss on lineal extent estimates to increasing sampling intervals varied across different impact types, while the responses on frequency of occurrence estimates were consistent, approximating an inverse asymptotic curve. These findings suggest that systematic point sampling may be an appropriate method for estimating the lineal extent but not the frequency of trail impacts. Sample intervals of less than 100 m appear to yield an excellent level of accuracy for the four impact types evaluated. Multiple regression analysis results suggest that appropriate sampling intervals are more likely to be determined by the type of impact in question rather than the length of trail. The census-based trail survey and the resampling-simulation method developed in this study can be a valuable first step in establishing long-term trail IA&M programs, in which an optimal sampling interval range with acceptable accuracy is determined before investing efforts in data collection.
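    The resampling-simulation method can be sketched as follows: represent the census as a per-metre flag vector, subsample it at increasing intervals, and compare the resulting lineal-extent and frequency-of-occurrence estimates with the census values (all trail data below are synthetic, not the Great Smoky Mountains survey):

```python
import random

random.seed(3)

# Synthetic 10 km census at 1-m resolution: ~40 impact occurrences,
# each 5-30 m long (all values hypothetical).
trail = [0] * 10000
for start in sorted(random.sample(range(9900), 40)):
    for i in range(start, min(start + random.randint(5, 30), 10000)):
        trail[i] = 1

def occurrences(flags):
    """Count distinct occurrences, i.e. runs of consecutive flagged points."""
    return sum(1 for i, f in enumerate(flags)
               if f and (i == 0 or not flags[i - 1]))

true_extent, true_occ = sum(trail), occurrences(trail)
for interval in (10, 100, 500):  # simulated point-sampling intervals (m)
    sample = trail[::interval]
    est_extent = sum(sample) / len(sample) * len(trail)  # lineal extent (m)
    est_occ = occurrences(sample)  # occurrences still detectable
    print(interval, round(est_extent), est_occ)
print("census:", true_extent, true_occ)
```

Lineal extent stays roughly unbiased as the interval grows, while the detectable occurrence count collapses, which is the paper's central finding about systematic point sampling.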

  8. Assessing Sensor Accuracy for Non-Adjunct Use of Continuous Glucose Monitoring

    PubMed Central

    Patek, Stephen D.; Ortiz, Edward Andrew; Breton, Marc D.

    2015-01-01

    Abstract Background: The level of continuous glucose monitoring (CGM) accuracy needed for insulin dosing using sensor values (i.e., the level of accuracy permitting non-adjunct CGM use) is a topic of ongoing debate. Assessment of this level in clinical experiments is virtually impossible because the magnitude of CGM errors cannot be manipulated and related prospectively to clinical outcomes. Materials and Methods: A combination of archival data (parallel CGM, insulin pump, self-monitoring of blood glucose [SMBG] records, and meals for 56 pump users with type 1 diabetes) and in silico experiments was used to “replay” real-life treatment scenarios and relate sensor error to glycemic outcomes. Nominal blood glucose (BG) traces were extracted using a mathematical model, yielding 2,082 BG segments each initiated by insulin bolus and confirmed by SMBG. These segments were replayed at seven sensor accuracy levels (mean absolute relative differences [MARDs] of 3–22%) testing six scenarios: insulin dosing using sensor values, threshold, and predictive alarms, each without or with considering CGM trend arrows. Results: In all six scenarios, the occurrence of hypoglycemia (frequency of BG levels ≤50 mg/dL and BG levels ≤39 mg/dL) increased with sensor error, displaying an abrupt slope change at MARD = 10%. Similarly, hyperglycemia (frequency of BG levels ≥250 mg/dL and BG levels ≥400 mg/dL) increased and displayed an abrupt slope change at MARD = 10%. When added to insulin dosing decisions, information from CGM trend arrows, threshold, and predictive alarms resulted in improvement in average glycemia by 1.86, 8.17, and 8.88 mg/dL, respectively. Conclusions: Using CGM for insulin dosing decisions is feasible below a certain level of sensor error, estimated in silico at MARD = 10%. In our experiments, further accuracy improvement did not contribute substantively to better glycemic outcomes. PMID:25436913
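    MARD, the accuracy metric varied in these experiments, is simply the mean of |sensor - reference| / reference over paired readings, expressed as a percentage. A minimal sketch with hypothetical paired readings:

```python
def mard(sensor, reference):
    """Mean absolute relative difference (%) of CGM readings vs. reference BG."""
    assert len(sensor) == len(reference)
    return 100.0 * sum(abs(s - r) / r
                       for s, r in zip(sensor, reference)) / len(sensor)

# Hypothetical paired readings (mg/dL): sensor vs. fingerstick reference
sensor = [110, 95, 160, 70, 210]
reference = [100, 100, 150, 80, 200]
print(round(mard(sensor, reference), 2))  # prints 7.83
```

A sensor with this MARD would sit below the 10% threshold the study identifies as the point where dosing outcomes begin to degrade.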

  9. Accuracy and performance of the state-based Φ and liveliness measures of information integration.

    PubMed

    Gamez, David; Aleksander, Igor

    2011-12-01

    A number of people have suggested that there is a link between information integration and consciousness, and a number of algorithms for calculating information integration have been put forward. The most recent of these is Balduzzi and Tononi's state-based Φ algorithm, which has factorial dependencies that severely limit the number of neurons that can be analyzed. To address this issue an alternative state-based measure known as liveliness has been developed, which uses the causal relationships between neurons to identify the areas of maximum information integration. This paper outlines the state-based Φ and liveliness algorithms and sets out a number of test networks that were used to compare their accuracy and performance. The results show that liveliness is a reasonable approximation to state-based Φ for some network topologies, and it has a much more scalable performance than state-based Φ.

  10. Accuracy of teacher assessments of second-language students at risk for reading disability.

    PubMed

    Limbos, M M; Geva, E

    2001-01-01

    This study examined the accuracy of teacher assessments in screening for reading disabilities among students of English as a second language (ESL) and as a first language (L1). Academic and oral language tests were administered to 369 children (249 ESL, 120 L1) at the beginning of Grade 1 and at the end of Grade 2. Concurrently, 51 teachers nominated children at risk for reading failure and completed rating scales assessing academic and oral language skills. Scholastic records were reviewed for notation of concern or referral. The criterion measure was a standardized reading score based on phonological awareness, rapid naming, and word recognition. Results indicated that teacher rating scales and nominations had low sensitivity in identifying ESL and L1 students at risk for reading disability at the 1-year mark. Relative to other forms of screening, teacher-expressed concern had lower sensitivity. Finally, oral language proficiency contributed to misclassifications in the ESL group.

  11. Accuracy of knee range of motion assessment after total knee arthroplasty.

    PubMed

    Lavernia, Carlos; D'Apuzzo, Michele; Rossi, Mark D; Lee, David

    2008-09-01

    Measurement of knee joint range of motion (ROM) is an important assessment after total knee arthroplasty. Our objective was to determine the level of agreement and accuracy between observers with different levels of training in measuring total ROM after total knee arthroplasty. Forty-one patients underwent x-ray of active and passive knee ROM (gold standard). Five different raters evaluated observed and measured ROM: orthopedic surgeon, clinical fellow, physician assistant, research fellow, and a physical therapist. A 1-way analysis of variance was used to determine differences in ROM between raters over both conditions. The limit of agreement for each rater for both active and passive total ROM under both conditions was calculated. Analysis of variance indicated a difference between raters for all conditions (range, P = .004 to P ≤ .0001). The trend for all raters was to overestimate ROM at higher ranges. Assessment of ROM through direct observation without a goniometer provides inaccurate findings.

  12. Geometric calibration and accuracy assessment of a multispectral imager on UAVs

    NASA Astrophysics Data System (ADS)

    Zheng, Fengjie; Yu, Tao; Chen, Xingfeng; Chen, Jiping; Yuan, Guoti

    2012-11-01

    The increasing development of Unmanned Aerial Vehicle (UAV) platforms and associated sensing technologies has widely promoted UAV remote sensing applications. UAVs, especially low-cost UAVs, limit the sensor payload in weight and dimension. Cameras on UAVs are mostly panoramic or fisheye-lens, small-format CCD planar array cameras; unknown intrinsic parameters and lens optical distortion cause serious image aberrations, leading to ground errors of a few meters, or even tens of meters, per pixel. At the same time, the high spatial resolution makes accurate geolocation all the more critical for UAV quantitative remote sensing research. A geometric calibration method for the MCC4-12F multispectral imager, designed to be carried on UAVs, has been developed and implemented. A multi-image space resection algorithm was used to estimate geometric calibration parameters from random positions and different photogrammetric altitudes in a 3D test field, an approach suitable for multispectral cameras. Both theoretical and practical accuracy assessments were performed. The theoretical strategy, which resolves object-space and image-point coordinate differences by space intersection, showed object-space RMSEs of 0.2 and 0.14 pixels in the X and Y directions, and image-space RMSEs better than 0.5 pixels. To verify the accuracy and reliability of the calibration parameters, a practical study was carried out in UAV flight experiments in Tianjin; the corrected accuracy, validated by ground checkpoints, was better than 0.3 m. Typical surface reflectance retrieved from the geo-rectified data was compared with ground ASD measurements, resulting in a 4% discrepancy. Hence, the approach presented here is suitable for UAV multispectral imagers.

  13. Analyses of odontometric sexual dimorphism and sex assessment accuracy on a large sample.

    PubMed

    Angadi, Punnya V; Hemani, S; Prabhu, Sudeendra; Acharya, Ashith B

    2013-08-01

    Correct sex assessment of skeletonized human remains allows investigators to undertake a more focused search of missing persons' files to establish identity. Univariate and multivariate odontometric sex assessment has been explored in recent years on small sample sizes and have not used a test sample. Consequently, inconsistent results have been produced in terms of accuracy of sex allocation. This paper has derived data from a large sample of males and females, and applied logistic regression formulae on a test sample. Using a digital caliper, buccolingual and mesiodistal dimensions of all permanent teeth (except third molars) were measured on 600 dental casts (306 females, 294 males) of young adults (18-32 years), and the data subjected to univariate (independent samples' t-test) and multivariate statistics (stepwise logistic regression analysis, or LRA). The analyses revealed that canines were the most sexually dimorphic teeth followed by molars. All tooth variables were larger in males, with 51/56 (91.1%) being statistically larger (p < 0.05). When the stepwise LRA formulae were applied to a test sample of 69 subjects (40 females, 29 males) of the same age range, allocation accuracy of 68.1% for the maxillary teeth, 73.9% for the mandibular teeth, and 71% for teeth of both jaws combined, were obtained. The high univariate sexual dimorphism observed herein contrasts with some reports of low, and sometimes reverse, sexual dimorphism (the phenomenon of female tooth dimensions being larger than males'); the LRA results, too, are in contradiction to a previous report of virtually 100% sex allocation for a small heterogeneous sample. These reflect the importance of using a large sample to quantify sexual dimorphism in tooth dimensions and the application of the derived formulae on a test dataset to ascertain accuracy which, at best, is moderate in nature.

  14. Accuracy of subjective assessment of fever by Nigerian mothers in under-5 children

    PubMed Central

    Odinaka, Kelechi Kenneth; Edelu, Benedict O.; Nwolisa, Emeka Charles; Amamilo, Ifeyinwa B.; Okolo, Seline N.

    2014-01-01

    Background: Many mothers still rely on palpation to determine whether their children have fever at home before deciding to seek medical attention or administer self-medication. This study was carried out to determine the accuracy of subjective assessment of fever by Nigerian mothers in under-5 children. Patients and Methods: Each eligible child had a tactile assessment of fever by the mother, after which the axillary temperature was measured. Statistical analysis was done using SPSS version 19 (IBM Inc., Chicago, Illinois, USA, 2010). Results: A total of 113 mother/child pairs participated in the study. Palpation overestimated fever by 24.6%. Irrespective of the surface of the hand used for palpation, the sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of tactile assessment were 82.4%, 37.1%, 51.9%, and 71.9%, respectively. The use of the palmar surface of the hand had a better sensitivity (95.2%) than the dorsum of the hand (69.2%), and the use of multiple sites had better sensitivity (86.7%) than a single site (76.2%). Conclusion: Tactile assessment of childhood fevers by mothers remains a relevant screening tool for the presence or absence of fever. Palpation with the palmar surface of the hand at multiple sites improves the reliability of tactile assessment of fever. PMID:25114371
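    The four screening metrics reported follow from the standard 2x2 table of palpation result against measured fever. A sketch using hypothetical cell counts chosen to be consistent with the reported percentages (the abstract itself gives no per-cell counts):

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard 2x2 screening-test metrics, each as a percentage."""
    return {
        "sensitivity": 100 * tp / (tp + fn),  # fevers correctly felt
        "specificity": 100 * tn / (tn + fp),  # non-fevers correctly ruled out
        "ppv": 100 * tp / (tp + fp),          # felt-hot children truly febrile
        "npv": 100 * tn / (tn + fn),          # felt-cool children truly afebrile
    }

# Hypothetical counts for the 113 mother/child pairs (42+39+9+23 = 113),
# reverse-engineered to match the reported 82.4/37.1/51.9/71.9 figures
m = screening_metrics(tp=42, fp=39, fn=9, tn=23)
print({k: round(v, 1) for k, v in m.items()})
```

The low specificity with high sensitivity is exactly the profile of a screening tool: palpation rarely misses fever but frequently calls fever where there is none.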

  15. Evaluation of the accuracy of land-use based ecosystem service assessments for different thematic resolutions.

    PubMed

    Van der Biest, K; Vrebos, D; Staes, J; Boerema, A; Bodí, M B; Fransen, E; Meire, P

    2015-06-01

    The demand for pragmatic tools for mapping ecosystem services (ES) has led to the widespread application of land-use based proxy methods, mostly using coarse thematic resolution classification systems. Although various studies have demonstrated the limited reliability of land use as an indicator of service delivery, this does not prevent the method from being frequently applied on different institutional levels. It has recently been argued that a more detailed land use classification system may increase the accuracy of this approach. This research statistically compares maps of predicted ES delivery based on land use scoring for three different thematic resolutions (number of classes) with maps of ES delivery produced by biophysical models. Our results demonstrate that using a more detailed land use classification system does not significantly increase the accuracy of land-use based ES assessments for the majority of the considered ES. Correlations between land-use based assessments and biophysical model outcomes are relatively strong for provisioning services, independent of the classification system. However, large discrepancies occur frequently between the score and the model-based estimate. We conclude that land use, as a simple indicator, is not effective enough to be used in environmental management as it cannot capture differences in abiotic conditions and ecological processes that explain differences in service delivery. Using land use as a simple indicator will therefore result in inappropriate management decisions, even if a highly detailed land use classification system is used.

  16. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions.

    PubMed

    Noirhomme, Quentin; Lesenfants, Damien; Gomez, Francisco; Soddu, Andrea; Schrouff, Jessica; Garraux, Gaëtan; Luxen, André; Phillips, Christophe; Laureys, Steven

    2014-01-01

    Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Its results are often assessed through either a binomial or a permutation test. Here, we simulated classification results on generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution; the binomial test is therefore ill-suited. By contrast, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real data from a brain-computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing can lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation.
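    The recommended permutation test can be sketched as follows: hold the cross-validated predictions fixed, repeatedly shuffle the labels, and take the p-value as the fraction of shuffles whose accuracy reaches the observed one (the labels and predictions below are hypothetical; in practice the full cross-validation would be re-run per permutation):

```python
import random

random.seed(7)

def permutation_pvalue(labels, predictions, n_perm=5000):
    """P-value for the observed accuracy by label permutation. Unlike the
    binomial test, this makes no independence assumption about the
    per-fold predictions produced by cross-validation."""
    n = len(labels)
    observed = sum(l == p for l, p in zip(labels, predictions)) / n
    count = 0
    for _ in range(n_perm):
        shuffled = labels[:]
        random.shuffle(shuffled)
        acc = sum(l == p for l, p in zip(shuffled, predictions)) / n
        if acc >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)  # add-one to avoid p = 0

# Hypothetical binary labels and classifier predictions for 16 subjects
labels = [0, 1] * 8
preds = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1]
print(permutation_pvalue(labels, preds))  # observed accuracy is 14/16
```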

  17. Self Assessment in Schizophrenia: Accuracy of Evaluation of Cognition and Everyday Functioning

    PubMed Central

    Gould, Felicia; McGuire, Laura Stone; Durand, Dante; Sabbag, Samir; Larrauri, Carlos; Patterson, Thomas L.; Twamley, Elizabeth W.; Harvey, Philip D.

    2015-01-01

    Objective Self-assessment deficits, often referred to as impaired insight or unawareness of illness, are well established in people with schizophrenia. There are multiple levels of awareness, including awareness of symptoms, functional deficits, cognitive impairments, and the ability to monitor cognitive and functional performance in an ongoing manner. The present study aimed to evaluate the comparative predictive value of each aspect of awareness on the levels of everyday functioning in people with schizophrenia. Method We examined multiple aspects of self-assessment of functioning in 214 people with schizophrenia. We also collected information on everyday functioning rated by high contact clinicians and examined the importance of self-assessment for the prediction of real world functional outcomes. The relative impact of performance based measures of cognition, functional capacity, and metacognitive performance on everyday functioning was also examined. Results Misestimation of ability emerged as the strongest predictor of real world functioning and exceeded the influences of cognitive performance, functional capacity performance, and performance-based assessment of metacognitive monitoring. The relative contribution of the factors other than self-assessment varied according to which domain of everyday functioning was being examined, but in all cases, accounted for less predictive variance. Conclusions These results underscore the functional impact of misestimating one’s current functioning and relative level of ability. These findings are consistent with the use of insight-focused treatments and compensatory strategies designed to increase self-awareness in multiple functional domains. PMID:25643212

  18. Dimensions of L2 Performance and Proficiency: Complexity, Accuracy and Fluency in SLA. Language Learning & Language Teaching. Volume 32

    ERIC Educational Resources Information Center

    Housen, Alex, Ed.; Kuiken, Folkert, Ed.; Vedder, Ineke, Ed.

    2012-01-01

    Research into complexity, accuracy and fluency (CAF) as basic dimensions of second language performance, proficiency and development has received increased attention in SLA. However, the larger picture in this field of research is often obscured by the breadth of scope, multiple objectives and lack of clarity as to how complexity, accuracy and…

  19. Assessing Accuracy of Exchange-Correlation Functionals for the Description of Atomic Excited States

    NASA Astrophysics Data System (ADS)

    Makowski, Marcin; Hanas, Martyna

    2016-09-01

    The performance of exchange-correlation functionals for the description of atomic excitations is investigated. A benchmark set of excited states is constructed, and experimental data are compared to Time-Dependent Density Functional Theory (TDDFT) calculations. The benchmark results show that good accuracy may be achieved for the selected group of functionals, and that the quality of the predictions is competitive with computationally more demanding coupled-cluster approaches. Apart from testing the standard TDDFT approaches, the roles of the self-interaction error that plagues DFT calculations and of the adiabatic approximation to the exchange-correlation kernels are also examined.

  20. Assessment of arterial stenosis in a flow model with power Doppler angiography: accuracy and observations on blood echogenicity.

    PubMed

    Cloutier, G; Qin, Z; Garcia, D; Soulez, G; Oliva, V; Durand, L G

    2000-11-01

    The objective of the project was to study the influence of various hemodynamic and rheologic factors on the accuracy of 3-D power Doppler angiography (PDA) for quantifying the percentage of area reduction of a stenotic artery along its longitudinal axis. The study was performed with a 3-D power Doppler ultrasound (US) imaging system and an in vitro mock flow model containing a simulated artery with a stenosis of 80% area reduction. Measurements were performed under steady and pulsatile flow conditions by circulating, at different flow rates, four types of fluid (porcine whole blood, porcine whole blood with a US contrast agent, porcine blood cell suspension and porcine blood cell suspension with a US contrast agent). A total of 120 measurements were performed. Computational simulations of the fluid dynamics in the vicinity of the axisymmetrical stenosis were performed with finite-element modeling (FEM) to locate and identify the PDA signal loss due to the wall filter of the US instrument. The performance of three segmentation algorithms used to delineate the vessel lumen on the PDA images was assessed and compared. It is shown that the type of fluid flowing in the phantom affects the echoicity of PDA images and the accuracy of the segmentation algorithms. The type of flow (steady or pulsatile) and the flow rate can also influence the PDA image accuracy, whereas the use of US contrast agent has no significant effect. For the conditions that would correspond to a US scan of a common femoral artery (whole blood flowing at a mean pulsatile flow rate of 450 mL min(-1)), the errors in the percentages of area reduction were 4.3 ± 1.2% before the stenosis, -2.0 ± 1.0% in the stenosis, 11.5 ± 3.1% in the recirculation zone, and 2.8 ± 1.7% after the stenosis, respectively. Based on the simulated blood flow patterns obtained with FEM, the lower accuracy in the recirculation zone can be attributed to the effect of the wall filter that removes low flow velocities. In

  1. A PRIOR EVALUATION OF TWO-STAGE CLUSTER SAMPLING FOR ACCURACY ASSESSMENT OF LARGE-AREA LAND-COVER MAPS

    EPA Science Inventory

    Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, withi...

  2. Assessment of the accuracy of ABC/2 variations in traumatic epidural hematoma volume estimation: a retrospective study

    PubMed Central

    Hu, Tingting; Zhang, Zhen

    2016-01-01

    Background. The traumatic epidural hematoma (tEDH) volume is often used to assist in tEDH treatment planning and outcome prediction. ABC/2 is a well-accepted volume estimation method that can be used for tEDH volume estimation. Previous studies have proposed different variations of ABC/2; however, it is unclear which variation provides higher accuracy. Given the promising clinical contribution of accurate tEDH volume estimations, we sought to assess the accuracy of several ABC/2 variations in tEDH volume estimation. Methods. The study group comprised 53 patients with tEDH who had undergone non-contrast head computed tomography scans. For each patient, the tEDH volume was automatically estimated by eight ABC/2 variations (four traditional and four newly derived) with an in-house program, and results were compared to those from manual planimetry. Linear regression, the closest value, percentage deviation, and Bland-Altman plots were adopted to comprehensively assess accuracy. Results. Among all ABC/2 variations assessed, the traditional variations y = 0.5 × A1B1C1 (or A2B2C1) and the newly derived variations y = 0.65 × A1B1C1 (or A2B2C1) achieved higher accuracy than the other variations. No significant differences were observed between the estimated volume values generated by these variations and those of planimetry (p > 0.05). Comparatively, the former performed better than the latter in general, with smaller mean percentage deviations (7.28 ± 5.90% and 6.42 ± 5.74% versus 19.12 ± 6.33% and 21.28 ± 6.80%, respectively) and more values closest to planimetry (18/53 and 18/53 versus 2/53 and 0/53, respectively). Moreover, deviations of most cases in the former fell within the range of <10% (71.70% and 84.91%, respectively), whereas deviations of most cases in the latter were in the range of 10–20% and >20% (90.57% and 96.23%, respectively). Discussion. In the current study, we adopted an automatic approach to assess the accuracy of several ABC/2 variations.
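    The ABC/2 family of estimators models the hematoma as an ellipsoid: a coefficient (0.5 in the traditional variants, 0.65 in the derived ones) times the product of the three measured diameters. A minimal sketch with hypothetical measurements:

```python
def abc2(a, b, c, coefficient=0.5):
    """Ellipsoid-based hematoma volume estimate (the ABC/2 family).
    a, b: largest perpendicular hemorrhage diameters on the index CT
    slice (cm); c: hemorrhage depth, slice count times slice thickness (cm).
    coefficient 0.5 is the traditional variant, 0.65 a derived one."""
    return coefficient * a * b * c

# Hypothetical epidural hematoma: 4.0 x 2.5 cm on the index slice, 3.0 cm deep
print(abc2(4.0, 2.5, 3.0))                    # traditional ABC/2, in mL
print(abc2(4.0, 2.5, 3.0, coefficient=0.65))  # derived 0.65 x ABC variant
```

The spread between the two outputs (15.0 vs. 19.5 mL here) is the kind of systematic difference the study quantifies against planimetry.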

  3. Telerobot control mode performance assessment

    NASA Technical Reports Server (NTRS)

    Zimmerman, Wayne; Backes, Paul; Chirikjian, Greg

    1992-01-01

    With the maturation of various developing robot control schemes, it is becoming extremely important that the technical community evaluate the performance of these control technologies against an established baseline to determine which technology provides the most reliable, robust, and safe on-orbit robot control. The Supervisory Telerobotics Laboratory (STELER) at JPL has developed a unique robot control capability which has been evaluated by the NASA technical community and found useful for augmenting both the operator interface and control of intended robotic systems on board the Space Station. As part of the technology development and prototyping effort, the STELER team has been evaluating the performance of different control modes: namely, teleoperation under position or rate control, teleoperation with force reflection, and shared control. Nine trained subjects were employed in the performance evaluation, which involved several high-fidelity servicing tasks. Four types of operator performance data were collected: task completion time, average force, peak force, and number of operator successes and errors. This paper summarizes the results of this performance evaluation.

  4. ACSB: A minimum performance assessment

    NASA Technical Reports Server (NTRS)

    Jones, Lloyd Thomas; Kissick, William A.

    1988-01-01

    Amplitude companded sideband (ACSB) is a new modulation technique which uses a much smaller channel width than does conventional frequency modulation (FM). Among the requirements of a mobile communications system is adequate speech intelligibility. This paper explores this aspect of minimum required performance. First, the basic principles of ACSB are described, with emphasis on those features that affect speech quality. Second, the appropriate performance measures for ACSB are reviewed. Third, a subjective voice quality scoring method is used to determine the values of the performance measures that equate to the minimum level of intelligibility. It is assumed that the intelligibility of an FM system operating at 12 dB SINAD represents that minimum. It was determined that ACSB operating at 12 dB SINAD with an audio-to-pilot ratio of 10 dB provides approximately the same intelligibility as FM operating at 12 dB SINAD.

  5. Assessment of the sources of error affecting the quantitative accuracy of SPECT imaging in small animals

    SciTech Connect

    Hwang, Andrew B.; Franc, Benjamin L.; Gullberg, Grant T.; Hasegawa, Bruce H.

    2008-02-15

    Small animal SPECT imaging systems have multiple potential applications in biomedical research. Whereas SPECT data are commonly interpreted qualitatively in a clinical setting, the ability to accurately quantify measurements will increase the utility of the SPECT data for laboratory measurements involving small animals. In this work, we assess the effect of photon attenuation, scatter and partial volume errors on the quantitative accuracy of small animal SPECT measurements, first with Monte Carlo simulation and then confirmed with experimental measurements. The simulations modeled the imaging geometry of a commercially available small animal SPECT system. We simulated the imaging of a radioactive source within a cylinder of water, and reconstructed the projection data using iterative reconstruction algorithms. The size of the source and the size of the surrounding cylinder were varied to evaluate the effects of photon attenuation and scatter on quantitative accuracy. We found that photon attenuation can reduce the measured concentration of radioactivity in a volume of interest in the center of a rat-sized cylinder of water by up to 50% when imaging with iodine-125, and up to 25% when imaging with technetium-99m. When imaging with iodine-125, the scatter-to-primary ratio can reach up to approximately 30%, and can cause overestimation of the radioactivity concentration when reconstructing data with attenuation correction. We varied the size of the source to evaluate partial volume errors, which we found to be a strong function of the size of the volume of interest and the spatial resolution. These errors can result in large (>50%) changes in the measured amount of radioactivity. The simulation results were compared with and found to agree with experimental measurements. The inclusion of attenuation correction in the reconstruction algorithm improved quantitative accuracy. We also found that an improvement of the spatial resolution through the

  6. Assessment of the sources of error affecting the quantitative accuracy of SPECT imaging in small animals

    NASA Astrophysics Data System (ADS)

    Hwang, Andrew B.; Franc, Benjamin L.; Gullberg, Grant T.; Hasegawa, Bruce H.

    2008-05-01

    Small animal SPECT imaging systems have multiple potential applications in biomedical research. Whereas SPECT data are commonly interpreted qualitatively in a clinical setting, the ability to accurately quantify measurements will increase the utility of the SPECT data for laboratory measurements involving small animals. In this work, we assess the effect of photon attenuation, scatter and partial volume errors on the quantitative accuracy of small animal SPECT measurements, first with Monte Carlo simulation and then confirmed with experimental measurements. The simulations modeled the imaging geometry of a commercially available small animal SPECT system. We simulated the imaging of a radioactive source within a cylinder of water, and reconstructed the projection data using iterative reconstruction algorithms. The size of the source and the size of the surrounding cylinder were varied to evaluate the effects of photon attenuation and scatter on quantitative accuracy. We found that photon attenuation can reduce the measured concentration of radioactivity in a volume of interest in the center of a rat-sized cylinder of water by up to 50% when imaging with iodine-125, and up to 25% when imaging with technetium-99m. When imaging with iodine-125, the scatter-to-primary ratio can reach up to approximately 30%, and can cause overestimation of the radioactivity concentration when reconstructing data with attenuation correction. We varied the size of the source to evaluate partial volume errors, which we found to be a strong function of the size of the volume of interest and the spatial resolution. These errors can result in large (>50%) changes in the measured amount of radioactivity. The simulation results were compared with and found to agree with experimental measurements. The inclusion of attenuation correction in the reconstruction algorithm improved quantitative accuracy. We also found that an improvement of the spatial resolution through the use of resolution
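The strong isotope dependence of the attenuation loss reported above follows from the Beer-Lambert law. A sketch under assumed, rounded attenuation coefficients for water (the exact values depend on beam conditions and are not taken from the study):

```python
import math

def surviving_fraction(mu_cm, depth_cm):
    # Beer-Lambert law: fraction of primary photons that traverse
    # depth_cm of water without interaction, I/I0 = exp(-mu * x).
    return math.exp(-mu_cm * depth_cm)

# Approximate linear attenuation coefficients of water (assumed values):
MU_TC99M_WATER = 0.15  # cm^-1 at ~140 keV (technetium-99m)
MU_I125_WATER = 0.44   # cm^-1 at ~28 keV (iodine-125)

# For a source at the centre of a rat-sized water cylinder (roughly
# 2 cm of water to the surface), the low-energy iodine-125 photons
# are attenuated far more strongly than technetium-99m photons:
loss_i125 = 1.0 - surviving_fraction(MU_I125_WATER, 2.0)
loss_tc99m = 1.0 - surviving_fraction(MU_TC99M_WATER, 2.0)
```

This simple exponential model ignores scatter and collimator geometry, which is why the full Monte Carlo simulation in the study is needed for quantitative numbers.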

  7. Assessing effects of the e-Chasqui laboratory information system on accuracy and timeliness of bacteriology results in the Peruvian tuberculosis program.

    PubMed

    Blaya, Joaquin A; Shin, Sonya S; Yagui, Martin J A; Yale, Gloria; Suarez, Carmen; Asencios, Luis; Fraser, Hamish

    2007-10-11

    We created a web-based laboratory information system, e-Chasqui, to connect public laboratories to health centers in order to improve communication and analysis. After one year, we performed a pre- and post-implementation assessment of communication delays and found that e-Chasqui maintained the average delay but eliminated delays of over 60 days. Adding digital verification maintained the average delay but should increase accuracy. We are currently performing a randomized evaluation of the impacts of e-Chasqui.

  8. The Effects of Performance-Based Assessment Criteria on Student Performance and Self-Assessment Skills

    ERIC Educational Resources Information Center

    Fastre, Greet Mia Jos; van der Klink, Marcel R.; van Merrienboer, Jeroen J. G.

    2010-01-01

    This study investigated the effect of performance-based versus competence-based assessment criteria on task performance and self-assessment skills among 39 novice secondary vocational education students in the domain of nursing and care. In a performance-based assessment group students are provided with a preset list of performance-based…

  9. Accuracy assessment of Kinect for Xbox One in point-based tracking applications

    NASA Astrophysics Data System (ADS)

    Goral, Adrian; Skalski, Andrzej

    2015-12-01

    We present the accuracy assessment of a point-based tracking system built on Kinect v2. In our approach, color, IR and depth data were used to determine the positions of spherical markers. To accomplish this task, we calibrated the depth/infrared and color cameras using a custom method. As a reference tool we used Polaris Spectra optical tracking system. The mean error obtained within the range from 0.9 to 2.9 m was 61.6 mm. Although the depth component of the error turned out to be the largest, the random error of depth estimation was only 1.24 mm on average. Our Kinect-based system also allowed for reliable angular measurements within the range of ±20° from the sensor's optical axis.
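The headline figure above (mean error against an optical reference) is simply the average Euclidean distance between paired marker positions. A sketch with hypothetical coordinates, not measurements from the study:

```python
import math

def mean_position_error(measured, reference):
    """Mean Euclidean distance between marker positions reported by the
    tracked system and those of a reference tracker (same units)."""
    dists = [math.dist(m, r) for m, r in zip(measured, reference)]
    return sum(dists) / len(dists)

# Hypothetical marker positions in mm (x, y, z):
kinect = [(0.0, 0.0, 0.0), (100.0, 0.0, 0.0)]
polaris = [(0.0, 0.0, 3.0), (100.0, 4.0, 0.0)]
err = mean_position_error(kinect, polaris)  # (3 + 4) / 2 = 3.5 mm
```

Decomposing each distance into per-axis components, as the study does, would reveal which axis (here, depth) dominates the error.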

  10. Accuracy of actuarial procedures for assessment of sexual offender recidivism risk may vary across ethnicity.

    PubMed

    Långström, Niklas

    2004-04-01

    Little is known about whether the accuracy of tools for assessment of sexual offender recidivism risk holds across ethnic minority offenders. I investigated the predictive validity across ethnicity for the RRASOR and the Static-99 actuarial risk assessment procedures in a national cohort of all adult male sex offenders released from prison in Sweden 1993-1997. Subjects ordered out of Sweden upon release from prison were excluded and remaining subjects (N = 1303) divided into three subgroups based on citizenship. Eighty-three percent of the subjects were of Nordic ethnicity, and non-Nordic citizens were either of non-Nordic European (n = 49, hereafter called European) or African-Asian descent (n = 128). The two tools were equally accurate among Nordic and European sexual offenders for the prediction of any sexual and any violent nonsexual recidivism. In contrast, neither measure could differentiate African-Asian sexual or violent recidivists from nonrecidivists. Compared to European offenders, African-Asian offenders had more often sexually victimized a nonrelative or stranger, had higher Static-99 scores, were younger, more often single, and more often homeless. The results require replication, but suggest that the promising predictive validity seen with some risk assessment tools may not generalize across offender ethnicity or migration status. More speculatively, different risk factors or causal chains might be involved in the development or persistence of offending among minority or immigrant sexual abusers.

  11. Accuracy Assessment of GPS Buoy Sea Level Measurements for Coastal Applications

    NASA Astrophysics Data System (ADS)

    Chiu, S.; Cheng, K.

    2008-12-01

    The GPS buoy in this study contains a geodetic antenna and a compact floater, with the GPS receiver and power supply tethered to a boat. Coastal applications of GPS include monitoring of sea level and its change, calibration of satellite altimeters, modeling of hydrological or geophysical parameters, seafloor geodesy, and others. For these applications, understanding the overall data or model quality requires knowledge of the position accuracy of GPS buoys or GPS-equipped vessels. Newer GPS data processing techniques, e.g., Precise Point Positioning (PPP) and virtual reference stations (VRS), require a priori information obtained from a regional GPS network. While the required a priori information can be implemented on land, it may not be available at sea. Hence, in this study, the GPS buoy was positioned with respect to an onshore GPS reference station using the traditional double-difference technique. Since the atmosphere starts to decorrelate as the baseline (the distance between the buoy and the reference station) increases, the positioning accuracy consequently decreases. Therefore, this study aims to assess the buoy position accuracy as the baseline increases, in order to quantify the accuracy limit of sea level measured by the GPS buoy. A GPS buoy campaign was conducted by National Chung Cheng University in An Ping, Taiwan, with an 8-hour GPS buoy data collection. In addition, a GPS network containing 4 Continuous GPS (CGPS) stations in Taiwan was established with the goal of providing baselines of different lengths for buoy data processing. A vector relation from the network was utilized to find the correct ambiguities, which were applied to the long-baseline solution to eliminate the position error caused by incorrect ambiguities. After this procedure, a 3.6-cm discrepancy was found in the mean sea level solution between the long (~80 km) and the short (~1.5 km) baselines.
The discrepancy between a

  12. Diagnostic accuracy of refractometer and Brix refractometer to assess failure of passive transfer in calves: protocol for a systematic review and meta-analysis.

    PubMed

    Buczinski, S; Fecteau, G; Chigerwe, M; Vandeweerd, J M

    2016-06-01

    Calves are born agammaglobulinemic and are therefore highly dependent on colostrum (and antibody) intake. The transfer of passive immunity in calves can be assessed directly by measuring immunoglobulin G (IgG) or indirectly by refractometry or Brix refractometry; the latter are easier to perform routinely in the field. This paper presents a protocol for a systematic review and meta-analysis to assess the diagnostic accuracy of refractometry or Brix refractometry versus IgG measurement as the reference standard test. With this review protocol we aim to report refractometer and Brix refractometer accuracy in terms of sensitivity and specificity, as well as to quantify the impact of any study characteristic on test accuracy.
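The sensitivity and specificity that the planned review will pool come from 2x2 tables of index-test results against the IgG reference standard. A sketch with hypothetical counts (the cut-offs and numbers below are illustrative, not from any included study):

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2x2 table comparing an index
    test (e.g. Brix refractometry) against a reference standard (e.g.
    a serum IgG cut-off defining failure of passive transfer)."""
    sensitivity = tp / (tp + fn)  # proportion of true failures detected
    specificity = tn / (tn + fp)  # proportion of non-failures cleared
    return sensitivity, specificity

# Hypothetical counts: 50 calves with failure of passive transfer,
# 100 without, classified by a refractometer cut-off.
sens, spec = diagnostic_accuracy(tp=45, fp=8, fn=5, tn=92)
# sens = 0.90, spec = 0.92
```

A meta-analysis would then combine such pairs across studies, typically with a bivariate model that respects the sensitivity-specificity trade-off.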

  13. A Method for Assessing Ground-Truth Accuracy of the 5DCT Technique

    PubMed Central

    Dou, T. H.; Thomas, D. H.; O'Connell, D.; Lamb, J.M.; Lee, P.; Low, D.A.

    2015-01-01

    Purpose To develop a technique that assesses the accuracy of the breathing phase-specific volume image generation process by a patient-specific breathing motion model, using the original free-breathing CT scans as ground truths. Methods 16 lung cancer patients underwent a previously published protocol in which 25 free-breathing fast helical CT scans were acquired with a simultaneous breathing surrogate. A patient-specific motion model was constructed based on the tissue displacements determined by a state-of-the-art deformable image registration. The first image was arbitrarily selected as the reference image. The motion model was used, along with the free-breathing phase information of the original 25 image datasets, to generate a set of deformation vector fields (DVF) that mapped the reference image to the 24 non-reference images. The high-pitch helically acquired original scans served as ground truths because they captured the instantaneous tissue positions during free breathing. Image similarity between the simulated and the original scans was assessed using deformable registration that evaluated the point-wise discordance throughout the lungs. Results Qualitative comparisons using image overlays showed excellent agreement between the simulated and the original images. Even large 2 cm diaphragm displacements were very well modeled, as was sliding motion across the lung-chest wall boundary. The mean error across the patient cohort was 1.15±0.37 mm, while the mean 95th percentile error was 2.47±0.78 mm. Conclusion The proposed ground truth based technique provided voxel-by-voxel accuracy analysis that could identify organ or tumor-specific motion modeling errors for treatment planning. Despite a large variety of breathing patterns and lung deformations during the free-breathing scanning session, the 5DCT technique was able to accurately reproduce the original helical CT scans, suggesting its applicability to a wide range of patients. PMID:26530763

  14. Accuracy Assessment of Direct Georeferencing for Photogrammetric Applications on Small Unmanned Aerial Platforms

    NASA Astrophysics Data System (ADS)

    Mian, O.; Lutes, J.; Lipa, G.; Hutton, J. J.; Gavelle, E.; Borghini, S.

    2016-03-01

    Microdrones md4-1000 quad-rotor VTOL UAV. The Sony A7R and each lens combination were focused and calibrated terrestrially using the Applanix camera calibration facility, and then integrated with the APX-15 GNSS-Inertial system using a custom mount specifically designed for UAV applications. The mount is constructed in such a way as to maintain the stability of both the interior orientation and IMU boresight calibration over shock and vibration, thus turning the Sony A7R into a metric imaging solution. In July and August 2015, Applanix and Avyon carried out a series of test flights of this system. The goal of these test flights was to assess the performance of DMS APX-15 direct georeferencing system under various scenarios. Furthermore, an examination of how DMS APX-15 can be used to produce accurate map products without the use of ground control points and with reduced sidelap was also carried out. Reducing the side lap for survey missions performed by small UAVs can significantly increase the mapping productivity of these platforms. The area mapped during the first flight campaign was a 250m x 300m block and a 775m long railway corridor in a rural setting in Ontario, Canada. The second area mapped was a 450m long corridor over a dam known as Fryer Dam (over Richelieu River in Quebec, Canada). Several ground control points were distributed within both test areas. The flight over the block area included 8 North-South lines and 1 cross strip flown at 80m AGL, resulting in a ~1cm GSD. The flight over the railway corridor included 2 North-South lines also flown at 80m AGL. Similarly, the flight over the dam corridor included 2 North-South lines flown at 50m AGL. The focus of this paper was to analyse the results obtained from the two corridors. Test results from both areas were processed using Direct Georeferencing techniques, and then compared for accuracy against the known positions of ground control points in each test area. 
The GNSS-Inertial data collected by the APX-15 was

  15. Assessment of the Sensitivity, Specificity, and Accuracy of Thermography in Identifying Patients with TMD

    PubMed Central

    Woźniak, Krzysztof; Szyszka-Sommerfeld, Liliana; Trybek, Grzegorz; Piątkowska, Dagmara

    2015-01-01

    Background The purpose of the present study was to evaluate the sensitivity, specificity, and accuracy of thermography in identifying patients with temporomandibular dysfunction (TMD). Material/Methods The study sample consisted of 50 patients (27 women and 23 men) ages 19.2 to 24.5 years (mean age 22.43±1.04) with subjective symptoms of TMD (Ai II–III) and 50 patients (25 women and 25 men) ages 19.3 to 25.1 years (mean age 22.21±1.18) with no subjective symptoms of TMD (Ai I). The anamnestic interviews were conducted according to the three-point anamnestic index of temporomandibular dysfunction (Ai). The thermography was performed using a ThermaCAM TMSC500 (FLIR Systems AB, Sweden) independent thermal vision system. Thermography was closely combined with a 10-min chewing test. Results The results of our study indicated that the absolute difference in temperature between the right and left side (ΔT) has the highest diagnostic value. The diagnostic effectiveness of this parameter increased after the chewing test. The cut-off points for values of temperature differences between the right and left side and identifying 95.5% of subjects with no functional disorders according to the temporomandibular dysfunction index Di (specificity 95.5%) were 0.26°C (AUC=0.7422, sensitivity 44.3%, accuracy 52.4%) before the chewing test and 0.52°C (AUC=0.7920, sensitivity 46.4%, accuracy 56.3%) after it. Conclusions The evaluation of thermography demonstrated its diagnostic usefulness in identifying patients with TMD with limited effectiveness. The chewing test helped in increasing the diagnostic efficiency of thermography in identifying patients with TMD. PMID:26002613

  16. Creating a Standard Set of Metrics to Assess Accuracy of Solar Forecasts: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Banunarayanan, V.; Brockway, A.; Marquis, M.; Haupt, S. E.; Brown, B.; Fowler, T.; Jensen, T.; Hamann, H.; Lu, S.; Hodge, B.; Zhang, J.; Florita, A.

    2013-12-01

    The U.S. Department of Energy (DOE) SunShot Initiative, launched in 2011, seeks to reduce the cost of solar energy systems by 75% from 2010 to 2020. In support of the SunShot Initiative, the DOE Office of Energy Efficiency and Renewable Energy (EERE) is partnering with the National Oceanic and Atmospheric Administration (NOAA) and solar energy stakeholders to improve solar forecasting. Through a funding opportunity announcement issued in April 2012, DOE is funding two teams, led by the National Center for Atmospheric Research (NCAR) and by IBM, to perform three key activities in order to improve solar forecasts. The teams will: (1) with DOE and NOAA's leadership and significant stakeholder input, develop a standardized set of metrics to evaluate forecast accuracy, and determine the baseline and target values for these metrics; (2) conduct research that yields a transformational improvement in weather models and methods for forecasting solar irradiance and power; and (3) incorporate solar forecasts into the system operations of the electric power grid, and evaluate the impact of forecast accuracy on the economics and reliability of operations using the defined, standard metrics. This paper will present preliminary results on the first activity: the development of a standardized set of metrics, baselines, and target values. The results will include a proposed framework for metrics development, key categories of metrics, descriptions of each of the proposed set of specific metrics to measure forecast accuracy, feedback gathered from a range of stakeholders on the metrics, and processes to determine baselines and target values for each metric. The paper will also analyze the temporal and spatial resolutions under which these metrics would apply, and conclude with a summary of the work in progress on solar forecasting activities funded by DOE.
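Any standardized metric set for deterministic irradiance forecasts is likely to include error statistics of this kind. A sketch of three common candidates (the metric selection and sample values are illustrative assumptions, not the metrics the DOE teams adopted):

```python
import math

def forecast_metrics(forecast, observed):
    """Mean bias error (MBE), mean absolute error (MAE), and
    root-mean-square error (RMSE) of a deterministic forecast."""
    n = len(forecast)
    errors = [f - o for f, o in zip(forecast, observed)]
    mbe = sum(errors) / n
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    return {"MBE": mbe, "MAE": mae, "RMSE": rmse}

# Hypothetical hourly global horizontal irradiance values in W/m^2:
m = forecast_metrics([510.0, 480.0, 305.0], [500.0, 500.0, 300.0])
```

Baselines and targets would then be expressed as required values of such metrics at given temporal and spatial resolutions.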

  17. Supervised Machine Learning Algorithms Can Classify Open-Text Feedback of Doctor Performance With Human-Level Accuracy

    PubMed Central

    2017-01-01

    Background Machine learning techniques may be an effective and efficient way to classify open-text reports on doctor’s activity for the purposes of quality assurance, safety, and continuing professional development. Objective The objective of the study was to evaluate the accuracy of machine learning algorithms trained to classify open-text reports of doctor performance and to assess the potential for classifications to identify significant differences in doctors’ professional performance in the United Kingdom. Methods We used 1636 open-text comments (34,283 words) relating to the performance of 548 doctors collected from a survey of clinicians’ colleagues using the General Medical Council Colleague Questionnaire (GMC-CQ). We coded 77.75% (1272/1636) of the comments into 5 global themes (innovation, interpersonal skills, popularity, professionalism, and respect) using a qualitative framework. We trained 8 machine learning algorithms to classify comments and assessed their performance using several training samples. We evaluated doctor performance using the GMC-CQ and compared scores between doctors with different classifications using t tests. Results Individual algorithm performance was high (range F score=.68 to .83). Interrater agreement between the algorithms and the human coder was highest for codes relating to “popular” (recall=.97), “innovator” (recall=.98), and “respected” (recall=.87) codes and was lower for the “interpersonal” (recall=.80) and “professional” (recall=.82) codes. A 10-fold cross-validation demonstrated similar performance in each analysis. When combined together into an ensemble of multiple algorithms, mean human-computer interrater agreement was .88. Comments that were classified as “respected,” “professional,” and “interpersonal” related to higher doctor scores on the GMC-CQ compared with comments that were not classified (P<.05). Scores did not vary between doctors who were rated as popular or
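The per-code recall and F scores reported above are standard classification metrics computed against the human coder. A sketch of the arithmetic (the comment labels and counts are hypothetical, not the study's data or pipeline):

```python
def f_score(human, machine, positive):
    """Per-code precision, recall, and F1 of machine classifications
    against a human coder's labels for one global theme."""
    tp = sum(1 for h, m in zip(human, machine) if h == positive and m == positive)
    fp = sum(1 for h, m in zip(human, machine) if h != positive and m == positive)
    fn = sum(1 for h, m in zip(human, machine) if h == positive and m != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical labels for five comments, scoring the "respected" code:
human = ["respected", "other", "respected", "respected", "other"]
machine = ["respected", "respected", "other", "respected", "other"]
p, r, f1 = f_score(human, machine, positive="respected")
```

An ensemble, as used in the study, would first combine the votes of several such classifiers before scoring agreement.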

  18. Accuracy of emergency physician performed bedside ultrasound in determining gestational age in first trimester pregnancy

    PubMed Central

    2012-01-01

    Background Patient reported menstrual history, physician clinical evaluation, and ultrasonography are used to determine gestational age in the pregnant female. Previous studies have shown that pregnancy dating by last menstrual period (LMP) and physical examination findings can be inaccurate. An ultrasound performed in the radiology department is considered the standard for determining an accurate gestational age. The aim of this study is to determine the accuracy of emergency physician performed bedside ultrasound as an estimation of gestational age (EDUGA) as compared to the radiology department standard. Methods A prospective convenience sample of ED patients presenting in the first trimester of pregnancy (based upon self-reported LMP) regardless of their presenting complaint were enrolled. EDUGA was compared to gestational age estimated by ultrasound performed in the department of radiology (RGA) as the gold standard. Pearson’s product moment correlation coefficient was used to determine the correlation between EDUGA compared to RGA. Results Sixty-eight pregnant patients presumed to be in the 1st trimester of pregnancy based upon self-reported LMP consented to enrollment. When excluding the cases with no fetal pole, the median discrepancy of EDUGA versus RGA was 2 days (interquartile range (IQR) 1 to 3.25). The correlation coefficient of EDUGA with RGA was 0.978. When including the six cases without a fetal pole in the data analysis, the median discrepancy of EDUGA compared with RGA was 3 days (IQR 1 to 4). The correlation coefficient of EDUGA with RGA was 0.945. Conclusion Based on our comparison of EDUGA to RGA in patients presenting to the ED in the first trimester of pregnancy, we conclude that emergency physicians are capable of accurately performing this measurement. Emergency physicians should consider using ultrasound to estimate gestational age as it may be useful for the future care of that pregnant patient. PMID:23216683
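The two headline statistics above, a Pearson correlation and a median discrepancy between paired gestational-age estimates, can be computed directly. A sketch with hypothetical gestational ages in days (not the study's data):

```python
import math
from statistics import median

def pearson_r(x, y):
    """Pearson's product moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired estimates in days (bedside ED vs. radiology):
eduga = [45.0, 52.0, 60.0, 70.0, 81.0]
rga = [47.0, 53.0, 62.0, 71.0, 84.0]
r = pearson_r(eduga, rga)
discrepancy = median(abs(a - b) for a, b in zip(eduga, rga))  # days
```

A high correlation with a small median discrepancy, as in the study, indicates the bedside estimates track the radiology standard closely.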

  19. Breeding Jatropha curcas by genomic selection: A pilot assessment of the accuracy of predictive models

    PubMed Central

    de Azevedo Peixoto, Leonardo; Laviola, Bruno Galvêas; Alves, Alexandre Alonso; Rosado, Tatiana Barbosa; Bhering, Leonardo Lopes

    2017-01-01

    Genome-wide selection (GWS) is a promising approach for improving the selection accuracy in plant breeding, particularly in species with long life cycles, such as Jatropha. Therefore, the objectives of this study were to estimate the genetic parameters for grain yield (GY) and the weight of 100 seeds (W100S) using restricted maximum likelihood (REML); to compare the performance of GWS methods to predict GY and W100S; and to estimate how many markers are needed to train the GWS model to obtain the maximum accuracy. Eight GWS models were compared in terms of predictive ability. The impact that the marker density had on the predictive ability was investigated using a varying number of markers, from 2 to 1,248. Because the genetic variance between evaluated genotypes was significant, it was possible to obtain selection gain. All of the GWS methods tested in this study can be used to predict GY and W100S in Jatropha. A training model fitted using 1,000 and 800 markers is sufficient to capture the maximum genetic variance and, consequently, maximum prediction ability of GY and W100S, respectively. This study demonstrated the applicability of genome-wide prediction to identify useful genetic sources of GY and W100S for Jatropha breeding. Further research is needed to confirm the applicability of the proposed approach to other complex traits. PMID:28296913
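The "predictive ability" compared across the eight GWS models is typically the correlation between predicted and observed phenotypes in held-out genotypes. A minimal ridge-regression sketch of a marker-effect model (an RR-BLUP-like illustration under assumed toy data, not necessarily one of the eight methods the study compared):

```python
import numpy as np

def predictive_ability(X_train, y_train, X_test, y_test, lam=1.0):
    """Fit shrunken marker effects by ridge regression, then score the
    correlation of predicted vs. observed phenotypes on held-out lines."""
    p = X_train.shape[1]
    beta = np.linalg.solve(X_train.T @ X_train + lam * np.eye(p),
                           X_train.T @ y_train)
    y_hat = X_test @ beta
    return float(np.corrcoef(y_hat, y_test)[0, 1])

# Toy marker matrix (rows = genotypes, columns = markers) with an
# exactly additive phenotype, so predictive ability approaches 1:
X_train = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y_train = np.array([1.0, 2.0, 3.0])
X_test = np.array([[2.0, 1.0], [1.0, 3.0]])
y_test = np.array([4.0, 7.0])
acc = predictive_ability(X_train, y_train, X_test, y_test, lam=1e-8)
```

Repeating this with subsets of markers of increasing size is one way to study how marker density affects predictive ability, as the study does.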

  20. Breeding Jatropha curcas by genomic selection: A pilot assessment of the accuracy of predictive models.

    PubMed

    Azevedo Peixoto, Leonardo de; Laviola, Bruno Galvêas; Alves, Alexandre Alonso; Rosado, Tatiana Barbosa; Bhering, Leonardo Lopes

    2017-01-01

    Genome-wide selection (GWS) is a promising approach for improving the selection accuracy in plant breeding, particularly in species with long life cycles, such as Jatropha. Therefore, the objectives of this study were to estimate the genetic parameters for grain yield (GY) and the weight of 100 seeds (W100S) using restricted maximum likelihood (REML); to compare the performance of GWS methods to predict GY and W100S; and to estimate how many markers are needed to train the GWS model to obtain the maximum accuracy. Eight GWS models were compared in terms of predictive ability. The impact that the marker density had on the predictive ability was investigated using a varying number of markers, from 2 to 1,248. Because the genetic variance between evaluated genotypes was significant, it was possible to obtain selection gain. All of the GWS methods tested in this study can be used to predict GY and W100S in Jatropha. A training model fitted using 1,000 and 800 markers is sufficient to capture the maximum genetic variance and, consequently, maximum prediction ability of GY and W100S, respectively. This study demonstrated the applicability of genome-wide prediction to identify useful genetic sources of GY and W100S for Jatropha breeding. Further research is needed to confirm the applicability of the proposed approach to other complex traits.

  1. Quality assessment of comparative diagnostic accuracy studies: our experience using a modified version of the QUADAS-2 tool.

    PubMed

    Wade, Ros; Corbett, Mark; Eastwood, Alison

    2013-09-01

    Assessing the quality of included studies is a vital step in undertaking a systematic review. The recently revised Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool (QUADAS-2), which is the only validated quality assessment tool for diagnostic accuracy studies, does not include specific criteria for assessing comparative studies. As part of an assessment that included comparative diagnostic accuracy studies, we used a modified version of QUADAS-2 to assess study quality. We modified QUADAS-2 by duplicating questions relating to the index test, to assess the relevant potential sources of bias for both the index test and comparator test. We also added review-specific questions. We have presented our modified version of QUADAS-2 and outlined some key issues for consideration when assessing the quality of comparative diagnostic accuracy studies, to help guide other systematic reviewers conducting comparative diagnostic reviews. Until QUADAS is updated to incorporate assessment of comparative studies, QUADAS-2 can be used, although modification and careful thought is required. It is important to reflect upon whether aspects of study design and methodology favour one of the tests over another.

  2. Assessment of Accuracy and Reliability in Acetabular Cup Placement Using an iPhone/iPad System.

    PubMed

    Kurosaka, Kenji; Fukunishi, Shigeo; Fukui, Tomokazu; Nishio, Shoji; Fujihara, Yuki; Okahisa, Shohei; Takeda, Yu; Daimon, Takashi; Yoshiya, Shinichi

    2016-07-01

    Implant positioning is one of the critical factors that influences postoperative outcome of total hip arthroplasty (THA). Malpositioning of the implant may lead to an increased risk of postoperative complications such as prosthetic impingement, dislocation, restricted range of motion, polyethylene wear, and loosening. In 2012, the intraoperative use of smartphone technology in THA for improved accuracy of acetabular cup placement was reported. The purpose of this study was to examine the accuracy of an iPhone/iPad-guided technique in positioning the acetabular cup in THA compared with the reference values obtained from the image-free navigation system in a cadaveric experiment. Five hips of 5 embalmed whole-body cadavers were used in the study. Seven orthopedic surgeons (4 residents and 3 senior hip surgeons) participated in the study. All of the surgeons examined each of the 5 hips 3 times. The target angle was 38°/19° for operative inclination/anteversion angles, which corresponded to radiographic inclination/anteversion angles of 40°/15°. The simultaneous assessment using the navigation system showed mean±SD radiographic alignment angles of 39.4°±2.6° and 16.4°±2.6° for inclination and anteversion, respectively. Assessment of cup positioning based on Lewinnek's safe zone criteria showed all of the procedures (n=105) achieved acceptable alignment within the safe zone. A comparison of the performances by resident and senior hip surgeons showed no significant difference between the groups (P=.74 for inclination and P=.81 for anteversion). The iPhone/iPad technique examined in this study could achieve acceptable performance in determining cup alignment in THA regardless of the surgeon's expertise. [Orthopedics. 2016; 39(4):e621-e626.].

  3. Inertial Measures of Motion for Clinical Biomechanics: Comparative Assessment of Accuracy under Controlled Conditions - Effect of Velocity

    PubMed Central

    Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian

    2013-01-01

    Background: Inertial measurement of motion with Attitude and Heading Reference Systems (AHRS) is emerging as an alternative to 3D motion capture systems in biomechanics. The objectives of this study are: 1) to describe the absolute and relative accuracy of multiple units of commercially available AHRS under various types of motion; and 2) to evaluate the effect of motion velocity on the accuracy of these measurements. Methods: The criterion validity of accuracy was established under controlled conditions using an instrumented Gimbal table. AHRS modules were carefully attached to the center plate of the Gimbal table and put through experimental static and dynamic conditions. Static and absolute accuracy was assessed by comparing the AHRS orientation measurements to those obtained using an optical gold standard. Relative accuracy was assessed by measuring the variation in relative orientation between modules during trials. Findings: The evaluated AHRS systems demonstrated good absolute static accuracy (mean error < 0.5°) and clinically acceptable absolute accuracy under conditions of slow motion (mean error between 0.5° and 3.1°). In slow motions, relative accuracy varied from 2° to 7° depending on the type of AHRS and the type of rotation. Absolute and relative accuracy were significantly affected (p < 0.05) by velocity during sustained motions. The extent of that effect varied across AHRS. Interpretation: Absolute and relative accuracy of AHRS are affected by environmental magnetic perturbations and conditions of motion. Relative accuracy of AHRS is mostly affected by the ability of all modules to locate the same global reference coordinate system at all times. Conclusions: Existing AHRS systems can be considered for use in clinical biomechanics under constrained conditions of use. While their individual capacity to track absolute motion is relatively consistent, the use of multiple AHRS modules to compute relative motion between rigid bodies needs to be optimized according to

  4. A retrospective study to validate an intraoperative robotic classification system for assessing the accuracy of Kirschner wire (K-wire) placements with postoperative computed tomography classification system for assessing the accuracy of pedicle screw placements.

    PubMed

    Tsai, Tai-Hsin; Wu, Dong-Syuan; Su, Yu-Feng; Wu, Chieh-Hsin; Lin, Chih-Lung

    2016-09-01

    The purpose of this retrospective study was to validate an intraoperative robotic grading classification system for assessing the accuracy of Kirschner-wire (K-wire) placements against the postoperative computed tomography (CT)-based classification system for assessing the accuracy of pedicle screw placements. We conducted a retrospective review of prospectively collected data from 35 consecutive patients who underwent instrumentation of 176 robot-assisted pedicle screws at Kaohsiung Medical University Hospital from September 2014 to November 2015. During the operation, we used a robotic grading classification system to verify the intraoperative accuracy of K-wire placements. Three months after surgery, we used the common CT-based classification system to assess the postoperative accuracy of pedicle screw placements. The distributions of accuracy between the intraoperative robot-assisted and various postoperative CT-based classification systems were compared using kappa statistics of agreement. The intraoperative accuracies of K-wire placements before and after repositioning were classified as excellent (131/176, 74.4% and 133/176, 75.6%, respectively), satisfactory (36/176, 20.5% and 41/176, 23.3%, respectively), and malpositioned (9/176, 5.1% and 2/176, 1.1%, respectively). Postoperative accuracy was then evaluated under the CT-based classification systems. No screw placements were evaluated as unacceptable under any of these systems. Kappa statistics revealed no significant differences between the proposed system and the aforementioned classification systems (P < 0.001). Our results revealed no significant differences between the intraoperative robotic grading system and various postoperative CT-based grading systems. The robotic grading classification system is a feasible method for evaluating the accuracy of K-wire placements. Using the intraoperative robotic grading system to classify the accuracy of K-wire placements enables predicting the postoperative accuracy of pedicle screw
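The kappa statistics of agreement used above to compare intraoperative and postoperative gradings can be sketched as Cohen's kappa over paired grade labels. The labels and counts below are hypothetical, not the study's data:

```python
def cohens_kappa(rater_a, rater_b, categories):
    """Cohen's kappa: chance-corrected agreement between two sets of paired labels."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    expected = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical paired grades: E = excellent, S = satisfactory, M = malpositioned.
robot = ["E", "E", "E", "E", "E", "S", "S", "S", "M", "M"]
ct    = ["E", "E", "E", "E", "S", "S", "S", "E", "M", "M"]
kappa = cohens_kappa(robot, ct, {"E", "S", "M"})
```

Values near 1 indicate near-perfect agreement between the intraoperative and CT-based gradings; 0 indicates agreement no better than chance.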

  5. Mass Casualty Triage Performance Assessment Tool

    DTIC Science & Technology

    2015-02-01

    substantive information about a trainee's performance. Unit personnel reviewed the tool for accuracy and functionality. A prototype of medical...government functions" (Center for Army Lessons Learned, 2006, p. 38). During these events, local and state medical personnel are often overwhelmed by...movement. Assessor Feedback: No respiration detected. Position the airway using Head-Tilt/Chin-Lift • Kneel at the level of the victim's shoulders

  6. Assessment of the labelling accuracy of spanish semipreserved anchovies products by FINS (forensically informative nucleotide sequencing).

    PubMed

    Velasco, Amaya; Aldrey, Anxela; Pérez-Martín, Ricardo I; Sotelo, Carmen G

    2016-06-01

    Anchovies have been traditionally captured and processed for human consumption for millennia. In the case of Spain, ripened and salted anchovies are a delicacy which, in some cases, can reach high commercial values. Although there have been a number of studies presenting DNA methodologies for the identification of anchovies, this is one of the first studies investigating the level of mislabelling in this kind of product in Europe. Sixty-three commercial semipreserved anchovy products were collected in different types of food markets in four Spanish cities to check labelling accuracy. Species determination in these commercial products was performed by sequencing two different cyt-b mitochondrial DNA fragments. Results revealed mislabelling levels higher than 15%, which the authors consider relatively high given the importance of the product. The most frequent substitute species was the Argentine anchovy, Engraulis anchoita, which can be interpreted as an economic fraud.

  7. Special Forces Interpersonal Performance Assessment System

    DTIC Science & Technology

    2005-04-01

    Phase I interpersonal performance assessment system: selection of the target group, identification of performance dimensions, and performance scale...development. Target Group Selection: The U.S. Army Special Forces (SF) was chosen as the target group because interpersonal skills are critical for the...

  8. Improving the Accuracy of Urban Environmental Quality Assessment Using Geographically-Weighted Regression Techniques

    PubMed Central

    Faisal, Kamil; Shaker, Ahmed

    2017-01-01

    Urban Environmental Quality (UEQ) can be treated as a generic indicator that objectively represents the physical and socio-economic condition of the urban and built environment. The value of UEQ illustrates a sense of satisfaction to its population through assessing different environmental, urban and socio-economic parameters. This paper elucidates the use of the Geographic Information System (GIS), Principal Component Analysis (PCA) and Geographically-Weighted Regression (GWR) techniques to integrate various parameters and estimate the UEQ of two major cities in Ontario, Canada. Remote sensing, GIS and census data were first obtained to derive various environmental, urban and socio-economic parameters. The aforementioned techniques were then used to integrate all of these parameters. Three key indicators, including family income, higher-level education and land value, were used as a reference to validate the outcomes derived from the integration techniques. The results were evaluated by assessing the relationship between the extracted UEQ results and the reference layers. Initial findings showed that GWR with the spatial lag model improves precision and accuracy by up to 20% with respect to the results derived using GIS overlay and PCA techniques for the City of Toronto and the City of Ottawa. The findings of the research can help authorities and decision makers understand the empirical relationships among environmental factors, urban morphology and real estate, and make decisions that better support environmental justice. PMID:28272334

  9. Accuracy assessment of satellite altimetry over central East Antarctica by kinematic GNSS and crossover analysis

    NASA Astrophysics Data System (ADS)

    Schröder, Ludwig; Richter, Andreas; Fedorov, Denis; Knöfel, Christoph; Ewert, Heiko; Dietrich, Reinhard; Matveev, Aleksey Yu.; Scheinert, Mirko; Lukin, Valery

    2014-05-01

    Satellite altimetry is a unique technique for observing the contribution of the Antarctic ice sheet to global sea-level change. To fulfill the high quality requirements for this application, the respective products need to be validated against independent data such as ground-based measurements. Kinematic GNSS provides a powerful method to acquire precise height information along the track of a vehicle. Within a collaboration of TU Dresden and Russian partners during the Russian Antarctic Expeditions in the seasons from 2001 to 2013, we recorded several such profiles in the region of subglacial Lake Vostok, East Antarctica. After 2006 these datasets also include observations along seven continental traverses, each about 1600 km long, between the Antarctic coast and the Russian research station Vostok (78° 28' S, 106° 50' E). After discussing some particular issues concerning the processing of kinematic GNSS profiles under the special conditions of the interior of the Antarctic ice sheet, we show their application to the validation of NASA's laser altimeter satellite mission ICESat and ESA's ice mission CryoSat-2. Analysing the height differences at crossover points, we gain clear insights into the height regime at subglacial Lake Vostok. Thus, these profiles as well as the remarkably flat lake surface itself can be used to investigate the accuracy and possible error influences of these missions. We show how the transmit-pulse reference selection correction (Gaussian vs. centroid, G-C) released in January 2013 helped to further improve the release R633 ICESat data, and discuss the height offsets and other effects of the CryoSat-2 radar data. In conclusion we show that only a combination of laser and radar altimetry can provide both high precision and good spatial coverage. An independent validation with ground-based observations is crucial for a thorough accuracy assessment.

  10. Accuracy Assessment of Immediate and Delayed Implant Placements Using CAD/CAM Surgical Guides.

    PubMed

    Alzoubi, Fawaz; Massoomi, Nima; Nattestad, Anders

    2016-10-01

    The aim of this study was to assess the accuracy of immediately placed implants using Anatomage Invivo5 computer-assisted design/computer-assisted manufacturing (CAD/CAM) surgical guides and compare the accuracy to a delayed implant placement protocol. Patients who had implants placed using Anatomage Invivo5 CAD/CAM surgical guides during the period 2012-2015 were evaluated retrospectively. Patients who received immediate implant placements and/or delayed implant placements replacing 1-2 teeth were included in this study. Pre- and postsurgical images were superimposed to evaluate deviations at the crest, apex, and angle. A total of 40 implants placed in 29 patients were included in this study. The overall mean deviations measured at the crest, apex, and angle were 0.86 mm, 1.25 mm, and 3.79°, respectively. The mean deviations for the immediate group were: crest = 0.85 mm, apex = 1.10 mm, and angle = 3.49°. The mean deviations for the delayed group were: crest = 0.88 mm, apex = 1.59 mm, and angle = 4.29°. No statistically significant difference was found at the crest and angle; however, there was a statistically significant difference between the immediate and delayed groups at the apex, with the immediate group presenting more accurate placements at the apical point than the delayed group. CAD/CAM surgical guides can be reliable tools to accurately place implants immediately and/or in a delayed fashion. No statistically significant differences were found between the delayed and the immediate groups at the crest and angle; however, the apical position was more accurate in the immediate group.
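Deviation at the crest, apex, and angle, as reported above, falls out of simple 3D geometry once the planned and placed implant positions are superimposed. A sketch with hypothetical coordinates (the coordinate values are illustrative only, not taken from the study):

```python
import math

def implant_deviation(planned_crest, planned_apex, placed_crest, placed_apex):
    """3D deviations (mm) at crest and apex, and the angle (degrees)
    between the planned and placed implant axes."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    def axis(crest, apex):
        v = [a - c for c, a in zip(crest, apex)]
        norm = math.sqrt(sum(x * x for x in v))
        return [x / norm for x in v]
    u = axis(planned_crest, planned_apex)
    w = axis(placed_crest, placed_apex)
    cos_t = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, w))))
    return (dist(planned_crest, placed_crest),
            dist(planned_apex, placed_apex),
            math.degrees(math.acos(cos_t)))

# Hypothetical superimposed coordinates (mm): planned axis along z, placed axis tilted.
crest_dev, apex_dev, angle = implant_deviation(
    planned_crest=(0.0, 0.0, 0.0), planned_apex=(0.0, 0.0, 10.0),
    placed_crest=(0.5, 0.0, 0.0),  placed_apex=(1.0, 0.0, 10.0),
)
```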

  11. Accuracy Assessment of GO Pro Hero 3 (black) Camera in Underwater Environment

    NASA Astrophysics Data System (ADS)

    Helmholz, P.; Long, J.; Munsie, T.; Belton, D.

    2016-06-01

    Modern digital cameras are increasing in quality whilst decreasing in size. In the last decade, a number of waterproof consumer digital cameras (action cameras) have become available, which often cost less than 500. A possible application of such action cameras is in the field of underwater photogrammetry, especially since the change of medium below water can in turn counteract the distortions present. The goal of this paper is to investigate the suitability of such action cameras for underwater photogrammetric applications, focusing on the stability of the camera and the accuracy of the derived coordinates. For this paper a series of image sequences was captured in a water tank. A calibration frame was placed in the water tank, allowing the calibration of the camera and the validation of the measurements using check points. The accuracy assessment covered three test sets operating three GoPro sports cameras of the same model (Hero 3 black). The test sets included handling the camera in a controlled manner, where the camera was only dunked into the water tank using 7 MP and 12 MP resolution, and rough handling, where the camera was shaken as well as removed from the waterproof case using 12 MP resolution. The tests showed that camera stability was given, with a maximum standard deviation of the camera constant σc of 0.0031 mm for 7 MP (for an average c of 2.720 mm) and 0.0072 mm for 12 MP (for an average c of 3.642 mm). The residual test of the check points gave, for the 7 MP test series, the largest rms value of only 0.450 mm and the largest maximal residual of only 2.5 mm. For the 12 MP test series the maximum rms value is 0.653 mm.
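The rms values quoted for the check-point residuals follow the standard root-mean-square definition. A small sketch with hypothetical residuals (not the study's measurements):

```python
import math

def rms(residuals):
    """Root-mean-square of check-point residuals (same unit as the input)."""
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

# Hypothetical check-point residuals in mm from a bundle adjustment.
check_point_rms = rms([0.2, -0.5, 0.3, -0.1, 0.4])
```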

  12. The effects of performance-based assessment criteria on student performance and self-assessment skills.

    PubMed

    Fastré, Greet Mia Jos; van der Klink, Marcel R; van Merriënboer, Jeroen J G

    2010-10-01

    This study investigated the effect of performance-based versus competence-based assessment criteria on task performance and self-assessment skills among 39 novice secondary vocational education students in the domain of nursing and care. In the performance-based assessment group, students were provided with a preset list of performance-based assessment criteria for the task at hand, describing what students should do. The performance-based group was compared to a competence-based assessment group in which students received a preset list of competence-based assessment criteria, describing what students should be able to do. The test phase revealed that the performance-based group outperformed the competence-based group on test task performance. In addition, the higher performance of the performance-based group was reached with lower reported mental effort during training, indicating a higher instructional efficiency for novice students.

  13. Assessing BMP Performance Using Microtox Toxicity Analysis

    EPA Science Inventory

    Best Management Practices (BMPs) have been shown to be effective in reducing runoff and pollutants from urban areas and thus provide a mechanism to improve downstream water quality. Currently, BMP performance regarding water quality improvement is assessed through measuring each...

  14. Assessing Vocal Performances Using Analytical Assessment: A Case Study

    ERIC Educational Resources Information Center

    Gynnild, Vidar

    2016-01-01

    This study investigated ways to improve the appraisal of vocal performances within a national academy of music. Since a criterion-based assessment framework had already been adopted, the conceptual foundation of an assessment rubric was used as a guide in an action research project. The group of teachers involved wanted to explore thinking…

  15. The Impact of Self-Evaluation Instruction on Student Self-Evaluation, Music Performance, and Self-Evaluation Accuracy

    ERIC Educational Resources Information Center

    Hewitt, Michael P.

    2011-01-01

    The author sought to determine whether self-evaluation instruction had an impact on student self-evaluation, music performance, and self-evaluation accuracy of music performance among middle school instrumentalists. Participants (N = 211) were students at a private middle school located in a metropolitan area of a mid-Atlantic state. Students in…

  16. QuickBird and OrbView-3 Geopositional Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Helder, Dennis; Ross, Kenton

    2006-01-01

    Objective: Compare vendor-provided image coordinates with known references visible in the imagery. Approach: Use multiple, well-characterized sites with >40 ground control points (GCPs); sites that are a) Well distributed; b) Accurately surveyed; and c) Easily found in imagery. Perform independent assessments with independent teams. Each team has slightly different measurement techniques and data processing methods. NASA Stennis Space Center. South Dakota State University.

  17. Personality, Assessment Methods and Academic Performance

    ERIC Educational Resources Information Center

    Furnham, Adrian; Nuygards, Sarah; Chamorro-Premuzic, Tomas

    2013-01-01

    This study examines the relationship between personality and two different academic performance (AP) assessment methods, namely exams and coursework. It aimed to examine whether the relationship between traits and AP was consistent across self-reported versus documented exam results, two different assessment techniques and across different…

  18. Accountable Individual Assessment for Cooperative Performance Assignments.

    ERIC Educational Resources Information Center

    Bastick, Tony

    This paper aims to make the techniques of cooperative learning more attractive to teachers by presenting a method of assessment that avoids the drawbacks associated with trying to extract valid and reliable individual marks from cooperative performances. The paper presents an easy-to-use method of assessing an individual's contribution to a…

  19. Accuracy assessment of blind and semi-blind restoration methods for hyperspectral images

    NASA Astrophysics Data System (ADS)

    Zhang, Mo; Vozel, Benoit; Chehdi, Kacem; Uss, Mykhail; Abramov, Sergey; Lukin, Vladimir

    2016-10-01

    Hyperspectral images acquired by remote sensing systems are generally degraded by noise and can sometimes be more severely degraded by blur. When no knowledge is available about the degradations present or the original image, blind restoration methods must be considered. Otherwise, when partial information is available, semi-blind restoration methods can be considered. Numerous quite advanced semi-blind methods are available in the literature. To get better insights and feedback on the applicability and potential efficiency of a representative set of four recently proposed semi-blind methods, we have performed a comparative study of these methods in objective terms of blur filter and original image error estimation accuracy. In particular, we have paid special attention to the accurate recovery of original spectral signatures in the spectral dimension. We have analyzed peculiarities and factors restricting the applicability of these methods. Our tests are performed on a synthetic hyperspectral image, degraded with various synthetic blurs (out-of-focus, Gaussian, motion) and with signal-independent noise of typical levels such as those encountered in real hyperspectral images. This synthetic image has been built from various samples of classified areas of a real-life hyperspectral image, in order to benefit from realistic reference spectral signatures to recover after synthetic degradation. Conclusions, practical recommendations and perspectives are drawn from the experimentally obtained results.

  20. Utilizing the Global Land Cover 2000 reference dataset for a comparative accuracy assessment of 1 km global land cover maps

    NASA Astrophysics Data System (ADS)

    Schultz, M.; Tsendbazazr, N. E.; Herold, M.; Jung, M.; Mayaux, P.; Goehman, H.

    2015-04-01

    Many investigators use global land cover (GLC) maps for different purposes, such as an input for global climate models. The current GLC maps used for such purposes are based on different remote sensing data, methodologies and legends. Consequently, comparison of GLC maps is difficult and information about their relative utility is limited. The objective of this study is to analyse and compare the thematic accuracies of GLC maps (i.e., IGBP-DISCover, UMD, MODIS, GLC2000 and SYNMAP) at 1 km resolution by (a) re-analysing the GLC2000 reference dataset, (b) applying a generalized GLC legend and (c) comparing their thematic accuracies at different homogeneity levels. The accuracy assessment was based on the GLC2000 reference dataset, with 1253 samples that were visually interpreted. The legends of the GLC maps and the reference dataset were harmonized into 11 general land cover classes. The results show that the map accuracy estimates vary by up to 10-16% depending on the homogeneity of the reference point (HRP) for all the GLC maps. An increase of the HRP resulted in higher overall accuracies but reduced accuracy confidence for the GLC maps due to the smaller number of accountable samples. The overall accuracy of SYNMAP was the highest at any HRP level, followed by GLC2000. The overall accuracies of the maps also varied by up to 10% depending on the definition of agreement between the reference and map categories in heterogeneous landscapes. A careful consideration of heterogeneous landscapes is therefore recommended for future accuracy assessments of land cover maps.
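The dependence of overall accuracy on the homogeneity of the reference point (HRP) described above can be sketched as follows, with hypothetical samples and class labels; raising the threshold typically raises accuracy while shrinking the usable sample:

```python
def overall_accuracy(samples, hrp_threshold):
    """Overall accuracy of a map against reference samples whose homogeneity
    (fraction of the dominant class at the point) meets the threshold.
    Returns (accuracy, number of samples used)."""
    used = [s for s in samples if s["homogeneity"] >= hrp_threshold]
    if not used:
        return None, 0
    correct = sum(s["map"] == s["ref"] for s in used)
    return correct / len(used), len(used)

# Hypothetical reference samples: map label, visually interpreted reference
# label, and homogeneity of the reference point.
samples = [
    {"map": "forest", "ref": "forest",    "homogeneity": 0.9},
    {"map": "forest", "ref": "shrubland", "homogeneity": 0.5},
    {"map": "crop",   "ref": "crop",      "homogeneity": 0.8},
    {"map": "urban",  "ref": "urban",     "homogeneity": 1.0},
    {"map": "crop",   "ref": "grass",     "homogeneity": 0.6},
]
acc_all, n_all = overall_accuracy(samples, 0.5)    # all samples counted
acc_homog, n_homog = overall_accuracy(samples, 0.8)  # only homogeneous points
```

Note the trade-off visible even in this toy data: the stricter threshold yields a higher accuracy estimate from fewer samples, hence a wider confidence interval.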

  1. Accuracy of Cameriere's cut-off value for third molar in assessing 18 years of age.

    PubMed

    De Luca, S; Biagi, R; Begnoni, G; Farronato, G; Cingolani, M; Merelli, V; Ferrante, L; Cameriere, R

    2014-02-01

    Due to increasingly numerous international migrations, estimating the age of unaccompanied minors is becoming of enormous significance for forensic professionals who are required to deliver expert opinions. The third molar tooth is one of the few anatomical sites available for estimating the age of individuals in late adolescence. This study verifies the accuracy of Cameriere's cut-off value of the third molar index (I3M) in assessing 18 years of age. For this purpose, a sample of orthopantomographs (OPTs) of 397 living subjects aged between 13 and 22 years (192 female and 205 male) was analyzed. Age distribution gradually decreases as I3M increases in both males and females. The results show that the sensitivity of the test was 86.6%, with a 95% confidence interval of (80.8%, 91.1%), and its specificity was 95.7%, with a 95% confidence interval of (92.1%, 98%). The proportion of correctly classified individuals was 91.4%. The estimated post-test probability was 95.6%, with a 95% confidence interval of (92%, 98%). Hence, the probability that a subject testing positive (i.e., I3M < 0.08) was 18 years of age or older was 95.6%.
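The sensitivity, specificity, and post-test probability reported above derive directly from a 2×2 confusion matrix. A sketch with illustrative counts chosen only to roughly reproduce the reported percentages (the study's actual table is not given in the abstract):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and positive predictive value
    (post-test probability at the sample's prevalence) from 2x2 counts."""
    sensitivity = tp / (tp + fn)   # adults correctly classified as adults
    specificity = tn / (tn + fp)   # minors correctly classified as minors
    ppv = tp / (tp + fp)           # probability a positive test is truly adult
    return sensitivity, specificity, ppv

# Illustrative counts for the I3M < 0.08 cut-off, NOT the study's data.
sens, spec, ppv = diagnostic_metrics(tp=175, fp=8, tn=180, fn=27)
```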

  2. Physician Performance Assessment: Prevention of Cardiovascular Disease

    ERIC Educational Resources Information Center

    Lipner, Rebecca S.; Weng, Weifeng; Caverzagie, Kelly J.; Hess, Brian J.

    2013-01-01

    Given the rising burden of healthcare costs, both patients and healthcare purchasers are interested in discerning which physicians deliver quality care. We proposed a methodology to assess physician clinical performance in preventive cardiology care, and determined a benchmark for minimally acceptable performance. We used data on eight…

  3. Construct Validity of Three Clerkship Performance Assessments

    ERIC Educational Resources Information Center

    Lee, Ming; Wimmers, Paul F.

    2010-01-01

    This study examined construct validity of three commonly used clerkship performance assessments: preceptors' evaluations, OSCE-type clinical performance measures, and the NBME [National Board of Medical Examiners] medicine subject examination. Six hundred and eighty-six students taking the inpatient medicine clerkship from 2003 to 2007…

  4. Accuracy assessment, using stratified plurality sampling, of portions of a LANDSAT classification of the Arctic National Wildlife Refuge Coastal Plain

    NASA Technical Reports Server (NTRS)

    Card, Don H.; Strong, Laurence L.

    1989-01-01

    An application of a classification accuracy assessment procedure is described for a vegetation and land cover map prepared by digital image processing of LANDSAT multispectral scanner data. A statistical sampling procedure called Stratified Plurality Sampling was used to assess the accuracy of portions of a map of the Arctic National Wildlife Refuge coastal plain. Results are tabulated as percent correct classification overall as well as per category with associated confidence intervals. Although values of percent correct were disappointingly low for most categories, the study was useful in highlighting sources of classification error and demonstrating shortcomings of the plurality sampling method.
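Percent correct with an associated confidence interval, as tabulated in such assessments, can be computed per category with a normal-approximation (Wald) interval; the counts below are made up for illustration:

```python
import math

def accuracy_ci(correct, total, z=1.96):
    """Percent-correct estimate with a normal-approximation (Wald)
    95% confidence interval, clipped to [0, 1]."""
    p = correct / total
    half = z * math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical counts for one map category: 132 of 200 samples correct.
p, lo, hi = accuracy_ci(correct=132, total=200)
```

For categories with few samples, an exact or Wilson interval would be preferable to the Wald approximation used here.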

  5. Integrating Landsat and California pesticide exposure estimation at aggregated analysis scales: Accuracy assessment of rurality

    NASA Astrophysics Data System (ADS)

    Vopham, Trang Minh

    Pesticide exposure estimation in epidemiologic studies can be constrained to analysis scales commonly available for cancer data - census tracts and ZIP codes. Research goals included (1) demonstrating the feasibility of modifying an existing geographic information system (GIS) pesticide exposure method using California Pesticide Use Reports (PURs) and land use surveys to incorporate Landsat remote sensing and to accommodate aggregated analysis scales, and (2) assessing the accuracy of two rurality metrics (the quality of a geographic area being rural), Rural-Urban Commuting Area (RUCA) codes and the U.S. Census Bureau urban-rural system, as surrogates for pesticide exposure when compared to the GIS gold standard. Segments, derived from 1985 Landsat NDVI images, were classified using a crop signature library (CSL) created from 1990 Landsat NDVI images via a sum of squared differences (SSD) measure. Organochlorine, organophosphate, and carbamate Kern County PUR applications (1974-1990) were matched to crop fields using a modified three-tier approach. Annual pesticide application rates (lb/ac), and the sensitivity and specificity of each rurality metric, were calculated. The CSL (75 land use classes) classified 19,752 segments [median SSD 0.06 NDVI]. Of the 148,671 PUR records included in the analysis, Landsat contributed 3,750 (2.5%) additional tier matches. ZIP Code Tabulation Area (ZCTA) rates ranged between 0 and 1.36 lb/ac and census tract rates between 0 and 1.57 lb/ac. Rurality was a mediocre pesticide exposure surrogate; higher rates were observed among urban areal units. ZCTA-level RUCA codes offered greater specificity (39.1-60%) and sensitivity (25-42.9%). The U.S. Census Bureau metric offered greater specificity (92.9-97.5%) at the census tract level; sensitivity was low (≤6%). The feasibility of incorporating Landsat into a modified three-tier GIS approach was demonstrated. Rurality accuracy is affected by rurality metric, areal aggregation, pesticide chemical

  6. Assessment of Precipitation Forecast Accuracy over Eastern Black Sea Region using WRF-ARW

    NASA Astrophysics Data System (ADS)

    Bıyık, G.; Unal, Y.; Onol, B.

    2009-09-01

    Surface topography, such as mountain barriers, existing water bodies and semi-permanent mountain glaciers, changes large-scale atmospheric patterns and makes reliable precipitation prediction a challenge. The eastern Black Sea region of Turkey is an example. The Black Sea mountain chains lie west to east along the coastline with an average height of 2000 m and a highest point of 3973 m, and from the coastline to inland there is a very sharp topography change. For this project we selected the eastern Black Sea region of Turkey to assess precipitation forecast accuracy. This is a unique region of Turkey, which receives both the highest amount of precipitation and precipitation throughout the whole year. The amounts of rain and snow are important because they supply water to the main river systems of Turkey. Turkey is in general under the influence of both continental polar (cP) and tropical air masses. Their interaction with the orography causes orographic precipitation that is effective over the region. The Caucasus Mountains, which contain the highest point of Georgia, also moderate the climate of the southern parts by blocking the penetration of colder air masses from the north. The southern part of the western Black Sea region has a more continental climate because of the lee-side effect of the mountains. Therefore, precipitation forecasting in the region is important for operational forecasters and researchers. Our aim in this project is to investigate WRF precipitation accuracy during 10 extreme precipitation, 10 normal precipitation and 10 no-precipitation days, using forecasts for two days ahead. Cases were selected from the years between 2000 and 2003. Eleven eastern Black Sea stations located along the coastline were used to determine the 20 extreme and 10 average precipitation days. During the project, three different resolutions with three nested domains were tested to determine the model sensitivity to domain boundaries and resolution. As a result of our tests, a 6 km resolution for the finer domain was found suitable

  7. Quantitative performance assessments for neuromagnetic imaging systems.

    PubMed

    Koga, Ryo; Hiyama, Ei; Matsumoto, Takuya; Sekihara, Kensuke

    2013-01-01

    We have developed a Monte-Carlo simulation method to assess the performance of neuromagnetic imaging systems using two kinds of performance metrics: the A-prime metric and spatial resolution. We compute these performance metrics for virtual sensor systems having 80, 160, 320, and 640 sensors, and discuss how the system performance improves with the number of sensors. We also compute these metrics for existing whole-head MEG systems: MEGvision™ (Yokogawa Electric Corporation, Tokyo, Japan), which uses axial-gradiometer sensors, and TRIUX™ (Elekta, Stockholm, Sweden), which uses planar-gradiometer and magnetometer sensors. We discuss performance comparisons between these significantly different systems.
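The A-prime metric named above is a standard nonparametric signal-detection statistic computed from a hit rate H and a false-alarm rate F. A minimal sketch of the usual formula for H ≥ F; the function name and example rates are illustrative, not taken from the paper:

```python
def a_prime(hit_rate, fa_rate):
    """Nonparametric detection statistic A' for hit rate H >= false-alarm rate F.

    A' = 0.5 + ((H - F) * (1 + H - F)) / (4 * H * (1 - F))
    0.5 corresponds to chance performance, 1.0 to perfect detection.
    """
    h, f = hit_rate, fa_rate
    if h < f:
        raise ValueError("this form of A' assumes H >= F")
    if h == f:
        return 0.5  # avoids 0/0 when H = F = 0
    return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))

# Illustrative rates: H = 0.8, F = 0.2 gives A' = 0.875
print(a_prime(0.8, 0.2))
```

A value near 1.0 indicates that simulated sources are detected with few false alarms, which is how such a metric can rank sensor configurations.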

  8. Assessing the Accuracy of Sentinel-3 SLSTR Sea-Surface Temperature Retrievals Using High Accuracy Infrared Radiometers on Ships of Opportunity

    NASA Astrophysics Data System (ADS)

    Minnett, P. J.; Izaguirre, M. A.; Szcszodrak, M.; Williams, E.; Reynolds, R. M.

    2015-12-01

    The assessment of errors and uncertainties in satellite-derived SSTs can be achieved by comparisons with independent measurements of skin SST of high accuracy. Such validation measurements are provided by well-calibrated infrared radiometers mounted on ships. The second generation of Marine-Atmospheric Emitted Radiance Interferometers (M-AERIs) have recently been developed and two are now deployed on cruise ships of Royal Caribbean Cruise Lines that operate in the Caribbean Sea, North Atlantic and Mediterranean Sea. In addition, two Infrared SST Autonomous Radiometers (ISARs) are mounted alternately on a vehicle transporter of NYK Lines that crosses the Pacific Ocean between Japan and the USA. Both M-AERIs and ISARs are self-calibrating radiometers having two internal blackbody cavities to provide at-sea calibration of the measured radiances, and the accuracy of the internal calibration is periodically determined by measurements of a NIST-traceable blackbody cavity in the laboratory. This provides SI-traceability for the at-sea measurements. It is anticipated that these sensors will be deployed during the next several years and will be available for the validation of the SLSTRs on Sentinel-3a and -3b.

  9. Assessing the accuracy of an inter-institutional automated patient-specific health problem list

    PubMed Central

    2010-01-01

    Background Health problem lists are a key component of electronic health records and are instrumental in the development of decision-support systems that encourage best practices and optimal patient safety. Most health problem lists require initial clinical information to be entered manually, and few integrate information across care providers and institutions. This study assesses the accuracy of a novel approach to create an inter-institutional automated health problem list in a computerized medical record (MOXXI) that integrates three sources of information for an individual patient: diagnostic codes from medical services claims from all treating physicians, therapeutic indications from electronic prescriptions, and single-indication drugs. Methods Data for this study were obtained from 121 general practitioners and all medical services provided for 22,248 of their patients. At the opening of a patient's file, all health problems detected through medical service utilization or single-indication drug use were flagged to the physician in the MOXXI system. Each newly arising health problem was presented as 'potential', and physicians were prompted to specify whether the health problem was valid (Y) or not (N), or whether they preferred to reassess its validity at a later time. Results A total of 263,527 health problems, representing 891 unique problems, were identified for the group of 22,248 patients. Medical services claims contributed the majority of problems identified (77%), followed by therapeutic indications from electronic prescriptions (14%) and single-indication drugs (9%). Physicians actively chose to assess 41.7% (n = 106,950) of health problems. Overall, 73% of the problems assessed were considered valid; 42% originated from medical service diagnostic codes, 11% from single-indication drugs, and 47% from prescription indications. 
Twelve percent of problems identified through other treating physicians were considered valid compared to 28% identified through study

  10. Using Generalizability Theory to Examine the Accuracy and Validity of Large-Scale ESL Writing Assessment

    ERIC Educational Resources Information Center

    Huang, Jinyan

    2012-01-01

    Using generalizability (G-) theory, this study examined the accuracy and validity of the writing scores assigned to secondary school ESL students in the provincial English examinations in Canada. The major research question that guided this study was: Are there any differences between the accuracy and construct validity of the analytic scores…

  11. Accuracy Assessments of Cloud Droplet Size Retrievals from Polarized Reflectance Measurements by the Research Scanning Polarimeter

    NASA Technical Reports Server (NTRS)

    Alexandrov, Mikhail Dmitrievic; Cairns, Brian; Emde, Claudia; Ackerman, Andrew S.; vanDiedenhove, Bastiaan

    2012-01-01

    We present an algorithm for the retrieval of cloud droplet size distribution parameters (effective radius and variance) from Research Scanning Polarimeter (RSP) measurements. The RSP is an airborne prototype for the Aerosol Polarimetry Sensor (APS), which was on board the NASA Glory satellite. This instrument measures both polarized and total reflectance in 9 spectral channels with central wavelengths ranging from 410 to 2260 nm. The cloud droplet size retrievals use the polarized reflectance in the scattering-angle range between 135° and 165°, where it exhibits the sharply defined structure known as the rain- or cloud-bow. The shape of the rainbow is determined mainly by the single-scattering properties of cloud particles. This significantly simplifies both forward modeling and inversions, while also substantially reducing uncertainties caused by aerosol loading and the possible presence of undetected clouds nearby. In this study we present the accuracy evaluation of our algorithm based on the results of sensitivity tests performed using realistic simulated cloud radiation fields.

  12. Accuracy Assessment of Mobile Mapping Point Clouds Using the Existing Environment as Terrestrial Reference

    NASA Astrophysics Data System (ADS)

    Hofmann, S.; Brenner, C.

    2016-06-01

    Mobile mapping data are widely used in various applications, which makes it especially important for data users to get a statistically verified quality statement on the geometric accuracy of the acquired point clouds or their processed products. The accuracy of point clouds can be divided into an absolute and a relative quality, where the absolute quality describes the position of the point cloud in a world coordinate system such as WGS84 or UTM, whereas the relative accuracy describes the accuracy within the point cloud itself. Furthermore, the quality of processed products such as segmented features depends on the global accuracy of the point cloud but mainly on the quality of the processing steps. Several data sources with different characteristics and quality can be considered as potential reference data, such as cadastral maps, orthophotos, artificial control objects or terrestrial surveys using a total station. In this work, a test field in a selected residential area was acquired as reference data in a terrestrial survey using a total station. In order to reach high accuracy, the stationing of the total station was based on a newly established geodetic network with a local accuracy of better than 3 mm. The global position of the network was determined using a long-term GNSS survey, reaching an accuracy of 8 mm. Based on this geodetic network, a 3D test field with facades and street profiles was measured with a total station, each point with a two-dimensional position and altitude. In addition, the surfaces of poles of street lights, traffic signs and trees were acquired using the scanning mode of the total station. By comparing this reference data to the acquired mobile mapping point clouds of several measurement campaigns, a detailed quality statement on the accuracy of the point cloud data is made. Additionally, the advantages and disadvantages of the described reference data sources concerning availability, cost, accuracy and applicability are discussed.
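The point-cloud-versus-reference comparison described above ultimately reduces to distance statistics between matched points. A minimal sketch of such a check, assuming matched pairs of 3D coordinates in metres; the function name and coordinates are invented for illustration, not from the paper:

```python
import math

def rmse_3d(cloud_pts, ref_pts):
    """Root-mean-square of 3D distances between matched point pairs.

    cloud_pts : points from the mobile mapping point cloud
    ref_pts   : corresponding terrestrially surveyed reference points
    Both are sequences of (x, y, z) tuples in the same coordinate frame.
    """
    sq_dists = [(a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 + (a[2] - b[2]) ** 2
                for a, b in zip(cloud_pts, ref_pts)]
    return math.sqrt(sum(sq_dists) / len(sq_dists))

# Invented pairs: two reference points and slightly offset cloud points
ref   = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
cloud = [(0.003, 0.004, 0.0), (1.0, 0.0, 0.012)]
rmse = rmse_3d(cloud, ref)  # offsets of 5 mm and 12 mm
```

In practice the hard part is establishing the correspondences (e.g. point-to-facade-plane distances rather than point-to-point), but the reported accuracy figures are summary statistics of this kind.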

  13. Accuracy of a Low-Cost Novel Computer-Vision Dynamic Movement Assessment: Potential Limitations and Future Directions

    NASA Astrophysics Data System (ADS)

    McGroarty, M.; Giblin, S.; Meldrum, D.; Wetterling, F.

    2016-04-01

    The aim of the study was to perform a preliminary validation of a low-cost markerless motion capture system (CAPTURE) against an industry gold standard (Vicon). Measurements of knee valgus and flexion during the performance of a countermovement jump (CMJ) were compared between CAPTURE and Vicon. After correction algorithms were applied to the raw CAPTURE data, acceptable levels of accuracy and precision were achieved. The knee flexion angle measured for three trials using CAPTURE deviated by -3.8° ± 3° (left) and 1.7° ± 2.8° (right) compared to Vicon. The findings suggest that low-cost markerless motion capture has the potential to provide an objective method for assessing lower-limb jump and landing mechanics in an applied sports setting. Furthermore, the outcome of the study warrants future research to examine more fully the potential implications of the use of low-cost markerless motion capture in the evaluation of dynamic movement for injury prevention.

  14. Assessing the Accuracy of the Tracer Dilution Method with Atmospheric Dispersion Modeling

    NASA Astrophysics Data System (ADS)

    Taylor, D.; Delkash, M.; Chow, F. K.; Imhoff, P. T.

    2015-12-01

    Landfill methane emissions are difficult to estimate due to limited observations and data uncertainty. The mobile tracer dilution method is a widely used and cost-effective approach for predicting landfill methane emissions. The method uses a tracer gas released on the surface of the landfill and measures the concentrations of both methane and the tracer gas downwind. Mobile measurements are conducted with a gas analyzer mounted on a vehicle to capture transects of both gas plumes. The idea behind the method is that if the measurements are performed far enough downwind, the methane plume from the large area source of the landfill and the tracer plume from a small number of point sources will be sufficiently well-mixed to behave similarly, and the ratio between the concentrations will be a good estimate of the ratio between the two emissions rates. The mobile tracer dilution method is sensitive to different factors of the setup such as placement of the tracer release locations and distance from the landfill to the downwind measurements, which have not been thoroughly examined. In this study, numerical modeling is used as an alternative to field measurements to study the sensitivity of the tracer dilution method and provide estimates of measurement accuracy. Using topography and wind conditions for an actual landfill, a landfill emissions rate is prescribed in the model and compared against the emissions rate predicted by application of the tracer dilution method. Two different methane emissions scenarios are simulated: homogeneous emissions over the entire surface of the landfill, and heterogeneous emissions with a hot spot containing 80% of the total emissions where the daily cover area is located. Numerical modeling of the tracer dilution method is a useful tool for evaluating the method without having the expense and labor commitment of multiple field campaigns. 
Factors tested include number of tracers, distance between tracers, distance from landfill to transect
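The concentration-ratio idea behind the tracer dilution method can be sketched directly. A minimal illustration, assuming plume-transect-integrated enhancements above background expressed in the same mass-equivalent units; the function name and numbers are invented, not the study's data:

```python
def tracer_dilution_emission(q_tracer, ch4_excess, tracer_excess):
    """Estimate the methane emission rate from a known tracer release rate.

    q_tracer      : tracer release rate (e.g. kg/h)
    ch4_excess    : transect-integrated CH4 enhancement above background
    tracer_excess : transect-integrated tracer enhancement above background

    The two enhancements must be in the same mass-equivalent units so
    their ratio is dimensionless; the method assumes that far enough
    downwind the two plumes are similarly mixed.
    """
    return q_tracer * (ch4_excess / tracer_excess)

# Invented example: 2.0 kg/h tracer release and a CH4/tracer
# enhancement ratio of 5 imply a 10 kg/h methane emission estimate.
print(tracer_dilution_emission(2.0, 150.0, 30.0))
```

The numerical-modeling study described above effectively tests how far this well-mixed assumption holds for different tracer placements and downwind distances.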

  15. Designing a Multi-Objective Multi-Support Accuracy Assessment of the 2001 National Land Cover Data (NLCD 2001) of the Conterminous United States

    EPA Science Inventory

    The database design and diverse application of NLCD 2001 pose significant challenges for accuracy assessment because numerous objectives are of interest, including accuracy of land cover, percent urban imperviousness, percent tree canopy, land-cover composition, and net change. ...

  16. Do students know what they know? Exploring the accuracy of students' self-assessments

    NASA Astrophysics Data System (ADS)

    Lindsey, Beth A.; Nagel, Megan L.

    2015-12-01

    We have conducted an investigation into how well students in introductory science classes (both physics and chemistry) are able to predict which questions they will or will not be able to answer correctly on an upcoming assessment. An examination of the data at the level of students' overall scores reveals results consistent with the Dunning-Kruger effect, in which low-performing students tend to overestimate their abilities, while high-performing students estimate their abilities more accurately. Similar results have been widely reported in the science education literature. Breaking results out by students' responses to individual questions, however, reveals that students of all ability levels have difficulty distinguishing questions which they are able to answer correctly from those that they are not able to answer correctly. These results have implications for the future study and reporting of students' metacognitive abilities.

  17. On the use of polymer gels for assessing the total geometrical accuracy in clinical Gamma Knife radiosurgery applications

    NASA Astrophysics Data System (ADS)

    Moutsatsos, A.; Karaiskos, P.; Petrokokkinos, L.; Zourari, K.; Pantelis, E.; Sakelliou, L.; Seimenis, I.; Constantinou, C.; Peraticou, A.; Georgiou, E.

    2010-11-01

    The nearly tissue-equivalent MRI properties of polymer gels and their unique ability to register 3D dose distributions were exploited to assess the total geometrical accuracy in clinical Gamma Knife applications, taking into account the combined effect of the unit's mechanical accuracy, dose delivery precision and the geometrical distortions inherent in the MR images used for irradiation planning. Comparison between planned and experimental data suggests that the MR-related distortions due to susceptibility effects dominate the total clinical geometrical accuracy, which was found to be within 1 mm. The dosimetric effect of the observed sub-millimetre uncertainties on single-shot GK irradiation plans was assessed using the target percentage coverage criterion, and a considerable target dose underestimation was found.

  18. Assessing the accuracy of image tracking algorithms on visible and thermal imagery using a deep restricted Boltzmann machine

    NASA Astrophysics Data System (ADS)

    Won, Stephen; Young, S. Susan

    2012-06-01

    Image tracking algorithms are critical to many applications, including image super-resolution and surveillance. However, there exists no method to independently verify the accuracy of a tracking algorithm without a supplied control or visual inspection. This paper proposes an image tracking framework that uses deep restricted Boltzmann machines, trained without external databases, to quantify the accuracy of image tracking algorithms without the use of ground truths. In this paper, the tracking algorithm combines flux tensor segmentation with four image registration methods: correlation, Horn-Schunck optical flow, Lucas-Kanade optical flow, and feature correspondence. The robustness of the deep restricted Boltzmann machine is assessed by comparing results from training with trusted and untrusted data. Evaluations show that the deep restricted Boltzmann machine is a valid mechanism for assessing the accuracy of a tracking algorithm without the use of ground truths.

  19. Accuracy assessment on the crop area estimating method based on RS sampling at national scale: a case study of China's rice area estimation assessment

    NASA Astrophysics Data System (ADS)

    Qian, Yonglan; Yang, Bangjie; Jiao, Xianfeng; Pei, Zhiyuan; Li, Xuan

    2008-08-01

    Remote sensing technology has been used in agricultural statistics since the early 1970s in developed countries and since the late 1970s in China. It has greatly improved efficiency by providing accurate, timely and credible information. However, agricultural monitoring using remote sensing has not yet been assessed with credible data in China, and its accuracy does not seem consistent and reliable to many users. The paper reviews different methods and the corresponding assessments of agricultural monitoring using remote sensing in developed countries and China, then assesses a crop area estimation method that uses Landsat TM remotely sensed data as sampling areas in Northeast China. The ground truth is gathered with the global positioning system, and 40 sampling areas are used to assess the classification accuracy. An error matrix is constructed, from which the accuracy is calculated. The producer accuracy, user accuracy and total accuracy are 89.53%, 95.37% and 87.02% respectively, and the correlation coefficient between the ground truth and the classification results is 0.96. A new error index δ is introduced; the average δ of the rice area estimation relative to the truth data is 0.084. δ measures how far the RS classification result deviates, positively or negatively, from the truth data.
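The error-matrix accuracies quoted above follow the standard confusion-matrix definitions. A minimal sketch, assuming rows are classified labels and columns are ground truth; the matrix values are invented for illustration, not the study's data:

```python
def error_matrix_accuracies(matrix, cls=0):
    """Producer's, user's and overall accuracy for class `cls`.

    `matrix[i][j]` counts samples classified as class i whose ground
    truth is class j.
    """
    total = sum(sum(row) for row in matrix)
    diag = sum(matrix[i][i] for i in range(len(matrix)))
    overall = diag / total
    producer = matrix[cls][cls] / sum(row[cls] for row in matrix)  # column total
    user = matrix[cls][cls] / sum(matrix[cls])                     # row total
    return producer, user, overall

# Invented 2-class example (class 0 = rice, class 1 = other)
m = [[86, 4],
     [10, 100]]
p, u, o = error_matrix_accuracies(m)
```

Producer's accuracy measures omission error (how much of the true rice area was found), user's accuracy measures commission error (how much of the mapped rice really is rice), and overall accuracy is the fraction of all samples classified correctly.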

  20. Accuracy of Panoramic Radiograph in Assessment of the Relationship Between Mandibular Canal and Impacted Third Molars

    PubMed Central

    Tantanapornkul, Weeraya; Mavin, Darika; Prapaiphittayakun, Jaruthai; Phipatboonyarat, Natnicha; Julphantong, Wanchanok

    2016-01-01

    Background: The relationship between impacted mandibular third molar and mandibular canal is important for removal of this tooth. Panoramic radiography is one of the commonly used diagnostic tools for evaluating the relationship of these two structures. Objectives: To evaluate the accuracy of panoramic radiographic findings in predicting direct contact between mandibular canal and impacted third molars on 3D digital images, and to define panoramic criterion in predicting direct contact between the two structures. Methods: Two observers examined panoramic radiographs of 178 patients (256 impacted mandibular third molars). Panoramic findings of interruption of mandibular canal wall, isolated or with darkening of third molar root, diversion of mandibular canal and narrowing of third molar root were evaluated for 3D digital radiography. Direct contact between mandibular canal and impacted third molars on 3D digital images was then correlated with panoramic findings. Panoramic criterion was also defined in predicting direct contact between the two structures. Results: Panoramic findings of interruption of mandibular canal wall, isolated or with darkening of third molar root were statistically significantly correlated with direct contact between mandibular canal and impacted third molars on 3D digital images (p < 0.005), and were defined as panoramic criteria in predicting direct contact between the two structures. Conclusion: Interruption of mandibular canal wall, isolated or with darkening of third molar root observed on panoramic radiographs were effective in predicting direct contact between mandibular canal and impacted third molars on 3D digital images. Panoramic radiography is one of the efficient diagnostic tools for pre-operative assessment of impacted mandibular third molars. PMID:27398105

  1. Assessing the dosimetric accuracy of MR-generated synthetic CT images for focal brain VMAT radiotherapy

    PubMed Central

    Paradis, Eric; Cao, Yue; Lawrence, Theodore S.; Tsien, Christina; Vineberg, Karen; Balter, James M.

    2015-01-01

    Purpose To assess the dosimetric accuracy of synthetic CT volumes generated from MRI data for focal brain radiotherapy. Methods A study was conducted on 12 patients with gliomas who underwent both MR and CT imaging as part of their simulation for external beam treatment planning. Synthetic CT (MRCT) volumes were generated from the MR images. The patients’ clinical treatment planning directives were used to create 12 individual Volumetric Modulated Arc Therapy (VMAT) plans, which were then optimized 10 times on each of their respective CT and MRCT-derived electron density maps. Dose metrics derived from optimization criteria, as well as monitor units and gamma analyses, were evaluated to quantify differences between the imaging modalities. Results Mean differences between Planning Target Volume (PTV) doses on MRCT and CT plans across all patients were 0.0% (range −0.1 to 0.2%) for D95%, 0.0% (−0.7 to 0.6%) for D5%, and −0.2% (−1.0 to 0.2%) for Dmax. MRCT plans showed no significant change in monitor units (−0.4%) compared to CT plans. Organs at risk (OARs) had an average Dmax difference of 0.0 Gy (−2.2 to 1.9 Gy) over 85 structures across all 12 patients, with no significant differences when calculated doses approached planning constraints. Conclusions Focal brain VMAT plans optimized on MRCT images show excellent dosimetric agreement with standard CT-optimized plans. PTVs show equivalent coverage, and OARs do not show any overdose. These results indicate that MRI-derived synthetic CT volumes can be used to support treatment planning of most patients treated for intracranial lesions. PMID:26581151

  2. The accuracy of a patient or parent-administered bleeding assessment tool administered in a paediatric haematology clinic.

    PubMed

    Lang, A T; Sturm, M S; Koch, T; Walsh, M; Grooms, L P; O'Brien, S H

    2014-11-01

    Classifying and describing bleeding symptoms is essential in the diagnosis and management of patients with mild bleeding disorders (MBDs). There has been increased interest in the use of bleeding assessment tools (BATs) to more objectively quantify the presence and severity of bleeding symptoms. To date, the administration of BATs has been performed almost exclusively by clinicians; the accuracy of a parent-proxy BAT has not been studied. Our objective was to determine the accuracy of a parent-administered BAT by measuring the level of agreement between parent and clinician responses to the Condensed MCMDM-1VWD Bleeding Questionnaire. Our cross-sectional study included children 0-21 years presenting to a haematology clinic for initial evaluation of a suspected MBD or follow-up evaluation of a previously diagnosed MBD. The parent/caregiver completed a modified version of the BAT; the clinician separately completed the BAT through interview. The mean parent-report bleeding score (BS) was 6.09 (range: -2 to 25); the mean clinician report BS was 4.54 (range: -1 to 17). The mean percentage of agreement across all bleeding symptoms was 78% (mean κ = 0.40; Gwet's AC1 = 0.74). Eighty percent of the population had an abnormal BS (defined as ≥2) when rated by parents and 76% had an abnormal score when rated by clinicians (86% agreement, κ = 0.59, Gwet's AC1 = 0.79). While parents tended to over-report bleeding as compared to clinicians, overall, BSs were similar between groups. These results lend support for further study of a modified proxy-report BAT as a clinical and research tool.

  3. Assessment of the accuracy of coupled cluster perturbation theory for open-shell systems. II. Quadruples expansions.

    PubMed

    Eriksen, Janus J; Matthews, Devin A; Jørgensen, Poul; Gauss, Jürgen

    2016-05-21

    We extend our assessment of the potential of perturbative coupled cluster (CC) expansions for a test set of open-shell atoms and organic radicals to the description of quadruple excitations. Namely, the second- through sixth-order models of the recently proposed CCSDT(Q-n) quadruples series [J. J. Eriksen et al., J. Chem. Phys. 140, 064108 (2014)] are compared to the prominent CCSDT(Q) and ΛCCSDT(Q) models. From a comparison of the models in terms of their recovery of total CC singles, doubles, triples, and quadruples (CCSDTQ) energies, we find that the performance of the CCSDT(Q-n) models is independent of the reference used (unrestricted or restricted (open-shell) Hartree-Fock), in contrast to the CCSDT(Q) and ΛCCSDT(Q) models, for which the accuracy is strongly dependent on the spin of the molecular ground state. By further comparing the ability of the models to recover relative CCSDTQ total atomization energies, the discrepancy between them is found to be even more pronounced, stressing how a balanced description of both closed- and open-shell species, as found in the CCSDT(Q-n) models, is indeed of paramount importance if any perturbative CC model is to be of chemical relevance for high-accuracy applications. In particular, the third-order CCSDT(Q-3) model is found to offer an encouraging alternative to the existing choices of quadruples models used in modern computational thermochemistry, since the model is still only of moderate cost, albeit markedly more costly than, e.g., the CCSDT(Q) and ΛCCSDT(Q) models.

  4. Assessment of the accuracy of coupled cluster perturbation theory for open-shell systems. II. Quadruples expansions

    NASA Astrophysics Data System (ADS)

    Eriksen, Janus J.; Matthews, Devin A.; Jørgensen, Poul; Gauss, Jürgen

    2016-05-01

    We extend our assessment of the potential of perturbative coupled cluster (CC) expansions for a test set of open-shell atoms and organic radicals to the description of quadruple excitations. Namely, the second- through sixth-order models of the recently proposed CCSDT(Q-n) quadruples series [J. J. Eriksen et al., J. Chem. Phys. 140, 064108 (2014)] are compared to the prominent CCSDT(Q) and ΛCCSDT(Q) models. From a comparison of the models in terms of their recovery of total CC singles, doubles, triples, and quadruples (CCSDTQ) energies, we find that the performance of the CCSDT(Q-n) models is independent of the reference used (unrestricted or restricted (open-shell) Hartree-Fock), in contrast to the CCSDT(Q) and ΛCCSDT(Q) models, for which the accuracy is strongly dependent on the spin of the molecular ground state. By further comparing the ability of the models to recover relative CCSDTQ total atomization energies, the discrepancy between them is found to be even more pronounced, stressing how a balanced description of both closed- and open-shell species—as found in the CCSDT(Q-n) models—is indeed of paramount importance if any perturbative CC model is to be of chemical relevance for high-accuracy applications. In particular, the third-order CCSDT(Q-3) model is found to offer an encouraging alternative to the existing choices of quadruples models used in modern computational thermochemistry, since the model is still only of moderate cost, albeit markedly more costly than, e.g., the CCSDT(Q) and ΛCCSDT(Q) models.

  5. Performance Assessment of Passive Hearing Protection Devices

    DTIC Science & Technology

    2014-10-24

    AFRL-RH-WP-TR-2014-0148: Performance Assessment of Passive Hearing Protection Devices, by Hilary L. Gallagher and Richard L. McKinley (contract FA8650-14-D-6501, program element 62202F). ...is essential. Passive hearing protectors, capable of attenuating both continuous and impulsive noise, have been designed to reduce the risk of

  6. Enabling performance skills: Assessment in engineering education

    NASA Astrophysics Data System (ADS)

    Ferrone, Jenny Kristina

    Current reform in engineering education is part of a national trend emphasizing student learning as well as accountability in instruction. Assessing student performance to demonstrate accountability has become a necessity in academia. In newly adopted criteria proposed by the Accreditation Board for Engineering and Technology (ABET), undergraduates are expected to demonstrate proficiency in outcomes considered essential for graduating engineers. The case study was designed as a formative evaluation of freshman engineering students to assess the perceived effectiveness of performance skills in a design laboratory environment. The mixed methodology used both quantitative and qualitative approaches to assess students' performance skills and congruency among the respondents, based on individual, team, and faculty perceptions of team effectiveness in three ABET areas: Communication Skills, Design Skills, and Teamwork. The findings of the research were used to address future use of the assessment tool and process. The results of the study found statistically significant differences in perceptions of Teamwork Skills (p < .05). When groups composed of students and professors were compared, professors were less likely to perceive students' teaming skills as effective. The study indicated the need to: (1) improve non-technical performance skills, such as teamwork, among freshman engineering students; (2) incorporate feedback into the learning process; (3) strengthen the assessment process with a follow-up plan that specifically targets performance skill deficiencies; and (4) integrate the assessment instrument and practice with ongoing curriculum development. The findings generated by this study provide engineering departments engaged in assessment activity with an opportunity to reflect on, refine, and develop their programs. The study also extends research on ABET competencies of engineering students in an under-investigated topic of factors correlated with team

  7. Documenting Student Performance through Effective Performance Assessments: Workshop Summary. Horticulture.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus. Agricultural Education Curriculum Materials Service.

    This document contains materials about and from a workshop that was conducted to help Ohio horticulture teachers learn to document student competence through effective performance assessments. The document begins with background information about the workshop and a list of workshop objectives. Presented next is a key to the 40 performance…

  8. An improved multivariate analytical method to assess the accuracy of acoustic sediment classification maps.

    NASA Astrophysics Data System (ADS)

    Biondo, M.; Bartholomä, A.

    2014-12-01

    High-resolution hydroacoustic methods have been successfully employed for the detailed classification of sedimentary habitats. The fine-scale mapping of very heterogeneous, patchy sedimentary facies, and the compound effect of multiple non-linear physical processes on the acoustic signal, cause the classification of backscatter images to be subject to a great level of uncertainty. Standard procedures for assessing the accuracy of acoustic classification maps are not yet established. This study applies different statistical techniques to automatically classified acoustic images with the aim of (i) quantifying the ability of backscatter to resolve grain size distributions, (ii) understanding complex patterns influenced by factors other than grain size variations, and (iii) designing innovative, repeatable statistical procedures to spatially assess classification uncertainties. A high-frequency (450 kHz) sidescan sonar survey, carried out in 2012 in the shallow upper-mesotidal inlet of the Jade Bay (German North Sea), made it possible to map 100 km2 of surficial sediment at a resolution and coverage never before acquired in the area. The backscatter mosaic was ground-truthed using a large dataset of sediment grab sample information (2009-2011). Multivariate procedures were employed to model the relationship between acoustic descriptors and granulometric variables in order to evaluate the correctness of acoustic class allocation and sediment group separation. Complex patterns in the acoustic signal appeared to be controlled by the combined effect of surface roughness, sorting and mean grain size variations. The area is dominated by silt and fine sand in very mixed compositions; in this fine-grained matrix, the percentage of gravel turned out to be the prevailing factor affecting backscatter variability. In the absence of coarse material, sorting mostly affected the ability to detect gradual but significant changes in seabed types. Misclassification due to temporal discrepancies

  9. Measuring the accuracy of diagnostic imaging in symptomatic breast patients: team and individual performance

    PubMed Central

    Britton, P; Warwick, J; Wallis, M G; O'Keeffe, S; Taylor, K; Sinnatamby, R; Barter, S; Gaskarth, M; Duffy, S W; Wishart, G C

    2012-01-01

    Objective The combination of mammography and/or ultrasound remains the mainstay of current breast cancer diagnosis. The aims of this study were to evaluate the reliability of standard breast imaging and individual radiologist performance, and to explore ways in which these can be improved. Methods A total of 16 603 separate assessment episodes were undertaken on 13 958 patients referred to a specialist symptomatic breast clinic over a 6 year period. Each mammogram and ultrasound was reported prospectively using a five-point reporting scale and compared with the final outcome. Results Mammographic sensitivity, specificity and receiver operating characteristic (ROC) curve area were 66.6%, 99.7% and 0.83, respectively. The sensitivity of mammography improved dramatically with increasing age, from 47.6 to 86.7%. Overall ultrasound sensitivity, specificity and ROC area were 82.0%, 99.3% and 0.91, respectively. The sensitivity of ultrasound also improved dramatically with increasing age, from 66.7 to 97.1%. Breast density also had a profound effect on imaging performance, with mammographic sensitivity falling from 90.1 to 45.9% and ultrasound sensitivity from 95.2 to 72.0% with increasing breast density. Conclusion Sensitivity ranged widely between radiologists (53.1–74.1% for mammography and 67.1–87.0% for ultrasound) and was strongly correlated with radiologist experience. Radiologists with less experience (and lower sensitivity) were relatively more likely to report a cancer as indeterminate/uncertain. To improve radiology reporting performance, the sensitivity of cancer reporting should be closely monitored, with regular feedback from needle biopsy results and discussion of reporting classification with colleagues. PMID:21224304
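    The sensitivity, specificity and ROC-area figures above follow from comparing each graded imaging report against the final outcome. As a minimal sketch of the underlying arithmetic (the scores, outcomes and threshold below are invented toy values, not the study's data), with ROC area computed via the equivalent Mann-Whitney rank statistic:

```python
# Toy sketch of the report-scale metrics used above: sensitivity and
# specificity from dichotomizing a five-point imaging score, plus ROC
# area via the Mann-Whitney rank statistic. All data below are invented.

def sens_spec(scores, truths, threshold):
    """Score >= threshold counts as a positive (suspicious) report."""
    tp = sum(1 for s, t in zip(scores, truths) if t and s >= threshold)
    fn = sum(1 for s, t in zip(scores, truths) if t and s < threshold)
    tn = sum(1 for s, t in zip(scores, truths) if not t and s < threshold)
    fp = sum(1 for s, t in zip(scores, truths) if not t and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

def roc_area(scores, truths):
    """Probability that a random cancer scores higher than a random
    non-cancer, counting ties as one half (equals the ROC curve area)."""
    pos = [s for s, t in zip(scores, truths) if t]
    neg = [s for s, t in zip(scores, truths) if not t]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [1, 2, 2, 3, 4, 5, 5, 3, 1, 4]   # five-point reports (invented)
truths = [0, 0, 0, 0, 1, 1, 1, 1, 0, 0]   # 1 = cancer at final outcome
se, sp = sens_spec(scores, truths, threshold=4)
auc = roc_area(scores, truths)
```

    Dichotomizing the scale at each possible threshold traces out the ROC curve; the rank formulation gives its area without explicit thresholding.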

  10. A TECHNIQUE FOR ASSESSING THE ACCURACY OF SUB-PIXEL IMPERVIOUS SURFACE ESTIMATES DERIVED FROM LANDSAT TM IMAGERY

    EPA Science Inventory

    We developed a technique for assessing the accuracy of sub-pixel derived estimates of impervious surface extracted from LANDSAT TM imagery. We utilized spatially coincident sub-pixel derived impervious surface estimates, high-resolution planimetric GIS data, vector-to-r...

  11. How Nonrecidivism Affects Predictive Accuracy: Evidence from a Cross-Validation of the Ontario Domestic Assault Risk Assessment (ODARA)

    ERIC Educational Resources Information Center

    Hilton, N. Zoe; Harris, Grant T.

    2009-01-01

    Prediction effect sizes such as ROC area are important for demonstrating a risk assessment's generalizability and utility. How a study defines recidivism might affect predictive accuracy. Nonrecidivism is problematic when predicting specialized violence (e.g., domestic violence). The present study cross-validates the ability of the Ontario…

  12. Disease severity assessment in epidemiological studies: accuracy and reliability of visual estimates of Septoria leaf blotch (SLB) in winter wheat

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The accuracy and reliability of visual assessments of SLB severity by raters (i.e. one plant pathologist with extensive experience and three other raters trained prior to field observations using standard area diagrams and DISTRAIN) was determined by comparison with assumed actual values obtained by...

  13. Diagnostic Accuracy of Computer-Aided Assessment of Intranodal Vascularity in Distinguishing Different Causes of Cervical Lymphadenopathy.

    PubMed

    Ying, Michael; Cheng, Sammy C H; Ahuja, Anil T

    2016-08-01

    Ultrasound is useful in assessing cervical lymphadenopathy. Advancement of computer science technology allows accurate and reliable assessment of medical images. The aim of the study described here was to evaluate the diagnostic accuracy of computer-aided assessment of the intranodal vascularity index (VI) in differentiating the various common causes of cervical lymphadenopathy. Power Doppler sonograms of 347 patients (155 with metastasis, 23 with lymphoma, 44 with tuberculous lymphadenitis, 125 reactive) with palpable cervical lymph nodes were reviewed. Ultrasound images of cervical nodes were evaluated, and the intranodal VI was quantified using a customized computer program. The diagnostic accuracy of using the intranodal VI to distinguish different disease groups was evaluated and compared. Metastatic and lymphomatous lymph nodes tend to be more vascular than tuberculous and reactive lymph nodes. The intranodal VI had the highest diagnostic accuracy in distinguishing metastatic and tuberculous nodes with a sensitivity of 80%, specificity of 73%, positive predictive value of 91%, negative predictive value of 51% and overall accuracy of 68% when a cutoff VI of 22% was used. Computer-aided assessment provides an objective and quantitative way to evaluate intranodal vascularity. The intranodal VI is a useful parameter in distinguishing certain causes of cervical lymphadenopathy and is particularly useful in differentiating metastatic and tuberculous lymph nodes. However, it has limited value in distinguishing lymphomatous nodes from metastatic and reactive nodes.
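    The cutoff-based figures reported above (sensitivity, specificity, PPV, NPV and overall accuracy at a VI cutoff of 22%) all derive from a single 2x2 table. A hedged sketch, assuming invented VI values and labels rather than the study's data, with "positive" meaning metastatic:

```python
# Toy sketch: diagnostic metrics for a vascularity index (VI) cutoff,
# as in the 2x2-table analysis described above. VI values and labels
# below are invented; "positive" here means metastatic.

def cutoff_metrics(vi_values, is_metastatic, cutoff):
    """Classify VI >= cutoff as metastatic and tabulate the 2x2 table."""
    tp = fp = tn = fn = 0
    for vi, met in zip(vi_values, is_metastatic):
        if vi >= cutoff:
            if met:
                tp += 1
            else:
                fp += 1
        else:
            if met:
                fn += 1
            else:
                tn += 1
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

vi = [30, 25, 10, 40, 15, 5, 35, 25]   # intranodal VI in % (invented)
met = [1, 1, 0, 1, 1, 0, 1, 0]         # 1 = metastatic node
m = cutoff_metrics(vi, met, cutoff=22)
```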

  14. Accuracy VS Performance: Finding the Sweet Spot in the Geospatial Resolution of Satellite Metadata

    NASA Astrophysics Data System (ADS)

    Baskin, W. E.; Mangosing, D. C.; Rinsland, P. L.

    2010-12-01

    NASA's Atmospheric Science Data Center (ASDC) and the Cloud-Aerosol LIDAR and Infrared Pathfinder Satellite Observation (CALIPSO) team at the NASA Langley Research Center recently collaborated in the development of a new CALIPSO Search and Subset web application. The web application comprises three elements: (1) a PostGIS-enabled PostgreSQL database system, used to store temporal and geospatial metadata from CALIPSO's LIDAR, Infrared, and Wide Field Camera datasets; (2) the SciFlo engine, a data-flow engine that enables semantic, scientific data-flow executions in a grid or clustered network computational environment; and (3) a PHP-based web application that incorporates Web 2.0/AJAX technologies in its interface. The search portion of the web application leverages the geodetic indexing and search capabilities that became available in the February 2010 release of PostGIS version 1.5. This presentation highlights the lessons learned in experimenting with various geospatial resolutions of CALIPSO's LIDAR sensor ground track metadata. Details of the various spatial resolutions, spatial database schema designs, spatial indexing strategies, and performance results will be discussed, with a focus on the spatial resolutions of ground track metadata that optimized search time and search accuracy in the CALIPSO Search and Subset Application. The CALIPSO satellite provides new insight into the role that clouds and atmospheric aerosols (airborne particles) play in regulating Earth's weather, climate, and air quality. CALIPSO combines an active LIDAR instrument with passive infrared and visible imagers to probe the vertical structure and properties of thin clouds and aerosols over the globe. The CALIPSO satellite was launched on April 28, 2006 and is part of the A-train satellite constellation. The ASDC in Langley's Science Directorate leads NASA's program for the processing, archival and

  15. Accuracy assessment of planimetric large-scale map data for decision-making

    NASA Astrophysics Data System (ADS)

    Doskocz, Adam

    2016-06-01

    This paper presents decision-making risk estimation based on planimetric large-scale map data, i.e. data sets or databases used to create planimetric maps at scales of 1:5,000 or larger. The studies were conducted on four sets of large-scale map data. Errors in the map data were used for a risk assessment of decisions about the localization of objects, e.g. in land-use planning for the realization of investments. An analysis was performed on a large statistical sample of shift vectors of control points, which were identified with the position errors of these points (the errors of the map data). Empirical cumulative distribution function models for decision-making risk assessment were established, in the form of polynomial equations. The degree of compatibility of each polynomial with the empirical data was evaluated using the convergence coefficient and the indicator of the mean relative compatibility of the model. The application of an empirical cumulative distribution function allows the probability of the occurrence of position errors of points in a database to be estimated. The estimated decision-making risk is thus represented by the probability of the errors of the points stored in the database.
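    The core of the method is querying an empirical cumulative distribution function of control-point shift-vector lengths for the probability that a stored point's error stays within a tolerance (the paper additionally fits polynomial models to this ECDF). A minimal sketch with synthetic shift lengths and a hypothetical 0.30 m tolerance:

```python
import bisect

# Sketch of the method's core: an empirical cumulative distribution
# function (ECDF) of control-point shift-vector lengths (position
# errors, in metres). The shift lengths and the 0.30 m tolerance are
# synthetic, not the paper's data.

def ecdf(sample):
    """Return F where F(x) is the fraction of observations <= x."""
    ordered = sorted(sample)
    n = len(ordered)
    def f(x):
        return bisect.bisect_right(ordered, x) / n
    return f

shifts = [0.05, 0.08, 0.10, 0.12, 0.15, 0.20, 0.25, 0.30, 0.45, 0.60]
f = ecdf(shifts)

p_ok = f(0.30)     # probability the position error is within tolerance
risk = 1.0 - p_ok  # decision-making risk of exceeding the tolerance
```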

  16. Accuracy of forced oscillation technique to assess lung function in geriatric COPD population

    PubMed Central

    Tse, Hoi Nam; Tseng, Cee Zhung Steven; Wong, King Ying; Yee, Kwok Sang; Ng, Lai Yun

    2016-01-01

    Introduction Performing lung function tests in geriatric patients has never been an easy task. With well-established evidence indicating impaired small airway function and air trapping in geriatric patients with COPD, utilizing the forced oscillation technique (FOT) as a supplementary tool may aid in the assessment of lung function in this population. Aims To study the use of FOT in the assessment of airflow limitation and air trapping in geriatric COPD patients. Study design A cross-sectional study in a public hospital in Hong Kong. ClinicalTrials.gov ID: NCT01553812. Methods Geriatric patients who had spirometry-diagnosed COPD were recruited, and both FOT and plethysmography were performed. "Resistance" (R) and "reactance" (X) FOT parameters were compared to plethysmography for the assessment of air trapping and airflow limitation. Results In total, 158 COPD subjects were recruited, with a mean age of 71.9±0.7 years and a forced expiratory volume in 1 second (FEV1) of 53.4±1.7% predicted. FOT values had a good correlation (r=0.4–0.7) with spirometric data. In general, X (reactance) values were better than R (resistance) values, showing a higher correlation with spirometric data for airflow limitation (r=0.07–0.49 for R vs 0.61–0.67 for X), small airway function (r=0.05–0.48 vs 0.56–0.65), and lung volume (r=0.12–0.29 vs 0.43–0.49). In addition, resonance frequency (Fres) and frequency dependence (FDep) could identify severe COPD (FEV1 <50% predicted) with high sensitivity (0.76, 0.71) and specificity (0.72, 0.64) (area under the curve: 0.8 and 0.77, respectively). Moreover, X values could stratify different severities of air trapping, while R values could not. Conclusion FOT may act as a simple and accurate tool for assessing the severity of airflow limitation, small and central airway function, and air trapping in geriatric patients with COPD who have difficulty performing conventional lung function tests. Moreover, reactance

  17. Judgment of Learning, Monitoring Accuracy, and Student Performance in the Classroom Context

    ERIC Educational Resources Information Center

    Cao, Li; Nietfeld, John L.

    2005-01-01

    As a key component in self-regulated learning, the ability to accurately judge the status of learning enables students to become strategic and effective in the learning process. Weekly monitoring exercises were used to improve college students' (N = 94) accuracy of judgment of learning over a 14-week educational psychology course. A time series…

  18. Work Performance Ratings: Cognitive Modeling and Feedback Principles in Rater Accuracy Training

    DTIC Science & Technology

    1990-02-01

    ...task involves observation in addition to evaluation (Thornton & Zorich, 1980), and in more complicated rating tasks, additional cognitive processes... Personnel Psychology, 31, 853-888. Thornton, G.C., III, & Zorich, S. (1980). Training to improve observer accuracy. Journal of Applied

  19. Completion Rates and Accuracy of Performance Under Fixed and Variable Token Exchange Periods.

    ERIC Educational Resources Information Center

    McLaughlin, T. F.; Malaby, J. E.

    This research investigated the effects of employing fixed, variable, and extended token exchange periods for back-ups on the completion and accuracy of daily assignments for an entire fifth- and sixth-grade class. The results indicated that, in general, a higher percentage of assignments was completed when the number of days between point exchanges…

  20. Accuracy and precision of the three-dimensional assessment of the facial surface using a 3-D laser scanner.

    PubMed

    Kovacs, L; Zimmermann, A; Brockmann, G; Baurecht, H; Schwenzer-Zimmerer, K; Papadopulos, N A; Papadopoulos, M A; Sader, R; Biemer, E; Zeilhofer, H F

    2006-06-01

    Three-dimensional (3-D) recording of the surface of the human body or of anatomical areas has gained importance in many medical specialties. It is therefore important to determine scanner precision and accuracy in defined medical applications and to establish standards for the recording procedure. Here we evaluated the precision and accuracy of 3-D assessment of the facial area with the Minolta Vivid 910 3D laser scanner. We also investigated the influence on final results of factors related to the recording procedure and the processing of scanner data, including lighting, alignment of scanner and object, the examiner, and the software used to convert measurements into virtual images. To assess scanner accuracy, we compared scanner data with manual measurements on a dummy. Fewer than 7% of all results obtained with the scanner method fell outside an error range of 2 mm relative to the corresponding reference measurements. Accuracy thus proved good enough to satisfy the requirements of numerous clinical applications. Moreover, the experiments with the dummy yielded valuable information for optimizing recording parameters. Under defined conditions, the precision and accuracy of surface models of the human face recorded with the Minolta Vivid 910 3D scanner can presumably be enhanced further. Future studies will involve verification of our findings in human subjects. The current findings indicate that the Minolta Vivid 910 3D scanner might be used with benefit in medicine when recording the 3-D surface structures of the face.

  1. Assessment of the accuracy of global geodetic satellite laser ranging observations 1993-2013

    NASA Astrophysics Data System (ADS)

    Appleby, Graham; Rodriguez, Jose

    2014-05-01

    We continue efforts to estimate the intrinsic accuracy of range measurements made by the major satellite laser ranging stations of the ILRS Network using normal point observations of the primary geodetic satellites LAGEOS and LAGEOS-II. In a novel, but risky, approach we carry out weekly, loosely constrained, reference frame solutions for satellite initial state vectors, station coordinates and daily EOPs (X-pole, Y-pole and LoD), as well as estimating range bias for all the stations. We apply known range errors a priori from the table developed and maintained through the efforts of the ILRS Analysis Working Group, and apply station- and time-specific satellite centre-of-mass corrections (Appleby and Otsubo, 2014); both corrections are currently implemented in the standard ILRS reference frame products. Our approach, solving simultaneously for station coordinates and possible range bias for all the stations, has the strength that any bias results are independent of the coordinates taken, for example, from ITRF2008; thus the approach has the potential to discover bias that may have become absorbed primarily into station height had the coordinates been determined on the assumption of zero bias. A serious complication of the approach is that correlations will inevitably exist between station height and range bias. However, for the major stations of the Network, and using LAGEOS and LAGEOS-II observations simultaneously in our weekly solutions, we are developing techniques and testing their sensitivity in performing a partial separation between these parameters, at the expense of an increase in the variance of the stations' height time series. In this paper we discuss the results in terms of potential impact on coordinate solutions, including the reference frame scale, and in the context of preparations for ITRF2013.

  2. Accuracy of qualitative analysis for assessment of skilled baseball pitching technique.

    PubMed

    Nicholls, Rochelle; Fleisig, Glenn; Elliott, Bruce; Lyman, Stephen; Osinski, Edmund

    2003-07-01

    Baseball pitching must be performed with correct technique if injuries are to be avoided and performance maximized. High-speed video analysis is accepted as the most accurate and objective method for evaluating baseball pitching mechanics. The aim of this research was to develop an equivalent qualitative analysis method for use with standard video equipment. A qualitative analysis protocol (QAP) was developed for 24 kinematic variables identified as important to pitching performance. Twenty male baseball pitchers were videotaped using 60 Hz camcorders, and their technique was evaluated using the QAP by two independent raters. Each pitcher was also assessed using a 6-camera 200 Hz Motion Analysis system (MAS). Four QAP variables (22%) showed significant similarity with MAS results, and inter-rater reliability analysis showed agreement on 33% of QAP variables. It was concluded that a complete and accurate profile of an athlete's pitching mechanics cannot be made using the QAP in its current form, but it is possible that such simple forms of biomechanical analysis could yield accurate results before 3-D methods become necessary.

  3. Dynamic Accuracy of GPS Receivers for Use in Health Research: A Novel Method to Assess GPS Accuracy in Real-World Settings

    PubMed Central

    Schipperijn, Jasper; Kerr, Jacqueline; Duncan, Scott; Madsen, Thomas; Klinker, Charlotte Demant; Troelsen, Jens

    2014-01-01

    The emergence of portable global positioning system (GPS) receivers over the last 10 years has provided researchers with a means to objectively assess spatial position in free-living conditions. However, the use of GPS in free-living conditions is not without challenges, and the aim of this study was to test the dynamic accuracy of a portable GPS device under real-world environmental conditions, for four modes of transport and using three data collection intervals. We selected four routes on different bearings, passing through a variety of environmental conditions in the City of Copenhagen, Denmark, to test the dynamic accuracy of the Qstarz BT-Q1000XT GPS device. Each route consisted of a walking, bicycle, and vehicle lane in each direction. The actual width of each walking, cycling, and vehicle lane was digitized as accurately as possible using ultra-high-resolution aerial photographs as background. For each trip, we calculated the percentage of points that actually fell within the lane polygon, and within 2.5, 5, and 10 m buffers respectively, as well as the mean and median error in meters. Our results showed that 49.6% of all ≈68,000 GPS points fell within 2.5 m of the expected location, 78.7% fell within 10 m, and the median error was 2.9 m. The median error was 3.9 m for walking trips, 2.0 m for bicycle trips, 1.5 m for bus trips, and 0.5 m for car trips. The different area types showed considerable variation in the median error: 0.7 m in open areas, 2.6 m in half-open areas, and 5.2 m in urban canyons. The dynamic spatial accuracy of the tested device is not perfect, but we feel that it is within acceptable limits for larger population studies. Longer recording periods for a larger population are likely to reduce the potentially negative effects of measurement inaccuracy. Furthermore, special care should be taken when the environment in which the study takes place could compromise the GPS signal. PMID:24653984
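    The accuracy summary above reduces to simple order statistics once each GPS fix has been assigned a distance to its expected location. A hedged sketch with invented point errors (a real analysis would derive these from a GIS point-in-polygon or distance test against the digitized lanes):

```python
from statistics import median

# Sketch of the accuracy summary used above: given each GPS fix's
# distance (in metres) to its expected location, report the share of
# fixes within each buffer and the median error. Distances are invented.

def buffer_summary(errors_m, buffers=(2.5, 5.0, 10.0)):
    n = len(errors_m)
    within = {b: sum(e <= b for e in errors_m) / n for b in buffers}
    return within, median(errors_m)

errors = [0.5, 1.2, 2.0, 2.9, 3.5, 4.8, 6.0, 9.5, 12.0, 20.0]
within, med = buffer_summary(errors)
```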

  5. Dehydration: physiology, assessment, and performance effects.

    PubMed

    Cheuvront, Samuel N; Kenefick, Robert W

    2014-01-01

    This article provides a comprehensive review of dehydration assessment and presents a unique evaluation of the dehydration and performance literature. The importance of osmolality and volume is emphasized when discussing the physiology, assessment, and performance effects of dehydration. The underappreciated physiologic distinction between a loss of hypo-osmotic body water (intracellular dehydration) and an iso-osmotic loss of body water (extracellular dehydration) is presented and argued to be the single most essential aspect of dehydration assessment. The importance of diagnostic and biological variation analyses to dehydration assessment methods is reviewed, and their use in gauging the true potential of any dehydration assessment method is highlighted. The necessity of establishing proper baselines is discussed, as is the magnitude of dehydration required to elicit reliable and detectable osmotic or volume-mediated compensatory physiologic responses. The discussion of physiologic responses further helps inform and explain our analysis of the literature suggesting a ≥2% dehydration threshold for impaired endurance exercise performance mediated by volume loss. In contrast, no clear threshold or plausible mechanism supports the marginal, but potentially important, impairment in strength and power observed with dehydration. Similarly, the potential for dehydration to impair cognition appears small and related primarily to distraction or discomfort. The impact of dehydration on any particular sport skill or task is therefore likely dependent upon the makeup of the task itself (e.g., endurance, strength, cognitive, and motor skill).

  6. The Uses and Limits of Performance Assessment.

    ERIC Educational Resources Information Center

    Eisner, Elliot W.

    1999-01-01

    Performance assessment can help educators develop ways to reveal individual students' distinctive features and secure information about learning. These opportunities will be wasted unless the public's attitudes and expectations toward schooling are changed. Attitudes cannot change without revising policies inhibiting school children's educational…

  7. The Visible Hand of Research Performance Assessment

    ERIC Educational Resources Information Center

    Hamann, Julian

    2016-01-01

    Far from enabling a governance of universities by the invisible hand of market forces, research performance assessments do not just measure differences in research quality, but themselves produce visible effects in the form of a stratification and standardization of disciplines. The article illustrates this with a case study of UK history departments…

  8. Comparability of Two Cognitive Performance Assessment Systems

    DTIC Science & Technology

    1992-02-01

    ...presentation and subject response characteristics of performance assessment batteries (PABs) which are implemented on the different computer systems

  9. A Litmus Test for Performance Assessment.

    ERIC Educational Resources Information Center

    Finson, Kevin D.; Beaver, John B.

    1992-01-01

    Presents 10 guidelines for developing performance-based assessment items, along with a sample activity developed from the guidelines. The activity tests students' ability to observe, classify, and infer, using red and blue litmus paper, a pH-range finder, vinegar, ammonia, an unknown solution, distilled water, and paper towels. (PR)

  10. Questioning the Technical Quality of Performance Assessment.

    ERIC Educational Resources Information Center

    Baker, Eva L.

    1993-01-01

    Ongoing research finds that alternatives to formal tests carry hidden costs and require trade-offs. National Center for Research on Evaluation, Standards, and Student Testing (CRESST) is studying how performance assessments work so that they may be designed to help students reach their highest potential. This article raises priority questions,…

  11. Accuracy Assessment of Geometrical Elements for Setting-Out in Horizontal Plane of Conveying Chambers at the Bauxite Mine "KOSTURI" Srebrenica

    NASA Astrophysics Data System (ADS)

    Milutinović, Aleksandar; Ganić, Aleksandar; Tokalić, Rade

    2014-03-01

    Setting-out of objects on the exploitation field of a mine, both in surface mining and in underground mines, is governed by the specified setting-out accuracy of the reference points that best define the spatial position of the object being projected. In order to achieve the specified accuracy, it is necessary to perform an a priori assessment of the accuracy of the parameters that are to be used in setting-out. The a priori accuracy assessment supports: verification of the quality of the geometrical setting-out elements specified in the layout; definition of the accuracy for setting-out of geometrical elements; selection of the setting-out method; and selection of the type and class of instruments and tools that need to be applied in order to achieve the predefined accuracy. The paper presents the accuracy assessment of geometrical elements for setting-out of the main haul gallery, the haul downcast, and the helical conveying downcasts, shaped as an inclined helix, in the horizontal plane, using the example of the underground bauxite mine »Kosturi«, Srebrenica. [Polish parallel abstract, translated:] Setting-out of objects in a mine's extraction field, in both underground and open-pit mines, depends largely on the specified accuracy of setting out the reference points from which the spatial position of the remaining objects is subsequently determined. To achieve the assumed accuracy, a preliminary analysis should be carried out of the accuracy of estimating the parameters that will then be used in the setting-out process. Based on the results of the preliminary accuracy analysis, the quality of the geometrical setting-out of the elements marked on the sketch is verified; taking these results into account, an appropriate setting-out method and the type and class of the instruments and tools to be used should be selected so as to achieve the assumed level of accuracy. The paper presents an assessment of the accuracy of setting-out of geometrical elements for the main haulage gallery

  12. Accuracy Assessment of Underwater Photogrammetric Three Dimensional Modelling for Coral Reefs

    NASA Astrophysics Data System (ADS)

    Guo, T.; Capra, A.; Troyer, M.; Gruen, A.; Brooks, A. J.; Hench, J. L.; Schmitt, R. J.; Holbrook, S. J.; Dubbini, M.

    2016-06-01

    Recent advances in the automation of photogrammetric 3D modelling software packages have stimulated interest in reconstructing highly accurate 3D object geometry in unconventional environments, such as underwater, using simple and low-cost camera systems. The accuracy of underwater 3D modelling is affected by more parameters than in the single-medium (in-air) case. This study is part of a larger project on 3D measurement of temporal change in coral cover in tropical waters. It compares the accuracy of 3D point clouds generated from images acquired with a system camera mounted in an underwater housing and with the popular GoPro cameras, respectively. A precisely measured calibration frame was placed in the target scene in order to provide accurate control information and to quantify the errors of the modelling procedure. In addition, several objects (cinder blocks) of various shapes were arranged in air and underwater, and 3D point clouds were generated by automated image matching. These were further used to examine the relative accuracy of point cloud generation by comparing the point clouds of the individual objects with the same objects measured by the system camera in air (the best possible values). Given a working distance of about 1.5 m, the GoPro camera can achieve a relative accuracy of 1.3 mm in air and 2.0 mm in water. The system camera achieved an accuracy of 1.8 mm in water, which meets our requirements for coral measurement in this system.
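    Relative accuracy between two registered point clouds of the same object is commonly summarized by nearest-neighbour distances from one cloud to the other; the sketch below (synthetic coordinates, brute-force search) reports their RMS value. This is an illustrative stand-in, not the authors' exact comparison procedure, and large clouds would need a spatial index such as a k-d tree.

```python
import math

# Sketch: RMS of nearest-neighbour distances from a test point cloud to
# a reference cloud, a common proxy for relative accuracy once the two
# clouds are registered. Coordinates are synthetic; the brute-force
# search is O(n*m), so real clouds would need a k-d tree or similar.

def nn_rms(test_cloud, reference_cloud):
    def nearest(p):
        return min(math.dist(p, q) for q in reference_cloud)
    dists = [nearest(p) for p in test_cloud]
    return math.sqrt(sum(d * d for d in dists) / len(dists))

reference = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
test = [(0.0, 0.0, 0.1), (1.0, 0.1, 0.0)]
rms = nn_rms(test, reference)
```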

  13. Accuracy Assessment of Using Rapid Prototyping Drill Templates for Atlantoaxial Screw Placement: A Cadaver Study

    PubMed Central

    Guo, Shuai; Lu, Teng; Hu, Qiaolong; Yang, Baohui; He, Xijing

    2016-01-01

    Purpose. To preliminarily evaluate the feasibility and accuracy of using rapid prototyping drill templates (RPDTs) for C1 lateral mass screw (C1-LMS) and C2 pedicle screw (C2-PS) placement. Methods. 23 formalin-fixed craniocervical cadaver specimens were randomly divided into two groups. In the conventional method group, intraoperative fluoroscopy was used to assist the screw placement. In the RPDT navigation group, specific RPDTs were constructed for each specimen and were used intraoperatively for screw placement navigation. The screw position, the operating time, and the fluoroscopy time for each screw placement were compared between the 2 groups. Results. Compared with the conventional method, the RPDT technique significantly increased the placement accuracy of the C2-PS (p < 0.05). In the axial plane, using RPDTs also significantly increased C1-LMS placement accuracy (p < 0.05). In the sagittal plane, although using RPDTs had a very high accuracy rate (100%) in C1-LMS placement, it was not statistically significant compared with the conventional method (p > 0.05). Moreover, the RPDT technique significantly decreased the operating and fluoroscopy times. Conclusion. Using RPDTs significantly increases the accuracy of C1-LMS and C2-PS placement while decreasing the screw placement time and the radiation exposure. Due to these advantages, this approach is worth promoting for use in the Harms technique. PMID:28004004

  14. Performance, Accuracy, and Web Server for Evolutionary Placement of Short Sequence Reads under Maximum Likelihood

    PubMed Central

    Berger, Simon A.; Krompass, Denis; Stamatakis, Alexandros

    2011-01-01

    We present an evolutionary placement algorithm (EPA) and a Web server for the rapid assignment of sequence fragments (short reads) to edges of a given phylogenetic tree under the maximum-likelihood model. The accuracy of the algorithm is evaluated on several real-world data sets and compared with placement by pair-wise sequence comparison, using edit distances and BLAST. We introduce a slow and accurate as well as a fast and less accurate placement algorithm. For the slow algorithm, we develop additional heuristic techniques that yield almost the same run times as the fast version with only a small loss of accuracy. When those additional heuristics are employed, the run time of the more accurate algorithm is comparable with that of a simple BLAST search for data sets with a high number of short query sequences. Moreover, the accuracy of the EPA is significantly higher, in particular when the sample of taxa in the reference topology is sparse or inadequate. Our algorithm, which has been integrated into RAxML, therefore provides an equally fast but more accurate alternative to BLAST for tree-based inference of the evolutionary origin and composition of short sequence reads. We are also actively developing a Web server that offers a freely available service for computing read placements on trees using the EPA. PMID:21436105

  15. Radioactive Waste Management Complex performance assessment: Draft

    SciTech Connect

    Case, M.J.; Maheras, S.J.; McKenzie-Carter, M.A.; Sussman, M.E.; Voilleque, P.

    1990-06-01

    A radiological performance assessment of the Radioactive Waste Management Complex at the Idaho National Engineering Laboratory was conducted to demonstrate compliance with the applicable radiological criteria of the US Department of Energy and the US Environmental Protection Agency for protection of the general public. The calculations involved modeling the transport of radionuclides from buried waste to surface soil and subsurface media, and eventually to members of the general public via air, groundwater, and food chain pathways. Doses were projected for both offsite receptors and individuals intruding onto the site after closure. In addition, uncertainty analyses were performed. Results of calculations made using nominal data indicate that radiological doses will be below the applicable criteria throughout operations and after closure of the facility. Recommendations were made for future performance assessment calculations.

  16. ChronRater: A simple approach to assessing the accuracy of age models from Holocene sediment cores

    NASA Astrophysics Data System (ADS)

    Kaufman, D. S.; Balascio, N. L.; McKay, N. P.; Sundqvist, H. S.

    2013-12-01

    We have assembled a database of previously published Holocene proxy climate records from the Arctic, with the goal of reconstructing the spatial-temporal pattern of past climate changes. The focus is on well-dated, highly resolved, continuous records that extend to at least 6 ka, most of which (90%) are from sedimentary sequences sampled in cores from lakes and oceans. The database includes the original geochronological data (radiocarbon ages) for each record so that the accuracy of the underlying age models can be assessed uniformly. Determining the accuracy of age control for sedimentary sequences is difficult because it depends on many factors, some of which are hard to quantify. Nevertheless, the geochronological accuracy of each time series in the database must be assessed systematically to objectively identify those that are appropriate for a particular level of temporal inquiry. We have therefore devised a scoring scheme to rate the accuracy of age models that focuses on the most important factors and uses just the most commonly published information to determine the overall geochronological accuracy. The algorithm, "ChronRater," is written in the open-source statistical package R. It relies on three characteristics of dated materials and their downcore trends: (1) the delineation of the downcore trend, quantified from three attributes, namely (a) the frequency of ages, (b) the regularity of their spacing, and (c) the uniformity of the sedimentation rate; (2) the quality of the dated materials, as determined by (a) the proportion of outliers and downcore reversals and (b) the type of materials analyzed and the extent to which their ages are verified by independent information, as judged by a five-point scale for the entire sequence of ages; and (3) the overall uncertainty in the calibrated ages, which includes the analytical precision and the associated calibrated age ranges. Although our geochronological accuracy score is

  17. The Confidence-Accuracy Relationship in Diagnostic Assessment: The Case of the Potential Difference in Parallel Electric Circuits

    ERIC Educational Resources Information Center

    Saglam, Murat

    2015-01-01

    This study explored the relationship between accuracy of and confidence in performance of 114 prospective primary school teachers in answering diagnostic questions on potential difference in parallel electric circuits. The participants were required to indicate their confidence in their answers for each question. Bias and calibration indices were…

  18. What's the Difference between "Authentic" and "Performance" Assessment?

    ERIC Educational Resources Information Center

    Meyer, Carol A.

    1992-01-01

    Uses two direct writing assignments to show that performance assessment denotes the kind of student response to be examined, whereas authentic assessment denotes assessment context. Although not all performance assessments are authentic, it is difficult to imagine an authentic assessment that would not also be a performance assessment. Educators…

  19. Accuracy of Assessment of Eligibility for Early Medical Abortion by Community Health Workers in Ethiopia, India and South Africa

    PubMed Central

    Nguyen, My Huong; Habib, Ndema; Afework, Mesganaw Fantahun; Harries, Jane; Iyengar, Kirti; Moodley, Jennifer; Constant, Deborah; Sen, Swapnaleen

    2016-01-01

    Objective To assess the accuracy of assessment of eligibility for early medical abortion by community health workers using a simple checklist toolkit. Design Diagnostic accuracy study. Setting Ethiopia, India and South Africa. Methods Two hundred seventeen women in Ethiopia, 258 in India and 236 in South Africa were enrolled into the study. A checklist toolkit to determine eligibility for early medical abortion was validated by comparing the results of clinician and community health worker assessments using the checklist toolkit with the reference standard exam. Results Accuracy was over 90% and the negative likelihood ratio <0.1 at all three sites when used by clinician assessors. Positive likelihood ratios were 4.3 in Ethiopia, 5.8 in India and 6.3 in South Africa. When used by community health workers, the overall accuracy of the toolkit was 92% in Ethiopia, 80% in India and 77% in South Africa; negative likelihood ratios were 0.08 in Ethiopia, 0.25 in India and 0.22 in South Africa; and positive likelihood ratios were 5.9 in Ethiopia and 2.0 in both India and South Africa. Conclusion The checklist toolkit, as used by clinicians, was excellent at ruling out participants who were not eligible and moderately effective at ruling in participants who were eligible for medical abortion. Results were promising when used by community health workers, particularly in Ethiopia, where they had more prior experience with diagnostic aids and longer professional training. The checklist assessments resulted in some participants being wrongly judged eligible for medical abortion, which is an area of concern. Further research is needed to streamline the components of the tool, explore the optimal duration and content of training for community health workers, and test feasibility and acceptability. PMID:26731176
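
    The accuracy and likelihood-ratio figures above all derive from a standard 2x2 table of index-test results against the reference-standard exam. A minimal sketch, using hypothetical counts rather than the study's data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Accuracy and likelihood ratios from a 2x2 table comparing an
    index test (e.g. a checklist) against a reference-standard exam."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "LR+": sens / (1 - spec),  # larger = better at ruling in
        "LR-": (1 - sens) / spec,  # < 0.1 effectively rules out
    }

# Hypothetical counts for one site, not the study's data
m = diagnostic_metrics(tp=90, fp=15, fn=5, tn=107)
print({k: round(v, 2) for k, v in m.items()})
```

With these illustrative counts, accuracy is about 91%, LR+ about 7.7 and LR- about 0.06, the same pattern of "high accuracy, LR- below 0.1" reported for the clinician assessors.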

  20. Accuracy of Subcutaneous Continuous Glucose Monitoring in Critically Ill Adults: Improved Sensor Performance with Enhanced Calibrations

    PubMed Central

    Leelarathna, Lalantha; English, Shane W.; Thabit, Hood; Caldwell, Karen; Allen, Janet M.; Kumareswaran, Kavita; Wilinska, Malgorzata E.; Nodale, Marianna; Haidar, Ahmad; Evans, Mark L.; Burnstein, Rowan

    2014-01-01

    Abstract Objective: Accurate real-time continuous glucose measurements may improve glucose control in the critical care unit. We evaluated the accuracy of the FreeStyle® Navigator® (Abbott Diabetes Care, Alameda, CA) subcutaneous continuous glucose monitoring (CGM) device in critically ill adults using two methods of calibration. Subjects and Methods: In a randomized trial, paired CGM and reference glucose measurements (hourly arterial blood glucose [ABG]) were collected over a 48-h period from 24 adults with critical illness (mean±SD age, 60±14 years; mean±SD body mass index, 29.6±9.3 kg/m2; mean±SD Acute Physiology and Chronic Health Evaluation score, 12±4 [range, 6–19]) and hyperglycemia. In 12 subjects, the CGM device was calibrated at variable intervals of 1–6 h using ABG. In the other 12 subjects, the sensor was calibrated according to the manufacturer's instructions (at 1, 2, 10, and 24 h) using arterial blood and the built-in point-of-care glucometer. Results: In total, 1,060 CGM–ABG pairs were analyzed over the glucose range from 4.3 to 18.8 mmol/L. With enhanced calibration at a median (interquartile range) interval of 169 (122–213) min, the absolute relative deviation was lower (7.0% [3.5, 13.0] vs. 12.8% [6.3, 21.8], P<0.001) and the percentage of points in Clarke error grid Zone A was higher (87.8% vs. 70.2%). Conclusions: Accuracy of the Navigator CGM device during critical illness was comparable to that observed in non–critical care settings. Further significant improvements in accuracy may be obtained by frequent calibration with ABG measurements. PMID:24180327
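
    The headline accuracy statistic here is the absolute relative deviation of paired CGM and reference readings. A minimal sketch of the median variant (MARD-style), with hypothetical paired readings rather than the trial's data:

```python
from statistics import median

def median_ard(cgm, reference):
    """Median absolute relative deviation (%) of paired CGM readings
    against reference (e.g. arterial blood glucose) values."""
    ard = [abs(c - r) / r * 100 for c, r in zip(cgm, reference)]
    return median(ard)

# Hypothetical paired readings in mmol/L, not study data
ref = [6.0, 8.0, 10.0, 12.0]
cgm = [6.3, 7.5, 10.8, 11.4]
print(round(median_ard(cgm, ref), 1))  # -> 5.6
```

A lower value means the sensor tracks the reference more closely; the study's 7.0% vs. 12.8% comparison is exactly this kind of paired statistic computed over 1,060 pairs.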

  1. Applying Signal-Detection Theory to the Study of Observer Accuracy and Bias in Behavioral Assessment

    ERIC Educational Resources Information Center

    Lerman, Dorothea C.; Tetreault, Allison; Hovanetz, Alyson; Bellaci, Emily; Miller, Jonathan; Karp, Hilary; Mahmood, Angela; Strobel, Maggie; Mullen, Shelley; Keyl, Alice; Toupard, Alexis

    2010-01-01

    We evaluated the feasibility and utility of a laboratory model for examining observer accuracy within the framework of signal-detection theory (SDT). Sixty-one individuals collected data on aggression while viewing videotaped segments of simulated teacher-child interactions. The purpose of Experiment 1 was to determine if brief feedback and…

  2. Interrater Reliability Estimators Commonly Used in Scoring Language Assessments: A Monte Carlo Investigation of Estimator Accuracy

    ERIC Educational Resources Information Center

    Morgan, Grant B.; Zhu, Min; Johnson, Robert L.; Hodge, Kari J.

    2014-01-01

    Common estimators of interrater reliability include Pearson product-moment correlation coefficients, Spearman rank-order correlations, and the generalizability coefficient. The purpose of this study was to examine the accuracy of estimators of interrater reliability when varying the true reliability, number of scale categories, and number of…

  3. Development of a Mathematical Model to Assess the Accuracy of Difference between Geodetic Heights

    ERIC Educational Resources Information Center

    Gairabekov, Ibragim; Kliushin, Evgenii; Gayrabekov, Magomed-Bashir; Ibragimova, Elina; Gayrabekova, Amina

    2016-01-01

    The article presents the results of theoretical studies of the accuracy of surveying geodetic heights and mark points on the Earth's surface using satellite technology. The dependence of the mean square error of the surveyed geodetic height difference on the distance to the base point was established. It is shown that by using satellite…

  4. ESA ExoMars: Pre-launch PanCam Geometric Modeling and Accuracy Assessment

    NASA Astrophysics Data System (ADS)

    Li, D.; Li, R.; Yilmaz, A.

    2014-08-01

    ExoMars is the flagship mission of the European Space Agency (ESA) Aurora Programme. The mobile scientific platform, or rover, will carry a drill and a suite of instruments dedicated to exobiology and geochemistry research. As the ExoMars rover is designed to travel kilometres over the Martian surface, high-precision rover localization and topographic mapping will be critical for traverse path planning and safe planetary surface operations. For such purposes, the ExoMars rover Panoramic Camera system (PanCam) will acquire images that are processed into an imagery network providing vision information for photogrammetric algorithms to localize the rover and generate 3-D mapping products. Since the design of the ExoMars PanCam will influence localization and mapping accuracy, quantitative error analysis of the PanCam design will improve scientists' awareness of the achievable level of accuracy, and enable the PanCam design team to optimize its design to achieve the highest possible level of localization and mapping accuracy. Based on photogrammetric principles and uncertainty propagation theory, we have developed a method to theoretically analyze how mapping and localization accuracy would be affected by various factors, such as length of stereo hard-baseline, focal length, and pixel size, etc.
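
    The kind of uncertainty propagation described above can be illustrated for the simplest stereo-mapping case: disparity d = B·f/Z implies a first-order depth error sigma_Z = Z²·sigma_d/(B·f). The camera parameters below are hypothetical stand-ins, not the actual PanCam design values:

```python
def depth_uncertainty(Z, B, f, sigma_px, pixel_size):
    """First-order propagated depth error for a stereo camera.
    Disparity d = B*f/Z, so sigma_Z = Z**2 / (B*f) * sigma_d,
    where sigma_d is the disparity (matching) error in metres."""
    sigma_d = sigma_px * pixel_size
    return Z ** 2 / (B * f) * sigma_d

# Hypothetical numbers: 0.5 m baseline, 35 mm focal length,
# 6.5 micron pixels, 0.3 px matching error, target 10 m away
sz = depth_uncertainty(Z=10.0, B=0.5, f=0.035, sigma_px=0.3, pixel_size=6.5e-6)
print(round(sz, 3))  # -> 0.011 (metres)
```

The quadratic growth with range Z is why baseline length, focal length and pixel size dominate the achievable localization and mapping accuracy.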

  5. Portable device to assess dynamic accuracy of global positioning systems (GPS) receivers used in agricultural aircraft

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A device was designed to test the dynamic accuracy of Global Positioning System (GPS) receivers used in aerial vehicles. The system works by directing a sun-reflected light beam from the ground to the aircraft using mirrors. A photodetector is placed pointing downward from the aircraft and circuitry...

  6. Comparative analysis of Worldview-2 and Landsat 8 for coastal saltmarsh mapping accuracy assessment

    NASA Astrophysics Data System (ADS)

    Rasel, Sikdar M. M.; Chang, Hsing-Chung; Diti, Israt Jahan; Ralph, Tim; Saintilan, Neil

    2016-05-01

    Coastal saltmarshes and their constituent components and processes are of scientific interest due to their ecological functions and services. However, the heterogeneity and seasonal dynamics of the coastal wetland system make it challenging to map saltmarshes with remotely sensed data. This study selected four important saltmarsh species, Phragmites australis, Sporobolus virginicus, Ficinia nodosa and Schoenoplectus sp., as well as a mangrove and a pine tree species, Avicennia and Casuarina sp., respectively. High spatial resolution Worldview-2 data and coarse spatial resolution Landsat 8 imagery were selected for this study. Among the selected vegetation types, some patches were fragmented and close to the spatial resolution of the Worldview-2 data, while some were larger than the 30 m resolution of the Landsat 8 data. This study aims to test the effectiveness of different classifiers for imagery with various spatial and spectral resolutions. Three classification algorithms, Maximum Likelihood Classifier (MLC), Support Vector Machine (SVM) and Artificial Neural Network (ANN), were tested, and their mapping accuracies were compared for both satellite images. For the Worldview-2 data, SVM gave the highest overall accuracy (92.12%, kappa = 0.90), followed by ANN (90.82%, kappa = 0.89) and MLC (90.55%, kappa = 0.88). For the Landsat 8 data, MLC (82.04%) showed the highest classification accuracy compared with SVM (77.31%) and ANN (75.23%). Producer's accuracies of the classification results are also presented in the paper.
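
    The overall accuracy and kappa values quoted above come from a standard confusion-matrix calculation. A minimal sketch with a hypothetical 3-class matrix, not the study's results:

```python
def accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a square confusion
    matrix (rows = reference classes, columns = mapped classes)."""
    k = len(confusion)
    n = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(k)) / n
    # chance agreement from row and column marginals
    expected = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(k)
    ) / n ** 2
    return observed, (observed - expected) / (1 - expected)

# Hypothetical 3-class confusion matrix, not the study's data
cm = [[45, 3, 2],
      [4, 38, 5],
      [1, 6, 40]]
oa, kappa = accuracy_and_kappa(cm)
print(round(oa, 3), round(kappa, 3))  # -> 0.854 0.781
```

Kappa discounts the agreement expected by chance from the class marginals, which is why it is reported alongside overall accuracy for map comparisons like this one.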

  7. Retrieval of Urban Boundary Layer Structures from Doppler Lidar Data. Part I: Accuracy Assessment

    SciTech Connect

    Xia, Quanxin; Lin, Ching Long; Calhoun, Ron; Newsom, Rob K.

    2008-01-01

    Two coherent Doppler lidars from the US Army Research Laboratory (ARL) and Arizona State University (ASU) were deployed in the Joint Urban 2003 atmospheric dispersion field experiment (JU2003) held in Oklahoma City. The dual lidar data are used to evaluate the accuracy of the four-dimensional variational data assimilation (4DVAR) method and identify the coherent flow structures in the urban boundary layer. The objectives of the study are three-fold. The first objective is to examine the effect of eddy viscosity models on the quality of retrieved velocity data. The second objective is to determine the fidelity of single-lidar 4DVAR and evaluate the difference between single- and dual-lidar retrievals. The third objective is to correlate the retrieved flow structures with the ground building data. It is found that the approach of treating eddy viscosity as part of control variables yields better results than the approach of prescribing viscosity. The ARL single-lidar 4DVAR is able to retrieve radial velocity fields with an accuracy of 98% in the along-beam direction and 80-90% in the cross-beam direction. For the dual-lidar 4DVAR, the accuracy of retrieved radial velocity in the ARL cross-beam direction improves to 90-94%. By using the dual-lidar retrieved data as a reference, the single-lidar 4DVAR is able to recover fluctuating velocity fields with 70-80% accuracy in the along-beam direction and 60-70% accuracy in the cross-beam direction. Large-scale convective roll structures are found in the vicinity of downtown airpark and parks. Vortical structures are identified near the business district. Strong updrafts and downdrafts are also found above a cluster of restaurants.

  8. Assessing genomic prediction accuracy for Holstein sires using bootstrap aggregation sampling and leave-one-out cross validation.

    PubMed

    Mikshowsky, Ashley A; Gianola, Daniel; Weigel, Kent A

    2017-01-01

    Since the introduction of genome-enabled prediction for dairy cattle in 2009, genomic selection has markedly changed many aspects of the dairy genetics industry and enhanced the rate of response to selection for most economically important traits. Young dairy bulls are genotyped to obtain their genomic predicted transmitting ability (GPTA) and reliability (REL) values. These GPTA are a main factor in most purchasing, marketing, and culling decisions until bulls reach 5 yr of age and their milk-recorded offspring become available. At that time, daughter yield deviations (DYD) can be compared with the GPTA computed several years earlier. For most bulls, the DYD align well with the initial predictions. However, for some bulls, the difference between DYD and corresponding GPTA is quite large, and published REL are of limited value in identifying such bulls. A method of bootstrap aggregation sampling (bagging) using genomic BLUP (GBLUP) was applied to predict the GPTA of 2,963, 2,963, and 2,803 young Holstein bulls for protein yield, somatic cell score, and daughter pregnancy rate (DPR), respectively. For each trait, 50 bootstrap samples from a reference population comprising 2011 DYD of 8,610, 8,405, and 7,945 older Holstein bulls were used. Leave-one-out cross validation was also performed to assess prediction accuracy when removing specific bulls from the reference population. The main objectives of this study were (1) to assess the extent to which current REL values and alternative measures of variability, such as the bootstrap standard deviation (SD) of predictions, could detect bulls whose daughter performance deviates significantly from early genomic predictions, and (2) to identify factors associated with the reference population that inform about inaccurate genomic predictions. The SD of bootstrap predictions was a mildly useful metric for identifying bulls whose future daughter performance may deviate significantly from early GPTA for protein and DPR. Leave

  9. Computational Tools to Assess Turbine Biological Performance

    SciTech Connect

    Richmond, Marshall C.; Serkowski, John A.; Rakowski, Cynthia L.; Strickler, Brad; Weisbeck, Molly; Dotson, Curtis L.

    2014-07-24

    Public Utility District No. 2 of Grant County (GCPUD) operates the Priest Rapids Dam (PRD), a hydroelectric facility on the Columbia River in Washington State. The dam contains 10 Kaplan-type turbine units that are now more than 50 years old. Plans are underway to refit these aging turbines with new runners. The Columbia River at PRD is a migratory pathway for several species of juvenile and adult salmonids, so passage of fish through the dam is a major consideration when upgrading the turbines. In this paper, a method for turbine biological performance assessment (BioPA) is demonstrated. Using this method, a suite of biological performance indicators is computed based on simulated data from a CFD model of a proposed turbine design. Each performance indicator is a measure of the probability of exposure to a certain dose of an injury mechanism. Using known relationships between the dose of an injury mechanism and frequency of injury (dose–response) from laboratory or field studies, the likelihood of fish injury for a turbine design can be computed from the performance indicator. By comparing the values of the indicators from proposed designs, the engineer can identify the more-promising alternatives. We present an application of the BioPA method for baseline risk assessment calculations for the existing Kaplan turbines at PRD that will be used as the minimum biological performance that a proposed new design must achieve.
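
    The core of the BioPA calculation as described, combining the probability of exposure to each dose of an injury mechanism with a dose-response relationship, can be sketched as a simple expectation. The dose bins and probabilities below are hypothetical, not PRD values:

```python
def injury_likelihood(exposure_probs, dose_response):
    """Estimate overall likelihood of fish injury for one mechanism:
    sum over dose bins of P(exposure to bin) * P(injury | bin).
    exposure_probs plays the role of the performance indicator."""
    return sum(p * r for p, r in zip(exposure_probs, dose_response))

# Hypothetical bins for one injury mechanism (e.g. shear), not PRD data
exposure = [0.70, 0.20, 0.08, 0.02]   # P(dose in bin); sums to 1
response = [0.00, 0.05, 0.30, 0.80]   # P(injury | dose in bin)
print(round(injury_likelihood(exposure, response), 3))  # -> 0.05
```

Comparing this expected injury rate across candidate runner designs is what lets the engineer rank alternatives against the baseline turbines.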

  10. Assessing the performance of dynamical trajectory estimates

    NASA Astrophysics Data System (ADS)

    Bröcker, Jochen

    2014-06-01

    Estimating trajectories and parameters of dynamical systems from observations is a problem frequently encountered in various branches of science; geophysicists, for example, refer to this problem as data assimilation. Unlike in estimation problems with exchangeable observations, in data assimilation the observations cannot easily be divided into separate sets for estimation and validation; this creates serious problems, since simply using the same observations for estimation and validation might result in overly optimistic performance assessments. To circumvent this problem, a result is presented which allows us to estimate this optimism, thus allowing for a more realistic performance assessment in data assimilation. The presented approach becomes particularly simple for data assimilation methods employing a linear error feedback (such as synchronization schemes, nudging, incremental 3DVAR and 4DVar, and various Kalman filter approaches). Numerical examples considering a high-gain observer confirm the theory.

  11. Assessing the performance of dynamical trajectory estimates.

    PubMed

    Bröcker, Jochen

    2014-06-01

    Estimating trajectories and parameters of dynamical systems from observations is a problem frequently encountered in various branches of science; geophysicists, for example, refer to this problem as data assimilation. Unlike in estimation problems with exchangeable observations, in data assimilation the observations cannot easily be divided into separate sets for estimation and validation; this creates serious problems, since simply using the same observations for estimation and validation might result in overly optimistic performance assessments. To circumvent this problem, a result is presented which allows us to estimate this optimism, thus allowing for a more realistic performance assessment in data assimilation. The presented approach becomes particularly simple for data assimilation methods employing a linear error feedback (such as synchronization schemes, nudging, incremental 3DVAR and 4DVar, and various Kalman filter approaches). Numerical examples considering a high-gain observer confirm the theory.

  12. Assessing the accuracy of the Second Military Survey for the Doren Landslide (Vorarlberg, Austria)

    NASA Astrophysics Data System (ADS)

    Zámolyi, András.; Székely, Balázs; Biszak, Sándor

    2010-05-01

    Reconstruction of the early, long-term evolution of landslide areas is especially important for determining the proportion of anthropogenic influence on the evolution of a region affected by mass movements. The recent geologic and geomorphological setting of the prominent Doren landslide in Vorarlberg (Western Austria) has been studied extensively by various research groups and civil engineering companies. Civil aerial imaging of the area dates back to the 1950s. Modern monitoring techniques include aerial imaging as well as airborne and terrestrial laser scanning (LiDAR), providing an almost yearly assessment of the changing geomorphology of the area. However, the landslide most probably initiated before these methods became available, since there is evidence that it was already active in the 1930s. One way to study the initial phase of landslide formation is to draw on information recorded in historic photographs or historic maps. In this case study we integrated topographic information from the map sheets of the Second Military Survey of the Habsburg Empire, conducted in Vorarlberg during the years 1816-1821 (Kretschmer et al., 2004), into a comprehensive GIS. The region of interest around the Doren landslide was georeferenced using the method of Timár et al. (2006), refined by Molnár (2009), thus providing geodetically correct positioning and the possibility of matching topographic features from the historic map with features recognized in the LiDAR DTM. The landslide of Doren is clearly visible in the historic map. Additionally, prominent geomorphologic features such as morphological scarps, rills and gullies, mass movement lobes, and the course of the Weißach rivulet can be matched. Not only can the shape and character of these elements be recognized and matched, but the positional accuracy is also adequate for geomorphological studies. Since the settlement structure is very stable in the

  13. Quality versus quantity: assessing individual research performance

    PubMed Central

    Sahel, José-Alain

    2011-01-01

    Evaluating individual research performance is a complex task that ideally examines productivity, scientific impact, and research quality, a task that metrics alone have been unable to achieve. In January 2011, the French Academy of Sciences published a report on current bibliometric (citation metric) methods for evaluating individual researchers, as well as recommendations for the integration of quality assessment. Here, we draw on key issues raised by this report and comment on the suggestions for improving existing research evaluation practices. PMID:21613620

  14. Performance and accuracy investigations of two Doppler global velocimetry systems applied in parallel

    NASA Astrophysics Data System (ADS)

    Willert, Christian; Stockhausen, Guido; Klinner, Joachim; Lempereur, Christine; Barricau, Philippe; Loiret, Philippe; Raynal, Jean Claude

    2007-08-01

    Two Doppler global velocimetry systems were applied in parallel to assess their performance in wind tunnel environments. Both DGV systems were mounted on a common traverse surrounding the glass-walled 1.4 × 1.8 m2 test section of the wind tunnel. The traverse normally supports a three-component forward-scatter laser Doppler velocimetry system. The reproducible tip-vortex flow field generated by the blunt tip of an airfoil was chosen for this investigation and was precisely surveyed by LDA just prior to the DGV measurements. Both DGV systems shared the same continuous wave laser light source, laser frequency monitoring and fibre optic light sheet delivery system. The principal differences between the DGV implementations are with regard to the imaging configuration. One configuration relied on a single camera view that observed three successively operated light sheets. In the second configuration, three camera views simultaneously observed a single light sheet using a four-branch fibre imaging bundle. The imaging bundle system had all three viewpoints in a forward scattering arrangement which increased the scattering efficiency but reduced the frequency shift sensitivity. Since all three light sheet observation components were acquired onto the same image frame, acquisition times could be reduced to a minimum. On the other hand, the triple light sheet-single camera system observed two light sheets in forward scatter and one light sheet in backscatter. Although three separate images had to be recorded in succession, the image quality, spatial resolution and signal-to-noise ratio were superior to the imaging bundle system. Comparison of the DGV data with LDV measurements shows very good agreement to within 1-2 m s-1. The remaining discrepancy has a variety of causes, some are related to the reduced resolving power of the fibre imaging bundle system (graininess, smoothing), exact localization of the receiver head with respect to the scene, laser frequency drift or

  15. Assessing the accuracy of software predictions of mammalian and microbial metabolites

    EPA Science Inventory

    New chemical development and hazard assessments benefit from accurate predictions of mammalian and microbial metabolites. Fourteen biotransformation libraries encoded in eight software packages that predict metabolite structures were assessed for their sensitivity (proportion of ...

  16. Effect of homogenizer performance on accuracy and repeatability of mid-infrared predicted values for major milk components.

    PubMed

    Di Marzo, Larissa; Barbano, David M

    2016-12-01

    Our objective was to determine the effect of mid-infrared (MIR) homogenizer efficiency on the accuracy and repeatability of Fourier transform MIR predicted fat, true protein, and anhydrous lactose determinations given by traditional filter and partial least squares (PLS) prediction models. Five homogenizers with different homogenization performance based on laser light-scattering particle size analysis were used. Repeatability and accuracy were determined by conducting 17 sequential readings on milk homogenized externally to the instrument (i.e., control) and on unhomogenized milk. Milk component predictions on externally homogenized milks were affected by variation in homogenizer performance, but the magnitude of the effects was small (i.e., <0.025%) when milks were pumped through both efficient and inefficient homogenizers within a MIR milk analyzer. Variation in in-line MIR homogenizer performance on unhomogenized milks had a much larger effect on the accuracy of component testing than on repeatability. The increase of the particle size distribution parameter d(0.9) from 1.35 to 3.03 μm (i.e., the fat globule diameter above which 10% of the volume of fat is contained) due to poor homogenization affected fat tests the most: traditional filter-based fat B (carbon-hydrogen stretch; -0.165%), traditional filter-based fat A (carbonyl stretch; -0.074%), and fat PLS (-0.078%) at a d(0.9) of 3.03 μm. Variation in homogenization efficiency also affected the traditional filter-based true protein test (+0.012%), the true protein PLS prediction (-0.107%), and the traditional filter-based anhydrous lactose test (+0.027%) at a d(0.9) of 3.03 μm. Effects of variation in homogenization on anhydrous lactose PLS predictions were small. The accuracy of both traditional filter models and PLS models was influenced by poor homogenization. The value of 1.7 μm for d(0.9) used by the USDA Federal Milk Market laboratories as a criterion to make the decision to replace the homogenizer in a MIR milk analyzer appears to be a

  17. Assessing effort: differentiating performance and symptom validity.

    PubMed

    Van Dyke, Sarah A; Millis, Scott R; Axelrod, Bradley N; Hanks, Robin A

    2013-01-01

    The current study aimed to clarify the relationships among the constructs involved in neuropsychological assessment, including cognitive performance, symptom self-report, performance validity, and symptom validity. Participants consisted of 120 consecutively evaluated individuals from a veterans' hospital with mixed referral sources. Measures included the Wechsler Adult Intelligence Scale-Fourth Edition Full Scale IQ (WAIS-IV FSIQ), California Verbal Learning Test-Second Edition (CVLT-II), Trail Making Test Part B (TMT-B), Test of Memory Malingering (TOMM), Medical Symptom Validity Test (MSVT), WAIS-IV Reliable Digit Span (RDS), PTSD Checklist-Military Version (PCL-M), MMPI-2 F scale, MMPI-2 Symptom Validity Scale (FBS), MMPI-2 Response Bias Scale (RBS), and the Postconcussive Symptom Questionnaire (PCSQ). Six different models were tested using confirmatory factor analysis (CFA) to determine the factor model best describing the relationships between cognitive performance, symptom self-report, performance validity, and symptom validity. The strongest and most parsimonious model was a three-factor model in which cognitive performance, performance validity, and self-reported symptoms (including both standard and symptom validity measures) were separate factors. The findings suggest that failure in one validity domain does not necessarily invalidate the other domain. Thus, performance validity and symptom validity should be evaluated separately.

  18. Accuracy in Student Self-Assessment: Directions and Cautions for Research

    ERIC Educational Resources Information Center

    Brown, Gavin T. L.; Andrade, Heidi L.; Chen, Fei

    2015-01-01

    Student self-assessment is a central component of current conceptions of formative and classroom assessment. The research on self-assessment has focused on its efficacy in promoting both academic achievement and self-regulated learning, with little concern for issues of validity. Because reliability of testing is considered a sine qua non for the…

  19. Assessing decoding ability: the role of speed and accuracy and a new composite indicator to measure decoding skill in elementary grades.

    PubMed

    Morlini, Isabella; Stella, Giacomo; Scorza, Maristella

    2015-01-01

    Tools for assessing decoding skill in students attending elementary grades are of fundamental importance for guaranteeing early identification of reading-disabled students and reducing both the primary negative effects (on learning) and the secondary negative effects (on the development of the personality) of this disability. This article presents results obtained by administering existing standardized reading tests and a new screening procedure to about 1,500 students in the elementary grades in Italy. It is found that the variables measuring speed and accuracy in all administered reading tests are not Gaussian, and therefore the threshold values used for classifying a student as a normal decoder or an impaired decoder must be estimated from the empirical distribution of these variables rather than from the percentiles of the normal distribution. It is also found that decoding speed and decoding accuracy can be measured either in a 1-minute procedure or in much longer standardized tests; the screening procedure and the administered tests are found to be equivalent insofar as they carry the same information. Finally, it is found that speed and accuracy act as complementary components in the measurement of decoding ability. On the basis of this last finding, the study introduces a new composite indicator of student performance that combines speed and accuracy in the measurement of decoding ability.
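
    The two ideas in this abstract, cutoffs taken from the empirical distribution rather than normal-theory percentiles, and a composite indicator combining speed and accuracy, can be sketched as follows. The distributions, the 5% cutoff, and the `composite_decoding_score` helper are illustrative assumptions, not the authors' actual indicator.

```python
import numpy as np

def empirical_cutoff(scores, pct=5.0):
    """Threshold from the observed distribution; the abstract notes the
    variables are non-Gaussian, so normal-theory percentiles are unsafe."""
    return np.percentile(scores, pct)

def composite_decoding_score(speed, accuracy):
    """Hypothetical composite indicator: mean of within-sample z-scores
    for decoding speed and decoding accuracy."""
    zs = (speed - speed.mean()) / speed.std()
    za = (accuracy - accuracy.mean()) / accuracy.std()
    return (zs + za) / 2.0

rng = np.random.default_rng(0)
speed = rng.gamma(4.0, 0.5, 1500)        # skewed, non-Gaussian speeds
acc = 1.0 - rng.beta(1.5, 20.0, 1500)    # proportion of items read correctly
comp = composite_decoding_score(speed, acc)
cut = empirical_cutoff(comp, 5.0)        # flag the weakest ~5% for follow-up
print(round(float((comp < cut).mean()), 2))
```

    With an empirical cutoff the flagged fraction matches the chosen percentile by construction, whatever shape the distribution has; a normal-theory cutoff applied to these skewed scores would not.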

  20. Image-based in vivo assessment of targeting accuracy of stereotactic brain surgery in experimental rodent models

    NASA Astrophysics Data System (ADS)

    Rangarajan, Janaki Raman; Vande Velde, Greetje; van Gent, Friso; de Vloo, Philippe; Dresselaers, Tom; Depypere, Maarten; van Kuyck, Kris; Nuttin, Bart; Himmelreich, Uwe; Maes, Frederik

    2016-11-01

    Stereotactic neurosurgery is used in pre-clinical research of neurological and psychiatric disorders in experimental rat and mouse models to engraft a needle or electrode at a pre-defined location in the brain. However, inaccurate targeting may confound the results of such experiments. In contrast to the clinical practice, inaccurate targeting in rodents remains usually unnoticed until assessed by ex vivo end-point histology. We here propose a workflow for in vivo assessment of stereotactic targeting accuracy in small animal studies based on multi-modal post-operative imaging. The surgical trajectory in each individual animal is reconstructed in 3D from the physical implant imaged in post-operative CT and/or its trace as visible in post-operative MRI. By co-registering post-operative images of individual animals to a common stereotaxic template, targeting accuracy is quantified. Two commonly used neuromodulation regions were used as targets. Target localization errors showed not only variability, but also inaccuracy in targeting. Only about 30% of electrodes were within the subnucleus structure that was targeted and a-specific adverse effects were also noted. Shifting from invasive/subjective 2D histology towards objective in vivo 3D imaging-based assessment of targeting accuracy may benefit a more effective use of the experimental data by excluding off-target cases early in the study.

  1. Image-based in vivo assessment of targeting accuracy of stereotactic brain surgery in experimental rodent models

    PubMed Central

    Rangarajan, Janaki Raman; Vande Velde, Greetje; van Gent, Friso; De Vloo, Philippe; Dresselaers, Tom; Depypere, Maarten; van Kuyck, Kris; Nuttin, Bart; Himmelreich, Uwe; Maes, Frederik

    2016-01-01

    Stereotactic neurosurgery is used in pre-clinical research of neurological and psychiatric disorders in experimental rat and mouse models to engraft a needle or electrode at a pre-defined location in the brain. However, inaccurate targeting may confound the results of such experiments. In contrast to the clinical practice, inaccurate targeting in rodents remains usually unnoticed until assessed by ex vivo end-point histology. We here propose a workflow for in vivo assessment of stereotactic targeting accuracy in small animal studies based on multi-modal post-operative imaging. The surgical trajectory in each individual animal is reconstructed in 3D from the physical implant imaged in post-operative CT and/or its trace as visible in post-operative MRI. By co-registering post-operative images of individual animals to a common stereotaxic template, targeting accuracy is quantified. Two commonly used neuromodulation regions were used as targets. Target localization errors showed not only variability, but also inaccuracy in targeting. Only about 30% of electrodes were within the subnucleus structure that was targeted and a-specific adverse effects were also noted. Shifting from invasive/subjective 2D histology towards objective in vivo 3D imaging-based assessment of targeting accuracy may benefit a more effective use of the experimental data by excluding off-target cases early in the study. PMID:27901096

  2. Aneroid sphygmomanometers. An assessment of accuracy at a university hospital and clinics.

    PubMed

    Bailey, R H; Knaus, V L; Bauer, J H

    1991-07-01

    Defects of aneroid sphygmomanometers are a source of error in blood pressure measurement. We inspected 230 aneroid sphygmomanometers for physical defects and compared their accuracy against a standard mercury manometer at five different pressure points. An aneroid sphygmomanometer was defined as intolerant if it deviated from the mercury manometer by greater than +/- 3 mm Hg at two or more of the test points. The three most common physical defects were indicator needles not pointing to the "zero box," cracked face plates, and defective tubing. Eighty (34.8%) of the 230 aneroid sphygmomanometers were determined to be intolerant, with the greatest frequency of deviation seen at pressure levels of 150 mm Hg or greater. We recommend that aneroid manometers be inspected for physical defects and calibrated for accuracy against a standard mercury manometer at 6-month intervals to prevent inaccurate blood pressure measurements.
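
    The study's tolerance rule lends itself to a tiny check. A minimal sketch; the test pressures and gauge readings below are hypothetical:

```python
def is_intolerant(aneroid, mercury, tol=3, min_points=2):
    """Apply the study's rule: a gauge is 'intolerant' if it deviates from
    the mercury standard by more than +/-3 mm Hg at two or more test points."""
    deviations = [abs(a - m) for a, m in zip(aneroid, mercury)]
    return sum(d > tol for d in deviations) >= min_points

mercury = [60, 100, 150, 200, 250]   # five reference pressures (mm Hg)
good    = [61, 99, 152, 201, 249]    # worst deviation 2 mm Hg
bad     = [64, 99, 155, 206, 249]    # deviates >3 mm Hg at three points
print(is_intolerant(good, mercury), is_intolerant(bad, mercury))  # False True
```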

  3. Accuracy assessment of high frequency 3D ultrasound for digital impression-taking of prepared teeth

    NASA Astrophysics Data System (ADS)

    Heger, Stefan; Vollborn, Thorsten; Tinschert, Joachim; Wolfart, Stefan; Radermacher, Klaus

    2013-03-01

    Silicone-based impression-taking of prepared teeth followed by plaster casting is well established but potentially less reliable, error-prone, and inefficient, particularly in combination with emerging techniques like computer-aided design and manufacturing (CAD/CAM) of dental prostheses. Intra-oral optical scanners for digital impression-taking have been introduced, but some drawbacks still exist. Because optical waves can hardly penetrate liquids or soft tissues, sub-gingival preparations still need to be uncovered invasively prior to scanning. High frequency ultrasound (HFUS) based micro-scanning has recently been investigated as an alternative to optical intra-oral scanning. Ultrasound is less sensitive to oral fluids and in principle able to penetrate gingiva without invasive exposure of sub-gingival preparations. Nevertheless, the spatial resolution as well as the digitization accuracy of an ultrasound-based micro-scanning system remains a critical parameter because the ultrasound wavelength in water-like media such as gingiva is typically larger than that of optical waves. In this contribution, the in-vitro accuracy of ultrasound-based micro-scanning for tooth geometry reconstruction is investigated and compared to its extra-oral optical counterpart. In order to increase the spatial resolution of the system, 2nd harmonic frequencies from a mechanically driven focused single-element transducer were separated, and corresponding 3D surface models were calculated for both the fundamental and 2nd harmonic frequencies. Measurements on phantoms, model teeth, and human teeth were carried out to evaluate spatial resolution and surface detection accuracy. Comparison of optical and ultrasound digital impression-taking indicates that, in terms of accuracy, ultrasound-based tooth digitization can be an alternative to optical impression-taking.

  4. Improvement of Field Accuracy and Plasma Performance in the RT-1 Device

    NASA Astrophysics Data System (ADS)

    Yano, Yoshihisa; Yoshida, Zensho; Morikawa, Junji; Saitoh, Haruhiko; Hayashi, Hiroyuki; Mizushima, Tatsunori

    To improve the accuracy of the magnetic field of Ring Trap-1 (RT-1), we have constructed a system of correction coils to cancel the geomagnetic field and control the attitude of the floating magnet. Without geomagnetic field cancellation, the floating magnet tilts by about 1.4 degrees. The previous prototype correction coils have been replaced by new coils that are much larger and farther from the chamber, so the error field due to the multipole components of the correction field is reduced by a factor of 30 (from 2.6% to 0.1% of the confinement field near the edge region). A significant improvement in plasma confinement has been observed (the stored energy of the plasma has increased by a factor of 1.5).

  5. Accuracy of ELISA detection methods for gluten and reference materials: a realistic assessment.

    PubMed

    Diaz-Amigo, Carmen; Popping, Bert

    2013-06-19

    The determination of prolamins by ELISA and subsequent conversion of the resulting concentration to gluten content in food appears to be a comparatively simple and straightforward process with which many laboratories have years of experience. At the end of the process, a gluten value, expressed in mg/kg or ppm, is obtained. This value is often the basis for the decision whether a product can be labeled gluten-free. On the basis of currently available scientific information, the accuracy of the values obtained with commonly used commercial ELISA kits has to be questioned. Although several multilaboratory studies have recently been conducted in an attempt to emphasize and ensure the accuracy of the results, the data suggest that it was the precision of these assays, not the accuracy, that was confirmed, because some of the underlying assumptions for calculating the gluten content lack scientific data support as well as appropriate reference materials for comparison. This paper discusses the issues of gluten determination and quantification with respect to antibody specificity, extraction procedures, reference materials, and their commutability.

  6. Determinants of Cervical Cancer Screening Accuracy for Visual Inspection with Acetic Acid (VIA) and Lugol’s Iodine (VILI) Performed by Nurse and Physician

    PubMed Central

    Raifu, Amidu O.; El-Zein, Mariam; Sangwa-Lugoma, Ghislain; Ramanakumar, Agnihotram; Walter, Stephen D.

    2017-01-01

    Background Visual inspection with acetic acid (VIA) and Lugol’s iodine (VILI) are used to screen women for cervical cancer in low-resource settings. Little is known about correlates of their diagnostic accuracy by healthcare provider. We examined determinants of VIA and VILI screening accuracy by examiner in a cross-sectional screening study of 1528 women aged 30 years or older in a suburb of Kinshasa, Democratic Republic of Congo. Methods We used a logistic regression model for sensitivity and specificity to estimate the diagnostic accuracy of VIA and VILI, independently performed by nurse and physician, as a function of sociodemographic and reproductive health characteristics. Results Nurses rated tests as positive more often than physicians (36.3% vs 30.2% for VIA, 26.2% vs 25.2% for VILI). Women’s age was the most important determinant of performance. It was inversely associated with sensitivity (nurse’s VIA: p<0.001, nurse’s VILI: p = 0.018, physician’s VIA: p = 0.005, physician’s VILI: p = 0.006) but positively associated with specificity (all four combinations: p<0.001). Increasing parity adversely affected sensitivity and specificity, but the effects on sensitivity were significant for nurses only. The screening performance of physician’s assessment was significantly better than the nurse’s (difference in sensitivity: VIA = 13%, VILI = 16%; difference in specificity: VIA = 6%, VILI = 1%). Conclusions Age and parity influence the performance of visual tests for cervical cancer screening. Proper training of local healthcare providers in the conduct of these tests should take into account these factors for improved performance of VIA and VILI in detecting cervical precancerous lesions among women in limited-resource settings. PMID:28107486
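
    The sensitivity and specificity figures compared above reduce to simple confusion-matrix ratios of screening calls against the reference diagnosis. A minimal sketch of that computation; the paired outcomes and the `sens_spec` helper are hypothetical illustrations, not the study's covariate-adjusted logistic regression model:

```python
def sens_spec(test_pos, disease):
    """Sensitivity and specificity of a screening call (e.g., one examiner's
    VIA result) against a reference standard, from paired binary outcomes."""
    tp = sum(t and d for t, d in zip(test_pos, disease))
    fn = sum((not t) and d for t, d in zip(test_pos, disease))
    tn = sum((not t) and (not d) for t, d in zip(test_pos, disease))
    fp = sum(t and (not d) for t, d in zip(test_pos, disease))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical VIA calls by one examiner vs. the reference diagnosis
via   = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
truth = [1, 1, 1, 0, 0, 0, 1, 0, 0, 0]
sens, spec = sens_spec(via, truth)
print(round(sens, 2), round(spec, 2))  # -> 0.75 0.83
```

    The study goes further by modeling these two quantities as functions of age and parity; the ratios above are the raw ingredients of that model.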

  7. Using Accuracy of Self-Estimated Interest Type as a Sign of Career Choice Readiness in Career Assessment of Secondary Students

    ERIC Educational Resources Information Center

    Hirschi, Andreas; Lage, Damian

    2008-01-01

    A frequent applied method in career assessment to elicit clients' self-concepts is asking them to predict their interest assessment results. Accuracy in estimating one's interest type is commonly taken as a sign of more self-awareness and career choice readiness. The study evaluated the empirical relation of accuracy of self-estimation to career…

  8. Investigating General Chemistry Students' Metacognitive Monitoring of Their Exam Performance by Measuring Postdiction Accuracies over Time

    ERIC Educational Resources Information Center

    Hawker, Morgan J.; Dysleski, Lisa; Rickey, Dawn

    2016-01-01

    Metacognitive monitoring of one's own understanding plays a key role in learning. An aspect of metacognitive monitoring can be measured by comparing a student's prediction or postdiction of performance (a judgment made before or after completing the relevant task) with the student's actual performance. In this study, we investigated students'…

  9. Guidance for performing preliminary assessments under CERCLA

    SciTech Connect

    1991-09-01

    EPA headquarters and a national site assessment workgroup produced this guidance for Regional, State, and contractor staff who manage or perform preliminary assessments (PAs). EPA has focused this guidance on the types of sites and site conditions most commonly encountered. The PA approach described in this guidance is generally applicable to a wide variety of sites. However, because of the variability among sites, the amount of information available, and the level of investigative effort required, it is not possible to provide guidance that is equally applicable to all sites. PA investigators should recognize this and be aware that variation from this guidance may be necessary for some sites, particularly for PAs performed at Federal facilities, PAs conducted under EPA's Environmental Priorities Initiative (EPI), and PAs at sites that have previously been extensively investigated by EPA or others. The purpose of this guidance is to provide instructions for conducting a PA and reporting results. This guidance discusses the information required to evaluate a site and how to obtain it, how to score a site, and reporting requirements. This document also provides guidelines and instruction on PA evaluation, scoring, and the use of standard PA scoresheets. The overall goal of this guidance is to assist PA investigators in conducting high-quality assessments that result in correct site screening or further-action recommendations on a nationally consistent basis.

  10. Reproducibility and accuracy of body composition assessments in mice by dual energy x-ray absorptiometry and time domain nuclear magnetic resonance.

    PubMed

    Halldorsdottir, Solveig; Carmody, Jill; Boozer, Carol N; Leduc, Charles A; Leibel, Rudolph L

    2009-01-01

    OBJECTIVE: To assess the accuracy and reproducibility of dual-energy absorptiometry (DXA; PIXImus(™)) and time domain nuclear magnetic resonance (TD-NMR; Bruker Optics) for the measurement of body composition of lean and obese mice. SUBJECTS AND MEASUREMENTS: Thirty lean and obese mice (body weight range 19-67 g) were studied. Coefficients of variation for repeated (x 4) DXA and NMR scans of mice were calculated to assess reproducibility. Accuracy was assessed by comparing DXA and NMR results of ten mice to chemical carcass analyses. Accuracy of the respective techniques was also assessed by comparing DXA and NMR results obtained with ground meat samples to chemical analyses. Repeated scans of 10-25 gram samples were performed to test the sensitivity of the DXA and NMR methods to variation in sample mass. RESULTS: In mice, DXA and NMR reproducibility measures were similar for fat tissue mass (FTM) (DXA coefficient of variation [CV]=2.3%; and NMR CV=2.8%) (P=0.47), while reproducibility of lean tissue mass (LTM) estimates were better for DXA (1.0%) than NMR (2.2%) (

    accuracy, in mice, DXA overestimated (vs chemical composition) LTM (+1.7 ± 1.3 g [SD], ~8%, P <0.001) as well as FTM (+2.0 ± 1.2 g, ~46%, P <0.001). NMR estimates of LTM and FTM were virtually identical to chemical composition analysis (LTM: -0.05 ± 0.5 g, ~0.2%, P =0.79; FTM: +0.02 ± 0.7 g, ~15%, P =0.93). DXA- and NMR-determined LTM and FTM measurements were highly correlated with the corresponding chemical analyses (r(2)=0.92 and r(2)=0.99 for DXA LTM and FTM, respectively; r(2)=0.99 and r(2)=0.99 for NMR LTM and FTM, respectively). Sample mass did not affect accuracy in assessing the chemical composition of small ground meat samples by either DXA or NMR. CONCLUSION: DXA and NMR provide comparable levels of reproducibility in measurements of body composition of lean and obese mice. While DXA and NMR measures are highly correlated with chemical analysis measures, DXA consistently
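
    The reproducibility metric used in this study, the coefficient of variation over repeated scans, is simple to compute. A minimal sketch; the four scan values below are hypothetical:

```python
import statistics

def coefficient_of_variation(readings):
    """Within-subject CV (%) from repeated scans: 100 * sample SD / mean."""
    return 100.0 * statistics.stdev(readings) / statistics.mean(readings)

# Hypothetical four repeated fat-tissue-mass scans (g) of one mouse
scans = [10.1, 10.4, 9.9, 10.2]
print(round(coefficient_of_variation(scans), 2))  # -> 2.05
```

    A CV of ~2% would be in the range the study reports for fat tissue mass on both instruments; lower CV means tighter repeatability.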

  11. Reproducibility and accuracy of body composition assessments in mice by dual energy x-ray absorptiometry and time domain nuclear magnetic resonance

    PubMed Central

    Halldorsdottir, Solveig; Carmody, Jill; Boozer, Carol N.; Leduc, Charles A.; Leibel, Rudolph L.

    2011-01-01

    Objective To assess the accuracy and reproducibility of dual-energy absorptiometry (DXA; PIXImus™) and time domain nuclear magnetic resonance (TD-NMR; Bruker Optics) for the measurement of body composition of lean and obese mice. Subjects and measurements Thirty lean and obese mice (body weight range 19–67 g) were studied. Coefficients of variation for repeated (x 4) DXA and NMR scans of mice were calculated to assess reproducibility. Accuracy was assessed by comparing DXA and NMR results of ten mice to chemical carcass analyses. Accuracy of the respective techniques was also assessed by comparing DXA and NMR results obtained with ground meat samples to chemical analyses. Repeated scans of 10–25 gram samples were performed to test the sensitivity of the DXA and NMR methods to variation in sample mass. Results In mice, DXA and NMR reproducibility measures were similar for fat tissue mass (FTM) (DXA coefficient of variation [CV]=2.3%; and NMR CV=2.8%) (P=0.47), while reproducibility of lean tissue mass (LTM) estimates were better for DXA (1.0%) than NMR (2.2%) (

    accuracy, in mice, DXA overestimated (vs chemical composition) LTM (+1.7 ± 1.3 g [SD], ~8%, P <0.001) as well as FTM (+2.0 ± 1.2 g, ~46%, P <0.001). NMR estimates of LTM and FTM were virtually identical to chemical composition analysis (LTM: −0.05 ± 0.5 g, ~0.2%, P =0.79; FTM: +0.02 ± 0.7 g, ~15%, P =0.93). DXA- and NMR-determined LTM and FTM measurements were highly correlated with the corresponding chemical analyses (r2=0.92 and r2=0.99 for DXA LTM and FTM, respectively; r2=0.99 and r2=0.99 for NMR LTM and FTM, respectively). Sample mass did not affect accuracy in assessing the chemical composition of small ground meat samples by either DXA or NMR. Conclusion DXA and NMR provide comparable levels of reproducibility in measurements of body composition of lean and obese mice. While DXA and NMR measures are highly correlated with chemical analysis measures, DXA consistently overestimates LTM

  12. Performance characterization of precision micro robot using a machine vision system over the Internet for guaranteed positioning accuracy

    NASA Astrophysics Data System (ADS)

    Kwon, Yongjin; Chiou, Richard; Rauniar, Shreepud; Sosa, Horacio

    2005-11-01

    There is a missing link between a virtual development environment (e.g., CAD/CAM-driven offline robotic programming) and the production requirements of the actual robotic workcell. Simulated robot path planning and generation of pick-and-place coordinate points will not exactly coincide with the robot's performance, because variations in individual robot repeatability and thermal expansion of robot linkages are not taken into account. This is especially important when robots are controlled and programmed remotely (e.g., through the Internet or Ethernet), since remote users have no physical contact with the robotic systems. Current Internet-based manufacturing technology, limited to a web camera for live image transfer, poses a significant challenge for robot task performance. Consequently, the calibration and accuracy quantification critical to precision assembly have to be performed on-site, and the verification of robot positioning accuracy cannot be ascertained remotely. In the worst case, remote users have to assume the robot performance envelope provided by the manufacturer, which may cause a potentially serious hazard of system crash and damage to the parts and robot arms. Currently, there is no reliable methodology for remotely calibrating robot performance. The objective of this research is, therefore, to advance the current state of the art in Internet-based control and monitoring technology, with the specific aim of calibrating the accuracy of a micro precision robotic system through a novel methodology utilizing Ethernet-based smart image sensors and other advanced precision sensory control networks.

  13. Development of a Mass Casualty Triage Performance Assessment Tool

    DTIC Science & Technology

    2015-02-01

    Keywords: assessment, triage, performance measurement, BCT performance, feedback, tasks, task analysis, military police. … assessment and provided feedback for further refinement. Findings: the assessment tool comprises six cases; each case includes a scenario…

  14. Using Covariance Analysis to Assess Pointing Performance

    NASA Technical Reports Server (NTRS)

    Bayard, David; Kang, Bryan

    2009-01-01

    A Pointing Covariance Analysis Tool (PCAT) has been developed for evaluating the expected performance of the pointing control system for NASA s Space Interferometry Mission (SIM). The SIM pointing control system is very complex, consisting of multiple feedback and feedforward loops, and operating with multiple latencies and data rates. The SIM pointing problem is particularly challenging due to the effects of thermomechanical drifts in concert with the long camera exposures needed to image dim stars. Other pointing error sources include sensor noises, mechanical vibrations, and errors in the feedforward signals. PCAT models the effects of finite camera exposures and all other error sources using linear system elements. This allows the pointing analysis to be performed using linear covariance analysis. PCAT propagates the error covariance using a Lyapunov equation associated with time-varying discrete and continuous-time system matrices. Unlike Monte Carlo analysis, which could involve thousands of computational runs for a single assessment, the PCAT analysis performs the same assessment in a single run. This capability facilitates the analysis of parametric studies, design trades, and "what-if" scenarios for quickly evaluating and optimizing the control system architecture and design.
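
    The covariance propagation at the core of a tool like PCAT can be illustrated with a toy recursion: the discrete-time Lyapunov update P_{k+1} = A P A^T + Q replaces thousands of Monte Carlo runs with one deterministic pass. A minimal sketch under assumed dynamics; the two-state drift/rate model and noise levels are illustrative, not SIM's:

```python
import numpy as np

A = np.array([[0.99, 0.10],
              [0.00, 0.95]])     # hypothetical pointing drift/rate dynamics
Q = np.diag([1e-6, 1e-5])        # process-noise covariance injected per step
P = np.zeros((2, 2))             # start with zero pointing uncertainty

# Propagate the error covariance toward steady state (stable A: |eig| < 1)
for _ in range(1000):
    P = A @ P @ A.T + Q

rms_pointing = np.sqrt(P[0, 0])  # 1-sigma error of the first state
print(rms_pointing > 0.0)        # -> True
```

    A single pass like this yields the full error budget for one configuration, which is what makes parametric studies and "what-if" trades cheap compared with Monte Carlo.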

  15. Accuracy assessment of the ERP prediction method based on analysis of 100-year ERP series

    NASA Astrophysics Data System (ADS)

    Malkin, Z.; Tissen, V. M.

    2012-12-01

    A new method has been developed at the Siberian Research Institute of Metrology (SNIIM) for highly accurate prediction of UT1 and polar motion (PM). In this study, a detailed comparison was made of real-time UT1 predictions made in 2006-2011 and PM predictions made in 2009-2011 using the SNIIM method with simultaneous predictions computed at the International Earth Rotation and Reference Systems Service (IERS), USNO. The results show that the proposed method provides better accuracy at different prediction lengths.

  16. Rigorous Training of Dogs Leads to High Accuracy in Human Scent Matching-To-Sample Performance

    PubMed Central

    Marchal, Sophie; Bregeras, Olivier; Puaux, Didier; Gervais, Rémi; Ferry, Barbara

    2016-01-01

    Human scent identification is based on a matching-to-sample task in which trained dogs are required to compare a scent sample collected from an object found at a crime scene to that of a suspect. Based on dogs’ greater olfactory ability to detect and process odours, this method has been used in forensic investigations to identify the odour of a suspect at a crime scene. The excellent reliability and reproducibility of the method largely depend on rigor in dog training. The present study describes the various steps of training that lead to high sensitivity scores, with dogs matching samples with 90% efficiency when the complexity of the scents presented in the sample is similar to that presented in the lineups, and specificity reaching a ceiling, with no false alarms in human scent matching-to-sample tasks. This high level of accuracy ensures reliable results in judicial human scent identification tests. Our data should also convince law enforcement authorities to use these results as official forensic evidence when dogs are trained appropriately. PMID:26863620

  17. Rigorous Training of Dogs Leads to High Accuracy in Human Scent Matching-To-Sample Performance.

    PubMed

    Marchal, Sophie; Bregeras, Olivier; Puaux, Didier; Gervais, Rémi; Ferry, Barbara

    2016-01-01

    Human scent identification is based on a matching-to-sample task in which trained dogs are required to compare a scent sample collected from an object found at a crime scene to that of a suspect. Based on dogs' greater olfactory ability to detect and process odours, this method has been used in forensic investigations to identify the odour of a suspect at a crime scene. The excellent reliability and reproducibility of the method largely depend on rigor in dog training. The present study describes the various steps of training that lead to high sensitivity scores, with dogs matching samples with 90% efficiency when the complexity of the scents presented in the sample is similar to that presented in the lineups, and specificity reaching a ceiling, with no false alarms in human scent matching-to-sample tasks. This high level of accuracy ensures reliable results in judicial human scent identification tests. Our data should also convince law enforcement authorities to use these results as official forensic evidence when dogs are trained appropriately.

  18. The influence of attention on learning and performance: pre-movement time and accuracy in an isometric force production task.

    PubMed

    Lohse, Keith R

    2012-02-01

    Lohse, Sherwood, and Healy (2010) found that an external focus of attention (FOA) improved performance in a dart-throwing task and reduced the time taken between throws, but using the time between trials as a measure of preparation time is relatively crude. Thus, the current experiment analyzed how FOA affects accuracy and pre-movement time in an isometric force production task, to study how FOA affected motor planning. In the current experiment, training with an external focus improved the accuracy of the isometric force production task during training and during retention and transfer testing. During training, an external FOA also significantly reduced pre-movement time in early trials. These findings are interpreted as reduced explicit control of movement as a function of an external FOA, and help to integrate FOA research with other motor control phenomena and neuropsychological theories of motor control.

  19. Work Performance Ratings: Measurement Test Bed for Validity and Accuracy Research

    DTIC Science & Technology

    1989-02-01

    …validity theory (Fiske, 1987) suggests that invalidity of performance measurement is due to method effects. Nonetheless, attention to the measurement… the test bed. Definitions of performance dimensions: Planning and Organizing: establishing a course of action for self and/or others to accomplish a specific goal; planning proper assignments of personnel and appropriate allocation of resources; structuring or arranging resources to…

  20. Hidden Markov model and nuisance attribute projection based bearing performance degradation assessment

    NASA Astrophysics Data System (ADS)

    Jiang, Huiming; Chen, Jin; Dong, Guangming

    2016-05-01

    Hidden Markov model (HMM) has been widely applied in bearing performance degradation assessment. As a machine learning-based model, its accuracy depends on the sensitivity of the features used to estimate the degradation performance of bearings. It is a significant challenge to extract effective features that are not influenced by qualities or attributes uncorrelated with the bearing degradation condition. In this paper, a bearing performance degradation assessment method based on HMM and nuisance attribute projection (NAP) is proposed. NAP filters out the effect of nuisance attributes in the feature space through projection. The new feature space projected by NAP is more sensitive to bearing health changes and barely influenced by other interferences occurring in the operating condition. To verify the effectiveness of the proposed method, two different experimental databases are utilized. The results show that the combination of HMM and NAP can effectively improve the accuracy and robustness of the bearing performance degradation assessment system.
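
    The projection step that NAP relies on can be sketched in a few lines: given an estimated nuisance direction, features are projected onto its orthogonal complement, P = I - U U^T with U an orthonormal basis of the nuisance subspace. A minimal sketch; the 5-D features and the single nuisance direction are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical nuisance direction (e.g., an operating-condition effect)
n_dir = rng.normal(size=5)
n_dir /= np.linalg.norm(n_dir)

# Nuisance attribute projection: remove the nuisance subspace
U = n_dir[:, None]               # orthonormal basis, here a single column
P = np.eye(5) - U @ U.T          # P = I - U U^T

x = rng.normal(size=5) + 3.0 * n_dir   # feature heavily nuisance-affected
x_clean = P @ x                        # projected (nuisance-filtered) feature

print(abs(float(x_clean @ n_dir)) < 1e-9)  # -> True: nuisance removed
```

    Features cleaned this way would then be fed to the HMM, so that changes in the model's output track bearing health rather than operating-condition interference.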

  1. Pulsed Lidar Performance/Technical Maturity Assessment

    NASA Technical Reports Server (NTRS)

    Gimmestad, Gary G.; West, Leanne L.; Wood, Jack W.; Frehlich, Rod

    2004-01-01

This report describes the results of investigations performed by the Georgia Tech Research Institute (GTRI) and the National Center for Atmospheric Research (NCAR) under a task entitled 'Pulsed Lidar Performance/Technical Maturity Assessment', funded by the Crew Systems Branch of the Airborne Systems Competency at the NASA Langley Research Center. The investigations included two tasks, 1.1(a) and 1.1(b). The tasks discussed in this report support the NASA Virtual Airspace Modeling and Simulation (VAMS) program and are designed to evaluate a pulsed lidar that will be required for active wake vortex avoidance solutions. The Coherent Technologies, Inc. (CTI) WindTracer LIDAR is an eye-safe, 2-micron, coherent, pulsed Doppler lidar with wake tracking capability. The actual performance of the WindTracer system was to be quantified. In addition, the sensor performance has been assessed and modeled, and the models have been included in simulation efforts. The WindTracer LIDAR was purchased by the Federal Aviation Administration (FAA) for use in near-term field data collection efforts as part of a joint NASA/FAA wake vortex research program. In the joint research program, a minimum common wake and weather data collection platform will be defined. NASA Langley will use the field data to support wake model development and operational concept investigation in support of the VAMS project, where the ultimate goal is to improve airport capacity and safety. Task 1.1(a) was performed by NCAR in Boulder, Colorado, to analyze the lidar system and determine its performance and capabilities based on results from simulated lidar data with analytic wake vortex models provided by NASA, which were then compared to the vendor's claims for the operational specifications of the lidar. Task 1.1(a) is described in Section 3, including the vortex model, lidar parameters and simulations, and results for both detection and tracking of wake vortices generated by Boeing 737s and 747s. Task 1

  2. LiDAR-Landsat data fusion for large-area assessment of urban land cover: Balancing spatial resolution, data volume and mapping accuracy

    NASA Astrophysics Data System (ADS)

    Singh, Kunwar K.; Vogler, John B.; Shoemaker, Douglas A.; Meentemeyer, Ross K.

    2012-11-01

    The structural characteristics of Light Detection and Ranging (LiDAR) data are increasingly used to classify urban environments at fine scales, but have been underutilized for distinguishing heterogeneous land covers over large urban regions due to high cost, limited spectral information, and the computational difficulties posed by inherently large data volumes. Here we explore tradeoffs between potential gains in mapping accuracy with computational costs by integrating structural and intensity surface models extracted from LiDAR data with Landsat Thematic Mapper (TM) imagery and evaluating the degree to which TM, LiDAR, and LiDAR-TM fusion data discriminated land covers in the rapidly urbanizing region of Charlotte, North Carolina, USA. Using supervised maximum likelihood (ML) and classification tree (CT) methods, we classified TM data at 30 m and LiDAR data and LiDAR-TM fusions at 1 m, 5 m, 10 m, 15 m and 30 m resolutions. We assessed the relative contributions of LiDAR structural and intensity surface models to classification map accuracy and identified optimal spatial resolution of LiDAR surface models for large-area assessments of urban land cover. ML classification of 1 m LiDAR-TM fusions using both structural and intensity surface models increased total accuracy by 32% compared to LiDAR alone and by 8% over TM at 30 m. Fusion data using all LiDAR surface models improved class discrimination of spectrally similar forest, farmland, and managed clearings and produced the highest total accuracies at 1 m, 5 m, and 10 m resolutions (87.2%, 86.3% and 85.4%, respectively). At all resolutions of fusion data and using either ML or CT classifier, the relative contribution of the LiDAR structural surface models (canopy height and normalized digital surface model) to classification accuracy is greater than the intensity surface. 
Our evaluation of tradeoffs between data volume and thematic map accuracy for this study system suggests that a spatial resolution of 5 m for Li

  3. Assessing the accuracy of the International Classification of Diseases codes to identify abusive head trauma: a feasibility study

    PubMed Central

    Berger, Rachel P; Parks, Sharyn; Fromkin, Janet; Rubin, Pamela; Pecora, Peter J

    2016-01-01

    Objective To assess the accuracy of an International Classification of Diseases (ICD) code-based operational case definition for abusive head trauma (AHT). Methods Subjects were children <5 years of age evaluated for AHT by a hospital-based Child Protection Team (CPT) at a tertiary care paediatric hospital with a completely electronic medical record (EMR) system. Subjects were designated as non-AHT traumatic brain injury (TBI) or AHT based on whether the CPT determined that the injuries were due to AHT. The sensitivity and specificity of the ICD-based definition were calculated. Results There were 223 children evaluated for AHT: 117 AHT and 106 non-AHT TBI. The sensitivity and specificity of the ICD-based operational case definition were 92% (95% CI 85.8 to 96.2) and 96% (95% CI 92.3 to 99.7), respectively. All errors in sensitivity and three of the four specificity errors were due to coder error; one specificity error was a physician error. Conclusions In a paediatric tertiary care hospital with an EMR system, the accuracy of an ICD-based case definition for AHT was high. Additional studies are needed to assess the accuracy of this definition in all types of hospitals in which children with AHT are cared for. PMID:24167034
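The reported sensitivity and specificity follow directly from a 2x2 comparison of the ICD-based definition against the Child Protection Team determination. A minimal sketch, using hypothetical cell counts chosen only to be consistent with the reported 117 AHT and 106 non-AHT cases and the ~92%/~96% rates (the actual counts are not given in the abstract):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 validation table.

    tp/fn: AHT cases the ICD definition did / did not flag
    tn/fp: non-AHT cases the ICD definition correctly / incorrectly flagged
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# hypothetical counts: 117 AHT cases (108 flagged), 106 non-AHT (102 not flagged)
sens, spec = sens_spec(tp=108, fn=9, tn=102, fp=4)
```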

  4. Technology integration performance assessment using lean principles in health care.

    PubMed

    Rico, Florentino; Yalcin, Ali; Eikman, Edward A

    2015-01-01

    This study assesses the impact of an automated infusion system (AIS) integration at a positron emission tomography (PET) center based on "lean thinking" principles. The authors propose a systematic measurement system that evaluates improvement in terms of the "8 wastes." This adaptation to the health care context consisted of performance measurement before and after integration of AIS in terms of time, utilization of resources, amount of materials wasted/saved, system variability, distances traveled, and worker strain. The authors' observations indicate that AIS stands to be very effective in a busy PET department, such as the one in Moffitt Cancer Center, owing to its accuracy, pace, and reliability, especially after the necessary adjustments are made to reduce or eliminate the source of errors. This integration must be accompanied by a process reengineering exercise to realize the full potential of AIS in reducing waste and improving patient care and worker satisfaction.

  5. Phase segmentation of X-ray computer tomography rock images using machine learning techniques: an accuracy and performance study

    NASA Astrophysics Data System (ADS)

    Chauhan, Swarup; Rühaak, Wolfram; Anbergen, Hauke; Kabdenov, Alen; Freise, Marcus; Wille, Thorsten; Sass, Ingo

    2016-07-01

Performance and accuracy of machine learning techniques to segment rock grains, matrix and pore voxels from a 3-D volume of X-ray tomographic (XCT) grayscale rock images were evaluated. The segmentation and classification capabilities of unsupervised (k-means, fuzzy c-means, self-organized maps), supervised (artificial neural networks, least-squares support vector machines) and ensemble classifiers (bagging and boosting) were tested using XCT images of andesite volcanic rock, Berea sandstone, Rotliegend sandstone and a synthetic sample. The averaged porosity obtained for andesite (15.8 ± 2.5 %), Berea sandstone (16.3 ± 2.6 %), Rotliegend sandstone (13.4 ± 7.4 %) and the synthetic sample (48.3 ± 13.3 %) is in very good agreement with the respective laboratory measurement data and varies by a factor of 0.2. The k-means algorithm is the fastest of all the machine learning algorithms, whereas the least-squares support vector machine is the most computationally expensive. The metrics entropy, purity, root mean square error, receiver operating characteristic curve and 10-fold cross-validation were used to determine the accuracy of the unsupervised, supervised and ensemble classifier techniques. In general, the accuracy was found to be largely affected by the feature vector selection scheme. As there is always a trade-off between performance and accuracy, it is difficult to isolate one particular machine learning algorithm best suited for the complex phase segmentation problem. Our investigation therefore provides parameters that can help in selecting the appropriate machine learning technique for phase segmentation.
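As a sketch of the simplest approach named above, k-means on voxel intensities reduces to 1-D clustering of grayscale values, with the darkest cluster conventionally taken as pore space. This illustration uses a deterministic initialization and a synthetic, well-separated intensity distribution (assumptions for the example, not the paper's setup):

```python
import numpy as np

def kmeans_1d(values, k, iters=50):
    """Plain k-means on 1-D grayscale intensities (e.g. pore/matrix/grain).

    Initializes centers evenly across the intensity range for determinism.
    Returns labels ordered by intensity (0 = darkest cluster) and the centers.
    """
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    order = np.argsort(centers)
    remap = np.empty(k, dtype=int)
    remap[order] = np.arange(k)
    return remap[labels], centers[order]

# synthetic "image": three well-separated intensity populations
vals = np.concatenate([np.full(100, 10.0),    # pores (dark)
                       np.full(100, 120.0),   # matrix
                       np.full(100, 240.0)])  # grains (bright)
labels, centers = kmeans_1d(vals, 3)
porosity = float(np.mean(labels == 0))  # fraction of darkest-cluster voxels
```

On real XCT data the populations overlap, which is exactly why feature vector selection dominates accuracy in the study.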

  6. Accuracy assessment of NOAA gridded daily reference evapotranspiration for the Texas High Plains

    USGS Publications Warehouse

    Moorhead, Jerry; Gowda, Prasanna H.; Hobbins, Michael; Senay, Gabriel; Paul, George; Marek, Thomas; Porter, Dana

    2015-01-01

The National Oceanic and Atmospheric Administration (NOAA) provides daily reference evapotranspiration (ETref) maps for the contiguous United States using climatic data from the North American Land Data Assimilation System (NLDAS). These data provide large-scale spatial representation of ETref, which is essential for regional scale water resources management. Data used in the development of NOAA daily ETref maps are derived from observations over surfaces that are different from short (grass — ETos) or tall (alfalfa — ETrs) reference crops, often in nonagricultural settings, which carries an unknown discrepancy between assumed and actual conditions. In this study, NOAA daily ETos and ETrs maps were evaluated for accuracy, using observed data from the Texas High Plains Evapotranspiration (TXHPET) network. Daily ETos, ETrs and the climatic data (air temperature, wind speed, and solar radiation) used for calculating ETref were extracted from the NOAA maps for TXHPET locations and compared against ground measurements on reference grass surfaces. NOAA ETref maps generally overestimated the TXHPET observations (by 1.4 and 2.2 mm/day for ETos and ETrs, respectively), which may be attributed to errors in the NLDAS modeled air temperature and wind speed, to which ETref is most sensitive. Therefore, a bias correction to NLDAS modeled air temperature and wind speed data, or adjustment to the resulting NOAA ETref, may be needed to improve the accuracy of NOAA ETref maps.
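The overestimation quoted above is a mean bias (gridded value minus station observation, averaged over matched days). A minimal sketch with hypothetical daily values, chosen only to illustrate the sign convention and to land near the reported 1.4 mm/day ETos bias:

```python
import numpy as np

def mean_bias(gridded, observed):
    """Mean bias of gridded estimates vs. ground observations.

    Positive values mean the gridded product overestimates.
    """
    gridded = np.asarray(gridded, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return float(np.mean(gridded - observed))

# hypothetical matched daily ETos values (mm/day): NOAA grid vs. TXHPET station
noaa_etos = [6.0, 7.5, 8.2, 5.9]
station_etos = [4.8, 6.1, 6.6, 4.5]
bias = mean_bias(noaa_etos, station_etos)  # positive: grid overestimates
```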

  7. Assessment of Classification Accuracies of SENTINEL-2 and LANDSAT-8 Data for Land Cover / Use Mapping

    NASA Astrophysics Data System (ADS)

    Hale Topaloğlu, Raziye; Sertel, Elif; Musaoğlu, Nebiye

    2016-06-01

This study aims to compare classification accuracies of land cover/use maps created from Sentinel-2 and Landsat-8 data. The Istanbul metropolitan city of Turkey, with a population of around 14 million and varied landscape characteristics, was selected as the study area. Water, forest, agricultural areas, grasslands, transport network, urban, airport-industrial units and barren land-mine land cover/use classes adapted from the CORINE nomenclature were used as the main land cover/use classes to identify. To fulfil the aims of this research, recently acquired Sentinel-2 (dated 08/02/2016) and Landsat-8 (dated 22/02/2016) images of Istanbul were obtained, and image pre-processing steps like atmospheric and geometric correction were employed. Both Sentinel-2 and Landsat-8 images were resampled to 30 m pixel size after geometric correction, and similar spectral bands for both satellites were selected to create a similar base for these multi-sensor data. Maximum Likelihood (MLC) and Support Vector Machine (SVM) supervised classification methods were applied to both data sets to accurately identify eight different land cover/use classes. An error matrix was created using the same reference points for the Sentinel-2 and Landsat-8 classifications. After the classification, accuracy results were compared to find out the best approach to create a current land cover/use map of the region. The results of the MLC and SVM classification methods were compared for both images.
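The accuracy comparison mentioned above rests on an error (confusion) matrix built from common reference points; the headline number is overall accuracy, the matrix trace divided by the total. A minimal sketch with a hypothetical 3-class matrix (the study itself used eight classes):

```python
import numpy as np

def total_accuracy(error_matrix):
    """Overall accuracy from a classification error matrix:
    correctly classified reference points / all reference points."""
    m = np.asarray(error_matrix, dtype=float)
    return float(np.trace(m) / m.sum())

# hypothetical error matrix (rows: mapped class, cols: reference class)
em = [[50,  3,  2],
      [ 4, 45,  6],
      [ 1,  2, 37]]
acc = total_accuracy(em)
```

Comparing two classifications fairly, as done here for Sentinel-2 vs. Landsat-8, requires evaluating both matrices on the same reference point set.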

  8. Assessing the speed--accuracy trade-off effect on the capacity of information processing.

    PubMed

    Donkin, Chris; Little, Daniel R; Houpt, Joseph W

    2014-06-01

    The ability to trade accuracy for speed is fundamental to human decision making. The speed-accuracy trade-off (SAT) effect has received decades of study, and is well understood in relatively simple decisions: collecting more evidence before making a decision allows one to be more accurate but also slower. The SAT in more complex paradigms has been given less attention, largely due to limits in the models and statistics that can be applied to such tasks. Here, we have conducted the first analysis of the SAT in multiple signal processing, using recently developed technologies for measuring capacity that take into account both response time and choice probability. We show that the primary influence of caution in our redundant-target experiments is on the threshold amount of evidence required to trigger a response. However, in a departure from the usual SAT effect, we found that participants strategically ignored redundant information when they were forced to respond quickly, but only when the additional stimulus was reliably redundant. Interestingly, because the capacity of the system was severely limited on redundant-target trials, ignoring additional targets meant that processing was more efficient when making fast decisions than when making slow and accurate decisions, where participants' limited resources had to be divided between the 2 stimuli.

  9. A Control Variate Method for Probabilistic Performance Assessment. Improved Estimates for Mean Performance Quantities of Interest

    SciTech Connect

    MacKinnon, Robert J.; Kuhlman, Kristopher L

    2016-05-01

We present a method of control variates for calculating improved estimates of mean performance quantities of interest, E(PQI), computed from Monte Carlo probabilistic simulations. An example of a PQI is the concentration of a contaminant at a particular location in a problem domain computed from simulations of transport in porous media. To simplify the presentation, the method is described in the setting of a one-dimensional elliptic model problem involving a single uncertain parameter represented by a probability distribution. The approach can be easily implemented for more complex problems involving multiple uncertain parameters, and in particular for application to probabilistic performance assessment of deep geologic nuclear waste repository systems. Numerical results indicate the method can produce estimates of E(PQI) having superior accuracy on coarser meshes and reduce the number of simulations needed to achieve an acceptable estimate.
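The control variate idea can be shown on a toy integrand rather than the paper's elliptic model problem: given samples of the quantity of interest y and a correlated control c with known mean, subtract the optimally scaled deviation of c to reduce variance. This is a generic sketch of the standard estimator, not the authors' code:

```python
import numpy as np

def control_variate_mean(y, c, c_mean):
    """Estimate E[y] using control variate c with known mean c_mean.

    Uses the optimal coefficient beta = Cov(y, c) / Var(c).
    Returns (estimate, standard error of the adjusted samples).
    """
    y = np.asarray(y, dtype=float)
    c = np.asarray(c, dtype=float)
    beta = np.cov(y, c)[0, 1] / np.var(c, ddof=1)
    adjusted = y - beta * (c - c_mean)
    return adjusted.mean(), adjusted.std(ddof=1) / np.sqrt(len(y))

# toy PQI: y = exp(u), u ~ Uniform(0,1); control c = u with E[c] = 0.5
rng = np.random.default_rng(42)
u = rng.random(100_000)
est, se = control_variate_mean(np.exp(u), u, 0.5)
# the true mean is e - 1
```

Because exp(u) and u are strongly correlated, the adjusted samples have far smaller variance than the raw ones, which is the mechanism behind "fewer simulations for an acceptable estimate".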

  10. Qmerit-calibrated overlay to improve overlay accuracy and device performance

    NASA Astrophysics Data System (ADS)

    Ullah, Md Zakir; Jazim, Mohamed Fazly Mohamed; Sim, Stella; Lim, Alan; Hiem, Biow; Chuen, Lieu Chia; Ang, Jesline; Lim, Ek Chow; Klein, Dana; Amit, Eran; Volkovitch, Roie; Tien, David; Choi, DongSub

    2015-03-01

In advanced semiconductor industries, the overlay error budget is getting tighter due to shrinkage in technology. To fulfill the tighter overlay requirements, gaining every nanometer of improved overlay is very important in order to accelerate yield in high-volume manufacturing (HVM) fabs. To meet the stringent overlay requirements and to overcome other unforeseen situations, it is becoming critical to eliminate the smallest imperfections in the metrology targets used for overlay metrology. For standard cases, the overlay metrology recipe is selected based on total measurement uncertainty (TMU). However, under certain circumstances, inaccuracy due to target imperfections can become the dominant contributor to the metrology uncertainty and cannot be detected and quantified by the standard TMU. For optical-based overlay (OBO) metrology targets, mark asymmetry is a common issue which can cause measurement inaccuracy, and it is not captured by standard TMU. In this paper, a new calibration method, Archer Self-Calibration (ASC), has been established successfully in HVM fabs to improve overlay accuracy on image-based overlay (IBO) metrology targets. Additionally, a new color selection methodology has been developed for the overlay metrology recipe as part of this calibration method. In this study, Qmerit-calibrated data has been used for the run-to-run control loop on multiple devices. This study shows that the color filter can be chosen more precisely with the help of Qmerit data. Overlay stability improved by 10~20% with the best color selection, without causing any negative impact to the products. Residual error, as well as overlay mean plus 3-sigma, showed an improvement of up to 20% when Qmerit-calibrated data was used. A 30% improvement was seen in certain electrical data associated with tested process layers.

  11. Measurement accuracy and Cerenkov removal for high performance, high spatial resolution scintillation dosimetry

    SciTech Connect

    Archambault, Louis; Beddar, A. Sam; Gingras, Luc

    2006-01-15

With highly conformal radiation therapy techniques such as intensity-modulated radiation therapy, radiosurgery, and tomotherapy becoming more common in clinical practice, the use of these narrow beams requires a higher level of precision in quality assurance and dosimetry. Plastic scintillators with their water equivalence, energy independence, and dose rate linearity have been shown to possess excellent qualities that suit the most complex and demanding radiation therapy treatment plans. The primary disadvantage of plastic scintillators is the presence of Cerenkov radiation generated in the light guide, which results in an undesired stem effect. Several techniques have been proposed to minimize this effect. In this study, we compared three such techniques (background subtraction, simple filtering, and chromatic removal) in terms of reproducibility and dose accuracy as gauges of their ability to remove the Cerenkov stem effect from the dose signal. The dosimeter used in this study comprised a 6-mm³ plastic scintillating fiber probe, an optical fiber, and a color charge-coupled device camera. The whole system was shown to be linear and the total light collected by the camera was reproducible to within 0.31% for 5-s integration time. Background subtraction and chromatic removal were both found to be suitable for precise dose evaluation, with average absolute dose discrepancies of 0.52% and 0.67%, respectively, from ion chamber values. Background subtraction required two optical fibers, but chromatic removal used only one, thereby preventing possible measurement artifacts when a strong dose gradient was perpendicular to the optical fiber. Our findings showed that a plastic scintillation dosimeter could be made free of the effect of Cerenkov radiation.

  12. Methodology issues concerning the accuracy of kinematic data collection and analysis using the ariel performance analysis system

    NASA Technical Reports Server (NTRS)

    Wilmington, R. P.; Klute, Glenn K. (Editor); Carroll, Amy E. (Editor); Stuart, Mark A. (Editor); Poliner, Jeff (Editor); Rajulu, Sudhakar (Editor); Stanush, Julie (Editor)

    1992-01-01

    Kinematics, the study of motion exclusive of the influences of mass and force, is one of the primary methods used for the analysis of human biomechanical systems as well as other types of mechanical systems. The Anthropometry and Biomechanics Laboratory (ABL) in the Crew Interface Analysis section of the Man-Systems Division performs both human body kinematics as well as mechanical system kinematics using the Ariel Performance Analysis System (APAS). The APAS supports both analysis of analog signals (e.g. force plate data collection) as well as digitization and analysis of video data. The current evaluations address several methodology issues concerning the accuracy of the kinematic data collection and analysis used in the ABL. This document describes a series of evaluations performed to gain quantitative data pertaining to position and constant angular velocity movements under several operating conditions. Two-dimensional as well as three-dimensional data collection and analyses were completed in a controlled laboratory environment using typical hardware setups. In addition, an evaluation was performed to evaluate the accuracy impact due to a single axis camera offset. Segment length and positional data exhibited errors within 3 percent when using three-dimensional analysis and yielded errors within 8 percent through two-dimensional analysis (Direct Linear Software). Peak angular velocities displayed errors within 6 percent through three-dimensional analyses and exhibited errors of 12 percent when using two-dimensional analysis (Direct Linear Software). The specific results from this series of evaluations and their impacts on the methodology issues of kinematic data collection and analyses are presented in detail. The accuracy levels observed in these evaluations are also presented.

  13. Comparative performance assessment of switching options

    NASA Astrophysics Data System (ADS)

    Vukovic, Alex; Savoie, Michel J.

    2004-11-01

    Switching is one of the key functionalities in next generation optical networks. It might be performed by either an optical switch (optical-electrical-optical, or OEO) or a "purely" photonic switch (optical-optical-optical or OOO). Both switches are analyzed from two perspectives - as an individual network element, and as an integral part within the communication network. As an individual network element, the performance evaluation of the two switch types is based on the individual assessment of switch footprint and power dissipation, bandwidth utilization, scalability to high speed, transparency, interoperability, technology maturity and ability to manipulate data. Although both switch types have their own advantages as a network element, the full judgement of their role in next generation optical networks requires an overall network perspective. From that viewpoint, network functionalities such as grooming capabilities, scalability, traffic management, protection, line equalization and performance monitoring are those taken into account for comparative analyses to gain an understanding of the impacts of switch choice in the network. As a result of the comparative performance assessment, the merits and benefits of both switch types in actual network applications are analyzed and outlined. Although the paper evaluates some criteria for switch choice in a network, it points out potential technologies or techniques critical to next generation architectural solutions and protocols as well as the challenges to bridge the gap towards implementing flexible, cost-effective and dynamically provisioned networks of the future. Finally, the paper responds to one critical question - What is the expected role of each switch type in next generation applications and services?

  14. Assessing the accuracy and repeatability of automated photogrammetrically generated digital surface models from unmanned aerial system imagery

    NASA Astrophysics Data System (ADS)

    Chavis, Christopher

    Using commercial digital cameras in conjunction with Unmanned Aerial Systems (UAS) to generate 3-D Digital Surface Models (DSMs) and orthomosaics is emerging as a cost-effective alternative to Light Detection and Ranging (LiDAR). Powerful software applications such as Pix4D and APS can automate the generation of DSM and orthomosaic products from a handful of inputs. However, the accuracy of these models is relatively untested. The objectives of this study were to generate multiple DSM and orthomosaic pairs of the same area using Pix4D and APS from flights of imagery collected with a lightweight UAS. The accuracy of each individual DSM was assessed in addition to the consistency of the method to model one location over a period of time. Finally, this study determined if the DSMs automatically generated using lightweight UAS and commercial digital cameras could be used for detecting changes in elevation and at what scale. Accuracy was determined by comparing DSMs to a series of reference points collected with survey grade GPS. Other GPS points were also used as control points to georeference the products within Pix4D and APS. The effectiveness of the products for change detection was assessed through image differencing and observance of artificially induced, known elevation changes. The vertical accuracy with the optimal data and model is ≈ 25 cm and the highest consistency over repeat flights is a standard deviation of ≈ 5 cm. Elevation change detection based on such UAS imagery and DSM models should be viable for detecting infrastructure change in urban or suburban environments with little dense canopy vegetation.
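The vertical accuracy figure quoted above comes from comparing DSM elevations against survey-grade GPS checkpoints, typically summarized as an RMSE. A minimal sketch with hypothetical elevations (the study's actual checkpoint data are not reproduced here):

```python
import numpy as np

def vertical_rmse(dsm_z, gps_z):
    """Root-mean-square vertical error of DSM elevations vs. GPS checkpoints."""
    d = np.asarray(dsm_z, dtype=float) - np.asarray(gps_z, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))

# hypothetical checkpoint elevations in metres
dsm = [101.10, 99.85, 102.40, 98.20]
gps = [100.90, 100.05, 102.10, 98.50]
rmse = vertical_rmse(dsm, gps)  # metres
```

Repeat-flight consistency, the study's second metric, would instead be the standard deviation of per-point elevations across the repeated DSMs.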

  15. Assessment of accuracy of adopted centre of mass corrections for the Etalon geodetic satellites

    NASA Astrophysics Data System (ADS)

    Appleby, Graham; Dunn, Peter; Otsubo, Toshimichi; Rodriguez, Jose

    2016-04-01

    Accurate centre-of-mass corrections are key parameters in the analysis of satellite laser ranging observations. In order to meet current accuracy requirements, the vector from the reflection point of a laser retroreflector array to the centre of mass of the orbiting spacecraft must be known with mm-level accuracy. In general, the centre-of-mass correction will be dependent on the characteristics of the target (geometry, construction materials, type of retroreflectors), the hardware employed by the tracking station (laser system, detector type), the intensity of the returned laser pulses, and the post-processing strategy employed to reduce the observations [1]. For the geodetic targets used by the ILRS to produce the SLR contribution to the ITRF, the LAGEOS and Etalon satellite pairs, there are centre-of-mass correction tables available for each tracking station [2]. These values are based on theoretical considerations, empirical determination of the optical response functions of each satellite, and knowledge of the tracking technology and return intensity employed [1]. Here we present results that put into question the accuracy of some of the current values for the centre-of-mass corrections of the Etalon satellites. We have computed weekly reference frame solutions using LAGEOS and Etalon observations for the period 1996-2014, estimating range bias parameters for each satellite type along with station coordinates. Analysis of the range bias time series reveals an unexplained, cm-level positive bias for the Etalon satellites in the case of most stations operating at high energy return levels. The time series of tracking stations that have undergone a transition from different modes of operation provide the evidence pointing to an inadequate centre-of-mass modelling. [1] Otsubo, T., and G.M. Appleby, System-dependent centre-of-mass correction for spherical geodetic satellites, J Geophys. Res., 108(B4), 2201, 2003 [2] Appleby, G.M., and T. Otsubo, Centre of Mass

  16. A SUB-PIXEL ACCURACY ASSESSMENT FRAMEWORK FOR DETERMINING LANDSAT TM DERIVED IMPERVIOUS SURFACE ESTIMATES.

    EPA Science Inventory

    The amount of impervious surface in a watershed is a landscape indicator integrating a number of concurrent interactions that influence a watershed's hydrology. Remote sensing data and techniques are viable tools to assess anthropogenic impervious surfaces. However a fundamental ...

  17. Effect of Accuracy-Emphasized Instructions on Performance on an Attribute-Identification Task

    ERIC Educational Resources Information Center

    Wasilewski, Bohdan K.

    1972-01-01

    Results supported the hypotheses that the emphasis on speed: (a) has a detrimental effect on the performance, (b) is inherent in a test-like situation, and (c) can be reduced by emphasizing in the instructions to Ss the detrimental effects of speed on the achievement of solution. (Author/CB)

  18. Futsal Match-Related Fatigue Affects Running Performance and Neuromuscular Parameters but Not Finishing Kick Speed or Accuracy

    PubMed Central

    Milioni, Fabio; Vieira, Luiz H. P.; Barbieri, Ricardo A.; Zagatto, Alessandro M.; Nordsborg, Nikolai B.; Barbieri, Fabio A.; dos-Santos, Júlio W.; Santiago, Paulo R. P.; Papoti, Marcelo

    2016-01-01

Purpose: The aim of the present study was to investigate the influence of futsal match-related fatigue on running performance, neuromuscular variables, and finishing kick speed and accuracy. Methods: Ten professional futsal players participated in the study (age: 22.2 ± 2.5 years) and initially performed an incremental protocol to determine maximum oxygen uptake (VO2max: 50.6 ± 4.9 mL·kg−1·min−1). Next, simulated games were performed in four periods of 10 min, during which heart rate and blood lactate concentration were monitored. The entire games were video recorded for subsequent automatic tracking. Before and immediately after the simulated game, neuromuscular function was measured by maximal isometric force of knee extension, voluntary activation using the twitch interpolation technique, and electromyographic activity. Before, at half time, and immediately after the simulated game, the athletes also performed a set of finishing kicks for ball speed and accuracy measurements. Results: Total distance covered (1st half: 1986.6 ± 74.4 m; 2nd half: 1856.0 ± 129.7 m, P = 0.00) and distance covered per minute (1st half: 103.2 ± 4.4 m·min−1; 2nd half: 96.4 ± 7.5 m·min−1, P = 0.00) declined significantly during the simulated game, as did maximal isometric force of knee extension (Before: 840.2 ± 66.2 N; After: 751.6 ± 114.3 N, P = 0.04) and voluntary activation (Before: 85.9 ± 7.5%; After: 74.1 ± 12.3%, P = 0.04); however, ball speed and accuracy during the finishing kicks were not significantly affected. Conclusion: We conclude that although the decline in running performance and neuromuscular variables represents an important manifestation of central fatigue, this condition apparently does not affect the speed and accuracy of finishing kicks. PMID:27872598

  19. Performance assessment task team progress report

    SciTech Connect

    Wood, D.E.; Curl, R.U.; Armstrong, D.R.; Cook, J.R.; Dolenc, M.R.; Kocher, D.C.; Owens, K.W.; Regnier, E.P.; Roles, G.W.; Seitz, R.R.

    1994-05-01

The U.S. Department of Energy (DOE) Headquarters EM-35 established a Performance Assessment Task Team (referred to as the Team) to integrate the activities of the sites that are preparing performance assessments (PAs) for disposal of new low-level waste, as required by Chapter III of DOE Order 5820.2A, "Low-Level Waste Management". The intent of the Team is to achieve a degree of consistency among these PAs as the analyses proceed at the disposal sites. The Team's purpose is to recommend policy and guidance to the DOE on issues that impact the PAs, including release scenarios and parameters, so that the approaches are as consistent as possible across the DOE complex. The Team has identified issues requiring attention and developed discussion papers for those issues. Some issues have been completed, and the recommendations are provided in this document. Other issues are still being discussed, and their status summaries are provided in this document. A major initiative was to establish a subteam to develop a set of test scenarios and parameters for benchmarking codes in use at the various sites. The activities of the Team are reported here through December 1993.

  20. Mathematical accuracy of Aztec land surveys assessed from records in the Codex Vergara.

    PubMed

    Jorge, María del Carmen; Williams, Barbara J; Garza-Hume, C E; Olvera, Arturo

    2011-09-13

    Land surveying in ancient states is documented not only for Eurasia but also for the Americas, amply attested by two Acolhua-Aztec pictorial manuscripts from the Valley of Mexico. The Codex Vergara and the Códice de Santa María Asunción consist of hundreds of drawings of agricultural fields that uniquely record surface areas as well as perimeter measurements. A previous study of the Codex Vergara examines how Acolhua-Aztecs determined field area by reconstructing their calculation procedures. Here we evaluate the accuracy of their area values using modern mathematics. The findings verify the overall mathematical validity of the codex records. Three-quarters of the areas are within 5% of the maximum possible value, and 85% are within 10%, which compares well with reported errors by Western surveyors that postdate Aztec-Acolhua work by several centuries.
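
The "maximum possible value" for a field's area, given its four recorded side lengths, is attained by the cyclic quadrilateral and given by Brahmagupta's formula. As an illustrative sketch only (the field below is hypothetical, not a codex record), the bound and a recorded area's shortfall can be computed as:

```python
import math

def max_quadrilateral_area(a, b, c, d):
    """Maximum possible area for a quadrilateral with the given side
    lengths, attained by the cyclic quadrilateral (Brahmagupta's formula)."""
    s = (a + b + c + d) / 2.0  # semiperimeter
    return math.sqrt((s - a) * (s - b) * (s - c) * (s - d))

def fraction_below_maximum(recorded_area, sides):
    """How far a recorded area falls below the maximum possible value."""
    best = max_quadrilateral_area(*sides)
    return (best - recorded_area) / best

# Hypothetical field: a 20 x 30 plot whose area was recorded as 590.
sides = (20.0, 30.0, 20.0, 30.0)
print(max_quadrilateral_area(*sides))        # 600.0 (a rectangle is cyclic)
print(fraction_below_maximum(590.0, sides))  # ~0.017, i.e. within 5%
```

Under this reading, "three-quarters of the areas are within 5%" means the recorded area is no more than 5% below the cyclic-quadrilateral bound for its sides.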

  1. Assessment of Required Accuracy of Digital Elevation Data for Hydrologic Modeling

    NASA Technical Reports Server (NTRS)

    Kenward, T.; Lettenmaier, D. P.

    1997-01-01

    The effect of vertical accuracy of Digital Elevation Models (DEMs) on hydrologic models is evaluated by comparing three DEMs and resulting hydrologic model predictions applied to a 7.2 sq km USDA - ARS watershed at Mahantango Creek, PA. The high resolution (5 m) DEM was resampled to a 30 m resolution using a method that constrained the spatial structure of the elevations to be comparable with the USGS and SIR-C DEMs. The resulting 30 m DEM was used as the reference product for subsequent comparisons. Spatial fields of directly derived quantities, such as elevation differences, slope, and contributing area, were compared to the reference product, as were hydrologic model output fields derived using each of the three DEMs at the common 30 m spatial resolution.
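
As a hedged illustration of the directly derived quantities mentioned (elevation differences and slope), a minimal sketch on tiny hypothetical grids, not the study's DEMs:

```python
import math

def slope_degrees(dem, cellsize):
    """Per-cell slope in degrees from central differences on a 2-D grid
    of elevations (edge cells are left at zero for brevity)."""
    rows, cols = len(dem), len(dem[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            dzdx = (dem[i][j + 1] - dem[i][j - 1]) / (2.0 * cellsize)
            dzdy = (dem[i + 1][j] - dem[i - 1][j]) / (2.0 * cellsize)
            out[i][j] = math.degrees(math.atan(math.hypot(dzdx, dzdy)))
    return out

def difference_stats(dem, reference):
    """Mean and RMS of cell-by-cell elevation differences."""
    diffs = [a - b for ra, rb in zip(dem, reference)
             for a, b in zip(ra, rb)]
    mean = sum(diffs) / len(diffs)
    rms = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return mean, rms

# Tiny hypothetical grids standing in for the 30 m DEMs.
reference = [[10, 10, 10], [10, 13, 10], [10, 10, 10]]
candidate = [[11, 10, 10], [10, 12, 10], [10, 10, 11]]
mean_diff, rms_diff = difference_stats(candidate, reference)
slopes = slope_degrees(reference, cellsize=30.0)
```

In the actual comparison each candidate DEM would be differenced against the reference product cell by cell, with slope and contributing-area fields compared in the same spatial manner.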

  2. Mathematical accuracy of Aztec land surveys assessed from records in the Codex Vergara

    PubMed Central

    Williams, Barbara J.; Garza-Hume, C. E.; Olvera, Arturo

    2011-01-01

    Land surveying in ancient states is documented not only for Eurasia but also for the Americas, amply attested by two Acolhua–Aztec pictorial manuscripts from the Valley of Mexico. The Codex Vergara and the Códice de Santa María Asunción consist of hundreds of drawings of agricultural fields that uniquely record surface areas as well as perimeter measurements. A previous study of the Codex Vergara examines how Acolhua–Aztecs determined field area by reconstructing their calculation procedures. Here we evaluate the accuracy of their area values using modern mathematics. The findings verify the overall mathematical validity of the codex records. Three-quarters of the areas are within 5% of the maximum possible value, and 85% are within 10%, which compares well with reported errors by Western surveyors that postdate Aztec–Acolhua work by several centuries. PMID:21876138

  3. The short- to medium-term predictive accuracy of static and dynamic risk assessment measures in a secure forensic hospital.

    PubMed

    Chu, Chi Meng; Thomas, Stuart D M; Ogloff, James R P; Daffern, Michael

    2013-04-01

    Although violence risk assessment knowledge and practice have advanced over the past few decades, it remains practically difficult to decide which measures clinicians should use to assess and make decisions about the violence potential of individuals on an ongoing basis, particularly in the short to medium term. Within this context, this study sought to compare the predictive accuracy of dynamic risk assessment measures for violence with static risk assessment measures over the short term (up to 1 month) and medium term (up to 6 months) in a forensic psychiatric inpatient setting. Results showed that dynamic measures were generally more accurate than static measures for short- to medium-term predictions of inpatient aggression. These findings highlight the necessity of using risk assessment measures that are sensitive to important clinical risk state variables to improve the short- to medium-term prediction of aggression within the forensic inpatient setting. Such knowledge can assist with the development of more accurate and efficient risk assessment procedures, including the selection of appropriate risk assessment instruments to manage and prevent the violence of offenders with mental illnesses during inpatient treatment.
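
The abstract does not name its accuracy statistic; assuming the rank-based summary commonly used in this literature (area under the ROC curve), comparing a dynamic and a static measure might be sketched as follows, with entirely hypothetical scores:

```python
def auc(scores, outcomes):
    """Area under the ROC curve via the Mann-Whitney interpretation:
    the probability that a randomly chosen positive case (outcome 1)
    outscores a randomly chosen negative case; ties count one half."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores for six patients; 1 = aggression observed
# within the follow-up window.
outcomes = [1, 1, 1, 0, 0, 0]
dynamic = [9, 8, 6, 7, 3, 2]
static = [9, 4, 5, 8, 6, 2]
print(auc(dynamic, outcomes), auc(static, outcomes))
```

A higher AUC for the dynamic scores would mirror the study's finding that dynamic measures discriminated aggressive from non-aggressive inpatients more accurately over these windows.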

  4. Assessment of dimensional accuracy of preadjusted metal injection molding orthodontic brackets

    PubMed Central

    Alavi, Shiva; Tajmirriahi, Farnaz

    2016-01-01

    Background: The aim of this study is to evaluate the dimensional accuracy of McLaughlin, Bennett, and Trevisi (MBT) brackets manufactured by two different companies (American Orthodontics and Ortho Organizers) and determine variations in incorporation of values in relation to tip and torque in these products. Materials and Methods: In the present analytical/descriptive study, 64 maxillary right central brackets manufactured by two companies (American Orthodontics and Ortho Organizers) were selected randomly and evaluated for the accuracy of the values in relation to torque and angulation presented by the manufacturers. They were placed in a video measuring machine using special revolvers under them and were positioned in a manner so that the light beams would be directed on the floor of the slot without the slot walls being seen. Then, the software program of the same machine was used to determine the values of each bracket type. The means of measurements were determined for each sample and were analyzed with independent t-test and one-sample t-test. Results: Based on the confidence interval, it can be concluded that at 95% probability, the means of tip angles of maxillary right central brackets of these two brands were 4.1–4.3° and the torque angles were 16.39–16.72°. The tips in these samples were at a range of 3.33–4.98°, and the torque was at a range of 15.22–18.48°. Conclusion: In the present study, there were no significant differences in the angulation incorporated into the brackets from the two companies; however, they were significantly different from the tip values for the MBT prescription. In relation to torque, the American Orthodontics brackets also exhibited significant differences from the reported value of 17°. PMID:27857770

  5. Human Performance Assessments when Using Augmented Reality for Navigation

    DTIC Science & Technology

    2006-06-01

    Human performance executing search and rescue type of navigation is one area that can benefit from augmented reality technology when the proper...landmarks. We briefly report on an experiment that demonstrated the benefits of augmented reality in a search and rescue task. Specifically, 120...participants, equally divided by gender, were tested in speed and accuracy using augmented reality in a search and rescue task. Accuracy performance was

  6. Performance-based assessment of reconstructed images

    SciTech Connect

    Hanson, Kenneth

    2009-01-01

    During the early 90s, I engaged in a productive and enjoyable collaboration with Robert Wagner and his colleague, Kyle Myers. We explored the ramifications of the principle that the quality of an image should be assessed on the basis of how well it facilitates the performance of appropriate visual tasks. We applied this principle to algorithms used to reconstruct scenes from incomplete and/or noisy projection data. For binary visual tasks, we used both the conventional disk detection and a new challenging task, inspired by the Rayleigh resolution criterion, of deciding whether an object was a blurred version of two dots or a bar. The results of human and machine observer tests were summarized with the detectability index based on the area under the ROC curve. We investigated a variety of reconstruction algorithms, including ART, with and without a nonnegativity constraint, and the MEMSYS3 algorithm. We concluded that the performance of the Rayleigh task was optimized when the strength of the prior was near MEMSYS's default 'classic' value for both human and machine observers. A notable result was that the most-often-used metric of rms error in the reconstruction was not necessarily indicative of the value of a reconstructed image for the purpose of performing visual tasks.
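
The detectability index mentioned is tied to the area under the ROC curve; under the common equal-variance Gaussian observer assumption (an assumption made here for illustration, not a detail from the paper), the conversion is d_A = √2 Φ⁻¹(AUC):

```python
from statistics import NormalDist

def detectability_index(auc):
    """Detectability index implied by the ROC area under an
    equal-variance Gaussian observer model:
    AUC = Phi(d_A / sqrt(2))  =>  d_A = sqrt(2) * Phi^{-1}(AUC)."""
    return 2.0 ** 0.5 * NormalDist().inv_cdf(auc)

print(detectability_index(0.5))    # 0.0 -- chance performance
print(detectability_index(0.921))  # roughly 2, a well-detectable signal
```

With this summary, observer performance on the disk-detection and Rayleigh tasks can be compared across reconstruction algorithms on a single scale.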

  7. Using urbanization profiles to assess screening performance.

    PubMed

    Boon, Mathilde E; Kok, Lambrecht P

    2004-04-01

    The large Dutch data sets acquired as a result of population-based cervical smear screening programs can be further exploited to obtain an urbanization-weighted score to gain insight into the quality of the performance of the individual cytology laboratories. Based on the first four digits of the postal code of the screenees, the data are stratified according to urbanization. Urb 1 corresponds to (semi)rural, which includes villages and small townships with less than 20,000 inhabitants; Urb 2, to towns with between 20,000 and 250,000 inhabitants; and Urb 3, to big cities, in this case, The Hague. From the postal code data of the screenees, the urbanization profiles of the laboratories can be calculated. The urbanization degree proved to have a substantial effect on the cytologic scores in the four laboratories. The number of expected, urbanization-weighted patient cases is calculated. Accordingly, the laboratories could be compared with respect to performance. We conclude that laboratories in our screening program were quite similar in performance for the cytologic diagnosis leading to referral to the hospital, with little difference between the actual and the expected, urbanization-weighted number of cases. It is evident that the equation for calculating the expected scores for S5-S9 is relevant not only for controlling the quality of care provided by laboratories and regions but also for the quality of these assessments.
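
The expected, urbanization-weighted number of patient cases can be sketched as a profile-weighted sum; the stratum counts and referral rates below are hypothetical figures, not the Dutch program's data:

```python
def expected_cases(profile, referral_rates):
    """Urbanization-weighted expected case count for one laboratory:
    smears screened per stratum (Urb 1..3) times the pooled referral
    rate observed in that stratum across all laboratories."""
    return sum(n * r for n, r in zip(profile, referral_rates))

# Hypothetical stratum counts and pooled rates.
profile = [12000, 30000, 8000]  # smears screened in Urb 1, Urb 2, Urb 3
rates = [0.004, 0.005, 0.007]   # referrals per smear, per stratum
expected = expected_cases(profile, rates)
# A laboratory's observed referral count is then compared with `expected`;
# a small gap indicates performance in line with its urbanization mix.
```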

  8. Mass Evolution of Mediterranean, Black, Red, and Caspian Seas from GRACE and Altimetry: Accuracy Assessment and Solution Calibration

    NASA Technical Reports Server (NTRS)

    Loomis, B. D.; Luthcke, S. B.

    2016-01-01

    We present new measurements of mass evolution for the Mediterranean, Black, Red, and Caspian Seas as determined by the NASA Goddard Space Flight Center (GSFC) GRACE time-variable global gravity mascon solutions. These new solutions are compared to sea surface altimetry measurements of sea level anomalies with steric corrections applied. To assess their accuracy, the GRACE and altimetry-derived solutions are applied to the set of forward models used by GSFC for processing the GRACE Level-1B datasets, with the resulting inter-satellite range acceleration residuals providing a useful metric for analyzing solution quality.

  9. Springback Control in Industrial Bending Operations: Assessing the Accuracy of Three Commercial FEA Codes

    NASA Astrophysics Data System (ADS)

    Welo, Torgeir; Granly, Bjørg M.; Elverum, Christer; Søvik, Odd P.; Sørbø, Steinar

    2011-05-01

    Over the past two decades, a quantum leap has been made in FE technology for metal forming applications, including methods, algorithms, models and hardware capabilities. A myriad of research articles reports on methodologies that provide excellent capabilities in reproducing springback obtained from physical experiments. However, it is felt that we are not yet to the point where current modeling practice provides satisfactory value to tool designers and manufacturing engineers, particularly when the results have to be available before the first piece of tool steel has been cut; the main reasons being lack of accuracy in predicting elastic springback. The main objective of the present work is to validate springback capabilities using a strategy that integrates industrial tool simulation practice with carefully controlled physical experiments conducted in an academic setting. An industry-like (rotary) draw bending machine has been built and equipped with advanced measurement capabilities. Extruded rectangular, hollow aluminum alloy AA6060 sections were heat treated to two different tempers to produce a range of material properties prior to forming into two different bending angles. The selected set-up represents a challenging benchmark due to tight-radius bending and complex contact conditions, meaning that elastic springback is resulting from interaction effects between excessive local cross-sectional distortions and global bending mechanisms. The material properties were obtained by tensile testing, curve-fitting data to a conventional isotropic Ludwik-type material model. The bending process was modeled in three different commercial FE codes following best practice, including LS-Dyna, Stampack and Abaqus (explicit). The springback analyses were done prior to bending tests as would be done in an industrial tool design process. After having completed the bending tests and carefully measured the released bend angle for the different combinations, the results were

  10. Assessing liner performance using on-farm milk meters.

    PubMed

    Penry, J F; Leonardi, S; Upton, J; Thompson, P D; Reinemann, D J

    2016-08-01

    The primary objective of this study was to quantify and compare the interactive effects of liner compression, milking vacuum level, and pulsation settings on average milk flow rates for liners representing the range of liner compression of commercial liners. A secondary objective was to evaluate a methodology for assessing liner performance that can be applied on commercial dairy farms. Eight different liner types were assessed using 9 different combinations of milking system vacuum and pulsation settings applied to a herd of 80 cows, with vacuum and pulsation conditions changed daily for 36 d using a central composite experimental design. Liner response surfaces were created for explanatory variables milking system vacuum (Vsystem) and pulsator ratio (PR) and response variable average milk flow rate (AMF=total yield/total cups-on time) expressed as a fraction of the within-cow average flow rate for all treatments (average milk flow rate fraction, AMFf). Response surfaces were also created for between-liner comparisons for standardized conditions of claw vacuum and milk ratio (fraction of pulsation cycle during which milk is flowing). The highest AMFf was observed at the highest levels of Vsystem, PR, and overpressure. All liners showed an increase in AMF as milking conditions were changed from low to high standardized conditions of claw vacuum and milk ratio. Differences in AMF between liners were smallest at the most gentle milking conditions (low Vsystem and low milk ratio), and these between-liner differences in AMF increased as liner overpressure increased. Differences in vacuum drop between Vsystem and claw vacuum were noted depending on the liner venting system, with short milk tube vented liners having a greater vacuum drop than mouthpiece chamber vented liners.
The accuracy of liner performance assessment in commercial parlors fitted with milk meters can be improved by using a central composite experimental design with a repeated center point treatment
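
The 9 treatment combinations are consistent with a two-factor central composite design (4 factorial corners, 4 axial "star" points, and a center point). As an illustration of that design, not the authors' software, the coded points can be generated as:

```python
from itertools import product

def central_composite(k, alpha):
    """Coded design points for a k-factor central composite design:
    2^k factorial corners at +/-1, 2k axial points at +/-alpha on each
    axis, and a single center point."""
    corners = [list(p) for p in product((-1.0, 1.0), repeat=k)]
    axial = []
    for i in range(k):
        for sign in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = sign
            axial.append(pt)
    return corners + axial + [[0.0] * k]

# Two factors (Vsystem and PR) give 4 + 4 + 1 = 9 treatment combinations.
points = central_composite(2, alpha=2.0 ** 0.5)
print(len(points))
```

Each coded coordinate is then mapped onto the physical range of vacuum and pulsator-ratio settings; the repeated center point mentioned above supports the pure-error estimate for the response surface.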

  11. The Impact of Performance Level Misclassification on the Accuracy and Precision of Percent at Performance Level Measures

    ERIC Educational Resources Information Center

    Betebenner, Damian W.; Shang, Yi; Xiang, Yun; Zhao, Yan; Yue, Xiaohui

    2008-01-01

    No Child Left Behind (NCLB) performance mandates, embedded within state accountability systems, focus school AYP (adequate yearly progress) compliance squarely on the percentage of students at or above proficient. The singular importance of this quantity for decision-making purposes has initiated extensive research into percent proficient as a…

  12. Disease severity estimates - effects of rater accuracy and assessments methods for comparing treatments

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Assessment of disease is fundamental to the discipline of plant pathology, and estimates of severity are often made visually. However, it is established that visual estimates can be inaccurate and unreliable. In this study estimates of Septoria leaf blotch on leaves of winter wheat from non-treated ...

  13. 12 CFR 630.5 - Accuracy of reports and assessment of internal control over financial reporting.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... information is true, accurate, and complete to the best of signatories' knowledge and belief. (d) Management... reporting for the System-wide report to investors. The assessment must be conducted during the reporting... CREDIT SYSTEM DISCLOSURE TO INVESTORS IN SYSTEMWIDE AND CONSOLIDATED BANK DEBT OBLIGATIONS OF THE...

  14. Do Students Know What They Know? Exploring the Accuracy of Students' Self-Assessments

    ERIC Educational Resources Information Center

    Lindsey, Beth A.; Nagel, Megan L.

    2015-01-01

    We have conducted an investigation into how well students in introductory science classes (both physics and chemistry) are able to predict which questions they will or will not be able to answer correctly on an upcoming assessment. An examination of the data at the level of students' overall scores reveals results consistent with the…

  15. Assessing posttraumatic stress in military service members: improving efficiency and accuracy.

    PubMed

    Fissette, Caitlin L; Snyder, Douglas K; Balderrama-Durbin, Christina; Balsis, Steve; Cigrang, Jeffrey; Talcott, G Wayne; Tatum, JoLyn; Baker, Monty; Cassidy, Daniel; Sonnek, Scott; Heyman, Richard E; Smith Slep, Amy M

    2014-03-01

    Posttraumatic stress disorder (PTSD) is assessed across many different populations and assessment contexts. However, measures of PTSD symptomatology often are not tailored to meet the needs and demands of these different populations and settings. In order to develop population- and context-specific measures of PTSD it is useful first to examine the item-level functioning of existing assessment methods. One such assessment measure is the 17-item PTSD Checklist-Military version (PCL-M; Weathers, Litz, Herman, Huska, & Keane, 1993). Although the PCL-M is widely used in both military and veteran health-care settings, it is limited by interpretations based on aggregate scores that ignore variability in item endorsement rates and relatedness to PTSD. Based on item response theory, this study conducted 2-parameter logistic analyses of the PCL-M in a sample of 196 service members returning from a yearlong, high-risk deployment to Iraq. Results confirmed substantial variability across items both in terms of their relatedness to PTSD and their likelihood of endorsement at any given level of PTSD. The test information curve for the full 17-item PCL-M peaked sharply at a value of θ = 0.71, reflecting greatest information at approximately the 76th percentile level of underlying PTSD symptom levels in this sample. Implications of findings are discussed as they relate to identifying more efficient, accurate subsets of items tailored to military service members as well as other specific populations and evaluation contexts.

  16. Assessment of the accuracy of plasma shape reconstruction by the Cauchy condition surface method in JT-60SA

    SciTech Connect

    Miyata, Y.; Suzuki, T.; Takechi, M.; Urano, H.; Ide, S.

    2015-07-15

    For the purpose of stable plasma equilibrium control and detailed analysis, it is essential to reconstruct an accurate plasma boundary on the poloidal cross section in tokamak devices. The Cauchy condition surface (CCS) method is a numerical approach for calculating the spatial distribution of the magnetic flux outside a hypothetical surface and reconstructing the plasma boundary from the magnetic measurements located outside the plasma. The accuracy of the plasma shape reconstruction has been assessed by comparing the CCS method and an equilibrium calculation in JT-60SA with a high elongation and triangularity of plasma shape. The CCS, on which both Dirichlet and Neumann conditions are unknown, is defined as a hypothetical surface located inside the real plasma region. The accuracy of the plasma shape reconstruction is sensitive to the CCS free parameters such as the number of unknown parameters and the shape in JT-60SA. It is found that the optimum number of unknown parameters and the size of the CCS that minimizes errors in the reconstructed plasma shape are in proportion to the plasma size. Furthermore, it is shown that the accuracy of the plasma shape reconstruction is greatly improved using the optimum number of unknown parameters and shape of the CCS, and the reachable reconstruction errors in plasma shape and locations of strike points are within the target ranges in JT-60SA.

  17. Assessment of the accuracy of plasma shape reconstruction by the Cauchy condition surface method in JT-60SA

    NASA Astrophysics Data System (ADS)

    Miyata, Y.; Suzuki, T.; Takechi, M.; Urano, H.; Ide, S.

    2015-07-01

    For the purpose of stable plasma equilibrium control and detailed analysis, it is essential to reconstruct an accurate plasma boundary on the poloidal cross section in tokamak devices. The Cauchy condition surface (CCS) method is a numerical approach for calculating the spatial distribution of the magnetic flux outside a hypothetical surface and reconstructing the plasma boundary from the magnetic measurements located outside the plasma. The accuracy of the plasma shape reconstruction has been assessed by comparing the CCS method and an equilibrium calculation in JT-60SA with a high elongation and triangularity of plasma shape. The CCS, on which both Dirichlet and Neumann conditions are unknown, is defined as a hypothetical surface located inside the real plasma region. The accuracy of the plasma shape reconstruction is sensitive to the CCS free parameters such as the number of unknown parameters and the shape in JT-60SA. It is found that the optimum number of unknown parameters and the size of the CCS that minimizes errors in the reconstructed plasma shape are in proportion to the plasma size. Furthermore, it is shown that the accuracy of the plasma shape reconstruction is greatly improved using the optimum number of unknown parameters and shape of the CCS, and the reachable reconstruction errors in plasma shape and locations of strike points are within the target ranges in JT-60SA.

  18. Assessment of the accuracy of plasma shape reconstruction by the Cauchy condition surface method in JT-60SA.

    PubMed

    Miyata, Y; Suzuki, T; Takechi, M; Urano, H; Ide, S

    2015-07-01

    For the purpose of stable plasma equilibrium control and detailed analysis, it is essential to reconstruct an accurate plasma boundary on the poloidal cross section in tokamak devices. The Cauchy condition surface (CCS) method is a numerical approach for calculating the spatial distribution of the magnetic flux outside a hypothetical surface and reconstructing the plasma boundary from the magnetic measurements located outside the plasma. The accuracy of the plasma shape reconstruction has been assessed by comparing the CCS method and an equilibrium calculation in JT-60SA with a high elongation and triangularity of plasma shape. The CCS, on which both Dirichlet and Neumann conditions are unknown, is defined as a hypothetical surface located inside the real plasma region. The accuracy of the plasma shape reconstruction is sensitive to the CCS free parameters such as the number of unknown parameters and the shape in JT-60SA. It is found that the optimum number of unknown parameters and the size of the CCS that minimizes errors in the reconstructed plasma shape are in proportion to the plasma size. Furthermore, it is shown that the accuracy of the plasma shape reconstruction is greatly improved using the optimum number of unknown parameters and shape of the CCS, and the reachable reconstruction errors in plasma shape and locations of strike points are within the target ranges in JT-60SA.

  19. Assessment of Completeness and Positional Accuracy of Linear Features in Volunteered Geographic Information (vgi)

    NASA Astrophysics Data System (ADS)

    Eshghi, M.; Alesheikh, A. A.

    2015-12-01

    Recent advances in spatial data collection technologies and online services have dramatically increased the contribution of ordinary people to producing, sharing, and using geographic information. The collection of spatial data by citizens, as well as its dissemination on the internet, has led to a huge source of spatial data termed Volunteered Geographic Information (VGI) by Mike Goodchild. Although VGI has produced previously unavailable data assets and enriched existing ones, its quality can be highly variable and challengeable. This presents several challenges to potential end users who are concerned about the validation and quality assurance of the collected data. Almost all existing research on identifying accurate VGI data is based on comparing the VGI data with accurate official data; in cases where there is no access to correct data, an alternative way to determine the quality of VGI data is therefore essential. In this paper we attempt to develop a useful method to reach this goal. In this process, the positional accuracy of linear features in the OSM data of Tehran, Iran has been analyzed.

  20. Accuracy assessment of building point clouds automatically generated from iphone images

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R.

    2014-06-01

    Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as an input. We register such automatically generated point cloud on a TLS point cloud of the same object to discuss accuracy, advantages and limitations of the iPhone generated point clouds. For the chosen example showcase, we have classified 1.23% of the iPhone point cloud points as outliers, and calculated the mean of the point to point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. Mean (μ) and standard deviation (σ) of roughness histograms are calculated as (μ1 = 0.44 m, σ1 = 0.071 m) and (μ2 = 0.025 m, σ2 = 0.037 m) for the iPhone and TLS point clouds, respectively. Our experimental results indicate possible usage of the proposed automatic 3D model generation framework for 3D urban map updating, fusion and detail enhancing, quick and real-time change detection purposes. However, further insights should be obtained first on the circumstances that are needed to guarantee a successful point cloud generation from smartphone images.
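
A minimal sketch of the point-to-point accuracy assessment (brute-force nearest neighbours on toy clouds, not the article's registration pipeline; at realistic cloud sizes a KD-tree would replace the inner loop):

```python
import math

def nearest_distances(cloud, reference):
    """Distance from each point in `cloud` to its nearest neighbour
    in `reference` (brute force, fine for toy examples)."""
    return [min(math.dist(p, q) for q in reference) for p in cloud]

def accuracy_summary(cloud, reference, outlier_threshold):
    """Mean point-to-point distance after discarding outliers, plus the
    fraction of points classified as outliers."""
    d = nearest_distances(cloud, reference)
    inliers = [x for x in d if x <= outlier_threshold]
    outlier_fraction = 1.0 - len(inliers) / len(d)
    return sum(inliers) / len(inliers), outlier_fraction

# Toy clouds standing in for the registered iPhone and TLS data.
tls = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
iphone = [(0.1, 0, 0), (1, 0.05, 0), (0, 1.1, 0), (5, 5, 5)]
mean_d, outliers = accuracy_summary(iphone, tls, outlier_threshold=0.5)
```

The same per-point distance field, computed over local neighbourhoods, underlies the roughness histograms used to separate sensor noise from genuine geometric error.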

  1. An extended dynamometer setup to improve the accuracy of knee joint moment assessment.

    PubMed

    Van Campen, Anke; De Groote, Friedl; Jonkers, Ilse; De Schutter, Joris

    2013-05-01

    This paper analyzes an extended dynamometry setup that aims at obtaining accurate knee joint moments. The main problem of the standard setup is the misalignment of the joint and the dynamometer axes of rotation due to nonrigid fixation, and the determination of the joint axis of rotation by palpation. The proposed approach 1) combines 6-D registration of the contact forces with 3-D motion capturing (which is a contribution to the design of the setup); 2) includes a functional axis of rotation in the model to describe the knee joint (which is a contribution to the modeling); and 3) calculates joint moments by a model-based 3-D inverse dynamic analysis. Through a sensitivity analysis, the influence of the accuracy of all model parameters is evaluated. Dynamics resulting from the extended setup are quantified, and are compared to those provided by the dynamometer. Maximal differences between the 3-D joint moment resulting from the inverse dynamics and measured by the dynamometer were 16.4 N·m (16.9%) isokinetically and 18.3 N·m (21.6%) isometrically. The calculated moment is most sensitive to the orientation and location of the axis of rotation. In conclusion, more accurate experimental joint moments are obtained using a model-based 3-D inverse dynamic approach that includes a good estimate of the pose of the joint axis.

  2. Assessment of the accuracy of density functional theory for first principles simulations of water

    NASA Astrophysics Data System (ADS)

    Grossman, J. C.; Schwegler, E.; Draeger, E.; Gygi, F.; Galli, G.

    2004-03-01

    We present a series of Car-Parrinello (CP) molecular dynamics simulations in order to better understand the accuracy of density functional theory for the calculation of the properties of water [1]. Through 10 separate ab initio simulations, each for 20 ps of "production" time, a number of approximations are tested by varying the density functional employed, the fictitious electron mass, μ, in the CP Lagrangian, the system size, and the ionic mass, M (we considered both H_2O and D_2O). We present the impact of these approximations on properties such as the radial distribution function [g(r)], structure factor [S(k)], diffusion coefficient and dipole moment. Our results show that structural properties may artificially depend on μ, and that in the case of an accurate description of the electronic ground state, and in the absence of proton quantum effects, we obtained an oxygen-oxygen correlation function that is over-structured compared to experiment, and a diffusion coefficient which is approximately 10 times smaller. ^1 J. C. Grossman et al., J. Chem. Phys. (in press, 2004).
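
A sketch of how a radial distribution function such as the oxygen-oxygen g(r) is accumulated from one snapshot of a cubic periodic box; this is a simplified illustration of the standard histogram estimator, not the simulation code used in the study:

```python
import math

def radial_distribution(positions, box, r_max, nbins):
    """g(r) from a single snapshot of particles in a cubic periodic box
    of side `box`, using the minimum-image convention."""
    n = len(positions)
    hist = [0] * nbins
    dr = r_max / nbins
    for i in range(n):
        for j in range(i + 1, n):
            d2 = 0.0
            for k in range(3):
                dx = positions[i][k] - positions[j][k]
                dx -= box * round(dx / box)  # minimum image
                d2 += dx * dx
            r = math.sqrt(d2)
            if r < r_max:
                hist[int(r / dr)] += 2  # each pair counts for both atoms
    rho = n / box ** 3  # number density
    g = []
    for b in range(nbins):
        # Normalize by the ideal-gas count in each spherical shell.
        shell = 4.0 / 3.0 * math.pi * (((b + 1) * dr) ** 3 - (b * dr) ** 3)
        g.append(hist[b] / (n * rho * shell))
    return g
```

In practice g(r) is averaged over many snapshots along the trajectory; "over-structured" means the first peak of the oxygen-oxygen curve is higher and narrower than the experimental one.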

  3. Economic assessment of animal health performance.

    PubMed

    Galligan, David

    2006-03-01

    This article describes the fundamental principles of economic assessment of animal health performance in the modern animal production environment. Animal production is a complex system of combined inputs (eg, physical inputs, managerial decision choices) into a production process that produces products valued by society. Perturbations to this system include disease processes and management inefficiencies. Economic valuation of these perturbations must account for the marginal changes in revenues and cost, the time dimensions of occurrence, the inherent risk characteristics of biologic systems, and any opportunity value that exists that allows management to intervene within the process and make economically influencing decisions. It has been recognized that improving animal health can play a major role in achieving efficient and economically rewarding production.

  4. Performing Probabilistic Risk Assessment Through RAVEN

    SciTech Connect

    A. Alfonsi; C. Rabiti; D. Mandelli; J. Cogliati; R. Kinoshita

    2013-06-01

    The Reactor Analysis and Virtual control ENviroment (RAVEN) code is a software tool that acts as the control logic driver and post-processing engine for the newly developed Thermal-Hydraulic code RELAP-7. RAVEN is now a multi-purpose Probabilistic Risk Assessment (PRA) software framework that allows dispatching different functionalities: (1) derive and actuate the control logic required to simulate the plant control system and operator actions (guided procedures), allowing on-line monitoring/controlling in the Phase Space; (2) perform both Monte-Carlo sampling of randomly distributed events and Dynamic Event Tree based analysis; and (3) facilitate the input/output handling through a Graphical User Interface (GUI) and a post-processing data mining module.

  5. The Accuracy of Intelligence Assessment: Bias, Perception, and Judgement in Analysis and Decision

    DTIC Science & Technology

    1993-03-10

    practical intelligence ethic--not a code of conduct but an ethical way of thinking that forces analysts and decision-makers to ask the right...incorporated it into their varying viewpoints and turned it often into competitive conclusions. Some embraced, some acquiesced, some ignored, some rejected...dilemma. Intelligence officers and decision-makers compete for viewpoints. Intelligence assessments are always potentially competitive decisions. This

  6. The other 90% of the protein: Assessment beyond the Cαs for CASP8 template-based and high-accuracy models

    PubMed Central

    Keedy, Daniel A.; Williams, Christopher J.; Headd, Jeffrey J.; Arendall, W. Bryan; Chen, Vincent B.; Kapral, Gary J.; Gillespie, Robert A.; Block, Jeremy N.; Zemla, Adam; Richardson, David C.; Richardson, Jane S.

    2010-01-01

    For template-based modeling in the CASP8 Critical Assessment of Techniques for Protein Structure Prediction, this work develops and applies six new full-model metrics. They are designed to complement and add value to the traditional template-based assessment by GDT (Global Distance Test) and related scores (based on multiple superpositions of Cα atoms between target structure and predictions labeled “model 1”). The new metrics evaluate each predictor group on each target, using all atoms of their best model with above-average GDT. Two metrics evaluate how “protein-like” the predicted model is: the MolProbity score used for validating experimental structures, and a mainchain reality score using all-atom steric clashes, bond length and angle outliers, and backbone dihedrals. Four other new metrics evaluate match of model to target for mainchain and sidechain hydrogen bonds, sidechain end positioning, and sidechain rotamers. Group-average Z-score across the six full-model measures is averaged with group-average GDT Z-score to produce the overall ranking for full-model, high-accuracy performance. Separate assessments are reported for specific aspects of predictor-group performance, such as robustness of approximately correct template or fold identification, and self-scoring ability at identifying the best of their models. Fold identification is distinct from but correlated with group-average GDT Z-score if target difficulty is taken into account, while self-scoring is done best by servers and is uncorrelated with GDT performance. Outstanding individual models on specific targets are identified and discussed. Predictor groups excelled at different aspects, highlighting the diversity of current methodologies. However, good full-model scores correlate robustly with high Cα accuracy. PMID:19731372

  7. Bone QUS measurement performed under loading condition, a more accurate ultrasound method for osteoporosis diagnosis.

    PubMed

    Liu, Chengrui; Niu, Haijun; Fan, Yubo; Li, Deyu

    2012-10-01

    Osteoporosis is a worldwide health problem with enormous social and economic impact. The quantitative ultrasound (QUS) method provides comprehensive information on bone mass, microstructure, and the mechanical properties of bone, and cheap, safe, and portable ultrasound equipment is well suited to public health monitoring. QUS measurements have normally been performed on bone specimens without mechanical loading, but human bones are subjected to loading during routine daily activities, and physical loading changes bone microstructure and mechanical properties. We hypothesized that bone QUS parameters measured under loading differ from those measured without loading, because loading alters the microstructure of the bone. Furthermore, the loading-induced microstructure change in osteoporotic bone may be larger than that in healthy bone. Given the strong relationship between bone microstructure and QUS parameters, the QUS parameters of osteoporotic bone may therefore change more than those of healthy bone, so osteoporosis may be detected more effectively by combining the QUS method with mechanical loading.

  8. Assessing the accuracy of approximate treatments of ion hydration based on primitive quasichemical theory

    NASA Astrophysics Data System (ADS)

    Roux, Benoît; Yu, Haibo

    2010-06-01

    Quasichemical theory (QCT) provides a framework that can be used to partition the influence of the solvent surrounding an ion into near and distant contributions. Within QCT, the solvation properties of the ion are expressed as a sum of configurational integrals comprising only the ion and a small number of solvent molecules. QCT adopts a particularly simple form if it is assumed that the clusters undergo only small thermal fluctuations around a well-defined energy minimum and are affected exclusively in a mean-field sense by the surrounding bulk solvent. The fluctuations can then be integrated out via a simple vibrational analysis, leading to a closed-form expression for the solvation free energy of the ion. This constitutes the primitive form of quasichemical theory (pQCT), which is an approximate mathematical formulation aimed at reproducing the results from the full many-body configurational averages of statistical mechanics. While the results from pQCT from previous applications are reasonable, the accuracy of the approach has not been fully characterized and its range of validity remains unclear. Here, a direct test of pQCT for a set of ion models is carried out by comparing with the results of free energy simulations with explicit solvent. The influence of the distant surrounding bulk on the cluster comprising the ion and the nearest solvent molecule is treated both with a continuum dielectric approximation and with free energy perturbation molecular dynamics simulations with explicit solvent. The analysis shows that pQCT can provide an accurate framework in the case of a small cation such as Li+. However, the approximation encounters increasing difficulties when applied to larger cations such as Na+, and particularly for K+. This suggests that results from pQCT should be interpreted with caution when comparing ions of different sizes.

  9. Accuracy and feasibility of video analysis for assessing hamstring flexibility and validity of the sit-and-reach test.

    PubMed

    Mier, Constance M

    2011-12-01

    The accuracy of video analysis of the passive straight-leg raise test (PSLR) and the validity of the sit-and-reach test (SR) were tested in 60 men and women. Computer software measured static hip-joint flexion accurately. High within-session reliability of the PSLR was demonstrated (R > .97). Test-retest (separate days) reliability for SR was high in men (R = .97) and women (R = .98), and moderate for PSLR in men (R = .79) and women (R = .89). SR validity (PSLR as criterion) was higher in women (Day 1, r = .69; Day 2, r = .81) than men (Day 1, r = .64; Day 2, r = .66). In conclusion, video analysis is accurate and feasible for assessing static joint angles, PSLR and SR tests are very reliable methods for assessing flexibility, and the SR validity for hamstring flexibility was found to be moderate in women and low in men.

  10. E-Area Performance Assessment Interim Measures Assessment FY2005

    SciTech Connect

    Stallings, M

    2006-01-31

    After major changes to the limits for various disposal units of the E-Area Low Level Waste Facility (ELLWF) last year, no major changes have been made during FY2005. A Special Analysis was completed which removes the air pathway {sup 14}C limit from the Intermediate Level Vault (ILV). This analysis will allow the disposal of reactor moderator deionizers which previously had no pathway to disposal. Several studies have also been completed providing groundwater transport input for future special analyses. During the past year, since Slit Trenches No.1 and No.2 were nearing volumetric capacity, they were operationally closed under a preliminary closure analysis. This analysis was performed using as-disposed conditions and data and showed that concrete rubble from the demolition of 232-F was acceptable for disposal in the STs even though the latest special analysis for the STs had reduced the tritium limits so that the inventory in the rubble exceeded limits. A number of special studies are planned during the next years; perhaps the largest of these will be revision of the Performance Assessment (PA) for the ELLWF. The revision will be accomplished by incorporating special analyses performed since the last PA revision as well as revising analyses to include new data. Projected impacts on disposal limits of more recent studies have been estimated. No interim measures will be applied during this year. However, it is being recommended that tritium disposals to the Components-in-Grout (CIG) Trenches be suspended until a limited Special Analysis (SA) currently in progress is completed. This SA will give recommendations for optimum placement of tritiated D-Area tower waste. Further recommendations for tritiated waste placement in the CIG Trenches will be given in the upcoming PA revision.

  11. Effect of training, education, professional experience, and need for cognition on accuracy of exposure assessment decision-making.

    PubMed

    Vadali, Monika; Ramachandran, Gurumurthy; Banerjee, Sudipto

    2012-04-01

    Results are presented from a study that investigated the effect of characteristics of occupational hygienists relating to educational and professional experience and task-specific experience on the accuracy of occupational exposure judgments. A total of 49 occupational hygienists from six companies participated in the study and 22 tasks were evaluated. Participating companies provided monitoring data on specific tasks. Information on nine educational and professional experience determinants (e.g. educational background, years of occupational hygiene and exposure assessment experience, professional certifications, statistical training and experience, and the 'need for cognition (NFC)', which is a measure of an individual's motivation for thinking) and four task-specific determinants was also collected from each occupational hygienist. Hygienists had a wide range of educational and professional backgrounds for tasks across a range of industries with different workplace and task characteristics. The American Industrial Hygiene Association exposure assessment strategy was used to make exposure judgments on the probability of the 95th percentile of the underlying exposure distribution being located in one of four exposure categories relative to the occupational exposure limit. After reviewing all available job/task/chemical information, hygienists were asked to provide their judgment in probabilistic terms. Both qualitative (judgments without monitoring data) and quantitative judgments (judgments with monitoring data) were recorded. Ninety-three qualitative judgments and 2142 quantitative judgments were obtained. Data interpretation training, with simple rules of thumb for estimating the 95th percentiles of lognormal distributions, was provided to all hygienists. A data interpretation test (DIT) was also administered and judgments were elicited before and after training. General linear models and cumulative logit models were used to analyze the relationship between
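    One common rule of thumb for interpreting monitoring data under the lognormal exposure model is that the 95th percentile follows directly from the geometric mean (GM) and geometric standard deviation (GSD). A minimal sketch of that arithmetic (the abstract does not state which rules were taught, so the formula below is the standard one and the example values are invented, not the study's):

```python
def lognormal_p95(gm: float, gsd: float) -> float:
    """95th percentile of a lognormal exposure distribution:
    X_0.95 = GM * GSD**1.645, where 1.645 is the 95th percentile
    (z-score) of the standard normal distribution."""
    return gm * gsd ** 1.645

# Hypothetical task: GM = 0.5 mg/m^3, GSD = 2.5, OEL = 1.0 mg/m^3.
x95 = lognormal_p95(0.5, 2.5)
oel = 1.0
# An AIHA-style judgment compares X_0.95 to the OEL to select
# one of the exposure categories.
exceeds_oel = x95 > oel
```

Here the hypothetical task's 95th percentile (about 2.26 mg/m^3) exceeds the limit, so the hygienist would place the exposure in a high category despite the modest geometric mean.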

  12. Field assessments on the accuracy of spherical gauges in rainfall measurements

    NASA Astrophysics Data System (ADS)

    Chang, Mingteh; Harrison, Lee

    2005-02-01

    claims that spherical gauges are effective in reducing wind effects on rainfall measurements. The spherical gauges could greatly improve the accuracy of hydrologic simulations and the efficiency on the designs and management of water resources. They are suitable for large-scale applications.

  13. Validating the Accuracy of Reaction Time Assessment on Computer-Based Tablet Devices.

    PubMed

    Schatz, Philip; Ybarra, Vincent; Leitner, Donald

    2015-08-01

    Computer-based assessment has evolved to tablet-based devices. Despite the availability of tablets and "apps," there is limited research validating their use. We documented timing delays between stimulus presentation and (simulated) touch response on iOS devices (3rd- and 4th-generation Apple iPads) and Android devices (Kindle Fire, Google Nexus, Samsung Galaxy) at response intervals of 100, 250, 500, and 1,000 milliseconds (ms). Results showed significantly greater timing error on Google Nexus and Samsung tablets (81-97 ms), than Kindle Fire and Apple iPads (27-33 ms). Within Apple devices, iOS 7 obtained significantly lower timing error than iOS 6. Simple reaction time (RT) trials (250 ms) on tablet devices represent 12% to 40% error (30-100 ms), depending on the device, which decreases considerably for choice RT trials (3-5% error at 1,000 ms). Results raise implications for using the same device for serial clinical assessment of RT using tablets, as well as the need for calibration of software and hardware.
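    The percentages quoted above are simply the device's timing error taken as a fraction of the response interval: a 30 ms delay is 12% of a 250 ms simple-RT trial but only 3% of a 1,000 ms choice-RT trial. A quick check of that arithmetic:

```python
def relative_timing_error(error_ms: float, interval_ms: float) -> float:
    """Device timing error expressed as a fraction of the RT interval."""
    return error_ms / interval_ms

# 30-100 ms of error on a 250 ms simple-RT trial spans 12-40%.
low = relative_timing_error(30, 250)     # 0.12
high = relative_timing_error(100, 250)   # 0.40
# The same absolute error matters far less on a 1,000 ms choice-RT trial.
choice = relative_timing_error(30, 1000) # 0.03
```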

  14. Performance Assessment of Two GPS Receivers on Space Shuttle

    NASA Technical Reports Server (NTRS)

    Schroeder, Christine A.; Schutz, Bob E.

    1996-01-01

    Space Shuttle STS-69 was launched on September 7, 1995, carrying the Wake Shield Facility (WSF-02) among its payloads. The mission included two GPS receivers: a Collins 3M receiver onboard the Endeavour and an Osborne flight TurboRogue, known as the TurboStar, onboard the WSF-02. Two of the WSF-02 GPS Experiment objectives were to: (1) assess the ability to use GPS in a relative satellite positioning mode using the receivers on Endeavour and WSF-02; and (2) assess the performance of the receivers to support high precision orbit determination at the 400 km altitude. Three ground tests of the receivers were conducted in order to characterize the respective receivers. The analysis of the tests utilized the Double Differencing technique. A similar test in orbit was conducted during STS-69 while the WSF-02 was held by the Endeavour robot arm for a one hour period. In these tests, biases were observed in the double difference pseudorange measurements, implying that biases up to 140 m exist which do not cancel in double differencing. These biases appear to exist in the Collins receiver, but their effect can be mitigated by including measurement bias parameters to accommodate them in an estimation process. An additional test was conducted in which the orbit of the combined Endeavour/WSF-02 was determined independently with each receiver. These one hour arcs were based on forming double differences with 13 TurboRogue receivers in the global IGS network and estimating pseudorange biases for the Collins. Various analyses suggest the TurboStar overall orbit accuracy is about one to two meters for this period, based on double differenced phase residuals of 34 cm. These residuals indicate the level of unmodeled forces on Endeavour produced by gravitational and nongravitational effects. The rms differences between the two independently determined orbits are better than 10 meters, thereby demonstrating the accuracy of the Collins-determined orbit at this level as well as the

  15. Assessing the accuracy of tympanometric evaluation of external auditory canal volume: a scientific study using an ear canal model.

    PubMed

    Al-Hussaini, A; Owens, D; Tomkinson, A

    2011-12-01

    Tympanometric evaluation is routinely used as part of the complete otological examination. During tympanometric examination, middle ear pressure and ear canal volume are evaluated. Little is reported on the accuracy and precision with which tympanometry evaluates external ear canal volume. This study examines the capability of the tympanometer to accurately evaluate external auditory canal volume in both simple and partially obstructed ear canal models and assesses its suitability for studies examining the effectiveness of cerumenolytics. An ear canal model was designed using simple laboratory equipment, including a 5 ml calibrated clinical syringe (Becton Dickinson, Spain). The ear canal model was attached to the sensing probe of a Kamplex tympanometer (Interacoustics, Denmark). Three basic trials were undertaken: evaluation of the tympanometer in simple canal volume measurement, evaluation of the tympanometer in assessing canal volume with partial canal occlusion at different positions within the model, and evaluation of the tympanometer in assessing canal volume with varying degrees of canal occlusion. 1,290 individual test scenarios were completed over the three arms of the study. At volumes of 1.4 cm(3) or below, a perfect relationship was noted between the actual and tympanometric volumes in the simple model (Spearman's ρ = 1), with weakening agreement as canal volume increased. Bland-Altman plotting confirmed the accuracy of this agreement. In the wax substitute models, tympanometry was observed to have a close relationship (Spearman's ρ > 0.99) with the actual volume present, with worsening error above a volume of 1.4 cm(3). Bland-Altman plotting and precision calculations provided evidence of accuracy. Size and position of the wax substitute had no statistical effect on results [Wilcoxon rank-sum test (WRST) p > 0.99], nor did degree of partial obstruction (WRST p > 0.99). The Kamplex tympanometer

  16. Fire Severity Model Accuracy Using Short-term, Rapid Assessment versus Long-term, Anniversary Date Assessment

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Fires are common in rangelands and after a century of fire suppression, the potential exists for fires to burn with high intensity and severity. In addition, the ability of fires to affect long-term changes in rangelands is considerable and for this reason, assessing fire severity after a fire is cr...

  17. Accuracy Assessment for PPP by Comparing Various Online PPP Service Solutions with Bernese 5.2 Network Solution

    NASA Astrophysics Data System (ADS)

    Ozgur Uygur, Sureyya; Aydin, Cuneyt; Demir, Deniz Oz; Cetin, Seda; Dogan, Ugur

    2016-04-01

    The GNSS precise point positioning (PPP) technique is frequently used for geodetic applications such as monitoring of reference stations and estimation of tropospheric parameters. This technique uses undifferenced GNSS observations along with IGS products to reach a high level of positioning accuracy. The accuracy level depends on the GNSS data quality as well as the length of the observation duration and the quality of the external data products. It is possible to reach the desired positioning accuracy in the reference frame of the satellite coordinates by applying the PPP technique to data from a single receiver. The PPP technique is available to users through scientific GNSS processing software packages (such as GIPSY of NASA-JPL and the Bernese Processing Software of AIUB) as well as several online PPP services: Auto-GIPSY provided by JPL, California Institute of Technology; CSRS-PPP provided by Natural Resources Canada; GAPS provided by the University of New Brunswick; and Magic-PPP provided by GMV. In this study, we assess the accuracy of PPP by comparing the solutions from the online PPP services with Bernese 5.2 network solutions. Seven days (DoY 256-262 in 2015) of GNSS observations with 24-hour session durations on the CORS-TR network in Turkey, collected at a set of 14 stations, were processed in static mode using the above-mentioned PPP services. The averages of daily coordinates from the Bernese 5.2 static network solution tied to 12 IGS stations were taken as the true coordinates. Our results indicate that the distributions of the north, east, and up daily position differences are characterized by means and RMS of 1.9±0.5, 2.1±0.7, 4.7±2.1 mm for CSRS; 1.6±0.6, 1.4±0.8, 5.5±3.9 mm for Auto-GIPSY; 3.0±0.8, 3.0±1.2, 6.0±3.2 mm for Magic GNSS; and 2.1±1.3, 2.8±1.7, 5.0±2.3 mm for GAPS, with respect to the Bernese 5.2 network solution. Keywords: PPP, Online GNSS Service, Bernese, Accuracy
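    Accuracy figures of the form "1.9±0.5 mm" summarize, per coordinate component, the daily differences between a PPP solution and the reference network solution by their mean and RMS scatter about the mean. A minimal sketch of that summary (the station data below are invented for illustration):

```python
import statistics

def mean_and_scatter(diffs_mm):
    """Mean and RMS scatter about the mean (population standard
    deviation) of daily position differences, in mm, between a PPP
    solution and a reference network solution."""
    mean = statistics.fmean(diffs_mm)
    scatter = statistics.pstdev(diffs_mm, mu=mean)
    return mean, scatter

# Hypothetical north-component differences for one station, 7 days (mm):
north = [1.2, 2.4, 1.8, 2.1, 1.6, 2.3, 1.9]
m, s = mean_and_scatter(north)  # reported in the "m ± s" mm style
```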

  18. Accuracy Assessment of Digital Surface Models Based on WorldView-2 and ADS80 Stereo Remote Sensing Data

    PubMed Central

    Hobi, Martina L.; Ginzler, Christian

    2012-01-01

    Digital surface models (DSMs) are widely used in forest science to model the forest canopy. Stereo pairs of very high resolution satellite and digital aerial images are relatively new and their absolute accuracy for DSM generation is largely unknown. For an assessment of these input data, two DSMs based on a WorldView-2 stereo pair and an ADS80 DSM were generated with photogrammetric instruments. Rational polynomial coefficients (RPCs) define the orientation of the WorldView-2 satellite images, which can be enhanced with ground control points (GCPs). Thus two WorldView-2 DSMs were distinguished: a WorldView-2 RPCs-only DSM and a WorldView-2 GCP-enhanced RPCs DSM. The accuracy of the three DSMs was estimated with GPS measurements, manual stereo-measurements, and airborne laser scanning (ALS) data. With GCP-enhanced RPCs, the WorldView-2 image orientation could be optimised to a root mean square error (RMSE) of 0.56 m in planimetry and 0.32 m in height. This improvement in orientation allowed for a vertical median error of −0.24 m for the WorldView-2 GCP-enhanced RPCs DSM in flat terrain. Overall, the DSM based on ADS80 images showed the highest accuracy of the three models, with a median error of 0.08 m over bare ground. As the accuracy of a DSM varies with land cover, three classes were distinguished: herb and grass, forests, and artificial areas. The study suggested the ADS80 DSM to best model actual surface height in all three land cover classes, with median errors <1.1 m. The WorldView-2 GCP-enhanced RPCs model achieved good accuracy, too, with median errors of −0.43 m for the herb and grass vegetation and −0.26 m for artificial areas. Forested areas emerged as the most difficult land cover type for height modelling; still, with median errors of −1.85 m for the WorldView-2 GCP-enhanced RPCs model and −1.12 m for the ADS80 model, the input data sets evaluated here are quite promising for forest canopy modelling. PMID:22778645
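    DSM height accuracy of this kind is assessed point-wise against reference data (GPS, stereo measurements, or ALS) and then summarized; a median error is often preferred over RMSE because it is robust to the outliers that are common over forest canopy. A minimal sketch of both summaries, with invented values:

```python
import math
import statistics

def dsm_height_errors(dsm_heights, ref_heights):
    """Per-point DSM height errors against reference heights (m),
    summarized by RMSE and by the (outlier-robust) median error."""
    errors = [d - r for d, r in zip(dsm_heights, ref_heights)]
    rmse = math.sqrt(statistics.fmean(e * e for e in errors))
    return rmse, statistics.median(errors)

# Invented check points: the DSM sits slightly above the reference.
rmse, median_err = dsm_height_errors([10.1, 9.8, 10.3], [10.0, 10.0, 10.0])
```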

  19. Accuracy Assessment of a Complex Building 3d Model Reconstructed from Images Acquired with a Low-Cost Uas

    NASA Astrophysics Data System (ADS)

    Oniga, E.; Chirilă, C.; Stătescu, F.

    2017-02-01

    Nowadays, Unmanned Aerial Systems (UASs) are widely used for image acquisition in building 3D modelling, providing a high number of very-high-resolution images, or video sequences, in a very short time. Since low-cost UASs are preferred, the accuracy of a building 3D model created using these platforms must be evaluated. As a test case, the dean's office building of the Faculty of "Hydrotechnical Engineering, Geodesy and Environmental Engineering" of Iasi, Romania, was chosen; it is a complex-shaped building whose roof is formed of two hyperbolic paraboloids. Seven points were placed on the ground around the building, three of them used as GCPs and the remaining four as check points (CPs) for accuracy assessment. Additionally, the coordinates of 10 natural CPs representing the building's characteristic points were measured with a Leica TCR 405 total station. The building 3D model was created as a point cloud automatically generated from the digital images acquired with the low-cost UAS, using image matching algorithms and different software packages: 3DF Zephyr, Visual SfM, PhotoModeler Scanner, and Drone2Map for ArcGIS. Except for the PhotoModeler Scanner software, the interior and exterior orientation parameters were determined simultaneously by solving a self-calibrating bundle adjustment. Based on the UAS point clouds automatically generated by the above-mentioned software and on GNSS data, respectively, the parameters of the east-side hyperbolic paraboloid were calculated using the least squares method and a statistical blunder detection. Then, in order to assess the accuracy of the building 3D model, several comparisons were made for the facades and the roof with reference data considered to have minimal errors: a TLS mesh for the facades and a GNSS mesh for the roof. Finally, the front facade of the building was created in 3D based on its characteristic points using the PhotoModeler Scanner

  20. Accuracy Assessment of Three-dimensional Surface Reconstructions of In vivo Teeth from Cone-beam Computed Tomography

    PubMed Central

    Sang, Yan-Hui; Hu, Hong-Cheng; Lu, Song-He; Wu, Yu-Wei; Li, Wei-Ran; Tang, Zhi-Hui

    2016-01-01

    Background: The accuracy of three-dimensional (3D) reconstructions from cone-beam computed tomography (CBCT) has been particularly important in dentistry, which will affect the effectiveness of diagnosis, treatment plan, and outcome in clinical practice. The aims of this study were to assess the linear, volumetric, and geometric accuracy of 3D reconstructions from CBCT and to investigate the influence of voxel size and CBCT system on the reconstructions results. Methods: Fifty teeth from 18 orthodontic patients were assigned to three groups as NewTom VG 0.15 mm group (NewTom VG; voxel size: 0.15 mm; n = 17), NewTom VG 0.30 mm group (NewTom VG; voxel size: 0.30 mm; n = 16), and VATECH DCTPRO 0.30 mm group (VATECH DCTPRO; voxel size: 0.30 mm; n = 17). The 3D reconstruction models of the teeth were segmented from CBCT data manually using Mimics 18.0 (Materialise Dental, Leuven, Belgium), and the extracted teeth were scanned by 3Shape optical scanner (3Shape A/S, Denmark). Linear and volumetric deviations were separately assessed by comparing the length and volume of the 3D reconstruction model with physical measurement by paired t-test. Geometric deviations were assessed by the root mean square value of the imposed 3D reconstruction and optical models by one-sample t-test. To assess the influence of voxel size and CBCT system on 3D reconstruction, analysis of variance (ANOVA) was used (α = 0.05). Results: The linear, volumetric, and geometric deviations were −0.03 ± 0.48 mm, −5.4 ± 2.8%, and 0.117 ± 0.018 mm for NewTom VG 0.15 mm group; −0.45 ± 0.42 mm, −4.5 ± 3.4%, and 0.116 ± 0.014 mm for NewTom VG 0.30 mm group; and −0.93 ± 0.40 mm, −4.8 ± 5.1%, and 0.194 ± 0.117 mm for VATECH DCTPRO 0.30 mm group, respectively. There were statistically significant differences between groups in terms of linear measurement (P < 0.001), but no significant difference in terms of volumetric measurement (P = 0.774). No statistically significant difference were

  1. Assessment of relative accuracy in the determination of organic matter concentrations in aquatic systems

    USGS Publications Warehouse

    Aiken, G.; Kaplan, L.A.; Weishaar, J.

    2002-01-01

    Accurate determinations of total (TOC), dissolved (DOC) and particulate (POC) organic carbon concentrations are critical for understanding the geochemical, environmental, and ecological roles of aquatic organic matter. Of particular significance for the drinking water industry, TOC measurements are the basis for compliance with US EPA regulations. The results of an interlaboratory comparison designed to identify problems associated with the determination of organic matter concentrations in drinking water supplies are presented. The study involved 31 laboratories and a variety of commercially available analytical instruments. All participating laboratories performed well on samples of potassium hydrogen phthalate (KHP), a compound commonly used as a standard in carbon analysis. However, problems associated with the oxidation of difficult to oxidize compounds, such as dodecylbenzene sulfonic acid and caffeine, were noted. Humic substances posed fewer problems for analysts. Particulate organic matter (POM) in the form of polystyrene beads, freeze-dried bacteria and pulverized leaf material were the most difficult for all analysts, with a wide range of performances reported. The POM results indicate that the methods surveyed in this study are inappropriate for the accurate determination of POC and TOC concentration. Finally, several analysts had difficulty in efficiently separating inorganic carbon from KHP solutions, thereby biasing DOC results.

  2. Construct measurement quality improves predictive accuracy in violence risk assessment: an illustration using the personality assessment inventory.

    PubMed

    Hendry, Melissa C; Douglas, Kevin S; Winter, Elizabeth A; Edens, John F

    2013-01-01

    Much of the risk assessment literature has focused on the predictive validity of risk assessment tools. However, these tools often comprise a list of risk factors that are themselves complex constructs, and focusing on the quality of measurement of individual risk factors may improve the predictive validity of the tools. The present study illustrates this concern using the Antisocial Features and Aggression scales of the Personality Assessment Inventory (Morey, 1991). In a sample of 1,545 prison inmates and offenders undergoing treatment for substance abuse (85% male), we evaluated (a) the factorial validity of the ANT and AGG scales, (b) the utility of original ANT and AGG scales and newly derived ANT and AGG scales for predicting antisocial outcomes (recidivism and institutional infractions), and (c) whether items with a stronger relationship to the underlying constructs (higher factor loadings) were in turn more strongly related to antisocial outcomes. Confirmatory factor analyses (CFAs) indicated that ANT and AGG items were not structured optimally in these data in terms of correspondence to the subscale structure identified in the PAI manual. Exploratory factor analyses were conducted on a random split-half of the sample to derive optimized alternative factor structures, and cross-validated in the second split-half using CFA. Four-factor models emerged for both the ANT and AGG scales, and, as predicted, the size of item factor loadings was associated with the strength with which items were associated with institutional infractions and community recidivism. This suggests that the quality by which a construct is measured is associated with its predictive strength. Implications for risk assessment are discussed.

  3. Assessing laboratory performance in Trichinella ring trials.

    PubMed

    Petroff, David; Hasenclever, Dirk; Makrutzki, Gregor; Riehn, Katharina; Lücker, Ernst

    2014-08-01

    Trichinosis (Trichinellosis) is a zoonotic disease acquired by eating raw or not adequately processed pork or wild game infected with the larvae of the roundworm genus Trichinella. According to European regulations, animals susceptible to Trichinella have to be examined for infestation. To evaluate the performance of laboratories in Germany, inter-laboratory comparisons known as "ring trials" were introduced by the Federal Institute for Risk Assessment in 2004. The current method of analysis makes use of tolerance zones based on the number of larvae in the sample, but does not permit one to determine if a given lab can detect an infested sample reliably, as required by the quality assurance recommendations of the International Commission on Trichinellosis (ICT). A new way of analysing the ring trial data is presented here, which is based on Bayesian hierarchical models. The model implements the ICT requirement by providing an estimate for the probability that a given lab would fail to detect a sample containing, say, five larvae. When applied to the 87 labs that participated in Germany's 2009 ring trials, it turns out this probability is greater than 10% for 21 of them, although only 10 of these in fact returned a false negative result. Such a new method is required to abide by the ICT requirements and make ring trials effective.
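    As a much-simplified stand-in for the paper's Bayesian hierarchical model, a single lab's failure probability can be sketched with a beta-binomial update: a Beta(a, b) prior on the probability of failing to detect a five-larvae sample, updated with that lab's ring-trial outcomes. This ignores the hierarchy across labs, so it is illustrative only (the function name and example counts are invented):

```python
def posterior_failure_probability(failures: int, trials: int,
                                  a: float = 1.0, b: float = 1.0) -> float:
    """Posterior mean of the failure-to-detect probability under a
    Beta(a, b) prior, after observing `failures` misses in `trials`
    spiked samples (beta-binomial conjugate update)."""
    return (failures + a) / (trials + a + b)

# A lab that missed 1 of 8 spiked samples, uniform Beta(1, 1) prior:
p_fail = posterior_failure_probability(1, 8)
# An ICT-style criterion would flag the lab if p_fail > 0.10 --
# note this can flag labs even without an observed false negative.
flagged = p_fail > 0.10
```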

  4. An Accuracy Assessment of Automated Photogrammetric Techniques for 3D Modeling of Complex Interiors

    NASA Astrophysics Data System (ADS)

    Georgantas, A.; Brédif, M.; Pierrot-Desseilligny, M.

    2012-07-01

    This paper presents a comparison of automatic photogrammetric techniques to terrestrial laser scanning for 3D modelling of complex interior spaces. We evaluate the automated photogrammetric techniques not only in terms of their geometric quality compared to laser scanning but also in terms of monetary cost and acquisition and computational time. For this purpose we chose a modern building's stairway as the test site. APERO/MICMAC (©IGN), an open-source photogrammetric software suite, was used to produce the 3D photogrammetric point cloud, which was compared to the one acquired by a Leica ScanStation 2 laser scanner. After performing various qualitative and quantitative controls we present the advantages and disadvantages of each 3D modelling method applied in a complex interior of a modern building.

  5. Usefulness of the jump-and-reach test in assessment of vertical jump performance.

    PubMed

    Menzel, Hans-Joachim; Chagas, Mauro H; Szmuchrowski, Leszek A; Araujo, Silvia R; Campos, Carlos E; Giannetti, Marcus R

    2010-02-01

    The objective was to estimate the reliability and criterion-related validity of the Jump-and-Reach Test for the assessment of squat, countermovement, and drop jump performance of 32 male Brazilian professional volleyball players. Performance of squat, countermovement, and drop jumps with different dropping heights was assessed with both the Jump-and-Reach Test and the measurement of flight time, and the two methods were compared across jump trials. The very high reliability coefficients of both assessment methods and the lower correlation coefficients between scores on the assessments indicate a very high consistency of each method but only moderate covariation, which means that they partly measure different constructs. As a consequence, the Jump-and-Reach Test has good ecological validity in situations where reaching height during the flight phase is critical for performance (e.g., basketball and volleyball) but only limited accuracy for the assessment of vertical impulse production with different jump techniques and conditions.
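    For background, the flight-time method mentioned above converts flight time to jump height under the standard assumption of symmetric takeoff and landing; this sketch is illustrative, not the study's instrumentation code.

```python
# Flight-time method for vertical jump height, assuming projectile motion
# with equal takeoff and landing heights: h = g * t**2 / 8.

G = 9.81  # gravitational acceleration, m/s^2

def height_from_flight_time(t_flight):
    """Jump height in metres from flight time in seconds."""
    return G * t_flight ** 2 / 8.0

print(round(height_from_flight_time(0.60), 3))  # → 0.441
```

    The Jump-and-Reach Test instead scores the difference between standing reach and peak touch height, which helps explain why the two measures can rank the same athletes somewhat differently.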

  6. Assessing Student Performance for School Improvement.

    ERIC Educational Resources Information Center

    Arkley, Harriet; And Others

    This handbook exists to assist the staff of a Springfield, Illinois school district in implementing the district assessment program and to assist other local school units in developing and assessing an implementation program. Chapter 1 presents information on the philosophical and pragmatic background of comprehensive assessment programs, with…

  7. An Assessment of the Accuracy of Admittance and Coherence Estimates Using Synthetic Data

    NASA Astrophysics Data System (ADS)

    Crosby, A.

    2006-12-01

    The estimation of the effective elastic thickness of the lithosphere (T_e) using spectral relationships between gravity and topography has become a controversial topic in recent years. However, one area which has received relatively little attention is the bias in estimates of T_e and the internal loading fraction (F_2) which results from spectral leakage and noise when using the multi-tapered free-air admittance method. In this study, I use grids of synthetic data to assess the magnitude of that bias. I also assess the bias which occurs when T_e within other planets is estimated using the admittance between observed line-of-sight accelerations of orbiting satellites and topography. I find that leakage can cause the estimated admittance and coherence to be significantly in error, but only if the box in which they are estimated is too small. The definition of 'small' depends on the redness of the gravity spectrum. On the Earth, there is minimal error in the estimate of T_e if the admittance between surface gravity and topography is estimated within a box at least 3000 km wide. When the true T_e is less than 20 km and the true coherence is high, the errors in the estimate of T_e are mostly less than 5 km for all box sizes greater than 1000 km. On the other hand, when the true T_e is greater than 20 km and the box size is 1000 km, the best-fit T_e is likely to be at least 5-10 km less than the true T_e. Even when the true coherence is high, it is not possible to use the free-air admittance to distinguish between real and spurious small fractions of internal loading when the boxes are smaller than 2000 km in size. Furthermore, the trade-off between T_e and F_2 means that even small amounts of leakage can shift the best-fit values of T_e and F_2 by an appreciable amount when the true F_2 is greater than zero. Geological noise in the gravity is caused by subsurface loads, the flexural surface expression of which has been erased by erosion and deposition. I find that

  8. Constraining OCT with Knowledge of Device Design Enables High Accuracy Hemodynamic Assessment of Endovascular Implants

    PubMed Central

    Brown, Jonathan; Lopes, Augusto C.; Kunio, Mie; Kolachalama, Vijaya B.; Edelman, Elazer R.

    2016-01-01

    Background Stacking cross-sectional intravascular images permits three-dimensional rendering of endovascular implants, yet introduces between-frame uncertainties that limit characterization of device placement and the hemodynamic microenvironment. In a porcine coronary stent model, we demonstrate enhanced OCT reconstruction with preservation of between-frame features through fusion with angiography and a priori knowledge of stent design. Methods and Results Strut positions were extracted from sequential OCT frames. Reconstruction with standard interpolation generated discontinuous stent structures. By computationally constraining interpolation to known stent skeletons fitted to 3D ‘clouds’ of OCT-Angio-derived struts, implant anatomy was resolved, accurately rendering features from implant diameter and curvature (n = 1 vessel, r2 = 0.91, 0.90, respectively) to individual strut-wall configurations (average displacement error ~15 μm). This framework facilitated hemodynamic simulation (n = 1 vessel), showing the critical importance of accurate anatomic rendering in characterizing both quantitative and basic qualitative flow patterns. Discontinuities with standard approaches systematically introduced noise and bias, poorly capturing regional flow effects. In contrast, the enhanced method preserved multi-scale (local strut to regional stent) flow interactions, demonstrating the impact of regional contexts in defining the hemodynamic consequence of local deployment errors. Conclusion Fusion of planar angiography and knowledge of device design permits enhanced OCT image analysis of in situ tissue-device interactions. Given emerging interests in simulation-derived hemodynamic assessment as surrogate measures of biological risk, such fused modalities offer a new window into patient-specific implant environments. PMID:26906566

  9. Accuracy and Utility of Deformable Image Registration in 68Ga 4D PET/CT Assessment of Pulmonary Perfusion Changes During and After Lung Radiation Therapy

    SciTech Connect

    Hardcastle, Nicholas; Hofman, Michael S.; Hicks, Rodney J.; Callahan, Jason; Kron, Tomas; MacManus, Michael P.; Ball, David L.; Jackson, Price; Siva, Shankar

    2015-09-01

    Purpose: Measuring changes in lung perfusion resulting from radiation therapy dose requires registration of the functional imaging to the radiation therapy treatment planning scan. This study investigates registration accuracy and utility for positron emission tomography (PET)/computed tomography (CT) perfusion imaging in radiation therapy for non–small cell lung cancer. Methods: 68Ga 4-dimensional PET/CT ventilation-perfusion imaging was performed before, during, and after radiation therapy for 5 patients. Rigid registration and deformable image registration (DIR) using B-splines and Demons algorithms were performed with the CT data to obtain a deformation map between the functional images and planning CT. Contour propagation accuracy and correspondence of anatomic features were used to assess registration accuracy. The Wilcoxon signed-rank test was used to determine statistical significance. Changes in lung perfusion resulting from radiation therapy dose were calculated for each registration method for each patient and averaged over all patients. Results: With B-splines/Demons DIR, median distance to agreement between lung contours reduced modestly by 0.9/1.1 mm, 1.3/1.6 mm, and 1.3/1.6 mm for pretreatment, midtreatment, and posttreatment (P<.01 for all), and median Dice score between lung contours improved by 0.04/0.04, 0.05/0.05, and 0.05/0.05 for pretreatment, midtreatment, and posttreatment (P<.001 for all). Distance between anatomic features reduced with DIR by a median of 2.5 mm and 2.8 mm for the pretreatment and midtreatment time points, respectively (P=.001), and by 1.4 mm for posttreatment (P>.2). Poorer posttreatment results were likely caused by posttreatment pneumonitis and tumor regression. Up to 80% standardized uptake value loss in perfusion scans was observed. There was limited change in the loss in lung perfusion between registration methods; however, Demons resulted in larger interpatient variation compared with rigid and B-splines registration.

  10. Topographic accuracy assessment of bare earth lidar-derived unstructured meshes

    NASA Astrophysics Data System (ADS)

    Bilskie, Matthew V.; Hagen, Scott C.

    2013-02-01

    This study is focused on the integration of bare earth lidar (Light Detection and Ranging) data into unstructured (triangular) finite element meshes and the implications for simulating storm surge inundation using a shallow water equations model. A methodology is developed to compute the root mean square error (RMSE) and the 95th percentile of vertical elevation errors using four different interpolation methods (linear, inverse distance weighted, natural neighbor, and cell averaging) to resample bare earth lidar and lidar-derived digital elevation models (DEMs) onto unstructured meshes at different resolutions. The results are consolidated into a table of optimal interpolation methods that minimize the vertical elevation error of an unstructured mesh for a given mesh node density. The cell area averaging method was most accurate when DEM grid cells within 0.25 times the ratio of local element size to DEM cell size were averaged. The methodology is applied to simulate inundation extent and maximum water levels in southern Mississippi due to Hurricane Katrina, which illustrates that local changes in topography, such as adjusting element size and interpolation method, drastically alter simulated storm surge locally and non-locally. The methods and results presented have utility and implications for any modeling application that uses bare earth lidar.
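    The two error metrics named above can be sketched as follows; the check-point arrays are invented, and a real assessment would compare thousands of lidar points against mesh-interpolated elevations.

```python
# Vertical error summary for a mesh against lidar check points:
# RMSE plus the 95th percentile of absolute elevation error.
import numpy as np

def vertical_error_stats(z_lidar, z_mesh):
    """Return (RMSE, 95th percentile of |error|) in the input units."""
    err = np.asarray(z_mesh, float) - np.asarray(z_lidar, float)
    rmse = float(np.sqrt(np.mean(err ** 2)))
    p95 = float(np.percentile(np.abs(err), 95))
    return rmse, p95

z_lidar = [2.0, 2.5, 3.0, 3.5, 4.0]  # invented check-point elevations (m)
z_mesh = [2.1, 2.4, 3.2, 3.4, 4.3]   # invented mesh-interpolated values (m)
rmse, p95 = vertical_error_stats(z_lidar, z_mesh)
print(round(rmse, 3), round(p95, 3))  # → 0.179 0.28
```

    Repeating this for each interpolation method and mesh resolution yields the kind of lookup table the study reports.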

  11. A geostatistical methodology to assess the accuracy of unsaturated flow models

    SciTech Connect

    Smoot, J.L.; Williams, R.E.

    1996-04-01

    The Pacific Northwest National Laboratory (PNNL) has developed a Hydrologic Evaluation Methodology (HEM) to assist the U.S. Nuclear Regulatory Commission in evaluating the potential that infiltrating meteoric water will produce leachate at commercial low-level radioactive waste disposal sites. Two key issues are raised in the HEM: (1) evaluation of mathematical models that predict facility performance, and (2) estimation of the uncertainty associated with these mathematical model predictions. The technical objective of this research is to adapt geostatistical tools commonly used for model parameter estimation to the problem of estimating the spatial distribution of the dependent variable to be calculated by the model. To fulfill this objective, a database describing the spatiotemporal movement of water injected into unsaturated sediments at the Hanford Site in Washington State was used to develop a new method for evaluating mathematical model predictions. Measured water content data were interpolated geostatistically to a 16 x 16 x 36 grid at several time intervals. Then a mathematical model was used to predict water content at the same grid locations at the selected times. Node-by-node comparison of the mathematical model predictions with the geostatistically interpolated values was conducted. The method facilitates a complete accounting and categorization of model error at every node. The comparison suggests that model results generally are within measurement error. The worst model error occurs in silt lenses and is in excess of measurement error.

  12. Assessing the prediction accuracy of cure in the Cox proportional hazards cure model: an application to breast cancer data.

    PubMed

    Asano, Junichi; Hirakawa, Akihiro; Hamada, Chikuma

    2014-01-01

    A cure rate model is a survival model incorporating the cure rate under the assumption that the population contains both uncured and cured individuals. It is a powerful statistical tool for prognostic studies, especially in cancer, and the cure rate is important for making treatment decisions in clinical practice. The proportional hazards (PH) cure model can predict the cure rate for each patient; it contains a logistic regression component for the cure rate and a Cox regression component to estimate the hazard for uncured patients. A measure for quantifying the predictive accuracy of the cure rate estimated by the Cox PH cure model is required, as there has been a lack of previous research in this area. We applied the Cox PH cure model to breast cancer data; however, the area under the receiver operating characteristic curve (AUC) could not be estimated because many patients were censored. In this study, we used imputation-based AUCs to assess the predictive accuracy of the cure rate from the PH cure model and examined the precision of these AUCs using simulation studies. The results demonstrated that the imputation-based AUCs were estimable and their biases were negligibly small in many cases, even though the ordinary AUC could not be estimated. Additionally, we introduced a bias-correction method for the imputation-based AUCs and found that the bias-corrected estimate successfully compensated for the overestimation in the simulation studies. We also illustrate the estimation of the imputation-based AUCs using the breast cancer data.
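    A minimal sketch of the imputation idea, under assumptions not taken from the paper: cure status for each censored patient is drawn from an assumed cure probability, the Mann-Whitney AUC is computed per imputation, and the results are averaged. All scores and probabilities below are invented.

```python
# Imputation-averaged AUC sketch.  auc() is the Mann-Whitney statistic:
# the probability that a cured patient's score outranks an uncured one's.
import random

def auc(scores_pos, scores_neg):
    """Rank-based AUC with ties counted as half wins."""
    wins = sum((p > q) + 0.5 * (p == q) for p in scores_pos for q in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def imputation_auc(score_cured, score_uncured,
                   score_censored, cure_prob_censored, n_imp=200, seed=1):
    """Average AUC over n_imp random imputations of censored cure status."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_imp):
        pos, neg = list(score_cured), list(score_uncured)
        for s, p in zip(score_censored, cure_prob_censored):
            (pos if rng.random() < p else neg).append(s)
        total += auc(pos, neg)
    return total / n_imp

est = imputation_auc([0.9, 0.8, 0.7], [0.2, 0.3], [0.6, 0.4], [0.7, 0.3])
print(round(est, 3))
```

    The paper's bias-correction step would additionally adjust this average for the optimism introduced by imputing from the fitted model itself.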

  13. Performance model assessment for multi-junction concentrating photovoltaic systems.

    SciTech Connect

    Riley, Daniel M.; McConnell, Robert.; Sahm, Aaron; Crawford, Clark; King, David L.; Cameron, Christopher P.; Foresi, James S.

    2010-03-01

    Four approaches to modeling multi-junction concentrating photovoltaic system performance are assessed by comparing modeled performance to measured performance. Measured weather, irradiance, and system performance data were collected on two systems over a one month period. Residual analysis is used to assess the models and to identify opportunities for model improvement.

  14. Performance of alternative strategies for primary cervical cancer screening in sub-Saharan Africa: systematic review and meta-analysis of diagnostic test accuracy studies

    PubMed Central

    Combescure, Christophe; Fokom-Defo, Victoire; Tebeu, Pierre Marie; Vassilakos, Pierre; Kengne, André Pascal; Petignat, Patrick

    2015-01-01

    Objective To assess and compare the accuracy of visual inspection with acetic acid (VIA), visual inspection with Lugol’s iodine (VILI), and human papillomavirus (HPV) testing as alternative standalone methods for primary cervical cancer screening in sub-Saharan Africa. Design Systematic review and meta-analysis of diagnostic test accuracy studies. Data sources Systematic searches of multiple databases including Medline, Embase, and Scopus for studies published between January 1994 and June 2014. Review methods Inclusion criteria for studies were: alternative methods to cytology used as a standalone test for primary screening; study population not at particular risk of cervical cancer (excluding studies focusing on HIV positive women or women with gynaecological symptoms); women screened by nurses; reference test (colposcopy and directed biopsies) performed at least in women with positive screening results. Two reviewers independently screened studies for eligibility and extracted data for inclusion, and evaluated study quality using the quality assessment of diagnostic accuracy studies 2 (QUADAS-2) checklist. Primary outcomes were absolute accuracy measures (sensitivity and specificity) of screening tests to detect cervical intraepithelial neoplasia grade 2 or worse (CIN2+). Results 15 studies of moderate quality were included (n=61 381 for VIA, n=46 435 for VILI, n=11 322 for HPV testing). Prevalence of CIN2+ did not vary by screening test and ranged from 2.3% (95% confidence interval 1.5% to 3.3%) in VILI studies to 4.9% (2.7% to 7.8%) in HPV testing studies. Positivity rates of VILI, VIA, and HPV testing were 16.5% (9.8% to 24.7%), 16.8% (11.0% to 23.6%), and 25.8% (17.4% to 35.3%), respectively. Pooled sensitivity was higher for VILI (95.1%; 90.1% to 97.7%) than VIA (82.4%; 76.3% to 87.3%) in studies where the reference test was performed in all women (P<0.001). Pooled specificity of VILI and VIA were similar (87.2% (78.1% to 92.8%) v 87.4% (77.1% to 93
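    As background, the pooled measures above derive from per-study 2x2 tables of screening result against CIN2+ status on the colposcopy/biopsy reference standard; a minimal sketch with invented counts:

```python
# Sensitivity and specificity from a single study's 2x2 table.
# The counts below are invented, not taken from any included study.

def sens_spec(tp, fp, fn, tn):
    """Return (sensitivity, specificity) from 2x2 table counts."""
    sensitivity = tp / (tp + fn)   # CIN2+ cases the screening test detects
    specificity = tn / (tn + fp)   # negatives among women without CIN2+
    return sensitivity, specificity

se, sp = sens_spec(tp=90, fp=150, fn=20, tn=740)
print(round(se, 3), round(sp, 3))  # → 0.818 0.831
```

    The meta-analysis then pools these per-study estimates (with confidence intervals) across the 15 included studies.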

  15. Evaluation of the truebeam machine performance check (MPC) geometric checks for daily IGRT geometric accuracy quality assurance.

    PubMed

    Barnes, Michael P; Greer, Peter B

    2017-03-22

    Machine Performance Check (MPC) is an automated, integrated, image-based tool for verifying the beam and geometric performance of the TrueBeam linac. The aim of this study was to evaluate the performance of the MPC geometric tests relevant to OBI/CBCT IGRT geometric accuracy, including the MPC isocenter and couch tests. Evaluation was performed by comparing MPC to QA tests performed routinely in the department over a 4-month period. The MPC isocenter tests were compared against an in-house developed Winston-Lutz test, and the couch tests against routine mechanical QA-type procedures. In all cases the results from the routine QA procedure were presented in a form directly comparable to MPC to allow a like-for-like comparison. The sensitivity of MPC was also tested by deliberately miscalibrating the appropriate linac parameter. The MPC isocenter size and MPC kV imager offset were found to agree with Winston-Lutz to within 0.2 mm and 0.22 mm, respectively. The MPC couch tests agreed with routine QA to within 0.12 mm and 0.15°. The MPC isocenter size and kV imager offset parameters were found to be affected by a change in beam focal spot position, with the kV imager offset being the more sensitive. The MPC couch tests were all unaffected by an offset in the couch calibration, but the three axes that utilized two-point calibrations were sensitive to a miscalibration of the span of the calibration. All MPC tests were unaffected by a deliberate misalignment of the MPC phantom and roll of the order of one degree.

  16. An assessment of the accuracy of admittance and coherence estimates using synthetic data

    NASA Astrophysics Data System (ADS)

    Crosby, A. G.

    2007-10-01

    Previous work has shown that estimates of the admittance between topography and free-air gravity anomalies are often biased by spectral leakage, even after the application of multiple prolate spheroidal wavefunction data-tapers. Despite this, a number of authors who have used the free-air admittance method to estimate the weighted-average effective elastic thickness of the lithosphere (Te) and to identify topography supported by mantle convection have not tested their methods using synthetic data with a known relationship between topography and gravity. In this paper, I perform a range of such tests using both synthetic surface data and synthetic line-of-sight (LOS) accelerations of satellites orbiting around an extra-terrestrial planet. It is found that spectral leakage can cause the estimated admittance and coherence to be significantly in error, but only if the box in which they are estimated is too small. The definition of 'small' depends on the redness of the gravity spectrum. There is minimal error in the whole-box weighted-average estimate of Te if the admittance between surface gravity and topography is estimated within a box at least 3000-km-wide. When the synthetic (uniform) Te is less than 20 km and the coherence is high, the errors in Te are mostly +/-5 km for all box sizes greater than 1000 km. On the other hand, when the true Te is greater than 20 km and the box size is 1000 km, the best-fitting Te is likely to be at least 5-10 km less than the true Te. However, even when the coherence is high, it is not possible to use elastic plate admittance models to distinguish between real and spurious small fractions of internal loading when the boxes are smaller than 2000 km in width. Noise in the gravity introduces error and uncertainty, but no additional bias, into the estimates of the admittance. It does, however, bias estimates of Te calculated using the coherence between Bouguer gravity and topography. The admittance at wavelengths between 1000 and 4000 km
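    The admittance estimate this study tests can be sketched in one dimension: Z(k) is the cross-spectrum of gravity and topography divided by the topography power spectrum. This is an illustrative synthetic-data example with a built-in admittance of 0.05, not the paper's 2-D multitaper implementation; profile length and noise level are invented.

```python
# 1-D whole-spectrum admittance estimate on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 256
topo = rng.standard_normal(n)                       # synthetic topography
grav = 0.05 * topo + 0.01 * rng.standard_normal(n)  # gravity with Z = 0.05

H = np.fft.rfft(topo)
G = np.fft.rfft(grav)
# Cross-spectrum over topography power; band averaging omitted for brevity.
Z = float(np.real(np.sum(G * np.conj(H))) / np.sum(np.abs(H) ** 2))
print(round(Z, 4))
```

    The leakage bias the paper quantifies arises because, on a finite box, tapered spectral estimates mix power across wavenumbers, which this idealized sketch ignores.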

  17. High-Capacity Communications from Martian Distances Part 4: Assessment of Spacecraft Pointing Accuracy Capabilities Required For Large Ka-Band Reflector Antennas

    NASA Technical Reports Server (NTRS)

    Hodges, Richard E.; Sands, O. Scott; Huang, John; Bassily, Samir

    2006-01-01

    Improved surface accuracy for deployable reflectors has brought with it the possibility of Ka-band reflector antennas with extents on the order of 1000 wavelengths. Such antennas are being considered for high-rate data delivery from planetary distances. To maintain losses at reasonable levels requires a sufficiently capable Attitude Determination and Control System (ADCS) onboard the spacecraft. This paper provides an assessment of currently available ADCS strategies and performance levels. In addition to other issues, specific factors considered include: (1) use of "beaconless" or open loop tracking versus use of a beacon on the Earth side of the link, and (2) selection of fine pointing strategy (body-fixed/spacecraft pointing, reflector pointing or various forms of electronic beam steering). Capabilities of recent spacecraft are discussed.

  18. Complexity, Accuracy, and Fluency as Properties of Language Performance: The Development of the Multiple Subsystems over Time and in Relation to Each Other

    ERIC Educational Resources Information Center

    Vercellotti, Mary Lou

    2012-01-01

    Applied linguists have identified three components of second language (L2) performance: complexity, accuracy, and fluency (CAF) to measure L2 development. Many studies researching CAF found trade-off effects (in which a higher performance in one component corresponds to lower performance in another) during tasks, often in online oral language…

  19. Implementing Performance Assessment: Promises, Problems, and Challenges.

    ERIC Educational Resources Information Center

    Kane, Michael B., Ed.; Mitchell, Ruth, Ed.

    The chapters in this collection contribute to the debate about the value and usefulness of radically different kinds of assessments in the U.S. educational system by considering and expanding on the theoretical underpinnings of reports and speculation. The chapters are: (1) "Assessment Reform: Promises and Challenges" (Nidhi Khattri and…

  20. Assessment of the Metrological Performance of Seismic Tables for a QMS Recognition

    NASA Astrophysics Data System (ADS)

    Silva Ribeiro, A.; Campos Costa, A.; Candeias, P.; Sousa, J. Alves e.; Lages Martins, L.; Freitas Martins, A. C.; Ferreira, A. C.

    2016-11-01

    Seismic testing and analysis using large infrastructures, such as shaking tables and reaction walls, is performed worldwide and requires the use of complex instrumentation systems. To assure the accuracy of these systems, conformity assessment is needed to verify compliance with standards and applications, and Quality Management Systems (QMS) are increasingly being applied to domains where risk analysis is critical as a way to provide formal recognition. This paper describes an approach to the assessment of the metrological performance of seismic shake tables as part of a QMS recognition, with the analysis of a case study of the LNEC seismic shake table.

  1. Scalable Methods for Uncertainty Quantification, Data Assimilation and Target Accuracy Assessment for Multi-Physics Advanced Simulation of Light Water Reactors

    NASA Astrophysics Data System (ADS)

    Khuwaileh, Bassam

    High fidelity simulation of nuclear reactors entails large scale applications characterized by high dimensionality and tremendous complexity, where various physics models are integrated in the form of coupled models (e.g. neutronics with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors, achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing Uncertainty Quantification (UQ), Data Assimilation (DA), and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved by identifying the important/influential degrees of freedom (DoF) via subspace analysis, such that the required analysis can be recast in terms of the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. Then the reduced subspace is used to solve realistic, large scale forward (UQ) and inverse (DA and TAA) problems. Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL

  2. NREL Evaluates the Thermal Performance of Uninsulated Walls to Improve the Accuracy of Building Energy Simulation Tools (Fact Sheet)

    SciTech Connect

    Not Available

    2012-01-01

    This technical highlight describes NREL research to develop models of uninsulated wall assemblies that help to improve the accuracy of building energy simulation tools when modeling potential energy savings in older homes. Researchers at the National Renewable Energy Laboratory (NREL) have developed models for evaluating the thermal performance of walls in existing homes that will improve the accuracy of building energy simulation tools when predicting potential energy savings of existing homes. Uninsulated walls are typical in older homes where the wall cavities were not insulated during construction or where the insulating material has settled. Accurate calculation of heat transfer through building enclosures will help determine the benefit of energy efficiency upgrades in order to reduce energy consumption in older American homes. NREL performed detailed computational fluid dynamics (CFD) analysis to quantify the energy loss/gain through the walls and to visualize different airflow regimes within the uninsulated cavities. The effects of ambient outdoor temperature, radiative properties of building materials, and insulation level were investigated. The study showed that multi-dimensional airflows occur in walls with uninsulated cavities and that the thermal resistance is a function of the outdoor temperature, an effect not accounted for in existing building energy simulation tools. The study quantified the difference between CFD prediction and the approach currently used in building energy simulation tools over a wide range of conditions. For example, researchers found that CFD predicted lower heating loads and slightly higher cooling loads. Implementation of CFD results into building energy simulation tools such as DOE2 and EnergyPlus will likely reduce the predicted heating load of homes. Researchers also determined that a small air gap in a partially insulated cavity can lead to a significant reduction in thermal resistance. For instance, a 4-in. tall air gap

  3. Positional Accuracy Assessment of the OpenStreetMap Buildings Layer Through Automatic Homologous Pairs Detection: the Method and a Case Study

    NASA Astrophysics Data System (ADS)

    Brovelli, M. A.; Minghini, M.; Molinari, M. E.; Zamboni, G.

    2016-06-01

    OpenStreetMap (OSM) is currently the largest openly licensed collection of geospatial data. As OSM is increasingly exploited in a variety of applications, research has paid great attention to the assessment of its quality. This work focuses on assessing the quality of OSM buildings. While most of the studies available in the literature are limited to the evaluation of OSM building completeness, this work proposes an original approach to assess the positional accuracy of OSM buildings based on comparison with a reference dataset. The comparison relies on a quasi-automated detection of homologous pairs on the two datasets. Based on the homologous pairs found, warping algorithms such as affine transformations and multi-resolution splines can be applied to the OSM buildings to generate a new version having an optimal local match to the reference layer. A quality assessment of the OSM buildings of Milan Municipality (Northern Italy), covering an area of about 180 km2, is then presented. After computing some measures of completeness, the algorithm based on homologous points is run using the building layer of the official vector cartography of Milan Municipality as the reference dataset. Approximately 100000 homologous points are found, which show a systematic translation of about 0.4 m in both the X and Y directions and a mean distance of about 0.8 m between the datasets. Besides its efficiency and high degree of automation, the algorithm generates a warped version of the OSM buildings which, having by definition the closest match to the reference buildings, can eventually be integrated in the OSM database.
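    The reported systematic shift and mean distance can be computed directly from homologous pairs; this sketch uses three invented coordinate pairs, not the Milan data.

```python
# Systematic translation (mean coordinate difference) and mean
# point-to-point distance between homologous OSM/reference points.
import math

pairs = [((10.0, 20.0), (10.4, 20.4)),
         ((15.0, 25.0), (15.3, 25.5)),
         ((12.0, 22.0), (12.5, 22.3))]  # (osm_xy, reference_xy), invented

dx = sum(ref[0] - osm[0] for osm, ref in pairs) / len(pairs)
dy = sum(ref[1] - osm[1] for osm, ref in pairs) / len(pairs)
mean_dist = sum(math.dist(osm, ref) for osm, ref in pairs) / len(pairs)
print(round(dx, 2), round(dy, 2), round(mean_dist, 2))
```

    The warping step described above then goes beyond a single global translation by fitting local transformations to the full set of homologous points.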

  4. Performance-Based Assessment of Biology Teachers: Promises and Pitfalls.

    ERIC Educational Resources Information Center

    Collins, Angelo

    Research of the biology component of the Teacher Assessment Project (BioTAP) was conducted to explore the feasibility of using performance-based assessments to evaluate high school biology teachers. Three modes of performance-based assessment were employed: portfolios, portfolio-based simulations, and simulation exercises. Fifteen high school…

  5. Exploring the Utility of a Virtual Performance Assessment

    ERIC Educational Resources Information Center

    Clarke-Midura, Jody; Code, Jillianne; Zap, Nick; Dede, Chris

    2011-01-01

    With funding from the Institute of Education Sciences (IES), the Virtual Performance Assessment project at the Harvard Graduate School of Education is developing and studying the feasibility of immersive virtual performance assessments (VPAs) to assess scientific inquiry of middle school students as a standardized component of an accountability…

  6. Assessment in Performance-Based Secondary Music Classes

    ERIC Educational Resources Information Center

    Pellegrino, Kristen; Conway, Colleen M.; Russell, Joshua A.

    2015-01-01

    After sharing research findings about grading and assessment practices in secondary music ensemble classes, we offer examples of commonly used assessment tools (ratings scale, checklist, rubric) for the performance ensemble. Then, we explore the various purposes of assessment in performance-based music courses: (1) to meet state, national, and…

  7. Integration of Mobile AR Technology in Performance Assessment

    ERIC Educational Resources Information Center

    Kuo-Hung, Chao; Kuo-En, Chang; Chung-Hsien, Lan; Kinshuk; Yao-Ting, Sung

    2016-01-01

    This study was aimed at exploring how to use augmented reality (AR) technology to enhance the effect of performance assessment (PA). A mobile AR performance assessment system (MARPAS) was developed by integrating AR technology to reduce the limitations in observation and assessment during PA. This system includes three modules: Authentication, AR…

  8. Performance Assessment for the Workplace. Volume I.

    ERIC Educational Resources Information Center

    Wigdor, Alexandra K., Ed.; Green, Bert F., Jr., Ed.

    This is the sixth and final report of the National Research Council's Committee on the Performance of Military Personnel on the Joint-Service Job Performance Measurement/Enlistment Standards (JPM) Project, a project designed to develop measures of performance for entry-level military jobs so that enlistment standards could be linked to performance…

  9. Accuracy assessment on the analysis of unbound drug in plasma by comparing traditional centrifugal ultrafiltration with hollow fiber centrifugal ultrafiltration and application in pharmacokinetic study.

    PubMed

    Zhang, Lin; Zhang, Zhi-Qing; Dong, Wei-Chong; Jing, Shao-Jun; Zhang, Jin-Feng; Jiang, Ye

    2013-11-29

    In the present study, the accuracy of the analysis of unbound drug in plasma was assessed by comparing traditional centrifugal ultrafiltration (CF-UF) with hollow fiber centrifugal ultrafiltration (HFCF-UF). We used metformin (MET) as a model drug and studied the influence of centrifugation time, plasma condition and the number of freeze-thaw cycles on the ultrafiltrate volume and the related effect on the measurement of MET. Our results demonstrated that ultrafiltrate volume is a crucial factor influencing the measurement accuracy of unbound drug in plasma. For traditional CF-UF, the ultrafiltrate volume cannot be well controlled due to a series of factors. Compared with traditional CF-UF, the ultrafiltrate volume in HFCF-UF can easily be controlled by the inner capacity of the U-shaped hollow fiber inserted into the sample, given sufficient centrifugal force and centrifugation time, which contributes to a more accurate measurement. Moreover, the developed HFCF-UF method was successfully applied to real plasma samples and exhibited several advantages, including high precision, an extremely low detection limit and excellent recovery. The HFCF-UF method offers highly satisfactory performance in addition to being simple and fast in pretreatment, characteristics consistent with the practical requirements of current scientific research.

  10. WebRASP: a server for computing energy scores to assess the accuracy and stability of RNA 3D structures

    PubMed Central

    Norambuena, Tomas; Cares, Jorge F.; Capriotti, Emidio; Melo, Francisco

    2013-01-01

    Summary: The understanding of the biological role of RNA molecules has changed. Although it is widely accepted that RNAs play important regulatory roles without necessarily coding for proteins, the functions of many of these non-coding RNAs are unknown. Thus, determining or modeling the 3D structure of RNA molecules as well as assessing their accuracy and stability has become of great importance for characterizing their functional activity. Here, we introduce a new web application, WebRASP, that uses knowledge-based potentials for scoring RNA structures based on distance-dependent pairwise atomic interactions. This web server allows the users to upload a structure in PDB format, select several options to visualize the structure and calculate the energy profile. The server contains online help, tutorials and links to other related resources. We believe this server will be a useful tool for predicting and assessing the quality of RNA 3D structures. Availability and implementation: The web server is available at http://melolab.org/webrasp. It has been tested on the most popular web browsers and requires the Java plugin for Jmol visualization. Contact: fmelo@bio.puc.cl PMID:23929030
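
    The scoring idea can be illustrated with a toy distance-dependent statistical potential of the general knowledge-based form E = -Σ ln(p_obs(d)/p_ref(d)); the probability tables, bin width and function name below are invented for illustration and do not reflect WebRASP's actual parameterization.

```python
import math

def knowledge_based_score(pair_distances, p_observed, p_reference):
    """Toy distance-dependent statistical potential: sum over atom
    pairs of -ln(p_obs(d) / p_ref(d)), with distances collected into
    1-Angstrom bins. Negative totals indicate favourable geometry."""
    energy = 0.0
    for d in pair_distances:
        b = min(int(d), len(p_observed) - 1)  # clamp to the last bin
        energy += -math.log(p_observed[b] / p_reference[b])
    return energy

# Invented probability tables: pairs at ~2-3 A are "favourable" here.
p_obs = [0.05, 0.20, 0.30, 0.25, 0.20]
p_ref = [0.20, 0.20, 0.20, 0.20, 0.20]
energy = knowledge_based_score([1.5, 2.5], p_obs, p_ref)
```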

  11. Accuracy and Usefulness of Select Methods for Assessing Complete Collection of 24-Hour Urine: A Systematic Review.

    PubMed

    John, Katherine A; Cogswell, Mary E; Campbell, Norm R; Nowson, Caryl A; Legetic, Branka; Hennis, Anselm J M; Patel, Sheena M

    2016-05-01

    Twenty-four-hour urine collection is the recommended method for estimating sodium intake. To investigate the strengths and limitations of methods used to assess completion of 24-hour urine collection, the authors systematically reviewed the literature on the accuracy and usefulness of methods vs para-aminobenzoic acid (PABA) recovery (referent). The percentage of incomplete collections, based on PABA, was 6% to 47% (n=8 studies). The sensitivity and specificity for identifying incomplete collection using creatinine criteria (n=4 studies) were 6% to 63% and 57% to 99.7%, respectively. The most sensitive method for removing incomplete collections was a creatinine index <0.7. In pooled analysis (≥2 studies), mean urine creatinine excretion and volume were higher among participants with complete collection (P<.05), whereas self-reported collection time did not differ by completion status. Compared with participants with incomplete collection, mean 24-hour sodium excretion was 19.6 mmol higher (n=1781 specimens, 5 studies) among those with complete collection. Sodium excretion may be underestimated by inclusion of incomplete 24-hour urine collections. None of the current approaches reliably assesses completion of 24-hour urine collection.

  12. Accuracy of the third molar index for assessing the legal majority of 18 years in Turkish population.

    PubMed

    Gulsahi, Ayse; De Luca, Stefano; Cehreli, S Burcak; Tirali, R Ebru; Cameriere, Roberto

    2016-09-01

    In the last few years, forced and unregistered child marriage has increased widely in Turkey. The aim of this study was to test the accuracy of the cut-off value of 0.08 for the third molar index (I3M) in assessing the legal adult age of 18 years. Digital panoramic images of 293 Turkish children and young adults (165 girls and 128 boys), aged between 14 and 22 years, were analysed. Age distribution gradually decreases as I3M increases in both girls and boys. For girls, the sensitivity was 85.9% (95% CI 77.1-92.8%) and specificity was 100%. The proportion of correctly classified individuals was 92.7%. For boys, the sensitivity was 94.6% (95% CI 88.1-99.8%) and specificity was 100%. The proportion of correctly classified individuals was 97.6%. The cut-off value of 0.08 is a useful method to assess whether a subject is older than 18 years of age.
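
    A sketch of how such a cut-off is evaluated, with invented data (not the study's): lower I3M indicates more complete root development, so a subject is classified as an adult when I3M falls below the cut-off, and sensitivity/specificity are read off the resulting confusion table.

```python
def confusion_at_cutoff(i3m_values, is_adult, cutoff=0.08):
    """Classify subjects as adult (>= 18 y) when I3M < cutoff and
    tabulate against the true legal-age status.

    Returns (sensitivity, specificity) for detecting adults.
    """
    tp = fp = tn = fn = 0
    for i3m, adult in zip(i3m_values, is_adult):
        predicted_adult = i3m < cutoff  # mature root -> small index
        if predicted_adult and adult:
            tp += 1
        elif predicted_adult and not adult:
            fp += 1
        elif not predicted_adult and not adult:
            tn += 1
        else:
            fn += 1
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative data only: three adults, three minors, two misclassified.
i3m = [0.02, 0.05, 0.09, 0.10, 0.30, 0.01]
adult = [True, True, True, False, False, False]
sens, spec = confusion_at_cutoff(i3m, adult)
```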

  13. AN ACCURACY ASSESSMENT OF 1992 LANDSAT-MSS DERIVED LAND COVER FOR THE UPPER SAN PEDRO WATERSHED (U.S./MEXICO)

    EPA Science Inventory

    The utility of Digital Orthophoto Quads (DOQs) in assessing the classification accuracy of land cover derived from Landsat MSS data was investigated. Initially, the suitability of DOQs in distinguishing between different land cover classes was assessed using high-resolution airbo...

  14. Performance and Accuracy of Lightweight and Low-Cost GPS Data Loggers According to Antenna Positions, Fix Intervals, Habitats and Animal Movements.

    PubMed

    Forin-Wiart, Marie-Amélie; Hubert, Pauline; Sirguey, Pascal; Poulle, Marie-Lazarine

    2015-01-01

    Recently developed low-cost Global Positioning System (GPS) data loggers are promising tools for wildlife research because of their affordability for low-budget projects and ability to simultaneously track a greater number of individuals compared with expensive built-in wildlife GPS. However, the reliability of these devices must be carefully examined because they were not developed to track wildlife. This study aimed to assess the performance and accuracy of commercially available GPS data loggers for the first time using the same methods applied to test built-in wildlife GPS. The effects of antenna position, fix interval and habitat on the fix-success rate (FSR) and location error (LE) of CatLog data loggers were investigated in stationary tests, whereas the effects of animal movements on these errors were investigated in motion tests. The units operated well and presented consistent performance and accuracy over time in stationary tests, and the FSR was good for all antenna positions and fix intervals. However, the LE was affected by the GPS antenna and fix interval. Furthermore, completely or partially obstructed habitats reduced the FSR by up to 80% in households and increased the LE. Movement across habitats had no effect on the FSR, whereas forest habitat influenced the LE. Finally, the mean FSR (0.90 ± 0.26) and LE (15.4 ± 10.1 m) values from low-cost GPS data loggers were comparable to those of built-in wildlife GPS collars (71.6% of fixes with LE < 10 m for motion tests), thus confirming their suitability for use in wildlife studies.
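
    The two error measures can be computed directly from logged fixes. A minimal sketch, assuming the location error (LE) is taken as the great-circle distance between a recorded fix and the surveyed true position; the helper names are ours, not the study's.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points
    (spherical Earth, mean radius)."""
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def fix_success_rate(acquired, scheduled):
    """FSR = successfully acquired fixes / scheduled fix attempts."""
    return acquired / scheduled

# A fix ~0.001 deg north of the true position is roughly 111 m away.
le = haversine_m(45.000, 4.000, 45.001, 4.000)
fsr = fix_success_rate(90, 100)
```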

  15. Performance and Accuracy of Lightweight and Low-Cost GPS Data Loggers According to Antenna Positions, Fix Intervals, Habitats and Animal Movements

    PubMed Central

    Forin-Wiart, Marie-Amélie; Hubert, Pauline; Sirguey, Pascal; Poulle, Marie-Lazarine

    2015-01-01

    Recently developed low-cost Global Positioning System (GPS) data loggers are promising tools for wildlife research because of their affordability for low-budget projects and ability to simultaneously track a greater number of individuals compared with expensive built-in wildlife GPS. However, the reliability of these devices must be carefully examined because they were not developed to track wildlife. This study aimed to assess the performance and accuracy of commercially available GPS data loggers for the first time using the same methods applied to test built-in wildlife GPS. The effects of antenna position, fix interval and habitat on the fix-success rate (FSR) and location error (LE) of CatLog data loggers were investigated in stationary tests, whereas the effects of animal movements on these errors were investigated in motion tests. The units operated well and presented consistent performance and accuracy over time in stationary tests, and the FSR was good for all antenna positions and fix intervals. However, the LE was affected by the GPS antenna and fix interval. Furthermore, completely or partially obstructed habitats reduced the FSR by up to 80% in households and increased the LE. Movement across habitats had no effect on the FSR, whereas forest habitat influenced the LE. Finally, the mean FSR (0.90 ± 0.26) and LE (15.4 ± 10.1 m) values from low-cost GPS data loggers were comparable to those of built-in wildlife GPS collars (71.6% of fixes with LE < 10 m for motion tests), thus confirming their suitability for use in wildlife studies. PMID:26086958

  16. Flight assessment of the onboard propulsion system model for the Performance Seeking Control algorithm on an F-15 aircraft

    NASA Technical Reports Server (NTRS)

    Orme, John S.; Schkolnik, Gerard S.

    1995-01-01

    Performance Seeking Control (PSC), an onboard, adaptive, real-time optimization algorithm, relies upon an onboard propulsion system model. Flight results illustrated propulsion system performance improvements as calculated by the model. These improvements were subject to uncertainty arising from modeling error. Thus, to quantify uncertainty in the PSC performance improvements, modeling accuracy must be assessed. A flight test approach to verify PSC-predicted increases in thrust (FNP) and absolute levels of fan stall margin is developed and applied to flight test data. Application of the excess thrust technique shows that increases in FNP agree to within 3 percent of full-scale measurements for most conditions. Accuracy at these levels is significant because uncertainty bands may now be applied to the performance improvements provided by PSC. Assessment of PSC fan stall margin modeling accuracy was completed with analysis of in-flight stall tests. Results indicate that the model overestimates the stall margin by 5 to 10 percent. Because PSC achieves performance gains by using available stall margin, this overestimation may represent performance improvements to be recovered with increased modeling accuracy. Assessment of thrust and stall margin modeling accuracy provides a critical piece of a comprehensive understanding of PSC's capabilities and limitations.

  17. A Short History of Performance Assessment: Lessons Learned.

    ERIC Educational Resources Information Center

    Madaus, George F.; O'Dwyer, Laura M.

    1999-01-01

    Places performance assessment in the context of high-stakes uses, describes underlying technologies, and outlines the history of performance testing from 210 B.C.E. to the present. Historical issues of fairness, efficiency, cost, and infrastructure influence contemporary efforts to use performance assessments in large-scale, high-stakes testing…

  18. New models for age estimation and assessment of their accuracy using developing mandibular third molar teeth in a Thai population.

    PubMed

    Duangto, P; Iamaroon, A; Prasitwattanaseree, S; Mahakkanukrauh, P; Janhom, A

    2017-03-01

    Age estimation using developing third molar teeth is considered an important and accurate technique for both clinical and forensic practice. The aims of this study were to establish population-specific reference data, to develop age prediction models using mandibular third molar development, to test the accuracy of the resulting models, and to find the probability of persons being at the age thresholds of legal relevance in a Thai population. A total of 1867 digital panoramic radiographs of Thai individuals aged between 8 and 23 years were selected to assess dental age. Mandibular third molar development was divided into nine stages; the stages were evaluated and each stage was transformed into a development score. Quadratic regression was employed to develop age prediction models. Our results show that males reached mandibular third molar root formation stages earlier than females. The models revealed a high correlation coefficient for both left and right mandibular third molar teeth in both sexes (R = 0.945 and 0.944 in males, R = 0.922 and 0.923 in females, respectively). Furthermore, the accuracy of the resulting models was tested in 374 randomly selected cases and showed low error values between the predicted dental age and the chronological age for both left and right mandibular third molar teeth in both sexes (-0.13 and -0.17 years in males, 0.01 and 0.03 years in females, respectively). In the Thai sample, when the mandibular third molar teeth reached stage H, the probability of the person being over 18 years was 100% in both sexes.
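
    A sketch of the model-fitting step, assuming the development stages have already been converted to a numeric score. The data below are synthetic, generated from a known quadratic, purely to show the mechanics of fitting age = b0 + b1·s + b2·s² by ordinary least squares.

```python
import numpy as np

def fit_quadratic_age_model(scores, ages):
    """Fit age = b0 + b1*s + b2*s**2 by least squares.
    np.polyfit returns coefficients highest power first: [b2, b1, b0]."""
    return np.polyfit(scores, ages, deg=2)

def predict_age(coeffs, score):
    """Evaluate the fitted polynomial at a development score."""
    return np.polyval(coeffs, score)

# Synthetic illustration: ages generated from a known quadratic.
scores = np.arange(1.0, 10.0)
ages = 8.0 + 1.2 * scores + 0.05 * scores**2
coeffs = fit_quadratic_age_model(scores, ages)
```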

  19. Assessment of the accuracy of coupled cluster perturbation theory for open-shell systems. I. Triples expansions.

    PubMed

    Eriksen, Janus J; Matthews, Devin A; Jørgensen, Poul; Gauss, Jürgen

    2016-05-21

    The accuracy at which total energies of open-shell atoms and organic radicals may be calculated is assessed for selected coupled cluster perturbative triples expansions, all of which augment the coupled cluster singles and doubles (CCSD) energy by a non-iterative correction for the effect of triple excitations. Namely, the second- through sixth-order models of the recently proposed CCSD(T-n) triples series [J. J. Eriksen et al., J. Chem. Phys. 140, 064108 (2014)] are compared to the acclaimed CCSD(T) model for both unrestricted as well as restricted open-shell Hartree-Fock (UHF/ROHF) reference determinants. By comparing UHF- and ROHF-based statistical results for a test set of 18 modest-sized open-shell species with comparable RHF-based results, no behavioral differences are observed for the higher-order models of the CCSD(T-n) series in their correlated descriptions of closed- and open-shell species. In particular, we find that the convergence rate throughout the series towards the coupled cluster singles, doubles, and triples (CCSDT) solution is identical for the two cases. For the CCSD(T) model, on the other hand, not only its numerical consistency, but also its established, yet fortuitous cancellation of errors breaks down in the transition from closed- to open-shell systems. The higher-order CCSD(T-n) models (orders n > 3) thus offer a consistent and significant improvement in accuracy relative to CCSDT over the CCSD(T) model, equally for RHF, UHF, and ROHF reference determinants, albeit at an increased computational cost.

  20. [CONTROVERSIES REGARDING THE ACCURACY AND LIMITATIONS OF FROZEN SECTION IN THYROID PATHOLOGY: AN EVIDENCE-BASED ASSESSMENT].

    PubMed

    Stanciu-Pop, C; Pop, F C; Thiry, A; Scagnol, I; Maweja, S; Hamoir, E; Beckers, A; Meurisse, M; Grosu, F; Delvenne, Ph

    2015-12-01

    Palpable thyroid nodules are present clinically in 4-7% of the population, and their prevalence increases to 50-67% when high-resolution neck ultrasonography is used. By contrast, thyroid carcinoma (TC) represents only 5-20% of these nodules, which underlines the need for an appropriate approach to avoid unnecessary surgery. Frozen section (FS) has been used for more than 40 years in thyroid surgery to establish the diagnosis of malignancy. However, controversy persists regarding the accuracy of FS, and its place in thyroid pathology has changed with the emergence of fine-needle aspiration (FNA). A PubMed Medline and SpringerLink search covering the period from January 2000 to June 2012 was made to assess the accuracy of FS, its limitations and its indications for the diagnosis of thyroid nodules. Twenty publications encompassing 8,567 subjects were included in our study. The average prevalence of TC among thyroid nodules in the analyzed studies was 15.5%. The ability of FS to detect cancer, expressed by its sensitivity, was 67.5%. More than two thirds of the authors considered FS useful exclusively in the presence of doubtful FNA and for guiding the surgical extension in cases confirmed as malignant by FNA; however, only 33% accepted FS as a routine examination for the management of thyroid nodules. The influence of FS on the surgical reintervention rate in nodular thyroid pathology was considered negligible by most studies, whereas 31% of the authors thought that FS has a favorable benefit by decreasing the number of surgical re-interventions. In conclusion, the role of FS in thyroid pathology has evolved from a mandatory component of thyroid surgery to an optional examination after pre-operative FNA cytology. The accuracy of FS seems to provide no sufficient additional benefit and most experts support its use only in the presence of equivocal or suspicious cytological features, for guiding the surgical extension in cases confirmed as malignant by FNA and for the

  1. Statistical assessment of speech system performance

    NASA Technical Reports Server (NTRS)

    Moshier, Stephen L.

    1977-01-01

    Methods for normalizing the results of performance tests of speech recognition systems are presented. Technological accomplishments in speech recognition systems, as well as planned research activities, are described.

  2. Performance assessment of 3D surface imaging technique for medical imaging applications

    NASA Astrophysics Data System (ADS)

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

    Recent developments in optical 3D surface imaging technologies provide better ways to digitize the 3D surface and its motion in real time. The non-invasive 3D surface imaging approach has great potential for many medical imaging applications, such as motion monitoring in radiotherapy and pre/post evaluation in plastic surgery and dermatology, to name a few. Various commercial 3D surface imaging systems have appeared on the market with different dimensions, speeds and accuracies. For clinical applications, accuracy, reproducibility and robustness across widely heterogeneous skin colors, tones, textures, shape properties, and ambient lighting are crucial. Until now, no systematic approach for evaluating the performance of different 3D surface imaging systems has existed. In this paper, we present a systematic performance assessment approach for 3D surface imaging systems for medical applications. We use this assessment approach to examine a new real-time surface imaging system we developed, dubbed the "Neo3D Camera", for image-guided radiotherapy (IGRT). The assessments include accuracy, field of view, coverage, repeatability, speed and sensitivity to environment, texture and color.

  3. OSLD energy response performance and dose accuracy at 24 - 1250 keV: Comparison with TLD-100H and TLD-100

    SciTech Connect

    Kadir, A. B. A.; Priharti, W.; Samat, S. B.; Dolah, M. T.

    2013-11-27

    OSLD was evaluated in terms of energy response and accuracy of the measured dose in comparison with TLD-100H and TLD-100. The OSLD showed a better energy response performance for Hp(10), whereas for Hp(0.07), TLD-100H is superior to the others. The OSLD dose accuracy is comparable with that of the other two dosimeters, since it fulfilled the requirements of the ICRP trumpet graph analysis.

  4. Accuracy of the actuator disc-RANS approach for predicting the performance and wake of tidal turbines.

    PubMed

    Batten, W M J; Harrison, M E; Bahaj, A S

    2013-02-28

    The actuator disc-RANS model has been widely used in wind and tidal energy to predict the wake of a horizontal axis turbine. The model is appropriate where large-scale effects of the turbine on a flow are of interest, for example, when considering environmental impacts, or arrays of devices. The accuracy of the model for modelling the wake of tidal stream turbines has not been demonstrated, and flow predictions presented in the literature for similar modelled scenarios vary significantly. This paper compares the results of the actuator disc-RANS model, where the turbine forces have been derived using a blade-element approach, to experimental data measured in the wake of a scaled turbine. It also compares the results with those of a simpler uniform actuator disc model. The comparisons show that the model is accurate and can predict up to 94 per cent of the variation in the experimental velocity data measured on the centreline of the wake, therefore demonstrating that the actuator disc-RANS model is an accurate approach for modelling a turbine wake, and a conservative approach for predicting performance and loads. It can therefore be applied to similar scenarios with confidence.
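
    For orientation, the classical 1D momentum theory underlying any actuator disc representation relates the axial induction factor to thrust, power and far-wake velocity. This is textbook theory, not the paper's RANS-coupled blade-element model.

```python
def actuator_disc_1d(u_inf, a):
    """Classical 1D momentum theory for an ideal actuator disc.

    u_inf: free-stream velocity; a: axial induction factor (0 <= a < 0.5).
    Returns (thrust coefficient Ct, power coefficient Cp,
    far-wake velocity)."""
    ct = 4 * a * (1 - a)
    cp = 4 * a * (1 - a) ** 2
    u_wake = u_inf * (1 - 2 * a)
    return ct, cp, u_wake

# Betz optimum at a = 1/3: Cp = 16/27, Ct = 8/9, wake at u_inf/3... no,
# far-wake velocity is u_inf * (1 - 2a) = u_inf / 3 of the deficit form:
# here 2.0 * (1 - 2/3) = 2/3 m/s.
ct, cp, u_wake = actuator_disc_1d(2.0, 1.0 / 3.0)
```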

  5. 10 CFR 63.114 - Requirements for performance assessment.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Technical Criteria Postclosure Performance Assessment § 63..., hydrology, and geochemistry (including disruptive processes and events) of the Yucca Mountain site, and...

  6. 10 CFR 63.114 - Requirements for performance assessment.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Technical Criteria Postclosure Performance Assessment § 63..., hydrology, and geochemistry (including disruptive processes and events) of the Yucca Mountain site, and...

  7. 10 CFR 63.114 - Requirements for performance assessment.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Technical Criteria Postclosure Performance Assessment § 63..., hydrology, and geochemistry (including disruptive processes and events) of the Yucca Mountain site, and...

  8. 10 CFR 63.114 - Requirements for performance assessment.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Technical Criteria Postclosure Performance Assessment § 63..., hydrology, and geochemistry (including disruptive processes and events) of the Yucca Mountain site, and...

  9. Pooled analysis of the accuracy of five cervical cancer screening tests assessed in eleven studies in Africa and India.

    PubMed

    Arbyn, Marc; Sankaranarayanan, Rengaswamy; Muwonge, Richard; Keita, Namory; Dolo, Amadou; Mbalawa, Charles Gombe; Nouhou, Hassan; Sakande, Boblewende; Wesley, Ramani; Somanathan, Thara; Sharma, Anjali; Shastri, Surendra; Basu, Parthasarathy

    2008-07-01

    Cervical cancer is the main cancer among women in sub-Saharan Africa, India and other parts of the developing world. Evaluation of screening performance of effective, feasible and affordable early detection and management methods is a public health priority. Five screening methods, naked eye visual inspection of the cervix uteri after application of diluted acetic acid (VIA), or Lugol's iodine (VILI) or with a magnifying device (VIAM), the Pap smear and human papillomavirus testing with the high-risk probe of the Hybrid Capture-2 assay (HC2), were evaluated in 11 studies in India and Africa. More than 58,000 women, aged 25-64 years, were tested with 2-5 screening tests and outcome verification was done on all women independent of the screen test results. The outcome was presence or absence of cervical intraepithelial neoplasia (CIN) of different degrees or invasive cervical cancer. Verification was based on colposcopy and histological interpretation of colposcopy-directed biopsies. Negative colposcopy was accepted as a truly negative outcome. VIA showed a sensitivity of 79% (95% CI 73-85%) and 83% (95% CI 77-89%), and a specificity of 85% (95% CI 81-89%) and 84% (95% CI 80-88%) for the outcomes CIN2+ or CIN3+, respectively. VILI was on average 10% more sensitive and equally specific. VIAM showed similar results as VIA. The Pap smear showed lowest sensitivity, even at the lowest cutoff of atypical squamous cells of undetermined significance (57%; 95% CI 38-76%) for CIN2+ but the specificity was rather high (93%; 95% CI 89-97%). The HC2-assay showed a sensitivity for CIN2+ of 62% (95% CI 56-68%) and a specificity of 94% (95% CI 92-95%). Substantial interstudy variation was observed in the accuracy of the visual screening methods. Accuracy of visual methods and cytology increased over time, whereas performance of HC2 was constant. Results of visual tests and colposcopy were highly correlated. This study was the largest ever done that evaluates the cross
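
    One simple way to pool accuracy across studies is to aggregate the underlying 2x2 counts. This fixed-effect-style sketch uses invented counts and deliberately ignores the between-study heterogeneity the authors report, which proper meta-analytic models would account for.

```python
def pooled_sens_spec(studies):
    """Pool sensitivity and specificity across studies by summing the
    underlying 2x2 counts. studies: list of (tp, fn, tn, fp) tuples,
    one per study, against the verified disease outcome."""
    tp = sum(s[0] for s in studies)
    fn = sum(s[1] for s in studies)
    tn = sum(s[2] for s in studies)
    fp = sum(s[3] for s in studies)
    return tp / (tp + fn), tn / (tn + fp)

# Two hypothetical VIA studies with different case mixes.
studies = [(80, 20, 850, 150), (95, 25, 640, 160)]
sens, spec = pooled_sens_spec(studies)
```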

  10. Design Rationale for a Complex Performance Assessment

    ERIC Educational Resources Information Center

    Williamson, David M.; Bauer, Malcolm; Steinberg, Linda S.; Mislevy, Robert J.; Behrens, John T.; DeMark, Sarah F.

    2004-01-01

    In computer-based interactive environments meant to support learning, students must bring a wide range of relevant knowledge, skills, and abilities to bear jointly as they solve meaningful problems in a learning domain. To function effectively as an assessment, a computer system must additionally be able to evoke and interpret observable evidence…

  11. Study on accuracy and interobserver reliability of the assessment of odontoid fracture union using plain radiographs or CT scans

    PubMed Central

    Kolb, Klaus; Zenner, Juliane; Reynolds, Jeremy; Dvorak, Marcel; Acosta, Frank; Forstner, Rosemarie; Mayer, Michael; Tauber, Mark; Auffarth, Alexander; Kathrein, Anton; Hitzl, Wolfgang

    2009-01-01

    In odontoid fracture research, outcome can be evaluated based on validated questionnaires, based on functional outcome in terms of atlantoaxial and total neck rotation, and based on the treatment-related union rate. Data on clinical and functional outcome are still sparse. In contrast, there is abundant information on union rates, although the reported rates frequently differ widely. Odontoid union is the most frequently assessed outcome parameter, and it is therefore imperative to investigate the interobserver reliability of fusion assessment using radiographs compared to CT scans. Our objective was to identify the diagnostic accuracy of plain radiographs in detecting union and non-union after odontoid fractures, with CT scans as the standard of reference. Complete sets of biplanar plain radiographs and CT scans of 21 patients treated for odontoid fractures were subjected to interobserver assessment of fusion. Image sets were presented to 18 international observers with a mean experience in fusion assessment of 10.7 years. Patients selected had complete radiographic follow-up at a mean of 63.3 ± 53 months. Mean age of the patients at follow-up was 68.2 years. We calculated interobserver agreement of the diagnostic assessment using radiographs compared to using CT scans, as well as the sensitivity and specificity of the radiographic assessment. Agreement on fusion status between radiographs and CT scans ranged between 62 and 90% depending on the observer. For the assessment of non-union and fusion, the mean specificity was 62% and the mean sensitivity was 77%. Statistical analysis revealed an agreement of 80-100% between the biplanar radiographs and the reconstructed CT scans in only 48% of cases; in 50% of the patients assessed, agreement was less than 80%. The mean sensitivity and specificity values indicate that radiographs are not a reliable measure of odontoid fracture union or non-union. Regarding experience in years

  12. Short Term Survival after Admission for Heart Failure in Sweden: Applying Multilevel Analyses of Discriminatory Accuracy to Evaluate Institutional Performance

    PubMed Central

    Ghith, Nermin; Wagner, Philippe; Frølich, Anne; Merlo, Juan

    2016-01-01

    Background Hospital performance is frequently evaluated by analyzing differences between hospital averages in some quality indicators. The results are often expressed as quality charts of hospital variance (e.g., league tables, funnel plots). However, those analyses seldom consider patient heterogeneity around the averages, which is of fundamental relevance for a correct evaluation. Therefore, we apply an innovative methodology based on measures of components of variance and discriminatory accuracy to analyze 30-day mortality after hospital discharge with a diagnosis of Heart Failure (HF) in Sweden. Methods We analyzed 36,943 patients aged 45-80 treated in 565 wards at 71 hospitals during 2007-2009. We applied single-level and multilevel logistic regression analyses to calculate the odds ratios and the area under the receiver-operating characteristic curve (AUC). We evaluated general hospital and ward effects by quantifying the intra-class correlation coefficient (ICC) and the increment in the AUC obtained by adding random effects in a multilevel regression analysis (MLRA). Finally, the odds ratios (ORs) for specific ward and hospital characteristics were interpreted jointly with the proportional change in variance (PCV) and the proportion of ORs in the opposite direction (POOR). Findings Overall, the average 30-day mortality was 9%. Using only patient information on age and previous hospitalizations for different diseases, we obtained an AUC = 0.727. This value was almost unchanged when adding sex and country of birth, as well as the hospital and ward levels. Average mortality was higher in small wards and municipal hospitals, but the POOR values were 15% and 16%, respectively. Conclusions Swedish wards and hospitals in general performed homogeneously well, resulting in a low 30-day mortality rate after HF. In our study, knowledge of a patient's previous hospitalizations was the best predictor of 30-day mortality, and this information did not improve by knowing the sex and country
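
    The ICC for a multilevel logistic model is commonly computed on the latent-response scale, where the level-1 residual variance is fixed at π²/3 (the variance of the standard logistic distribution). A minimal sketch with illustrative variance values, not the study's estimates:

```python
import math

def latent_scale_icc(var_hospital, var_ward=0.0):
    """Intra-class correlation for a random-intercept logistic model
    on the latent-response scale: share of total variance lying at
    the cluster level(s), with level-1 variance fixed at pi**2 / 3."""
    residual = math.pi ** 2 / 3
    cluster = var_hospital + var_ward
    return cluster / (cluster + residual)

# A small hospital-level variance of 0.10 yields a low ICC, in line
# with homogeneous institutional performance.
icc = latent_scale_icc(0.10)
```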

  13. Assessing children's competency to take the oath in court: The influence of question type on children's accuracy.

    PubMed

    Evans, Angela D; Lyon, Thomas D

    2012-06-01

    This study examined children's accuracy in response to truth-lie competency questions asked in court. The participants included 164 child witnesses in criminal child sexual abuse cases tried in Los Angeles County over a 5-year period (1997-2001) and 154 child witnesses quoted in U.S. state and federal appellate cases over a 35-year period (1974-2008). The results revealed that judges virtually never found children incompetent to testify, but children exhibited substantial variability in their performance depending on question type. Definition questions, about the meaning of truth and lies, were the most difficult, largely due to errors in response to "Do you know" questions. Questions about the consequences of lying were more difficult than questions evaluating the morality of lying. Children exhibited high rates of error in response to questions about whether they had ever told a lie. Attorneys rarely asked children hypothetical questions in a form that has been found to facilitate performance. Defense attorneys asked a higher proportion of the more difficult question types than prosecutors did. The findings suggest that courtroom questioning underestimates children's truth-lie competency, and they support growing doubts about the utility of the competency requirements.

  14. Binding Free Energy Calculations for Lead Optimization: Assessment of Their Accuracy in an Industrial Drug Design Context.

    PubMed

    Homeyer, Nadine; Stoll, Friederike; Hillisch, Alexander; Gohlke, Holger

    2014-08-12

    Correctly ranking compounds according to their computed relative binding affinities will be of great value for decision making in the lead optimization phase of industrial drug discovery. However, the performance of existing, computationally demanding binding free energy calculation methods in this context is largely unknown. We analyzed the performance of the molecular mechanics continuum solvent, linear interaction energy (LIE), and thermodynamic integration (TI) approaches for three sets of compounds from industrial lead optimization projects. The data sets pose challenges typical of this early stage of drug discovery. None of the methods was sufficiently predictive when applied out of the box, without considering these challenges. Detailed investigations of failures revealed critical points that are essential for good binding free energy predictions. When data set-specific features were considered accordingly, predictions valuable for lead optimization could be obtained for all approaches but LIE. Our findings lead to clear recommendations for when to use which of the above approaches. They also stress the important role of expert knowledge in this process, not least for estimating the accuracy of TI predictions using indicators such as the size and chemical structure of exchanged groups and the statistical error in the predictions. Such knowledge will be invaluable when it comes to the question of which TI results can be trusted for decision making.

  15. Technical Basis for Assessing Uranium Bioremediation Performance

    SciTech Connect

    PE Long; SB Yabusaki; PD Meyer; CJ Murray; AL N’Guessan

    2008-04-01

    In situ bioremediation of uranium holds significant promise for effective stabilization of U(VI) from groundwater at reduced cost compared to conventional pump-and-treat. This promise is unlikely to be realized unless researchers and practitioners successfully predict and demonstrate the long-term effectiveness of uranium bioremediation protocols. Field research to date has focused on both proof of principle and a mechanistic level of understanding. Current practice typically involves an engineering approach using proprietary amendments that focuses mainly on monitoring U(VI) concentration for a limited time period. Given the complexity of uranium biogeochemistry and uranium secondary minerals, and the lack of documented case studies, a systematic monitoring approach using multiple performance indicators is needed. This document provides an overview of uranium bioremediation, summarizes design considerations, and identifies and prioritizes field performance indicators for the application of uranium bioremediation. The performance indicators provided in this document are based on the current biogeochemical understanding of uranium and will enable practitioners to monitor the performance of their system and make a strong case to clients, regulators, and the public that the future performance of the system can be assured and that changes in performance will be addressed as needed. The performance indicators established by this document, and the information gained by using them, do add to the cost of uranium bioremediation. However, they are vital to the long-term success of uranium bioremediation applications and provide significant assurance that regulatory goals will be met. The document also emphasizes the need for systematic development of key information from bench-scale and pilot-scale tests prior to full-scale implementation.

  16. OMPS Limb Profiler Instrument Performance Assessment

    NASA Technical Reports Server (NTRS)

    Jaross, Glen R.; Bhartia, Pawan K.; Chen, Grace; Kowitt, Mark; Haken, Michael; Chen, Zhong; Xu, Philippe; Warner, Jeremy; Kelly, Thomas

    2014-01-01

    Following the successful launch of the Ozone Mapping and Profiler Suite (OMPS) aboard the Suomi National Polar-orbiting Partnership (SNPP) spacecraft, the NASA OMPS Limb team began an evaluation of instrument and data product performance. The focus of this paper is the instrument performance in relation to the original design criteria. Performance that is closer to expectations increases the likelihood that limb scatter measurements by SNPP OMPS and successor instruments can form the basis for accurate long-term monitoring of ozone vertical profiles. The team finds that the Limb instrument operates mostly as designed and basic performance meets or exceeds the original design criteria. Internally scattered stray light and sensor pointing knowledge are two design challenges with the potential to seriously degrade performance. A thorough prelaunch characterization of stray li