Science.gov

Sample records for accuracy assessment procedures

  1. Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.

    2012-01-01

    Description of NASA Airborne Topographic Mapper (ATM) lidar calibration procedures, including analysis of the accuracy and consistency of various ATM instrument parameters and the resulting influence on topographic elevation measurements. The ATM elevation measurements from a nominal operating altitude of 500 to 750 m above the ice surface were found to be: Horizontal Accuracy 74 cm, Horizontal Precision 14 cm, Vertical Accuracy 6.6 cm, Vertical Precision 3 cm.

  2. Procedural Documentation and Accuracy Assessment of Bathymetric Maps and Area/Capacity Tables for Small Reservoirs

    USGS Publications Warehouse

    Wilson, Gary L.; Richards, Joseph M.

    2006-01-01

    Because of the increasing use and importance of lakes for water supply to communities, a repeatable and reliable procedure to determine lake bathymetry and capacity is needed. A method to determine the accuracy of the procedure will help ensure proper collection and use of the data and resulting products. It is important to clearly define the intended products and desired accuracy before conducting the bathymetric survey to ensure proper data collection. A survey-grade echo sounder and differential global positioning system receivers were used to collect water-depth and position data in December 2003 at Sugar Creek Lake near Moberly, Missouri. Data were collected along planned transects, with an additional set of quality-assurance data collected for use in accuracy computations. All collected data were imported into a geographic information system database. A bathymetric surface model, contour map, and area/capacity tables were created from the geographic information system database. An accuracy assessment was completed on the collected data, bathymetric surface model, area/capacity table, and contour map products. Using established vertical accuracy standards, the accuracies of the collected data, bathymetric surface model, and contour map product were 0.67 foot, 0.91 foot, and 1.51 feet, respectively, at the 95 percent confidence level. By comparing results from different transect intervals with the quality-assurance transect data, it was determined that a transect interval of 1 percent of the longitudinal length of Sugar Creek Lake produced results nearly as good as a 0.5 percent transect interval for the bathymetric surface model, area/capacity table, and contour map products.
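
    Vertical accuracies quoted at the 95 percent confidence level typically follow the usual practice (e.g., the National Standard for Spatial Data Accuracy) of scaling the root mean square error of the check-point differences by 1.96. A minimal sketch of that computation, assuming a small set of hypothetical elevation differences rather than the Sugar Creek Lake data:

        import math

        # Hypothetical differences (feet) between quality-assurance depths and the
        # bathymetric surface model at the same horizontal positions.
        dz = [0.2, -0.5, 0.4, -0.1, 0.6, -0.3, 0.2, 0.5, -0.4, 0.1]

        rmse_z = math.sqrt(sum(d * d for d in dz) / len(dz))
        accuracy_95 = 1.96 * rmse_z  # vertical accuracy at the 95 percent confidence level

        print(f"RMSE = {rmse_z:.2f} ft, vertical accuracy (95%) = {accuracy_95:.2f} ft")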

  3. Accuracy of actuarial procedures for assessment of sexual offender recidivism risk may vary across ethnicity.

    PubMed

    Långström, Niklas

    2004-04-01

    Little is known about whether the accuracy of tools for assessment of sexual offender recidivism risk holds across ethnic minority offenders. I investigated the predictive validity across ethnicity for the RRASOR and the Static-99 actuarial risk assessment procedures in a national cohort of all adult male sex offenders released from prison in Sweden during 1993-1997. Subjects ordered out of Sweden upon release from prison were excluded, and the remaining subjects (N = 1303) were divided into three subgroups based on citizenship. Eighty-three percent of the subjects were of Nordic ethnicity, and non-Nordic citizens were either of non-Nordic European (n = 49, hereafter called European) or African/Asian descent (n = 128). The two tools were equally accurate among Nordic and European sexual offenders for the prediction of any sexual and any violent nonsexual recidivism. In contrast, neither measure could differentiate African/Asian sexual or violent recidivists from nonrecidivists. Compared to European offenders, African/Asian offenders had more often sexually victimized a nonrelative or stranger, had higher Static-99 scores, were younger, more often single, and more often homeless. The results require replication, but suggest that the promising predictive validity seen with some risk assessment tools may not generalize across offender ethnicity or migration status. More speculatively, different risk factors or causal chains might be involved in the development or persistence of offending among minority or immigrant sexual abusers.

  4. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

    The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...

  5. Ground Truth Sampling and LANDSAT Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Robinson, J. W.; Gunther, F. J.; Campbell, W. J.

    1982-01-01

    It is noted that the key factor in any accuracy assessment of remote sensing data is the method used for determining the ground truth, independent of the remote sensing data itself. The sampling and accuracy procedures developed for a nuclear power plant siting study are described. The purpose of the sampling procedure was to provide data for developing supervised classifications for two study sites and for assessing the accuracy of that and the other procedures used. The purpose of the accuracy assessment was to allow the comparison of the cost and accuracy of various classification procedures as applied to various data types.

  6. Numerical accuracy assessment

    NASA Astrophysics Data System (ADS)

    Boerstoel, J. W.

    1988-12-01

    A framework is provided for numerical accuracy assessment. The purpose of numerical flow simulations is formulated. This formulation concerns the classes of aeronautical configurations (boundaries), the desired flow physics (flow equations and their properties), the classes of flow conditions on flow boundaries (boundary conditions), and the initial flow conditions. Next, accuracy and economical performance requirements are defined; the final numerical flow simulation results of interest should have a guaranteed accuracy, and be produced for an acceptable FLOP-price. Within this context, the validation of numerical processes with respect to the well known topics of consistency, stability, and convergence when the mesh is refined must be done by numerical experimentation because theory gives only partial answers. This requires careful design of test cases for numerical experimentation. Finally, the results of a few recent evaluation exercises of numerical experiments with a large number of codes on a few test cases are summarized.

  7. Accuracy of remotely sensed data: Sampling and analysis procedures

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Oderwald, R. G.; Mead, R. A.

    1982-01-01

    A review and update of the discrete multivariate analysis techniques used for accuracy assessment is given. A listing of the computer program written to implement these techniques is given. New work on evaluating accuracy assessment using Monte Carlo simulation with different sampling schemes is given. The results of matrices from the mapping effort of the San Juan National Forest are given. A method for estimating the sample size requirements for implementing the accuracy assessment procedures is given. A proposed method for determining the reliability of change detection between two maps of the same area produced at different times is given.
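
    The discrete multivariate techniques referred to here operate on the error (confusion) matrix, from which overall accuracy and the KHAT (kappa) statistic are derived. A minimal sketch of those two summary measures, using an invented three-class matrix rather than the San Juan National Forest results:

        import numpy as np

        # Invented error matrix: rows are map classes, columns are reference classes.
        m = np.array([[45,  4,  1],
                      [ 6, 30,  4],
                      [ 2,  5, 28]], dtype=float)

        n = m.sum()
        overall = np.trace(m) / n                             # overall accuracy
        pe = (m.sum(axis=1) * m.sum(axis=0)).sum() / n ** 2   # chance agreement
        kappa = (overall - pe) / (1.0 - pe)                   # KHAT statistic

        print(f"overall accuracy = {overall:.3f}, kappa = {kappa:.3f}")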

  8. When Does Choice of Accuracy Measure Alter Imputation Accuracy Assessments?

    PubMed Central

    Ramnarine, Shelina; Zhang, Juan; Chen, Li-Shiun; Culverhouse, Robert; Duan, Weimin; Hancock, Dana B.; Hartz, Sarah M.; Johnson, Eric O.; Olfson, Emily; Schwantes-An, Tae-Hwi; Saccone, Nancy L.

    2015-01-01

    Imputation, the process of inferring genotypes for untyped variants, is used to identify and refine genetic association findings. Inaccuracies in imputed data can distort the observed association between variants and a disease. Many statistics are used to assess accuracy; some compare imputed to genotyped data and others are calculated without reference to true genotypes. Prior work has shown that the Imputation Quality Score (IQS), which is based on Cohen’s kappa statistic and compares imputed genotype probabilities to true genotypes, appropriately adjusts for chance agreement; however, it is not commonly used. To identify differences in accuracy assessment, we compared IQS with concordance rate, squared correlation, and accuracy measures built into imputation programs. Genotypes from the 1000 Genomes reference populations (AFR N = 246 and EUR N = 379) were masked to match the typed single nucleotide polymorphism (SNP) coverage of several SNP arrays and were imputed with BEAGLE 3.3.2 and IMPUTE2 in regions associated with smoking behaviors. Additional masking and imputation was conducted for sequenced subjects from the Collaborative Genetic Study of Nicotine Dependence and the Genetic Study of Nicotine Dependence in African Americans (N = 1,481 African Americans and N = 1,480 European Americans). Our results offer further evidence that concordance rate inflates accuracy estimates, particularly for rare and low frequency variants. For common variants, squared correlation, BEAGLE R2, IMPUTE2 INFO, and IQS produce similar assessments of imputation accuracy. However, for rare and low frequency variants, compared to IQS, the other statistics tend to be more liberal in their assessment of accuracy. IQS is important to consider when evaluating imputation accuracy, particularly for rare and low frequency variants. PMID:26458263
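
    The contrast drawn above between concordance rate and IQS comes down to correcting for chance agreement, which matters most when one genotype class dominates, as it does for rare variants. A minimal sketch of that effect on invented best-guess genotypes (the published IQS uses imputed genotype probabilities; this simplified kappa on hard calls only illustrates the chance correction):

        import numpy as np

        # Invented true and imputed genotypes coded 0/1/2 (copies of the minor allele)
        # for a rare variant: almost everyone is homozygous for the major allele.
        true    = np.array([0] * 95 + [1] * 4 + [2])
        imputed = np.array([0] * 98 + [1] * 2)

        classes = [0, 1, 2]
        table = np.array([[np.sum((true == a) & (imputed == b)) for b in classes]
                          for a in classes], dtype=float)
        n = table.sum()

        concordance = np.trace(table) / n                            # raw agreement
        pe = (table.sum(axis=1) * table.sum(axis=0)).sum() / n ** 2  # chance agreement
        kappa = (concordance - pe) / (1.0 - pe)                      # chance-corrected

        print(f"concordance = {concordance:.3f}, kappa-style agreement = {kappa:.3f}")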

  9. Pollutant Assessments Group Procedures Manual

    SciTech Connect

    Chavarria, D.E.; Davidson, J.R.; Espegren, M.L.; Kearl, P.M.; Knott, R.R.; Pierce, G.A.; Retolaza, C.D.; Smuin, D.R.; Wilson, M.J.; Witt, D.A. ); Conklin, N.G.; Egidi, P.V.; Ertel, D.B.; Foster, D.S.; Krall, B.J.; Meredith, R.L.; Rice, J.A.; Roemer, E.K. )

    1991-02-01

    This procedures manual combines the existing procedures for radiological and chemical assessment of hazardous wastes used by the Pollutant Assessments Group at the time of manuscript completion (October 1, 1990). These procedures will be revised in an ongoing process to incorporate new developments in hazardous waste assessment technology and changes in administrative policy and support procedures. Format inconsistencies will be corrected in subsequent revisions of individual procedures.

  10. Skinfold Assessment: Accuracy and Application

    ERIC Educational Resources Information Center

    Ball, Stephen; Swan, Pamela D.; Altena, Thomas S.

    2006-01-01

    Although not perfect, skinfolds (SK), or the measurement of fat under the skin, remains the most popular and practical method available to assess body composition on a large scale (Kuczmarski, Flegal, Campbell, & Johnson, 1994). Even for practitioners who have been using SK for years and are highly proficient at locating the correct anatomical…

  11. Estimating Classification Consistency and Accuracy for Cognitive Diagnostic Assessment

    ERIC Educational Resources Information Center

    Cui, Ying; Gierl, Mark J.; Chang, Hua-Hua

    2012-01-01

    This article introduces procedures for the computation and asymptotic statistical inference for classification consistency and accuracy indices specifically designed for cognitive diagnostic assessments. The new classification indices can be used as important indicators of the reliability and validity of classification results produced by…

  12. On precision and accuracy (bias) statements for measurement procedures

    SciTech Connect

    Bruckner, L.A.; Hume, M.W.; Delvin, W.L.

    1988-01-01

    Measurement procedures are often required to contain precision and accuracy (bias) statements. This paper contains a glossary that explains various terms that often appear in these statements as well as an example illustrating such statements for a specific set of data. Precision and bias statements are shown to vary according to the conditions under which the data were collected. This paper emphasizes that the error model (an algebraic expression that describes how the various sources of variation affect the measurement) is an important consideration in the formation of precision and bias statements.

  13. Arizona Vegetation Resource Inventory (AVRI) accuracy assessment

    USGS Publications Warehouse

    Szajgin, John; Pettinger, L.R.; Linden, D.S.; Ohlen, D.O.

    1982-01-01

    A quantitative accuracy assessment was performed for the vegetation classification map produced as part of the Arizona Vegetation Resource Inventory (AVRI) project. This project was a cooperative effort between the Bureau of Land Management (BLM) and the Earth Resources Observation Systems (EROS) Data Center. The objective of the accuracy assessment was to estimate (with a precision of ±10 percent at the 90 percent confidence level) the commission error in each of the eight level II hierarchical vegetation cover types. A stratified two-phase (double) cluster sample was used. Phase I consisted of 160 photointerpreted plots representing clusters of Landsat pixels, and phase II consisted of ground data collection at 80 of the phase I cluster sites. Ground data were used to refine the phase I error estimates by means of a linear regression model. The classified image was stratified by assigning each 15-pixel cluster to the stratum corresponding to the dominant cover type within each cluster. This method is known as stratified plurality sampling. Overall error was estimated to be 36 percent with a standard error of 2 percent. Estimated error for individual vegetation classes ranged from a low of 10 percent ±6 percent for evergreen woodland to 81 percent ±7 percent for cropland and pasture. Total cost of the accuracy assessment was $106,950 for the one-million-hectare study area. The combination of the stratified plurality sampling (SPS) method of sample allocation with double sampling provided the desired estimates within the required precision levels. The overall accuracy results confirmed that highly accurate digital classification of vegetation is difficult to perform in semiarid environments, due largely to the sparse vegetation cover. Nevertheless, these techniques show promise for providing more accurate information than is presently available for many BLM-administered lands.

  14. Environmental Impact Assessment: A Procedure.

    ERIC Educational Resources Information Center

    Stover, Lloyd V.

    Prepared by a firm of consulting engineers, this booklet outlines the procedural "whys and hows" of assessing environmental impact, particularly for the construction industry. Section I explores the need for environmental assessment and evaluation to determine environmental impact. It utilizes a review of the National Environmental Policy Act and…

  15. Positional Accuracy Assessment of Googleearth in Riyadh

    NASA Astrophysics Data System (ADS)

    Farah, Ashraf; Algarni, Dafer

    2014-06-01

    Google Earth is a virtual globe, map and geographical information program operated by Google. It maps the Earth by superimposing images obtained from satellite imagery and aerial photography onto a GIS 3D globe. With millions of users all around the globe, GoogleEarth® has become the ultimate source of spatial data and information for private and public decision-support systems, besides many types and forms of social interaction. Many users, mostly in developing countries, are also using it for surveying applications, which raises questions about the positional accuracy of the Google Earth program. This research presents a small-scale assessment study of the positional accuracy of GoogleEarth® imagery in Riyadh, capital of the Kingdom of Saudi Arabia (KSA). The results show that the RMSE of the GoogleEarth imagery is 2.18 m and 1.51 m for the horizontal and height coordinates, respectively.
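
    Positional RMSE figures of this kind come from comparing Google Earth coordinates of checkpoints against surveyed reference coordinates. A minimal sketch of the computation, assuming a few invented checkpoint pairs in projected metric coordinates (not the Riyadh data):

        import math

        # (easting, northing, height) in metres: Google Earth value vs. surveyed reference.
        checkpoints = [
            ((513201.4, 2731600.2, 612.1), (513203.0, 2731601.5, 610.9)),
            ((514870.8, 2732955.7, 618.4), (514869.1, 2732953.9, 619.8)),
            ((516442.3, 2730110.9, 605.0), (516444.6, 2730112.2, 606.3)),
        ]

        dh2 = [(ge[0] - ref[0]) ** 2 + (ge[1] - ref[1]) ** 2 for ge, ref in checkpoints]
        dv2 = [(ge[2] - ref[2]) ** 2                         for ge, ref in checkpoints]

        rmse_h = math.sqrt(sum(dh2) / len(checkpoints))   # horizontal RMSE
        rmse_v = math.sqrt(sum(dv2) / len(checkpoints))   # vertical RMSE
        print(f"horizontal RMSE = {rmse_h:.2f} m, vertical RMSE = {rmse_v:.2f} m")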

  16. Accuracy Assessment of Altimeter Derived Geostrophic Velocities

    NASA Astrophysics Data System (ADS)

    Leben, R. R.; Powell, B. S.; Born, G. H.; Guinasso, N. L.

    2002-12-01

    Along track sea surface height anomaly gradients are proportional to cross track geostrophic velocity anomalies, allowing satellite altimetry to provide much needed satellite observations of changes in the geostrophic component of surface ocean currents. Often, surface height gradients are computed from altimeter data archives that have been corrected to give the most accurate absolute sea level, a practice that may unnecessarily increase the error in the cross track velocity anomalies and thereby require excessive smoothing to mitigate noise. Because differentiation along track acts as a high-pass filter, many of the path length corrections applied to altimeter data for absolute height accuracy are unnecessary for the corresponding gradient calculations. We report on a study to investigate appropriate altimetric corrections and processing techniques for improving geostrophic velocity accuracy. Accuracy is assessed by comparing cross track current measurements from two moorings placed along the descending TOPEX/POSEIDON ground track number 52 in the Gulf of Mexico to the corresponding altimeter velocity estimates. The buoys are deployed and maintained by the Texas Automated Buoy System (TABS) under Interagency Contracts with Texas A&M University. The buoys telemeter observations in near real-time via satellite to the TABS station located at the Geochemical and Environmental Research Group (GERG) at Texas A&M. Buoy M is located in shelf waters of 57 m depth with a second, Buoy N, 38 km away on the shelf break at 105 m depth. Buoy N has been operational since the beginning of 2002 and has a current meter at 2 m depth providing in situ measurements of surface velocities coincident with Jason and TOPEX/POSEIDON altimeter overflights. This allows one of the first detailed comparisons of shallow water near surface current meter time series to coincident altimetry.
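
    The underlying relation is geostrophic balance: the cross-track surface velocity anomaly is proportional to the along-track gradient of the sea surface height anomaly, v' = (g/f) ∂η'/∂s, with Coriolis parameter f = 2Ω sin(latitude); the sign depends on the track orientation. A minimal sketch on an invented along-track height anomaly profile:

        import numpy as np

        g = 9.81                                   # gravitational acceleration, m/s^2
        omega = 7.2921e-5                          # Earth's rotation rate, rad/s
        lat = 27.0                                 # nominal latitude, degrees (Gulf of Mexico)
        f = 2 * omega * np.sin(np.radians(lat))    # Coriolis parameter, 1/s

        # Invented along-track sea surface height anomalies (m), sampled every 6.2 km.
        s = np.arange(0, 20) * 6200.0                   # along-track distance, m
        eta = 0.10 * np.sin(2 * np.pi * s / 8.0e4)      # height anomaly, m

        # Cross-track geostrophic velocity anomaly from the along-track gradient.
        v_cross = (g / f) * np.gradient(eta, s)         # m/s

        print(np.round(v_cross, 3))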

  17. Accuracy assessment of fluoroscopy-transesophageal echocardiography registration

    NASA Astrophysics Data System (ADS)

    Lang, Pencilla; Seslija, Petar; Bainbridge, Daniel; Guiraudon, Gerard M.; Jones, Doug L.; Chu, Michael W.; Holdsworth, David W.; Peters, Terry M.

    2011-03-01

    This study assesses the accuracy of a new transesophageal (TEE) ultrasound (US) fluoroscopy registration technique designed to guide percutaneous aortic valve replacement. In this minimally invasive procedure, a valve is inserted into the aortic annulus via a catheter. Navigation and positioning of the valve is guided primarily by intra-operative fluoroscopy. Poor anatomical visualization of the aortic root region can result in incorrect positioning, leading to heart valve embolization, obstruction of the coronary ostia and acute kidney injury. The use of TEE US images to augment intra-operative fluoroscopy provides significant improvements to image-guidance. Registration is achieved using an image-based TEE probe tracking technique and US calibration. TEE probe tracking is accomplished using a single-perspective pose estimation algorithm. Pose estimation from a single image allows registration to be achieved using only images collected in standard OR workflow. Accuracy of this registration technique is assessed using three models: a point target phantom, a cadaveric porcine heart with implanted fiducials, and in-vivo porcine images. Results demonstrate that registration can be achieved with an RMS error of less than 1.5 mm, which is within the clinical accuracy requirements of 5 mm. US-fluoroscopy registration based on single-perspective pose estimation demonstrates promise as a method for providing guidance to percutaneous aortic valve replacement procedures. Future work will focus on real-time implementation and a visualization system that can be used in the operating room.

  18. Accuracy of quantitative visual soil assessment

    NASA Astrophysics Data System (ADS)

    van Leeuwen, Maricke; Heuvelink, Gerard; Stoorvogel, Jetse; Wallinga, Jakob; de Boer, Imke; van Dam, Jos; van Essen, Everhard; Moolenaar, Simon; Verhoeven, Frank; Stoof, Cathelijne

    2016-04-01

    Visual soil assessment (VSA) is a method to assess soil quality visually, when standing in the field. VSA is increasingly used by farmers, farm organisations and companies, because it is rapid and cost-effective, and because looking at soil provides understanding about soil functioning. Often VSA is regarded as subjective, so there is a need to verify VSA. Also, many VSAs have not been fine-tuned for contrasting soil types. This could lead to wrong interpretation of soil quality and soil functioning when contrasting sites are compared to each other. We wanted to assess the accuracy of VSA, while taking into account soil type. The first objective was to test whether quantitative visual field observations, which form the basis of many VSAs, could be validated with standardized field or laboratory measurements. The second objective was to assess whether quantitative visual field observations are reproducible, when used by observers with contrasting backgrounds. For the validation study, we made quantitative visual observations at 26 cattle farms. Farms were located on sand, clay and peat soils in the North Friesian Woodlands, the Netherlands. Quantitative visual observations evaluated were grass cover, number of biopores, number of roots, soil colour, soil structure, number of earthworms, number of gley mottles and soil compaction. Linear regression analysis showed that four out of eight quantitative visual observations could be well validated with standardized field or laboratory measurements. The following quantitative visual observations correlated well with standardized field or laboratory measurements: grass cover with classified images of surface cover; number of roots with root dry weight; amount of large structure elements with mean weight diameter; and soil colour with soil organic matter content. Correlation coefficients were greater than 0.3, and half of the correlations were significant. For the reproducibility study, a group of 9 soil scientists and 7

  19. Evaluation of the contribution of LiDAR data and postclassification procedures to object-based classification accuracy

    NASA Astrophysics Data System (ADS)

    Styers, Diane M.; Moskal, L. Monika; Richardson, Jeffrey J.; Halabisky, Meghan A.

    2014-01-01

    Object-based image analysis (OBIA) is becoming an increasingly common method for producing land use/land cover (LULC) classifications in urban areas. In order to produce the most accurate LULC map, LiDAR data and postclassification procedures are often employed, but their relative contributions to accuracy are unclear. We examined the contribution of LiDAR data and postclassification procedures to increase classification accuracies over using imagery alone and assessed sources of error along an ecologically complex urban-to-rural gradient in Olympia, Washington. Overall classification accuracy and user's and producer's accuracies for individual classes were evaluated. The addition of LiDAR data to the OBIA classification resulted in an 8.34% increase in overall accuracy, while manual postclassification to the imagery+LiDAR classification improved accuracy by only an additional 1%. Sources of error in this classification were largely due to edge effects, from which multiple different types of errors result.

  20. Classification accuracy of actuarial risk assessment instruments.

    PubMed

    Neller, Daniel J; Frederick, Richard I

    2013-01-01

    Users of commonly employed actuarial risk assessment instruments (ARAIs) hope to generate numerical probability statements about risk; however, ARAI manuals often do not explicitly report data that are essential for understanding the classification accuracy of the instruments. In addition, ARAI manuals often contain data that have the potential for misinterpretation. The authors of the present article address the accurate generation of probability statements. First, they illustrate how the reporting of numerical probability statements based on proportions rather than predictive values can mislead users of ARAIs. Next, they report essential test characteristics that, to date, have gone largely unreported in ARAI manuals. Then they discuss a graphing method that can enhance the practice of clinicians who communicate risk via numerical probability statements. After the authors review several strategies for selecting optimal cut-off scores, they show how the graphing method can be used to estimate positive predictive values for each cut-off score of commonly used ARAIs, across all possible base rates. They also show how the graphing method can be used to estimate base rates of violent recidivism in local samples.
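
    The graphing method described amounts to an application of Bayes' theorem: for a given cut-off score, the positive predictive value is determined by the instrument's sensitivity and specificity at that cut-off together with the local base rate of violent recidivism. A minimal sketch with assumed operating characteristics (not values taken from any particular ARAI manual):

        import numpy as np

        def ppv(sensitivity: float, specificity: float, base_rate: np.ndarray) -> np.ndarray:
            """Positive predictive value as a function of base rate (Bayes' theorem)."""
            tp = sensitivity * base_rate
            fp = (1.0 - specificity) * (1.0 - base_rate)
            return tp / (tp + fp)

        base_rates = np.linspace(0.05, 0.50, 10)   # plausible local base rates
        # Assumed operating characteristics at one illustrative cut-off score.
        print(np.round(ppv(sensitivity=0.70, specificity=0.75, base_rate=base_rates), 2))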

  1. Accuracy Assessment for AG500, Electromagnetic Articulograph

    ERIC Educational Resources Information Center

    Yunusova, Yana; Green, Jordan R.; Mefferd, Antje

    2009-01-01

    Purpose: The goal of this article was to evaluate the accuracy and reliability of the AG500 (Carstens Medizinelectronik, Lenglern, Germany), an electromagnetic device developed recently to register articulatory movements in three dimensions. This technology seems to have unprecedented capabilities to provide rich information about time-varying…

  2. Assessment Of Accuracies Of Remote-Sensing Maps

    NASA Technical Reports Server (NTRS)

    Card, Don H.; Strong, Laurence L.

    1992-01-01

    Report describes study of accuracies of classifications of picture elements in map derived by digital processing of Landsat-multispectral-scanner imagery of coastal plain of Arctic National Wildlife Refuge. Accuracies of portions of map analyzed with help of statistical sampling procedure called "stratified plurality sampling", in which all picture elements in given cluster classified in stratum to which plurality of them belong.

  3. Accuracy assessment of EPA protocol gases purchased in 1991

    SciTech Connect

    Coppedge, E.A.; Logan, T.J.; Midgett, M.R.; Shores, R.C.; Messner, M.J.

    1992-12-01

    The U.S. Environmental Protection Agency (EPA) has established quality assurance procedures for air pollution measurement systems that are intended to reduce the uncertainty in environmental measurements. The compressed gas standards of the program are used for calibration and audits of continuous emission monitoring systems. EPA's regulations require that the certified values for these standards be traceable to National Institute of Standards and Technology (NIST) Standard Reference Materials or to NIST/EPA-approved Certified Reference Materials via either of two traceability protocols. The manufacturer assessment was conducted to: (1) document the accuracy of the compressed gas standards' certified concentrations; and (2) ensure that the compressed gas standards' written certification reports met the documentation requirements of the protocol. All available sources were contacted and the following gas mixtures were acquired: (1) 300-ppm SO2 and 400-ppm NO in N2; and (2) 1500-ppm SO2 and 900-ppm NO in N2.

  4. Multimodality Image Fusion-Guided Procedures: Technique, Accuracy, and Applications

    SciTech Connect

    Abi-Jaoudeh, Nadine; Kruecker, Jochen; Kadoury, Samuel; Kobeiter, Hicham; Venkatesan, Aradhana M.; Levy, Elliot; Wood, Bradford J.

    2012-10-15

    Personalized therapies play an increasingly critical role in cancer care: Image guidance with multimodality image fusion facilitates the targeting of specific tissue for tissue characterization and plays a role in drug discovery and optimization of tailored therapies. Positron-emission tomography (PET), magnetic resonance imaging (MRI), and contrast-enhanced computed tomography (CT) may offer additional information not otherwise available to the operator during minimally invasive image-guided procedures, such as biopsy and ablation. With use of multimodality image fusion for image-guided interventions, navigation with advanced modalities does not require the physical presence of the PET, MRI, or CT imaging system. Several commercially available methods of image-fusion and device navigation are reviewed along with an explanation of common tracking hardware and software. An overview of current clinical applications for multimodality navigation is provided.

  5. Accuracy assessment of NLCD 2006 land cover and impervious surface

    USGS Publications Warehouse

    Wickham, James D.; Stehman, Stephen V.; Gass, Leila; Dewitz, Jon; Fry, Joyce A.; Wade, Timothy G.

    2013-01-01

    Release of NLCD 2006 provides the first wall-to-wall land-cover change database for the conterminous United States from Landsat Thematic Mapper (TM) data. Accuracy assessment of NLCD 2006 focused on four primary products: 2001 land cover, 2006 land cover, land-cover change between 2001 and 2006, and impervious surface change between 2001 and 2006. The accuracy assessment was conducted by selecting a stratified random sample of pixels with the reference classification interpreted from multi-temporal high resolution digital imagery. The NLCD Level II (16 classes) overall accuracies for the 2001 and 2006 land cover were 79% and 78%, respectively, with Level II user's accuracies exceeding 80% for water, high density urban, all upland forest classes, shrubland, and cropland for both dates. Level I (8 classes) accuracies were 85% for NLCD 2001 and 84% for NLCD 2006. The high overall and user's accuracies for the individual dates translated into high user's accuracies for the 2001–2006 change reporting themes water gain and loss, forest loss, urban gain, and the no-change reporting themes for water, urban, forest, and agriculture. The main factor limiting higher accuracies for the change reporting themes appeared to be difficulty in distinguishing the context of grass. We discuss the need for more research on land-cover change accuracy assessment.

  6. Alaska national hydrography dataset positional accuracy assessment study

    USGS Publications Warehouse

    Arundel, Samantha; Yamamoto, Kristina H.; Constance, Eric; Mantey, Kim; Vinyard-Houx, Jeremy

    2013-01-01

    Initial visual assessments showed a wide range in the quality of fit between features in the NHD and these new image sources. No statistical analysis has been performed to actually quantify accuracy. Determining absolute accuracy is cost prohibitive (it requires collecting independent, well-defined test points), but quantitative analysis of relative positional error is feasible.

  7. A fast RCS accuracy assessment method for passive radar calibrators

    NASA Astrophysics Data System (ADS)

    Zhou, Yongsheng; Li, Chuanrong; Tang, Lingli; Ma, Lingling; Liu, Qi

    2016-10-01

    In microwave radar radiometric calibration, the corner reflector acts as the standard reference target, but its structure is usually deformed during transportation and installation, or deformed by wind and gravity while permanently installed outdoors, which will decrease the RCS accuracy and therefore the radiometric calibration accuracy. A fast RCS accuracy measurement method based on a 3-D measuring instrument and RCS simulation was proposed in this paper for tracking the characteristic variation of the corner reflector. In the first step, an RCS simulation algorithm was selected and its simulation accuracy was assessed. In the second step, the 3-D measuring instrument was selected and its measuring accuracy was evaluated. Once the accuracy of the selected RCS simulation algorithm and 3-D measuring instrument was satisfactory for the RCS accuracy assessment, the 3-D structure of the corner reflector would be obtained by the 3-D measuring instrument, and then the RCSs of the obtained 3-D structure and the corresponding ideal structure would be calculated respectively based on the selected RCS simulation algorithm. The final RCS accuracy was the absolute difference of the two RCS calculation results. The advantage of the proposed method was that it could be applied outdoors easily, avoiding the correlation among the plate edge length error, plate orthogonality error, and plate curvature error. The accuracy of this method is higher than that of the method using a distortion equation. At the end of the paper, a measurement example is presented in order to show the performance of the proposed method.

  8. Thematic Accuracy Assessment of the 2011 National Land ...

    EPA Pesticide Factsheets

    Accuracy assessment is a standard protocol of National Land Cover Database (NLCD) mapping. Here we report agreement statistics between map and reference labels for NLCD 2011, which includes land cover for ca. 2001, ca. 2006, and ca. 2011. The two main objectives were assessment of agreement between map and reference labels for the three, single-date NLCD land cover products at Level II and Level I of the classification hierarchy, and agreement for 17 land cover change reporting themes based on Level I classes (e.g., forest loss; forest gain; forest, no change) for three change periods (2001–2006, 2006–2011, and 2001–2011). The single-date overall accuracies were 82%, 83%, and 83% at Level II and 88%, 89%, and 89% at Level I for 2011, 2006, and 2001, respectively. Many class-specific user's accuracies met or exceeded a previously established nominal accuracy benchmark of 85%. Overall accuracies for 2006 and 2001 land cover components of NLCD 2011 were approximately 4% higher (at Level II and Level I) than the overall accuracies for the same components of NLCD 2006. The high Level I overall, user's, and producer's accuracies for the single-date eras in NLCD 2011 did not translate into high class-specific user's and producer's accuracies for many of the 17 change reporting themes. User's accuracies were high for the no change reporting themes, commonly exceeding 85%, but were typically much lower for the reporting themes that represented change. Only forest l

  9. Assessment of the Thematic Accuracy of Land Cover Maps

    NASA Astrophysics Data System (ADS)

    Höhle, J.

    2015-08-01

    Several land cover maps are generated from aerial imagery and assessed by different approaches. The test site is an urban area in Europe for which six classes (`building', `hedge and bush', `grass', `road and parking lot', `tree', `wall and car port') had to be derived. Two classification methods were applied (`Decision Tree' and `Support Vector Machine') using only two attributes (height above ground and normalized difference vegetation index) which both are derived from the images. The assessment of the thematic accuracy applied a stratified design and was based on accuracy measures such as user's and producer's accuracy, and kappa coefficient. In addition, confidence intervals were computed for several accuracy measures. The achieved accuracies and confidence intervals are thoroughly analysed and recommendations are derived from the gained experiences. Reliable reference values are obtained using stereovision, false-colour image pairs, and positioning to the checkpoints with 3D coordinates. The influence of the training areas on the results is studied. Cross validation has been tested with a few reference points in order to derive approximate accuracy measures. The two classification methods perform equally for five classes. Trees are classified with a much better accuracy and a smaller confidence interval by means of the decision tree method. Buildings are classified by both methods with an accuracy of 99% (95% CI: 95%-100%) using independent 3D checkpoints. The average width of the confidence interval of six classes was 14% of the user's accuracy.
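
    Within a single stratum and reference sample, a user's accuracy and its confidence interval can be approximated with the usual normal interval for a proportion; the paper's stratified design requires the corresponding stratified estimator, so the sketch below (with invented counts) only shows the basic building block:

        import math

        def proportion_ci(correct: int, total: int, z: float = 1.96) -> tuple:
            """Normal-approximation 95% CI for an accuracy expressed as a proportion."""
            p = correct / total
            half = z * math.sqrt(p * (1.0 - p) / total)
            return p, max(0.0, p - half), min(1.0, p + half)

        # e.g. user's accuracy of a 'building' class: 66 of 70 checked map labels correct
        p, lo, hi = proportion_ci(66, 70)
        print(f"user's accuracy = {p:.2%}, 95% CI = [{lo:.2%}, {hi:.2%}]")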

  10. Evaluating the Accuracy of Pharmacy Students' Self-Assessment Skills

    PubMed Central

    Gregory, Paul A. M.

    2007-01-01

    Objectives To evaluate the accuracy of self-assessment skills of senior-level bachelor of science pharmacy students. Methods A method proposed by Kruger and Dunning involving comparisons of pharmacy students' self-assessment with weighted average assessments of peers, standardized patients, and pharmacist-instructors was used. Results Eighty students participated in the study. Differences between self-assessment and external assessments were found across all performance quartiles. These differences were particularly large and significant in the third and fourth (lowest) quartiles and particularly marked in the areas of empathy, and logic/focus/coherence of interviewing. Conclusions The quality and accuracy of pharmacy students' self-assessment skills were not as strong as expected, particularly given recent efforts to include self-assessment in the curriculum. Further work is necessary to ensure this important practice competency and life skill is at the level expected for professional practice and continuous professional development. PMID:17998986

  11. Procedural Handbook: 1978-79 Writing Assessment.

    ERIC Educational Resources Information Center

    Education Commission of the States, Denver, CO. National Assessment of Educational Progress.

    The third (1978-79) of three writing assessments conducted by the National Assessment of Educational Progress (NAEP) is reported. The writing achievement of American 9-, 13-, and 17-year-olds was surveyed using a deeply stratified, multistage probability sample design. The specific procedures used in the assessment to develop objectives and…

  12. On the accuracy assessment of Laplacian models in MPS

    NASA Astrophysics Data System (ADS)

    Ng, K. C.; Hwang, Y. H.; Sheu, T. W. H.

    2014-10-01

    Starting from the Gauss divergence theorem applied on a circular control volume, as put forward by Isshiki (2011) in deriving the MPS-based differential operators, a more general Laplacian model is deduced in the current work, which involves the proposal of an altered kernel function. The Laplacians of several functions are evaluated and the accuracies of various MPS Laplacian models in solving the Poisson equation subjected to both Dirichlet and Neumann boundary conditions are assessed. For regular grids, the Laplacian model with smaller N is generally more accurate, owing to the reduction of leading errors due to those higher-order derivatives appearing in the modified equation. For irregular grids, an optimal N value does exist for ensuring better global accuracy, and this optimal value of N increases as the grid irregularity increases. Finally, the accuracies of these MPS Laplacian models are assessed in an incompressible flow problem.
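
    For context, the classical MPS Laplacian model approximates the Laplacian at particle i as a weighted sum of differences to its neighbours, ⟨∇²φ⟩_i = (2d / (λ n0)) Σ_j (φ_j − φ_i) w(|r_j − r_i|); the altered kernel and generalised model proposed in the paper differ from this and are not reproduced here. A minimal 2-D sketch of the classical model with the standard MPS weight function:

        import numpy as np

        def mps_weight(r, re):
            """Standard MPS kernel w(r) = re/r - 1 inside the support radius re (0 outside)."""
            w = np.zeros_like(r)
            inside = (r > 0) & (r < re)
            w[inside] = re / r[inside] - 1.0
            return w

        # Regular 2-D particle arrangement and a test field phi = x^2 + y^2 (exact Laplacian = 4).
        h = 0.05
        xv = np.arange(-0.5, 0.5 + h / 2, h)
        x, y = np.meshgrid(xv, xv)
        pts = np.column_stack([x.ravel(), y.ravel()])
        phi = pts[:, 0] ** 2 + pts[:, 1] ** 2

        re = 3.1 * h                                   # support radius (a common choice)
        i = np.argmin(np.sum(pts ** 2, axis=1))        # particle nearest the origin
        r = np.linalg.norm(pts - pts[i], axis=1)
        w = mps_weight(r, re)

        n0 = w.sum()                                   # particle number density
        lam = (w * r ** 2).sum() / n0                  # lambda parameter
        lap = (2 * 2 / (lam * n0)) * np.sum(w * (phi - phi[i]))   # d = 2 dimensions

        print(f"MPS Laplacian at the origin = {lap:.3f} (exact value 4)")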

  13. Language Proficiency Assessment Committees, Procedures Manual.

    ERIC Educational Resources Information Center

    Intercultural Development Research Association, San Antonio, TX.

    The Language Proficiency Assessment Committees (LPACs) authorized by Congress were given the responsibility of assessing limited-English-speaking students within a school district and making placement recommendations regarding these students to the local school board. This manual provides guidelines and procedures for the establishment of tasks,…

  14. ASSESSING ACCURACY OF NET CHANGE DERIVED FROM LAND COVER MAPS

    EPA Science Inventory

    Net change derived from land-cover maps provides important descriptive information for environmental monitoring and is often used as an input or explanatory variable in environmental models. The sampling design and analysis for assessing net change accuracy differ from traditio...

  15. Commissioning Procedures for Mechanical Precision and Accuracy in a Dedicated LINAC

    NASA Astrophysics Data System (ADS)

    Ballesteros-Zebadúa, P.; Lárrga-Gutierrez, J. M.; García-Garduño, O. A.; Juárez, J.; Prieto, I.; Moreno-Jiménez, S.; Celis, M. A.

    2008-08-01

    Mechanical precision measurements are fundamental procedures for the commissioning of a dedicated LINAC. At our Radioneurosurgery Unit, these procedures can serve as quality assurance routines that allow the verification of the equipment's geometrical accuracy and precision. In this work mechanical tests were performed for gantry and table rotation, obtaining mean associated uncertainties of 0.3 mm and 0.71 mm, respectively. Using an anthropomorphic phantom and a series of localized surface markers, isocenter accuracy was shown to be smaller than 0.86 mm for radiosurgery procedures and 0.95 mm for fractionated treatments with mask. All uncertainties were below tolerances. The highest contribution to mechanical variations is due to table rotation, so it is important to correct variations using a localization frame with printed overlays. Knowledge of the mechanical precision makes it possible to account for these statistical errors in the treatment planning volume margins.

  16. Commissioning Procedures for Mechanical Precision and Accuracy in a Dedicated LINAC

    SciTech Connect

    Ballesteros-Zebadua, P.; Larrga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Juarez, J.; Prieto, I.; Moreno-Jimenez, S.; Celis, M. A.

    2008-08-11

    Mechanical precision measurements are fundamental procedures for the commissioning of a dedicated LINAC. At our Radioneurosurgery Unit, these procedures can serve as quality assurance routines that allow the verification of the equipment's geometrical accuracy and precision. In this work mechanical tests were performed for gantry and table rotation, obtaining mean associated uncertainties of 0.3 mm and 0.71 mm, respectively. Using an anthropomorphic phantom and a series of localized surface markers, isocenter accuracy was shown to be smaller than 0.86 mm for radiosurgery procedures and 0.95 mm for fractionated treatments with mask. All uncertainties were below tolerances. The highest contribution to mechanical variations is due to table rotation, so it is important to correct variations using a localization frame with printed overlays. Knowledge of the mechanical precision makes it possible to account for these statistical errors in the treatment planning volume margins.

  17. [Navigation in implantology: Accuracy assessment regarding the literature].

    PubMed

    Barrak, Ibrahim Ádám; Varga, Endre; Piffko, József

    2016-06-01

    Our objective was to assess the literature regarding the accuracy of the different static guided systems. After applying an electronic literature search we found 661 articles. After reviewing 139 articles, the authors chose 52 articles for full-text evaluation. 24 studies involved accuracy measurements. Fourteen of our selected references were clinical and ten of them were in vitro (model or cadaver). Variance analysis (Tukey's post-hoc test; p < 0.05) was conducted to summarize the selected publications. Across 2819 results, the average mean error at the entry point was 0.98 mm. At the level of the apex the average deviation was 1.29 mm, while the mean of the angular deviation was 3.96 degrees. Significant difference could be observed between the two methods of implant placement (partially and fully guided sequence) in terms of deviation at the entry point, apex and angular deviation. Different levels of quality and quantity of evidence were available for assessing the accuracy of the different computer-assisted implant placement. The rapidly evolving field of digital dentistry and the new developments will further improve the accuracy of guided implant placement. In the interest of being able to draw dependable conclusions and for the further evaluation of the parameters used for accuracy measurements, randomized, controlled single or multi-centered clinical trials are necessary.

  18. A Preanalytic Validation Study of Automated Bone Scan Index: Effect on Accuracy and Reproducibility Due to the Procedural Variabilities in Bone Scan Image Acquisition.

    PubMed

    Anand, Aseem; Morris, Michael J; Kaboteh, Reza; Reza, Mariana; Trägårdh, Elin; Matsunaga, Naofumi; Edenbrandt, Lars; Bjartell, Anders; Larson, Steven M; Minarik, David

    2016-12-01

    The effect of the procedural variability in image acquisition on the quantitative assessment of bone scan is unknown. Here, we have developed and performed preanalytical studies to assess the impact of the variability in scanning speed and in vendor-specific γ-camera on reproducibility and accuracy of the automated bone scan index (BSI).

  19. Clinical assessment of intraarterial blood gas monitor accuracy

    NASA Astrophysics Data System (ADS)

    Aziz, Salim; Spiess, R.; Roby, Paul; Kenny, Margaret

    1993-08-01

    The accuracy of intraarterial blood gas monitoring (IABGM) devices is challenging to assess under routine clinical conditions. When comparing discrete measurements by blood gas analyzer (BGA) to IABGM values, it is important that the BGA determinations (reference method) be as accurate as possible. In vitro decay of gas tensions caused by delay in BGA analysis is particularly problematic for specimens with high arterial oxygen tension (PaO2) values. Clinical instability of blood gases in the acutely ill patient may cause disagreement between BGA and IABGM values because of IABGM response time lag, particularly in the measurement of arterial blood carbon dioxide tension (PaCO2). We recommend that clinical assessments of IABGM accuracy by comparison with BGA use multiple bedside BGA instruments, and that blood sampling only occur during periods when IABGM values appear stable.

  20. Accuracy of clinical coding for procedures in oral and maxillofacial surgery.

    PubMed

    Khurram, S A; Warner, C; Henry, A M; Kumar, A; Mohammed-Ali, R I

    2016-10-01

    Clinical coding has important financial implications, and discrepancies in the assigned codes can directly affect the funding of a department and hospital. Over the last few years, numerous oversights have been noticed in the coding of oral and maxillofacial (OMF) procedures. To establish the accuracy and completeness of coding, we retrospectively analysed the records of patients during two time periods: March to May 2009 (324 patients), and January to March 2014 (200 patients). Two investigators independently collected and analysed the data to ensure accuracy and remove bias. A large proportion of operations were not assigned all the relevant codes, and only 32% - 33% were correct in both cycles. To our knowledge, this is the first reported audit of clinical coding in OMFS, and it highlights serious shortcomings that have substantial financial implications. Better input by the surgical team and improved communication between the surgical and coding departments will improve accuracy.

  1. Survey methods for assessing land cover map accuracy

    USGS Publications Warehouse

    Nusser, S.M.; Klaas, E.E.

    2003-01-01

    The increasing availability of digital photographic materials has fueled efforts by agencies and organizations to generate land cover maps for states, regions, and the United States as a whole. Regardless of the information sources and classification methods used, land cover maps are subject to numerous sources of error. In order to understand the quality of the information contained in these maps, it is desirable to generate statistically valid estimates of accuracy rates describing misclassification errors. We explored a full sample survey framework for creating accuracy assessment study designs that balance statistical and operational considerations in relation to study objectives for a regional assessment of GAP land cover maps. We focused not only on appropriate sample designs and estimation approaches, but on aspects of the data collection process, such as gaining cooperation of land owners and using pixel clusters as an observation unit. The approach was tested in a pilot study to assess the accuracy of Iowa GAP land cover maps. A stratified two-stage cluster sampling design addressed sample size requirements for land covers and the need for geographic spread while minimizing operational effort. Recruitment methods used for private land owners yielded high response rates, minimizing a source of nonresponse error. Collecting data for a 9-pixel cluster centered on the sampled pixel was simple to implement, and provided better information on rarer vegetation classes as well as substantial gains in precision relative to observing data at a single pixel.

  2. Revisiting and Refining the Multicultural Assessment Procedure.

    ERIC Educational Resources Information Center

    Ridley, Charles R.; Hill, Carrie L.; Li, Lisa C.

    1998-01-01

    Reacts to critiques of the Multicultural Assessment Procedure (MAP). Discusses the definition of culture, the structure of the MAP, cultural versus idiosyncratic data, counselors' knowledge and characteristics, soliciting client feedback and perceptions, and managed care. Encourages colleagues to apply the MAP to their research, practice, and…

  3. Regional Needs Assessment: Procedures and Outcomes.

    ERIC Educational Resources Information Center

    Anderson, Patricia S.; Deck, Dennis

    This paper presents the procedures used for carrying out a needs assessment concerning drug and alcohol abuse prevention and intervention efforts across nine western states and the Pacific territories prior to and subsequent to the receipt of United States Department of Education funds to implement training and technical services in the region.…

  4. Navigation accuracy for an intracardiac procedure using ultrasound enhanced virtual reality

    NASA Astrophysics Data System (ADS)

    Wiles, Andrew D.; Guiraudon, Gerard M.; Moore, John; Wedlake, Christopher; Linte, Cristian A.; Bainbridge, Daniel; Jones, Douglas L.; Peters, Terry M.

    2007-03-01

    Minimally invasive techniques for use inside the beating heart, such as mitral valve replacement and septal defect repair, are the focus of this work. Traditional techniques for these procedures require an open chest approach and a cardiopulmonary bypass machine. New techniques using port access and a combined surgical guidance tool that includes an overlaid two-dimensional ultrasound image in a virtual reality environment are being developed. To test this technique, a cardiac phantom was developed to simulate the anatomy. The phantom consists of an acrylic box filled with a 7% glycerol solution with ultrasound properties similar to human tissue. Plate inserts mounted in the box simulate the physical anatomy. An accuracy assessment was completed to evaluate the performance of the system. Using the cardiac phantom, a 2 mm diameter glass toroid was attached to a vertical plate as the target location. An elastic material was placed between the target and plate to simulate the target lying on a soft tissue structure. The target was measured using an independent measurement system and was represented as a sphere in the virtual reality system. The goal was to test the ability of a user to probe the target using three guidance methods: (i) 2D ultrasound only, (ii) virtual reality only and (iii) ultrasound enhanced virtual reality. Three users attempted the task three times each for each method. An independent measurement system was used to validate the measurement. The ultrasound imaging alone was poor in locating the target (5.42 mm RMS) while the other methods proved to be significantly better (1.02 mm RMS and 1.47 mm RMS, respectively). The ultrasound enhancement is expected to be more useful in a dynamic environment where the system registration may be disturbed.

  5. Simplified Expert Elicitation Procedure for Risk Assessment of Operating Events

    SciTech Connect

    Ronald L. Boring; David Gertman; Jeffrey Joe; Julie Marble; William Galyean; Larry Blackwood; Harold Blackman

    2005-06-01

    This report describes a simplified, tractable, and usable procedure within the US Nuclear Regulatory Commission (NRC) for seeking expert opinion and judgment. The NRC has increased efforts to document the reliability and risk of nuclear power plants (NPPs) through Probabilistic Risk Assessment (PRA) and Human Reliability Analysis (HRA) models. The Significance Determination Process (SDP) and Accident Sequence Precursor (ASP) programs at the NRC utilize expert judgment on the probability of failure, human error, and the operability of equipment in cases where otherwise insufficient operational data exist to make meaningful estimates. In the past, the SDP and ASP programs informally sought the opinion of experts inside and outside the NRC. This document represents a formal, documented procedure to take the place of informal expert elicitation. The procedures outlined in this report follow existing formal expert elicitation methodologies, but are streamlined as appropriate to the degree of accuracy required and the schedule for producing SDP and ASP analyses.

  6. Demons deformable registration for CBCT-guided procedures in the head and neck: Convergence and accuracy

    SciTech Connect

    Nithiananthan, S.; Brock, K. K.; Daly, M. J.; Chan, H.; Irish, J. C.; Siewerdsen, J. H.

    2009-10-15

    Purpose: The accuracy and convergence behavior of a variant of the Demons deformable registration algorithm were investigated for use in cone-beam CT (CBCT)-guided procedures of the head and neck. Online use of deformable registration for guidance of therapeutic procedures such as image-guided surgery or radiation therapy places trade-offs on accuracy and computational expense. This work describes a convergence criterion for Demons registration developed to balance these demands; the accuracy of a multiscale Demons implementation using this convergence criterion is quantified in CBCT images of the head and neck. Methods: Using an open-source "symmetric" Demons registration algorithm, a convergence criterion based on the change in the deformation field between iterations was developed to advance among multiple levels of a multiscale image pyramid in a manner that optimized accuracy and computation time. The convergence criterion was optimized in cadaver studies involving CBCT images acquired using a surgical C-arm prototype modified for 3D intraoperative imaging. CBCT-to-CBCT registration was performed and accuracy was quantified in terms of the normalized cross-correlation (NCC) and target registration error (TRE). The accuracy and robustness of the algorithm were then tested in clinical CBCT images of ten patients undergoing radiation therapy of the head and neck. Results: The cadaver model allowed optimization of the convergence factor and initial measurements of registration accuracy: Demons registration exhibited TRE = (0.8 ± 0.3) mm and NCC = 0.99 in the cadaveric head compared to TRE = (2.6 ± 1.0) mm and NCC = 0.93 with rigid registration. Similarly for the patient data, Demons registration gave mean TRE = (1.6 ± 0.9) mm compared to rigid registration TRE = (3.6 ± 1.9) mm, suggesting registration accuracy at or near the voxel size of the patient images (1 × 1 × 2 mm³). The multiscale implementation based on optimal convergence criteria completed registration in
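
    The abstract does not give the exact form of the convergence criterion. One plausible sketch, consistent with "the change in the deformation field between iterations", is to track the mean voxel-wise displacement update and advance to the next level of the image pyramid once it falls below a tolerance (the function names, tolerance value, and toy driver below are hypothetical, not the authors' implementation):

        import numpy as np

        def field_change(d_new: np.ndarray, d_old: np.ndarray) -> float:
            """Mean voxel-wise magnitude of the change in a displacement field.
            Fields are arrays of shape (nx, ny, nz, 3), in millimetres."""
            return float(np.mean(np.linalg.norm(d_new - d_old, axis=-1)))

        def run_level(demons_step, d0: np.ndarray, tol_mm: float = 0.01,
                      max_iter: int = 200) -> np.ndarray:
            """Iterate one pyramid level until the update stagnates. `demons_step`
            stands in for one Demons update-plus-smoothing iteration."""
            d = d0
            for _ in range(max_iter):
                d_next = demons_step(d)
                if field_change(d_next, d) < tol_mm:   # convergence criterion
                    return d_next
                d = d_next
            return d

        # Toy demo: a fake step whose updates shrink geometrically, to exercise the driver.
        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            update = 0.5 * rng.standard_normal((8, 8, 8, 3))
            state = {"k": 0}
            def fake_step(d):
                state["k"] += 1
                return d + update * 0.5 ** state["k"]
            final = run_level(fake_step, np.zeros((8, 8, 8, 3)))
            print("final mean displacement:", np.mean(np.linalg.norm(final, axis=-1)))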

  7. Accuracy of the European solar water heater test procedure. Part 1: Measurement errors and parameter estimates

    SciTech Connect

    Rabl, A.; Leide, B. ); Carvalho, M.J.; Collares-Pereira, M. ); Bourges, B.

    1991-01-01

    The Collector and System Testing Group (CSTG) of the European Community has developed a procedure for testing the performance of solar water heaters. This procedure treats a solar water heater as a black box with input-output parameters that are determined by all-day tests. In the present study the authors carry out a systematic analysis of the accuracy of this procedure, in order to answer the question: what tolerances should one impose for the measurements, and how many days of testing should one demand under what meteorological conditions, in order to be able to guarantee a specified maximum error for the long term performance? The methodology is applicable to other test procedures as well. The present paper (Part 1) examines the measurement tolerances of the current version of the procedure and derives a priori estimates of the errors of the parameters; these errors are then compared with the regression results of the Round Robin test series. The companion paper (Part 2) evaluates the consequences for the accuracy of the long term performance prediction. The authors conclude that the CSTG test procedure makes it possible to predict the long term performance with standard errors around 5% for sunny climates (10% for cloudy climates). The apparent precision of individual test sequences is deceptive because of large systematic discrepancies between different sequences. Better results could be obtained by imposing tighter control on the constancy of the cold water supply temperature and on the environment of the test, the latter by enforcing the recommendation for the ventilation of the collector.

  8. Standardized accuracy assessment of the calypso wireless transponder tracking system

    NASA Astrophysics Data System (ADS)

    Franz, A. M.; Schmitt, D.; Seitel, A.; Chatrasingh, M.; Echner, G.; Oelfke, U.; Nill, S.; Birkfellner, W.; Maier-Hein, L.

    2014-11-01

    Electromagnetic (EM) tracking allows localization of small EM sensors in a magnetic field of known geometry without line-of-sight. However, this technique requires a cable connection to the tracked object. A wireless alternative based on magnetic fields, referred to as transponder tracking, has been proposed by several authors. Although most of the transponder tracking systems are still in an early stage of development and not ready for clinical use yet, Varian Medical Systems Inc. (Palo Alto, California, USA) presented the Calypso system for tumor tracking in radiation therapy, which includes transponder technology. However, it has not yet been used for computer-assisted interventions (CAI) in general, nor has its accuracy been assessed in a standardized manner. In this study, we apply a standardized assessment protocol presented by Hummel et al (2005 Med. Phys. 32 2371-9) to the Calypso system for the first time. The results show that transponder tracking with the Calypso system provides precision and accuracy below 1 mm in ideal clinical environments, which is comparable to other EM tracking systems. As with other systems, the tracking accuracy was affected by metallic distortion, which led to errors of up to 3.2 mm. The potential of the wireless transponder tracking technology for use in many future CAI applications can be regarded as extremely high.
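
    The Hummel-style assessment cited above summarizes tracking performance as precision (positional jitter at a fixed pose) and accuracy (distance errors against the known spacing of a measurement phantom). The Python sketch below shows those two metrics in minimal form; the 50 mm spacing and array layouts are assumptions, not details of the published protocol.

```python
import numpy as np

def jitter_rms(samples):
    """Precision: RMS deviation of repeated position readings (N x 3, in mm) about their mean."""
    centered = samples - samples.mean(axis=0)
    return float(np.sqrt((centered ** 2).sum(axis=1).mean()))

def distance_errors(grid_positions, nominal_spacing_mm=50.0):
    """Accuracy: measured distances between consecutive grid positions minus the known spacing."""
    d = np.linalg.norm(np.diff(grid_positions, axis=0), axis=1)
    return d - nominal_spacing_mm
```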

  9. A Framework for the Objective Assessment of Registration Accuracy

    PubMed Central

    Simonetti, Flavio; Foroni, Roberto Israel

    2014-01-01

    Validation and accuracy assessment are the main bottlenecks preventing the adoption of image processing algorithms in clinical practice. In the classical approach, a posteriori analysis is performed through objective metrics. In this work, a different approach based on Petri nets is proposed. The basic idea is to predict the accuracy of a given pipeline based on the identification and characterization of the sources of inaccuracy. The concept is demonstrated on a case study: intrasubject rigid and affine registration of magnetic resonance images. Both synthetic and real data are considered. While synthetic data allow benchmarking of the performance with respect to the ground truth, real data make it possible to assess the robustness of the methodology in real contexts and to determine whether synthetic data are suitable for the training phase. Results revealed a higher correlation and a lower dispersion among the metrics for simulated data, while the opposite trend was observed for pathologic data. Results show that the proposed model not only provides good prediction performance but also leads to the optimization of the end-to-end chain in terms of accuracy and robustness, setting the ground for its generalization to different and more complex scenarios. PMID:24659997

  10. Accuracy of reporting maternal in-hospital diagnoses and intrapartum procedures in Washington State linked birth records.

    PubMed

    Lydon-Rochelle, Mona T; Holt, Victoria L; Nelson, Jennifer C; Cárdenas, Vicky; Gardella, Carolyn; Easterling, Thomas R; Callaghan, William M

    2005-11-01

    While the impact of maternal morbidities and intrapartum procedures is a common topic in perinatal outcomes research, the accuracy of the reporting of these variables in the large administrative databases (birth certificates, hospital discharges) often utilised for such research is largely unknown. We conducted this study to compare maternal diagnoses and procedures listed on birth certificates, hospital discharge data, and birth certificate and hospital discharge data combined, with those documented in a stratified random sample of hospital medical records of 4541 women delivering liveborn infants in Washington State in 2000. We found that birth certificate and hospital discharge data combined had substantially higher true positive fractions (TPF, proportion of women with a positive medical record assessment who were positive using the administrative databases) than did birth certificate data alone for labour induction (86% vs. 52%), cephalopelvic disproportion (83% vs. 35%), abruptio placentae (85% vs. 68%), and forceps-assisted delivery (89% vs. 55%). For procedures available only in hospital discharge data, TPFs were generally high: episiotomy (85%) and third and fourth degree vaginal lacerations (91%). Except for repeat caesarean section without labour (TPF, 81%), delivery procedures available only in birth certificate data had low TPFs, including augmentation (34%), repeat caesarean section with labour (61%), and vaginal birth after caesarean section (62%). Our data suggest that researchers conducting perinatal epidemiological studies should not rely solely on birth certificate data to detect maternal diagnoses and intrapartum procedures accurately.
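
    As a minimal illustration of the true positive fraction (TPF) used above, the snippet below computes, among records positive in the medical-record (reference) review, the proportion also flagged positive in the administrative data; the boolean-array interface is our own convention.

```python
import numpy as np

def true_positive_fraction(reference_positive, administrative_positive):
    ref = np.asarray(reference_positive, dtype=bool)
    adm = np.asarray(administrative_positive, dtype=bool)
    return float(adm[ref].mean()) if ref.any() else float("nan")

# e.g. 86 of 100 chart-confirmed inductions also coded in the combined data -> TPF = 0.86
```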

  11. Procedures used for assessment of stuttering frequency and stuttering duration.

    PubMed

    Jani, Leanne; Huckvale, Mark; Howell, Peter

    2013-12-01

    Frequency of stuttered syllables and their durations were assessed using different procedures. The experiment examined overall syllable counts, counts of stuttered syllables and measures of stutter durations when they were made simultaneously or successively. Samples of speech with associated syllable, stuttered syllable and duration measurements of stuttering events were employed in reference transcriptions. Samples contained a minimum of 200 syllables. Ten participants assessed these samples for syllables, stuttered syllables and duration in an experiment. The responses of these participants were stored in alignment with the speech recordings for analysis. Performance was significantly more accurate (relative to transcriptions) for measures other than duration when the successive procedure was used as opposed to the simultaneous procedure. Although the successive method was more accurate, accuracy of stutter event identification was low for most participants. The procedure that allowed listeners to replay a speech sample and count the syllables, stuttered syllables and durations in three passes yielded more accurate syllable and stuttered syllable counts than procedures that required these judgments to be made in one pass.

  12. Whole-procedure clinical accuracy of Gamma Knife treatments of large lesions

    SciTech Connect

    Ma Lijun; Chuang, Cynthia; Descovich, Martina; Petti, Paula; Smith, Vernon; Verhey, Lynn

    2008-11-15

    The mechanical accuracy of Gamma Knife radiosurgery based on single-isocenter measurement has been established to within 0.3 mm. However, the full delivery accuracy for Gamma Knife treatments of large lesions has only been estimated via the quadrature-sum analysis. In this study, the authors directly measured the whole-procedure accuracy for Gamma Knife treatments of large lesions to examine the validity of such estimation. The measurements were conducted on a head-phantom simulating the whole treatment procedure that included frame placement, computed tomography imaging, treatment planning, and treatment delivery. The results of the measurements were compared with the dose calculations from the treatment planning system. Average agreements of 0.1-1.6 mm for the isodose lines ranging from 25% to 90% of the maximum dose were found despite potentially large contributing uncertainties such as 3-mm imaging resolution, 2-mm dose grid size, 1-mm frame registration, multi-isocenter deliveries, etc. The results of our measurements were found to be significantly smaller (>50%) than the calculated value based on the quadrature-sum analysis. In conclusion, Gamma Knife treatments of large lesions can be delivered much more accurately than predicted from the quadrature-sum analysis of major sources of uncertainties from each step of the delivery chain.
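
    The comparison above is against a quadrature-sum estimate, in which the uncertainty contributed by each step of the delivery chain is combined as the square root of the sum of squares. The snippet below shows that combination with placeholder component values; the numbers are illustrative, not the paper's.

```python
import math

def quadrature_sum(components_mm):
    """Root-sum-square combination of independent uncertainty components (mm)."""
    return math.sqrt(sum(c * c for c in components_mm))

# e.g. imaging resolution, dose grid size, frame registration, mechanical accuracy
print(quadrature_sum([3.0, 2.0, 1.0, 0.3]))
```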

  13. Whole-procedure clinical accuracy of gamma knife treatments of large lesions.

    PubMed

    Ma, Lijun; Chuang, Cynthia; Descovich, Martina; Petti, Paula; Smith, Vernon; Verhey, Lynn

    2008-11-01

    The mechanical accuracy of Gamma Knife radiosurgery based on single-isocenter measurement has been established to within 0.3 mm. However, the full delivery accuracy for Gamma Knife treatments of large lesions has only been estimated via the quadrature-sum analysis. In this study, the authors directly measured the whole-procedure accuracy for Gamma Knife treatments of large lesions to examine the validity of such estimation. The measurements were conducted on a head-phantom simulating the whole treatment procedure that included frame placement, computed tomography imaging, treatment planning, and treatment delivery. The results of the measurements were compared with the dose calculations from the treatment planning system. Average agreements of 0.1-1.6 mm for the isodose lines ranging from 25% to 90% of the maximum dose were found despite potentially large contributing uncertainties such as 3-mm imaging resolution, 2-mm dose grid size, 1-mm frame registration, multi-isocenter deliveries, etc. The results of our measurements were found to be significantly smaller (>50%) than the calculated value based on the quadrature-sum analysis. In conclusion, Gamma Knife treatments of large lesions can be delivered much more accurately than predicted from the quadrature-sum analysis of major sources of uncertainties from each step of the delivery chain.

  14. Assessment of flash flood warning procedures

    NASA Astrophysics Data System (ADS)

    Johnson, Lynn E.

    2000-01-01

    Assessment of four alternate flash flood warning procedures was conducted to ascertain their suitability for forecast operations using radar-rainfall imagery. The procedures include (1) areal mean basin effective rainfall, (2) unit hydrograph, (3) time-area, and (4) 2-D numerical modeling. The Buffalo Creek flash flood of July 12, 1996, was used as a case study for application of each of the procedures. A significant feature of the Buffalo Creek event was a forest fire that occurred a few months before the flood and significantly affected watershed runoff characteristics. Objectives were to assess the applicability of the procedures for watersheds having spatial and temporal scale similarities to Buffalo Creek, to compare their technical characteristics, and to consider forecaster usability. Geographic information system techniques for hydrologic database development and flash flood potential computations are illustrated. Generalizations of the case study results are offered relative to their suitability for flash flood forecasting operations. Although all four methods have relative advantages, their application to the Buffalo Creek event resulted in mixed performance. Failure of any method was due primarily to uncertainties of the land surface response (i.e., burn area imperviousness). Results underscore the need for model calibration, a difficult requirement for real-time forecasting.
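
    Of the four procedures compared above, the unit-hydrograph approach is the simplest to sketch: effective rainfall is convolved with the basin's unit hydrograph to give the direct-runoff hydrograph. The Python fragment below shows that step with made-up ordinates; it is not the study's model or data.

```python
import numpy as np

def direct_runoff(effective_rain, unit_hydrograph):
    """Discrete convolution of effective rainfall per time step with unit-hydrograph ordinates."""
    return np.convolve(effective_rain, unit_hydrograph)

q = direct_runoff([0.0, 5.0, 12.0, 3.0], [0.1, 0.4, 0.3, 0.15, 0.05])
```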

  15. Accuracy assessment: The statistical approach to performance evaluation in LACIE. [Great Plains corridor, United States

    NASA Technical Reports Server (NTRS)

    Houston, A. G.; Feiveson, A. H.; Chhikara, R. S.; Hsu, E. M. (Principal Investigator)

    1979-01-01

    A statistical methodology was developed to check the accuracy of the products of the experimental operations throughout crop growth and to determine whether the procedures are adequate to accomplish the desired accuracy and reliability goals. It has allowed the identification and isolation of key problems in wheat area yield estimation, some of which have been corrected and some of which remain to be resolved. The major unresolved problem in accuracy assessment is that of precisely estimating the bias of the LACIE production estimator. Topics covered include: (1) evaluation techniques; (2) variance and bias estimation for the wheat production estimate; (3) the 90/90 evaluation; (4) comparison of the LACIE estimate with reference standards; and (5) first and second order error source investigations.
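
    The "90/90" criterion mentioned above requires the production estimate to fall within 10% of the true value with at least 90% probability. A hedged check of that criterion under a normal-error assumption, given an estimated relative bias and relative standard deviation, is sketched below; the distributional assumption and parameter values are ours, not LACIE's evaluation procedure.

```python
from scipy.stats import norm

def meets_90_90(rel_bias, rel_std, tol=0.10, prob=0.90):
    """True if P(|relative error| <= tol) >= prob under a normal error model."""
    p_within = norm.cdf((tol - rel_bias) / rel_std) - norm.cdf((-tol - rel_bias) / rel_std)
    return p_within >= prob

print(meets_90_90(rel_bias=0.02, rel_std=0.05))  # illustrative values only
```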

  16. The Social Accuracy Model of Interpersonal Perception: Assessing Individual Differences in Perceptive and Expressive Accuracy

    ERIC Educational Resources Information Center

    Biesanz, Jeremy C.

    2010-01-01

    The social accuracy model of interpersonal perception (SAM) is a componential model that estimates perceiver and target effects of different components of accuracy across traits simultaneously. For instance, Jane may be generally accurate in her perceptions of others and thus high in "perceptive accuracy"--the extent to which a particular…

  17. Accuracy assessment, using stratified plurality sampling, of portions of a LANDSAT classification of the Arctic National Wildlife Refuge Coastal Plain

    NASA Technical Reports Server (NTRS)

    Card, Don H.; Strong, Laurence L.

    1989-01-01

    An application of a classification accuracy assessment procedure is described for a vegetation and land cover map prepared by digital image processing of LANDSAT multispectral scanner data. A statistical sampling procedure called Stratified Plurality Sampling was used to assess the accuracy of portions of a map of the Arctic National Wildlife Refuge coastal plain. Results are tabulated as percent correct classification overall as well as per category with associated confidence intervals. Although values of percent correct were disappointingly low for most categories, the study was useful in highlighting sources of classification error and demonstrating shortcomings of the plurality sampling method.
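
    The per-category summary described above (percent correct with an associated confidence interval) can be illustrated with a simple normal-approximation interval for a proportion, as sketched below; the counts are hypothetical and the study's exact interval construction may differ.

```python
import math

def percent_correct_ci(n_correct, n_total, z=1.96):
    """Percent correctly classified with an approximate 95% confidence interval."""
    p = n_correct / n_total
    half = z * math.sqrt(p * (1 - p) / n_total)
    return 100 * p, 100 * max(p - half, 0.0), 100 * min(p + half, 1.0)

print(percent_correct_ci(42, 60))
```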

  18. Accuracy Assessment of a Uav-Based Landslide Monitoring System

    NASA Astrophysics Data System (ADS)

    Peppa, M. V.; Mills, J. P.; Moore, P.; Miller, P. E.; Chambers, J. E.

    2016-06-01

    Landslides are hazardous events with often disastrous consequences. Monitoring landslides with observations of high spatio-temporal resolution can help mitigate such hazards. Mini unmanned aerial vehicles (UAVs) complemented by structure-from-motion (SfM) photogrammetry and modern per-pixel image matching algorithms can deliver a time-series of landslide elevation models in an automated and inexpensive way. This research investigates the potential of a mini UAV, equipped with a Panasonic Lumix DMC-LX5 compact camera, to provide surface deformations at acceptable levels of accuracy for landslide assessment. The study adopts a self-calibrating bundle adjustment-SfM pipeline using ground control points (GCPs). It evaluates misalignment biases and unresolved systematic errors that are transferred through the SfM process into the derived elevation models. To cross-validate the research outputs, results are compared to benchmark observations obtained by standard surveying techniques. The data is collected with 6 cm ground sample distance (GSD) and is shown to achieve planimetric and vertical accuracy of a few centimetres at independent check points (ICPs). The co-registration error of the generated elevation models is also examined in areas of stable terrain. Through this error assessment, the study estimates that the vertical sensitivity to real terrain change of the tested landslide is equal to 9 cm.

  19. Accuracy assessment of gridded precipitation datasets in the Himalayas

    NASA Astrophysics Data System (ADS)

    Khan, A.

    2015-12-01

    Accurate precipitation data are vital for hydro-climatic modelling and water resources assessments. Based on mass balance calculations and Turc-Budyko analysis, this study investigates the accuracy of twelve widely used precipitation gridded datasets for sub-basins in the Upper Indus Basin (UIB) in the Himalayas-Karakoram-Hindukush (HKH) region. These datasets are: 1) Global Precipitation Climatology Project (GPCP), 2) Climate Prediction Centre (CPC) Merged Analysis of Precipitation (CMAP), 3) NCEP / NCAR, 4) Global Precipitation Climatology Centre (GPCC), 5) Climatic Research Unit (CRU), 6) Asian Precipitation Highly Resolved Observational Data Integration Towards Evaluation of Water Resources (APHRODITE), 7) Tropical Rainfall Measuring Mission (TRMM), 8) European Reanalysis (ERA) interim data, 9) PRINCETON, 10) European Reanalysis-40 (ERA-40), 11) Willmott and Matsuura, and 12) WATCH Forcing Data based on ERA interim (WFDEI). Precipitation accuracy and consistency was assessed by physical mass balance involving sum of annual measured flow, estimated actual evapotranspiration (average of 4 datasets), estimated glacier mass balance melt contribution (average of 4 datasets), and ground water recharge (average of 3 datasets), during 1999-2010. Mass balance assessment was complemented by Turc-Budyko non-dimensional analysis, where annual precipitation, measured flow and potential evapotranspiration (average of 5 datasets) data were used for the same period. Both analyses suggest that all tested precipitation datasets significantly underestimate precipitation in the Karakoram sub-basins. For the Hindukush and Himalayan sub-basins most datasets underestimate precipitation, except ERA-interim and ERA-40. The analysis indicates that for this large region with complicated terrain features and stark spatial precipitation gradients the reanalysis datasets have better consistency with flow measurements than datasets derived from records of only sparsely distributed climatic
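
    The two consistency checks described above can be sketched in simplified form: a basin water balance that gives the precipitation a dataset would need to supply, and Turc-Budyko indices whose plausible region is bounded by the water and energy limits. The exact bookkeeping used in the study is not reproduced; the sign conventions below are an assumption.

```python
def required_precip(flow, actual_et, recharge, glacier_melt):
    """Precipitation (mm/yr) needed to close the balance; glacier melt adds to flow, offsetting P."""
    return flow + actual_et + recharge - glacier_melt

def turc_budyko_indices(precip, flow, potential_et):
    evaporative_index = (precip - flow) / precip  # ET/P inferred from the balance
    aridity_index = potential_et / precip         # PET/P
    plausible = 0.0 <= evaporative_index <= min(1.0, aridity_index)
    return evaporative_index, aridity_index, plausible
```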

  20. Assessing the Accuracy of Ancestral Protein Reconstruction Methods

    PubMed Central

    Williams, Paul D; Pollock, David D; Blackburne, Benjamin P; Goldstein, Richard A

    2006-01-01

    The phylogenetic inference of ancestral protein sequences is a powerful technique for the study of molecular evolution, but any conclusions drawn from such studies are only as good as the accuracy of the reconstruction method. Every inference method leads to errors in the ancestral protein sequence, resulting in potentially misleading estimates of the ancestral protein's properties. To assess the accuracy of ancestral protein reconstruction methods, we performed computational population evolution simulations featuring near-neutral evolution under purifying selection, speciation, and divergence using an off-lattice protein model where fitness depends on the ability to be stable in a specified target structure. We were thus able to compare the thermodynamic properties of the true ancestral sequences with the properties of “ancestral sequences” inferred by maximum parsimony, maximum likelihood, and Bayesian methods. Surprisingly, we found that methods such as maximum parsimony and maximum likelihood that reconstruct a “best guess” amino acid at each position overestimate thermostability, while a Bayesian method that sometimes chooses less-probable residues from the posterior probability distribution does not. Maximum likelihood and maximum parsimony apparently tend to eliminate variants at a position that are slightly detrimental to structural stability simply because such detrimental variants are less frequent. Other properties of ancestral proteins might be similarly overestimated. This suggests that ancestral reconstruction studies require greater care to come to credible conclusions regarding functional evolution. Inferred functional patterns that mimic reconstruction bias should be reevaluated. PMID:16789817

  1. Quantitative modeling of the accuracy in registering preoperative patient-specific anatomic models into left atrial cardiac ablation procedures

    SciTech Connect

    Rettmann, Maryam E. Holmes, David R.; Camp, Jon J.; Cameron, Bruce M.; Robb, Richard A.; Kwartowitz, David M.; Gunawan, Mia; Johnson, Susan B.; Packer, Douglas L.; Dalegrave, Charles; Kolasa, Mark W.

    2014-02-15

    Purpose: In cardiac ablation therapy, accurate anatomic guidance is necessary to create effective tissue lesions for elimination of left atrial fibrillation. While fluoroscopy, ultrasound, and electroanatomic maps are important guidance tools, they lack information regarding detailed patient anatomy which can be obtained from high resolution imaging techniques. For this reason, there has been significant effort in incorporating detailed, patient-specific models generated from preoperative imaging datasets into the procedure. Both clinical and animal studies have investigated registration and targeting accuracy when using preoperative models; however, the effect of various error sources on registration accuracy has not been quantitatively evaluated. Methods: Data from phantom, canine, and patient studies are used to model and evaluate registration accuracy. In the phantom studies, data are collected using a magnetically tracked catheter on a static phantom model. Monte Carlo simulation studies were run to evaluate both baseline errors and the effect of different sources of error that would be present in a dynamic in vivo setting. Error is simulated by varying the variance parameters on the landmark fiducial, physical target, and surface point locations in the phantom simulation studies. In vivo validation studies were undertaken in six canines in which metal clips were placed in the left atrium to serve as ground truth points. A small clinical evaluation was completed in three patients. Landmark-based and combined landmark and surface-based registration algorithms were evaluated in all studies. In the phantom and canine studies, both target registration error and point-to-surface error are used to assess accuracy. In the patient studies, no ground truth is available and registration accuracy is quantified using point-to-surface error only. Results: The phantom simulation studies demonstrated that combined landmark and surface-based registration improved
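
    Two of the error measures named above are easy to illustrate: target registration error (TRE) at ground-truth points after a landmark-based rigid registration, and point-to-surface error as the distance from mapped points to the nearest surface vertex. The sketch below uses a standard Kabsch/Procrustes rigid fit; it is not the authors' implementation, and all names are ours.

```python
import numpy as np

def rigid_fit(source, target):
    """Least-squares rotation R and translation t mapping source landmarks (N x 3) onto target."""
    sc, tc = source.mean(axis=0), target.mean(axis=0)
    U, _, Vt = np.linalg.svd((source - sc).T @ (target - tc))
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:   # guard against a reflection
        Vt[-1] *= -1
        R = (U @ Vt).T
    return R, tc - R @ sc

def target_registration_error(R, t, truth_source, truth_target):
    mapped = truth_source @ R.T + t
    return np.linalg.norm(mapped - truth_target, axis=1)

def point_to_surface_error(points, surface_vertices):
    d = np.linalg.norm(points[:, None, :] - surface_vertices[None, :, :], axis=2)
    return d.min(axis=1)   # distance to the nearest vertex as a surface-distance proxy
```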

  2. Inertial Measures of Motion for Clinical Biomechanics: Comparative Assessment of Accuracy under Controlled Conditions – Changes in Accuracy over Time

    PubMed Central

    Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian

    2015-01-01

    Background: Interest in 3D inertial motion tracking devices (AHRS) has been growing rapidly among the biomechanical community. Although the convenience of such tracking devices seems to open a whole new world of possibilities for evaluation in clinical biomechanics, their limitations have not been extensively documented. The objectives of this study are: 1) to assess the change in absolute and relative accuracy of multiple units of 3 commercially available AHRS over time; and 2) to identify different sources of errors affecting AHRS accuracy and to document how they may affect the measurements over time. Methods: This study used an instrumented Gimbal table on which AHRS modules were carefully attached and put through a series of velocity-controlled sustained motions including 2-minute motion trials (2MT) and 12-minute multiple dynamic phases motion trials (12MDP). Absolute accuracy was assessed by comparison of the AHRS orientation measurements to those of an optical gold standard. Relative accuracy was evaluated using the variation in relative orientation between modules during the trials. Findings: Both absolute and relative accuracy decreased over time during 2MT. 12MDP trials showed a significant decrease in accuracy over multiple phases, but accuracy could be enhanced significantly by resetting the reference point and/or compensating for the initial Inertial frame estimation reference for each phase. Interpretation: The variation in AHRS accuracy observed between the different systems and with time can be attributed in part to the dynamic estimation error, but also and foremost, to the ability of AHRS units to locate the same Inertial frame. Conclusions: Mean accuracies obtained under the Gimbal table's sustained conditions of motion suggest that AHRS are promising tools for clinical mobility assessment under constrained conditions of use. However, improvements in magnetic compensation and alignment between AHRS modules are desirable in order for AHRS to reach their

  3. An accuracy assessment of Cartesian-mesh approaches for the Euler equations

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1995-01-01

    A critical assessment of the accuracy of Cartesian-mesh approaches for steady, transonic solutions of the Euler equations of gas dynamics is made. An exact solution of the Euler equations (Ringleb's flow) is used not only to infer the order of the truncation error of the Cartesian-mesh approaches, but also to compare the magnitude of the discrete error directly to that obtained with a structured mesh approach. Uniformly and adaptively refined solutions using a Cartesian-mesh approach are obtained and compared to each other and to uniformly refined structured mesh results. The effect of cell merging is investigated as well as the use of two different K-exact reconstruction procedures. The solution methodology of the schemes is explained and tabulated results are presented to compare the solution accuracies.
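
    Inferring the order of the truncation error from an exact solution, as described above, typically amounts to comparing error norms on two uniformly refined meshes. The snippet below shows that calculation with placeholder error values; the numbers are not from the paper.

```python
import math

def observed_order(error_coarse, error_fine, refinement_ratio=2.0):
    """Observed order of accuracy p from errors on two meshes differing by refinement_ratio."""
    return math.log(error_coarse / error_fine) / math.log(refinement_ratio)

print(observed_order(4.0e-3, 1.1e-3))  # about 1.9, i.e. roughly second-order behaviour
```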

  4. High-temperature flaw assessment procedure

    SciTech Connect

    Ruggles, M.B. ); Takahashi, Y. ); Ainsworth, R.A. )

    1991-08-01

    Described is the background work performed jointly by the Electric Power Research Institute in the United States, the Central Research Institute of Electric Power Industry in Japan and Nuclear Electric plc in the United Kingdom with the purpose of developing a high-temperature flaw assessment procedure for reactor components. Existing creep-fatigue crack-growth models are reviewed, and the most promising methods are identified. Sources of material data are outlined, and results of the fundamental deformation and crack-growth tests are discussed. Results of subcritical crack-growth exploratory tests, creep-fatigue crack-growth tests under repeated thermal transient conditions, and exploratory failure tests are presented and contrasted with the analytical modeling. Crack-growth assessment methods are presented and applied to a typical liquid-metal reactor component. The research activities presented herein served as a foundation for the Flaw Assessment Guide for High-Temperature Reactor Components Subjected to Creep-Fatigue Loading published separately. 30 refs., 108 figs., 13 tabs.

  5. Fusion of range camera and photogrammetry: a systematic procedure for improving 3-D models metric accuracy.

    PubMed

    Guidi, G; Beraldin, J A; Ciofi, S; Atzeni, C

    2003-01-01

    The generation of three-dimensional (3-D) digital models produced by optical technologies in some cases involves metric errors. This happens when small high-resolution 3-D images are assembled together in order to model a large object. In some applications, as for example 3-D modeling of Cultural Heritage, the problem of metric accuracy is a major issue and no methods are currently available for enhancing it. The authors present a procedure by which the metric reliability of the 3-D model, obtained through iterative alignments of many range maps, can be guaranteed to a known acceptable level. The goal is the integration of the 3-D range camera system with a close range digital photogrammetry technique. The basic idea is to generate a global coordinate system determined by the digital photogrammetric procedure, measuring the spatial coordinates of optical targets placed around the object to be modeled. Such coordinates, set as reference points, allow the proper rigid motion of few key range maps, including a portion of the targets, in the global reference system defined by photogrammetry. The other 3-D images are normally aligned around these locked images with usual iterative algorithms. Experimental results on an anthropomorphic test object, comparing the conventional and the proposed alignment method, are finally reported.

  6. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Carter, Delano R.; Todirita, Monica; Kronenwetter, Jeffrey; Chu, Donald

    2016-01-01

    The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. Error comes both from outside the magnetometers, e.g. spacecraft fields and misalignments, as well as inside, e.g. zero offset and scale factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. Both behave as expected, and their accuracy predictions agree within 30%. With the proposed calibration regimen, both suggest that the GOES-R magnetometer subsystem will meet its accuracy requirements.
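
    The accuracy statistic defined above (absolute mean error plus 3 sigma during quiet times, plus 2 sigma during storms, per axis) is straightforward to compute from a set of per-axis errors, as in the sketch below; the synthetic error sample is purely illustrative.

```python
import numpy as np

def accuracy_metric(errors_nt, k_sigma):
    """Absolute mean plus k_sigma standard deviations of per-axis field errors (nT)."""
    e = np.asarray(errors_nt, dtype=float)
    return abs(e.mean()) + k_sigma * e.std(ddof=1)

quiet_errors = np.random.normal(0.2, 0.4, size=1000)   # synthetic per-axis errors, nT
print(accuracy_metric(quiet_errors, k_sigma=3) < 1.7)  # compare against the 1.7 nT requirement
```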

  7. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Kronenwetter, Jeffrey; Carter, Delano R.; Todirita, Monica; Chu, Donald

    2016-01-01

    The GOES-R magnetometer accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. To achieve this, the sensor itself has better than 1 nT accuracy. Because zero offset and scale factor drift over time, it is also necessary to perform annual calibration maneuvers. To predict performance, we used covariance analysis and attempted to corroborate it with simulations. Although not perfect, the two generally agree and show the expected behaviors. With the annual calibration regimen, these predictions suggest that the magnetometers will meet their accuracy requirements.

  8. Assessing accuracy and precision for field and laboratory data: a perspective in ecosystem restoration

    USGS Publications Warehouse

    Stapanian, Martin A.; Lewis, Timothy E; Palmer, Craig J.; Middlebrook Amos, Molly

    2016-01-01

    Unlike most laboratory studies, rigorous quality assurance/quality control (QA/QC) procedures may be lacking in ecosystem restoration (“ecorestoration”) projects, despite legislative mandates in the United States. This is due, in part, to ecorestoration specialists making the false assumption that some types of data (e.g. discrete variables such as species identification and abundance classes) are not subject to evaluations of data quality. Moreover, emergent behavior manifested by complex, adapting, and nonlinear organizations responsible for monitoring the success of ecorestoration projects tends to unconsciously minimize disorder, QA/QC being an activity perceived as creating disorder. We discuss similarities and differences in assessing precision and accuracy for field and laboratory data. Although the concepts for assessing precision and accuracy of ecorestoration field data are the same as for laboratory data, the manner in which these data quality attributes are assessed is different. From a sample analysis perspective, a field crew is comparable to a laboratory instrument that requires regular “recalibration,” with results obtained by experts at the same plot treated as laboratory calibration standards. Unlike laboratory standards and reference materials, the “true” value for many field variables is commonly unknown. In the laboratory, specific QA/QC samples assess error for each aspect of the measurement process, whereas field revisits assess precision and accuracy of the entire data collection process following initial calibration. Rigorous QA/QC data in an ecorestoration project are essential for evaluating the success of a project, and they provide the only objective “legacy” of the dataset for potential legal challenges and future uses.

  9. Assessing the accuracy of self-reported self-talk

    PubMed Central

    Brinthaupt, Thomas M.; Benson, Scott A.; Kang, Minsoo; Moore, Zaver D.

    2015-01-01

    As with most kinds of inner experience, it is difficult to assess actual self-talk frequency beyond self-reports, given the often hidden and subjective nature of the phenomenon. The Self-Talk Scale (STS; Brinthaupt et al., 2009) is a self-report measure of self-talk frequency that has been shown to possess acceptable reliability and validity. However, no research using the STS has examined the accuracy of respondents’ self-reports. In the present paper, we report a series of studies directly examining the measurement of self-talk frequency and functions using the STS. The studies examine ways to validate self-reported self-talk by (1) comparing STS responses from 6 weeks earlier to recent experiences that might precipitate self-talk, (2) using experience sampling methods to determine whether STS scores are related to recent reports of self-talk over a period of a week, and (3) comparing self-reported STS scores to those provided by a significant other who rated the target on the STS. Results showed that (1) overall self-talk scores, particularly self-critical and self-reinforcing self-talk, were significantly related to reports of context-specific self-talk; (2) high STS scorers reported talking to themselves significantly more often during recent events compared to low STS scorers, and, contrary to expectations, (3) friends reported less agreement than strangers in their self-other self-talk ratings. Implications of the results for the validity of the STS and for measuring self-talk are presented. PMID:25999887

  10. Accuracy Assessment of Coastal Topography Derived from Uav Images

    NASA Astrophysics Data System (ADS)

    Long, N.; Millescamps, B.; Pouget, F.; Dumon, A.; Lachaussée, N.; Bertin, X.

    2016-06-01

    To monitor coastal environments, an Unmanned Aerial Vehicle (UAV) is a low-cost and easy-to-use solution to enable data acquisition with high temporal frequency and spatial resolution. Compared to Light Detection And Ranging (LiDAR) or Terrestrial Laser Scanning (TLS), this solution produces Digital Surface Models (DSMs) with similar accuracy. To evaluate the DSM accuracy on a coastal environment, a campaign was carried out with a flying wing (eBee) combined with a digital camera. Using the Photoscan software and the photogrammetry process (Structure From Motion algorithm), a DSM and an orthomosaic were produced. The DSM accuracy is estimated by comparison with GNSS surveys. Two parameters are tested: the influence of the methodology (number and distribution of Ground Control Points, GCPs) and the influence of spatial image resolution (4.6 cm vs 2 cm). The results show that this solution is able to reproduce the topography of a coastal area with a high vertical accuracy (< 10 cm). The georeferencing of the DSM requires a homogeneous distribution and a large number of GCPs. The accuracy is correlated with the number of GCPs (using 19 GCPs instead of 10 reduces the difference by 4 cm); the required accuracy should depend on the research question. Last, in this particular environment, the presence of very small water surfaces on the sand bank does not allow the accuracy to be improved when the spatial resolution of the images is decreased.

  11. Assessing and ensuring GOES-R magnetometer accuracy

    NASA Astrophysics Data System (ADS)

    Carter, Delano; Todirita, Monica; Kronenwetter, Jeffrey; Dahya, Melissa; Chu, Donald

    2016-05-01

    The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma error per axis. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma error per axis. Error comes both from outside the magnetometers, e.g. spacecraft fields and misalignments, as well as inside, e.g. zero offset and scale factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. With the proposed calibration regimen, both suggest that the magnetometer subsystem will meet its accuracy requirements.

  12. Assessment of the Accuracy of Close Distance Photogrammetric JRC Data

    NASA Astrophysics Data System (ADS)

    Kim, Dong Hyun; Poropat, George; Gratchev, Ivan; Balasubramaniam, Arumugam

    2016-11-01

    By using close range photogrammetry, this article investigates the accuracy of the photogrammetric estimation of rock joint roughness coefficients (JRC), a measure of the degree of roughness of rock joint surfaces. This methodology has proven to be convenient both in laboratory and in site conditions. However, the accuracy and precision of roughness profiles obtained from photogrammetric 3D images have not been properly established due to the variances caused by factors such as measurement errors and systematic errors in photogrammetry. In this study, the influences of camera-to-object distance, focal length and profile orientation on the accuracy of JRC values are investigated using several photogrammetry field surveys. Directional photogrammetric JRC data are compared with data derived from the measured profiles, so as to determine their accuracy. The extent of the accuracy of JRC values was examined based on the error models which were previously developed from laboratory tests and revised for better estimation in this study. The results show that high-resolution 3D images (point interval ≤1 mm) can reduce the JRC errors obtained from field photogrammetric surveys. Using the high-resolution images, the photogrammetric JRC values in the range of high oblique camera angles are highly consistent with the revised error models. Therefore, the analysis indicates that the revised error models facilitate the verification of the accuracy of photogrammetric JRC values.

  13. Pollutant Assessments Group Procedures Manual: Volume 1, Administrative and support procedures

    SciTech Connect

    Not Available

    1992-03-01

    This manual describes procedures currently in use by the Pollutant Assessments Group. The manual is divided into two volumes: Volume 1 includes administrative and support procedures, and Volume 2 includes technical procedures. These procedures are revised in an ongoing process to incorporate new developments in hazardous waste assessment technology and changes in administrative policy. Format inconsistencies will be corrected in subsequent revisions of individual procedures. The purpose of the Pollutant Assessments Groups Procedures Manual is to provide a standardized set of procedures documenting in an auditable manner the activities performed by the Pollutant Assessments Group (PAG) of the Health and Safety Research Division (HASRD) of the Environmental Measurements and Applications Section (EMAS) at Oak Ridge National Laboratory (ORNL). The Procedures Manual ensures that the organizational, administrative, and technical activities of PAG conform properly to protocol outlined by funding organizations. This manual also ensures that the techniques and procedures used by PAG and other contractor personnel meet the requirements of applicable governmental, scientific, and industrial standards. The Procedures Manual is sufficiently comprehensive for use by PAG and contractor personnel in the planning, performance, and reporting of project activities and measurements. The Procedures Manual provides procedures for conducting field measurements and includes program planning, equipment operation, and quality assurance elements. Successive revisions of this manual will be archived in the PAG Document Control Department to facilitate tracking of the development of specific procedures.

  14. Pixels, Blocks of Pixels, and Polygons: Choosing a Spatial Unit for Thematic Accuracy Assessment

    EPA Science Inventory

    Pixels, polygons, and blocks of pixels are all potentially viable spatial assessment units for conducting an accuracy assessment. We develop a statistical population-based framework to examine how the spatial unit chosen affects the outcome of an accuracy assessment. The populati...

  15. Accuracy of Students' Self-Assessment and Their Beliefs about Its Utility

    ERIC Educational Resources Information Center

    Lew, Magdeleine D. N.; Alwis, W. A. M.; Schmidt, Henk G.

    2010-01-01

    The purpose of the two studies presented here was to evaluate the accuracy of students' self-assessment ability, to examine whether this ability improves over time and to investigate whether self-assessment is more accurate if students believe that it contributes to improving learning. To that end, the accuracy of the self-assessments of 3588…

  16. Does it Make a Difference? Investigating the Assessment Accuracy of Teacher Tutors and Student Tutors

    ERIC Educational Resources Information Center

    Herppich, Stephanie; Wittwer, Jorg; Nuckles, Matthias; Renkl, Alexander

    2013-01-01

    Tutors often have difficulty with accurately assessing a tutee's understanding. However, little is known about whether the professional expertise of tutors influences their assessment accuracy. In this study, the authors examined the accuracy with which 21 teacher tutors and 25 student tutors assessed a tutee's understanding of the human…

  17. ASSESSING THE ACCURACY OF NATIONAL LAND COVER DATASET AREA ESTIMATES AT MULTIPLE SPATIAL EXTENTS

    EPA Science Inventory

    Site specific accuracy assessments provide fine-scale evaluation of the thematic accuracy of land use/land cover (LULC) datasets; however, they provide little insight into LULC accuracy across varying spatial extents. Additionally, LULC data are typically used to describe lands...

  18. Constraint on Absolute Accuracy of Metacomprehension Assessments: The Anchoring and Adjustment Model vs. the Standards Model

    ERIC Educational Resources Information Center

    Kwon, Heekyung

    2011-01-01

    The objective of this study is to provide a systematic account of three typical phenomena surrounding absolute accuracy of metacomprehension assessments: (1) the absolute accuracy of predictions is typically quite low; (2) there exist individual differences in absolute accuracy of predictions as a function of reading skill; and (3) postdictions…

  19. Bilingual Language Assessment: A Meta-Analysis of Diagnostic Accuracy

    ERIC Educational Resources Information Center

    Dollaghan, Christine A.; Horner, Elizabeth A.

    2011-01-01

    Purpose: To describe quality indicators for appraising studies of diagnostic accuracy and to report a meta-analysis of measures for diagnosing language impairment (LI) in bilingual Spanish-English U.S. children. Method: The authors searched electronically and by hand to locate peer-reviewed English-language publications meeting inclusion criteria;…

  20. Assessing genomic selection prediction accuracy in a dynamic barley breeding

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection is a method to improve quantitative traits in crops and livestock by estimating breeding values of selection candidates using phenotype and genome-wide marker data sets. Prediction accuracy has been evaluated through simulation and cross-validation, however validation based on prog...

  1. Assessing accuracy in citizen science-based plant phenology monitoring

    NASA Astrophysics Data System (ADS)

    Fuccillo, Kerissa K.; Crimmins, Theresa M.; de Rivera, Catherine E.; Elder, Timothy S.

    2015-07-01

    In the USA, thousands of volunteers are engaged in tracking plant and animal phenology through a variety of citizen science programs for the purpose of amassing spatially and temporally comprehensive datasets useful to scientists and resource managers. The quality of these observations and their suitability for scientific analysis, however, remains largely unevaluated. We aimed to evaluate the accuracy of plant phenology observations collected by citizen scientist volunteers following protocols designed by the USA National Phenology Network (USA-NPN). Phenology observations made by volunteers receiving several hours of formal training were compared to those collected independently by a professional ecologist. Approximately 11,000 observations were recorded by 28 volunteers over the course of one field season. Volunteers consistently identified phenophases correctly (91 % overall) for the 19 species observed. Volunteers demonstrated greatest overall accuracy identifying unfolded leaves, ripe fruits, and open flowers. Transitional accuracy decreased for some species/phenophase combinations (70 % average), and accuracy varied significantly by phenophase and species ( p < 0.0001). Volunteers who submitted fewer observations over the period of study did not exhibit a higher error rate than those who submitted more total observations. Overall, these results suggest that volunteers with limited training can provide reliable observations when following explicit, standardized protocols. Future studies should investigate different observation models (i.e., group/individual, online/in-person training) over subsequent seasons with multiple expert comparisons to further substantiate the ability of these monitoring programs to supply accurate broadscale datasets capable of answering pressing ecological questions about global change.

  2. Accuracy and precision of the three-dimensional assessment of the facial surface using a 3-D laser scanner.

    PubMed

    Kovacs, L; Zimmermann, A; Brockmann, G; Baurecht, H; Schwenzer-Zimmerer, K; Papadopulos, N A; Papadopoulos, M A; Sader, R; Biemer, E; Zeilhofer, H F

    2006-06-01

    Three-dimensional (3-D) recording of the surface of the human body or anatomical areas has gained importance in many medical specialties. Thus, it is important to determine scanner precision and accuracy in defined medical applications and to establish standards for the recording procedure. Here we evaluated the precision and accuracy of 3-D assessment of the facial area with the Minolta Vivid 910 3D Laser Scanner. We also investigated the influence of factors related to the recording procedure and the processing of scanner data on final results. These factors include lighting, alignment of scanner and object, the examiner, and the software used to convert measurements into virtual images. To assess scanner accuracy, we compared scanner data to those obtained by manual measurements on a dummy. Less than 7% of all results with the scanner method were outside a range of error of 2 mm when compared to corresponding reference measurements. Accuracy, thus, proved to be good enough to satisfy requirements for numerous clinical applications. Moreover, the experiments completed with the dummy yielded valuable information for optimizing recording parameters for best results. Thus, under defined conditions, precision and accuracy of surface models of the human face recorded with the Minolta Vivid 910 3D Scanner presumably can also be enhanced. Future studies will involve verification of our findings using test persons. The current findings indicate that the Minolta Vivid 910 3D Scanner might be used with benefit in medicine when recording the 3-D surface structures of the face.

  3. Recent Advances in Image Assisted Neurosurgical Procedures: Improved Navigational Accuracy and Patient Safety

    SciTech Connect

    Olivi, Alessandro, M.D.

    2010-08-28

    Neurosurgical procedures require precise planning and intraoperative support. Recent advances in image guided technology have provided neurosurgeons with improved navigational support for more effective and safer procedures. A number of exemplary cases will be presented.

  4. Recent Advances in Image Assisted Neurosurgical Procedures: Improved Navigational Accuracy and Patient Safety

    ScienceCinema

    Olivi, Alessandro, M.D.

    2016-07-12

    Neurosurgical procedures require precise planning and intraoperative support. Recent advances in image guided technology have provided neurosurgeons with improved navigational support for more effective and safer procedures. A number of exemplary cases will be presented.

  5. Accuracy assessment of seven global land cover datasets over China

    NASA Astrophysics Data System (ADS)

    Yang, Yongke; Xiao, Pengfeng; Feng, Xuezhi; Li, Haixing

    2017-03-01

    Land cover (LC) is the vital foundation to Earth science. Up to now, several global LC datasets have arisen with efforts of many scientific communities. To provide guidelines for data usage over China, nine LC maps from seven global LC datasets (IGBP DISCover, UMD, GLC, MCD12Q1, GLCNMO, CCI-LC, and GlobeLand30) were evaluated in this study. First, we compared their similarities and discrepancies in both area and spatial patterns, and analysed their inherent relations to data sources and classification schemes and methods. Next, five sets of validation sample units (VSUs) were collected to calculate their accuracy quantitatively. Further, we built a spatial analysis model and depicted their spatial variation in accuracy based on the five sets of VSUs. The results show that, there are evident discrepancies among these LC maps in both area and spatial patterns. For LC maps produced by different institutes, GLC 2000 and CCI-LC 2000 have the highest overall spatial agreement (53.8%). For LC maps produced by same institutes, overall spatial agreement of CCI-LC 2000 and 2010, and MCD12Q1 2001 and 2010 reach up to 99.8% and 73.2%, respectively; while more efforts are still needed if we hope to use these LC maps as time series data for model inputting, since both CCI-LC and MCD12Q1 fail to represent the rapid changing trend of several key LC classes in the early 21st century, in particular urban and built-up, snow and ice, water bodies, and permanent wetlands. With the highest spatial resolution, the overall accuracy of GlobeLand30 2010 is 82.39%. For the other six LC datasets with coarse resolution, CCI-LC 2010/2000 has the highest overall accuracy, and following are MCD12Q1 2010/2001, GLC 2000, GLCNMO 2008, IGBP DISCover, and UMD in turn. Beside that all maps exhibit high accuracy in homogeneous regions; local accuracies in other regions are quite different, particularly in Farming-Pastoral Zone of North China, mountains in Northeast China, and Southeast Hills. Special
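
    The overall and per-class accuracy figures quoted above are conventionally computed from a confusion matrix built from validation sample units. The sketch below shows that computation on a toy matrix (rows taken as reference classes); it is not the study's data or code.

```python
import numpy as np

def overall_accuracy(confusion):
    c = np.asarray(confusion, dtype=float)
    return np.trace(c) / c.sum()

def producers_accuracy(confusion):
    c = np.asarray(confusion, dtype=float)
    return np.diag(c) / c.sum(axis=1)   # per reference class (rows = reference)

toy = [[50, 3, 2], [4, 60, 6], [1, 5, 70]]
print(overall_accuracy(toy), producers_accuracy(toy))
```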

  6. Assessing accuracy in citizen science-based plant phenology monitoring.

    PubMed

    Fuccillo, Kerissa K; Crimmins, Theresa M; de Rivera, Catherine E; Elder, Timothy S

    2015-07-01

    In the USA, thousands of volunteers are engaged in tracking plant and animal phenology through a variety of citizen science programs for the purpose of amassing spatially and temporally comprehensive datasets useful to scientists and resource managers. The quality of these observations and their suitability for scientific analysis, however, remains largely unevaluated. We aimed to evaluate the accuracy of plant phenology observations collected by citizen scientist volunteers following protocols designed by the USA National Phenology Network (USA-NPN). Phenology observations made by volunteers receiving several hours of formal training were compared to those collected independently by a professional ecologist. Approximately 11,000 observations were recorded by 28 volunteers over the course of one field season. Volunteers consistently identified phenophases correctly (91% overall) for the 19 species observed. Volunteers demonstrated greatest overall accuracy identifying unfolded leaves, ripe fruits, and open flowers. Transitional accuracy decreased for some species/phenophase combinations (70% average), and accuracy varied significantly by phenophase and species (p < 0.0001). Volunteers who submitted fewer observations over the period of study did not exhibit a higher error rate than those who submitted more total observations. Overall, these results suggest that volunteers with limited training can provide reliable observations when following explicit, standardized protocols. Future studies should investigate different observation models (i.e., group/individual, online/in-person training) over subsequent seasons with multiple expert comparisons to further substantiate the ability of these monitoring programs to supply accurate broadscale datasets capable of answering pressing ecological questions about global change.

  7. Diagnostic accuracy assessment of cytopathological examination of feline sporotrichosis.

    PubMed

    Jessica, N; Sonia, R L; Rodrigo, C; Isabella, D F; Tânia, M P; Jeferson, C; Anna, B F; Sandro, A

    2015-11-01

    Sporotrichosis is an implantation mycosis caused by pathogenic species of the Sporothrix schenckii complex that affects humans and animals, especially cats. Its main forms of zoonotic transmission include scratching, biting and/or contact with the exudate from lesions of sick cats. In Brazil, an epidemic involving humans, dogs and cats has been ongoing since 1998. The definitive diagnosis of sporotrichosis is obtained by the isolation of the fungus in culture; however, the result can take up to four weeks, which may delay the beginning of antifungal treatment in some cases. Cytopathological examination is often used in feline sporotrichosis diagnosis, but accuracy parameters have not been established yet. The aim of this study was to evaluate the accuracy and reliability of cytopathological examination in the diagnosis of feline sporotrichosis. The present study included 244 cats from the metropolitan region of Rio de Janeiro, mostly males of reproductive age with three or more lesions at non-adjacent anatomical sites. To evaluate the inter-observer reliability, two different observers performed the microscopic examination of the slides blindly. Test sensitivity was 84.9%. The positive predictive value, negative predictive value, positive likelihood ratio, negative likelihood ratio and accuracy were 86.0%, 24.4%, 2.02, 0.26 and 82.8%, respectively. The reliability between the two observers was considered substantial. We conclude that the cytopathological examination is a sensitive, rapid and practical method to be used in feline sporotrichosis diagnosis in outbreaks of this mycosis.
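
    The diagnostic-accuracy measures listed above all follow from a 2x2 table against the culture reference standard. The snippet below computes them from such counts; the counts shown are illustrative, not the study's.

```python
def diagnostic_measures(tp, fp, fn, tn):
    """Standard 2x2-table measures against a reference standard."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "lr_plus": sens / (1 - spec),
        "lr_minus": (1 - sens) / spec,
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

print(diagnostic_measures(tp=180, fp=30, fn=32, tn=10))  # illustrative counts only
```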

  8. Augmented Reality Mentor for Training Maintenance Procedures: Interim Assessment

    DTIC Science & Technology

    2014-08-01

    ARI Research Note 2014-04. Subject Matter POC: Dr. William R. Bickley. The Augmented Reality Mentor is a 2-yr advanced

  9. 30 CFR 723.18 - Procedures for assessment conference.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    30 CFR § 723.18 (Mineral Resources; Office of Surface Mining Reclamation and Enforcement, Department of the Interior; Initial Program Regulations; Civil Penalties), Procedures for assessment conference: (a) The Office shall arrange for a...

  10. 12 CFR 620.3 - Accuracy of reports and assessment of internal control over financial reporting.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    12 CFR § 620.3 (Banks and Banking; Farm Credit Administration; Farm Credit System; Disclosure to Shareholders; General), Accuracy of reports and assessment of internal control over financial reporting.

  11. An accuracy assessment of Magellan Very Long Baseline Interferometry (VLBI)

    NASA Technical Reports Server (NTRS)

    Engelhardt, D. B.; Kronschnabl, G. R.; Border, J. S.

    1990-01-01

    Very Long Baseline Interferometry (VLBI) measurements of the Magellan spacecraft's angular position and velocity were made from July through September 1989, during the spacecraft's heliocentric flight to Venus. The purpose of this data acquisition and reduction was to verify this data type for operational use before Magellan is inserted into Venus orbit in August 1990. The accuracy of these measurements is shown to be within 20 nanoradians in angular position, and within 5 picoradians/sec in angular velocity. The media effects and their calibrations are quantified; the wet fluctuating troposphere is the dominant source of measurement error for angular velocity. The charged particle effect is completely calibrated with S- and X-Band dual-frequency calibrations. Increasing the accuracy of the Earth platform model parameters, by using VLBI-derived tracking station locations consistent with the planetary ephemeris frame, and by including high frequency Earth tidal terms in the Earth rotation model, adds a few nanoradians of improvement to the angular position measurements. Angular velocity measurements were insensitive to these Earth platform modelling improvements.

  12. Assessing expected accuracy of probe vehicle travel time reports

    SciTech Connect

    Hellinga, B.; Fu, L.

    1999-12-01

    The use of probe vehicles to provide estimates of link travel times has been suggested as a means of obtaining travel times within signalized networks for use in advanced travel information systems. Past research in the literature has produced contradictory conclusions regarding the expected accuracy of these probe-based estimates, and consequently has estimated different levels of market penetration of probe vehicles required to sustain accurate data within an advanced traveler information system. This paper examines the effect of sampling bias on the accuracy of the probe estimates. An analytical expression is derived on the basis of queuing theory to prove that bias in arrival time distributions and/or in the proportion of probes associated with each link departure turning movement will lead to a systematic bias in the sample estimate of the mean delay. Subsequently, the potential for and impact of sampling bias on a signalized link is examined by simulating an arterial corridor. The analytical derivation and the simulation analysis show that the reliability of probe-based average link travel times is highly affected by sampling bias. Furthermore, this analysis shows that the contradictory conclusions of previous research are directly related to the presence or absence of sampling bias.
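
    As a rough illustration of the sampling-bias effect the abstract describes (not the paper's queuing-theory derivation), the sketch below simulates probe reports that oversample one turning movement and compares the resulting mean-delay estimate with the population mean. All delays and probe shares are invented.

```python
# Hedged sketch: how sampling bias across turning movements can bias
# probe-based mean-delay estimates. All delay values and probe shares
# are hypothetical; this does not reproduce the paper's derivation.
import random

random.seed(1)

# Hypothetical mean delays (s) and true traffic shares per movement.
movements = {"through": (20.0, 0.6), "left": (45.0, 0.2), "right": (10.0, 0.2)}
true_mean = sum(d * share for d, share in movements.values())

# Probes oversample left-turners (e.g., a fleet that favours one route).
probe_share = {"through": 0.3, "left": 0.6, "right": 0.1}

n = 10000
samples = []
for _ in range(n):
    m = random.choices(list(probe_share), weights=probe_share.values())[0]
    mean_delay = movements[m][0]
    samples.append(random.expovariate(1.0 / mean_delay))  # exponential delay

probe_mean = sum(samples) / n
print(f"population mean delay: {true_mean:.1f} s")
print(f"probe-estimated mean:  {probe_mean:.1f} s (systematic bias)")
```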

  13. Assessing Organizational Effectiveness: Considerations and Procedures.

    ERIC Educational Resources Information Center

    Krakower, Jack Y.

    The dimensions of effectiveness pertinent to postsecondary institutions are discussed, along with approaches for assessing effectiveness. A paradigm of effectiveness is presented, based on six concerns: whose perspective is taken, assessment criteria, the referent for judging effectiveness, level and unit of analysis, time frame, and types and…

  14. A procedure for the assessment of low frequency noise complaints.

    PubMed

    Moorhouse, Andy T; Waddington, David C; Adams, Mags D

    2009-09-01

    The development and application of a procedure for the assessment of low frequency noise (LFN) complaints are described. The development of the assessment method included laboratory tests addressing low frequency hearing threshold and the effect on acceptability of fluctuation, and field measurements complemented with interview-based questionnaires. Environmental health departments then conducted a series of six trials with genuine "live" LFN complaints to test the workability and usefulness of the procedure. The procedure includes guidance notes and a pro-forma report with step-by-step instructions. It does not provide a prescriptive indicator of nuisance but rather gives a systematic procedure to help environmental health practitioners to form their own opinion. Examples of field measurements and application of the procedure are presented. The procedure and examples are likely to be of particular interest to environmental health practitioners involved in the assessment of LFN complaints.

  15. Assessment of the accuracy and stability of ENSN sensors responses

    NASA Astrophysics Data System (ADS)

    Nofal, Hamed; Mohamed, Omar; Mohanna, Mahmoud; El-Gabry, Mohamed

    2015-06-01

    The Egyptian National Seismic Network (ENSN) is an advanced scientific tool used to investigate earth structure and seismic activity in Egypt. One of the main tasks of the engineering team of ENSN is to keep the accuracy and stability of the high-performance seismic instruments as close as possible to the standards used in international seismic networks. To achieve this task, the seismometers are routinely calibrated. One of the final outcomes of the calibration process is a set of the actual poles and zeros of the seismometers. Because of the strategic importance of the High Dam, we present in this paper the results of calibrating the broad-band (BB) seismometers of type Trillium-40 (40 second). From these sets we computed both amplitude and phase responses as well as their deviations from the nominal responses of this particular seismometer type. The computed deviation of this sub-network is then statistically analyzed to obtain an overall estimate of the accuracy of measurements recorded by it. Such analysis might also reveal stations that are far from the international standards. This test will be carried out regularly at intervals of several months to find out how stable the seismometer response is. As a result, the values of the magnitude and phase errors are confined between 0% and 2% for about 90% of the calibrated seismometers. The average magnitude error was found to be 5% of the nominal value, and the average phase error 4%. In order to eliminate any possible error in the measured data, the measured (true) poles and zeros are used in the response files to replace the nominal values.
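
    A minimal sketch of the kind of comparison the abstract describes: computing amplitude and phase responses from measured versus nominal poles and zeros and reporting the deviation. The pole/zero values and gain below are illustrative placeholders, not the actual Trillium-40 response, and scipy is assumed to be available.

```python
# Hedged sketch: amplitude/phase deviation of a measured seismometer
# response from its nominal response, computed from poles and zeros.
# Pole/zero values are illustrative placeholders, not the published
# Trillium-40 transfer function.
import numpy as np
from scipy.signal import freqs_zpk

nominal_zeros = [0, 0]
nominal_poles = [-0.111 + 0.111j, -0.111 - 0.111j, -180.0]
measured_poles = [-0.113 + 0.109j, -0.113 - 0.109j, -178.0]  # from calibration
gain = 1.0

w = 2 * np.pi * np.logspace(-2, 1, 200)          # 0.01-10 Hz, in rad/s
_, h_nom = freqs_zpk(nominal_zeros, nominal_poles, gain, worN=w)
_, h_meas = freqs_zpk(nominal_zeros, measured_poles, gain, worN=w)

amp_err = 100 * np.abs(np.abs(h_meas) / np.abs(h_nom) - 1)   # percent
phase_err = np.degrees(np.angle(h_meas) - np.angle(h_nom))   # degrees

print(f"max amplitude error: {amp_err.max():.2f} %")
print(f"max phase error:     {np.abs(phase_err).max():.2f} deg")
```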

  16. QUANTITATIVE PROCEDURES FOR NEUROTOXICOLOGY RISK ASSESSMENT

    EPA Science Inventory

    In this project, previously published information on biologically based dose-response model for brain development was used to quantitatively evaluate critical neurodevelopmental processes, and to assess potential chemical impacts on early brain development. This model has been ex...

  17. Teaching and assessing procedural skills: a qualitative study

    PubMed Central

    2013-01-01

    Background Graduating Internal Medicine residents must possess sufficient skills to perform a variety of medical procedures. Little is known about resident experiences of acquiring procedural skills proficiency, of practicing these techniques, or of being assessed on their proficiency. The purpose of this study was to qualitatively investigate resident 1) experiences of the acquisition of procedural skills and 2) perceptions of procedural skills assessment methods available to them. Methods Focus groups were conducted in the weeks following an assessment of procedural skills incorporated into an objective structured clinical examination (OSCE). Using fundamental qualitative description, emergent themes were identified and analyzed. Results Residents perceived procedural skills assessment on the OSCE as a useful formative tool for direct observation and immediate feedback. This positive reaction was regularly expressed in conjunction with a frustration with available assessment systems. Participants reported that proficiency was acquired through resident directed learning with no formal mechanism to ensure acquisition or maintenance of skills. Conclusions The acquisition and assessment of procedural skills in Internal Medicine programs should move toward a more structured system of teaching, deliberate practice and objective assessment. We propose that directed, self-guided learning might meet these needs. PMID:23672617

  18. Accuracy of Specific BIVA for the Assessment of Body Composition in the United States Population

    PubMed Central

    Buffa, Roberto; Saragat, Bruno; Cabras, Stefano; Rinaldi, Andrea C.; Marini, Elisabetta

    2013-01-01

    Background Bioelectrical impedance vector analysis (BIVA) is a technique for the assessment of hydration and nutritional status, used in clinical practice. Specific BIVA is an analytical variant, recently proposed for the Italian elderly population, that adjusts bioelectrical values for body geometry. Objective To evaluate the accuracy of specific BIVA in the adult U.S. population, compared to the ‘classic’ BIVA procedure, using DXA as the reference technique, in order to obtain an interpretative model of body composition. Design A cross-sectional sample of 1590 adult individuals (836 men and 754 women, 21–49 years old) derived from the NHANES 2003–2004 was considered. Classic and specific BIVA were applied. The sensitivity and specificity in recognizing individuals below the 5th and above the 95th percentiles of percent fat (FMDXA%) and extracellular/intracellular water (ECW/ICW) ratio were evaluated by receiver operating characteristic (ROC) curves. Classic and specific BIVA results were compared by probit multiple regression. Results Specific BIVA was significantly more accurate than classic BIVA in evaluating FMDXA% (ROC areas: 0.84–0.92 and 0.49–0.61 respectively; p = 0.002). The evaluation of ECW/ICW was accurate (ROC areas between 0.83 and 0.96) and similarly performed by the two procedures (p = 0.829). The accuracy of specific BIVA was similar in the two sexes (p = 0.144) and in FMDXA% and ECW/ICW (p = 0.869). Conclusions Specific BIVA proved to be an accurate technique. The tolerance ellipses of specific BIVA can be used for evaluating FM% and ECW/ICW in the U.S. adult population.
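
    The comparison hinges on ROC areas for recognizing subjects beyond the 5th/95th percentiles of the DXA reference. A hedged sketch of that evaluation logic, using synthetic scores rather than BIVA or NHANES data, is shown below (scikit-learn assumed available).

```python
# Hedged sketch: comparing two indices at recognizing subjects above the
# 95th percentile of DXA percent fat via ROC areas. Data are synthetic;
# this only illustrates the evaluation logic, not the BIVA method itself.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
fm_dxa = rng.normal(30, 8, n)                       # DXA percent fat (reference)
target = (fm_dxa > np.percentile(fm_dxa, 95)).astype(int)

score_specific = fm_dxa + rng.normal(0, 3, n)       # well-correlated index
score_classic = fm_dxa + rng.normal(0, 20, n)       # weakly-correlated index

print("specific-BIVA-like AUC:", round(roc_auc_score(target, score_specific), 2))
print("classic-BIVA-like AUC: ", round(roc_auc_score(target, score_classic), 2))
```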

  19. Precision and accuracy of visual foliar injury assessments

    SciTech Connect

    Gumpertz, M.L.; Tingey, D.T.; Hogsett, W.E.

    1982-07-01

    The study compared three measures of foliar injury: (i) mean percent leaf area injured of all leaves on the plant, (ii) mean percent leaf area injured of the three most injured leaves, and (iii) the proportion of injured leaves to total number of leaves. For the first measure, the variation caused by reader biases and day-to-day variations was compared with the innate plant-to-plant variation. Bean (Phaseolus vulgaris 'Pinto'), pea (Pisum sativum 'Little Marvel'), radish (Raphanus sativus 'Cherry Belle'), and spinach (Spinacia oleracea 'Northland') plants were exposed to either 3 μL L⁻¹ SO₂ or 0.3 μL L⁻¹ ozone for 2 h. Three leaf readers visually assessed the percent injury on every leaf of each plant while a fourth reader used a transparent grid to make an unbiased assessment for each plant. The mean leaf area injured of the three most injured leaves was highly correlated with that of all leaves on the plant only if the three most injured leaves were <100% injured. The proportion of leaves injured was not highly correlated with percent leaf area injured of all leaves on the plant for any species in this study. The largest source of variation in visual assessments was plant-to-plant variation, which ranged from 44 to 97% of the total variance, followed by variation among readers (0-32% of the variance). Except for radish exposed to ozone, the day-to-day variation accounted for <18% of the total. Reader bias in assessment of ozone injury was significant but could be adjusted for each reader by a simple linear regression (R² = 0.89-0.91) of the visual assessments against the grid assessments.
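
    The reader-bias correction mentioned at the end of the abstract amounts to a simple linear regression of each reader's visual scores on the unbiased grid scores. A small sketch with synthetic scores follows.

```python
# Hedged sketch: adjusting a reader's visual injury scores against the
# unbiased grid assessment with a simple linear regression, in the spirit
# of the abstract. The scores below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(42)
grid = rng.uniform(0, 100, 60)                  # unbiased grid assessment (%)
visual = 5 + 1.1 * grid + rng.normal(0, 6, 60)  # one reader's biased scores

slope, intercept = np.polyfit(grid, visual, 1)
predicted = intercept + slope * grid
r2 = 1 - np.sum((visual - predicted) ** 2) / np.sum((visual - visual.mean()) ** 2)

adjusted = (visual - intercept) / slope         # map reader scores back to grid scale
print(f"reader bias model: visual = {intercept:.1f} + {slope:.2f} * grid, R^2 = {r2:.2f}")
print("first five adjusted scores:", np.round(adjusted[:5], 1))
```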

  20. An evaluation of periodontal assessment procedures among Indiana dental hygienists.

    PubMed

    Stephan, Christine A

    2014-01-01

    Using a descriptive correlational design, this study surveyed periodontal assessment procedures currently performed by Indiana dental hygienists in general dentistry practices to reveal whether deficiencies in assessment exist. Members (n = 354) of the Indiana Dental Hygienists' Association (IDHA) were invited to participate in the survey. A 22-item multiple-choice survey, using Likert scales for responses, was open to participants for three weeks. Descriptive and non-parametric inferential statistics were used to analyze questions related to demographics and assessment procedures practiced. In addition, awareness of the periodontal assessment procedures recommended by the American Academy of Periodontology (AAP) was examined. Of the 354 Indiana dental hygienists surveyed, a 31.9% response rate was achieved. Participants were asked to identify the recommended AAP periodontal assessment procedures they perform. The majority of respondents indicated either frequently or always performing the listed assessment procedures. Additionally, significant relationships were found between demographic factors and participants' awareness and performance of recommended AAP assessment procedures. While information gathered from this study is valuable to the body of literature regarding periodontal disease assessment, continued research with larger survey studies should be conducted to obtain a more accurate national representation of what is being practiced by dental hygienists.

  1. Charts of operational process specifications ("OPSpecs charts") for assessing the precision, accuracy, and quality control needed to satisfy proficiency testing performance criteria.

    PubMed

    Westgard, J O

    1992-07-01

    "Operational process specifications" have been derived from an analytical quality-planning model to assess the precision, accuracy, and quality control (QC) needed to satisfy Proficiency Testing (PT) criteria. These routine operating specifications are presented in the form of an "OPSpecs chart," which describes the operational limits for imprecision and inaccuracy when a desired level of quality assurance is provided by a specific QC procedure. OPSpecs charts can be used to compare the operational limits for different QC procedures and to select a QC procedure that is appropriate for the precision and accuracy of a specific measurement procedure. To select a QC procedure, one plots the inaccuracy and imprecision observed for a measurement procedure on the OPSpecs chart to define the current operating point, which is then compared with the operational limits of candidate QC procedures. Any QC procedure whose operational limits are greater than the measurement procedure's operating point will provide a known assurance, with the percent chance specified by the OPSpecs chart, that critical analytical errors will be detected. OPSpecs charts for a 10% PT criterion are presented to illustrate the selection of QC procedures for measurement procedures with different amounts of imprecision and inaccuracy. Normalized OPSpecs charts are presented to permit a more general assessment of the analytical performance required with commonly used QC procedures.

  2. Effects of Procedural Content and Task Repetition on Accuracy and Fluency in an EFL Context

    ERIC Educational Resources Information Center

    Patanasorn, Chomraj

    2010-01-01

    Task-supported language teaching can help provide L2 learners communicative practice in EFL contexts. Additionally, it has been suggested that repetition of tasks can help learners develop their accuracy and fluency (Bygate, 2001; Gass, Mackey, Fernandez, & Alvarez-Torres, 1999; Lynch & Maclean, 2000). The purposes of the study were to investigate…

  3. Self-assessment procedure using fuzzy sets

    NASA Astrophysics Data System (ADS)

    Mimi, Fotini

    2000-10-01

    Self-Assessment processes, initiated by a company itself and carried out by its own people, are considered to be the starting point for a regular strategic or operative planning process to ensure continuous quality improvement. Their importance has increased with the growing relevance and acceptance of international quality awards such as the Malcolm Baldrige National Quality Award, the European Quality Award and the Deming Prize. Award winners in particular use the instrument of a systematic and regular Self-Assessment, and not only because they have to verify their quality and business results for at least three years. The Total Quality Model of the European Foundation for Quality Management (EFQM), used for the European Quality Award, is the basis for Self-Assessment in Europe. This paper presents a self-assessment supporting method based on a methodology of fuzzy control systems, providing an effective means of converting the linguistic approximation into an automatic control strategy. In particular, the elements of the Quality Model mentioned above are interpreted as linguistic variables. The LR-type fuzzy interval is used for their representation. The input data have a qualitative character based on empirical investigation and expert knowledge, and therefore the base variables are ordinally scaled. The aggregation process takes place on the basis of a hierarchical structure. Finally, in order to make the method more practical to use, a PC-based software system is developed and implemented.

  4. The Search for Adult Assessment Procedures.

    ERIC Educational Resources Information Center

    Usnick, Virginia; Babbitt, Beatrice C.

    1993-01-01

    Reviewed 12 currently available mathematics assessment tools to determine whether they would be appropriate for use in a college-level mathematics clinic. Tests were assigned to two domains: cognitive and mathematical content. Major deficiencies of the tests are cited. (Contains 15 references.) (MDH)

  5. Assessing Learning: Standards, Principles, and Procedures.

    ERIC Educational Resources Information Center

    Whitaker, Urban

    The monograph provides a systematic explication of the underlying standards and principles that have been developed to help adult learners articulate what they know and can do, to clarify their claims to creditable achievement, to help assessors improve the reliability of assessment, and to save assessor time. Ten academic and administrative…

  6. Improving the accuracy of weight status assessment in infancy research.

    PubMed

    Dixon, Wallace E; Dalton, William T; Berry, Sarah M; Carroll, Vincent A

    2014-08-01

    Both researchers and primary care providers vary in their methods for assessing weight status in infants. The purpose of the present investigation was to compare standing-height-derived to recumbent-length-derived weight-for-length standardized (WLZ) scores, using the WHO growth curves, in a convenience sample of infants who visited the lab at 18 and 21 months of age. Fifty-eight primarily White, middle class infants (25 girls) from a semi-rural region of southern Appalachia visited the lab at 18 months, with 45 infants returning 3 months later. We found that recumbent-length-derived WLZ scores were significantly higher at 18 months than corresponding standing-height-derived WLZ scores. We also found that recumbent-length-derived WLZ scores, but not those derived from standing height measures, decreased significantly from 18 to 21 months. Although these differential results are attributable to the WHO database data entry syntax, which automatically corrects standing height measurements by adding 0.7 cm, they suggest that researchers proceed cautiously when using standing-height derived measures when calculating infant BMI z-scores. Our results suggest that for practical purposes, standing height measurements may be preferred, so long as they are entered into the WHO database as recumbent length measurements. We also encourage basic science infancy researchers to include BMI assessments as part of their routine assessment protocols, to serve as potential outcome measures for other basic science variables of theoretical interest.
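
    The WHO correction the authors refer to, and the weight-for-length z-score itself, can be sketched as follows. The LMS formula is standard, but the lookup values below are placeholders, not actual WHO reference parameters.

```python
# Hedged sketch: converting a standing-height measurement to recumbent
# length (adding 0.7 cm, the WHO convention) before computing a
# weight-for-length z-score with the LMS formula. The L, M, S values
# below are placeholders, not actual WHO reference values.
def lms_zscore(x, L, M, S):
    """Standard LMS z-score: z = ((x/M)**L - 1) / (L*S) for L != 0."""
    return ((x / M) ** L - 1) / (L * S)

def weight_for_length_z(weight_kg, standing_height_cm, lms_lookup):
    """Correct standing height to recumbent length, then look up the
    (hypothetical) LMS parameters for that length and score the weight."""
    recumbent_length_cm = standing_height_cm + 0.7   # WHO correction
    L, M, S = lms_lookup(recumbent_length_cm)
    return lms_zscore(weight_kg, L, M, S)

# Placeholder lookup: in practice this would interpolate the WHO
# weight-for-length reference table for the child's sex.
def fake_lookup(length_cm):
    return (-0.35, 0.12 * length_cm + 0.5, 0.082)

print(round(weight_for_length_z(11.5, 81.0, fake_lookup), 2))
```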

  7. Mapping with Small UAS: A Point Cloud Accuracy Assessment

    NASA Astrophysics Data System (ADS)

    Toth, Charles; Jozkow, Grzegorz; Grejner-Brzezinska, Dorota

    2015-12-01

    Interest in using inexpensive Unmanned Aerial System (UAS) technology for topographic mapping has recently significantly increased. Small UAS platforms equipped with consumer grade cameras can easily acquire high-resolution aerial imagery allowing for dense point cloud generation, followed by surface model creation and orthophoto production. In contrast to conventional airborne mapping systems, UAS has limited ground coverage due to low flying height and limited flying time, yet it offers an attractive alternative to high performance airborne systems, as the cost of the sensors and platform, and the flight logistics, is relatively low. In addition, UAS is better suited for small area data acquisitions and to acquire data in difficult to access areas, such as urban canyons or densely built-up environments. The main question with respect to the use of UAS is whether the inexpensive consumer sensors installed in UAS platforms can provide the geospatial data quality comparable to that provided by conventional systems. This study aims at the performance evaluation of the current practice of UAS-based topographic mapping by reviewing the practical aspects of sensor configuration, georeferencing and point cloud generation, including comparisons between sensor types and processing tools. The main objective is to provide accuracy characterization and practical information for selecting and using UAS solutions in general mapping applications. The analysis is based on statistical evaluation as well as visual examination of experimental data acquired by a Bergen octocopter with three different image sensor configurations, including a GoPro HERO3+ Black Edition, a Nikon D800 DSLR and a Velodyne HDL-32. In addition, georeferencing data of varying quality were acquired and evaluated. The optical imagery was processed by using three commercial point cloud generation tools. Comparing point clouds created by active and passive sensors by using different quality sensors, and finally

  8. A Procedure for Assessing Students' Ability to Write Compositions.

    ERIC Educational Resources Information Center

    Cohen, Arthur M.

    This investigation developed a procedure for scoring English compositions that would be simple enough for use by junior college instructors with minimal statistical assistance, and still yield data that would allow sound inferences regarding student placement procedures and assessment of instructional effects. Twenty-one instructors from 14 junior…

  9. Attribute-Level and Pattern-Level Classification Consistency and Accuracy Indices for Cognitive Diagnostic Assessment

    ERIC Educational Resources Information Center

    Wang, Wenyi; Song, Lihong; Chen, Ping; Meng, Yaru; Ding, Shuliang

    2015-01-01

    Classification consistency and accuracy are viewed as important indicators for evaluating the reliability and validity of classification results in cognitive diagnostic assessment (CDA). Pattern-level classification consistency and accuracy indices were introduced by Cui, Gierl, and Chang. However, the indices at the attribute level have not yet…

  10. Thermal radiation view factor: Methods, accuracy and computer-aided procedures

    NASA Technical Reports Server (NTRS)

    Kadaba, P. V.

    1982-01-01

    Computer-aided thermal analysis programs, which predict whether orbiting equipment will remain within a predetermined acceptable temperature range prior to its stationing in various attitudes with respect to the Sun and the Earth, are examined. The complexity of the surface geometries suggests the use of numerical schemes for the determination of these view factors. Basic definitions and standard methods, which form the basis for various digital computer methods, and various numerical methods are presented. The physical model and the mathematical methods on which a number of available programs are built are summarized. The strengths and the weaknesses of the methods employed, the accuracy of the calculations and the time required for computations are evaluated. The situations where accuracies are important for energy calculations are identified and methods to save computational time are proposed. A guide to the best use of the available programs at several centers and the future choices for efficient use of digital computers are included in the recommendations.
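
    As an illustration of the numerical schemes the summary refers to, here is a brute-force estimate of the view factor between two parallel, directly opposed square plates; the geometry and grid size are arbitrary, and this is not the method of any particular program discussed in the report.

```python
# Hedged sketch: a brute-force double-summation estimate of the radiation
# view factor between two parallel, directly opposed square plates.
# Accuracy improves (and run time grows) with finer grids.
import numpy as np

def view_factor_parallel_squares(side=1.0, gap=1.0, n=40):
    """F_1->2 for two coaxial parallel squares of equal side length."""
    d = side / n                      # sub-facet edge
    dA = d * d
    xs = (np.arange(n) + 0.5) * d     # facet-centre coordinates
    x1, y1 = np.meshgrid(xs, xs)      # plate 1
    x2, y2 = np.meshgrid(xs, xs)      # plate 2 (directly above, at height gap)
    f = 0.0
    for i in range(n):
        for j in range(n):
            r2 = (x2 - x1[i, j]) ** 2 + (y2 - y1[i, j]) ** 2 + gap ** 2
            # cos(theta1) = cos(theta2) = gap / sqrt(r2) for parallel plates
            f += np.sum(gap ** 2 / (np.pi * r2 ** 2)) * dA * dA
    return f / (side * side)          # divide by emitting area A1

print(round(view_factor_parallel_squares(), 3))   # tabulated analytical value ~0.1998
```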

  11. Procedural ultrasound in pediatric patients: techniques and tips for accuracy and safety.

    PubMed

    Lin, Sophia

    2016-06-01

    Point-of-care ultrasound is becoming more prevalent in pediatric emergency departments as a critical adjunct to both diagnosis and procedure guidance. It is cost-effective, safe for unstable patients, and easily repeatable as a patient's clinical status changes. Point-of-care ultrasound does not expose the patient to ionizing radiation. Because point-of-care ultrasound in pediatric emergency medicine is relatively new, the body of literature evaluating its utility is small, but growing. Data from adult emergency medicine, radiology, critical care, and anesthesia evaluating the utility of ultrasound guidance must be extrapolated to pediatric emergency medicine. This issue will review the adult literature and the available pediatric literature comparing ultrasound guidance to more traditional approaches. Methods for using ultrasound guidance to perform various procedures, and the pitfalls associated with each procedure, will also be described.

  12. Assessment of RFID Read Accuracy for ISS Water Kit

    NASA Technical Reports Server (NTRS)

    Chu, Andrew

    2011-01-01

    The Space Life Sciences Directorate/Medical Informatics and Health Care Systems Branch (SD4) is assessing the benefits of Radio Frequency Identification (RFID) technology for tracking items flown onboard the International Space Station (ISS). As an initial study, the Avionic Systems Division Electromagnetic Systems Branch (EV4) is collaborating with SD4 to affix RFID tags to a water kit supplied by SD4 and study the read success rate of the tagged items. The tagged water kit inside a Cargo Transfer Bag (CTB) was inventoried using three different RFID technologies, including the Johnson Space Center Building 14 Wireless Habitat Test Bed RFID portal, an RFID hand-held reader being targeted for use on board the ISS, and an RFID enclosure designed and prototyped by EV4.

  13. Accuracy of virtual models in the assessment of maxillary defects

    PubMed Central

    Kurşun, Şebnem; Kılıç, Cenk; Özen, Tuncer

    2015-01-01

    Purpose This study aimed to assess the reliability of measurements performed on three-dimensional (3D) virtual models of maxillary defects obtained using cone-beam computed tomography (CBCT) and 3D optical scanning. Materials and Methods Mechanical cavities simulating maxillary defects were prepared on the hard palate of nine cadavers. Images were obtained using a CBCT unit at three different fields-of-views (FOVs) and voxel sizes: 1) 60×60 mm FOV, 0.125 mm3 (FOV60); 2) 80×80 mm FOV, 0.160 mm3 (FOV80); and 3) 100×100 mm FOV, 0.250 mm3 (FOV100). Superimposition of the images was performed using software called VRMesh Design. Automated volume measurements were conducted, and differences between surfaces were demonstrated. Silicon impressions obtained from the defects were also scanned with a 3D optical scanner. Virtual models obtained using VRMesh Design were compared with impressions obtained by scanning silicon models. Gold standard volumes of the impression models were then compared with CBCT and 3D scanner measurements. Further, the general linear model was used, and the significance was set to p=0.05. Results A comparison of the results obtained by the observers and methods revealed the p values to be smaller than 0.05, suggesting that the measurement variations were caused by both methods and observers along with the different cadaver specimens used. Further, the 3D scanner measurements were closer to the gold standard measurements when compared to the CBCT measurements. Conclusion In the assessment of artificially created maxillary defects, the 3D scanner measurements were more accurate than the CBCT measurements. PMID:25793180

  14. 12 CFR 630.5 - Accuracy of reports and assessment of internal control over financial reporting.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... CREDIT SYSTEM General § 630.5 Accuracy of reports and assessment of internal control over financial... assessment of internal control over financial reporting. (1) Annual reports must include a report by the Funding Corporation's management assessing the effectiveness of the internal control over...

  15. Evaluating the Effect of Learning Style and Student Background on Self-Assessment Accuracy

    ERIC Educational Resources Information Center

    Alaoutinen, Satu

    2012-01-01

    This study evaluates a new taxonomy-based self-assessment scale and examines factors that affect assessment accuracy and course performance. The scale is based on Bloom's Revised Taxonomy and is evaluated by comparing students' self-assessment results with course performance in a programming course. Correlation has been used to reveal possible…

  16. Preschoolers' Person Description and Identification Accuracy: A Comparison of the Simultaneous and Elimination Lineup Procedures

    ERIC Educational Resources Information Center

    Pozzulo, Joanna D.; Dempsey, Julie; Crescini, Charmagne

    2009-01-01

    Preschoolers' (3- to 6-year-olds) person description and identification abilities were examined using the simultaneous and elimination lineup procedures. Participants (N = 100) were exposed to a 20-minute mask-making session conducted by a female confederate who acted as the mask-making teacher. After a brief delay (20 min), participants were…

  17. Test-Induced Priming Impairs Source Monitoring Accuracy in the DRM Procedure

    ERIC Educational Resources Information Center

    Dewhurst, Stephen A.; Knott, Lauren M.; Howe, Mark L.

    2011-01-01

    Three experiments investigated the effects of test-induced priming (TIP) on false recognition in the Deese/Roediger-McDermott procedure (Deese, 1959; Roediger & McDermott, 1995). In Experiment 1, TIP significantly increased false recognition for participants who made old/new decisions at test but not for participants who made remember/know…

  18. Accuracy Assessment of GPS Buoy Sea Level Measurements for Coastal Applications

    NASA Astrophysics Data System (ADS)

    Chiu, S.; Cheng, K.

    2008-12-01

    The GPS buoy in this study consists of a geodetic antenna and a compact floater, with the GPS receiver and power supply tethered to a boat. Coastal applications using GPS include monitoring of sea level and its change, calibration of satellite altimeters, modeling of hydrological or geophysical parameters, seafloor geodesy, and others. For these applications, understanding the overall data or model quality requires knowledge of the position accuracy of GPS buoys or GPS-equipped vessels. Newer GPS data processing techniques, e.g., Precise Point Positioning (PPP) and the virtual reference station (VRS) approach, require a priori information obtained from a regional GPS network. While the required a priori information can be implemented on land, it may not be available at sea. Hence, in this study, the GPS buoy was positioned with respect to an onshore GPS reference station using the traditional double-difference technique. Since the atmosphere decorrelates as the baseline (the distance between the buoy and the reference station) increases, the positioning accuracy consequently decreases. Therefore, this study aims to assess the buoy position accuracy as the baseline increases, in order to quantify the upper limit of sea level measured by the GPS buoy. A GPS buoy campaign was conducted by National Chung Cheng University in An Ping, Taiwan, with an 8-hour GPS buoy data collection. In addition, a GPS network containing 4 Continuous GPS (CGPS) stations in Taiwan was established with the goal of enabling baselines of different lengths for buoy data processing. A vector relation from the network was utilized to find the correct ambiguities, which were applied to the long-baseline solution to eliminate the position error caused by incorrect ambiguities. After this procedure, a 3.6-cm discrepancy was found in the mean sea level solution between the long (~80 km) and the short (~1.5 km) baselines. The discrepancy between a

  19. Teaching and assessing procedural skills using simulation: metrics and methodology.

    PubMed

    Lammers, Richard L; Davenport, Moira; Korley, Frederick; Griswold-Theodorson, Sharon; Fitch, Michael T; Narang, Aneesh T; Evans, Leigh V; Gross, Amy; Rodriguez, Elliot; Dodge, Kelly L; Hamann, Cara J; Robey, Walter C

    2008-11-01

    Simulation allows educators to develop learner-focused training and outcomes-based assessments. However, the effectiveness and validity of simulation-based training in emergency medicine (EM) requires further investigation. Teaching and testing technical skills require methods and assessment instruments that are somewhat different than those used for cognitive or team skills. Drawing from work published by other medical disciplines as well as educational, behavioral, and human factors research, the authors developed six research themes: measurement of procedural skills; development of performance standards; assessment and validation of training methods, simulator models, and assessment tools; optimization of training methods; transfer of skills learned on simulator models to patients; and prevention of skill decay over time. The article reviews relevant and established educational research methodologies and identifies gaps in our knowledge of how physicians learn procedures. The authors present questions requiring further research that, once answered, will advance understanding of simulation-based procedural training and assessment in EM.

  20. Surgical accuracy of three-dimensional virtual planning: a pilot study of bimaxillary orthognathic procedures including maxillary segmentation.

    PubMed

    Stokbro, K; Aagaard, E; Torkov, P; Bell, R B; Thygesen, T

    2016-01-01

    This retrospective study evaluated the precision and positional accuracy of different orthognathic procedures following virtual surgical planning in 30 patients. To date, no studies of three-dimensional virtual surgical planning have evaluated the influence of segmentation on positional accuracy and transverse expansion. Furthermore, only a few have evaluated the precision and accuracy of genioplasty in placement of the chin segment. The virtual surgical plan was compared with the postsurgical outcome by using three linear and three rotational measurements. The influence of maxillary segmentation was analyzed in both superior and inferior maxillary repositioning. In addition, transverse surgical expansion was compared with the postsurgical expansion obtained. An overall high degree of linear accuracy between planned and postsurgical outcomes was found, but with a large standard deviation. Rotational difference showed an increase in pitch, mainly affecting the maxilla. Segmentation had no significant influence on maxillary placement. However, a posterior movement was observed in inferior maxillary repositioning. A lack of transverse expansion was observed in the segmented maxilla independent of the degree of expansion.

  1. A Self-Assessment Procedure for Use in Evaluation Training

    ERIC Educational Resources Information Center

    Stufflebeam, Daniel L.; Wingate, Lori A.

    2005-01-01

    This article describes the Self-Assessment of Program Evaluation Expertise instrument and procedure developed to help participants assess their learning gains in a 3-week evaluation institute. Participants completed the instrument in a pre- and posttest format. To reduce both the threat of embarrassment from individual results and the temptation…

  2. 42 CFR 90.3 - Procedures for requesting health assessments.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    42 Public Health 1 (2010-10-01). Section 90.3, Public Health, PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES, HEALTH ASSESSMENTS AND HEALTH EFFECTS STUDIES OF HAZARDOUS SUBSTANCES RELEASES AND FACILITIES...

  3. Comparative Validity of the Shedler and Westen Assessment Procedure-200

    ERIC Educational Resources Information Center

    Mullins-Sweatt, Stephanie N.; Widiger, Thomas A.

    2008-01-01

    A predominant dimensional model of general personality structure is the five-factor model (FFM). Quite a number of alternative instruments have been developed to assess the domains of the FFM. The current study compares the validity of 2 alternative versions of the Shedler and Westen Assessment Procedure (SWAP-200) FFM scales, 1 that was developed…

  4. DMM assessments of attachment and adaptation: Procedures, validity and utility.

    PubMed

    Farnfield, Steve; Hautamäki, Airi; Nørbech, Peder; Sahhar, Nicola

    2010-07-01

    This article gives a brief overview of the Dynamic-Maturational Model of attachment and adaptation (DMM; Crittenden, 2008) together with the various DMM assessments of attachment that have been developed for specific stages of development. Each assessment is discussed in terms of procedure, outcomes, validity, advantages and limitations, comparable procedures and areas for further research and validation. The aims are twofold: to provide an introduction to DMM theory and its application that underlie the articles in this issue of CCPP; and to provide researchers and clinicians with a guide to DMM assessments.

  5. Peaks, plateaus, numerical instabilities, and achievable accuracy in Galerkin and norm minimizing procedures for solving Ax=b

    SciTech Connect

    Cullum, J.

    1994-12-31

    Plots of the residual norms generated by Galerkin procedures for solving Ax = b often exhibit strings of irregular peaks. At seemingly erratic stages in the iterations, peaks appear in the residual norm plot, intervals of iterations over which the norms initially increase and then decrease. Plots of the residual norms generated by related norm minimizing procedures often exhibit long plateaus, sequences of iterations over which reductions in the size of the residual norm are unacceptably small. In an earlier paper the author discussed and derived relationships between such peaks and plateaus within corresponding Galerkin/Norm Minimizing pairs of such methods. In this paper, through a set of numerical experiments, the author examines connections between peaks, plateaus, numerical instabilities, and the achievable accuracy for such pairs of iterative methods. Three pairs of methods, GMRES/Arnoldi, QMR/BCG, and two bidiagonalization methods are studied.

  6. An improved multivariate analytical method to assess the accuracy of acoustic sediment classification maps.

    NASA Astrophysics Data System (ADS)

    Biondo, M.; Bartholomä, A.

    2014-12-01

    High-resolution hydroacoustic methods have been successfully employed for the detailed classification of sedimentary habitats. The fine-scale mapping of very heterogeneous, patchy sedimentary facies, and the compound effect of multiple non-linear physical processes on the acoustic signal, cause the classification of backscatter images to be subject to a great level of uncertainty. Standard procedures for assessing the accuracy of acoustic classification maps are not yet established. This study applies different statistical techniques to automatically classified acoustic images with the aim of i) quantifying the ability of backscatter to resolve grain size distributions, ii) understanding complex patterns influenced by factors other than grain size variations, and iii) designing innovative repeatable statistical procedures to spatially assess classification uncertainties. A high-frequency (450 kHz) sidescan sonar survey, carried out in the year 2012 in the shallow upper-mesotidal inlet of the Jade Bay (German North Sea), allowed the mapping of 100 km² of surficial sediment with a resolution and coverage never acquired before in the area. The backscatter mosaic was ground-truthed using a large dataset of sediment grab sample information (2009-2011). Multivariate procedures were employed for modelling the relationship between acoustic descriptors and granulometric variables in order to evaluate the correctness of acoustic class allocation and sediment group separation. Complex patterns in the acoustic signal appeared to be controlled by the combined effect of surface roughness, sorting and mean grain size variations. The area is dominated by silt and fine sand in very mixed compositions; in this fine-grained matrix, the percentage of gravel proved to be the prevailing factor affecting backscatter variability. In the absence of coarse material, sorting mostly affected the ability to detect gradual but significant changes in seabed types. Misclassification due to temporal discrepancies

  7. Procedures for scour assessments at bridges in Pennsylvania

    USGS Publications Warehouse

    Cinotto, Peter J.; White, Kirk E.

    2000-01-01

    Scour is the process and result of flowing water eroding the bed and banks of a stream. Scour at nearly 14,300 bridges(1) spanning water, and the stability of river and stream channels in Pennsylvania, are being assessed by the U.S. Geological Survey (USGS) in cooperation with the Pennsylvania Department of Transportation (PennDOT). Procedures for bridge-scour assessments have been established to address the needs of PennDOT in meeting a 1988 Federal Highway Administration mandate requiring states to establish a program to assess all public bridges over water for their vulnerability to scour. The procedures also have been established to help develop an understanding of the local and regional factors that affect scour and channel stability. This report describes procedures for the assessment of scour at all bridges that are 20 feet or greater in length that span water in Pennsylvania. There are two basic types of assessment: field-viewed bridge site assessments, for which USGS personnel visit the bridge site, and office-reviewed bridge site assessments, for which USGS personnel compile PennDOT data and do not visit the bridge site. Both types of assessments are primarily focused at assisting PennDOT in meeting the requirements of the Federal Highway Administration mandate; however, both assessments include procedures for the collection and processing of ancillary data for subsequent analysis. Date of bridge construction and the accessibility of the bridge substructure units for inspection determine which type of assessment a bridge receives. A Scour-Critical Bridge Indicator Code and a Scour Assessment Rating are computed from selected collected and compiled data. PennDOT personnel assign the final Scour-Critical Bridge Indicator Code and a Scour Assessment Rating on the basis of their review of all data. (1)Words presented in bold type are defined in the Glossary section of this report.

  8. Self-Assessment in University Assessment of Prior Learning Procedures

    ERIC Educational Resources Information Center

    Brinke, D. Joosten-Ten; Sluijsmans, D. M. A.; Jochems, W. M. G.

    2009-01-01

    Competency-based university education, in which lifelong learning and flexible learning are key elements, demands a renewed vision on assessment. Within this vision, Assessment of Prior Learning (APL), in which learners have to show their prior learning in order for their goals to be recognised, becomes an important element. This article focuses…

  9. Comparative assessment of thematic accuracy of GLC maps for specific applications using existing reference data

    NASA Astrophysics Data System (ADS)

    Tsendbazar, N. E.; de Bruin, S.; Mora, B.; Schouten, L.; Herold, M.

    2016-02-01

    Current global land cover (GLC) maps, which serve as inputs to various applications and models, are based on different data sources and methods. Therefore, comparing GLC maps is challenging. Statistical comparison of GLC maps is further complicated by the lack of a reference dataset that is suitable for validating multiple maps. This study utilizes the existing Globcover-2005 reference dataset to compare the thematic accuracies of three GLC maps for the year 2005 (Globcover, LC-CCI and MODIS). We translated and reinterpreted the LCCS (land cover classification system) classifier information of the reference dataset into the different map legends. The three maps were evaluated for a variety of applications, i.e., general circulation models, dynamic global vegetation models, agriculture assessments, carbon estimation and biodiversity assessments, using weighted accuracy assessment. Based on the impact of land cover confusions on the overall weighted accuracy of the GLC maps, we identified map improvement priorities. Overall accuracies were 70.8 ± 1.4%, 71.4 ± 1.3%, and 61.3 ± 1.5% for LC-CCI, MODIS, and Globcover, respectively. Weighted accuracy assessments produced increased overall accuracies (80-93%), since not all class confusion errors are important for specific applications. As a common denominator for all applications, the classes mixed trees, shrubs, grasses, and cropland were identified as improvement priorities. The results demonstrate the necessity of accounting for dissimilarities in the importance of map classification errors for different user applications. To determine the fitness of use of GLC maps, the accuracy of GLC maps should be assessed per application; there is no single-figure accuracy estimate expressing map fitness for all purposes.
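
    The weighted accuracy assessment used here can be illustrated with a small error matrix and an application-specific weight matrix that down-weights harmless confusions; the numbers below are invented and do not come from the Globcover, LC-CCI or MODIS evaluations.

```python
# Hedged sketch: weighted overall accuracy from an error matrix, where a
# weight matrix encodes how serious each class confusion is for a given
# application (1 = full agreement, 0 = severe confusion). Matrices are
# illustrative only.
import numpy as np

# Error matrix (rows = map classes, columns = reference classes), as counts.
error = np.array([[50,  5,  2],
                  [ 8, 40,  6],
                  [ 3,  7, 45]], dtype=float)

# Application-specific weights: confusing the first two classes (say, two
# tree classes) matters less than confusing either with cropland.
weights = np.array([[1.0, 0.7, 0.0],
                    [0.7, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])

p = error / error.sum()                       # cell proportions
overall = np.trace(p) * 100                   # conventional overall accuracy
weighted = np.sum(weights * p) * 100          # weighted overall accuracy
print(f"overall accuracy: {overall:.1f} %, weighted: {weighted:.1f} %")
```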

  10. Peer Interaction and Corrective Feedback for Accuracy and Fluency Development: Monitoring, Practice, and Proceduralization

    ERIC Educational Resources Information Center

    Sato, Masatoshi; Lyster, Roy

    2012-01-01

    This quasi-experimental study is aimed at (a) teaching learners how to provide corrective feedback (CF) during peer interaction and (b) assessing the effects of peer interaction and CF on second language (L2) development. Four university-level English classes in Japan participated (N = 167), each assigned to one of four treatment conditions. Of…

  11. Investigating the ultimate accuracy of Doppler-broadening thermometry by means of a global fitting procedure

    NASA Astrophysics Data System (ADS)

    Amodio, Pasquale; De Vizia, Maria Domenica; Moretti, Luigi; Gianfrani, Livio

    2015-09-01

    Doppler-limited, high-precision, molecular spectroscopy in the linear regime of interaction may refine our knowledge of the Boltzmann constant. To this end, the global uncertainty in the retrieval of the Doppler width should be reduced to 1 part in 10⁶, which is a rather challenging target. So far, Doppler-broadening thermometry has been mostly limited by the uncertainty associated with the line shape model that is adopted for the nonlinear least-squares fits of experimental spectra. In this paper, we deeply investigate this issue by using a very realistic and sophisticated model, known as the partially correlated speed-dependent Keilson-Storer profile, to reproduce near-infrared water spectra. A global approach has been developed to fit a large number of numerically simulated spectra, testing a variety of simplified line-shape models. It turns out that the most appropriate model is the speed-dependent hard-collision profile. We demonstrate that the Doppler width can be determined with relative precision and accuracy, respectively, of 0.42 and 0.75 parts per million.
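
    For orientation, the ideal Doppler width underlying this kind of thermometry follows from a simple formula; the sketch below uses a placeholder near-infrared transition frequency for H2O and shows why a part-per-million width target translates into a comparable temperature target.

```python
# Hedged sketch: the ideal Doppler (1/e half-) width used in
# Doppler-broadening thermometry, and the temperature sensitivity that
# motivates the part-per-million accuracy target. The transition frequency
# is a rough placeholder for a near-infrared H2O line, not an exact value.
import math

k_B = 1.380649e-23                    # J/K (exact, SI)
c = 299792458.0                       # m/s
m_H2O = 18.0106 * 1.66053906660e-27   # kg, approximate molecular mass
nu0 = 2.1e14                          # Hz, placeholder transition frequency

def doppler_width(T):
    """1/e half-width: Delta_nu_D = (nu0/c) * sqrt(2 k_B T / m)."""
    return nu0 / c * math.sqrt(2 * k_B * T / m_H2O)

T = 296.0
dw = doppler_width(T)
# Since Delta_nu_D ~ sqrt(T), a 1 ppm error in the width maps to ~2 ppm in T.
print(f"Doppler width at {T} K: {dw/1e6:.2f} MHz")
print(f"width change for +1 ppm in T: {(doppler_width(T*(1+1e-6)) - dw):.3f} Hz")
```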

  12. Thematic Accuracy Assessment of the 2011 National Land Cover Database (NLCD)

    EPA Science Inventory

    Accuracy assessment is a standard protocol of National Land Cover Database (NLCD) mapping. Here we report agreement statistics between map and reference labels for NLCD 2011, which includes land cover for ca. 2001, ca. 2006, and ca. 2011. The two main objectives were assessment o...

  13. Examining rating quality in writing assessment: rater agreement, error, and accuracy.

    PubMed

    Wind, Stefanie A; Engelhard, George

    2012-01-01

    The use of performance assessments in which human raters evaluate student achievement has become increasingly prevalent in high-stakes assessment systems such as those associated with recent policy initiatives (e.g., Race to the Top). In this study, indices of rating quality are compared between two measurement perspectives. Within the context of a large-scale writing assessment, this study focuses on the alignment between indices of rater agreement, error, and accuracy based on traditional and Rasch measurement theory perspectives. Major empirical findings suggest that Rasch-based indices of model-data fit for ratings provide information about raters that is comparable to direct measures of accuracy. The use of easily obtained approximations of direct accuracy measures holds significant implications for monitoring rating quality in large-scale rater-mediated performance assessments.

  14. Beyond the Correlation Coefficient in Studies of Self-Assessment Accuracy: Commentary on Zell & Krizan (2014).

    PubMed

    Dunning, David; Helzer, Erik G

    2014-03-01

    Zell and Krizan (2014, this issue) provide a comprehensive yet incomplete portrait of the factors influencing accurate self-assessment. This is no fault of their own. Much work on self-accuracy focuses on the correlation coefficient as the measure of accuracy, but it is not the only way self-accuracy can be measured. As such, its use can provide an incomplete and potentially misleading story. We urge researchers to explore measures of bias as well as correlation, because there are indirect hints that each responds to a different psychological dynamic. We further entreat researchers to develop other creative measures of accuracy and not to forget that self-accuracy may come not only from personal knowledge but also from insight about human nature more generally.

  15. Impact assessment procedures for sustainable development: A complexity theory perspective

    SciTech Connect

    Nooteboom, Sibout

    2007-10-15

    The author assumes that effective Impact Assessment procedures should somehow contribute to sustainable development. There is no widely agreed framework for evaluating such effectiveness. The author suggests that complexity theories may offer criteria. The relevant question is 'do Impact Assessment Procedures contribute to the 'requisite variety' of a social system for it to deal with changing circumstances?' Requisite variety theoretically relates to the capability of a system to deal with changes in its environment. The author reconstructs how thinking about achieving sustainable development has developed in a sequence of discourses in The Netherlands since the 1970s. Each new discourse built on the previous ones, and is supposed to have added to 'requisite variety'. The author asserts that Impact Assessment procedures may be a necessary component in such sequences and derives possible criteria for effectiveness.

  16. Accuracy assessment of the integration of GNSS and a MEMS IMU in a terrestrial platform.

    PubMed

    Madeira, Sergio; Yan, Wenlin; Bastos, Luísa; Gonçalves, José A

    2014-11-04

    MEMS Inertial Measurement Units are available at low cost and can replace expensive units in mobile mapping platforms which need direct georeferencing. This is done through integration with GNSS measurements in order to achieve a continuous positioning solution and to obtain orientation angles. This paper presents the results of the assessment of the accuracy of a system that integrates GNSS and a MEMS IMU in a terrestrial platform. We describe the methodology used and the tests performed, in which the accuracy of the position and orientation parameters was assessed using an independent photogrammetric technique employing cameras that are part of the mobile mapping system developed by the authors. Results for the accuracy of attitude angles and coordinates show that accuracies better than a decimeter in positions, and under a degree in angles, can be achieved even considering that the terrestrial platform is operating in less than favorable environments.

  17. Thematic accuracy assessment of the 2011 National Land Cover Database (NLCD)

    USGS Publications Warehouse

    Wickham, James; Stehman, Stephen V.; Gass, Leila; Dewitz, Jon; Sorenson, Daniel G.; Granneman, Brian J.; Poss, Richard V.; Baer, Lori Anne

    2017-01-01

    Accuracy assessment is a standard protocol of National Land Cover Database (NLCD) mapping. Here we report agreement statistics between map and reference labels for NLCD 2011, which includes land cover for ca. 2001, ca. 2006, and ca. 2011. The two main objectives were assessment of agreement between map and reference labels for the three, single-date NLCD land cover products at Level II and Level I of the classification hierarchy, and agreement for 17 land cover change reporting themes based on Level I classes (e.g., forest loss; forest gain; forest, no change) for three change periods (2001–2006, 2006–2011, and 2001–2011). The single-date overall accuracies were 82%, 83%, and 83% at Level II and 88%, 89%, and 89% at Level I for 2011, 2006, and 2001, respectively. Many class-specific user's accuracies met or exceeded a previously established nominal accuracy benchmark of 85%. Overall accuracies for 2006 and 2001 land cover components of NLCD 2011 were approximately 4% higher (at Level II and Level I) than the overall accuracies for the same components of NLCD 2006. The high Level I overall, user's, and producer's accuracies for the single-date eras in NLCD 2011 did not translate into high class-specific user's and producer's accuracies for many of the 17 change reporting themes. User's accuracies were high for the no change reporting themes, commonly exceeding 85%, but were typically much lower for the reporting themes that represented change. Only forest loss, forest gain, and urban gain had user's accuracies that exceeded 70%. Lower user's accuracies for the other change reporting themes may be attributable to the difficulty in determining the context of grass (e.g., open urban, grassland, agriculture) and between the components of the forest-shrubland-grassland gradient at either the mapping phase, reference label assignment phase, or both. NLCD 2011 user's accuracies for forest loss, forest gain, and urban gain compare favorably with results from other
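
    The per-class figures quoted (overall, user's and producer's accuracies) are all derived from the error matrix of map versus reference labels. A minimal sketch with made-up counts, not the NLCD data, is shown below.

```python
# Hedged sketch: overall, user's, and producer's accuracies from a
# land-cover error matrix. The counts are invented for illustration and
# do not correspond to the NLCD 2011 assessment data.
import numpy as np

classes = ["water", "urban", "forest", "agriculture"]
# Rows = map label, columns = reference label (counts of sample units).
m = np.array([[120,   2,   3,   1],
              [  4, 180,   6,  10],
              [  2,   5, 240,  18],
              [  1,  12,  20, 210]], dtype=float)

overall = np.trace(m) / m.sum()
users = np.diag(m) / m.sum(axis=1)      # map perspective (1 - commission error)
producers = np.diag(m) / m.sum(axis=0)  # reference perspective (1 - omission error)

print(f"overall accuracy: {overall:.1%}")
for c, u, p in zip(classes, users, producers):
    print(f"{c:12s} user's: {u:.1%}  producer's: {p:.1%}")
```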

  18. ChronRater: A simple approach to assessing the accuracy of age models from Holocene sediment cores

    NASA Astrophysics Data System (ADS)

    Kaufman, D. S.; Balascio, N. L.; McKay, N. P.; Sundqvist, H. S.

    2013-12-01

    numerical, we recognize that judging the quality of material and weighting the various factors that influence accuracy can be subjective. We applied the scoring scheme to more than 110 different downcore age models to assess the distribution of the score and its dependency on the input variables. While no scoring scheme will be perfect, ours can be used to assign reasonable numerical ratings to the reliability of downcore age models based on a simple, reproducible, and customizable procedure that focuses on the most important factors that determine the overall geochronological accuracy.

  19. Comparing preference assessments: selection- versus duration-based preference assessment procedures.

    PubMed

    Kodak, Tiffany; Fisher, Wayne W; Kelley, Michael E; Kisamore, April

    2009-01-01

    In the current investigation, the results of a selection- and a duration-based preference assessment procedure were compared. A Multiple Stimulus With Replacement (MSW) preference assessment [Windsor, J., Piché, L. M., & Locke, P. A. (1994). Preference testing: A comparison of two presentation methods. Research in Developmental Disabilities, 15, 439-455] and a variation of a Free-Operant (FO) preference assessment procedure [Roane, H. S., Vollmer, T. R., Ringdahl, J. E., & Marcus, B. A. (1998). Evaluation of a brief stimulus preference assessment. Journal of Applied Behavior Analysis, 31, 605-620] were conducted with four participants. A reinforcer assessment was conducted to determine which preference assessment procedure identified the item that produced the highest rates of responding. The items identified as most highly preferred were different across preference assessment procedures for all participants. Results of the reinforcer assessment showed that the MSW identified the item that functioned as the most effective reinforcer for two participants.

  20. A procedure for high resolution satellite imagery quality assessment.

    PubMed

    Crespi, Mattia; De Vendictis, Laura

    2009-01-01

    Data products generated from High Resolution Satellite Imagery (HRSI) are routinely evaluated during the so-called in-orbit test period, in order to verify if their quality fits the desired features and, if necessary, to obtain the image correction parameters to be used at the ground processing center. Nevertheless, it is often useful to have tools to evaluate image quality also at the final user level. Image quality is defined by some parameters, such as the radiometric resolution and its accuracy, represented by the noise level, and the geometric resolution and sharpness, described by the Modulation Transfer Function (MTF). This paper proposes a procedure to evaluate these image quality parameters; the procedure was implemented in a suitable software and tested on high resolution imagery acquired by the QuickBird, WorldView-1 and Cartosat-1 satellites.

  1. A Procedure for High Resolution Satellite Imagery Quality Assessment

    PubMed Central

    Crespi, Mattia; De Vendictis, Laura

    2009-01-01

    Data products generated from High Resolution Satellite Imagery (HRSI) are routinely evaluated during the so-called in-orbit test period, in order to verify if their quality fits the desired features and, if necessary, to obtain the image correction parameters to be used at the ground processing center. Nevertheless, it is often useful to have tools to evaluate image quality also at the final user level. Image quality is defined by some parameters, such as the radiometric resolution and its accuracy, represented by the noise level, and the geometric resolution and sharpness, described by the Modulation Transfer Function (MTF). This paper proposes a procedure to evaluate these image quality parameters; the procedure was implemented in a suitable software and tested on high resolution imagery acquired by the QuickBird, WorldView-1 and Cartosat-1 satellites. PMID:22412312

  2. Assessment of the Accuracy of Pharmacy Students’ Compounded Solutions Using Vapor Pressure Osmometry

    PubMed Central

    McPherson, Timothy B.

    2013-01-01

    Objective. To assess the effectiveness of using a vapor pressure osmometer to measure the accuracy of pharmacy students’ compounding skills. Design. Students calculated the theoretical osmotic pressure (mmol/kg) of a solution as a pre-laboratory exercise, compared their calculations with actual values, and then attempted to determine the cause of any errors found. Assessment. After the introduction of the vapor pressure osmometer, the first-time pass rate for solution compounding has varied from 85% to 100%. Approximately 85% of students surveyed reported that the instrument was valuable as a teaching tool because it objectively assessed their work and provided immediate formative assessment. Conclusions. This simple technique of measuring compounding accuracy using a vapor pressure osmometer allowed students to see the importance of quality control and assessment in practice for both pharmacists and technicians. PMID:23610476
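
    As a worked illustration of the kind of pre-laboratory calculation described (not taken from the article), the ideal theoretical value in mmol/kg for a simple salt solution follows from the solute mass, its molar mass, and the number of particles it dissociates into:

        # Ideal theoretical osmolality (mmol/kg) of 0.9 % w/v NaCl, assuming complete
        # dissociation and that 1 L of solution contains ~1 kg of water (illustrative assumptions).
        grams_per_litre = 9.0
        molar_mass_nacl = 58.44        # g/mol
        particles_per_formula = 2      # Na+ and Cl-

        mmol_per_kg = grams_per_litre / molar_mass_nacl * particles_per_formula * 1000
        print(f"theoretical osmolality: {mmol_per_kg:.0f} mmol/kg")   # about 308 mmol/kg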

  3. Assessment of the accuracy of pharmacy students' compounded solutions using vapor pressure osmometry.

    PubMed

    Kolling, William M; McPherson, Timothy B

    2013-04-12

    OBJECTIVE. To assess the effectiveness of using a vapor pressure osmometer to measure the accuracy of pharmacy students' compounding skills. DESIGN. Students calculated the theoretical osmotic pressure (mmol/kg) of a solution as a pre-laboratory exercise, compared their calculations with actual values, and then attempted to determine the cause of any errors found. ASSESSMENT. After the introduction of the vapor pressure osmometer, the first-time pass rate for solution compounding has varied from 85% to 100%. Approximately 85% of students surveyed reported that the instrument was valuable as a teaching tool because it objectively assessed their work and provided immediate formative assessment. CONCLUSIONS. This simple technique of measuring compounding accuracy using a vapor pressure osmometer allowed students to see the importance of quality control and assessment in practice for both pharmacists and technicians.

  4. Assessing the impact of measurement frequency on accuracy and uncertainty of water quality data

    NASA Astrophysics Data System (ADS)

    Helm, Björn; Schiffner, Stefanie; Krebs, Peter

    2014-05-01

    Physico-chemical water quality is a major objective for the evaluation of the ecological state of a river water body. Physical and chemical water properties are measured to assess the river state, identify prevalent pressures and develop mitigating measures. Water quality is regularly assessed based on weekly to quarterly grab samples. The increasing availability of online-sensor data measured at a high frequency allows for an enhanced understanding of emission and transport dynamics, as well as the identification of typical and critical states. In this study we present a systematic approach to assess the impact of measurement frequency on the accuracy and uncertainty of derived aggregate indicators of environmental quality. High-frequency measurements (10 min⁻¹ and 15 min⁻¹) of water temperature, pH, turbidity, electric conductivity and concentrations of dissolved oxygen, nitrate, ammonia and phosphate are assessed in resampling experiments. The data are collected at 14 sites in eastern and northern Germany representing catchments between 40 km² and 140 000 km² of varying properties. Resampling is performed to create series of hourly to quarterly frequency, including special restrictions like sampling at working hours or discharge compensation. Statistical properties and their confidence intervals are determined in a bootstrapping procedure and evaluated along a gradient of sampling frequency. For all variables the range of the aggregate indicators increases substantially in the bootstrapping realizations with decreasing sampling frequency. Mean values of electric conductivity, pH and water temperature obtained with monthly frequency differ on average by less than five percent from the original data. Mean dissolved oxygen, nitrate and phosphate had less than 15 % bias at most stations. Ammonia and turbidity are most sensitive to the reduction of sampling frequency, with up to 30 % average and 250 % maximum bias at monthly sampling frequency. A systematic bias is recognized
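
    The resampling idea can be sketched as follows: draw lower-frequency subsamples from a high-frequency series, compute the aggregate indicator (here the mean) for each bootstrap realization, and compare its spread and bias against the full-resolution value. The series, sampling step and sample sizes below are synthetic placeholders, not the study's data.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic "high-frequency" record: one value every 10 minutes for a year.
        n = 6 * 24 * 365
        series = 10 + 2 * np.sin(np.arange(n) * 2 * np.pi / (6 * 24)) + rng.normal(0, 0.5, n)

        def bootstrap_means(series, step, n_boot=1000):
            """Means of subsamples taken every `step` points, with random start offsets."""
            starts = rng.integers(0, step, size=n_boot)
            return np.array([series[s::step].mean() for s in starts])

        true_mean = series.mean()
        monthly_step = 6 * 24 * 30                 # roughly one grab sample per month
        means = bootstrap_means(series, monthly_step)

        bias = (means.mean() - true_mean) / true_mean * 100
        ci = np.percentile(means, [2.5, 97.5])
        print(f"relative bias: {bias:.2f} %, 95 % interval of the mean: {ci}")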

  5. Recent Developments in Assessment Procedures in England and Wales.

    ERIC Educational Resources Information Center

    Goldstein, Harvey; Nuttall, Desmond

    Focusing on technical issues, this paper critiques proposed changes in assessment procedures at the further educational level (ages 16 through 18) in England and Wales. Major structural changes are taking place at this educational level, partly because of large scale youth unemployment. The two current examination systems for the final year of…

  6. Innovative Approaches to Increasing the Student Assessment Procedures Effectiveness

    ERIC Educational Resources Information Center

    Dorozhkin, Evgenij M.; Chelyshkova, Marina B.; Malygin, Alexey A.; Toymentseva, Irina A.; Anopchenko, Tatiana Y.

    2016-01-01

    The relevance of the investigated problem is determined by the need to improve evaluation procedures in education and student assessment in the context of widening education, the development of new modes of study (such as blended learning, e-learning, massive open online courses), the necessity of immediate feedback, and reliable and valid…

  7. Maine Educational Assessment (MEA) Operational Procedures, March 2005 Administration.

    ERIC Educational Resources Information Center

    Maine Department of Education, 2004

    2004-01-01

    This document is intended for use in conjunction with "Policies and Procedures for Accommodations and Alternate Assessment to the MEA," and both the "MEA Principal/Test Coordinator's Manual" and the "MEA Test Administrator's Manual." The first section, Enrollment, covers the following subjects: (1) Participation of Enrolled Students; (2) Students…

  8. 15 CFR 990.27 - Use of assessment procedures.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    Title 15 (Commerce and Foreign Trade), Volume 3, 2012 edition: Regulations Relating to Commerce and Foreign Trade (Continued), National Oceanic and Atmospheric Administration, Department of Commerce, Oil Pollution Act regulations. Section 990.27, Use of assessment procedures.

  9. 15 CFR 990.27 - Use of assessment procedures.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    Title 15 (Commerce and Foreign Trade), Volume 3, 2011 edition: Regulations Relating to Commerce and Foreign Trade (Continued), National Oceanic and Atmospheric Administration, Department of Commerce, Oil Pollution Act regulations. Section 990.27, Use of assessment procedures.

  10. 15 CFR 990.27 - Use of assessment procedures.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    Title 15 (Commerce and Foreign Trade), Volume 3, 2013 edition: Regulations Relating to Commerce and Foreign Trade (Continued), National Oceanic and Atmospheric Administration, Department of Commerce, Oil Pollution Act regulations. Section 990.27, Use of assessment procedures.

  11. 15 CFR 990.27 - Use of assessment procedures.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    Title 15 (Commerce and Foreign Trade), Volume 3, 2014 edition: Regulations Relating to Commerce and Foreign Trade (Continued), National Oceanic and Atmospheric Administration, Department of Commerce, Oil Pollution Act regulations. Section 990.27, Use of assessment procedures.

  12. 15 CFR 990.27 - Use of assessment procedures.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    Title 15 (Commerce and Foreign Trade), Volume 3, 2010 edition: Regulations Relating to Commerce and Foreign Trade (Continued), National Oceanic and Atmospheric Administration, Department of Commerce, Oil Pollution Act regulations. Section 990.27, Use of assessment procedures.

  13. 49 CFR 1540.205 - Procedures for security threat assessment.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    Title 49 (Transportation), Volume 9, 2012 edition: Transportation Security Administration, Department of Homeland Security, Civil Aviation Security. Section 1540.205, Procedures for security threat assessment; the indexed excerpt cites the standards in 49 CFR 1540.201(c) in connection with an imminent threat to transportation or national security.

  14. 49 CFR 1540.205 - Procedures for security threat assessment.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    Title 49 (Transportation), Volume 9, 2011 edition: Transportation Security Administration, Department of Homeland Security, Civil Aviation Security. Section 1540.205, Procedures for security threat assessment; the indexed excerpt cites the standards in 49 CFR 1540.201(c) in connection with an imminent threat to transportation or national security.

  15. 49 CFR 1540.205 - Procedures for security threat assessment.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    Title 49 (Transportation), Volume 9, 2014 edition: Transportation Security Administration, Department of Homeland Security, Civil Aviation Security. Section 1540.205, Procedures for security threat assessment; the indexed excerpt cites the standards in 49 CFR 1540.201(c) in connection with an imminent threat to transportation or national security.

  16. 49 CFR 1540.205 - Procedures for security threat assessment.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    Title 49 (Transportation), Volume 9, 2013 edition: Transportation Security Administration, Department of Homeland Security, Civil Aviation Security. Section 1540.205, Procedures for security threat assessment; the indexed excerpt cites the standards in 49 CFR 1540.201(c) in connection with an imminent threat to transportation or national security.

  17. 49 CFR 1540.205 - Procedures for security threat assessment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    Title 49 (Transportation), Volume 9, 2010 edition: Transportation Security Administration, Department of Homeland Security, Civil Aviation Security. Section 1540.205, Procedures for security threat assessment; the indexed excerpt cites the standards in 49 CFR 1540.201(c) in connection with an imminent threat to transportation or national security.

  18. Experimental Assessment of Delphi Procedures with Group Value Judgments.

    ERIC Educational Resources Information Center

    Dalkey, Norman C.; Rourke, Daniel L.

    This report describes the results of an experiment assessing the appropriateness of Delphi procedures for formulating group value judgments. Two groups of subjects--upperclass and graduate students from UCLA--were asked to generate and rate value categories relating to higher education and the quality of life. The initial lists (300 and 250 items…

  19. Quality Assessment of Comparative Diagnostic Accuracy Studies: Our Experience Using a Modified Version of the QUADAS-2 Tool

    ERIC Educational Resources Information Center

    Wade, Ros; Corbett, Mark; Eastwood, Alison

    2013-01-01

    Assessing the quality of included studies is a vital step in undertaking a systematic review. The recently revised Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool (QUADAS-2), which is the only validated quality assessment tool for diagnostic accuracy studies, does not include specific criteria for assessing comparative studies. As…

  20. Accuracy assessment of a surface electromyogram decomposition system in human first dorsal interosseus muscle

    NASA Astrophysics Data System (ADS)

    Hu, Xiaogang; Rymer, William Z.; Suresh, Nina L.

    2014-04-01

    Objective. The aim of this study is to assess the accuracy of a surface electromyogram (sEMG) motor unit (MU) decomposition algorithm during low levels of muscle contraction. Approach. A two-source method was used to verify the accuracy of the sEMG decomposition system, by utilizing simultaneous intramuscular and surface EMG recordings from the human first dorsal interosseous muscle recorded during isometric trapezoidal force contractions. Spike trains from each recording type were decomposed independently utilizing two different algorithms, EMGlab and dEMG decomposition algorithms. The degree of agreement of the decomposed spike timings was assessed for three different segments of the EMG signals, corresponding to specified regions in the force task. A regression analysis was performed to examine whether certain properties of the sEMG and force signal can predict the decomposition accuracy. Main results. The average accuracy of successful decomposition among the 119 MUs that were common to both intramuscular and surface records was approximately 95%, and the accuracy was comparable between the different segments of the sEMG signals (i.e., force ramp-up versus steady state force versus combined). The regression function between the accuracy and properties of sEMG and force signals revealed that the signal-to-noise ratio of the action potential and stability in the action potential records were significant predictors of the surface decomposition accuracy. Significance. The outcomes of our study confirm the accuracy of the sEMG decomposition algorithm during low muscle contraction levels and provide confidence in the overall validity of the surface dEMG decomposition algorithm.
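
    A two-source check of this kind reduces to counting how many discharge times from the surface decomposition match discharge times from the intramuscular decomposition within a small tolerance. A minimal sketch (the tolerance and spike times are illustrative assumptions, not values from the study):

        import numpy as np

        def agreement(spikes_a, spikes_b, tol=0.005):
            """Fraction of matched discharges between two spike-time lists (seconds),
            counting a match when the times differ by at most `tol`."""
            spikes_a, spikes_b = np.sort(spikes_a), np.sort(spikes_b)
            used = np.zeros(len(spikes_b), dtype=bool)
            matched = 0
            for t in spikes_a:
                i = np.searchsorted(spikes_b, t)
                for j in (i - 1, i):
                    if 0 <= j < len(spikes_b) and not used[j] and abs(spikes_b[j] - t) <= tol:
                        used[j] = True
                        matched += 1
                        break
            return 2 * matched / (len(spikes_a) + len(spikes_b))

        intramuscular = np.array([0.101, 0.202, 0.305, 0.410])
        surface = np.array([0.101, 0.203, 0.404, 0.409])
        print(f"agreement: {agreement(intramuscular, surface):.2f}")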

  1. Targeting Accuracy, Procedure Times and User Experience of 240 Experimental MRI Biopsies Guided by a Clinical Add-On Navigation System

    PubMed Central

    Busse, Harald; Riedel, Tim; Garnov, Nikita; Thörmer, Gregor; Kahn, Thomas; Moche, Michael

    2015-01-01

    Objectives. MRI is of great clinical utility for the guidance of special diagnostic and therapeutic interventions. The majority of such procedures are performed iteratively ("in-and-out") in standard, closed-bore MRI systems with control imaging inside the bore and needle adjustments outside the bore. The fundamental limitations of such an approach have led to the development of various assistance techniques, from simple guidance tools to advanced navigation systems. The purpose of this work was to thoroughly assess the targeting accuracy, workflow and usability of a clinical add-on navigation solution on 240 simulated biopsies by different medical operators. Methods. Navigation relied on a virtual 3D MRI scene with real-time overlay of the optically tracked biopsy needle. Smart reference markers on a freely adjustable arm ensured proper registration. Twenty-four operators – attending (AR) and resident radiologists (RR) as well as medical students (MS) – performed well-controlled biopsies of 10 embedded model targets (mean diameter: 8.5 mm, insertion depths: 17-76 mm). Targeting accuracy, procedure times and 13 Likert scores on system performance were determined (strong agreement: 5.0). Results. Differences in diagnostic success rates (AR: 93%, RR: 88%, MS: 81%) were not significant. In contrast, between-group differences in biopsy times (AR: 4:15, RR: 4:40, MS: 5:06 min:sec) differed significantly (p<0.01). Mean overall rating was 4.2. The average operator would use the system again (4.8) and stated that the outcome justifies the extra effort (4.4). Lowest agreement was reported for the robustness against external perturbations (2.8). Conclusions. The described combination of optical tracking technology with an automatic MRI registration appears to be sufficiently accurate for instrument guidance in a standard (closed-bore) MRI environment. High targeting accuracy and usability was demonstrated on a relatively large number of procedures and operators. Between

  2. ICan: an optimized ion-current-based quantification procedure with enhanced quantitative accuracy and sensitivity in biomarker discovery.

    PubMed

    Tu, Chengjian; Sheng, Quanhu; Li, Jun; Shen, Xiaomeng; Zhang, Ming; Shyr, Yu; Qu, Jun

    2014-12-05

    The rapidly expanding availability of high-resolution mass spectrometry has substantially enhanced the ion-current-based relative quantification techniques. Despite the increasing interest in ion-current-based methods, quantitative sensitivity, accuracy, and false discovery rate remain the major concerns; consequently, comprehensive evaluation and development in these regards are urgently needed. Here we describe an integrated, new procedure for data normalization and protein ratio estimation, termed ICan, for improved ion-current-based analysis of data generated by high-resolution mass spectrometry (MS). ICan achieved significantly better accuracy and precision, and lower false-positive rate for discovering altered proteins, over current popular pipelines. A spiked-in experiment was used to evaluate the performance of ICan to detect small changes. In this study E. coli extracts were spiked with moderate-abundance proteins from human plasma (MAP, enriched by IgY14-SuperMix procedure) at two different levels to set a small change of 1.5-fold. Forty-five (92%, with an average ratio of 1.71 ± 0.13) of 49 identified MAP protein (i.e., the true positives) and none of the reference proteins (1.0-fold) were determined as significantly altered proteins, with cutoff thresholds of ≥ 1.3-fold change and p ≤ 0.05. This is the first study to evaluate and prove competitive performance of the ion-current-based approach for assigning significance to proteins with small changes. By comparison, other methods showed remarkably inferior performance. ICan can be broadly applicable to reliable and sensitive proteomic survey of multiple biological samples with the use of high-resolution MS. Moreover, many key features evaluated and optimized here such as normalization, protein ratio determination, and statistical analyses are also valuable for data analysis by isotope-labeling methods.

  3. Assessing map accuracy in a remotely sensed, ecoregion-scale cover map

    USGS Publications Warehouse

    Edwards, T.C.; Moisen, G.G.; Cutler, D.R.

    1998-01-01

    Landscape- and ecoregion-based conservation efforts increasingly use a spatial component to organize data for analysis and interpretation. A challenge particular to remotely sensed cover maps generated from these efforts is how best to assess the accuracy of the cover maps, especially when they can exceed 1000s of km² in size. Here we develop and describe a methodological approach for assessing the accuracy of large-area cover maps, using as a test case the 21.9 million ha cover map developed for Utah Gap Analysis. As part of our design process, we first reviewed the effect of intracluster correlation and a simple cost function on the relative efficiency of cluster sample designs compared with simple random designs. Our design ultimately combined clustered and subsampled field data stratified by ecological modeling unit and accessibility (hereafter a mixed design). We next outline estimation formulas for simple map accuracy measures under our mixed design and report results for eight major cover types and the three ecoregions mapped as part of the Utah Gap Analysis. Overall accuracy of the map was 83.2% (SE=1.4). Within ecoregions, accuracy ranged from 78.9% to 85.0%. Accuracy by cover type varied, ranging from a low of 50.4% for barren to a high of 90.6% for man-modified. In addition, we examined gains in efficiency of our mixed design compared with a simple random sample approach. In regard to precision, our mixed design was more precise than a simple random design, given fixed sample costs. We close with a discussion of the logistical constraints facing attempts to assess the accuracy of large-area, remotely sensed cover maps.
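
    Under a stratified design of this kind, overall accuracy is an area-weighted combination of per-stratum accuracies, and the standard error follows from the per-stratum variances. A rough sketch assuming simple random sampling within strata (the stratum weights and sample counts are hypothetical, not the Utah Gap Analysis figures):

        import numpy as np

        area_weight = np.array([0.5, 0.3, 0.2])   # proportion of map area per stratum
        n_sampled   = np.array([200, 150, 100])   # reference samples per stratum
        n_correct   = np.array([170, 120,  70])   # samples where map and reference agree

        p_hat = n_correct / n_sampled                 # per-stratum accuracy
        overall = np.sum(area_weight * p_hat)         # area-weighted overall accuracy

        # Variance of a stratified proportion (simple random sampling within strata).
        var = np.sum(area_weight**2 * p_hat * (1 - p_hat) / (n_sampled - 1))
        print(f"overall accuracy: {overall:.3f} (SE = {np.sqrt(var):.3f})")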

  4. Calibration of ground-based microwave radiometers - Accuracy assessment and recommendations for network users

    NASA Astrophysics Data System (ADS)

    Pospichal, Bernhard; Küchler, Nils; Löhnert, Ulrich; Crewell, Susanne; Czekala, Harald; Güldner, Jürgen

    2016-04-01

    Ground-based microwave radiometers (MWR) are becoming widely used in atmospheric remote sensing and start to be routinely operated by national weather services and other institutions. However, common standards for calibration of these radiometers and a detailed knowledge about the error characteristics is needed, in order to assimilate the data into models. Intercomparisons of calibrations by different MWRs have rarely been done. Therefore, two calibration experiments in Lindenberg (2014) and Meckenheim (2015) were performed in the frame of TOPROF (Cost action ES1303) in order to assess uncertainties and differences between various instruments. In addition, a series of experiments were taken in Oklahoma in autumn 2014. The focus lay on the performance of the two main instrument types, which are currently used operationally. These are the MP-Profiler series by Radiometrics Corporation as well as the HATPRO series by Radiometer Physics GmbH (RPG). Both instrument types are operating in two frequency bands, one along the 22 GHz water vapour line, the other one at the lower wing of the 60 GHz oxygen absorption complex. The goal was to establish protocols for providing quality controlled (QC) MWR data and their uncertainties. To this end, standardized calibration procedures for MWR were developed and recommendations for radiometer users were compiled. We focus here mainly on data types, integration times and optimal settings for calibration intervals, both for absolute (liquid nitrogen, tipping curve) as well as relative (hot load, noise diode) calibrations. Besides the recommendations for ground-based MWR operators, we will present methods to determine the accuracy of the calibration as well as means for automatic data quality control. In addition, some results from the intercomparison of different radiometers will be discussed.

  5. Accuracy Assessment of Underwater Photogrammetric Three Dimensional Modelling for Coral Reefs

    NASA Astrophysics Data System (ADS)

    Guo, T.; Capra, A.; Troyer, M.; Gruen, A.; Brooks, A. J.; Hench, J. L.; Schmitt, R. J.; Holbrook, S. J.; Dubbini, M.

    2016-06-01

    Recent advances in automation of photogrammetric 3D modelling software packages have stimulated interest in reconstructing highly accurate 3D object geometry in unconventional environments such as underwater utilizing simple and low-cost camera systems. The accuracy of underwater 3D modelling is affected by more parameters than in single media cases. This study is part of a larger project on 3D measurements of temporal change of coral cover in tropical waters. It compares the accuracies of 3D point clouds generated by using images acquired from a system camera mounted in an underwater housing and the popular GoPro cameras respectively. A precisely measured calibration frame was placed in the target scene in order to provide accurate control information and also quantify the errors of the modelling procedure. In addition, several objects (cinder blocks) with various shapes were arranged in the air and underwater and 3D point clouds were generated by automated image matching. These were further used to examine the relative accuracy of the point cloud generation by comparing the point clouds of the individual objects with the objects measured by the system camera in air (the best possible values). Given a working distance of about 1.5 m, the GoPro camera can achieve a relative accuracy of 1.3 mm in air and 2.0 mm in water. The system camera achieved an accuracy of 1.8 mm in water, which meets our requirements for coral measurement in this system.

  6. Gender Differences in Structured Risk Assessment: Comparing the Accuracy of Five Instruments

    ERIC Educational Resources Information Center

    Coid, Jeremy; Yang, Min; Ullrich, Simone; Zhang, Tianqiang; Sizmur, Steve; Roberts, Colin; Farrington, David P.; Rogers, Robert D.

    2009-01-01

    Structured risk assessment should guide clinical risk management, but it is uncertain which instrument has the highest predictive accuracy among men and women. In the present study, the authors compared the Psychopathy Checklist-Revised (PCL-R; R. D. Hare, 1991, 2003); the Historical, Clinical, Risk Management-20 (HCR-20; C. D. Webster, K. S.…

  7. 12 CFR 620.3 - Accuracy of reports and assessment of internal control over financial reporting.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    Title 12 (Banks and Banking), 2011 edition: Farm Credit System, Disclosure to Shareholders (General). Section 620.3, Accuracy of reports and assessment of internal control over financial reporting; the indexed excerpt requires that reports be prepared in accordance with all applicable statutory or regulatory requirements and that the information be true, accurate, and complete to the best of the signatories' knowledge and belief.

  8. Assessing the Accuracy of MODIS-NDVI Derived Land-Cover Across the Great Lakes Basin

    EPA Science Inventory

    This research describes the accuracy assessment process for a land-cover dataset developed for the Great Lakes Basin (GLB). This land-cover dataset was developed from the 2007 MODIS Normalized Difference Vegetation Index (NDVI) 16-day composite (MOD13Q) 250 m time-series data. Tr...

  9. The Word Writing CAFE: Assessing Student Writing for Complexity, Accuracy, and Fluency

    ERIC Educational Resources Information Center

    Leal, Dorothy J.

    2005-01-01

    The Word Writing CAFE is a new assessment tool designed for teachers to evaluate objectively students' word-writing ability for fluency, accuracy, and complexity. It is designed to be given to the whole class at one time. This article describes the development of the CAFE and provides directions for administering and scoring it. The author also…

  10. A PIXEL COMPOSITION-BASED REFERENCE DATA SET FOR THEMATIC ACCURACY ASSESSMENT

    EPA Science Inventory

    Developing reference data sets for accuracy assessment of land-cover classifications derived from coarse spatial resolution sensors such as MODIS can be difficult due to the large resolution differences between the image data and available reference data sources. Ideally, the spa...

  11. Assessing the Accuracy of Classwide Direct Observation Methods: Two Analyses Using Simulated and Naturalistic Data

    ERIC Educational Resources Information Center

    Dart, Evan H.; Radley, Keith C.; Briesch, Amy M.; Furlow, Christopher M.; Cavell, Hannah J.

    2016-01-01

    Two studies investigated the accuracy of eight different interval-based group observation methods that are commonly used to assess the effects of classwide interventions. In Study 1, a Microsoft Visual Basic program was created to simulate a large set of observational data. Binary data were randomly generated at the student level to represent…

  12. Modifications to the accuracy assessment analysis routine MLTCRP to produce an output file

    NASA Technical Reports Server (NTRS)

    Carnes, J. G.

    1978-01-01

    Modifications are described that were made to the analysis program MLTCRP in the accuracy assessment software system to produce a disk output file. The output files produced by this modified program are used to aggregate data for regions greater than a single segment.

  13. Assessing Observer Accuracy in Continuous Recording of Rate and Duration: Three Algorithms Compared

    ERIC Educational Resources Information Center

    Mudford, Oliver C.; Martin, Neil T.; Hui, Jasmine K. Y.; Taylor, Sarah Ann

    2009-01-01

    The three algorithms most frequently selected by behavior-analytic researchers to compute interobserver agreement with continuous recording were used to assess the accuracy of data recorded from video samples on handheld computers by 12 observers. Rate and duration of responding were recorded for three samples each. Data files were compared with…

  14. Accuracy of Revised and Traditional Parallel Analyses for Assessing Dimensionality with Binary Data

    ERIC Educational Resources Information Center

    Green, Samuel B.; Redell, Nickalus; Thompson, Marilyn S.; Levy, Roy

    2016-01-01

    Parallel analysis (PA) is a useful empirical tool for assessing the number of factors in exploratory factor analysis. On conceptual and empirical grounds, we argue for a revision to PA that makes it more consistent with hypothesis testing. Using Monte Carlo methods, we evaluated the relative accuracy of the revised PA (R-PA) and traditional PA…

  15. In the Right Ballpark? Assessing the Accuracy of Net Price Calculators

    ERIC Educational Resources Information Center

    Anthony, Aaron M.; Page, Lindsay C.; Seldin, Abigail

    2016-01-01

    Large differences often exist between a college's sticker price and net price after accounting for financial aid. Net price calculators (NPCs) were designed to help students more accurately estimate their actual costs to attend a given college. This study assesses the accuracy of information provided by net price calculators. Specifically, we…

  16. The short- to medium-term predictive accuracy of static and dynamic risk assessment measures in a secure forensic hospital.

    PubMed

    Chu, Chi Meng; Thomas, Stuart D M; Ogloff, James R P; Daffern, Michael

    2013-04-01

    Although violence risk assessment knowledge and practice has advanced over the past few decades, it remains practically difficult to decide which measures clinicians should use to assess and make decisions about the violence potential of individuals on an ongoing basis, particularly in the short to medium term. Within this context, this study sought to compare the predictive accuracy of dynamic risk assessment measures for violence with static risk assessment measures over the short term (up to 1 month) and medium term (up to 6 months) in a forensic psychiatric inpatient setting. Results showed that dynamic measures were generally more accurate than static measures for short- to medium-term predictions of inpatient aggression. These findings highlight the necessity of using risk assessment measures that are sensitive to important clinical risk state variables to improve the short- to medium-term prediction of aggression within the forensic inpatient setting. Such knowledge can assist with the development of more accurate and efficient risk assessment procedures, including the selection of appropriate risk assessment instruments to manage and prevent the violence of offenders with mental illnesses during inpatient treatment.

  17. Accuracy Assessment and Correction of Vaisala RS92 Radiosonde Water Vapor Measurements

    NASA Technical Reports Server (NTRS)

    Whiteman, David N.; Miloshevich, Larry M.; Vomel, Holger; Leblanc, Thierry

    2008-01-01

    Relative humidity (RH) measurements from Vaisala RS92 radiosondes are widely used in both research and operational applications, although the measurement accuracy is not well characterized as a function of its known dependences on height, RH, and time of day (or solar altitude angle). This study characterizes RS92 mean bias error as a function of its dependences by comparing simultaneous measurements from RS92 radiosondes and from three reference instruments of known accuracy. The cryogenic frostpoint hygrometer (CFH) gives the RS92 accuracy above the 700 mb level; the ARM microwave radiometer gives the RS92 accuracy in the lower troposphere; and the ARM SurTHref system gives the RS92 accuracy at the surface using 6 RH probes with NIST-traceable calibrations. These RS92 assessments are combined using the principle of Consensus Referencing to yield a detailed estimate of RS92 accuracy from the surface to the lowermost stratosphere. An empirical bias correction is derived to remove the mean bias error, yielding corrected RS92 measurements whose mean accuracy is estimated to be +/-3% of the measured RH value for nighttime soundings and +/-4% for daytime soundings, plus an RH offset uncertainty of +/-0.5%RH that is significant for dry conditions. The accuracy of individual RS92 soundings is further characterized by the 1-sigma "production variability," estimated to be +/-1.5% of the measured RH value. The daytime bias correction should not be applied to cloudy daytime soundings, because clouds affect the solar radiation error in a complicated and uncharacterized way.

  18. After Detection: The Improved Accuracy of Lung Cancer Assessment Using Radiologic Computer-aided Diagnosis

    PubMed Central

    Amir, Guy J.; Lehmann, Harold P.

    2015-01-01

    Rationale and Objectives The aim of this study was to evaluate the improved accuracy of radiologic assessment of lung cancer afforded by computer-aided diagnosis (CADx). Materials and Methods Inclusion/exclusion criteria were formulated, and a systematic inquiry of research databases was conducted. Following title and abstract review, an in-depth review of 149 surviving articles was performed with accepted articles undergoing a Quality Assessment of Diagnostic Accuracy Studies (QUADAS)-based quality review and data abstraction. Results A total of 14 articles, representing 1868 scans, passed the review. Increases in the receiver operating characteristic (ROC) area under the curve of .8 or higher were seen in all nine studies that reported it, except for one that employed subspecialized radiologists. Conclusions This systematic review demonstrated improved accuracy of lung cancer assessment using CADx over manual review, in eight high-quality observer-performance studies. The improved accuracy afforded by radiologic lung-CADx suggests the need to explore its use in screening and regular clinical workflow. PMID:26616209

  19. Increasing accuracy in the assessment of motion sickness: A construct methodology

    NASA Technical Reports Server (NTRS)

    Stout, Cynthia S.; Cowings, Patricia S.

    1993-01-01

    The purpose is to introduce a new methodology that should improve the accuracy of the assessment of motion sickness. This construct methodology utilizes both subjective reports of motion sickness and objective measures of physiological correlates to assess motion sickness. Current techniques and methods used in the framework of a construct methodology are inadequate. Current assessment techniques for diagnosing motion sickness and space motion sickness are reviewed, and attention is called to the problems with the current methods. Further, principles of psychophysiology that when applied will probably resolve some of these problems are described in detail.

  20. Assessment of the genomic prediction accuracy for feed efficiency traits in meat-type chickens

    PubMed Central

    Wang, Jie; Ma, Jie; Shu, Dingming; Lund, Mogens Sandø; Su, Guosheng; Qu, Hao

    2017-01-01

    Feed represents the major cost of chicken production. Selection for improving feed utilization is a feasible way to reduce feed cost and greenhouse gas emissions. The objectives of this study were to investigate the efficiency of genomic prediction for feed conversion ratio (FCR), residual feed intake (RFI), average daily gain (ADG) and average daily feed intake (ADFI) and to assess the impact of selection for feed efficiency traits FCR and RFI on eviscerating percentage (EP), breast muscle percentage (BMP) and leg muscle percentage (LMP) in meat-type chickens. Genomic prediction was assessed using a 4-fold cross-validation for two validation scenarios. The first scenario was a random family sampling validation (CVF), and the second scenario was a random individual sampling validation (CVR). Variance components were estimated based on the genomic relationship built with single nucleotide polymorphism markers. Genomic estimated breeding values (GEBV) were predicted using a genomic best linear unbiased prediction model. The accuracies of GEBV were evaluated in two ways: the correlation between GEBV and corrected phenotypic value divided by the square root of heritability, i.e., the correlation-based accuracy, and model-based theoretical accuracy. Breeding values were also predicted using a conventional pedigree-based best linear unbiased prediction model in order to compare accuracies of genomic and conventional predictions. The heritability estimates of FCR and RFI were 0.29 and 0.50, respectively. The heritability estimates of ADG, ADFI, EP, BMP and LMP ranged from 0.34 to 0.53. In the CVF scenario, the correlation-based accuracy and the theoretical accuracy of genomic prediction for FCR were slightly higher than those for RFI. The correlation-based accuracies for FCR, RFI, ADG and ADFI were 0.360, 0.284, 0.574 and 0.520, respectively, and the model-based theoretical accuracies were 0.420, 0.414, 0.401 and 0.382, respectively. In the CVR scenario, the correlation
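
    The correlation-based accuracy used above is the correlation between GEBV and corrected phenotypes in the validation set divided by the square root of heritability. A small sketch with simulated values (the heritability is the FCR estimate quoted above; the arrays are placeholders):

        import numpy as np

        def correlation_based_accuracy(gebv, corrected_phenotype, heritability):
            """Accuracy = cor(GEBV, corrected phenotype) / sqrt(h2)."""
            r = np.corrcoef(gebv, corrected_phenotype)[0, 1]
            return r / np.sqrt(heritability)

        rng = np.random.default_rng(1)
        true_bv = rng.normal(size=500)
        gebv = true_bv + rng.normal(scale=1.0, size=500)        # noisy genomic predictions
        phenotype = true_bv + rng.normal(scale=1.5, size=500)   # phenotype corrected for fixed effects

        h2 = 0.29   # heritability, e.g. the FCR estimate
        print(f"correlation-based accuracy: {correlation_based_accuracy(gebv, phenotype, h2):.2f}")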

  1. A SVD-based method to assess the uniqueness and accuracy of SPECT geometrical calibration.

    PubMed

    Ma, Tianyu; Yao, Rutao; Shao, Yiping; Zhou, Rong

    2009-12-01

    Geometrical calibration is critical to obtaining high resolution and artifact-free reconstructed image for SPECT and CT systems. Most published calibration methods use analytical approach to determine the uniqueness condition for a specific calibration problem, and the calibration accuracy is often evaluated through empirical studies. In this work, we present a general method to assess the characteristics of both the uniqueness and the quantitative accuracy of the calibration. The method uses a singular value decomposition (SVD) based approach to analyze the Jacobian matrix from a least-square cost function for the calibration. With this method, the uniqueness of the calibration can be identified by assessing the nonsingularity of the Jacobian matrix, and the estimation accuracy of the calibration parameters can be quantified by analyzing the SVD components. A direct application of this method is that the efficacy of a calibration configuration can be quantitatively evaluated by choosing a figure-of-merit, e.g., the minimum required number of projection samplings to achieve desired calibration accuracy. The proposed method was validated with a slit-slat SPECT system through numerical simulation studies and experimental measurements with point sources and an ultra-micro hot-rod phantom. The predicted calibration accuracy from the numerical studies was confirmed by the experimental point source calibrations at approximately 0.1 mm for both the center of rotation (COR) estimation of a rotation stage and the slit aperture position (SAP) estimation of a slit-slat collimator by an optimized system calibration protocol. The reconstructed images of a hot rod phantom showed satisfactory spatial resolution with a proper calibration and showed visible resolution degradation with artificially introduced 0.3 mm COR estimation error. The proposed method can be applied to other SPECT and CT imaging systems to analyze calibration method assessment and calibration protocol
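
    The SVD-based check can be illustrated generically: build the Jacobian of the calibration residuals with respect to the parameters, inspect its singular values for near-zero entries (a sign of non-uniqueness), and propagate measurement noise through the pseudo-inverse to bound parameter uncertainty. The Jacobian below is a random stand-in, not a SPECT system model:

        import numpy as np

        rng = np.random.default_rng(0)
        n_meas, n_params = 200, 6
        J = rng.normal(size=(n_meas, n_params))   # stand-in Jacobian d(residual)/d(parameter)

        U, s, Vt = np.linalg.svd(J, full_matrices=False)

        # Near-zero singular values mean some parameter combination is not
        # determined by the measurements, i.e. the calibration is not unique.
        print("condition number:", s[0] / s[-1])

        # Rough 1-sigma parameter uncertainties for unit measurement noise:
        sigma_meas = 1.0
        cov = (Vt.T * (1.0 / s**2)) @ Vt * sigma_meas**2
        print("parameter standard deviations:", np.sqrt(np.diag(cov)))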

  2. The suitability of common metrics for assessing parotid and larynx autosegmentation accuracy.

    PubMed

    Beasley, William J; McWilliam, Alan; Aitkenhead, Adam; Mackay, Ranald I; Rowbottom, Carl G

    2016-03-08

    Contouring structures in the head and neck is time-consuming, and automatic segmentation is an important part of an adaptive radiotherapy workflow. Geometric accuracy of automatic segmentation algorithms has been widely reported, but there is no consensus as to which metrics provide clinically meaningful results. This study investigated whether geometric accuracy (as quantified by several commonly used metrics) was associated with dosimetric differences for the parotid and larynx, comparing automatically generated contours against manually drawn ground truth contours. This enabled the suitability of different commonly used metrics to be assessed for measuring automatic segmentation accuracy of the parotid and larynx. Parotid and larynx structures for 10 head and neck patients were outlined by five clinicians to create ground truth structures. An automatic segmentation algorithm was used to create automatically generated normal structures, which were then used to create volumetric-modulated arc therapy plans. The mean doses to the automatically generated structures were compared with those of the corresponding ground truth structures, and the relative difference in mean dose was calculated for each structure. It was found that this difference did not correlate with the geometric accuracy provided by several metrics, notably the Dice similarity coefficient, which is a commonly used measure of spatial overlap. Surface-based metrics provided stronger correlation and are, therefore, more suitable for assessing automatic segmentation of the parotid and larynx.
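
    The two kinds of metric contrasted above, volumetric overlap (the Dice similarity coefficient) and surface distance, can be sketched on binary masks as follows (a SciPy-based sketch; the masks are toy arrays, not clinical contours):

        import numpy as np
        from scipy.ndimage import binary_erosion

        def dice(a, b):
            """Dice similarity coefficient between two binary masks."""
            a, b = a.astype(bool), b.astype(bool)
            return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        def mean_surface_distance(a, b, spacing=1.0):
            """Symmetric mean distance between boundary pixels of two masks (brute force)."""
            def boundary(m):
                m = m.astype(bool)
                return np.argwhere(m & ~binary_erosion(m))
            pa, pb = boundary(a), boundary(b)
            d_ab = np.min(np.linalg.norm(pa[:, None] - pb[None, :], axis=-1), axis=1)
            d_ba = np.min(np.linalg.norm(pb[:, None] - pa[None, :], axis=-1), axis=1)
            return spacing * (d_ab.mean() + d_ba.mean()) / 2

        auto = np.zeros((20, 20), dtype=bool); auto[5:15, 5:15] = True      # automatic contour
        manual = np.zeros((20, 20), dtype=bool); manual[6:16, 5:15] = True  # ground truth contour
        print(f"Dice: {dice(auto, manual):.3f}, mean surface distance: {mean_surface_distance(auto, manual):.2f} px")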

  3. A novel method for assessing the 3-D orientation accuracy of inertial/magnetic sensors.

    PubMed

    Faber, Gert S; Chang, Chien-Chi; Rizun, Peter; Dennerlein, Jack T

    2013-10-18

    A novel method for assessing the accuracy of inertial/magnetic sensors is presented. The method, referred to as the "residual matrix" method, is advantageous because it decouples the sensor's error with respect to Earth's gravity vector (attitude residual error: pitch and roll) from the sensor's error with respect to magnetic north (heading residual error), while remaining insensitive to singularity problems when the second Euler rotation is close to ±90°. As a demonstration, the accuracy of an inertial/magnetic sensor mounted to a participant's forearm was evaluated during a reaching task in a laboratory. Sensor orientation was measured internally (by the inertial/magnetic sensor) and externally using an optoelectronic measurement system with a marker cluster rigidly attached to the sensor's enclosure. Roll, pitch and heading residuals were calculated using the proposed novel method, as well as using a common orientation assessment method where the residuals are defined as the difference between the Euler angles measured by the inertial sensor and those measured by the optoelectronic system. Using the proposed residual matrix method, the roll and pitch residuals remained less than 1° and, as expected, no statistically significant difference between these two measures of attitude accuracy was found; the heading residuals were significantly larger than the attitude residuals but remained below 2°. Using the direct Euler angle comparison method, the residuals were in general larger due to singularity issues, and the expected significant difference between inertial/magnetic sensor attitude and heading accuracy was not present.
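
    The residual-matrix idea, as described, is to form the rotation that maps the reference (optoelectronic) orientation onto the sensor orientation and then split that residual rotation into the tilt of the vertical axis (attitude error) and the remaining rotation about the vertical (heading error). A rough sketch of one possible reading of that decomposition, with arbitrary test angles (not values from the study):

        import numpy as np

        def rot_z(a):
            c, s = np.cos(a), np.sin(a)
            return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

        def rot_x(a):
            c, s = np.cos(a), np.sin(a)
            return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

        # Reference and sensor orientations; the sensor is given a 1.5 deg heading
        # offset and a 0.5 deg tilt offset for illustration.
        R_ref = rot_z(np.radians(30.0)) @ rot_x(np.radians(10.0))
        R_sen = rot_z(np.radians(31.5)) @ rot_x(np.radians(10.5))

        R_res = R_sen @ R_ref.T                     # residual rotation matrix

        z = np.array([0.0, 0.0, 1.0])
        attitude_err = np.degrees(np.arccos(np.clip(np.dot(R_res @ z, z), -1.0, 1.0)))  # tilt w.r.t. gravity
        heading_err = np.degrees(np.arctan2(R_res[1, 0], R_res[0, 0]))                  # rotation about vertical

        print(f"attitude residual: {attitude_err:.2f} deg, heading residual: {heading_err:.2f} deg")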

  4. [Procedures and methods of benefit assessments for medicines in Germany].

    PubMed

    Bekkering, G E; Kleijnen, J

    2008-12-01

    The Federal Joint Committee (FJC; Gemeinsamer Bundesausschuss, G-BA) defines the health-care elements that are to be reimbursed by sickness funds. To define a directive, the FJC can commission benefit assessments, which provide an overview of the scientific evidence regarding the efficacy and benefits of an intervention. This paper describes the operational implementation of the legal requirements with regard to the benefit assessments of medicines. Such benefit assessments are sometimes referred to as "isolated benefit assessments," to distinguish them from benefit assessments as part of a full economic evaluation. The FJC has the freedom to commission these assessments from any agency; however, to date the majority have been commissioned to the Institute for Quality and Efficiency in Health Care (IQWiG). Nevertheless, the content of this paper applies integrally to any institute commissioned for such assessments. In this report, "the institute" is used when the text refers to any of these institutes. The legal framework for benefit assessments is laid out in the German Social Code Book version V (http://www.sozialgesetzbuch.de), Sects. 35b (§ 1), 139a (§§ 4-6) and Sect. 139b (§ 3). It is specified that: The institute must guarantee high transparency. The institute must provide appropriate participation of relevant parties for the commission-related development of assessments, and opportunity for comment on all important segments of the assessment procedure. The institute has to report on the progress and results of the work at regular intervals. The institute is held to giving the commission to external experts. Based on the legal framework, the institute must guarantee a high procedural transparency. Transparency of the whole process should be achieved, which is evidenced by clear reporting of procedures and criteria in all phases undertaken in the benefit assessment. The most important means of enhancing transparency are: 1. To

  5. Evaluating the effect of learning style and student background on self-assessment accuracy

    NASA Astrophysics Data System (ADS)

    Alaoutinen, Satu

    2012-06-01

    This study evaluates a new taxonomy-based self-assessment scale and examines factors that affect assessment accuracy and course performance. The scale is based on Bloom's Revised Taxonomy and is evaluated by comparing students' self-assessment results with course performance in a programming course. Correlation was used to reveal possible connections between student information and both self-assessment and course performance. The results show that students can place their knowledge along the taxonomy-based scale quite well and the scale seems to fit engineering students' learning style. Advanced students assess themselves more accurately than novices. The results also show that reflective students were better at programming than active students. The scale used in this study gives a more objective picture of students' knowledge than general scales and with modifications it can be used in classes other than programming.

  6. Neurophysiological assessment of the electrostimulation procedures used in stroke patients during rehabilitation.

    PubMed

    Lisinski, P; Huber, J; Samborski, W; Witkowska, A

    2008-01-01

    The aim of this study was to evaluate the effectiveness of the associated electrotherapeutical and kinesiotherapeutical treatment in patients after ischemic stroke (N=24), mainly by means of neurophysiological tests. All patients underwent the same 20 days of neurorehabilitation procedures. Particular attention was paid to three-stage modified electrotherapy procedures such as: oververtebral functional electrical stimulation (FES), transcutaneous electrical nerve stimulation (TENS) and the alternate neuromuscular functional electrical stimulation (NMFES) of antagonistic muscles of the wrist and the ankle (N=16). Electrotherapy was supplemented with kinesiotherapeutic (mainly PNF) procedures acting as an amplifier. Clinical assessment included muscle tension (Ashworth's scale), muscle force (Lovett's scale) and reflex scoring at wrist and ankle. However, the effectiveness of the procedures was measured by the assessment of results in complex and repetitive, bilaterally performed global electromyography (EMG) and electroneurography (ENG; M-wave studies). The statistical analysis obtained from results in clinical and neurophysiological examinations suggested that the dorsiflexion of wrist and ankle was improved in the majority of patients who took part in this study. EMG and ENG examinations showed that 20 days of therapy improved both activity in muscle motor units on the more paralyzed side (mainly within upper extremities) and to a lesser degree in the transmission of efferent impulses within motor fibers of nerves. The results obtained suggest that patients after ischemic strokes never show an isolated unilateral disability in motor functions. No definite similarities between the results of clinical and neurophysiological studies were found, which may suggest greater accuracy of the neurophysiological evaluation.

  7. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies.

    PubMed

    Whiting, Penny F; Rutjes, Anne W S; Westwood, Marie E; Mallett, Susan; Deeks, Jonathan J; Reitsma, Johannes B; Leeflang, Mariska M G; Sterne, Jonathan A C; Bossuyt, Patrick M M

    2011-10-18

    In 2003, the QUADAS tool for systematic reviews of diagnostic accuracy studies was developed. Experience, anecdotal reports, and feedback suggested areas for improvement; therefore, QUADAS-2 was developed. This tool comprises 4 domains: patient selection, index test, reference standard, and flow and timing. Each domain is assessed in terms of risk of bias, and the first 3 domains are also assessed in terms of concerns regarding applicability. Signalling questions are included to help judge risk of bias. The QUADAS-2 tool is applied in 4 phases: summarize the review question, tailor the tool and produce review-specific guidance, construct a flow diagram for the primary study, and judge bias and applicability. This tool will allow for more transparent rating of bias and applicability of primary diagnostic accuracy studies.

  8. Comparative Accuracy Assessment of Global Land Cover Datasets Using Existing Reference Data

    NASA Astrophysics Data System (ADS)

    Tsendbazar, N. E.; de Bruin, S.; Mora, B.; Herold, M.

    2014-12-01

    Land cover is a key variable to monitor the impact of human and natural processes on the biosphere. As one of the Essential Climate Variables, land cover observations are used for climate models and several other applications. Remote sensing technologies have enabled the generation of several global land cover (GLC) products that are based on different data sources and methods (e.g. legends). Moreover, the reported map accuracies result from varying validation strategies. Such differences make the comparison of the GLC products challenging and create confusion on selecting suitable datasets for different applications. This study aims to conduct comparative accuracy assessment of GLC datasets (LC-CCI 2005, MODIS 2005, and Globcover 2005) using the Globcover 2005 reference data which can represent the thematic differences of these GLC maps. This GLC reference dataset provides LCCS classifier information for 3 main land cover types for each sample plot. The LCCS classifier information was translated according to the legends of the GLC maps analysed. The preliminary analysis showed some challenges in LCCS classifier translation arising from missing important classifier information, differences in class definition between the legends and absence of class proportion of main land cover types. To overcome these issues, we consolidated the entire reference data (i.e. 3857 samples distributed at global scale). Then the GLC maps and the reference dataset were harmonized into 13 general classes to perform the comparative accuracy assessments. To help users on selecting suitable GLC dataset(s) for their application, we conducted the map accuracy assessments considering different users' perspectives: climate modelling, bio-diversity assessments, agriculture monitoring, and map producers. This communication will present the method and the results of this study and provide a set of recommendations to the GLC map producers and users with the aim to facilitate the use of GLC maps.

  9. Standardizing the protocol for hemispherical photographs: accuracy assessment of binarization algorithms.

    PubMed

    Glatthorn, Jonas; Beckschäfer, Philip

    2014-01-01

    Hemispherical photography is a well-established method to optically assess ecological parameters related to plant canopies; e.g. ground-level light regimes and the distribution of foliage within the crown space. Interpreting hemispherical photographs involves classifying pixels as either sky or vegetation. A wide range of automatic thresholding or binarization algorithms exists to classify the photographs. The variety in methodology hampers ability to compare results across studies. To identify an optimal threshold selection method, this study assessed the accuracy of seven binarization methods implemented in software currently available for the processing of hemispherical photographs. Therefore, binarizations obtained by the algorithms were compared to reference data generated through a manual binarization of a stratified random selection of pixels. This approach was adopted from the accuracy assessment of map classifications known from remote sensing studies. Percentage correct (Pc) and kappa-statistics (K) were calculated. The accuracy of the algorithms was assessed for photographs taken with automatic exposure settings (auto-exposure) and photographs taken with settings which avoid overexposure (histogram-exposure). In addition, gap fraction values derived from hemispherical photographs were compared with estimates derived from the manually classified reference pixels. All tested algorithms were shown to be sensitive to overexposure. Three of the algorithms showed an accuracy which was high enough to be recommended for the processing of histogram-exposed hemispherical photographs: "Minimum" (Pc 98.8%; K 0.952), "Edge Detection" (Pc 98.1%; K 0.950), and "Minimum Histogram" (Pc 98.1%; K 0.947). The Minimum algorithm overestimated gap fraction least of all (11%). The overestimation by the algorithms Edge Detection (63%) and Minimum Histogram (67%) were considerably larger. For the remaining four evaluated algorithms (IsoData, Maximum Entropy, MinError, and Otsu
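
    Percentage correct (Pc) and the kappa statistic used in this comparison come from cross-tabulating the algorithm's pixel labels against the manually classified reference pixels. A minimal sketch (the label arrays below are random placeholders, not photograph data):

        import numpy as np

        def pc_and_kappa(reference, predicted):
            """Percentage correct and Cohen's kappa for two binary label arrays (0 = sky, 1 = vegetation)."""
            reference, predicted = np.asarray(reference), np.asarray(predicted)
            pc = np.mean(reference == predicted)
            # Chance agreement from the marginal class proportions.
            pe = sum(np.mean(reference == c) * np.mean(predicted == c) for c in (0, 1))
            return pc, (pc - pe) / (1 - pe)

        rng = np.random.default_rng(0)
        reference = rng.integers(0, 2, size=1000)
        predicted = np.where(rng.random(1000) < 0.95, reference, 1 - reference)  # ~95 % agreement

        pc, kappa = pc_and_kappa(reference, predicted)
        print(f"Pc: {pc:.3f}, kappa: {kappa:.3f}")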

  10. Multinomial tree models for assessing the status of the reference in studies of the accuracy of tools for binary classification

    PubMed Central

    Botella, Juan; Huang, Huiling; Suero, Manuel

    2013-01-01

    Studies that evaluate the accuracy of binary classification tools are needed. Such studies provide 2 × 2 cross-classifications of test outcomes and the categories according to an unquestionable reference (or gold standard). However, sometimes a suboptimal reliability reference is employed. Several methods have been proposed to deal with studies where the observations are cross-classified with an imperfect reference. These methods require that the status of the reference, as a gold standard or as an imperfect reference, is known. In this paper a procedure for determining whether it is appropriate to maintain the assumption that the reference is a gold standard or an imperfect reference, is proposed. This procedure fits two nested multinomial tree models, and assesses and compares their absolute and incremental fit. Its implementation requires the availability of the results of several independent studies. These should be carried out using similar designs to provide frequencies of cross-classification between a test and the reference under investigation. The procedure is applied in two examples with real data. PMID:24106484

  11. Accuracy of Estrogen and Progesterone Receptor Assessment in Core Needle Biopsy Specimens of Breast Cancer

    PubMed Central

    Omranipour, Ramesh; Alipour, Sadaf; Hadji, Maryam; Fereidooni, Forouzandeh; Jahanzad, Issa; Bagheri, Khojasteh

    2013-01-01

    Background Diagnosis of breast cancer is completed through core needle biopsy (CNB) of the tumors but there is controversy on the accuracy of hormone receptor results on CNB specimens. Objectives We undertook this study to compare the results of hormone receptor assessment in CNB and surgical samples on our patients. Patients and Methods Hormone receptor status was determined in CNB and surgical samples in breast cancer patients whose CNB and operation had been performed in this institute from 2009 to 2011 and who had not undergone neoadjuvant chemotherapy. Results Of approximately 350 patients, 60 cases met all the criteria for entering the study. The mean age was 49.8 years. Considering a confidence interval (CI) of 95%, the sensitivity of ER and PR assessment in CNB was 92.9% and 81%, respectively, and the specificity of both was 100%. The accuracy of CNB was 98% for ER and 93% for PR. Conclusions Our results confirm the acceptable accuracy of ER assessment on CNB. The subject needs further investigation in developing countries where omission of the test in surgical samples can be cost and time-saving.

  12. Accuracy assessment of topographic mapping using UAV image integrated with satellite images

    NASA Astrophysics Data System (ADS)

    Azmi, S. M.; Ahmad, Baharin; Ahmad, Anuar

    2014-02-01

    Unmanned Aerial Vehicles (UAVs) are extensively applied in various fields such as military applications, archaeology, agriculture and scientific research. This study focuses on topographic mapping and map updating. UAVs are an alternative way to ease the process of acquiring data, with low manufacturing and operating costs, and they are easy to operate. Furthermore, UAV images will be integrated with QuickBird images that are used as base maps. The objective of this study is to assess and compare the accuracy of topographic mapping using UAV images integrated with aerial photographs and satellite images. The main purpose of using UAV images is to replace cloud-covered areas that commonly occur in aerial photographs and satellite images, and to update topographic maps. Meanwhile, spatial resolution, pixel size, scale, geometric accuracy and correction, image quality and information content are important requirements for the generation of topographic maps using these kinds of data. In this study, ground control points (GCPs) and check points (CPs) were established using the real-time kinematic Global Positioning System (RTK-GPS) technique. Two types of analysis are carried out in this study: quantitative and qualitative assessment. Quantitative assessment is carried out by calculating the root mean square error (RMSE). The outputs of this study include a topographic map and an orthophoto. From this study, the accuracy of the UAV image is ± 0.460 m. In conclusion, UAV images have the potential to be used for updating topographic maps.
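
    A hedged sketch of the quantitative assessment step described above: planimetric RMSE computed from check points measured both on the generated map and with RTK-GPS. The coordinates and function name below are hypothetical and only illustrate the form of the RMSE calculation.

    ```python
    import numpy as np

    def planimetric_rmse(map_xy, gps_xy):
        """Root mean square error of mapped check-point coordinates against
        RTK-GPS reference coordinates (both N x 2 arrays of easting, northing
        in metres)."""
        diff = np.asarray(map_xy, dtype=float) - np.asarray(gps_xy, dtype=float)
        rmse_e = np.sqrt(np.mean(diff[:, 0] ** 2))               # easting component
        rmse_n = np.sqrt(np.mean(diff[:, 1] ** 2))               # northing component
        rmse_xy = np.sqrt(np.mean(np.sum(diff ** 2, axis=1)))    # combined horizontal RMSE
        return rmse_e, rmse_n, rmse_xy

    # Hypothetical check-point coordinates (metres)
    mapped = [[512340.12, 256780.40], [512401.88, 256912.05], [512466.30, 257001.77]]
    gps    = [[512339.80, 256780.75], [512402.20, 256911.60], [512465.95, 257002.10]]
    print(planimetric_rmse(mapped, gps))
    ```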

  13. Assessment of relative accuracy of AHN-2 laser scanning data using planar features.

    PubMed

    van der Sande, Corné; Soudarissanane, Sylvie; Khoshelham, Kourosh

    2010-01-01

    AHN-2 is the second part of the Actueel Hoogtebestand Nederland project, which concerns the acquisition of high-resolution altimetry data over the entire Netherlands using airborne laser scanning. The accuracy assessment of laser altimetry data usually relies on comparing corresponding tie elements, often points or lines, in the overlapping strips. This paper proposes a new approach to strip adjustment and accuracy assessment of AHN-2 data by using planar features. In the proposed approach a transformation is estimated between two overlapping strips by minimizing the distances between points in one strip and their corresponding planes in the other. The planes and the corresponding points are extracted in an automated segmentation process. The point-to-plane distances are used as observables in an estimation model, whereby the parameters of a transformation between the two strips and their associated quality measures are estimated. We demonstrate the performance of the method for the accuracy assessment of the AHN-2 dataset over Zeeland province of The Netherlands. The results show vertical offsets of up to 4 cm between the overlapping strips, and horizontal offsets ranging from 2 cm to 34 cm.
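
    A minimal sketch of the point-to-plane observable described above: a plane is fitted to a segmented patch in one strip, and signed distances from corresponding points in the overlapping strip serve as the residuals entering the estimation model. The plane-fitting step shown here is a generic least-squares fit rather than the authors' segmentation procedure, and all coordinates are hypothetical.

    ```python
    import numpy as np

    def fit_plane(points):
        """Least-squares plane through a segmented point patch.
        Returns a unit normal and a point on the plane (the centroid)."""
        pts = np.asarray(points, dtype=float)
        c = pts.mean(axis=0)
        # Normal = direction of smallest variance of the centred patch
        _, _, vt = np.linalg.svd(pts - c)
        return vt[-1], c

    def point_to_plane_distances(points, normal, origin):
        """Signed distances from points in the overlapping strip to the plane
        extracted from the other strip; these act as observables in the
        strip-adjustment estimation."""
        return (np.asarray(points, dtype=float) - origin) @ normal

    # Hypothetical roof patch from strip A and nearby points from strip B
    strip_a_patch = np.array([[0, 0, 10.00], [1, 0, 10.01], [0, 1, 9.99], [1, 1, 10.02]])
    strip_b_points = np.array([[0.5, 0.5, 10.04], [0.2, 0.8, 10.03]])
    n, c = fit_plane(strip_a_patch)
    print(point_to_plane_distances(strip_b_points, n, c))
    ```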

  14. Positioning accuracy assessment for the 4GEO/5IGSO/2MEO constellation of COMPASS

    NASA Astrophysics Data System (ADS)

    Zhou, ShanShi; Cao, YueLing; Zhou, JianHua; Hu, XiaoGong; Tang, ChengPan; Liu, Li; Guo, Rui; He, Feng; Chen, JunPing; Wu, Bin

    2012-12-01

    Determined to become a new member of the well-established GNSS family, COMPASS (or BeiDou-2) is developing its capabilities to provide high accuracy positioning services. Two positioning modes are investigated in this study to assess the positioning accuracy of COMPASS' 4GEO/5IGSO/2MEO constellation. Precise Point Positioning (PPP) for geodetic users and real-time positioning for common navigation users are utilized. To evaluate PPP accuracy, coordinate time series repeatability and discrepancies with GPS' precise positioning are computed. Experiments show that COMPASS PPP repeatability for the east, north and up components of a receiver within mainland China is better than 2 cm, 2 cm and 5 cm, respectively. Apparent systematic offsets of several centimeters exist between COMPASS precise positioning and GPS precise positioning, indicating errors remaining in the treatments of COMPASS measurement and dynamic models and reference frame differences existing between two systems. For common positioning users, COMPASS provides both open and authorized services with rapid differential corrections and integrity information available to authorized users. Our assessment shows that in open service positioning accuracy of dual-frequency and single-frequency users is about 5 m and 6 m (RMS), respectively, which may be improved to about 3 m and 4 m (RMS) with the addition of differential corrections. Less accurate Signal In Space User Ranging Error (SIS URE) and Geometric Dilution of Precision (GDOP) contribute to the relatively inferior accuracy of COMPASS as compared to GPS. Since the deployment of the remaining 1 GEO and 2 MEO is not able to significantly improve GDOP, the performance gap could only be overcome either by the use of differential corrections or improvement of the SIS URE, or both.

  15. New technology in dietary assessment: a review of digital methods in improving food record accuracy.

    PubMed

    Stumbo, Phyllis J

    2013-02-01

    Methods for conducting dietary assessment in the United States date back to the early twentieth century. Methods of assessment have encompassed dietary records, written and spoken dietary recalls, and FFQs using pencil and paper and, more recently, computer and internet applications. Emerging innovations involve camera and mobile telephone technology to capture food and meal images. This paper describes six projects sponsored by the United States National Institutes of Health that use digital methods to improve food records, and two mobile phone applications using crowdsourcing. The techniques under development show promise for improving the accuracy of food records.

  16. A Study of Confidence and Accuracy Using the Rasch Modeling Procedures. Research Report. ETS RR-08-42

    ERIC Educational Resources Information Center

    Paek, Insu; Lee, Jihyun; Stankov, Lazar; Wilson, Mark

    2008-01-01

    This study investigated the relationship between students' actual performance (accuracy) and their subjective judgments of accuracy (confidence) on selected English language proficiency tests. The unidimensional and multidimensional IRT Rasch approaches were used to model the discrepancy between confidence and accuracy at the item and test level…

  17. Accuracy of ELISA detection methods for gluten and reference materials: a realistic assessment.

    PubMed

    Diaz-Amigo, Carmen; Popping, Bert

    2013-06-19

    The determination of prolamins by ELISA and subsequent conversion of the resulting concentration to gluten content in food appears to be a comparatively simple and straightforward process with which many laboratories have years-long experience. At the end of the process, a value of gluten, expressed in mg/kg or ppm, is obtained. This value is often the basis for the decision whether a product can be labeled gluten-free. On the basis of currently available scientific information, the accuracy of the values obtained with commonly used commercial ELISA kits has to be questioned. Although several multilaboratory studies have recently been conducted in an attempt to emphasize and ensure the accuracy of the results, the data suggest that it was the precision of these assays, not the accuracy, that was confirmed, because some of the underlying assumptions for calculating the gluten content lack scientific data support as well as appropriate reference materials for comparison. This paper discusses the issues of gluten determination and quantification with respect to antibody specificity, extraction procedures, reference materials, and their commutability.

  18. Results of 17 Independent Geopositional Accuracy Assessments of Earth Satellite Corporation's GeoCover Landsat Thematic Mapper Imagery. Geopositional Accuracy Validation of Orthorectified Landsat TM Imagery: Northeast Asia

    NASA Technical Reports Server (NTRS)

    Smith, Charles M.

    2003-01-01

    This report provides results of an independent assessment of the geopositional accuracy of the Earth Satellite (EarthSat) Corporation's GeoCover, Orthorectified Landsat Thematic Mapper (TM) imagery over Northeast Asia. This imagery was purchased through NASA's Earth Science Enterprise (ESE) Scientific Data Purchase (SDP) program.

  19. Comparing Preference Assessments: Selection- versus Duration-Based Preference Assessment Procedures

    ERIC Educational Resources Information Center

    Kodak, Tiffany; Fisher, Wayne W.; Kelley, Michael E.; Kisamore, April

    2009-01-01

    In the current investigation, the results of a selection- and a duration-based preference assessment procedure were compared. A Multiple Stimulus With Replacement (MSW) preference assessment [Windsor, J., Piche, L. M., & Locke, P. A. (1994). "Preference testing: A comparison of two presentation methods." "Research in Developmental Disabilities,…

  20. Structured Assessment Approach: a procedure for the assessment of fuel cycle safeguard systems

    SciTech Connect

    Parziale, A.A.; Patenaude, C.J.; Renard, P.A.; Sacks, I.J.

    1980-03-06

    Lawrence Livermore National Laboratory has developed and tested for the United States Nuclear Regulatory Commission a procedure for the evaluation of Material Control and Accounting (MC and A) Systems at Nuclear Fuel Facilities. This procedure, called the Structured Assessment Approach (SAA), subjects the MC and A system at a facility to a series of increasingly sophisticated adversaries and strategies. A fully integrated version of the computer codes which assist the analyst in this assessment was made available in October 1979. The concepts of the SAA and the results of the assessment of a hypothetical but typical facility are presented.

  1. Accuracy assessment of modeling architectural structures and details using terrestrial laser scanning

    NASA Astrophysics Data System (ADS)

    Kedzierski, M.; Walczykowski, P.; Orych, A.; Czarnecka, P.

    2015-08-01

    One of the most important aspects when performing architectural documentation of cultural heritage structures is the accuracy of both the data and the products which are generated from these data: documentation in the form of 3D models or vector drawings. The paper describes an assessment of the accuracy of modelling data acquired using a terrestrial phase scanner in relation to the density of a point cloud representing the surface of different types of construction materials typical for cultural heritage structures. This analysis includes the impact of the scanning geometry: the incidence angle of the laser beam and the scanning distance. For the purposes of this research, a test field consisting of samples of different types of construction materials (brick, wood, plastic, plaster, a ceramic tile, sheet metal) was built. The study involved conducting measurements at different angles and from a range of distances for chosen scanning densities. Data, acquired in the form of point clouds, were then filtered and modelled. An accuracy assessment of the 3D model was conducted by fitting it with the point cloud. The reflection intensity of each type of material was also analyzed, trying to determine which construction materials have the highest reflectance coefficients, and which have the lowest reflection coefficients, and in turn how this variable changes for different scanning parameters. Additionally measurements were taken of a fragment of a building in order to compare the results obtained in laboratory conditions, with those taken in field conditions.

  2. Assessment of accuracy of CFD simulations through quantification of a numerical dissipation rate

    NASA Astrophysics Data System (ADS)

    Domaradzki, J. A.; Sun, G.; Xiang, X.; Chen, K. K.

    2016-11-01

    The accuracy of CFD simulations is typically assessed through a time-consuming process of multiple runs and comparisons with available benchmark data. We propose that the accuracy can be assessed in the course of actual runs using a simpler method based on a numerical dissipation rate which is computed at each time step for arbitrary sub-domains using only information provided by the code in question (Schranner et al., 2015; Castiglioni and Domaradzki, 2015). Here, the method has been applied to analyze numerical simulation results obtained using OpenFOAM software for a flow around a sphere at a Reynolds number of 1000. Different mesh resolutions were used in the simulations. For the coarsest mesh the ratio of the numerical dissipation to the viscous dissipation downstream of the sphere varies from 4.5% immediately behind the sphere to 22% further away. For the finest mesh this ratio varies from 0.4% behind the sphere to 6% further away. The large numerical dissipation in the former case is a direct indicator that the simulation results are inaccurate, e.g., the predicted Strouhal number is 16% lower than the benchmark. Low numerical dissipation in the latter case is an indicator of an acceptable accuracy, with the Strouhal number in the simulations matching the benchmark. Supported by NSF.

  3. A comparative study between evaluation methods for quality control procedures for determining the accuracy of PET/CT registration

    NASA Astrophysics Data System (ADS)

    Cha, Min Kyoung; Ko, Hyun Soo; Jung, Woo Young; Ryu, Jae Kwang; Choe, Bo-Young

    2015-08-01

    The accuracy of registration between positron emission tomography (PET) and computed tomography (CT) images is one of the important factors for reliable diagnosis in PET/CT examinations. Although quality control (QC) for checking the alignment of PET and CT images should be performed periodically, the procedures have not been fully established. The aim of this study is to determine optimal quality control (QC) procedures that can be performed at the user level to ensure the accuracy of PET/CT registration. Two phantoms were used to carry out this study: the American College of Radiology (ACR)-approved PET phantom and the National Electrical Manufacturers Association (NEMA) International Electrotechnical Commission (IEC) body phantom, containing fillable spheres. All PET/CT images were acquired on a Biograph TruePoint 40 PET/CT scanner using routine protocols. To measure registration error, the spatial coordinates of the estimated centers of the target slice (spheres) were calculated independently for the PET and the CT images in two ways. We compared the images from the ACR-approved PET phantom to those from the NEMA IEC body phantom. Also, we measured the total time required from phantom preparation to image analysis. The first analysis method showed a total difference of 0.636 ± 0.11 mm for the largest hot sphere and 0.198 ± 0.09 mm for the largest cold sphere in the case of the ACR-approved PET phantom. In the NEMA IEC body phantom, the total difference was 3.720 ± 0.97 mm for the largest hot sphere and 4.800 ± 0.85 mm for the largest cold sphere. The second analysis method showed that the differences in the x location at the line profile of the lesion on PET and CT were (1.33, 1.33) mm for a bone lesion, (-1.26, -1.33) mm for an air lesion and (-1.67, -1.60) mm for a hot sphere lesion for the ACR-approved PET phantom. For the NEMA IEC body phantom, the differences in the x location at the line profile of the lesion on PET and CT were (-1.33, 4.00) mm for the air

  4. Assessing the accuracy and reproducibility of modality independent elastography in a murine model of breast cancer

    PubMed Central

    Weis, Jared A.; Flint, Katelyn M.; Sanchez, Violeta; Yankeelov, Thomas E.; Miga, Michael I.

    2015-01-01

    Abstract. Cancer progression has been linked to mechanics. Therefore, there has been recent interest in developing noninvasive imaging tools for cancer assessment that are sensitive to changes in tissue mechanical properties. We have developed one such method, modality independent elastography (MIE), that estimates the relative elastic properties of tissue by fitting anatomical image volumes acquired before and after the application of compression to biomechanical models. The aim of this study was to assess the accuracy and reproducibility of the method using phantoms and a murine breast cancer model. Magnetic resonance imaging data were acquired, and the MIE method was used to estimate relative volumetric stiffness. Accuracy was assessed using phantom data by comparing to gold-standard mechanical testing of elasticity ratios. Validation error was <12%. Reproducibility analysis was performed on animal data, and within-subject coefficients of variation ranged from 2 to 13% at the bulk level and 32% at the voxel level. To our knowledge, this is the first study to assess the reproducibility of an elasticity imaging metric in a preclinical cancer model. Our results suggest that the MIE method can reproducibly generate accurate estimates of the relative mechanical stiffness and provide guidance on the degree of change needed in order to declare biological changes rather than experimental error in future therapeutic studies. PMID:26158120

  5. Gender differences in structured risk assessment: comparing the accuracy of five instruments.

    PubMed

    Coid, Jeremy; Yang, Min; Ullrich, Simone; Zhang, Tianqiang; Sizmur, Steve; Roberts, Colin; Farrington, David P; Rogers, Robert D

    2009-04-01

    Structured risk assessment should guide clinical risk management, but it is uncertain which instrument has the highest predictive accuracy among men and women. In the present study, the authors compared the Psychopathy Checklist-Revised (PCL-R; R. D. Hare, 1991, 2003); the Historical, Clinical, Risk Management-20 (HCR-20; C. D. Webster, K. S. Douglas, D. Eaves, & S. D. Hart, 1997); the Risk Matrix 2000-Violence (RM2000[V]; D. Thornton et al., 2003); the Violence Risk Appraisal Guide (VRAG; V. L. Quinsey, G. T. Harris, M. E. Rice, & C. A. Cormier, 1998); the Offenders Group Reconviction Scale (OGRS; J. B. Copas & P. Marshall, 1998; R. Taylor, 1999); and the total previous convictions among prisoners, prospectively assessed prerelease. The authors compared predischarge measures with subsequent offending and instruments ranked using multivariate regression. Most instruments demonstrated significant but moderate predictive ability. The OGRS ranked highest for violence among men, and the PCL-R and HCR-20 H subscale ranked highest for violence among women. The OGRS and total previous acquisitive convictions demonstrated greatest accuracy in predicting acquisitive offending among men and women. Actuarial instruments requiring no training to administer performed as well as personality assessment and structured risk assessment and were superior among men for violence.

  6. Assessing decoding ability: the role of speed and accuracy and a new composite indicator to measure decoding skill in elementary grades.

    PubMed

    Morlini, Isabella; Stella, Giacomo; Scorza, Maristella

    2015-01-01

    Tools for assessing decoding skill in students attending elementary grades are of fundamental importance for guaranteeing an early identification of reading disabled students and reducing both the primary negative effects (on learning) and the secondary negative effects (on the development of the personality) of this disability. This article presents results obtained by administering existing standardized tests of reading and a new screening procedure to about 1,500 students in the elementary grades in Italy. It is found that variables measuring speed and accuracy in all administered reading tests are not Gaussian, and therefore the threshold values used for classifying a student as a normal decoder or as an impaired decoder must be estimated on the basis of the empirical distribution of these variables rather than by using the percentiles of the normal distribution. It is also found that the decoding speed and the decoding accuracy can be measured in either a 1-minute procedure or in much longer standardized tests. The screening procedure and the tests administered are found to be equivalent insofar as they carry the same information. Finally, it is found that speed and accuracy act as complementary effects in the measurement of decoding ability. On the basis of this last finding, the study introduces a new composite indicator aimed at determining the student's performance, which combines speed and accuracy in the measurement of decoding ability.

  7. [Assessment of precision and accuracy of digital surface photogrammetry with the DSP 400 system].

    PubMed

    Krimmel, M; Kluba, S; Dietz, K; Reinert, S

    2005-03-01

    The objective of the present study was to evaluate the precision and accuracy of facial anthropometric measurements obtained through digital 3-D surface photogrammetry with the DSP 400 system in comparison to traditional 2-D photogrammetry. Fifty plaster casts of cleft infants were imaged and 21 standard anthropometric measurements were obtained. For precision assessment the measurements were performed twice in a subsample. Accuracy was determined by comparison of direct measurements and indirect 2-D and 3-D image measurements. Precision of digital surface photogrammetry was almost as good as direct anthropometry and clearly better than 2-D photogrammetry. Measurements derived from 3-D images showed better congruence to direct measurements than from 2-D photos. Digital surface photogrammetry with the DSP 400 system is sufficiently precise and accurate for craniofacial anthropometric examinations.

  8. Theory and methods for accuracy assessment of thematic maps using fuzzy sets

    SciTech Connect

    Gopal, S.; Woodcock, C. )

    1994-02-01

    The use of fuzzy sets in map accuracy assessment expands the amount of information that can be provided regarding the nature, frequency, magnitude, and source of errors in a thematic map. The need for using fuzzy sets arises from the observation that all map locations do not fit unambiguously in a single map category. Fuzzy sets allow for varying levels of set membership for multiple map categories. A linguistic measurement scale allows the kinds of comments commonly made during map evaluations to be used to quantify map accuracy. Four tables result from the use of fuzzy functions, and when taken together they provide more information than traditional confusion matrices. The use of a hypothetical dataset helps illustrate the benefits of the new methods. It is hoped that the enhanced ability to evaluate maps resulting from the use of fuzzy sets will improve our understanding of uncertainty in maps and facilitate improved error modeling. 40 refs.
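
    As an illustration of how linguistic ratings can be turned into accuracy measures, the sketch below computes two simple agreement measures from hypothetical site ratings: whether the mapped category received the highest rating at a site, and whether the mapped category was rated at least "acceptable". The rating scale, the acceptability threshold, and the measure names are assumptions for illustration and are not the exact fuzzy functions of the paper.

    ```python
    import numpy as np

    # Linguistic ratings (1 = absolutely wrong ... 5 = absolutely right) assigned
    # by an interpreter to every candidate category at each evaluation site.
    # Rows = sites, columns = map categories; all values are hypothetical.
    ratings = np.array([
        [5, 2, 1],
        [3, 4, 1],
        [2, 2, 4],
        [4, 3, 1],
    ])
    map_label = np.array([0, 0, 2, 1])   # category shown on the map at each site

    # Best-match measure: the mapped category received the highest rating at the site.
    best = ratings.argmax(axis=1)
    best_match_agreement = np.mean(best == map_label)

    # Acceptability measure: the mapped category was rated at least "acceptable" (>= 3).
    acceptable_agreement = np.mean(ratings[np.arange(len(map_label)), map_label] >= 3)

    print(f"best match = {best_match_agreement:.2f}, acceptable = {acceptable_agreement:.2f}")
    ```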

  9. Evaluation of precision and accuracy assessment of different 3-D surface imaging systems for biomedical purposes.

    PubMed

    Eder, Maximilian; Brockmann, Gernot; Zimmermann, Alexander; Papadopoulos, Moschos A; Schwenzer-Zimmerer, Katja; Zeilhofer, Hans Florian; Sader, Robert; Papadopulos, Nikolaos A; Kovacs, Laszlo

    2013-04-01

    Three-dimensional (3-D) surface imaging has gained clinical acceptance, especially in the field of cranio-maxillo-facial and plastic, reconstructive, and aesthetic surgery. Six scanners based on different scanning principles (Minolta Vivid 910®, Polhemus FastSCAN™, GFM PRIMOS®, GFM TopoCAM®, Steinbichler Comet® Vario Zoom 250, 3dMD DSP 400®) were used to measure five sheep skulls of different sizes. In three areas of varying anatomical complexity (area 1 = high; area 2 = moderate; area 3 = low), 56 distances between 20 landmarks were defined on each skull. Manual measurement (MM), coordinate machine measurement (CMM) and computer tomography (CT) measurement were used to define a reference method for further precision and accuracy evaluation of the different 3-D scanning systems. MM showed high correlation to CMM and CT measurements (both r = 0.987; p < 0.001) and served as the reference method. TopoCAM®, Comet® and Vivid 910® showed the highest measurement precision over all areas of complexity; Vivid 910®, the Comet® and the DSP 400® demonstrated the highest accuracy over all areas, with Vivid 910® being most accurate in areas 1 and 3, and the DSP 400® most accurate in area 2. In accordance with the measured distance length, most 3-D devices present higher measurement precision and accuracy for large distances and lower degrees of precision and accuracy for short distances. In general, higher degrees of complexity are associated with lower 3-D assessment accuracy, suggesting that for optimal results, different types of scanners should be applied to specific clinical applications and medical problems according to their special construction designs and characteristics.

  10. Accuracy assessment of a mobile terrestrial lidar survey at Padre Island National Seashore

    USGS Publications Warehouse

    Lim, Samsung; Thatcher, Cindy A.; Brock, John C.; Kimbrow, Dustin R.; Danielson, Jeffrey J.; Reynolds, B.J.

    2013-01-01

    The higher point density and mobility of terrestrial laser scanning (light detection and ranging (lidar)) is desired when extremely detailed elevation data are needed for mapping vertically orientated complex features such as levees, dunes, and cliffs, or when highly accurate data are needed for monitoring geomorphic changes. Mobile terrestrial lidar scanners have the capability for rapid data collection on a larger spatial scale compared with tripod-based terrestrial lidar, but few studies have examined the accuracy of this relatively new mapping technology. For this reason, we conducted a field test at Padre Island National Seashore of a mobile lidar scanner mounted on a sport utility vehicle and integrated with a position and orientation system. The purpose of the study was to assess the vertical and horizontal accuracy of data collected by the mobile terrestrial lidar system, which is georeferenced to the Universal Transverse Mercator coordinate system and the North American Vertical Datum of 1988. To accomplish the study objectives, independent elevation data were collected by conducting a high-accuracy global positioning system survey to establish the coordinates and elevations of 12 targets spaced throughout the 12 km transect. These independent ground control data were compared to the lidar scanner-derived elevations to quantify the accuracy of the mobile lidar system. The performance of the mobile lidar system was also tested at various vehicle speeds and scan density settings (e.g. field of view and linear point spacing) to estimate the optimal parameters for desired point density. After adjustment of the lever arm parameters, the final point cloud accuracy was 0.060 m (east), 0.095 m (north), and 0.053 m (height). The very high density of the resulting point cloud was sufficient to map fine-scale topographic features, such as the complex shape of the sand dunes.

  11. Caspian Rapid Assessment Method: a localized procedure for assessment of wetlands at southern fringe of the Caspian Sea.

    PubMed

    Khorami Pour, Sanaz; Monavari, Seyed Masoud; Riazi, Borhan; Khorasani, Nematollah

    2015-07-01

    Although Iran is one of the founders of the Ramsar Convention, there is no comprehensive information available in the country on the status of wetlands in the past or at present. There is also no specific guideline for assessing the status of wetlands in the basin of the Caspian Sea as an ecosystem with unique ecological features. The main aim of this study was to develop a new procedure called the "Caspian Rapid Assessment Method" (CRAM) for the assessment of wetlands at the southern fringe of the Caspian Sea. To this end, 16 rapid assessment methods analyzed by US EPA in 2003 were reviewed to provide an inventory of rapid assessment indices. Excluding less important indices, the inventory was short-listed based on Delphi panelists' consensus. The CRAM was developed with 6 main criteria and 12 sub-criteria. The modified method was used to assess three important wetlands, Anzali, Boojagh and Miyankaleh, at the southern border of the Caspian Sea. According to the obtained results, the highest score of 60 was assigned to the Anzali Wetland. With scores of 56 and 47, the Miyankaleh and Boojagh wetlands were ranked in the next priorities, respectively. At the final stage, the accuracy of the CRAM prioritization values was confirmed using the Friedman test. All of the wetlands were classified into category II, which indicates destroyed wetlands with rehabilitation potential. In recent years, serious threats have deteriorated the wetlands from class III (normal condition) to class II.

  12. How could the replica method improve accuracy of performance assessment of channel coding?

    NASA Astrophysics Data System (ADS)

    Kabashima, Yoshiyuki

    2009-12-01

    We explore the relation between the techniques of statistical mechanics and information theory for assessing the performance of channel coding. We base our study on a framework developed by Gallager in IEEE Trans. Inform. Theory IT-11, 3 (1965), where the minimum decoding error probability is upper-bounded by an average of a generalized Chernoff's bound over a code ensemble. We show that the resulting bound in the framework can be directly assessed by the replica method, which has been developed in statistical mechanics of disordered systems, whereas in Gallager's original methodology further replacement by another bound utilizing Jensen's inequality is necessary. Our approach associates a seemingly ad hoc restriction with respect to an adjustable parameter for optimizing the bound with a phase transition between two replica symmetric solutions, and can improve the accuracy of performance assessments of general code ensembles including low density parity check codes, although its mathematical justification is still open.
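
    For context, the code-ensemble bound referred to above is commonly written in the following random-coding form, where Q denotes the input distribution, W the channel transition probabilities, R the rate in nats, and N the block length. This is a textbook statement of Gallager's bound with notation chosen here for illustration, not a reproduction of the paper's derivation.

    ```latex
    % Ensemble-average block error probability, upper-bounded by the
    % random-coding exponent E_r(R) (Gallager, 1965); notation illustrative.
    \begin{align}
      \bar{P}_e &\le \exp\!\bigl[-N\,E_r(R)\bigr], \\
      E_r(R) &= \max_{0 \le \rho \le 1}\bigl[E_0(\rho, Q) - \rho R\bigr], \\
      E_0(\rho, Q) &= -\ln \sum_{y}\Bigl[\sum_{x} Q(x)\, W(y \mid x)^{1/(1+\rho)}\Bigr]^{1+\rho}.
    \end{align}
    ```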

  13. An assessment of theoretical procedures for π-conjugation stabilisation energies in enones

    NASA Astrophysics Data System (ADS)

    Yu, Li-Juan; Sarrami, Farzaneh; Karton, Amir; O'Reilly, Robert J.

    2015-06-01

    We introduce a representative database of 22 α,β- to β,γ-enecarbonyl isomerisation energies (to be known as the EIE22 data-set). Accurate reaction energies are obtained at the complete basis-set limit CCSD(T) level by means of the high-level W1-F12 thermochemical protocol. The isomerisation reactions involve a migration of one double bond that breaks the conjugated π-system. The considered enecarbonyls involve a range of common functional groups (e.g., Me, NH2, OMe, F, and CN). Apart from π-conjugation effects, the chemical environments are largely conserved on the two sides of the reactions and therefore the EIE22 data-set allows us to assess the performance of a variety of density functional theory (DFT) procedures for the calculation of π-conjugation stabilisation energies in enecarbonyls. We find that, with few exceptions (M05-2X, M06-2X, BMK, and BH&HLYP), all the conventional DFT procedures attain root mean square deviations (RMSDs) between 5.0 and 11.7 kJ mol-1. The range-separated and double-hybrid DFT procedures, on the other hand, show good performance with RMSDs below the 'chemical accuracy' threshold. We also examine the performance of composite and standard ab initio procedures. Of these, SCS-MP2 offers the best performance-to-computational cost ratio with an RMSD of 0.8 kJ mol-1.

  14. Accuracy assessment of CKC high-density surface EMG decomposition in biceps femoris muscle

    NASA Astrophysics Data System (ADS)

    Marateb, H. R.; McGill, K. C.; Holobar, A.; Lateva, Z. C.; Mansourian, M.; Merletti, R.

    2011-10-01

    The aim of this study was to assess the accuracy of the convolution kernel compensation (CKC) method in decomposing high-density surface EMG (HDsEMG) signals from the pennate biceps femoris long-head muscle. Although the CKC method has already been thoroughly assessed in parallel-fibered muscles, there are several factors that could hinder its performance in pennate muscles. Namely, HDsEMG signals from pennate and parallel-fibered muscles differ considerably in terms of the number of detectable motor units (MUs) and the spatial distribution of the motor-unit action potentials (MUAPs). In this study, monopolar surface EMG signals were recorded from five normal subjects during low-force voluntary isometric contractions using a 92-channel electrode grid with 8 mm inter-electrode distances. Intramuscular EMG (iEMG) signals were recorded concurrently using monopolar needles. The HDsEMG and iEMG signals were independently decomposed into MUAP trains, and the iEMG results were verified using a rigorous a posteriori statistical analysis. HDsEMG decomposition identified from 2 to 30 MUAP trains per contraction. 3 ± 2 of these trains were also reliably detected by iEMG decomposition. The measured CKC decomposition accuracy of these common trains over a selected 10 s interval was 91.5 ± 5.8%. The other trains were not assessed. The significant factors that affected CKC decomposition accuracy were the number of HDsEMG channels that were free of technical artifact and the distinguishability of the MUAPs in the HDsEMG signal (P < 0.05). These results show that the CKC method reliably identifies at least a subset of MUAP trains in HDsEMG signals from low force contractions in pennate muscles.

  15. Proposed Testing to Assess the Accuracy of Glass-To-Metal Seal Stress Analyses.

    SciTech Connect

    Chambers, Robert S.; Emery, John M; Tandon, Rajan; Antoun, Bonnie R.; Stavig, Mark E.; Newton, Clay S.; Gibson, Cory S; Bencoe, Denise N.

    2014-09-01

    The material characterization tests conducted on 304L VAR stainless steel and Schott 8061 glass have provided higher fidelity data for calibration of material models used in Glass-to-Metal (GTM) seal analyses. Specifically, a Thermo-Multi-Linear Elastic Plastic (thermo-MLEP) material model has been defined for SS304L and the Simplified Potential Energy Clock nonlinear viscoelastic model has been calibrated for the S8061 glass. To assess the accuracy of finite element stress analyses of GTM seals, a suite of tests is proposed to provide data for comparison to model predictions.

  16. Mathematical accuracy of Aztec land surveys assessed from records in the Codex Vergara.

    PubMed

    Jorge, María del Carmen; Williams, Barbara J; Garza-Hume, C E; Olvera, Arturo

    2011-09-13

    Land surveying in ancient states is documented not only for Eurasia but also for the Americas, amply attested by two Acolhua-Aztec pictorial manuscripts from the Valley of Mexico. The Codex Vergara and the Códice de Santa María Asunción consist of hundreds of drawings of agricultural fields that uniquely record surface areas as well as perimeter measurements. A previous study of the Codex Vergara examines how Acolhua-Aztecs determined field area by reconstructing their calculation procedures. Here we evaluate the accuracy of their area values using modern mathematics. The findings verify the overall mathematical validity of the codex records. Three-quarters of the areas are within 5% of the maximum possible value, and 85% are within 10%, which compares well with reported errors by Western surveyors that postdate Aztec-Acolhua work by several centuries.

  17. Mathematical accuracy of Aztec land surveys assessed from records in the Codex Vergara

    PubMed Central

    Williams, Barbara J.; Garza-Hume, C. E.; Olvera, Arturo

    2011-01-01

    Land surveying in ancient states is documented not only for Eurasia but also for the Americas, amply attested by two Acolhua–Aztec pictorial manuscripts from the Valley of Mexico. The Codex Vergara and the Códice de Santa María Asunción consist of hundreds of drawings of agricultural fields that uniquely record surface areas as well as perimeter measurements. A previous study of the Codex Vergara examines how Acolhua–Aztecs determined field area by reconstructing their calculation procedures. Here we evaluate the accuracy of their area values using modern mathematics. The findings verify the overall mathematical validity of the codex records. Three-quarters of the areas are within 5% of the maximum possible value, and 85% are within 10%, which compares well with reported errors by Western surveyors that postdate Aztec–Acolhua work by several centuries. PMID:21876138

  18. Assessing external cause of injury coding accuracy for transport injury hospitalizations.

    PubMed

    Bowman, Stephen M; Aitken, Mary E

    2011-01-01

    External cause of injury codes (E codes) capture circumstances surrounding injuries. While hospital discharge data are primarily collected for administrative/billing purposes, these data are secondarily used for injury surveillance. We assessed the accuracy and completeness of hospital discharge data for transport-related crashes using trauma registry data as the gold standard. We identified mechanisms of injury with significant disagreement and developed recommendations to improve the accuracy of E codes in administrative data. Overall, we linked 2,192 (99.9 percent) of the 2,195 discharge records to trauma registry records. General mechanism categories showed good agreement, with 84.7 percent of records coded consistently between registry and discharge data (Kappa 0.762, p < .001). However, agreement was lower for specific categories (e.g., ATV crashes), with discharge records capturing only 70.4 percent of cases identified in trauma registry records. Efforts should focus on systematically improving E-code accuracy and detail through training, education, and informatics such as automated data linkages to trauma registries.

  19. In vivo estimation of the glenohumeral joint centre by functional methods: accuracy and repeatability assessment.

    PubMed

    Lempereur, Mathieu; Leboeuf, Fabien; Brochard, Sylvain; Rousset, Jean; Burdin, Valérie; Rémy-Néris, Olivier

    2010-01-19

    Several algorithms have been proposed for determining the centre of rotation of ball joints. These algorithms have mostly been used to locate the hip joint centre, and few studies have focused on the determination of the glenohumeral joint centre. No studies, however, have assessed the accuracy and repeatability of functional methods for the glenohumeral joint centre. This paper aims at evaluating the accuracy and the repeatability with which the glenohumeral joint rotation centre (GHRC) can be estimated in vivo by functional methods. The reference joint centre is the glenohumeral anatomical centre obtained by medical imaging. Five functional methods were tested: the algorithm of Gamage and Lasenby (2002), bias compensated (Halvorsen, 2003), symmetrical centre of rotation estimation (Ehrig et al., 2006), normalization method (Chang and Pollard, 2007), and helical axis (Woltring et al., 1985). The glenohumeral anatomical centre (GHAC) was deduced from the fitting of the humeral head. Four subjects performed three cycles of three different movements (flexion/extension, abduction/adduction and circumduction). For each test, the location of the glenohumeral joint centre was estimated by the five methods. Analyses focused on the 3D location, on the repeatability of the location and on the accuracy, computed as the Euclidian distance between the estimated GHRC and the GHAC. For all the methods, the error repeatability was less than 8.25 mm. This study showed that there are significant differences between the five functional methods. The smallest distance between the estimated joint centre and the centre of the humeral head was obtained with the method of Gamage and Lasenby (2002).
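
    The functional methods compared here all amount to fitting a rotation model to marker trajectories. As a point of reference, the sketch below shows a generic algebraic least-squares sphere fit to a marker path about an approximately fixed centre; it is not the exact formulation of Gamage and Lasenby (2002) or of the other cited methods, and all coordinates and noise levels are invented for illustration.

    ```python
    import numpy as np

    def fit_sphere_centre(marker_positions):
        """Algebraic least-squares sphere fit to a marker trajectory.

        marker_positions: N x 3 array of a humeral marker's positions recorded
        while the arm moves about the (approximately fixed) glenohumeral joint
        centre. Returns the estimated centre and radius.
        """
        p = np.asarray(marker_positions, dtype=float)
        # Each sample satisfies 2*p.c + d = |p|^2, with d = r^2 - |c|^2
        A = np.hstack([2.0 * p, np.ones((p.shape[0], 1))])
        b = np.sum(p ** 2, axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        centre, d = sol[:3], sol[3]
        radius = np.sqrt(d + centre @ centre)
        return centre, radius

    # Hypothetical noisy samples on a sphere of radius 0.15 m about (0.1, 0.2, 1.3)
    rng = np.random.default_rng(0)
    ang = rng.uniform(0, 2 * np.pi, size=(200, 2))
    pts = np.column_stack([
        0.1 + 0.15 * np.sin(ang[:, 0]) * np.cos(ang[:, 1]),
        0.2 + 0.15 * np.sin(ang[:, 0]) * np.sin(ang[:, 1]),
        1.3 + 0.15 * np.cos(ang[:, 0]),
    ]) + rng.normal(0, 0.001, size=(200, 3))
    print(fit_sphere_centre(pts))
    ```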

  20. Application of a Monte Carlo accuracy assessment tool to TDRS and GPS

    NASA Technical Reports Server (NTRS)

    Pavloff, Michael S.

    1994-01-01

    In support of a NASA study on the application of radio interferometry to satellite orbit determination, MITRE developed a simulation tool for assessing interferometric tracking accuracy. Initially, the tool was applied to the problem of determining optimal interferometric station siting for orbit determination of the Tracking and Data Relay Satellite (TDRS). Subsequently, the Orbit Determination Accuracy Estimator (ODAE) was expanded to model the general batch maximum likelihood orbit determination algorithms of the Goddard Trajectory Determination System (GTDS) with measurement types including not only group and phase delay from radio interferometry, but also range, range rate, angular measurements, and satellite-to-satellite measurements. The user of ODAE specifies the statistical properties of error sources, including inherent observable imprecision, atmospheric delays, station location uncertainty, and measurement biases. Upon Monte Carlo simulation of the orbit determination process, ODAE calculates the statistical properties of the error in the satellite state vector and any other parameters for which a solution was obtained in the orbit determination. This paper presents results from ODAE application to two different problems: (1) determination of optimal geometry for interferometric tracking of TDRS, and (2) expected orbit determination accuracy for Global Positioning System (GPS) tracking of low-earth orbit (LEO) satellites. Conclusions about optimal ground station locations for TDRS orbit determination by radio interferometry are presented, and the feasibility of GPS-based tracking for IRIDIUM, a LEO mobile satellite communications (MOBILSATCOM) system, is demonstrated.

  1. Inter-comparison and accuracy assessment of TRMM 3B42 products over Turkey

    NASA Astrophysics Data System (ADS)

    Amjad, Muhammad; Yilmaz, M. Tugrul

    2016-04-01

    Accurate estimation of precipitation, especially over complex topography, is impeded by many factors depending on the platform from which it is acquired. Satellites have the advantage of providing spatially and temporally continuous and consistent datasets. However, utilizing satellite precipitation data in various applications requires its uncertainty estimation to be carried out robustly. In this study, the accuracy of two Tropical Rainfall Measurement Mission (Version 3B42) products, TRMM 3B42 V6 and TRMM 3B42 V7, is assessed by inter-comparing their monthly time series against ground observations obtained over 256 stations in Turkey. Errors are further analyzed for their seasonal and climate-dependent variability. Both V6 and V7 products show better performance during summers than winters. The V6 product has a dry bias over drier regions and the V7 product has a wet bias over wetter regions of the country. Moreover, the rainfall measuring accuracies of both versions are much lower along coastal regions and at lower altitudes. Overall, the statistics of the monthly products confirm that V7 is an improved version compared to V6. (This study was supported by TUBITAK fund # 114Y676).

  2. Assessing Transfer of Stimulus Control Procedures Across Learners With Autism

    PubMed Central

    Bloh, Christopher

    2008-01-01

    The purpose of this study was to evaluate the effectiveness of 2 transfer of stimulus control procedures to teach tacting to individuals with autism. Five participants with differing verbal skills were assessed by a subset of the ABLLS prior to intervention, then were taught 36 previously unknown tacts using the receptive-echoic-tact (r-e-t) and echoic-tact (e-t) transfer procedures. Each transfer method was used separately to establish different tacts, in a multiple baseline design across tacts for 3 sets of stimuli. The results showed that 4 out of 5 participants (who demonstrated mands, tacts, echoics, and sometimes intraverbals prior to the study) acquired all targeted tacts when either r-e-t or e-t training was presented. One participant (who emitted no verbal operants at the onset of the study) did not acquire any tacts. While some participants appeared to learn more quickly with one transfer method, neither method emerged as more efficient with learners with fewer or more extensive verbal skills. The results indicate that both transfer methods promoted the acquisition of tacts for learners with autism with at least minimal verbal skills. PMID:22477406

  3. The influence of sampling interval on the accuracy of trail impact assessment

    USGS Publications Warehouse

    Leung, Y.-F.; Marion, J.L.

    1999-01-01

    Trail impact assessment and monitoring (IA&M) programs have been growing in importance and application in recreation resource management at protected areas. Census-based and sampling-based approaches have been developed in such programs, with systematic point sampling being the most common survey design. This paper examines the influence of sampling interval on the accuracy of estimates for selected trail impact problems. A complete census of four impact types on 70 trails in Great Smoky Mountains National Park was utilized as the base data set for the analyses. The census data were resampled at increasing intervals to create a series of simulated point data sets. Estimates of frequency of occurrence and lineal extent for the four impact types were compared with the census data set. The responses of accuracy loss on lineal extent estimates to increasing sampling intervals varied across different impact types, while the responses on frequency of occurrence estimates were consistent, approximating an inverse asymptotic curve. These findings suggest that systematic point sampling may be an appropriate method for estimating the lineal extent but not the frequency of trail impacts. Sample intervals of less than 100 m appear to yield an excellent level of accuracy for the four impact types evaluated. Multiple regression analysis results suggest that appropriate sampling intervals are more likely to be determined by the type of impact in question rather than the length of trail. The census-based trail survey and the resampling-simulation method developed in this study can be a valuable first step in establishing long-term trail IA&M programs, in which an optimal sampling interval range with acceptable accuracy is determined before investing efforts in data collection.
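
    The resampling idea described above can be sketched in a few lines: a censused set of trail points is subsampled systematically at increasing intervals, and the resulting estimate is compared with the census value. Everything below (positions, occurrence rate, function name) is hypothetical, and the proportion of sampled points showing the impact is used as a simplified proxy for the frequency-of-occurrence estimate.

    ```python
    import numpy as np

    def frequency_estimate(census_positions, impact_flags, interval_m):
        """Estimate the proportion of points showing a given impact type from
        systematic point sampling of a fully censused trail.

        census_positions: positions (m) of censused points along the trail
        impact_flags: 1 if the impact type is present at the point, else 0
        interval_m: systematic sampling interval in metres
        """
        positions = np.asarray(census_positions, dtype=float)
        flags = np.asarray(impact_flags)
        sample_pts = np.arange(positions.min(), positions.max() + interval_m, interval_m)
        # Take the censused point nearest to each systematic sample location
        idx = np.abs(positions[None, :] - sample_pts[:, None]).argmin(axis=1)
        return flags[idx].mean()

    # Hypothetical 5 km trail censused every 10 m, with sparse impact occurrences
    rng = np.random.default_rng(1)
    pos = np.arange(0, 5000, 10)
    impact = (rng.random(pos.size) < 0.08).astype(int)
    print("census value:", round(impact.mean(), 3))
    for interval in (50, 100, 250, 500):
        print(interval, "m interval:", round(frequency_estimate(pos, impact, interval), 3))
    ```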

  4. Assessing Sensor Accuracy for Non-Adjunct Use of Continuous Glucose Monitoring

    PubMed Central

    Patek, Stephen D.; Ortiz, Edward Andrew; Breton, Marc D.

    2015-01-01

    Abstract Background: The level of continuous glucose monitoring (CGM) accuracy needed for insulin dosing using sensor values (i.e., the level of accuracy permitting non-adjunct CGM use) is a topic of ongoing debate. Assessment of this level in clinical experiments is virtually impossible because the magnitude of CGM errors cannot be manipulated and related prospectively to clinical outcomes. Materials and Methods: A combination of archival data (parallel CGM, insulin pump, self-monitoring of blood glucose [SMBG] records, and meals for 56 pump users with type 1 diabetes) and in silico experiments was used to “replay” real-life treatment scenarios and relate sensor error to glycemic outcomes. Nominal blood glucose (BG) traces were extracted using a mathematical model, yielding 2,082 BG segments each initiated by insulin bolus and confirmed by SMBG. These segments were replayed at seven sensor accuracy levels (mean absolute relative differences [MARDs] of 3–22%) testing six scenarios: insulin dosing using sensor values, threshold, and predictive alarms, each without or with considering CGM trend arrows. Results: In all six scenarios, the occurrence of hypoglycemia (frequency of BG levels ≤50 mg/dL and BG levels ≤39 mg/dL) increased with sensor error, displaying an abrupt slope change at MARD =10%. Similarly, hyperglycemia (frequency of BG levels ≥250 mg/dL and BG levels ≥400 mg/dL) increased and displayed an abrupt slope change at MARD=10%. When added to insulin dosing decisions, information from CGM trend arrows, threshold, and predictive alarms resulted in improvement in average glycemia by 1.86, 8.17, and 8.88 mg/dL, respectively. Conclusions: Using CGM for insulin dosing decisions is feasible below a certain level of sensor error, estimated in silico at MARD=10%. In our experiments, further accuracy improvement did not contribute substantively to better glycemic outcomes. PMID:25436913
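
    The MARD metric that drives these in silico experiments is straightforward to compute from paired sensor and reference readings; a minimal sketch follows, with hypothetical values and both quantities in mg/dL.

    ```python
    import numpy as np

    def mard(cgm_values, reference_bg):
        """Mean absolute relative difference (MARD, %) between CGM readings and
        time-matched reference blood glucose values (both in mg/dL)."""
        cgm = np.asarray(cgm_values, dtype=float)
        ref = np.asarray(reference_bg, dtype=float)
        return 100.0 * np.mean(np.abs(cgm - ref) / ref)

    # Hypothetical paired readings (mg/dL)
    cgm = [110, 95, 180, 62, 240]
    ref = [118, 90, 200, 70, 230]
    print(f"MARD = {mard(cgm, ref):.1f}%")
    ```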

  5. Toxicological procedures for assessing the carcinogenic potential of agricultural chemicals.

    PubMed

    Krewski, D; Clayson, D; Collins, B; Munro, I C

    1982-01-01

    Pesticides and other agricultural chemicals are now widely used throughout the world as a means of improving crop yields in order to meet the increasing demands being placed upon the global food supply. In Canada, the use of such chemicals is controlled through government regulations established jointly by the Department of Agriculture and the Department of National Health & Welfare. Such regulations require a detailed evaluation of the toxicological characteristics of the chemical prior to its being cleared for use. In this paper, procedures for assessing the carcinogenic potential of agricultural and other chemicals are discussed. Consideration is given to both the classical long-term in vivo carcinogen bioassay in rodent or other species and the more recently developed short-term in vitro tests based on genetic alterations in bacterial and other test systems.

  6. Shuttle radar topography mission accuracy assessment and evaluation for hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Mercuri, Pablo Alberto

    Digital Elevation Models (DEMs) are increasingly used even in low relief landscapes for multiple mapping applications and modeling approaches such as surface hydrology, flood risk mapping, agricultural suitability, and generation of topographic attributes. The National Aeronautics and Space Administration (NASA) has produced a nearly global database of highly accurate elevation data, the Shuttle Radar Topography Mission (SRTM) DEM. The main goals of this thesis were to investigate quality issues of SRTM, provide measures of vertical accuracy with emphasis on low relief areas, and to analyze the performance for the generation of physical boundaries and streams for watershed modeling and characterization. The absolute and relative accuracy of the two SRTM resolutions, at 1 and 3 arc-seconds, were investigated to generate information that can be used as a reference in areas with similar characteristics in other regions of the world. The absolute accuracy was obtained from accurate point estimates using the best available federal geodetic network in Indiana. The SRTM root mean square error for this area of the Midwest US surpassed data specifications. It was on the order of 2 meters for the 1 arc-second resolution in flat areas of the Midwest US. Estimates of error were smaller for the global coverage 3 arc-second data with very similar results obtained in the flat plains in Argentina. In addition to calculating the vertical accuracy, the impacts of physiography and terrain attributes, like slope, on the error magnitude were studied. The assessment also included analysis of the effects of land cover on vertical accuracy. Measures of local variability were described to identify the adjacency effects produced by surface features in the SRTM DEM, like forests and manmade features near the geodetic point. Spatial relationships among the bare-earth National Elevation Data and SRTM were also analyzed to assess the relative accuracy that was 2.33 meters in terms of the total

  7. Accuracy of teacher assessments of second-language students at risk for reading disability.

    PubMed

    Limbos, M M; Geva, E

    2001-01-01

    This study examined the accuracy of teacher assessments in screening for reading disabilities among students of English as a second language (ESL) and as a first language (L1). Academic and oral language tests were administered to 369 children (249 ESL, 120 L1) at the beginning of Grade 1 and at the end of Grade 2. Concurrently, 51 teachers nominated children at risk for reading failure and completed rating scales assessing academic and oral language skills. Scholastic records were reviewed for notation of concern or referral. The criterion measure was a standardized reading score based on phonological awareness, rapid naming, and word recognition. Results indicated that teacher rating scales and nominations had low sensitivity in identifying ESL and L1 students at risk for reading disability at the 1-year mark. Relative to other forms of screening, teacher-expressed concern had lower sensitivity. Finally, oral language proficiency contributed to misclassifications in the ESL group.

  8. Accuracy of knee range of motion assessment after total knee arthroplasty.

    PubMed

    Lavernia, Carlos; D'Apuzzo, Michele; Rossi, Mark D; Lee, David

    2008-09-01

    Measurement of knee joint range of motion (ROM) is important to assess after total knee arthroplasty. Our objective was to determine the level of agreement and accuracy among observers with different levels of expertise in assessing total ROM after total knee arthroplasty. Forty-one patients underwent x-ray of active and passive knee ROM (gold standard). Five different raters evaluated observed and measured ROM: an orthopedic surgeon, a clinical fellow, a physician assistant, a research fellow, and a physical therapist. A 1-way analysis of variance was used to determine differences in ROM between raters over both conditions. The limit of agreement for each rater for both active and passive total ROM under both conditions was calculated. Analysis of variance indicated a difference between raters for all conditions (range, P = .004 to P < or =.0001). The trend for all raters was to overestimate ROM at higher ranges. Assessment of ROM through direct observation without a goniometer provides inaccurate findings.

  9. Geometric calibration and accuracy assessment of a multispectral imager on UAVs

    NASA Astrophysics Data System (ADS)

    Zheng, Fengjie; Yu, Tao; Chen, Xingfeng; Chen, Jiping; Yuan, Guoti

    2012-11-01

    The increasing developments in Unmanned Aerial Vehicle (UAV) platforms and associated sensing technologies have widely promoted UAV remote sensing applications. UAVs, especially low-cost UAVs, limit the sensor payload in weight and dimension. Cameras on UAVs are mostly panoramic, fisheye-lens, small-format CCD planar array cameras; unknown intrinsic parameters and lens optical distortion will cause serious image aberrations, leading to errors of a few meters or even tens of meters on the ground per pixel. However, the high spatial resolution makes accurate geolocation all the more critical for UAV quantitative remote sensing research. A geometric calibration method for the MCC4-12F Multispectral Imager, designed to be carried on UAVs, has been developed and implemented. A multi-image space resection algorithm, suitable for multispectral cameras, was used to estimate geometric calibration parameters from images taken at random positions and different photogrammetric altitudes over a 3D test field. Both theoretical and practical accuracy assessments were carried out. The results of the theoretical strategy, which resolves object-space and image-point coordinate differences by space intersection, showed that the object-space RMSE was 0.2 pixels in the X direction and 0.14 pixels in the Y direction, while the image-space RMSE was better than 0.5 pixels. To verify the accuracy and reliability of the calibration parameters, a practical study was carried out in Tianjin UAV flight experiments; the corrected accuracy validated by ground checkpoints was less than 0.3 m. Typical surface reflectance retrieved on the basis of the geo-rectified data was compared with ground ASD measurements, resulting in a 4% discrepancy. Hence, the approach presented here is suitable for UAV multispectral imagers.

  10. Analyses of odontometric sexual dimorphism and sex assessment accuracy on a large sample.

    PubMed

    Angadi, Punnya V; Hemani, S; Prabhu, Sudeendra; Acharya, Ashith B

    2013-08-01

    Correct sex assessment of skeletonized human remains allows investigators to undertake a more focused search of missing persons' files to establish identity. Univariate and multivariate odontometric sex assessment has been explored in recent years on small sample sizes, without the use of a test sample. Consequently, inconsistent results have been produced in terms of accuracy of sex allocation. This paper derives data from a large sample of males and females and applies logistic regression formulae to a test sample. Using a digital caliper, buccolingual and mesiodistal dimensions of all permanent teeth (except third molars) were measured on 600 dental casts (306 females, 294 males) of young adults (18-32 years), and the data subjected to univariate (independent samples' t-test) and multivariate statistics (stepwise logistic regression analysis, or LRA). The analyses revealed that canines were the most sexually dimorphic teeth, followed by molars. All tooth variables were larger in males, with 51/56 (91.1%) being statistically larger (p < 0.05). When the stepwise LRA formulae were applied to a test sample of 69 subjects (40 females, 29 males) of the same age range, allocation accuracies of 68.1% for the maxillary teeth, 73.9% for the mandibular teeth, and 71% for teeth of both jaws combined were obtained. The high univariate sexual dimorphism observed herein contrasts with some reports of low, and sometimes reverse, sexual dimorphism (the phenomenon of female tooth dimensions being larger than males'); the LRA results, too, contradict a previous report of virtually 100% sex allocation for a small heterogeneous sample. These findings reflect the importance of using a large sample to quantify sexual dimorphism in tooth dimensions and of applying the derived formulae to a test dataset to ascertain accuracy, which, at best, is moderate in nature.
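
    To illustrate the general workflow of deriving an allocation formula on one sample and checking it on another, the sketch below fits an ordinary (non-stepwise) logistic regression to simulated crown dimensions and reports the allocation accuracy on a held-out test sample. The tooth variables, effect sizes, and noise levels are invented for illustration, and the simplified model only stands in for the stepwise LRA described above.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)

    def simulate(n):
        """Simulate crown dimensions (mm) with hypothetical sex differences;
        sex: 1 = male, 0 = female."""
        sex = rng.integers(0, 2, size=n)
        canine_md = rng.normal(7.4 + 0.35 * sex, 0.40, size=n)   # canines: most dimorphic
        canine_bl = rng.normal(8.0 + 0.30 * sex, 0.45, size=n)
        molar_md  = rng.normal(10.6 + 0.25 * sex, 0.55, size=n)
        return np.column_stack([canine_md, canine_bl, molar_md]), sex

    X_train, y_train = simulate(600)   # derivation sample
    X_test, y_test = simulate(69)      # independent test sample

    # Plain logistic regression in place of the stepwise LRA used in the study
    model = LogisticRegression().fit(X_train, y_train)
    print(f"sex allocation accuracy on the test sample: {model.score(X_test, y_test):.1%}")
    ```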

  11. Accuracy of subjective assessment of fever by Nigerian mothers in under-5 children

    PubMed Central

    Odinaka, Kelechi Kenneth; Edelu, Benedict O.; Nwolisa, Emeka Charles; Amamilo, Ifeyinwa B.; Okolo, Seline N.

    2014-01-01

    Background: Many mothers still rely on palpation to determine whether their children have fever at home before deciding to seek medical attention or administer self-medication. This study was carried out to determine the accuracy of subjective assessment of fever by Nigerian mothers in under-5 children. Patients and Methods: Each eligible child had a tactile assessment of fever by the mother, after which the axillary temperature was measured. Statistical analysis was done using SPSS version 19 (IBM Inc., Chicago, Illinois, USA, 2010). Result: A total of 113 mother/child pairs participated in the study. Palpation overestimated fever by 24.6%. Irrespective of the surface of the hand used for palpation, the sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of tactile assessment were 82.4%, 37.1%, 51.9% and 71.9%, respectively. The use of the palmar surface of the hand had a better sensitivity (95.2%) than the dorsum of the hand (69.2%). The use of multiple sites had better sensitivity (86.7%) than the use of a single site (76.2%). Conclusion: Tactile assessment of childhood fevers by mothers is still a relevant screening tool for the presence or absence of fever. Palpation with the palmar surface of the hand at multiple sites improves the reliability of tactile assessment of fever. PMID:25114371
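
    A minimal sketch of the screening indices reported above, computed from a hypothetical 2x2 table of tactile assessment versus measured fever; the counts are illustrative, not the study's data.

      # Screening-test indices from a 2x2 table (illustrative counts)
      tp, fp, fn, tn = 42, 39, 9, 23   # true/false positives and negatives

      sensitivity = tp / (tp + fn)
      specificity = tn / (tn + fp)
      ppv = tp / (tp + fp)
      npv = tn / (tn + fn)
      print(f"Sens {sensitivity:.1%}, Spec {specificity:.1%}, "
            f"PPV {ppv:.1%}, NPV {npv:.1%}")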

  12. Assessment of Accuracy and Reliability in Acetabular Cup Placement Using an iPhone/iPad System.

    PubMed

    Kurosaka, Kenji; Fukunishi, Shigeo; Fukui, Tomokazu; Nishio, Shoji; Fujihara, Yuki; Okahisa, Shohei; Takeda, Yu; Daimon, Takashi; Yoshiya, Shinichi

    2016-07-01

    Implant positioning is one of the critical factors that influences postoperative outcome of total hip arthroplasty (THA). Malpositioning of the implant may lead to an increased risk of postoperative complications such as prosthetic impingement, dislocation, restricted range of motion, polyethylene wear, and loosening. In 2012, the intraoperative use of smartphone technology in THA for improved accuracy of acetabular cup placement was reported. The purpose of this study was to examine the accuracy of an iPhone/iPad-guided technique in positioning the acetabular cup in THA compared with the reference values obtained from the image-free navigation system in a cadaveric experiment. Five hips of 5 embalmed whole-body cadavers were used in the study. Seven orthopedic surgeons (4 residents and 3 senior hip surgeons) participated in the study. All of the surgeons examined each of the 5 hips 3 times. The target angle was 38°/19° for operative inclination/anteversion angles, which corresponded to radiographic inclination/anteversion angles of 40°/15°. The simultaneous assessment using the navigation system showed mean±SD radiographic alignment angles of 39.4°±2.6° and 16.4°±2.6° for inclination and anteversion, respectively. Assessment of cup positioning based on Lewinnek's safe zone criteria showed all of the procedures (n=105) achieved acceptable alignment within the safe zone. A comparison of the performances by resident and senior hip surgeons showed no significant difference between the groups (P=.74 for inclination and P=.81 for anteversion). The iPhone/iPad technique examined in this study could achieve acceptable performance in determining cup alignment in THA regardless of the surgeon's expertise. [Orthopedics. 2016; 39(4):e621-e626.].
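
    A minimal sketch of a Lewinnek-style safe-zone check (radiographic inclination 30-50°, anteversion 5-25°, per the commonly cited criteria); the sample values are the mean navigation readings quoted above, and the function name is ours.

      # Flag whether a measured cup orientation falls inside Lewinnek's safe zone
      def in_lewinnek_safe_zone(inclination_deg: float, anteversion_deg: float) -> bool:
          return 30.0 <= inclination_deg <= 50.0 and 5.0 <= anteversion_deg <= 25.0

      print(in_lewinnek_safe_zone(39.4, 16.4))  # mean navigation readings -> True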

  13. Evaluation of the accuracy of land-use based ecosystem service assessments for different thematic resolutions.

    PubMed

    Van der Biest, K; Vrebos, D; Staes, J; Boerema, A; Bodí, M B; Fransen, E; Meire, P

    2015-06-01

    The demand for pragmatic tools for mapping ecosystem services (ES) has led to the widespread application of land-use based proxy methods, mostly using coarse thematic resolution classification systems. Although various studies have demonstrated the limited reliability of land use as an indicator of service delivery, this does not prevent the method from being frequently applied on different institutional levels. It has recently been argued that a more detailed land use classification system may increase the accuracy of this approach. This research statistically compares maps of predicted ES delivery based on land use scoring for three different thematic resolutions (number of classes) with maps of ES delivery produced by biophysical models. Our results demonstrate that using a more detailed land use classification system does not significantly increase the accuracy of land-use based ES assessments for the majority of the considered ES. Correlations between land-use based assessments and biophysical model outcomes are relatively strong for provisioning services, independent of the classification system. However, large discrepancies occur frequently between the score and the model-based estimate. We conclude that land use, as a simple indicator, is not effective enough to be used in environmental management as it cannot capture differences in abiotic conditions and ecological processes that explain differences in service delivery. Using land use as a simple indicator will therefore result in inappropriate management decisions, even if a highly detailed land use classification system is used.

  14. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions.

    PubMed

    Noirhomme, Quentin; Lesenfants, Damien; Gomez, Francisco; Soddu, Andrea; Schrouff, Jessica; Garraux, Gaëtan; Luxen, André; Phillips, Christophe; Laureys, Steven

    2014-01-01

    Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Its results are often assessed through either a binomial or a permutation test. Here, we simulated classification results from randomly generated data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not appropriate. In contrast, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real data from a brain-computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson's disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical applications of classification with cross-validation.
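
    A minimal sketch of the recommended permutation approach, assuming scikit-learn: the cross-validated accuracy of a classifier on random labels is compared against a null distribution built by re-running the same cross-validation on permuted labels. Data, classifier and permutation count are illustrative choices.

      # Label-permutation test for a cross-validated classification accuracy
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      X = rng.normal(size=(60, 10))
      y = rng.integers(0, 2, 60)                     # random labels: no real signal

      def cv_accuracy(X, y):
          return cross_val_score(LogisticRegression(), X, y, cv=5).mean()

      observed = cv_accuracy(X, y)
      null = np.array([cv_accuracy(X, rng.permutation(y)) for _ in range(200)])
      p_value = (np.sum(null >= observed) + 1) / (len(null) + 1)
      print(f"observed accuracy {observed:.2f}, permutation p = {p_value:.3f}")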

  15. Assessing Reading. Using Cloze Procedure To Assess Reading Skills. [Packet] and Handbook.

    ERIC Educational Resources Information Center

    Vaughan, Judy

    These instructor's materials consist of a handbook directed to the teacher and 33 worksheets teachers can use with adult students in order to use the cloze procedure to assess how readily they can read materials of differing complexity. The handbook introduces the materials by considering such questions as What is meant by reading?, How could…

  16. A PRIOR EVALUATION OF TWO-STAGE CLUSTER SAMPLING FOR ACCURACY ASSESSMENT OF LARGE-AREA LAND-COVER MAPS

    EPA Science Inventory

    Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, withi...

  17. The SCoRE residual: a quality index to assess the accuracy of joint estimations.

    PubMed

    Ehrig, Rainald M; Heller, Markus O; Kratzenstein, Stefan; Duda, Georg N; Trepczynski, Adam; Taylor, William R

    2011-04-29

    The determination of an accurate centre of rotation (CoR) from skin markers is essential for the assessment of abnormal gait patterns in clinical gait analysis. Despite the many functional approaches to estimate CoRs, no non-invasive analytical determination of the error in the reconstructed joint location is currently available. The purpose of this study was therefore to verify the residual of the symmetrical centre of rotation estimation (SCoRE) as a reliable indirect measure of the error of the computed joint centre. To evaluate the SCoRE residual, numerical simulations were performed to evaluate CoR estimations at different ranges of joint motion. A statistical model was developed and used to determine the theoretical relationships among the SCoRE residual, the magnitude of the skin marker artefact, the corrections to the marker positions, and the error of the CoR estimations relative to the known centre of rotation. We found that the equation err = 0.5·rs provides a reliable relationship between the CoR error, err, and the scaled SCoRE residual, rs, provided that any skin marker artefact is first minimised using the optimal common shape technique (OCST). Measurements on six healthy volunteers showed a reduction of the SCoRE residual from 11 to below 6 mm and therefore demonstrated consistency of the theoretical considerations and numerical simulations with the in vivo data. This study also demonstrates the significant benefit of the OCST for reducing skin marker artefact and thus for predicting the accuracy of determining joint centre positions in functional gait analysis. For the first time, this understanding of the SCoRE residual allows a measure of error in the non-invasive assessment of joint centres. This measure now enables a rapid assessment of the accuracy of the CoR as well as an estimation of the reproducibility and repeatability of skeletal motion patterns.
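
    A minimal worked example of the reported rule of thumb err ≈ 0.5·rs, converting a scaled SCoRE residual into an expected CoR error; the 6 mm input is illustrative.

      # Expected CoR error from the scaled SCoRE residual (rule of thumb err = 0.5 * rs)
      def expected_cor_error(scaled_score_residual_mm: float) -> float:
          return 0.5 * scaled_score_residual_mm

      print(expected_cor_error(6.0))  # a 6 mm residual -> ~3 mm expected CoR error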

  18. Assessment of the sources of error affecting the quantitative accuracy of SPECT imaging in small animals

    SciTech Connect

    Hwang, Andrew B.; Franc, Benjamin L.; Gullberg, Grant T.; Hasegawa, Bruce H.

    2008-02-15

    Small animal SPECT imaging systems have multiple potential applications in biomedical research. Whereas SPECT data are commonly interpreted qualitatively in a clinical setting, the ability to accurately quantify measurements will increase the utility of the SPECT data for laboratory measurements involving small animals. In this work, we assess the effect of photon attenuation, scatter and partial volume errors on the quantitative accuracy of small animal SPECT measurements, first with Monte Carlo simulation and then confirmed with experimental measurements. The simulations modeled the imaging geometry of a commercially available small animal SPECT system. We simulated the imaging of a radioactive source within a cylinder of water, and reconstructed the projection data using iterative reconstruction algorithms. The size of the source and the size of the surrounding cylinder were varied to evaluate the effects of photon attenuation and scatter on quantitative accuracy. We found that photon attenuation can reduce the measured concentration of radioactivity in a volume of interest in the center of a rat-sized cylinder of water by up to 50% when imaging with iodine-125, and up to 25% when imaging with technetium-99m. When imaging with iodine-125, the scatter-to-primary ratio can reach up to approximately 30%, and can cause overestimation of the radioactivity concentration when reconstructing data with attenuation correction. We varied the size of the source to evaluate partial volume errors, which we found to be a strong function of the size of the volume of interest and the spatial resolution. These errors can result in large (>50%) changes in the measured amount of radioactivity. The simulation results were compared with and found to agree with experimental measurements. The inclusion of attenuation correction in the reconstruction algorithm improved quantitative accuracy. We also found that an improvement of the spatial resolution through the

  19. The Eye Phone Study: reliability and accuracy of assessing Snellen visual acuity using smartphone technology

    PubMed Central

    Perera, C; Chakrabarti, R; Islam, F M A; Crowston, J

    2015-01-01

    Purpose Smartphone-based Snellen visual acuity charts have become popular; however, their accuracy has not been established. This study aimed to evaluate the equivalence of a smartphone-based visual acuity chart and a standard 6-m Snellen visual acuity (6SVA) chart. Methods First, a review of available Snellen chart applications on iPhone was performed to determine the most accurate application based on optotype size. Subsequently, a prospective comparative study was performed by measuring conventional 6SVA and then iPhone visual acuity using the 'Snellen' application on an Apple iPhone 4. Results Eleven applications were identified, with accuracy of optotype size ranging from 4.4–39.9%. Eighty-eight patients from general medical and surgical wards in a tertiary hospital took part in the second part of the study. The mean difference in logMAR visual acuity between the two charts was 0.02 logMAR (95% limits of agreement −0.332, 0.372 logMAR). The largest mean difference in logMAR acuity was noted in the subgroup of patients with 6SVA worse than 6/18 (n=5), who had a mean difference of two Snellen visual acuity lines between the charts (0.276 logMAR). Conclusion At the time of the study, we did not identify a Snellen visual acuity app that could predict a patient's standard Snellen visual acuity to within one line. There was considerable variability in the optotype accuracy of apps. Further validation is required for assessment of acuity in patients with severe vision impairment. PMID:25931170
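
    A minimal Bland-Altman-style sketch of the agreement statistics quoted above: mean difference and 95% limits of agreement between paired logMAR measurements. The paired values are placeholders, not the study's data.

      # Mean difference and 95% limits of agreement between two acuity charts
      import numpy as np

      standard_6m = np.array([0.00, 0.18, 0.30, 0.48, 0.60, 0.78])   # logMAR (placeholder)
      smartphone  = np.array([0.02, 0.20, 0.26, 0.54, 0.56, 0.82])   # logMAR (placeholder)

      diff = smartphone - standard_6m
      mean_diff = diff.mean()
      sd_diff = diff.std(ddof=1)
      loa_low, loa_high = mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff
      print(f"mean difference {mean_diff:+.3f} logMAR, "
            f"95% LoA [{loa_low:+.3f}, {loa_high:+.3f}]")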

  20. Reproducibility and accuracy of optic nerve sheath diameter assessment using ultrasound compared to magnetic resonance imaging

    PubMed Central

    2013-01-01

    Background Quantification of the optic nerve sheath diameter (ONSD) by transbulbar sonography is a promising non-invasive technique for the detection of altered intracranial pressure. In order to establish this method as follow-up tool in diseases with intracranial hyper- or hypotension scan-rescan reproducibility and accuracy need to be systematically investigated. Methods The right ONSD of 15 healthy volunteers (mean age 24.5 ± 0.8 years) were measured by both transbulbar sonography (9 – 3 MHz) and 3 Tesla MRI (half-Fourier acquisition single-shot turbo spin-echo sequences, HASTE) 3 and 5 mm behind papilla. All volunteers underwent repeated ultrasound and MRI examinations in order to assess scan-rescan reproducibility and accuracy. Moreover, inter- and intra-observer variabilities were calculated for both techniques. Results Scan-rescan reproducibility was robust for ONSD quantification by sonography and MRI at both depths (r > 0.75, p ≤ 0.001, mean differences < 2%). Comparing ultrasound- and MRI-derived ONSD values, we found acceptable agreement between both methods for measurements at a depth of 3 mm (r = 0.72, p = 0.002, mean difference < 5%). Further analyses revealed good inter- and intra-observer reliability for sonographic measurements 3 mm behind the papilla and for MRI at 3 and 5 mm (r > 0.82, p < 0.001, mean differences < 5%). Conclusions Sonographic ONSD quantification 3 mm behind the papilla can be performed with good reproducibility, measurement accuracy and observer agreement. Thus, our findings emphasize the feasibility of this technique as a non-invasive bedside tool for longitudinal ONSD measurements. PMID:24289136

  1. Assessing the accuracy and performance of implicit solvent models for drug molecules: conformational ensemble approaches.

    PubMed

    Kolář, Michal; Fanfrlík, Jindřich; Lepšík, Martin; Forti, Flavio; Luque, F Javier; Hobza, Pavel

    2013-05-16

    The accuracy and performance of implicit solvent methods for solvation free energy calculations were assessed on a set of 20 neutral drug molecules. Molecular dynamics (MD) provided ensembles of conformations in water and water-saturated octanol. The solvation free energies were calculated by popular implicit solvent models based on quantum mechanical (QM) electronic densities (COSMO-RS, MST, SMD) as well as on molecular mechanical (MM) point-charge models (GB, PB). The performance of the implicit models was tested by a comparison with experimental water-octanol transfer free energies (ΔG(ow)) by using single- and multiconformation approaches. MD simulations revealed difficulties in a priori estimation of the flexibility features of the solutes from simple structural descriptors, such as the number of rotatable bonds. An increasing accuracy of the calculated ΔG(ow) was observed in the following order: GB1 ~ PB < GB7 ≪ MST < SMD ~ COSMO-RS with a clear distinction identified between MM- and QM-based models, although for the set excluding three largest molecules, the differences among COSMO-RS, MST, and SMD were negligible. It was shown that the single-conformation approach applied to crystal geometries provides a rather accurate estimate of ΔG(ow) for rigid molecules yet fails completely for the flexible ones. The multiconformation approaches improved the performance, but only when the deformation contribution was ignored. It was revealed that for large-scale calculations on small molecules a recent GB model, GB7, provided a reasonable accuracy/speed ratio. In conclusion, the study contributes to the understanding of solvation free energy calculations for physical and medicinal chemistry applications.

  2. Assessment of the sources of error affecting the quantitative accuracy of SPECT imaging in small animals

    NASA Astrophysics Data System (ADS)

    Hwang, Andrew B.; Franc, Benjamin L.; Gullberg, Grant T.; Hasegawa, Bruce H.

    2008-05-01

    Small animal SPECT imaging systems have multiple potential applications in biomedical research. Whereas SPECT data are commonly interpreted qualitatively in a clinical setting, the ability to accurately quantify measurements will increase the utility of the SPECT data for laboratory measurements involving small animals. In this work, we assess the effect of photon attenuation, scatter and partial volume errors on the quantitative accuracy of small animal SPECT measurements, first with Monte Carlo simulation and then confirmed with experimental measurements. The simulations modeled the imaging geometry of a commercially available small animal SPECT system. We simulated the imaging of a radioactive source within a cylinder of water, and reconstructed the projection data using iterative reconstruction algorithms. The size of the source and the size of the surrounding cylinder were varied to evaluate the effects of photon attenuation and scatter on quantitative accuracy. We found that photon attenuation can reduce the measured concentration of radioactivity in a volume of interest in the center of a rat-sized cylinder of water by up to 50% when imaging with iodine-125, and up to 25% when imaging with technetium-99m. When imaging with iodine-125, the scatter-to-primary ratio can reach up to approximately 30%, and can cause overestimation of the radioactivity concentration when reconstructing data with attenuation correction. We varied the size of the source to evaluate partial volume errors, which we found to be a strong function of the size of the volume of interest and the spatial resolution. These errors can result in large (>50%) changes in the measured amount of radioactivity. The simulation results were compared with and found to agree with experimental measurements. The inclusion of attenuation correction in the reconstruction algorithm improved quantitative accuracy. We also found that an improvement of the spatial resolution through the use of resolution

  3. Point Cloud Derived From Video Frames: Accuracy Assessment in Relation to Terrestrial Laser Scanning and Digital Camera Data

    NASA Astrophysics Data System (ADS)

    Delis, P.; Zacharek, M.; Wierzbicki, D.; Grochala, A.

    2017-02-01

    The use of image sequences in the form of video frames recorded on data storage is very useful, especially when working with large and complex structures. Two cameras were used in this study: a Sony NEX-5N (for the test object) and a Sony NEX-VG10 E (for the historic building). In both cases, a Sony α f = 16 mm fixed-focus wide-angle lens was used. Single frames with sufficient overlap were selected from the video sequence using an equation for automatic frame selection. In order to improve the quality of the generated point clouds, each video frame underwent histogram equalization and image sharpening. Point clouds were generated from the video frames using an SGM-like image matching algorithm. The accuracy assessment was based on two reference point clouds: the first from terrestrial laser scanning and the second generated from images acquired with a high-resolution camera, the NIKON D800. The research showed that the highest accuracies are obtained for point clouds generated from video frames for which high-pass filtration and histogram equalization had been performed. The studies also showed that, to obtain a point cloud density comparable to TLS, the overlap between subsequent video frames must be 85% or more. Based on the point cloud generated from video data, a parametric 3D model can be generated. This type of 3D model can be used in HBIM construction.
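
    A minimal sketch of the frame pre-processing step described above (histogram equalization plus sharpening), assuming OpenCV is available; the file name and the sharpening kernel are illustrative choices, not the authors' exact settings.

      # Histogram equalization and sharpening of a single video frame
      import cv2
      import numpy as np

      frame = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
      equalized = cv2.equalizeHist(frame)

      sharpen_kernel = np.array([[0, -1, 0],
                                 [-1, 5, -1],
                                 [0, -1, 0]], dtype=np.float32)
      sharpened = cv2.filter2D(equalized, -1, sharpen_kernel)
      cv2.imwrite("frame_0001_preprocessed.png", sharpened)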

  4. Conformity assessment of the measurement accuracy in testing laboratories using a software application

    NASA Astrophysics Data System (ADS)

    Diniţă, A.

    2017-02-01

    This article presents a method for assessing the accuracy of measurements obtained in different tests conducted in laboratories by implementing the interlaboratory comparison method (organization, performance and evaluation of measurements or tests on the same or similar items by two or more laboratories under predetermined conditions). The program (an independent software application), realised by the author and described in this paper, analyses the measurement accuracy and performance of a testing laboratory by comparing the results obtained from different tests, using the modified Youden diagram, which helps identify the different types of errors that can occur in measurement, according to ISO 13528:2015, Statistical methods for use in proficiency testing by interlaboratory comparison. A case study is presented in which the chemical composition of identical samples was determined by five different laboratories. The Youden diagram obtained from this case study was used to identify errors in the laboratory testing equipment. This paper was accepted for publication in the Proceedings after a double peer-review process but was not presented at the Conference ROTRIB'16.
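
    A minimal sketch of a Youden-style plot for an interlaboratory comparison, assuming matplotlib: each laboratory's result on sample A is plotted against its result on sample B, with the grand means drawn as reference lines. Laboratory labels and values are invented for illustration.

      # Youden-style plot: systematic vs. random laboratory error
      import matplotlib.pyplot as plt

      labs     = ["L1", "L2", "L3", "L4", "L5"]
      sample_a = [0.42, 0.45, 0.40, 0.47, 0.43]   # e.g. %C measured in sample A
      sample_b = [0.55, 0.58, 0.52, 0.61, 0.54]   # e.g. %C measured in sample B

      fig, ax = plt.subplots()
      ax.scatter(sample_a, sample_b)
      for lab, xa, xb in zip(labs, sample_a, sample_b):
          ax.annotate(lab, (xa, xb))
      ax.axvline(sum(sample_a) / len(sample_a), linestyle="--")
      ax.axhline(sum(sample_b) / len(sample_b), linestyle="--")
      ax.set_xlabel("Sample A result")
      ax.set_ylabel("Sample B result")
      ax.set_title("Youden plot of interlaboratory results")
      plt.show()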

  5. 30 CFR 845.17 - Procedures for assessment of civil penalties.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 3 2011-07-01 2011-07-01 false Procedures for assessment of civil penalties..., DEPARTMENT OF THE INTERIOR PERMANENT PROGRAM INSPECTION AND ENFORCEMENT PROCEDURES CIVIL PENALTIES § 845.17 Procedures for assessment of civil penalties. (a) Within 15 days of service of a notice or order, the...

  6. 30 CFR 845.17 - Procedures for assessment of civil penalties.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 3 2012-07-01 2012-07-01 false Procedures for assessment of civil penalties..., DEPARTMENT OF THE INTERIOR PERMANENT PROGRAM INSPECTION AND ENFORCEMENT PROCEDURES CIVIL PENALTIES § 845.17 Procedures for assessment of civil penalties. (a) Within 15 days of service of a notice or order, the...

  7. 30 CFR 846.17 - Procedure for assessment of individual civil penalty.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 3 2012-07-01 2012-07-01 false Procedure for assessment of individual civil..., DEPARTMENT OF THE INTERIOR PERMANENT PROGRAM INSPECTION AND ENFORCEMENT PROCEDURES INDIVIDUAL CIVIL PENALTIES § 846.17 Procedure for assessment of individual civil penalty. (a) Notice. The Office shall serve...

  8. 30 CFR 846.17 - Procedure for assessment of individual civil penalty.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 3 2011-07-01 2011-07-01 false Procedure for assessment of individual civil..., DEPARTMENT OF THE INTERIOR PERMANENT PROGRAM INSPECTION AND ENFORCEMENT PROCEDURES INDIVIDUAL CIVIL PENALTIES § 846.17 Procedure for assessment of individual civil penalty. (a) Notice. The Office shall serve...

  9. 43 CFR 11.33 - What types of assessment procedures are available?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Environments, Version 1.51 (NRDAM/GLE). (b) Type B procedures require more extensive field observation than the... 43 Public Lands: Interior 1 2012-10-01 2011-10-01 true What types of assessment procedures are... Assessment Model for Coastal and Marine Environments, Version 2.51 (NRDAM/CME); and a procedure for...

  10. 43 CFR 11.33 - What types of assessment procedures are available?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Environments, Version 1.51 (NRDAM/GLE). (b) Type B procedures require more extensive field observation than the... 43 Public Lands: Interior 1 2013-10-01 2013-10-01 false What types of assessment procedures are... Assessment Model for Coastal and Marine Environments, Version 2.51 (NRDAM/CME); and a procedure for...

  11. 43 CFR 11.33 - What types of assessment procedures are available?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Environments, Version 1.51 (NRDAM/GLE). (b) Type B procedures require more extensive field observation than the... 43 Public Lands: Interior 1 2014-10-01 2014-10-01 false What types of assessment procedures are... Assessment Model for Coastal and Marine Environments, Version 2.51 (NRDAM/CME); and a procedure for...

  12. 43 CFR 11.33 - What types of assessment procedures are available?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... procedures: a procedure for coastal or marine environments, which incorporates the Natural Resource Damage Assessment Model for Coastal and Marine Environments, Version 2.51 (NRDAM/CME); and a procedure for Great Lakes environments, which incorporates the Natural Resource Damage Assessment Model for Great...

  13. An assessment of error-correction procedures for learners with autism.

    PubMed

    McGhan, Anna C; Lerman, Dorothea C

    2013-01-01

    Prior research indicates that the relative effectiveness of different error-correction procedures may be idiosyncratic across learners, suggesting the potential benefit of an individualized assessment prior to teaching. In this study, we evaluated the reliability and utility of a rapid error-correction assessment to identify the least intrusive, most effective procedure for teaching discriminations to 5 learners with autism. The initial assessment included 4 commonly used error-correction procedures. We compared the total number of trials required for the subject to reach the mastery criterion under each procedure. Subjects then received additional instruction with the least intrusive procedure associated with the fewest number of trials and 2 less effective procedures from the assessment. Outcomes of the additional instruction were consistent with those from the initial assessment for 4 of 5 subjects. These findings suggest that an initial assessment may be beneficial for identifying the most appropriate error-correction procedure.

  14. 40 CFR 63.1412 - Continuous process vent applicability assessment procedures and methods.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... engineering principles, measurable process parameters, or physical or chemical laws or properties. Examples of... values, and engineering assessment control applicability assessment requirements are to be determined... by using the engineering assessment procedures in paragraph (k) of this section. (f) Volumetric...

  15. Accuracy assessment of Kinect for Xbox One in point-based tracking applications

    NASA Astrophysics Data System (ADS)

    Goral, Adrian; Skalski, Andrzej

    2015-12-01

    We present the accuracy assessment of a point-based tracking system built on Kinect v2. In our approach, color, IR and depth data were used to determine the positions of spherical markers. To accomplish this task, we calibrated the depth/infrared and color cameras using a custom method. As a reference tool we used Polaris Spectra optical tracking system. The mean error obtained within the range from 0.9 to 2.9 m was 61.6 mm. Although the depth component of the error turned out to be the largest, the random error of depth estimation was only 1.24 mm on average. Our Kinect-based system also allowed for reliable angular measurements within the range of ±20° from the sensor's optical axis.

  16. Integrated three-dimensional digital assessment of accuracy of anterior tooth movement using clear aligners

    PubMed Central

    Zhang, Xiao-Juan; He, Li; Tian, Jie; Bai, Yu-Xing; Li, Song

    2015-01-01

    Objective To assess the accuracy of anterior tooth movement using clear aligners in integrated three-dimensional digital models. Methods Cone-beam computed tomography was performed before and after treatment with clear aligners in 32 patients. Plaster casts were laser-scanned for virtual setup and aligner fabrication. Differences in predicted and achieved root and crown positions of anterior teeth were compared on superimposed maxillofacial digital images and virtual models and analyzed by Student's t-test. Results The mean discrepancies in maxillary and mandibular crown positions were 0.376 ± 0.041 mm and 0.398 ± 0.037 mm, respectively. Maxillary and mandibular root positions differed by 2.062 ± 0.128 mm and 1.941 ± 0.154 mm, respectively. Conclusions Crowns but not roots of anterior teeth can be moved to designated positions using clear aligners, because these appliances cause tooth movement by tilting motion. PMID:26629473

  17. How Does One Assess the Accuracy of Academic Success Predictors? ROC Analysis Applied to University Entrance Factors

    ERIC Educational Resources Information Center

    Vivo, Juana-Maria; Franco, Manuel

    2008-01-01

    This article attempts to present a novel application of a method of measuring accuracy for academic success predictors that could be used as a standard. This procedure is known as the receiver operating characteristic (ROC) curve, which comes from statistical decision techniques. The statistical prediction techniques provide predictor models and…
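
    A minimal sketch of the ROC approach described above, assuming scikit-learn: the ROC curve and AUC of a continuous entrance score as a predictor of a binary academic-success outcome. The synthetic scores and outcomes are placeholders.

      # ROC curve and AUC for an academic-success predictor
      import numpy as np
      from sklearn.metrics import roc_curve, roc_auc_score

      rng = np.random.default_rng(2)
      success = rng.integers(0, 2, 200)                       # 1 = academic success (synthetic)
      entrance_score = rng.normal(60, 10, 200) + 8 * success  # predictor, higher if successful

      fpr, tpr, thresholds = roc_curve(success, entrance_score)
      print(f"AUC = {roc_auc_score(success, entrance_score):.2f}")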

  18. A laboratory assessment of the measurement accuracy of weighing type rainfall intensity gauges

    NASA Astrophysics Data System (ADS)

    Colli, M.; Chan, P. W.; Lanza, L. G.; La Barbera, P.

    2012-04-01

    In recent years the WMO Commission for Instruments and Methods of Observation (CIMO) has fostered noticeable advancements in the accuracy of precipitation measurement by providing recommendations on the standardization of equipment and exposure, instrument calibration and data correction, as a consequence of various comparative campaigns involving manufacturers and national meteorological services from the participating countries (Lanza et al., 2005; Vuerich et al., 2009). Extreme event analysis has been shown to be highly affected by on-site RI measurement accuracy (see e.g. Molini et al., 2004), and the time resolution of the available RI series certainly constitutes another key factor in constructing hyetographs that are representative of real rain events. The OTT Pluvio2 weighing gauge (WG) and the GEONOR T-200 vibrating-wire precipitation gauge demonstrated very good performance under previous constant flow rate calibration efforts (Lanza et al., 2005). Although WGs do provide better performance than more traditional tipping-bucket rain gauges (TBR) under continuous and constant reference intensity, dynamic effects seem to affect the accuracy of WG measurements under real-world, time-varying rainfall conditions (Vuerich et al., 2009). The most relevant effect is due to the response time of the acquisition system and the resulting systematic delay of the instrument in assessing the exact weight of the bin containing cumulated precipitation. This delay assumes a relevant role when high-resolution rain intensity time series are sought from the instrument, as is the case in many hydrologic and meteo-climatic applications. This work reports the laboratory evaluation of Pluvio2 and T-200 rainfall intensity measurement accuracy. Tests are carried out by simulating different artificial precipitation events, namely non-stationary rainfall intensity, using a highly accurate dynamic rainfall generator. Time series measured by an Ogawa drop counter (DC) at a field test site

  19. A Method for Assessing Ground-Truth Accuracy of the 5DCT Technique

    PubMed Central

    Dou, T. H.; Thomas, D. H.; O'Connell, D.; Lamb, J.M.; Lee, P.; Low, D.A.

    2015-01-01

    Purpose To develop a technique that assesses the accuracy of the breathing phase-specific volume image generation process by patient-specific breathing motion model using the original free-breathing CT scans as ground truths. Methods 16 lung cancer patients underwent a previously published protocol in which 25 free-breathing fast helical CT scans were acquired with a simultaneous breathing surrogate. A patient-specific motion model was constructed based on the tissue displacements determined by a state-of-the-art deformable image registration. The first image was arbitrarily selected as the reference image. The motion model was used, along with the free-breathing phase information of the original 25 image datasets, to generate a set of deformation vector fields (DVF) that mapped the reference image to the 24 non-reference images. The high-pitch helically acquired original scans served as ground truths because they captured the instantaneous tissue positions during free breathing. Image similarity between the simulated and the original scans was assessed using deformable registration that evaluated the point-wise discordance throughout the lungs. Results Qualitative comparisons using image overlays showed excellent agreement between the simulated and the original images. Even large 2 cm diaphragm displacements were very well modeled, as was sliding motion across the lung-chest wall boundary. The mean error across the patient cohort was 1.15±0.37 mm, while the mean 95th percentile error was 2.47±0.78 mm. Conclusion The proposed ground truth based technique provided voxel-by-voxel accuracy analysis that could identify organ or tumor-specific motion modeling errors for treatment planning. Despite a large variety of breathing patterns and lung deformations during the free-breathing scanning session, the 5DCT technique was able to accurately reproduce the original helical CT scans, suggesting its applicability to a wide range of patients. PMID:26530763

  20. Accuracy of Optimized Branched Algorithms to Assess Activity-Specific PAEE

    PubMed Central

    Edwards, Andy G.; Hill, James O.; Byrnes, William C.; Browning, Raymond C.

    2009-01-01

    PURPOSE To assess the activity-specific accuracy achievable by branched algorithm (BA) analysis of simulated daily-living physical activity energy expenditure (PAEE) within a sedentary population. METHODS Sedentary men (n=8) and women (n=8) first performed a treadmill calibration protocol, during which heart rate (HR), accelerometry (ACC), and PAEE were measured in 1-minute epochs. From these data, HR-PAEE and ACC-PAEE regressions were constructed and used in each of six analytic models to predict PAEE from ACC and HR data collected during a subsequent simulated daily-living protocol. Criterion PAEE was measured during both protocols via indirect calorimetry. The accuracy achieved by each model was assessed by the root mean square of the difference between model-predicted daily-living PAEE and the criterion daily-living PAEE (expressed here as % of mean daily-living PAEE). RESULTS Across the range of activities an unconstrained post hoc optimized branched algorithm best predicted criterion PAEE. Estimates using individual calibration were generally more accurate than those using group calibration (14 vs. 16% error, respectively). These analyses also performed well within each of the six daily-living activities, but systematic errors appeared for several of those activities, which may be explained by an inability of the algorithm to simultaneously accommodate a heterogeneous range of activities. Analyses of between-subject and between-activity mean square error suggest that optimization involving minimization of RMS for total daily-living PAEE is associated with decreased error between subjects but increased error between activities. CONCLUSION The performance of post hoc optimized branched algorithms may be limited by heterogeneity in the daily-living activities being performed. PMID:19952842

  1. Quality assessment of comparative diagnostic accuracy studies: our experience using a modified version of the QUADAS-2 tool.

    PubMed

    Wade, Ros; Corbett, Mark; Eastwood, Alison

    2013-09-01

    Assessing the quality of included studies is a vital step in undertaking a systematic review. The recently revised Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool (QUADAS-2), which is the only validated quality assessment tool for diagnostic accuracy studies, does not include specific criteria for assessing comparative studies. As part of an assessment that included comparative diagnostic accuracy studies, we used a modified version of QUADAS-2 to assess study quality. We modified QUADAS-2 by duplicating questions relating to the index test, to assess the relevant potential sources of bias for both the index test and comparator test. We also added review-specific questions. We have presented our modified version of QUADAS-2 and outlined some key issues for consideration when assessing the quality of comparative diagnostic accuracy studies, to help guide other systematic reviewers conducting comparative diagnostic reviews. Until QUADAS is updated to incorporate assessment of comparative studies, QUADAS-2 can be used, although modification and careful thought is required. It is important to reflect upon whether aspects of study design and methodology favour one of the tests over another.

  2. Inertial Measures of Motion for Clinical Biomechanics: Comparative Assessment of Accuracy under Controlled Conditions - Effect of Velocity

    PubMed Central

    Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian

    2013-01-01

    Background Inertial measurement of motion with Attitude and Heading Reference Systems (AHRS) is emerging as an alternative to 3D motion capture systems in biomechanics. The objectives of this study are: 1) to describe the absolute and relative accuracy of multiple units of commercially available AHRS under various types of motion; and 2) to evaluate the effect of motion velocity on the accuracy of these measurements. Methods The criterion validity of accuracy was established under controlled conditions using an instrumented Gimbal table. AHRS modules were carefully attached to the center plate of the Gimbal table and put through experimental static and dynamic conditions. Static and absolute accuracy was assessed by comparing the AHRS orientation measurement to those obtained using an optical gold standard. Relative accuracy was assessed by measuring the variation in relative orientation between modules during trials. Findings Evaluated AHRS systems demonstrated good absolute static accuracy (mean error < 0.5°) and clinically acceptable absolute accuracy under conditions of slow motion (mean error between 0.5° and 3.1°). In slow motions, relative accuracy varied from 2° to 7° depending on the type of AHRS and the type of rotation. Absolute and relative accuracy were significantly affected (p<0.05) by velocity during sustained motions. The extent of that effect varied across AHRS. Interpretation Absolute and relative accuracy of AHRS are affected by environmental magnetic perturbations and conditions of motion. Relative accuracy of AHRS is mostly affected by the ability of all modules to locate the same global reference coordinate system at all times. Conclusions Existing AHRS systems can be considered for use in clinical biomechanics under constrained conditions of use. While their individual capacity to track absolute motion is relatively consistent, the use of multiple AHRS modules to compute relative motion between rigid bodies needs to be optimized according to

  3. A retrospective study to validate an intraoperative robotic classification system for assessing the accuracy of kirschner wire (K-wire) placements with postoperative computed tomography classification system for assessing the accuracy of pedicle screw placements.

    PubMed

    Tsai, Tai-Hsin; Wu, Dong-Syuan; Su, Yu-Feng; Wu, Chieh-Hsin; Lin, Chih-Lung

    2016-09-01

    The purpose of this retrospective study is to validate an intraoperative robotic grading classification system for assessing the accuracy of Kirschner-wire (K-wire) placements against the postoperative computed tomography (CT)-based classification system for assessing the accuracy of pedicle screw placements. We conducted a retrospective review of prospectively collected data from 35 consecutive patients who underwent instrumentation with 176 robot-assisted pedicle screws at Kaohsiung Medical University Hospital from September 2014 to November 2015. During the operation, we used a robotic grading classification system to verify the intraoperative accuracy of K-wire placements. Three months after surgery, we used the common CT-based classification system to assess the postoperative accuracy of pedicle screw placements. The distributions of accuracy between the intraoperative robot-assisted and the various postoperative CT-based classification systems were compared using kappa statistics of agreement. The intraoperative accuracies of K-wire placements before and after repositioning were classified as excellent (131/176, 74.4% and 133/176, 75.6%, respectively), satisfactory (36/176, 20.5% and 41/176, 23.3%, respectively), and malpositioned (9/176, 5.1% and 2/176, 1.1%, respectively). Postoperative accuracy was evaluated with the CT-based classification systems; no screw placements were evaluated as unacceptable under any of these systems. Kappa statistics revealed no significant differences between the proposed system and the aforementioned classification systems (P < 0.001). Our results revealed no significant differences between the intraoperative robotic grading system and the various postoperative CT-based grading systems. The robotic grading classification system is a feasible method for evaluating the accuracy of K-wire placements. Using the intraoperative robotic grading system to classify the accuracy of K-wire placements enables prediction of the postoperative accuracy of pedicle screw
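
    A minimal sketch of the agreement analysis mentioned above, assuming scikit-learn: Cohen's kappa between intraoperative and postoperative gradings of the same screws. The five example gradings are invented.

      # Cohen's kappa between two grading systems applied to the same screws
      from sklearn.metrics import cohen_kappa_score

      intraop = ["excellent", "excellent", "satisfactory", "excellent", "malpositioned"]
      postop  = ["excellent", "satisfactory", "satisfactory", "excellent", "malpositioned"]
      print(f"kappa = {cohen_kappa_score(intraop, postop):.2f}")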

  4. Improving the Accuracy of Urban Environmental Quality Assessment Using Geographically-Weighted Regression Techniques

    PubMed Central

    Faisal, Kamil; Shaker, Ahmed

    2017-01-01

    Urban Environmental Quality (UEQ) can be treated as a generic indicator that objectively represents the physical and socio-economic condition of the urban and built environment. The value of UEQ illustrates a sense of satisfaction to its population through assessing different environmental, urban and socio-economic parameters. This paper elucidates the use of the Geographic Information System (GIS), Principal Component Analysis (PCA) and Geographically-Weighted Regression (GWR) techniques to integrate various parameters and estimate the UEQ of two major cities in Ontario, Canada. Remote sensing, GIS and census data were first obtained to derive various environmental, urban and socio-economic parameters. The aforementioned techniques were used to integrate all of these environmental, urban and socio-economic parameters. Three key indicators, including family income, higher level of education and land value, were used as a reference to validate the outcomes derived from the integration techniques. The results were evaluated by assessing the relationship between the extracted UEQ results and the reference layers. Initial findings showed that the GWR with the spatial lag model represents an improved precision and accuracy by up to 20% with respect to those derived by using GIS overlay and PCA techniques for the City of Toronto and the City of Ottawa. The findings of the research can help the authorities and decision makers to understand the empirical relationships among environmental factors, urban morphology and real estate and decide for more environmental justice. PMID:28272334

  5. Accuracy assessment of satellite altimetry over central East Antarctica by kinematic GNSS and crossover analysis

    NASA Astrophysics Data System (ADS)

    Schröder, Ludwig; Richter, Andreas; Fedorov, Denis; Knöfel, Christoph; Ewert, Heiko; Dietrich, Reinhard; Matveev, Aleksey Yu.; Scheinert, Mirko; Lukin, Valery

    2014-05-01

    Satellite altimetry is a unique technique to observe the contribution of the Antarctic ice sheet to global sea-level change. To fulfill the high quality requirements for its application, the respective products need to be validated against independent data like ground-based measurements. Kinematic GNSS provides a powerful method to acquire precise height information along the track of a vehicle. Within a collaboration of TU Dresden and Russian partners during the Russian Antarctic Expeditions in the seasons from 2001 to 2013 we recorded several such profiles in the region of the subglacial Lake Vostok, East Antarctica. After 2006 these datasets also include observations along seven continental traverses with a length of about 1600km each between the Antarctic coast and the Russian research station Vostok (78° 28' S, 106° 50' E). After discussing some special issues concerning the processing of the kinematic GNSS profiles under the very special conditions of the interior of the Antarctic ice sheet, we will show their application for the validation of NASA's laser altimeter satellite mission ICESat and of ESA's ice mission CryoSat-2. Analysing the height differences at crossover points, we can get clear insights into the height regime at the subglacial Lake Vostok. Thus, these profiles as well as the remarkably flat lake surface itself can be used to investigate the accuracy and possible error influences of these missions. We will show how the transmit-pulse reference selection correction (Gaussian vs. centroid, G-C) released in January 2013 helped to further improve the release R633 ICESat data and discuss the height offsets and other effects of the CryoSat-2 radar data. In conclusion we show that only a combination of laser and radar altimetry can provide both, a high precision and a good spatial coverage. An independent validation with ground-based observations is crucial for a thorough accuracy assessment.

  6. Accuracy Assessment of Immediate and Delayed Implant Placements Using CAD/CAM Surgical Guides.

    PubMed

    Alzoubi, Fawaz; Massoomi, Nima; Nattestad, Anders

    2016-10-01

    The aim of this study is to assess the accuracy of immediately placed implants using Anatomage Invivo5 computer-assisted design/computer-assisted manufacturing (CAD/CAM) surgical guides and compare the accuracy to delayed implant placement protocol. Patients who had implants placed using Anatomage Invivo5 CAD/CAM surgical guides during the period of 2012-2015 were evaluated retrospectively. Patients who received immediate implant placements and/or delayed implant placements replacing 1-2 teeth were included in this study. Pre- and postsurgical images were superimposed to evaluate deviations at the crest, apex, and angle. A total of 40 implants placed in 29 patients were included in this study. The overall mean deviations measured at the crest, apex, and angle were 0.86 mm, 1.25 mm, and 3.79°, respectively. The means for the immediate group deviations were: crest = 0.85 mm, apex = 1.10, and angle = 3.49°. The means for the delayed group deviations were: crest = 0.88 mm, apex = 1.59, and angle = 4.29°. No statistically significant difference was found at the crest and angle; however, there was a statistically significant difference between the immediate and delayed group at the apex, with the immediate group presenting more accurate placements at the apical point than the delayed group. CAD/CAM surgical guides can be reliable tools to accurately place implants immediately and/or in a delayed fashion. No statistically significant differences were found between the delayed and the immediate group at the crest and angle, however apical position was more accurate in the immediate group.
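
    A minimal sketch of the group comparison reported above: an independent-samples t-test on apical deviations for immediate versus delayed placements, using SciPy; the deviation values are placeholders.

      # Compare apical deviation between immediate and delayed placement groups
      import numpy as np
      from scipy import stats

      immediate_apex = np.array([0.9, 1.1, 1.2, 1.0, 1.3, 1.1])  # mm (placeholder)
      delayed_apex   = np.array([1.4, 1.7, 1.5, 1.8, 1.6, 1.5])  # mm (placeholder)

      t, p = stats.ttest_ind(immediate_apex, delayed_apex)
      print(f"t = {t:.2f}, p = {p:.4f}")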

  7. Accuracy Assessment of GO Pro Hero 3 (black) Camera in Underwater Environment

    NASA Astrophysics Data System (ADS)

    Helmholz, , P.; Long, J.; Munsie, T.; Belton, D.

    2016-06-01

    Modern digital cameras are increasing in quality whilst decreasing in size. In the last decade, a number of waterproof consumer digital cameras (action cameras) have become available, which often cost less than 500. A possible application of such action cameras is in the field of underwater photogrammetry, especially since the change of medium to water can in turn counteract the distortions present. The goal of this paper is to investigate the suitability of such action cameras for underwater photogrammetric applications, focusing on the stability of the camera and the accuracy of the derived coordinates. For this paper a series of image sequences was captured in a water tank. A calibration frame was placed in the water tank, allowing the calibration of the camera and the validation of the measurements using check points. The accuracy assessment covered three test sets operating three GoPro sports cameras of the same model (Hero 3 black). The test sets included handling the camera in a controlled manner, where the camera was only dunked into the water tank, using 7 MP and 12 MP resolution, and rough handling, where the camera was shaken as well as being removed from the waterproof case, using 12 MP resolution. The tests showed that camera stability was given, with a maximum standard deviation of the camera constant σc of 0.0031 mm for 7 MP (for an average c of 2.720 mm) and 0.0072 mm for 12 MP (for an average c of 3.642 mm). The residual test of the check points gave, for the 7 MP test series, the largest rms value of only 0.450 mm and the largest maximal residual of only 2.5 mm. For the 12 MP test series the maximum rms value is 0.653 mm.

  8. Assessment of Exceptional Students: Educational and Psychological Procedures. Fifth Edition.

    ERIC Educational Resources Information Center

    Taylor, Ronald L.

    This book provides information on the assessment of students with disabilities. It is divided into six major parts. Part 1, "Introduction to Assessment: Issues and Concerns," discusses the historical, philosophical, and legal foundations of assessment, introduces psychological assessments, and proposes an assessment model. Part 2, "Informal…

  9. A simple procedure to assess esthetic preference for dentofacial treatment.

    PubMed

    Cohn, E R; Eigenbrode, C R; Dongelli, P; Ferketic, M; Close, J M; Sassouni, V; Sassouni, A

    1986-03-01

    A procedure is described in which lateral facial photographs were cut apart and reassembled in ways that approximated desired esthetic change. Two groups of subjects were asked to complete the Sassouni "cut-up-paste-back" procedure. Group 1 consisted of 20 adult dental professionals; group 2 comprised 18 college students unacquainted with dental studies. Both groups made similar alterations on a photograph at the beginning and at the end of a 2-week period. Photographic alterations were highly similar to written descriptions of intended changes. The "cut-up-paste-back" procedure is a simple and inexpensive way to facilitate dentist-patient communication during treatment planning. The procedure also has applicability for research in facial esthetic preference.

  10. Utilizing the Global Land Cover 2000 reference dataset for a comparative accuracy assessment of 1 km global land cover maps

    NASA Astrophysics Data System (ADS)

    Schultz, M.; Tsendbazar, N. E.; Herold, M.; Jung, M.; Mayaux, P.; Goehman, H.

    2015-04-01

    Many investigators use global land cover (GLC) maps for different purposes, such as an input for global climate models. The current GLC maps used for such purposes are based on different remote sensing data, methodologies and legends. Consequently, comparison of GLC maps is difficult and information about their relative utility is limited. The objective of this study is to analyse and compare the thematic accuracies of GLC maps (i.e., IGBP-DISCover, UMD, MODIS, GLC2000 and SYNMAP) at 1 km resolution by (a) re-analysing the GLC2000 reference dataset, (b) applying a generalized GLC legend and (c) comparing their thematic accuracies at different homogeneity levels. The accuracy assessment was based on the GLC2000 reference dataset, with 1253 samples that were visually interpreted. The legends of the GLC maps and the reference dataset were harmonized into 11 general land cover classes. The results show that the map accuracy estimates vary by up to 10-16% depending on the homogeneity of the reference point (HRP) for all the GLC maps. An increase of the HRP resulted in higher overall accuracies but reduced confidence in the accuracy estimates for the GLC maps, owing to the smaller number of usable samples. The overall accuracy of SYNMAP was the highest at any HRP level, followed by GLC2000. The overall accuracies of the maps also varied by up to 10% depending on the definition of agreement between the reference and map categories in heterogeneous landscapes. A careful consideration of heterogeneous landscapes is therefore recommended for future accuracy assessments of land cover maps.
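
    A minimal sketch of the core accuracy measure used in such comparisons: overall accuracy as the proportion of reference samples whose harmonized map label matches the harmonized reference label. The toy labels below stand in for the GLC2000 reference samples.

      # Overall thematic accuracy from harmonized reference and map labels
      import numpy as np

      reference = np.array(["forest", "crop", "crop", "urban", "forest", "water"])
      mapped    = np.array(["forest", "crop", "grass", "urban", "crop",  "water"])

      overall_accuracy = np.mean(reference == mapped)
      print(f"Overall accuracy: {overall_accuracy:.1%}")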

  11. Accuracy of Cameriere's cut-off value for third molar in assessing 18 years of age.

    PubMed

    De Luca, S; Biagi, R; Begnoni, G; Farronato, G; Cingolani, M; Merelli, V; Ferrante, L; Cameriere, R

    2014-02-01

    Due to increasingly numerous international migrations, estimating the age of unaccompanied minors is becoming of enormous significance for forensic professionals who are required to deliver expert opinions. The third molar tooth is one of the few anatomical sites available for estimating the age of individuals in late adolescence. This study verifies the accuracy of Cameriere's cut-off value of the third molar index (I3M) in assessing 18 years of age. For this purpose, a sample of orthopantomographs (OPTs) of 397 living subjects aged between 13 and 22 years (192 female and 205 male) was analyzed. Age distribution gradually decreases as I3M increases in both males and females. The results show that the sensitivity of the test was 86.6%, with a 95% confidence interval of (80.8%, 91.1%), and its specificity was 95.7%, with a 95% confidence interval of (92.1%, 98%). The proportion of correctly classified individuals was 91.4%. The estimated post-test probability was 95.6%, with a 95% confidence interval of (92%, 98%). Hence, the probability that a subject who is positive on the test (i.e., I3M < 0.08) is 18 years of age or older is 95.6%.
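
    A minimal sketch of how a post-test probability follows from sensitivity, specificity and a prior via Bayes' rule; sensitivity and specificity are the figures quoted above, while the prior (the assumed share of subjects aged 18 or over in the sample) is an illustrative value, not taken from the paper.

      # Post-test probability of being >= 18 given a positive I3M test
      def post_test_probability(sensitivity, specificity, prior):
          positive_rate = sensitivity * prior + (1 - specificity) * (1 - prior)
          return sensitivity * prior / positive_rate

      # prior = 0.55 is an assumed adult share for illustration
      print(f"{post_test_probability(0.866, 0.957, 0.55):.1%}")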

  12. Descriptive and Inferential Procedures for Assessing Differential Item Functioning in Polytomous Items.

    ERIC Educational Resources Information Center

    Zwick, Rebecca; Thayer, Dorothy T.; Mazzeo, John

    1997-01-01

    Differential item functioning (DIF) assessment procedures for items with more than two ordered score categories, referred to as polytomous items, were evaluated. Three descriptive statistics (standardized mean difference and two procedures based on the SIBTEST computer program) and five inferential procedures were used. Conditions under which the…

  13. A General Factor-Analytic Procedure for Assessing Response Bias in Questionnaire Measures

    ERIC Educational Resources Information Center

    Ferrando, Pere J.; Lorenzo-Seva, Urbano; Chico, Eliseo

    2009-01-01

    This article proposes procedures for simultaneously assessing and controlling acquiescence and social desirability in questionnaire items. The procedures are based on a semi-restricted factor-analytic tridimensional model, and can be used with binary, graded-response, or more continuous items. We discuss procedures for fitting the model (item…

  14. Integrating Landsat and California pesticide exposure estimation at aggregated analysis scales: Accuracy assessment of rurality

    NASA Astrophysics Data System (ADS)

    Vopham, Trang Minh

    Pesticide exposure estimation in epidemiologic studies can be constrained to analysis scales commonly available for cancer data - census tracts and ZIP codes. Research goals included (1) demonstrating the feasibility of modifying an existing geographic information system (GIS) pesticide exposure method using California Pesticide Use Reports (PURs) and land use surveys to incorporate Landsat remote sensing and to accommodate aggregated analysis scales, and (2) assessing the accuracy of two rurality metrics (quality of geographic area being rural), Rural-Urban Commuting Area (RUCA) codes and the U.S. Census Bureau urban-rural system, as surrogates for pesticide exposure when compared to the GIS gold standard. Segments, derived from 1985 Landsat NDVI images, were classified using a crop signature library (CSL) created from 1990 Landsat NDVI images via a sum of squared differences (SSD) measure. Organochlorine, organophosphate, and carbamate Kern County PUR applications (1974-1990) were matched to crop fields using a modified three-tier approach. Annual pesticide application rates (lb/ac), and sensitivity and specificity of each rurality metric were calculated. The CSL (75 land use classes) classified 19,752 segments [median SSD 0.06 NDVI]. Of the 148,671 PUR records included in the analysis, Landsat contributed 3,750 (2.5%) additional tier matches. ZIP Code Tabulation Area (ZCTA) rates ranged between 0 and 1.36 lb/ac and census tract rates between 0 and 1.57 lb/ac. Rurality was a mediocre pesticide exposure surrogate; higher rates were observed among urban areal units. ZCTA-level RUCA codes offered greater specificity (39.1-60%) and sensitivity (25-42.9%). The U.S. Census Bureau metric offered greater specificity (92.9-97.5%) at the census tract level; sensitivity was low (≤6%). The feasibility of incorporating Landsat into a modified three-tier GIS approach was demonstrated. Rurality accuracy is affected by rurality metric, areal aggregation, pesticide chemical
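
    A minimal sketch of the sum of squared differences (SSD) matching step, with a hypothetical crop signature library and segment NDVI profile; the actual CSL construction and three-tier PUR matching are considerably more involved.

        # Sketch: assigning a segment to the crop signature with the minimum
        # sum of squared differences (SSD) of NDVI values.
        import numpy as np

        def classify_segment(segment_ndvi, signature_library):
            """Return (best_class, ssd) for one segment's NDVI profile."""
            best_class, best_ssd = None, np.inf
            for crop_class, signature in signature_library.items():
                ssd = np.sum((np.asarray(segment_ndvi) - np.asarray(signature)) ** 2)
                if ssd < best_ssd:
                    best_class, best_ssd = crop_class, ssd
            return best_class, best_ssd

        library = {"cotton": [0.25, 0.55, 0.70, 0.40],      # hypothetical signatures
                   "grapes": [0.30, 0.45, 0.50, 0.45],
                   "fallow": [0.15, 0.18, 0.20, 0.17]}
        print(classify_segment([0.28, 0.52, 0.66, 0.42], library))  # -> ('cotton', ssd)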

  15. Assessment of Precipitation Forecast Accuracy over Eastern Black Sea Region using WRF-ARW

    NASA Astrophysics Data System (ADS)

    Bıyık, G.; Unal, Y.; Onol, B.

    2009-09-01

    Surface topography such as mountain barriers, existing water bodies and semi-permanent mountain glaciers changes large-scale atmospheric patterns and creates a challenge for reliable precipitation prediction. The Eastern Black Sea region of Turkey is an example. The Black Sea mountain chains lie west to east along the coastline with an average height of 2000 m and a highest point of 3973 m, and from the coastline inland there is a very sharp topography change. For this project we select the Eastern Black Sea region of Turkey to assess precipitation forecast accuracy. This is a unique region of Turkey, which receives both the highest amounts of precipitation and precipitation throughout the whole year. The amounts of rain and snow are important because they supply water to the main river systems of Turkey. Turkey is in general under the influence of both continental polar (Cp) and tropical air masses. Their interaction with the orography causes orographic precipitation that is effective over the region. The Caucasus Mountains, which contain the highest point of Georgia, also moderate the climate of the southern parts by preventing the penetration of colder air masses from the north. The southern part of the western Black Sea region has a more continental climate because of the lee-side effect of the mountains. Therefore, precipitation forecasting in the region is important for operational forecasters and researchers. Our aim in this project is to investigate WRF precipitation accuracy during 10 extreme precipitation, 10 normal precipitation and 10 no-precipitation days, using forecasts for two days ahead. Cases are selected from the years between 2000 and 2003. Eleven Eastern Black Sea stations located along the coastline are used to determine 20 extreme and 10 average precipitation days. During the project, three different resolutions with three nested domains are tested to determine the model sensitivity to domain boundaries and resolution. As a result of our tests, a 6 km resolution for the finer domain was found suitable

  16. Mind-reading accuracy in intimate relationships: assessing the roles of the relationship, the target, and the judge.

    PubMed

    Thomas, Geoff; Fletcher, Garth J O

    2003-12-01

    Using a video-review procedure, multiple perceivers carried out mind-reading tasks of multiple targets at different levels of acquaintanceship (50 dating couples, friends of the dating partners, and strangers). As predicted, the authors found that mind-reading accuracy was (a) higher as a function of increased acquaintanceship, (b) relatively unaffected by target effects, (c) influenced by individual differences in perceivers' ability, and (d) higher for female than male perceivers. In addition, superior mind-reading accuracy (for dating couples and friends) was related to higher relationship satisfaction, closeness, and more prior disclosure about the problems discussed, but only under moderating conditions related to sex and relationship length. The authors conclude that the nature of the relationship between the perceiver and the target occupies a pivotal role in determining mind-reading accuracy.

  17. 78 FR 46905 - Tobacco Transition Program; Final Assessment Procedures

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-02

    ...; ] DEPARTMENT OF AGRICULTURE Commodity Credit Corporation Tobacco Transition Program; Final Assessment... information about the final quarterly assessments for the Tobacco Transition Program (TTP). Through the Tobacco Transition Payment Program (TTPP), which is part of the TTP, eligible former tobacco quota...

  18. Assessing the Accuracy of Sentinel-3 SLSTR Sea-Surface Temperature Retrievals Using High Accuracy Infrared Radiometers on Ships of Opportunity

    NASA Astrophysics Data System (ADS)

    Minnett, P. J.; Izaguirre, M. A.; Szcszodrak, M.; Williams, E.; Reynolds, R. M.

    2015-12-01

    The assessment of errors and uncertainties in satellite-derived SSTs can be achieved by comparisons with independent measurements of skin SST of high accuracy. Such validation measurements are provided by well-calibrated infrared radiometers mounted on ships. The second generation of Marine-Atmospheric Emitted Radiance Interferometers (M-AERIs) have recently been developed and two are now deployed on cruise ships of Royal Caribbean Cruise Lines that operate in the Caribbean Sea, North Atlantic and Mediterranean Sea. In addition, two Infrared SST Autonomous Radiometers (ISARs) are mounted alternately on a vehicle transporter of NYK Lines that crosses the Pacific Ocean between Japan and the USA. Both M-AERIs and ISARs are self-calibrating radiometers having two internal blackbody cavities to provide at-sea calibration of the measured radiances, and the accuracy of the internal calibration is periodically determined by measurements of a NIST-traceable blackbody cavity in the laboratory. This provides SI-traceability for the at-sea measurements. It is anticipated that these sensors will be deployed during the next several years and will be available for the validation of the SLSTRs on Sentinel-3a and -3b.

  19. Formative Assessment in HL Teaching: Purposes, Procedures, and Practices

    ERIC Educational Resources Information Center

    Carreira, Maria M.

    2012-01-01

    Discussions surrounding assessment in the foreign languages generally focus on the two ends of the teaching/learning process: diagnostic assessment, typically used for placement purposes and administered prior to the start of instruction, and summative assessment, which evaluates learning after instruction for purposes of assigning a grade or…

  20. Procedures for Needs-Assessment Evaluation: A Symposium.

    ERIC Educational Resources Information Center

    Klein, Stephen P.; And Others

    Symposium topics and speakers include "Choosing Needs for Needs Assessment" (Stephen P. Klein); "Selecting Tests to Assess the Needs" (Ralph Hoepfner); "Making Better Decisions on Assessed Needs: Differentiated School Norms" (Paul A. Bradley and Dale Woolley); and "Allocating Resources by Subject Area"…

  1. Assessing the accuracy of an inter-institutional automated patient-specific health problem list

    PubMed Central

    2010-01-01

    Background Health problem lists are a key component of electronic health records and are instrumental in the development of decision-support systems that encourage best practices and optimal patient safety. Most health problem lists require initial clinical information to be entered manually and few integrate information across care providers and institutions. This study assesses the accuracy of a novel approach to create an inter-institutional automated health problem list in a computerized medical record (MOXXI) that integrates three sources of information for an individual patient: diagnostic codes from medical services claims from all treating physicians, therapeutic indications from electronic prescriptions, and single-indication drugs. Methods Data for this study were obtained from 121 general practitioners and all medical services provided for 22,248 of their patients. At the opening of a patient's file, all health problems detected through medical service utilization or single-indication drug use were flagged to the physician in the MOXXI system. Each newly arising health problem was presented as 'potential', and physicians were prompted to specify whether the health problem was valid (Y) or not (N), or whether they preferred to reassess its validity at a later time. Results A total of 263,527 health problems, representing 891 unique problems, were identified for the group of 22,248 patients. Medical services claims contributed the majority of problems identified (77%), followed by therapeutic indications from electronic prescriptions (14%), and single-indication drugs (9%). Physicians actively chose to assess 41.7% (n = 106,950) of health problems. Overall, 73% of the problems assessed were considered valid; 42% originated from medical service diagnostic codes, 11% from single indication drugs, and 47% from prescription indications. Twelve percent of problems identified through other treating physicians were considered valid compared to 28% identified through study

  2. Using Generalizability Theory to Examine the Accuracy and Validity of Large-Scale ESL Writing Assessment

    ERIC Educational Resources Information Center

    Huang, Jinyan

    2012-01-01

    Using generalizability (G-) theory, this study examined the accuracy and validity of the writing scores assigned to secondary school ESL students in the provincial English examinations in Canada. The major research question that guided this study was: Are there any differences between the accuracy and construct validity of the analytic scores…

  3. 30 CFR 724.17 - Procedure for assessment of individual civil penalty.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... assessed an individual civil penalty, by certified mail, or by any alternative means consistent with the... 30 Mineral Resources 3 2012-07-01 2012-07-01 false Procedure for assessment of individual civil..., DEPARTMENT OF THE INTERIOR INITIAL PROGRAM REGULATIONS INDIVIDUAL CIVIL PENALTIES § 724.17 Procedure...

  4. 30 CFR 723.17 - Procedures for assessment of civil penalties.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 3 2011-07-01 2011-07-01 false Procedures for assessment of civil penalties..., DEPARTMENT OF THE INTERIOR INITIAL PROGRAM REGULATIONS CIVIL PENALTIES § 723.17 Procedures for assessment of civil penalties. (a) Within 15 days of service of a notice or order, the person to whom it was...

  5. 30 CFR 724.17 - Procedure for assessment of individual civil penalty.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... assessed an individual civil penalty, by certified mail, or by any alternative means consistent with the... 30 Mineral Resources 3 2011-07-01 2011-07-01 false Procedure for assessment of individual civil..., DEPARTMENT OF THE INTERIOR INITIAL PROGRAM REGULATIONS INDIVIDUAL CIVIL PENALTIES § 724.17 Procedure...

  6. 30 CFR 723.17 - Procedures for assessment of civil penalties.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 3 2012-07-01 2012-07-01 false Procedures for assessment of civil penalties..., DEPARTMENT OF THE INTERIOR INITIAL PROGRAM REGULATIONS CIVIL PENALTIES § 723.17 Procedures for assessment of civil penalties. (a) Within 15 days of service of a notice or order, the person to whom it was...

  7. Critical Emergency Medicine Procedural Skills: A Comparative Study of Methods for Teaching and Assessment.

    ERIC Educational Resources Information Center

    Chapman, Dane M.; And Others

    Three critical procedural skills in emergency medicine were evaluated using three assessment modalities--written, computer, and animal model. The effects of computer practice and previous procedure experience on skill competence were also examined in an experimental sequential assessment design. Subjects were six medical students, six residents,…

  8. 45 CFR 5.44 - Procedures for assessing and collecting fees.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Procedures for assessing and collecting fees. 5.44 Section 5.44 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION FREEDOM OF INFORMATION REGULATIONS Fees § 5.44 Procedures for assessing and collecting fees. (a) Agreement to pay. We generally assume that when you...

  9. 30 CFR 723.17 - Procedures for assessment of civil penalties.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 3 2010-07-01 2010-07-01 false Procedures for assessment of civil penalties. 723.17 Section 723.17 Mineral Resources OFFICE OF SURFACE MINING RECLAMATION AND ENFORCEMENT, DEPARTMENT OF THE INTERIOR INITIAL PROGRAM REGULATIONS CIVIL PENALTIES § 723.17 Procedures for assessment of civil penalties. (a) Within 15 days of...

  10. Accuracy Assessment of Mobile Mapping Point Clouds Using the Existing Environment as Terrestrial Reference

    NASA Astrophysics Data System (ADS)

    Hofmann, S.; Brenner, C.

    2016-06-01

    Mobile mapping data is widely used in various applications, which makes it especially important for data users to get a statistically verified quality statement on the geometric accuracy of the acquired point clouds or their processed products. The accuracy of point clouds can be divided into an absolute and a relative quality, where the absolute quality describes the position of the point cloud in a world coordinate system such as WGS84 or UTM, whereas the relative accuracy describes the accuracy within the point cloud itself. Furthermore, the quality of processed products such as segmented features depends on the global accuracy of the point cloud but mainly on the quality of the processing steps. Several data sources with different characteristics and quality can be thought of as potential reference data, such as cadastral maps, orthophotos, artificial control objects or terrestrial surveys using a total station. In this work a test field in a selected residential area was acquired as reference data in a terrestrial survey using a total station. In order to reach high accuracy, the stationing of the total station was based on a newly established geodetic network with a local accuracy of less than 3 mm. The global position of the network was determined using a long-term GNSS survey, reaching an accuracy of 8 mm. Based on this geodetic network, a 3D test field with facades and street profiles was measured with a total station, each point with a two-dimensional position and altitude. In addition, the surfaces of poles of street lights, traffic signs and trees were acquired using the scanning mode of the total station. By comparing this reference data to the mobile mapping point clouds acquired in several measurement campaigns, a detailed quality statement on the accuracy of the point cloud data is made. Additionally, the advantages and disadvantages of the described reference data source concerning availability, cost, accuracy and applicability are discussed.
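
    As a rough sketch of how such a comparison can be summarized numerically, the following computes nearest-neighbour distances between terrestrial reference points and a mobile mapping cloud; the coordinates are hypothetical and the paper's processing is more detailed.

        # Sketch: accuracy figures from nearest-neighbour distances between a
        # mobile mapping point cloud and terrestrial reference points.
        import numpy as np
        from scipy.spatial import cKDTree

        def cloud_vs_reference(cloud_xyz, reference_xyz):
            tree = cKDTree(cloud_xyz)
            dist, _ = tree.query(reference_xyz)    # distance from each reference point to its nearest cloud point
            return {"mean": dist.mean(),
                    "rmse": np.sqrt(np.mean(dist ** 2)),
                    "p95": np.percentile(dist, 95)}

        rng = np.random.default_rng(1)
        reference = rng.uniform(0, 50, (100, 3))               # hypothetical reference survey (m)
        cloud = reference + rng.normal(0, 0.02, (100, 3))      # simulated 2 cm point noise
        print(cloud_vs_reference(cloud, reference))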

  11. A procedure for merging land cover/use data from Landsat, aerial photography, and map sources - Compatibility, accuracy and cost

    NASA Technical Reports Server (NTRS)

    Enslin, W. R.; Tilmann, S. E.; Hill-Rowley, R.; Rogers, R. H.

    1977-01-01

    A method is developed to merge land cover/use data from Landsat, aerial photography and map sources into a grid-based geographic information system. The method basically involves computer-assisted categorization of Landsat data to provide certain user-specified land cover categories; manual interpretation of aerial photography to identify other selected land cover/use categories that cannot be obtained from Landsat data; identification of special features from aerial photography or map sources; merging of the interpreted data from all the sources into a computer compatible file under a standardized coding structure; and the production of land cover/use maps, thematic maps, and tabular data. The specific tasks accomplished in producing the merged land cover/use data file and subsequent output products are identified and discussed. It is shown that effective implementation of the merging method is critically dependent on selecting the 'best' data source for each user-specified category in terms of accuracy and time/cost tradeoffs.

  12. Assessing Children's Implicit Attitudes Using the Affect Misattribution Procedure

    ERIC Educational Resources Information Center

    Williams, Amanda; Steele, Jennifer R.; Lipman, Corey

    2016-01-01

    In the current research, we examined whether the Affect Misattribution Procedure (AMP) could be successfully adapted as an implicit measure of children's attitudes. We tested this possibility in 3 studies with 5- to 10-year-old children. In Study 1, we found evidence that children misattribute affect elicited by attitudinally positive (e.g., cute…

  13. Cognitive Styles in Admission Procedures for Assessing Candidates of Architecture

    ERIC Educational Resources Information Center

    Casakin, Hernan; Gigi, Ariela

    2016-01-01

    Cognitive style has a strong predictive power in academic and professional success. This study investigated the cognitive profile of candidates studying architecture. Specifically, it explored the relation between visual and verbal cognitive styles, and the performance of candidates in admission procedures. The cognitive styles of candidates who…

  14. First Language of Test Takers and Fairness Assessment Procedures

    ERIC Educational Resources Information Center

    Sinharay, Sandip; Dorans, Neil J.; Liang, Longjuan

    2011-01-01

    Over the past few decades, those who take tests in the United States have exhibited increasing diversity with respect to native language. Standard psychometric procedures for ensuring item and test fairness that have existed for some time were developed when test-taking groups were predominantly native English speakers. A better understanding of…

  15. Designing a Multi-Objective Multi-Support Accuracy Assessment of the 2001 National Land Cover Data (NLCD 2001) of the Conterminous United States

    EPA Science Inventory

    The database design and diverse application of NLCD 2001 pose significant challenges for accuracy assessment because numerous objectives are of interest, including accuracy of land cover, percent urban imperviousness, percent tree canopy, land-cover composition, and net change. ...

  16. On the use of polymer gels for assessing the total geometrical accuracy in clinical Gamma Knife radiosurgery applications

    NASA Astrophysics Data System (ADS)

    Moutsatsos, A.; Karaiskos, P.; Petrokokkinos, L.; Zourari, K.; Pantelis, E.; Sakelliou, L.; Seimenis, I.; Constantinou, C.; Peraticou, A.; Georgiou, E.

    2010-11-01

    The nearly tissue equivalent MRI properties and the unique ability of registering 3D dose distributions of polymer gels were exploited to assess the total geometrical accuracy in clinical Gamma Knife applications, taking into account the combined effect of the unit's mechanical accuracy, dose delivery precision and the geometrical distortions inherent in MR images used for irradiation planning. Comparison between planned and experimental data suggests that the MR-related distortions due to susceptibility effects dominate the total clinical geometrical accuracy which was found within 1 mm. The dosimetric effect of the observed sub-millimetre uncertainties on single shot GK irradiation plans was assessed using the target percentage coverage criterion, and a considerable target dose underestimation was found.

  17. Assessing the accuracy of image tracking algorithms on visible and thermal imagery using a deep restricted Boltzmann machine

    NASA Astrophysics Data System (ADS)

    Won, Stephen; Young, S. Susan

    2012-06-01

    Image tracking algorithms are critical to many applications including image super-resolution and surveillance. However, there exists no method to independently verify the accuracy of the tracking algorithm without a supplied control or visual inspection. This paper proposes an image tracking framework that uses deep restricted Boltzmann machines trained without external databases to quantify the accuracy of image tracking algorithms without the use of ground truths. In this paper, the tracking algorithm is comprised of the combination of flux tensor segmentation with four image registration methods, including correlation, Horn-Schunck optical flow, Lucas-Kanade optical flow, and feature correspondence methods. The robustness of the deep restricted Boltzmann machine is assessed by comparing between results from training with trusted and not-trusted data. Evaluations show that the deep restricted Boltzmann machine is a valid mechanism to assess the accuracy of a tracking algorithm without the use of ground truths.

  18. Accuracy assessment on the crop area estimating method based on RS sampling at national scale: a case study of China's rice area estimation assessment

    NASA Astrophysics Data System (ADS)

    Qian, Yonglan; Yang, Bangjie; Jiao, Xianfeng; Pei, Zhiyuan; Li, Xuan

    2008-08-01

    Remote Sensing technology has been used in agricultural statistics since the early 1970s in developed countries and since the late 1970s in China. It has greatly improved the efficiency of such statistics with its accurate, timely and credible information. However, agricultural monitoring using remote sensing has not yet been assessed with credible data in China, and its accuracy does not appear consistent and reliable to many users. The paper reviews different methods and the corresponding assessments of agricultural monitoring using remote sensing in developed countries and China, and then assesses the crop area estimation method using Landsat TM remotely sensed data as sampling areas in Northeast China. The ground truth is gathered with a global positioning system, and 40 sampling areas are used to assess the classification accuracy. The error matrix is constructed, from which the accuracy is calculated. The producer accuracy, the user accuracy and total accuracy are 89.53%, 95.37% and 87.02% respectively, and the correlation coefficient between the ground truth and classification results is 0.96. A new error index δ is introduced; the average δ of the rice area estimate relative to the truth data is 0.084. δ measures how far, positively or negatively, the RS classification result deviates from the truth data.
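
    A minimal sketch of the error matrix summary described above; the counts are hypothetical, and δ is assumed here to be the relative deviation of the classified area from the reference area.

        # Sketch: producer, user and overall accuracy from a binary error
        # matrix (rice / non-rice), plus a relative area-deviation index in
        # the spirit of the paper's delta.
        import numpy as np

        def error_matrix_summary(matrix):
            """matrix[i, j] = samples of reference class i mapped to class j."""
            m = np.asarray(matrix, dtype=float)
            overall = np.trace(m) / m.sum()
            producer = np.diag(m) / m.sum(axis=1)  # omission perspective
            user = np.diag(m) / m.sum(axis=0)      # commission perspective
            return overall, producer, user

        def area_deviation(classified_area, reference_area):
            return (classified_area - reference_area) / reference_area

        matrix = [[154, 18],   # reference rice: 154 mapped as rice, 18 as other (hypothetical)
                  [8, 220]]    # reference non-rice
        overall, producer, user = error_matrix_summary(matrix)
        print(f"overall {overall:.2%}, producer(rice) {producer[0]:.2%}, user(rice) {user[0]:.2%}")
        print(f"delta = {area_deviation(classified_area=1.05e6, reference_area=1.00e6):+.3f}")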

  19. Accuracy Assessment of Crown Delineation Methods for the Individual Trees Using LIDAR Data

    NASA Astrophysics Data System (ADS)

    Chang, K. T.; Lin, C.; Lin, Y. C.; Liu, J. K.

    2016-06-01

    Forest canopy density and height are used as variables in a number of environmental applications, including the estimation of biomass, forest extent and condition, and biodiversity. Airborne Light Detection and Ranging (LiDAR) is very useful for estimating forest canopy parameters from generated canopy height models (CHMs). The purpose of this work is to introduce an algorithm to delineate crown parameters, e.g. tree height and crown radii, based on the generated rasterized CHMs. Accuracy assessment for the extraction of volumetric parameters of a single tree is also performed via manual measurement using corresponding aerial photo pairs. A LiDAR dataset of a golf course acquired by Leica ALS70-HP is used in this study. Two algorithms, i.e. a traditional one subtracting a digital elevation model (DEM) from a digital surface model (DSM), and a pit-free approach, are first used to generate the CHMs. Then two algorithms, a multilevel morphological active-contour (MMAC) and a variable window filter (VWF), are implemented and used in this study for individual tree delineation. Finally, the experimental results of the two automatic estimation methods for individual trees are evaluated against manually measured stand-level parameters, i.e. tree height and crown diameter. The CHM generated by simple subtraction is full of empty pixels (called "pits") that strongly affect the subsequent analysis for individual tree delineation. The experimental results indicated that more individual trees can be extracted and tree crown shapes become more complete in the CHM data after the pit-free process.
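
    A simplified sketch of the CHM generation step (DSM minus DEM) with a naive pit treatment via a local median; the published pit-free algorithm is considerably more elaborate, and the rasters below are hypothetical.

        # Sketch: canopy height model (CHM) as DSM - DEM, followed by a very
        # simplified "pit" treatment (replacing isolated low pixels with a
        # local median).
        import numpy as np
        from scipy.ndimage import median_filter

        def simple_chm(dsm, dem, pit_threshold=2.0):
            chm = np.asarray(dsm, dtype=float) - np.asarray(dem, dtype=float)
            chm[chm < 0] = 0.0
            smoothed = median_filter(chm, size=3)
            pits = (smoothed - chm) > pit_threshold   # pixel far below its neighbourhood
            chm[pits] = smoothed[pits]
            return chm

        dsm = np.full((5, 5), 115.0); dsm[2, 2] = 103.0   # one "pit" inside a 15 m canopy
        dem = np.full((5, 5), 100.0)
        print(simple_chm(dsm, dem))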

  20. Accuracy of Panoramic Radiograph in Assessment of the Relationship Between Mandibular Canal and Impacted Third Molars

    PubMed Central

    Tantanapornkul, Weeraya; Mavin, Darika; Prapaiphittayakun, Jaruthai; Phipatboonyarat, Natnicha; Julphantong, Wanchanok

    2016-01-01

    Background: The relationship between impacted mandibular third molar and mandibular canal is important for removal of this tooth. Panoramic radiography is one of the commonly used diagnostic tools for evaluating the relationship of these two structures. Objectives: To evaluate the accuracy of panoramic radiographic findings in predicting direct contact between mandibular canal and impacted third molars on 3D digital images, and to define panoramic criterion in predicting direct contact between the two structures. Methods: Two observers examined panoramic radiographs of 178 patients (256 impacted mandibular third molars). Panoramic findings of interruption of mandibular canal wall, isolated or with darkening of third molar root, diversion of mandibular canal and narrowing of third molar root were evaluated for 3D digital radiography. Direct contact between mandibular canal and impacted third molars on 3D digital images was then correlated with panoramic findings. Panoramic criterion was also defined in predicting direct contact between the two structures. Results: Panoramic findings of interruption of mandibular canal wall, isolated or with darkening of third molar root were statistically significantly correlated with direct contact between mandibular canal and impacted third molars on 3D digital images (p < 0.005), and were defined as panoramic criteria in predicting direct contact between the two structures. Conclusion: Interruption of mandibular canal wall, isolated or with darkening of third molar root observed on panoramic radiographs were effective in predicting direct contact between mandibular canal and impacted third molars on 3D digital images. Panoramic radiography is one of the efficient diagnostic tools for pre-operative assessment of impacted mandibular third molars. PMID:27398105

  1. Assessing the dosimetric accuracy of MR-generated synthetic CT images for focal brain VMAT radiotherapy

    PubMed Central

    Paradis, Eric; Cao, Yue; Lawrence, Theodore S.; Tsien, Christina; Vineberg, Karen; Balter, James M.

    2015-01-01

    Purpose To assess the dosimetric accuracy of synthetic CT volumes generated from MRI data for focal brain radiotherapy. Methods A study was conducted on 12 patients with gliomas who underwent both MR and CT imaging as part of their simulation for external beam treatment planning. Synthetic CT (MRCT) volumes were generated from the MR images. The patients’ clinical treatment planning directives were used to create 12 individual Volumetric Modulated Arc Therapy (VMAT) plans, which were then optimized 10 times on each of their respective CT and MRCT-derived electron density maps. Dose metrics derived from optimization criteria, as well as monitor units and gamma analyses, were evaluated to quantify differences between the imaging modalities. Results Mean differences between Planning Target Volume (PTV) doses on MRCT and CT plans across all patients were 0.0% (range −0.1 to 0.2%) for D95%, 0.0% (−0.7 to 0.6%) for D5%, and −0.2% (−1.0 to 0.2%) for Dmax. MRCT plans showed no significant change in monitor units (−0.4%) compared to CT plans. Organs at risk (OARs) had an average Dmax difference of 0.0 Gy (−2.2 to 1.9 Gy) over 85 structures across all 12 patients, with no significant differences when calculated doses approached planning constraints. Conclusions Focal brain VMAT plans optimized on MRCT images show excellent dosimetric agreement with standard CT-optimized plans. PTVs show equivalent coverage, and OARs do not show any overdose. These results indicate that MRI-derived synthetic CT volumes can be used to support treatment planning of most patients treated for intracranial lesions. PMID:26581151
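
    A small sketch of how the reported PTV metrics can be read from voxel doses, assuming Dx% is the dose received by at least x% of the volume (i.e., the (100 - x)th percentile of the voxel doses); the dose arrays below are hypothetical.

        # Sketch: PTV dose metrics (D95%, D5%, Dmax) from a flat array of voxel
        # doses, and the percent difference between two plans.
        import numpy as np

        def dose_metrics(voxel_doses):
            d = np.asarray(voxel_doses, dtype=float)
            return {"D95%": np.percentile(d, 5),
                    "D5%": np.percentile(d, 95),
                    "Dmax": d.max()}

        def percent_difference(metric_a, metric_b):
            return 100.0 * (metric_a - metric_b) / metric_b

        rng = np.random.default_rng(2)
        ct_ptv = rng.normal(60.0, 0.8, 10000)              # Gy, hypothetical CT-optimised plan
        mrct_ptv = ct_ptv + rng.normal(0.0, 0.1, 10000)    # hypothetical MRCT-optimised plan
        for name in ("D95%", "D5%", "Dmax"):
            diff = percent_difference(dose_metrics(mrct_ptv)[name], dose_metrics(ct_ptv)[name])
            print(f"{name}: {diff:+.2f}% (MRCT vs CT)")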

  2. A TECHNIQUE FOR ASSESSING THE ACCURACY OF SUB-PIXEL IMPERVIOUS SURFACE ESTIMATES DERIVED FROM LANDSAT TM IMAGERY

    EPA Science Inventory

    We developed a technique for assessing the accuracy of sub-pixel derived estimates of impervious surface extracted from LANDSAT TM imagery. We utilized spatially coincident sub-pixel derived impervious surface estimates, high-resolution planimetric GIS data, vector-to-r...

  3. Classification Accuracy of Oral Reading Fluency and Maze in Predicting Performance on Large-Scale Reading Assessments

    ERIC Educational Resources Information Center

    Decker, Dawn M.; Hixson, Michael D.; Shaw, Amber; Johnson, Gloria

    2014-01-01

    The purpose of this study was to examine whether using a multiple-measure framework yielded better classification accuracy than oral reading fluency (ORF) or maze alone in predicting pass/fail rates for middle-school students on a large-scale reading assessment. Participants were 178 students in Grades 7 and 8 from a Midwestern school district.…

  4. How Nonrecidivism Affects Predictive Accuracy: Evidence from a Cross-Validation of the Ontario Domestic Assault Risk Assessment (ODARA)

    ERIC Educational Resources Information Center

    Hilton, N. Zoe; Harris, Grant T.

    2009-01-01

    Prediction effect sizes such as ROC area are important for demonstrating a risk assessment's generalizability and utility. How a study defines recidivism might affect predictive accuracy. Nonrecidivism is problematic when predicting specialized violence (e.g., domestic violence). The present study cross-validates the ability of the Ontario…

  5. Disease severity assessment in epidemiological studies: accuracy and reliability of visual estimates of Septoria leaf blotch (SLB) in winter wheat

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The accuracy and reliability of visual assessments of SLB severity by raters (i.e. one plant pathologist with extensive experience and three other raters trained prior to field observations using standard area diagrams and DISTRAIN) was determined by comparison with assumed actual values obtained by...

  6. Diagnostic Accuracy of Computer-Aided Assessment of Intranodal Vascularity in Distinguishing Different Causes of Cervical Lymphadenopathy.

    PubMed

    Ying, Michael; Cheng, Sammy C H; Ahuja, Anil T

    2016-08-01

    Ultrasound is useful in assessing cervical lymphadenopathy. Advancement of computer science technology allows accurate and reliable assessment of medical images. The aim of the study described here was to evaluate the diagnostic accuracy of computer-aided assessment of the intranodal vascularity index (VI) in differentiating the various common causes of cervical lymphadenopathy. Power Doppler sonograms of 347 patients (155 with metastasis, 23 with lymphoma, 44 with tuberculous lymphadenitis, 125 reactive) with palpable cervical lymph nodes were reviewed. Ultrasound images of cervical nodes were evaluated, and the intranodal VI was quantified using a customized computer program. The diagnostic accuracy of using the intranodal VI to distinguish different disease groups was evaluated and compared. Metastatic and lymphomatous lymph nodes tend to be more vascular than tuberculous and reactive lymph nodes. The intranodal VI had the highest diagnostic accuracy in distinguishing metastatic and tuberculous nodes with a sensitivity of 80%, specificity of 73%, positive predictive value of 91%, negative predictive value of 51% and overall accuracy of 68% when a cutoff VI of 22% was used. Computer-aided assessment provides an objective and quantitative way to evaluate intranodal vascularity. The intranodal VI is a useful parameter in distinguishing certain causes of cervical lymphadenopathy and is particularly useful in differentiating metastatic and tuberculous lymph nodes. However, it has limited value in distinguishing lymphomatous nodes from metastatic and reactive nodes.
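
    A minimal sketch of a vascularity index computed as the percentage of Doppler-positive pixels inside the node region of interest, compared against the 22% cut-off; the masks are hypothetical and the study's customized program may differ.

        # Sketch: intranodal vascularity index (VI) as the percentage of
        # Doppler-positive pixels within the node region of interest.
        import numpy as np

        def vascularity_index(doppler_positive_mask, node_mask):
            node_pixels = node_mask.sum()
            vascular_pixels = np.logical_and(doppler_positive_mask, node_mask).sum()
            return 100.0 * vascular_pixels / node_pixels

        rng = np.random.default_rng(3)
        node = np.ones((64, 64), dtype=bool)               # hypothetical node ROI
        doppler = rng.random((64, 64)) < 0.30              # ~30% vascular pixels
        vi = vascularity_index(doppler, node)
        print(f"VI = {vi:.1f}% ({'>=' if vi >= 22.0 else '<'} 22% cut-off)")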

  7. A novel technique to evaluate the geometrical accuracy of CT-MR image fusion in Gamma Knife radiosurgery procedures

    NASA Astrophysics Data System (ADS)

    Thomas, Sajeev; Sampath, S.; Indiradevi, B.; Bhanumathy, G.; Supe, Sanjay S.; Musthafa, M. M.

    2010-01-01

    The aim of this work was to optimize the accuracy of imaging in Gamma Knife radiosurgery using the image fusion options available in the Leksell GammaPlan. Phantom images from a 1.5 Tesla MRI scanner (Magnetom Vision, Siemens) and computed tomography images from a Philips Brilliance 16 CT scanner were used for image fusion in the GammaPlan treatment planning system. The images were fused using a co-registration technique with the multiview and imagemerge modules. Stereotactic coordinates were then calculated for known targets. Vector distances from the centre of the Leksell coordinate system to five known targets were measured in CT, MR and CT-MR fused images and compared with geometrical measurements. The mean values of the maximum absolute errors were 0.34 mm, 0.41 mm and 0.38 mm (along the x-axis), 0.43 mm, 1.53 mm and 0.62 mm (along the y-axis), and 0.75 mm, 2.02 mm and 0.93 mm (along the z-axis) for CT, MR and CT-MR fused image data, respectively. The mean errors in calculating the vector distances from the center of the Leksell coordinate system (100, 100, 100) to the known target volumes are 0.22 mm, 0.8 mm and 0.43 mm for CT, MR and CT-MR fused images, respectively. The image fusion functions available in GammaPlan are useful for combining the features of CT and MR imaging modalities. These methods are highly useful in clinical situations where the error associated with magnetic resonance imaging is beyond acceptable levels.

  8. Generalized Procedure for Improved Accuracy of Thermal Contact Resistance Measurements for Materials With Arbitrary Temperature-Dependent Thermal Conductivity

    SciTech Connect

    Sayer, Robert A.

    2014-06-26

    Thermal contact resistance (TCR) is most commonly measured using one-dimensional steady-state calorimetric techniques. In the experimental methods we utilized, a temperature gradient is applied across two contacting beams and the temperature drop at the interface is inferred from the temperature profiles of the rods that are measured at discrete points. During data analysis, thermal conductivity of the beams is typically taken to be an average value over the temperature range imposed during the experiment. Our generalized theory is presented and accounts for temperature-dependent changes in thermal conductivity. The procedure presented enables accurate measurement of TCR for contacting materials whose thermal conductivity is any arbitrary function of temperature. For example, it is shown that the standard technique yields TCR values that are about 15% below the actual value for two specific examples of copper and silicon contacts. Conversely, the generalized technique predicts TCR values that are within 1% of the actual value. The method is exact when thermal conductivity is known exactly and no other errors are introduced to the system.
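
    One way to account for temperature-dependent conductivity in a 1D steady-state measurement is a Kirchhoff-transform fit, sketched below with hypothetical k(T), geometry and probe readings; this illustrates the idea only and is not necessarily the paper's exact procedure.

        # Sketch: with q = -k(T) dT/dx, the Kirchhoff potential
        # theta(T) = integral of k dT is linear in x for 1D steady conduction,
        # so it can be fitted, extrapolated to the interface and inverted back
        # to temperature; TCR = (T_hot_if - T_cold_if) / q.
        import numpy as np

        def k(T):                                  # hypothetical conductivity, W/(m K)
            return 400.0 - 0.05 * (T - 300.0)

        T_grid = np.linspace(250.0, 450.0, 2001)
        theta_grid = np.concatenate(([0.0], np.cumsum(
            (k(T_grid[1:]) + k(T_grid[:-1])) / 2 * np.diff(T_grid))))
        to_theta = lambda T: np.interp(T, T_grid, theta_grid)
        to_T = lambda th: np.interp(th, theta_grid, T_grid)

        def interface_state(x_probes, T_probes, x_interface):
            """Fit theta(x) linearly; return extrapolated interface T and heat flux q."""
            slope, intercept = np.polyfit(x_probes, to_theta(T_probes), 1)
            return to_T(slope * x_interface + intercept), -slope   # q = -d(theta)/dx

        # Hypothetical probe positions (m) and readings (K); interface at x = 0.05 m.
        x_hot, T_hot = np.array([0.01, 0.02, 0.03, 0.04]), np.array([398.0, 396.1, 394.2, 392.3])
        x_cold, T_cold = np.array([0.06, 0.07, 0.08, 0.09]), np.array([385.9, 384.0, 382.1, 380.2])
        T_hot_if, q_hot = interface_state(x_hot, T_hot, 0.05)
        T_cold_if, q_cold = interface_state(x_cold, T_cold, 0.05)
        q = 0.5 * (q_hot + q_cold)
        print(f"TCR = {(T_hot_if - T_cold_if) / q:.3e} m^2 K / W")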

  9. Generalized Procedure for Improved Accuracy of Thermal Contact Resistance Measurements for Materials With Arbitrary Temperature-Dependent Thermal Conductivity

    DOE PAGES

    Sayer, Robert A.

    2014-06-26

    Thermal contact resistance (TCR) is most commonly measured using one-dimensional steady-state calorimetric techniques. In the experimental methods we utilized, a temperature gradient is applied across two contacting beams and the temperature drop at the interface is inferred from the temperature profiles of the rods that are measured at discrete points. During data analysis, thermal conductivity of the beams is typically taken to be an average value over the temperature range imposed during the experiment. Our generalized theory is presented and accounts for temperature-dependent changes in thermal conductivity. The procedure presented enables accurate measurement of TCR for contacting materials whose thermal conductivity is any arbitrary function of temperature. For example, it is shown that the standard technique yields TCR values that are about 15% below the actual value for two specific examples of copper and silicon contacts. Conversely, the generalized technique predicts TCR values that are within 1% of the actual value. The method is exact when thermal conductivity is known exactly and no other errors are introduced to the system.

  10. Ecological risk assessment and natural resource damage assessment: synthesis of assessment procedures.

    PubMed

    Gala, William; Lipton, Joshua; Cernera, Phil; Ginn, Thomas; Haddad, Robert; Henning, Miranda; Jahn, Kathryn; Landis, Wayne; Mancini, Eugene; Nicoll, James; Peters, Vicky; Peterson, Jennifer

    2009-10-01

    The Society of Environmental Toxicology and Chemistry (SETAC) convened an invited workshop (August 2008) to address coordination between ecological risk assessment (ERA) and natural resource damage assessment (NRDA). Although ERA and NRDA activities are performed under a number of statutory and regulatory authorities, the primary focus of the workshop was on ERA and NRDA as currently practiced in the United States under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA). This paper presents the findings and conclusions of the Synthesis Work Group, 1 of 3 work groups convened at the workshop. The Synthesis Work Group concluded that the different programmatic objectives and legal requirements of the 2 processes preclude development of a single, integrated ERA/NRDA process. However, although institutional and programmatic impediments exist to integration of the 2 processes, parties are capitalizing on opportunities to coordinate technical and scientific elements of the assessments at a number of locations. Although it is important to recognize and preserve the distinctions between ERA and NRDA, opportunities for data sharing exist, particularly for the characterization of environmental exposures and derivation of ecotoxicological information. Thus, effective coordination is not precluded by the underlying science. Rather, willing participants, accommodating schedules, and recognition of potential efficiencies associated with shared data collection can lead to enhanced coordination and consistency between ERA and NRDA.

  11. Dynamic Accuracy of GPS Receivers for Use in Health Research: A Novel Method to Assess GPS Accuracy in Real-World Settings

    PubMed Central

    Schipperijn, Jasper; Kerr, Jacqueline; Duncan, Scott; Madsen, Thomas; Klinker, Charlotte Demant; Troelsen, Jens

    2014-01-01

    The emergence of portable global positioning system (GPS) receivers over the last 10 years has provided researchers with a means to objectively assess spatial position in free-living conditions. However, the use of GPS in free-living conditions is not without challenges and the aim of this study was to test the dynamic accuracy of a portable GPS device under real-world environmental conditions, for four modes of transport, and using three data collection intervals. We selected four routes on different bearings, passing through a variation of environmental conditions in the City of Copenhagen, Denmark, to test the dynamic accuracy of the Qstarz BT-Q1000XT GPS device. Each route consisted of a walk, bicycle, and vehicle lane in each direction. The actual width of each walking, cycling, and vehicle lane was digitized as accurately as possible using ultra-high-resolution aerial photographs as background. For each trip, we calculated the percentage that actually fell within the lane polygon, and within the 2.5, 5, and 10 m buffers respectively, as well as the mean and median error in meters. Our results showed that 49.6% of all ≈68,000 GPS points fell within 2.5 m of the expected location, 78.7% fell within 10 m and the median error was 2.9 m. The median error during walking trips was 3.9, 2.0 m for bicycle trips, 1.5 m for bus, and 0.5 m for car. The different area types showed considerable variation in the median error: 0.7 m in open areas, 2.6 m in half-open areas, and 5.2 m in urban canyons. The dynamic spatial accuracy of the tested device is not perfect, but we feel that it is within acceptable limits for larger population studies. Longer recording periods, for a larger population are likely to reduce the potentially negative effects of measurement inaccuracy. Furthermore, special care should be taken when the environment in which the study takes place could compromise the GPS signal. PMID:24653984
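
    A rough sketch of the lane/buffer accuracy computation, using shapely with a hypothetical lane polygon and simulated fixes, and assuming coordinates are already projected to metres.

        # Sketch: share of GPS fixes falling inside a digitised lane polygon
        # and inside 2.5/5/10 m buffers, plus the median error (distance to
        # the lane, 0 m if inside).
        import numpy as np
        from shapely.geometry import Point, Polygon

        lane = Polygon([(0, 0), (100, 0), (100, 3), (0, 3)])   # hypothetical 3 m wide walking lane
        rng = np.random.default_rng(4)
        fixes = [Point(x, y) for x, y in zip(rng.uniform(0, 100, 500),
                                             rng.normal(1.5, 2.5, 500))]

        errors = np.array([0.0 if lane.contains(p) else p.distance(lane) for p in fixes])
        print(f"inside lane: {np.mean(errors == 0):.1%}")
        for radius in (2.5, 5.0, 10.0):
            buffered = lane.buffer(radius)
            print(f"within {radius:>4} m: {np.mean([buffered.contains(p) for p in fixes]):.1%}")
        print(f"median error: {np.median(errors):.2f} m")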

  12. Dynamic Accuracy of GPS Receivers for Use in Health Research: A Novel Method to Assess GPS Accuracy in Real-World Settings.

    PubMed

    Schipperijn, Jasper; Kerr, Jacqueline; Duncan, Scott; Madsen, Thomas; Klinker, Charlotte Demant; Troelsen, Jens

    2014-01-01

    The emergence of portable global positioning system (GPS) receivers over the last 10 years has provided researchers with a means to objectively assess spatial position in free-living conditions. However, the use of GPS in free-living conditions is not without challenges and the aim of this study was to test the dynamic accuracy of a portable GPS device under real-world environmental conditions, for four modes of transport, and using three data collection intervals. We selected four routes on different bearings, passing through a variation of environmental conditions in the City of Copenhagen, Denmark, to test the dynamic accuracy of the Qstarz BT-Q1000XT GPS device. Each route consisted of a walk, bicycle, and vehicle lane in each direction. The actual width of each walking, cycling, and vehicle lane was digitized as accurately as possible using ultra-high-resolution aerial photographs as background. For each trip, we calculated the percentage that actually fell within the lane polygon, and within the 2.5, 5, and 10 m buffers respectively, as well as the mean and median error in meters. Our results showed that 49.6% of all ≈68,000 GPS points fell within 2.5 m of the expected location, 78.7% fell within 10 m and the median error was 2.9 m. The median error during walking trips was 3.9, 2.0 m for bicycle trips, 1.5 m for bus, and 0.5 m for car. The different area types showed considerable variation in the median error: 0.7 m in open areas, 2.6 m in half-open areas, and 5.2 m in urban canyons. The dynamic spatial accuracy of the tested device is not perfect, but we feel that it is within acceptable limits for larger population studies. Longer recording periods, for a larger population are likely to reduce the potentially negative effects of measurement inaccuracy. Furthermore, special care should be taken when the environment in which the study takes place could compromise the GPS signal.

  13. Accuracy assessment of single and double difference models for the single epoch GPS compass

    NASA Astrophysics Data System (ADS)

    Chen, Wantong; Qin, Honglei; Zhang, Yanzhong; Jin, Tian

    2012-02-01

    The single epoch GPS compass is an important field of study, since it is a valuable technique for the orientation estimation of vehicles and it can guarantee a total independence from carrier phase slips in practical applications. To achieve highly accurate angular estimates, the unknown integer ambiguities of the carrier phase observables need to be resolved. Past research has focused on ambiguity resolution for a single epoch; however, accuracy is another significant problem for many challenging applications. In this contribution, the accuracy is evaluated for the non-common clock scheme of the receivers and the common clock scheme of the receivers, respectively. We focus on three scenarios for either scheme: single difference model vs. double difference model, single frequency model vs. multiple frequency model and optimal linear combinations vs. traditional triple-frequency least squares. We deduce the short baseline precision for a number of different available models and analyze the difference in accuracy for those models. Compared with the single or double difference model of the non-common clock scheme, the single difference model of the common clock scheme can greatly reduce the vertical component error of the baseline vector, which results in higher elevation accuracy. The least squares estimator can also reduce the error of the fixed baseline vector with the aid of multi-frequency observations, thereby improving the attitude accuracy. In essence, the "accuracy improvement" is attributed to the difference in accuracy for different models, not a real improvement for any specific model. If all noise levels of the GPS triple-frequency carrier phases are assumed to be the same in units of cycles, it can be proved that the optimal linear combination approach is equivalent to the traditional triple-frequency least squares, no matter which scheme is utilized. Both simulations and actual experiments have been performed to verify the correctness of the theoretical analysis.

  14. A priori evaluation of two-stage cluster sampling for accuracy assessment of large-area land-cover maps

    USGS Publications Warehouse

    Wickham, J.D.; Stehman, S.V.; Smith, J.H.; Wade, T.G.; Yang, L.

    2004-01-01

    Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, within-cluster correlation may reduce the precision of the accuracy estimates. The detailed population information to quantify a priori the effect of within-cluster correlation on precision is typically unavailable. Consequently, a convenient, practical approach to evaluate the likely performance of a two-stage cluster sample is needed. We describe such an a priori evaluation protocol focusing on the spatial distribution of the sample by land-cover class across different cluster sizes and on the costs of different sampling options, including options not imposing clustering. This protocol also assesses the two-stage design's adequacy for estimating the precision of accuracy estimates for rare land-cover classes. We illustrate the approach using two large-area, regional accuracy assessments from the National Land-Cover Data (NLCD), and describe how the a priori evaluation was used as a decision-making tool when implementing the NLCD design.
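
    A generic sketch of the kind of a priori precision calculation involved, using the standard design effect for cluster sampling; the paper's protocol is richer than this, and the numbers below are hypothetical.

        # Sketch: anticipated standard error of an overall accuracy estimate
        # under two-stage cluster sampling, with design effect
        # deff = 1 + (m - 1) * rho, where m is the number of sample pixels per
        # cluster and rho the within-cluster correlation of classification error.
        import math

        def cluster_sample_se(p_accuracy, n_clusters, per_cluster, rho):
            n = n_clusters * per_cluster
            deff = 1.0 + (per_cluster - 1) * rho
            return math.sqrt(p_accuracy * (1 - p_accuracy) / n * deff)

        for per_cluster in (1, 10, 25, 50):        # 1 ~ simple random sampling of pixels
            se = cluster_sample_se(p_accuracy=0.80, n_clusters=40, per_cluster=per_cluster, rho=0.10)
            print(f"{per_cluster:>3} pixels/cluster: SE of overall accuracy = {se:.4f}")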

  15. Multinational assessment of accuracy of equations for predicting risk of kidney failure: a meta-analysis

    PubMed Central

    Tangri, Navdeep; Grams, Morgan E.; Levey, Andrew S.; Coresh, Josef; Appel, Lawrence; Astor, Brad C.; Chodick, Gabriel; Collins, Allan J.; Djurdjev, Ognjenka; Elley, C. Raina; Evans, Marie; Garg, Amit X.; Hallan, Stein I.; Inker, Lesley; Ito, Sadayoshi; Jee, Sun Ha; Kovesdy, Csaba P.; Kronenberg, Florian; Lambers Heerspink, Hiddo J.; Marks, Angharad; Nadkarni, Girish N.; Navaneethan, Sankar D.; Nelson, Robert G.; Titze, Stephanie; Sarnak, Mark J.; Stengel, Benedicte; Woodward, Mark; Iseki, Kunitoshi

    2016-01-01

    Importance Identifying patients at risk of chronic kidney disease (CKD) progression may facilitate more optimal nephrology care. Kidney failure risk equations (KFREs) were previously developed and validated in two Canadian cohorts. Validation in other regions and in CKD populations not under the care of a nephrologist is needed. Objective To evaluate the accuracy of the KFREs across different geographic regions and patient populations through individual-participant data meta-analysis. Data Sources Thirty-one cohorts, including 721,357 participants with CKD Stages 3–5 in over 30 countries spanning 4 continents, were studied. These cohorts collected data from 1982 through 2014. Study Selection Cohorts participating in the CKD Prognosis Consortium with data on end-stage renal disease. Data Extraction and Synthesis Data were obtained and statistical analyses were performed between July 2012 and June 2015. Using the risk factors from the original KFREs, cohort-specific hazard ratios were estimated, and combined in meta-analysis to form new “pooled” KFREs. Original and pooled equation performance was compared, and the need for regional calibration factors was assessed. Main Outcome and Measure Kidney failure (treatment by dialysis or kidney transplantation). Results During a median follow-up of 4 years, 23,829 cases of kidney failure were observed. The original KFREs achieved excellent discrimination (ability to differentiate those who developed kidney failure from those who did not) across all cohorts (overall C statistic, 0.90 (95% CI 0.89–0.92) at 2 years and 0.88 (95% CI 0.86–0.90) at 5 years); discrimination in subgroups by age, race, and diabetes status was similar. There was no improvement with the pooled equations. Calibration (the difference between observed and predicted risk) was adequate in North American cohorts, but the original KFREs overestimated risk in some non-North American cohorts. Addition of a calibration factor that lowered the baseline

  16. Assessment of theoretical procedures for calculating barrier heights for a diverse set of water-catalyzed proton-transfer reactions.

    PubMed

    Karton, Amir; O'Reilly, Robert J; Radom, Leo

    2012-04-26

    Accurate electronic barrier heights are obtained for a set of nine proton-transfer tautomerization reactions, which are either (i) uncatalyzed, (ii) catalyzed by one water molecule, or (iii) catalyzed by two water molecules. The barrier heights for reactions (i) and (ii) are obtained by means of the high-level ab initio W2.2 thermochemical protocol, while those for reaction (iii) are obtained using the W1 protocol. These three sets of benchmark barrier heights allow an assessment of the performance of more approximate theoretical procedures for the calculation of barrier heights of uncatalyzed and water-catalyzed reactions. We evaluate initially the performance of the composite G4 procedure and variants thereof (e.g., G4(MP2) and G4(MP2)-6X), as well as that of standard ab initio procedures (e.g., MP2, SCS-MP2, and MP4). We find that the performance of the G4(MP2)-type thermochemical procedures deteriorates with the number of water molecules involved in the catalysis. This behavior is linked to deficiencies in the MP2-based basis-set-correction term in the G4(MP2)-type procedures. This is remedied in the MP4-based G4 procedure, which shows good performance for both the uncatalyzed and the water-catalyzed reactions, with mean absolute deviations (MADs) from the benchmark values lying below the threshold of "chemical accuracy" (arbitrarily defined as 1 kcal mol(-1) ≈ 4.2 kJ mol(-1)). We also examine the performance of a large number of density functional theory (DFT) and double-hybrid DFT (DHDFT) procedures. We find that, with few exceptions (most notably PW6-B95 and B97-2), the performance of the DFT procedures that give good results for the uncatalyzed reactions deteriorates with the number of water molecules involved in the catalysis. The DHDFT procedures, on the other hand, show excellent performance for both the uncatalyzed and catalyzed reactions. Specifically, almost all of them afford MADs below the "chemical accuracy" threshold, with ROB2-PLYP and B2K
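
    A short sketch of the MAD-versus-benchmark comparison underlying such assessments; the barrier heights below are hypothetical, not values from the study.

        # Sketch: mean absolute deviation (MAD) of a method's barrier heights
        # from benchmark values, checked against the ~1 kcal/mol
        # "chemical accuracy" threshold.
        import numpy as np

        CHEMICAL_ACCURACY = 4.184                  # kJ/mol (~1 kcal/mol)

        benchmark = np.array([120.5, 98.3, 75.1, 140.2, 110.8])   # kJ/mol, hypothetical
        method = np.array([121.9, 97.0, 76.5, 142.8, 109.1])
        mad = np.mean(np.abs(method - benchmark))
        print(f"MAD = {mad:.2f} kJ/mol -> "
              f"{'within' if mad <= CHEMICAL_ACCURACY else 'outside'} chemical accuracy")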

  17. A Method for Assessing the Accuracy of a Photogrammetry System for Precision Deployable Structures

    NASA Technical Reports Server (NTRS)

    Moore, Ashley

    2005-01-01

    The measurement techniques used to validate analytical models of large deployable structures are an integral part of the technology development process and must be precise and accurate. Photogrammetry and videogrammetry are viable, accurate, and unobtrusive methods for measuring such large structures. Photogrammetry uses software to determine the three-dimensional position of a target using camera images. Videogrammetry is based on the same principle, except a series of timed images is analyzed. This work addresses the accuracy of a digital photogrammetry system used for measurement of large, deployable space structures at JPL. First, photogrammetry tests are performed on a precision space truss test article, and the images are processed using Photomodeler software. The accuracy of the Photomodeler results is determined through comparison with measurements of the test article taken by an external testing group using the VSTARS photogrammetry system. These two measurements are then compared with Australis photogrammetry software that simulates a measurement test to predict its accuracy. The software is then used to study how particular factors, such as camera resolution and placement, affect the system accuracy to help design the setup for the videogrammetry system that will offer the highest level of accuracy for measurement of deploying structures.

  18. Accuracy Assessment of Using Rapid Prototyping Drill Templates for Atlantoaxial Screw Placement: A Cadaver Study

    PubMed Central

    Guo, Shuai; Lu, Teng; Hu, Qiaolong; Yang, Baohui; He, Xijing

    2016-01-01

    Purpose. To preliminarily evaluate the feasibility and accuracy of using rapid prototyping drill templates (RPDTs) for C1 lateral mass screw (C1-LMS) and C2 pedicle screw (C2-PS) placement. Methods. 23 formalin-fixed craniocervical cadaver specimens were randomly divided into two groups. In the conventional method group, intraoperative fluoroscopy was used to assist the screw placement. In the RPDT navigation group, specific RPDTs were constructed for each specimen and were used intraoperatively for screw placement navigation. The screw position, the operating time, and the fluoroscopy time for each screw placement were compared between the 2 groups. Results. Compared with the conventional method, the RPDT technique significantly increased the placement accuracy of the C2-PS (p < 0.05). In the axial plane, using RPDTs also significantly increased C1-LMS placement accuracy (p < 0.05). In the sagittal plane, although using RPDTs had a very high accuracy rate (100%) in C1-LMS placement, it was not statistically significant compared with the conventional method (p > 0.05). Moreover, the RPDT technique significantly decreased the operating and fluoroscopy times. Conclusion. Using RPDTs significantly increases the accuracy of C1-LMS and C2-PS placement while decreasing the screw placement time and the radiation exposure. Due to these advantages, this approach is worth promoting for use in the Harms technique. PMID:28004004

  19. Procedure for assessing visual quality for landscape planning and management

    NASA Astrophysics Data System (ADS)

    Gimblett, H. Randal; Fitzgibbon, John E.; Bechard, Kevin P.; Wightman, J. A.; Itami, Robert M.

    1987-07-01

    Incorporation of aesthetic considerations in the process of landscape planning and development has frequently met with poor results due to a lack of theoretical basis, a lack of public involvement, and failure to deal with spatial implications. This problem has been especially evident when dealing with large areas, for example, the Adirondacks, Scenic Highways, and National Forests and Parks. This study made use of public participation to evaluate scenic quality in a portion of the Niagara Escarpment in Southern Ontario, Canada. The results of this study were analyzed using the visual management model proposed by Brown and Itami (1982) as a means of assessing and evaluating scenic quality. The map analysis package formulated by Tomlin (1980) was then applied to this assessment for the purpose of spatial mapping of visual impact. The results of this study illustrate that it is possible to assess visual quality for landscape planning/management, preservation, and protection using a theoretical basis, public participation, and a systematic spatial mapping process.

  20. An overview of psychosocial assessment procedures in reconstructive hand transplantation.

    PubMed

    Kumnig, Martin; Jowsey, Sheila G; Moreno, Elisa; Brandacher, Gerald; Azari, Kodi; Rumpold, Gerhard

    2014-05-01

    There have been more than 90 hand and upper extremity transplants performed worldwide. Functional and sensory outcomes have been reported in several studies, but little is known about the psychosocial outcomes. A comprehensive systematic literature review was performed, addressing the psychosocial impact of reconstructive hand transplantation. This review provides an overview of psychosocial evaluation protocols and identifies standards in this novel and exciting field. Essentials of the psychosocial assessment are discussed, and a new protocol, the 'Chauvet Protocol', a standardized assessment protocol for future multicenter psychosocial trials, is introduced.

  1. Cross-Cultural Psychological Assessment: Issues and Procedures for the Psychological Appraisal of Refugee Patients.

    ERIC Educational Resources Information Center

    Butcher, James N.

    This report addresses some of the problems and issues involved in psychological assessment of refugee clients in mental health programs and surveys the assessment procedures in current use. Part I discusses the problems and issues involved in the psychological assessment of ethnic minority and refugee clients, summarizes some of the background…

  2. Quality Issues in Judging Portfolios: Implications for Organizing Teaching Portfolio Assessment Procedures

    ERIC Educational Resources Information Center

    Tigelaar, Dineke E. H.; Dolmans, Diana H. J. M.; Wolfhagen, Ineke H. A. P.; van der Vleuten, Cees P. M.

    2005-01-01

    This article addresses the choice of the most appropriate procedure for the assessment of portfolios used in teacher and lecturer assessment. A characteristic of modern assessment modes, including portfolios, is that the information they provide is often qualitative and derived from different contexts. Unambiguous, objective rating of portfolios…

  3. Do Intervention-Embedded Assessment Procedures Successfully Measure Student Growth in Reading?

    ERIC Educational Resources Information Center

    Begeny, John C.; Whitehouse, Mary H.; Methe, Scott A.; Codding, Robin S.; Stage, Scott A.; Nuepert, Shevaun

    2015-01-01

    Effective intervention delivery requires ongoing assessment to determine whether students are learning at the desired rate. Intervention programs with embedded assessment procedures (i.e., assessment that occurs naturally "during" the process of delivering intervention) can potentially enhance instructional decisions. However, there is…

  4. Acceptability of Functional Behavioral Assessment Procedures to Special Educators and School Psychologists

    ERIC Educational Resources Information Center

    O'Neill, Robert E.; Bundock, Kaitlin; Kladis, Kristin; Hawken, Leanne S.

    2015-01-01

    This survey study assessed the acceptability of a variety of functional behavioral assessment (FBA) procedures (i.e., functional assessment interviews, rating scales/questionnaires, systematic direct observations, functional analysis manipulations) to a national sample of 123 special educators and a state sample of 140 school psychologists.…

  5. Recent Developments in Assessment and Examination Procedures in France.

    ERIC Educational Resources Information Center

    Broadfoot, Patricia

    Recent changes in educational assessment in France reflect pressures to modernize the French educational system to align it with prevailing democratic and egalitarian values and to respond to the economy's vocational training needs. After providing background on the French educational system, this paper discusses two areas of secondary school…

  6. Behavioral Assessment Instruments, Techniques, and Procedures: Summary and Annotated Bibliography.

    ERIC Educational Resources Information Center

    Shorkey, Clayton T.; Williams, Harry

    This annotated bibliography cites 223 articles related to behavioral assessment reported in 18 professional journals between January 1960 and Spring 1976. A summary and a reference grouping of the articles are included to allow for identification of articles related to (1) electromechanical devices used in identification, measurement, and storage…

  7. Accuracy of Assessment of Eligibility for Early Medical Abortion by Community Health Workers in Ethiopia, India and South Africa

    PubMed Central

    Nguyen, My Huong; Habib, Ndema; Afework, Mesganaw Fantahun; Harries, Jane; Iyengar, Kirti; Moodley, Jennifer; Constant, Deborah; Sen, Swapnaleen

    2016-01-01

    Objective To assess the accuracy of assessment of eligibility for early medical abortion by community health workers using a simple checklist toolkit. Design Diagnostic accuracy study. Setting Ethiopia, India and South Africa. Methods Two hundred seventeen women in Ethiopia, 258 in India and 236 in South Africa were enrolled in the study. A checklist toolkit to determine eligibility for early medical abortion was validated by comparing results of clinician and community health worker assessment of eligibility using the checklist toolkit with the reference standard exam. Results Accuracy was over 90% and the negative likelihood ratio <0.1 at all three sites when used by clinician assessors. Positive likelihood ratios were 4.3 in Ethiopia, 5.8 in India and 6.3 in South Africa. When used by community health workers, the overall accuracy of the toolkit was 92% in Ethiopia, 80% in India and 77% in South Africa; negative likelihood ratios were 0.08 in Ethiopia, 0.25 in India and 0.22 in South Africa; and positive likelihood ratios were 5.9 in Ethiopia and 2.0 in India and South Africa. Conclusion The checklist toolkit, as used by clinicians, was excellent at ruling out participants who were not eligible, and moderately effective at ruling in participants who were eligible for medical abortion. Results were promising when used by community health workers, particularly in Ethiopia where they had more prior experience with use of diagnostic aids and longer professional training. The checklist toolkit assessments resulted in some participants being wrongly assessed as eligible for medical abortion, which is an area of concern. Further research is needed to streamline the components of the tool, explore optimal duration and content of training for community health workers, and test feasibility and acceptability. PMID:26731176
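
    For readers unfamiliar with the reported quantities, the sketch below shows how accuracy, sensitivity, specificity, and positive/negative likelihood ratios follow from a 2×2 table of assessments against the reference-standard exam; the counts are invented for illustration and are not the study data.

```python
# Diagnostic accuracy metrics from a 2x2 table (illustrative counts only).
# "Positive" = assessed as eligible; reference standard = clinician exam.
tp, fp, fn, tn = 150, 20, 10, 70   # hypothetical counts

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + fp + fn + tn)
lr_positive = sensitivity / (1 - specificity)
lr_negative = (1 - sensitivity) / specificity

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"accuracy={accuracy:.2f} LR+={lr_positive:.1f} LR-={lr_negative:.2f}")
```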

  8. Applying Signal-Detection Theory to the Study of Observer Accuracy and Bias in Behavioral Assessment

    ERIC Educational Resources Information Center

    Lerman, Dorothea C.; Tetreault, Allison; Hovanetz, Alyson; Bellaci, Emily; Miller, Jonathan; Karp, Hilary; Mahmood, Angela; Strobel, Maggie; Mullen, Shelley; Keyl, Alice; Toupard, Alexis

    2010-01-01

    We evaluated the feasibility and utility of a laboratory model for examining observer accuracy within the framework of signal-detection theory (SDT). Sixty-one individuals collected data on aggression while viewing videotaped segments of simulated teacher-child interactions. The purpose of Experiment 1 was to determine if brief feedback and…

  9. Interrater Reliability Estimators Commonly Used in Scoring Language Assessments: A Monte Carlo Investigation of Estimator Accuracy

    ERIC Educational Resources Information Center

    Morgan, Grant B.; Zhu, Min; Johnson, Robert L.; Hodge, Kari J.

    2014-01-01

    Common estimators of interrater reliability include Pearson product-moment correlation coefficients, Spearman rank-order correlations, and the generalizability coefficient. The purpose of this study was to examine the accuracy of estimators of interrater reliability when varying the true reliability, number of scale categories, and number of…

  10. An Accuracy--Response Time Capacity Assessment Function that Measures Performance against Standard Parallel Predictions

    ERIC Educational Resources Information Center

    Townsend, James T.; Altieri, Nicholas

    2012-01-01

    Measures of human efficiency under increases in mental workload or attentional limitations are vital in studying human perception, cognition, and action. Assays of efficiency as workload changes have typically been confined to either reaction times (RTs) or accuracy alone. Within the realm of RTs, a nonparametric measure called the "workload…

  11. [Assessment of overall spatial accuracy in image guided stereotactic body radiotherapy using a spine registration method].

    PubMed

    Nakazawa, Hisato; Uchiyama, Yukio; Komori, Masataka; Hayashi, Naoki

    2014-06-01

    Stereotactic body radiotherapy (SBRT) for lung and liver tumors is always performed under image guidance, a technique used to confirm the accuracy of setup positioning by fusing planning digitally reconstructed radiographs with X-ray, fluoroscopic, or computed tomography (CT) images, using bony structures, tumor shadows, or metallic markers as landmarks. The Japanese SBRT guidelines state that bony spinal structures should be used as the main landmarks for patient setup. In this study, we used the Novalis system as a linear accelerator for SBRT of lung and liver tumors. The current study compared the differences between spine registration and target registration and calculated total spatial accuracy including setup uncertainty derived from our image registration results and the geometric uncertainty of the Novalis system. We were able to evaluate clearly whether overall spatial accuracy is achieved within a setup margin (SM) for planning target volume (PTV) in treatment planning. After being granted approval by the Hospital and University Ethics Committee, we retrospectively analyzed eleven patients with lung tumor and seven patients with liver tumor. The results showed the total spatial accuracy to be within a tolerable range for SM of treatment planning. We therefore regard our method to be suitable for image fusion involving 2-dimensional X-ray images during the treatment planning stage of SBRT for lung and liver tumors.

  12. Development of a Mathematical Model to Assess the Accuracy of Difference between Geodetic Heights

    ERIC Educational Resources Information Center

    Gairabekov, Ibragim; Kliushin, Evgenii; Gayrabekov, Magomed-Bashir; Ibragimova, Elina; Gayrabekova, Amina

    2016-01-01

    The article includes the results of theoretical studies of the accuracy of geodetic height survey and marks points on the Earth's surface using satellite technology. The dependence of the average square error of geodetic heights difference survey from the distance to the base point was detected. It is being proved that by using satellite…

  13. ESA ExoMars: Pre-launch PanCam Geometric Modeling and Accuracy Assessment

    NASA Astrophysics Data System (ADS)

    Li, D.; Li, R.; Yilmaz, A.

    2014-08-01

    ExoMars is the flagship mission of the European Space Agency (ESA) Aurora Programme. The mobile scientific platform, or rover, will carry a drill and a suite of instruments dedicated to exobiology and geochemistry research. As the ExoMars rover is designed to travel kilometres over the Martian surface, high-precision rover localization and topographic mapping will be critical for traverse path planning and safe planetary surface operations. For such purposes, the ExoMars rover Panoramic Camera system (PanCam) will acquire images that are processed into an imagery network providing vision information for photogrammetric algorithms to localize the rover and generate 3-D mapping products. Since the design of the ExoMars PanCam will influence localization and mapping accuracy, quantitative error analysis of the PanCam design will improve scientists' awareness of the achievable level of accuracy, and enable the PanCam design team to optimize its design to achieve the highest possible level of localization and mapping accuracy. Based on photogrammetric principles and uncertainty propagation theory, we have developed a method to theoretically analyze how mapping and localization accuracy would be affected by various factors, such as length of stereo hard-baseline, focal length, and pixel size, etc.
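
    The kind of error propagation described above can be illustrated with the standard stereo ranging relation, in which depth uncertainty grows with the square of range and shrinks with baseline and focal length. The sketch below is a simplified sketch of that relation only; the baseline, lens, and pixel values are hypothetical and are not the actual PanCam design figures.

```python
# Minimal sketch of stereo range-error propagation: sigma_Z ~ Z^2 * sigma_d / (B * f),
# where B is the stereo baseline (m), f the focal length in pixels, and sigma_d the
# disparity (image-matching) uncertainty in pixels. All values below are hypothetical.

def depth_uncertainty(range_m: float, baseline_m: float,
                      focal_px: float, disparity_sigma_px: float) -> float:
    return (range_m ** 2) * disparity_sigma_px / (baseline_m * focal_px)

focal_px = 0.012 / 6.5e-6        # hypothetical: 12 mm lens, 6.5 micron pixels
for z in (5.0, 20.0, 50.0):      # ranges in metres
    sigma = depth_uncertainty(z, baseline_m=0.5, focal_px=focal_px,
                              disparity_sigma_px=0.3)
    print(f"range {z:5.1f} m -> depth uncertainty {sigma:.3f} m")
```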

  14. Portable device to assess dynamic accuracy of global positioning systems (GPS) receivers used in agricultural aircraft

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A device was designed to test the dynamic accuracy of Global Positioning System (GPS) receivers used in aerial vehicles. The system works by directing a sun-reflected light beam from the ground to the aircraft using mirrors. A photodetector is placed pointing downward from the aircraft and circuitry...

  15. Comparative analysis of Worldview-2 and Landsat 8 for coastal saltmarsh mapping accuracy assessment

    NASA Astrophysics Data System (ADS)

    Rasel, Sikdar M. M.; Chang, Hsing-Chung; Diti, Israt Jahan; Ralph, Tim; Saintilan, Neil

    2016-05-01

    Coastal saltmarshes and their constituent components and processes are of scientific interest due to their ecological function and services. However, the heterogeneity and seasonal dynamics of the coastal wetland system make it challenging to map saltmarshes with remotely sensed data. This study selected four important saltmarsh species, Phragmites australis, Sporobolus virginicus, Ficinia nodosa and Schoenoplectus sp., as well as a mangrove and a pine tree species, Avicennia and Casuarina sp. respectively. High spatial resolution Worldview-2 data and coarse spatial resolution Landsat 8 imagery were selected for this study. Among the selected vegetation types, some patches were fragmented and close to the spatial resolution of the Worldview-2 data, while some patches were larger than the 30 m resolution of the Landsat 8 data. This study aims to test the effectiveness of different classifiers for imagery with various spatial and spectral resolutions. Three classification algorithms, Maximum Likelihood Classifier (MLC), Support Vector Machine (SVM) and Artificial Neural Network (ANN), were tested and compared in terms of the mapping accuracy of the results derived from both satellite images. For the Worldview-2 data, SVM gave the highest overall accuracy (92.12%, kappa = 0.90), followed by ANN (90.82%, kappa = 0.89) and MLC (90.55%, kappa = 0.88). For the Landsat 8 data, MLC (82.04%) showed the highest classification accuracy compared with SVM (77.31%) and ANN (75.23%). The producer's accuracy of the classification results is also presented in the paper.
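
    To make the reported figures concrete, the sketch below computes overall accuracy, Cohen's kappa, and producer's accuracy from a classification confusion matrix; the matrix is invented for illustration and is not derived from the Worldview-2 or Landsat 8 results.

```python
# Overall accuracy, Cohen's kappa, and producer's accuracy from a confusion
# matrix (rows = reference classes, columns = classified). Counts are
# illustrative only.
confusion = [
    [50,  3,  2],
    [ 4, 45,  1],
    [ 2,  2, 41],
]

k = len(confusion)
total = sum(sum(row) for row in confusion)
row_sums = [sum(row) for row in confusion]
col_sums = [sum(confusion[i][j] for i in range(k)) for j in range(k)]

p_o = sum(confusion[i][i] for i in range(k)) / total           # overall accuracy
p_e = sum(r * c for r, c in zip(row_sums, col_sums)) / total**2
kappa = (p_o - p_e) / (1 - p_e)
producers = [confusion[i][i] / row_sums[i] for i in range(k)]  # producer's accuracy

print(f"overall accuracy = {p_o:.3f}, kappa = {kappa:.3f}")
print("producer's accuracy per class:", [round(p, 3) for p in producers])
```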

  16. Spill Assessment Model (SAM) Procedure for Manual Field Calculations.

    DTIC Science & Technology

    1980-04-01

    Specifically, the part of SAM utilized as the basis for the field calculations addresses only instantaneous point source discharges into a flowing river. For field use, the primary requirement is to assess the maximum concentrations which may result... different classes of chemicals, reference sources such as the Chemical Hazard Response Information System (CHRIS) of the U.S. Coast Guard should be

  17. Assessing the Item Response Theory with Covariate (IRT-C) Procedure for Ascertaining Differential Item Functioning

    ERIC Educational Resources Information Center

    Tay, Louis; Vermunt, Jeroen K.; Wang, Chun

    2013-01-01

    We evaluate the item response theory with covariates (IRT-C) procedure for assessing differential item functioning (DIF) without preknowledge of anchor items (Tay, Newman, & Vermunt, 2011). This procedure begins with a fully constrained baseline model, and candidate items are tested for uniform and/or nonuniform DIF using the Wald statistic.…

  18. A Procedural Skills OSCE: Assessing Technical and Non-Technical Skills of Internal Medicine Residents

    ERIC Educational Resources Information Center

    Pugh, Debra; Hamstra, Stanley J.; Wood, Timothy J.; Humphrey-Murto, Susan; Touchie, Claire; Yudkowsky, Rachel; Bordage, Georges

    2015-01-01

    Internists are required to perform a number of procedures that require mastery of technical and non-technical skills; however, formal assessment of these skills is often lacking. The purpose of this study was to develop, implement, and gather validity evidence for a procedural skills objective structured clinical examination (PS-OSCE) for internal…

  19. 43 CFR 11.33 - What types of assessment procedures are available?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... procedures: a procedure for coastal or marine environments, which incorporates the Natural Resource Damage... Lakes environments, which incorporates the Natural Resource Damage Assessment Model for Great Lakes... available? 11.33 Section 11.33 Public Lands: Interior Office of the Secretary of the Interior...

  20. Chemical Risk Assessment: Selected Federal Agencies’ Procedures, Assumptions, and Policies

    DTIC Science & Technology

    2001-08-01

    attention to neurotoxicity (because many pesticides work through this mechanism) and, more recently, to endocrine disrupting effects (those affecting... EPA developed its Endocrine Disruptor Screening Program, which focuses on providing methods and procedures to detect and characterize endocrine... Risk assessment procedures are covered for the Office of Pesticide Programs, the Office of Pollution Prevention and Toxics, and the Office of Emergency and Remedial Response.

  1. Unrestricted Factor Analytic Procedures for Assessing Acquiescent Responding in Balanced, Theoretically Unidimensional Personality Scales

    ERIC Educational Resources Information Center

    Ferrando, Pere J.; Lorenzo-Seva, Urbano; Chico, Eliseo

    2003-01-01

    This article describes and proposes an unrestricted factor analytic procedure to: (a) assess the dimensionality and structure of a balanced personality scale taking into account the potential effects of acquiescent responding, and (b) correct the individual trait estimates for acquiescence. The procedure can be considered as an extension of ten…

  2. Retrieval of Urban Boundary Layer Structures from Doppler Lidar Data. Part I: Accuracy Assessment

    SciTech Connect

    Xia, Quanxin; Lin, Ching Long; Calhoun, Ron; Newsom, Rob K.

    2008-01-01

    Two coherent Doppler lidars from the US Army Research Laboratory (ARL) and Arizona State University (ASU) were deployed in the Joint Urban 2003 atmospheric dispersion field experiment (JU2003) held in Oklahoma City. The dual lidar data are used to evaluate the accuracy of the four-dimensional variational data assimilation (4DVAR) method and identify the coherent flow structures in the urban boundary layer. The objectives of the study are three-fold. The first objective is to examine the effect of eddy viscosity models on the quality of retrieved velocity data. The second objective is to determine the fidelity of single-lidar 4DVAR and evaluate the difference between single- and dual-lidar retrievals. The third objective is to correlate the retrieved flow structures with the ground building data. It is found that the approach of treating eddy viscosity as part of control variables yields better results than the approach of prescribing viscosity. The ARL single-lidar 4DVAR is able to retrieve radial velocity fields with an accuracy of 98% in the along-beam direction and 80-90% in the cross-beam direction. For the dual-lidar 4DVAR, the accuracy of retrieved radial velocity in the ARL cross-beam direction improves to 90-94%. By using the dual-lidar retrieved data as a reference, the single-lidar 4DVAR is able to recover fluctuating velocity fields with 70-80% accuracy in the along-beam direction and 60-70% accuracy in the cross-beam direction. Large-scale convective roll structures are found in the vicinity of downtown airpark and parks. Vortical structures are identified near the business district. Strong updrafts and downdrafts are also found above a cluster of restaurants.

  3. The Bivariate Plotting Procedure for Hearing Assessment of Adults Who Are Severely to Profoundly Mentally Retarded.

    ERIC Educational Resources Information Center

    Cattey, Tommy J.

    1985-01-01

    Puretone auditory assessment of 21 adults with severe to profound mental retardation indicated that a bivariate plotting procedure of predicting hearing sensitivity from the acoustic reflexes should be included in an audiological test battery for this population. (CL)

  4. Accuracy Assessment of Direct Georeferencing for Photogrammetric Applications on Small Unmanned Aerial Platforms

    NASA Astrophysics Data System (ADS)

    Mian, O.; Lutes, J.; Lipa, G.; Hutton, J. J.; Gavelle, E.; Borghini, S.

    2016-03-01

    Microdrones md4-1000 quad-rotor VTOL UAV. The Sony A7R and each lens combination were focused and calibrated terrestrially using the Applanix camera calibration facility, and then integrated with the APX-15 GNSS-Inertial system using a custom mount specifically designed for UAV applications. The mount is constructed in such a way as to maintain the stability of both the interior orientation and IMU boresight calibration over shock and vibration, thus turning the Sony A7R into a metric imaging solution. In July and August 2015, Applanix and Avyon carried out a series of test flights of this system. The goal of these test flights was to assess the performance of DMS APX-15 direct georeferencing system under various scenarios. Furthermore, an examination of how DMS APX-15 can be used to produce accurate map products without the use of ground control points and with reduced sidelap was also carried out. Reducing the side lap for survey missions performed by small UAVs can significantly increase the mapping productivity of these platforms. The area mapped during the first flight campaign was a 250m x 300m block and a 775m long railway corridor in a rural setting in Ontario, Canada. The second area mapped was a 450m long corridor over a dam known as Fryer Dam (over Richelieu River in Quebec, Canada). Several ground control points were distributed within both test areas. The flight over the block area included 8 North-South lines and 1 cross strip flown at 80m AGL, resulting in a ~1cm GSD. The flight over the railway corridor included 2 North-South lines also flown at 80m AGL. Similarly, the flight over the dam corridor included 2 North-South lines flown at 50m AGL. The focus of this paper was to analyse the results obtained from the two corridors. Test results from both areas were processed using Direct Georeferencing techniques, and then compared for accuracy against the known positions of ground control points in each test area. The GNSS-Inertial data collected by the APX-15 was

  5. Assessing the accuracy of the Second Military Survey for the Doren Landslide (Vorarlberg, Austria)

    NASA Astrophysics Data System (ADS)

    Zámolyi, András.; Székely, Balázs; Biszak, Sándor

    2010-05-01

    Reconstruction of the early and long-term evolution of landslide areas is especially important for determining the proportion of anthropogenic influence on the evolution of the region affected by mass movements. The recent geologic and geomorphological setting of the prominent Doren landslide in Vorarlberg (Western Austria) has been studied extensively by various research groups and civil engineering companies. Civil aerial imaging of the area dates back to the 1950s. Modern monitoring techniques include aerial imaging as well as airborne and terrestrial laser scanning (LiDAR), providing an almost yearly assessment of the changing geomorphology of the area. However, initiation of the landslide occurred most probably earlier than the application of these methods, since there is evidence that the landslide was already active in the 1930s. For studying the initial phase of landslide formation, one possibility is to draw on information recorded in historic photographs or historic maps. In this case study we integrated topographic information from the map sheets of the Second Military Survey of the Habsburg Empire, conducted in Vorarlberg during the years 1816-1821 (Kretschmer et al., 2004), into a comprehensive GIS. The region of interest around the Doren landslide was georeferenced using the method of Timár et al. (2006), refined by Molnár (2009), thus providing geodetically correct positioning and the possibility of matching the topographic features from the historic map with features recognized in the LiDAR DTM. The landslide of Doren is clearly visible in the historic map. Additionally, prominent geomorphologic features such as morphological scarps, rills and gullies, mass movement lobes and the course of the Weißach rivulet can be matched. Not only can the shape and character of these elements be recognized and matched, but the positional accuracy is also adequate for geomorphological studies. Since the settlement structure is very stable in the

  6. Accuracy assessment of airborne photogrammetrically derived high-resolution digital elevation models in a high mountain environment

    NASA Astrophysics Data System (ADS)

    Müller, Johann; Gärtner-Roer, Isabelle; Thee, Patrick; Ginzler, Christian

    2014-12-01

    High-resolution digital elevation models (DEMs) generated by airborne remote sensing are frequently used to analyze landform structures (monotemporal) and geomorphological processes (multitemporal) in remote areas or areas of extreme terrain. In order to assess and quantify such structures and processes it is necessary to know the absolute accuracy of the available DEMs. This study assesses the absolute vertical accuracy of DEMs generated by the High Resolution Stereo Camera-Airborne (HRSC-A), the Leica Airborne Digital Sensors 40/80 (ADS40 and ADS80) and the analogue camera system RC30. The study area is located in the Turtmann valley, Valais, Switzerland, a glacially and periglacially formed hanging valley stretching from 2400 m to 3300 m a.s.l. The photogrammetrically derived DEMs are evaluated against geodetic field measurements and an airborne laser scan (ALS). Traditional and robust global and local accuracy measurements are used to describe the vertical quality of the DEMs, which show a non-Gaussian distribution of errors. The results show that all four sensor systems produce DEMs with similar accuracy despite their different setups and generations. The ADS40 and ADS80 (both with a ground sampling distance of 0.50 m) generate the most accurate DEMs in complex high mountain areas, with a RMSE of 0.8 m and a NMAD of 0.6 m. They also show the highest accuracy relative to flying height (0.14‰). The pushbroom scanning system HRSC-A produces a RMSE of 1.03 m and a NMAD of 0.83 m (0.21‰ accuracy of the flying height and 10 times the ground sampling distance). The analogue camera system RC30 produces DEMs with a vertical accuracy of 1.30 m RMSE and 0.83 m NMAD (0.17‰ accuracy of the flying height and two times the ground sampling distance). It is also shown that the performance of the DEMs strongly depends on the inclination of the terrain. The RMSE of areas up to an inclination <40° is better than 1 m. In more inclined areas the error and outlier occurrence
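
    The traditional and robust accuracy measures mentioned above can be computed as in the sketch below, where RMSE is the conventional measure and NMAD (1.4826 times the median absolute deviation from the median) is the robust counterpart suited to non-Gaussian error distributions; the error values are placeholders, not the study data.

```python
import statistics

# DEM vertical errors (DEM minus reference), in metres; placeholder values only.
errors = [0.4, -0.7, 1.2, -0.3, 0.9, -1.5, 0.2, 0.6, -0.4, 2.8]

rmse = (sum(e * e for e in errors) / len(errors)) ** 0.5
med = statistics.median(errors)
nmad = 1.4826 * statistics.median(abs(e - med) for e in errors)
print(f"RMSE = {rmse:.2f} m, NMAD = {nmad:.2f} m")
```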

  7. Assessing the accuracy of software predictions of mammalian and microbial metabolites

    EPA Science Inventory

    New chemical development and hazard assessments benefit from accurate predictions of mammalian and microbial metabolites. Fourteen biotransformation libraries encoded in eight software packages that predict metabolite structures were assessed for their sensitivity (proportion of ...

  8. Accuracy in Student Self-Assessment: Directions and Cautions for Research

    ERIC Educational Resources Information Center

    Brown, Gavin T. L.; Andrade, Heidi L.; Chen, Fei

    2015-01-01

    Student self-assessment is a central component of current conceptions of formative and classroom assessment. The research on self-assessment has focused on its efficacy in promoting both academic achievement and self-regulated learning, with little concern for issues of validity. Because reliability of testing is considered a sine qua non for the…

  9. Image-based in vivo assessment of targeting accuracy of stereotactic brain surgery in experimental rodent models

    NASA Astrophysics Data System (ADS)

    Rangarajan, Janaki Raman; Vande Velde, Greetje; van Gent, Friso; de Vloo, Philippe; Dresselaers, Tom; Depypere, Maarten; van Kuyck, Kris; Nuttin, Bart; Himmelreich, Uwe; Maes, Frederik

    2016-11-01

    Stereotactic neurosurgery is used in pre-clinical research of neurological and psychiatric disorders in experimental rat and mouse models to engraft a needle or electrode at a pre-defined location in the brain. However, inaccurate targeting may confound the results of such experiments. In contrast to the clinical practice, inaccurate targeting in rodents remains usually unnoticed until assessed by ex vivo end-point histology. We here propose a workflow for in vivo assessment of stereotactic targeting accuracy in small animal studies based on multi-modal post-operative imaging. The surgical trajectory in each individual animal is reconstructed in 3D from the physical implant imaged in post-operative CT and/or its trace as visible in post-operative MRI. By co-registering post-operative images of individual animals to a common stereotaxic template, targeting accuracy is quantified. Two commonly used neuromodulation regions were used as targets. Target localization errors showed not only variability, but also inaccuracy in targeting. Only about 30% of electrodes were within the subnucleus structure that was targeted and a-specific adverse effects were also noted. Shifting from invasive/subjective 2D histology towards objective in vivo 3D imaging-based assessment of targeting accuracy may benefit a more effective use of the experimental data by excluding off-target cases early in the study.
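
    Once the implant trajectory has been co-registered to the common stereotaxic template, the target localization error is essentially the 3D distance between the planned target and the reconstructed implant tip. The sketch below illustrates that calculation only; the coordinates are hypothetical and are not taken from the study.

```python
import math

# Hypothetical stereotaxic coordinates in template space (mm).
planned_target = (1.2, -3.6, 5.4)     # intended target from the atlas/template
implant_tip = (1.9, -4.1, 6.2)        # electrode tip reconstructed from post-op CT/MRI

target_localization_error = math.dist(planned_target, implant_tip)
print(f"target localization error = {target_localization_error:.2f} mm")
```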

  10. Image-based in vivo assessment of targeting accuracy of stereotactic brain surgery in experimental rodent models

    PubMed Central

    Rangarajan, Janaki Raman; Vande Velde, Greetje; van Gent, Friso; De Vloo, Philippe; Dresselaers, Tom; Depypere, Maarten; van Kuyck, Kris; Nuttin, Bart; Himmelreich, Uwe; Maes, Frederik

    2016-01-01

    Stereotactic neurosurgery is used in pre-clinical research of neurological and psychiatric disorders in experimental rat and mouse models to engraft a needle or electrode at a pre-defined location in the brain. However, inaccurate targeting may confound the results of such experiments. In contrast to the clinical practice, inaccurate targeting in rodents remains usually unnoticed until assessed by ex vivo end-point histology. We here propose a workflow for in vivo assessment of stereotactic targeting accuracy in small animal studies based on multi-modal post-operative imaging. The surgical trajectory in each individual animal is reconstructed in 3D from the physical implant imaged in post-operative CT and/or its trace as visible in post-operative MRI. By co-registering post-operative images of individual animals to a common stereotaxic template, targeting accuracy is quantified. Two commonly used neuromodulation regions were used as targets. Target localization errors showed not only variability, but also inaccuracy in targeting. Only about 30% of electrodes were within the subnucleus structure that was targeted and a-specific adverse effects were also noted. Shifting from invasive/subjective 2D histology towards objective in vivo 3D imaging-based assessment of targeting accuracy may benefit a more effective use of the experimental data by excluding off-target cases early in the study. PMID:27901096

  11. Accuracy assessment of the global ionospheric model over the Southern Ocean based on dynamic observation

    NASA Astrophysics Data System (ADS)

    Luo, Xiaowen; Xu, Huajun; Li, Zishen; Zhang, Tao; Gao, Jinyao; Shen, Zhongyan; Yang, Chunguo; Wu, Ziyin

    2017-02-01

    The global ionospheric model based on the reference stations of the Global Navigation Satellite System (GNSS) of the International GNSS Service is presently the most commonly used product of the global ionosphere. It is very important to comprehensively analyze and evaluate the accuracy and reliability of the model for the reasonable use of this kind of ionospheric product. In terms of receiver station deployment, this work differs from the traditional performance evaluation of the global ionospheric model based on observation data of ground-based static reference stations. The preliminary evaluation and analysis of the global ionospheric model was conducted with dynamic observation data across different latitudes over the southern oceans. The validation results showed that the accuracy of the global ionospheric model over the southern oceans is about 5 TECu, and that it deviates from the measured ionospheric TEC by about -0.6 TECu.

  12. Aneroid sphygmomanometers. An assessment of accuracy at a university hospital and clinics.

    PubMed

    Bailey, R H; Knaus, V L; Bauer, J H

    1991-07-01

    Defects of aneroid sphygmomanometers are a source of error in blood pressure measurement. We inspected 230 aneroid sphygmomanometers for physical defects and compared their accuracy against a standard mercury manometer at five different pressure points. An aneroid sphygmomanometer was defined as intolerant if it deviated from the mercury manometer by more than ±3 mm Hg at two or more of the test points. The three most common physical defects were indicator needles not pointing to the "zero box," cracked face plates, and defective tubing. Eighty (34.8%) of the 230 aneroid sphygmomanometers were determined to be intolerant, with the greatest frequency of deviation seen at pressure levels of 150 mm Hg or greater. We recommend that aneroid manometers be inspected for physical defects and calibrated for accuracy against a standard mercury manometer at 6-month intervals to prevent inaccurate blood pressure measurements.
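
    A minimal sketch of the "intolerant" classification rule described above: a device is flagged if its reading deviates from the mercury standard by more than ±3 mm Hg at two or more of the five test points. The pressure points and readings below are hypothetical.

```python
# Flag an aneroid device as "intolerant" if it deviates from the mercury standard
# by more than +/-3 mm Hg at two or more test points. Readings are hypothetical.
TEST_POINTS = [60, 90, 120, 150, 180]          # mm Hg on the mercury standard
aneroid_readings = [61, 92, 118, 155, 176]     # hypothetical device readings

deviations = [a - s for a, s in zip(aneroid_readings, TEST_POINTS)]
out_of_tolerance = sum(abs(d) > 3 for d in deviations)
print("deviations (mm Hg):", deviations)
print("intolerant:", out_of_tolerance >= 2)
```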

  13. Task and Observer Skill Factors in Accuracy of Assessment of Performance

    DTIC Science & Technology

    1977-04-01

    ... On the basis of these data, a ... approach has been taken to questions of accuracy of clinical judgements (Sarbin, Taft, & Bailey, 1960; Bieri, Atkins, Briar, Leaman

  14. Quantitative Assessment of the Accuracy of Constitutive Laws for Plasticity with an Emphasis on Cyclic Deformation

    DTIC Science & Technology

    1993-04-01

    new law, the B-L law. The experimental database is constructed from a series of constant amplitude and random amplitude strain-controlled cyclic... description of the experimental instrumentation is given in Appendix I. The cyclic plasticity experiments were performed under strain control at room... instrumentation is present and control accuracy is not as good, the increments or difference of strain at two adjacent sampling intervals should be

  15. Accuracy assessment of high frequency 3D ultrasound for digital impression-taking of prepared teeth

    NASA Astrophysics Data System (ADS)

    Heger, Stefan; Vollborn, Thorsten; Tinschert, Joachim; Wolfart, Stefan; Radermacher, Klaus

    2013-03-01

    Silicone-based impression-taking of prepared teeth followed by plaster casting is well established but potentially less reliable, error-prone and inefficient, particularly in combination with emerging techniques like computer-aided design and manufacturing (CAD/CAM) of dental prostheses. Intra-oral optical scanners for digital impression-taking have been introduced, but until now some drawbacks still exist. Because optical waves can hardly penetrate liquids or soft tissues, sub-gingival preparations still need to be uncovered invasively prior to scanning. High frequency ultrasound (HFUS) based micro-scanning has recently been investigated as an alternative to optical intra-oral scanning. Ultrasound is less sensitive to oral fluids and in principle able to penetrate gingiva without invasively exposing sub-gingival preparations. Nevertheless, the spatial resolution as well as the digitization accuracy of an ultrasound based micro-scanning system remains a critical parameter because the ultrasound wavelength in water-like media such as gingiva is typically larger than that of optical waves. In this contribution, the in-vitro accuracy of ultrasound based micro-scanning for tooth geometry reconstruction is investigated and compared to its extra-oral optical counterpart. In order to increase the spatial resolution of the system, 2nd harmonic frequencies from a mechanically driven focused single element transducer were separated and corresponding 3D surface models were calculated for both fundamentals and 2nd harmonics. Measurements on phantoms, model teeth and human teeth were carried out for evaluation of spatial resolution and surface detection accuracy. Comparison of optical and ultrasound digital impression-taking indicates that, in terms of accuracy, ultrasound based tooth digitization can be an alternative to optical impression-taking.

  16. Future dedicated Venus-SGG flight mission: Accuracy assessment and performance analysis

    NASA Astrophysics Data System (ADS)

    Zheng, Wei; Hsu, Houtse; Zhong, Min; Yun, Meijuan

    2016-01-01

    This study concentrates principally on the systematic requirements analysis for the future dedicated Venus-SGG (spacecraft gravity gradiometry) flight mission in China, with respect to the matching measurement accuracies of the spacecraft-based scientific instruments and the orbital parameters of the spacecraft. Firstly, we created and validated the single and combined analytical error models of the cumulative Venusian geoid height influenced by the gravity gradient error of the spacecraft-borne atom-interferometer gravity gradiometer (AIGG) and by the orbital position and orbital velocity errors tracked by the deep space network (DSN) on the Earth station. Secondly, the ultra-high-precision spacecraft-borne AIGG is well placed to make a significant contribution to globally mapping the Venusian gravitational field and modeling the geoid with unprecedented accuracy and spatial resolution, after weighing the advantages and disadvantages of the electrostatically suspended gravity gradiometer, the superconducting gravity gradiometer and the AIGG. Finally, the future dedicated Venus-SGG spacecraft should adopt the optimal matching accuracy indices consisting of 3 × 10⁻¹³ s⁻² in gravity gradient, 10 m in orbital position and 8 × 10⁻⁴ m/s in orbital velocity, and the preferred orbital parameters comprising an orbital altitude of 300 ± 50 km, an observation time of 60 months and a sampling interval of 1 s.

  17. Exposure assessment procedures in presence of wideband digital wireless networks.

    PubMed

    Trinchero, D

    2009-12-01

    The article analyses the applicability of traditional methods, as well as recently proposed techniques, to the exposure assessment of electromagnetic field generated by wireless transmitters. As is well known, a correct measurement of the electromagnetic field is conditioned by the complexity of the signal, which requires dedicated instruments or specifically developed extrapolation techniques. Nevertheless, it is also influenced by the typology of the deployment of the transmitting and receiving stations, which varies from network to network. These aspects have been intensively analysed in the literature and several cases of study are available for review. The present article collects the most recent analyses and discusses their applicability to different scenarios, typical of the main wireless networking applications: broadcasting services, mobile cellular networks and data access provisioning infrastructures.

  18. A probabilistic seismic risk assessment procedure for nuclear power plants: (I) Methodology

    USGS Publications Warehouse

    Huang, Y.-N.; Whittaker, A.S.; Luco, N.

    2011-01-01

    A new procedure for probabilistic seismic risk assessment of nuclear power plants (NPPs) is proposed. This procedure modifies the current procedures using tools developed recently for performance-based earthquake engineering of buildings. The proposed procedure uses (a) response-based fragility curves to represent the capacity of structural and nonstructural components of NPPs, (b) nonlinear response-history analysis to characterize the demands on those components, and (c) Monte Carlo simulations to determine the damage state of the components. The use of response- rather than ground-motion-based fragility curves enables the curves to be independent of seismic hazard and closely related to component capacity. The use of the Monte Carlo procedure enables the correlation in the responses of components to be directly included in the risk assessment. An example of the methodology is presented in a companion paper to demonstrate its use and provide the technical basis for aspects of the methodology. © 2011 Published by Elsevier B.V.
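
    As an illustration of the Monte Carlo step described above (not the authors' implementation), the sketch below samples component demands, compares each sample against a lognormal fragility curve, and estimates the probability that the component reaches the damage state; the lognormal form and all parameter values are assumptions made for this example.

```python
import math
import random

random.seed(1)

# Lognormal fragility: median capacity and logarithmic standard deviation (hypothetical).
median_capacity = 1.2     # e.g., peak floor acceleration in g
beta_capacity = 0.4

# Simulated component demands, here drawn from an assumed lognormal demand model.
median_demand = 0.9
beta_demand = 0.5

n = 100_000
damaged = 0
for _ in range(n):
    demand = math.exp(random.gauss(math.log(median_demand), beta_demand))
    capacity = math.exp(random.gauss(math.log(median_capacity), beta_capacity))
    damaged += demand >= capacity

print(f"estimated probability of reaching the damage state: {damaged / n:.3f}")
```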

  19. Using Accuracy of Self-Estimated Interest Type as a Sign of Career Choice Readiness in Career Assessment of Secondary Students

    ERIC Educational Resources Information Center

    Hirschi, Andreas; Lage, Damian

    2008-01-01

    A frequent applied method in career assessment to elicit clients' self-concepts is asking them to predict their interest assessment results. Accuracy in estimating one's interest type is commonly taken as a sign of more self-awareness and career choice readiness. The study evaluated the empirical relation of accuracy of self-estimation to career…

  20. A diagnostic tool for determining the quality of accuracy validation. Assessing the method for determination of nitrate in drinking water.

    PubMed

    Escuder-Gilabert, L; Bonet-Domingo, E; Medina-Hernández, M J; Sagrado, S

    2007-01-01

    Realistic internal validation of a method implies performing validation experiments under intermediate precision conditions. The validation results can be organized in an X (Nr × Ns) (replicates × runs) data matrix, analysis of which enables assessment of the accuracy of the method. By means of Monte Carlo simulation, uncertainty in the estimates of bias and precision can be assessed. A bivariate plot is presented for assessing whether the uncertainty intervals for the bias (E ± U(E)) and intermediate precision (RSDi ± U(RSDi)) are included in prefixed limits (requirements for the method). As a case study, a method for determining the concentration of nitrate in drinking water at the official level set by Directive 98/83/EC is assessed by use of the proposed plot.
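
    The sketch below shows one common way to reduce an X (Nr × Ns) replicates-by-runs matrix to the bias and intermediate-precision estimates that such a plot compares against prefixed limits, using a standard one-way ANOVA decomposition; the data and reference value are invented, and the Monte Carlo uncertainty intervals described in the abstract are omitted here for brevity.

```python
# Bias (%) and intermediate precision (RSDi, %) from a replicates-by-runs matrix,
# via a one-way ANOVA split into within-run and between-run variance components.
# Data and reference value are illustrative only (mg/L nitrate).
X = [  # rows = replicates, columns = runs
    [49.6, 50.3, 50.9],
    [49.9, 50.6, 51.2],
    [49.4, 50.1, 50.7],
]
reference = 50.0

n_rep, n_run = len(X), len(X[0])
run_means = [sum(X[i][j] for i in range(n_rep)) / n_rep for j in range(n_run)]
grand_mean = sum(run_means) / n_run

ms_within = sum((X[i][j] - run_means[j]) ** 2
                for i in range(n_rep) for j in range(n_run)) / (n_run * (n_rep - 1))
ms_between = n_rep * sum((m - grand_mean) ** 2 for m in run_means) / (n_run - 1)

s_run2 = max((ms_between - ms_within) / n_rep, 0.0)   # between-run variance component
s_intermediate = (ms_within + s_run2) ** 0.5          # intermediate-precision SD

bias_pct = 100 * (grand_mean - reference) / reference
rsd_i_pct = 100 * s_intermediate / grand_mean
print(f"bias = {bias_pct:.2f}%  RSDi = {rsd_i_pct:.2f}%")
```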

  1. Increased Throwing Accuracy Improves Children's Catching Performance in a Ball-Catching Task from the Movement Assessment Battery (MABC-2).

    PubMed

    Dirksen, Tim; De Lussanet, Marc H E; Zentgraf, Karen; Slupinski, Lena; Wagner, Heiko

    2016-01-01

    The Movement Assessment Battery for Children (MABC-2) is a functional test for identifying deficits in the motor performance of children. The test contains a ball-catching task that requires the children to catch a self-thrown ball with one hand. As the task can be executed with a variety of different catching strategies, it is assumed that the task success can also vary considerably. Even though it is not clear whether the performance merely depends on the catching skills or also to some extent on the throwing skills, the MABC-2 takes into account only the movement outcome. Therefore, the purpose of the current study was to examine (1) to what extent the throwing accuracy has an effect on the children's catching performance and (2) to what extent the throwing accuracy influences their choice of catching strategy. In line with the test manual, the children's catching performance was quantified on the basis of the number of correctly caught balls. The throwing accuracy and the catching strategy were quantified by applying a kinematic analysis to the ball's trajectory and the hand movements. Based on linear regression analyses, we then investigated the relation between throwing accuracy, catching performance and catching strategy. The results show that an increased throwing accuracy is significantly correlated with an increased catching performance. Moreover, a higher throwing accuracy is significantly correlated with a longer duration of the hand on the ball's parabola, which indicates that throwing the ball more accurately could enable the children to effectively reduce the requirements on temporal precision. As the children's catching performance and their choice of catching strategy in the ball-catching task of the MABC-2 are substantially determined by their throwing accuracy, the test evaluation should not be based on the movement outcome alone, but should also take into account the children's throwing performance. Our findings could be of particular value for the

  2. Increased Throwing Accuracy Improves Children's Catching Performance in a Ball-Catching Task from the Movement Assessment Battery (MABC-2)

    PubMed Central

    Dirksen, Tim; De Lussanet, Marc H. E.; Zentgraf, Karen; Slupinski, Lena; Wagner, Heiko

    2016-01-01

    The Movement Assessment Battery for Children (MABC-2) is a functional test for identifying deficits in the motor performance of children. The test contains a ball-catching task that requires the children to catch a self-thrown ball with one hand. As the task can be executed with a variety of different catching strategies, it is assumed that the task success can also vary considerably. Even though it is not clear whether the performance merely depends on the catching skills or also to some extent on the throwing skills, the MABC-2 takes into account only the movement outcome. Therefore, the purpose of the current study was to examine (1) to what extent the throwing accuracy has an effect on the children's catching performance and (2) to what extent the throwing accuracy influences their choice of catching strategy. In line with the test manual, the children's catching performance was quantified on the basis of the number of correctly caught balls. The throwing accuracy and the catching strategy were quantified by applying a kinematic analysis to the ball's trajectory and the hand movements. Based on linear regression analyses, we then investigated the relation between throwing accuracy, catching performance and catching strategy. The results show that an increased throwing accuracy is significantly correlated with an increased catching performance. Moreover, a higher throwing accuracy is significantly correlated with a longer duration of the hand on the ball's parabola, which indicates that throwing the ball more accurately could enable the children to effectively reduce the requirements on temporal precision. As the children's catching performance and their choice of catching strategy in the ball-catching task of the MABC-2 are substantially determined by their throwing accuracy, the test evaluation should not be based on the movement outcome alone, but should also take into account the children's throwing performance. Our findings could be of particular value for the

  3. Assessment of the Sensitivity, Specificity, and Accuracy of Thermography in Identifying Patients with TMD

    PubMed Central

    Woźniak, Krzysztof; Szyszka-Sommerfeld, Liliana; Trybek, Grzegorz; Piątkowska, Dagmara

    2015-01-01

    Background The purpose of the present study was to evaluate the sensitivity, specificity, and accuracy of thermography in identifying patients with temporomandibular dysfunction (TMD). Material/Methods The study sample consisted of 50 patients (27 women and 23 men) ages 19.2 to 24.5 years (mean age 22.43±1.04) with subjective symptoms of TMD (Ai II–III) and 50 patients (25 women and 25 men) ages 19.3 to 25.1 years (mean age 22.21±1.18) with no subjective symptoms of TMD (Ai I). The anamnestic interviews were conducted according to the three-point anamnestic index of temporomandibular dysfunction (Ai). The thermography was performed using a ThermaCAM TMSC500 (FLIR Systems AB, Sweden) independent thermal vision system. Thermography was closely combined with a 10-min chewing test. Results The results of our study indicated that the absolute difference in temperature between the right and left side (ΔT) has the highest diagnostic value. The diagnostic effectiveness of this parameter increased after the chewing test. The cut-off points for values of temperature differences between the right and left side and identifying 95.5% of subjects with no functional disorders according to the temporomandibular dysfunction index Di (specificity 95.5%) were 0.26°C (AUC=0.7422, sensitivity 44.3%, accuracy 52.4%) before the chewing test and 0.52°C (AUC=0.7920, sensitivity 46.4%, accuracy 56.3%) after it. Conclusions The evaluation of thermography demonstrated its diagnostic usefulness in identifying patients with TMD with limited effectiveness. The chewing test helped in increasing the diagnostic efficiency of thermography in identifying patients with TMD. PMID:26002613

  4. Creating a Standard Set of Metrics to Assess Accuracy of Solar Forecasts: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Banunarayanan, V.; Brockway, A.; Marquis, M.; Haupt, S. E.; Brown, B.; Fowler, T.; Jensen, T.; Hamann, H.; Lu, S.; Hodge, B.; Zhang, J.; Florita, A.

    2013-12-01

    The U.S. Department of Energy (DOE) SunShot Initiative, launched in 2011, seeks to reduce the cost of solar energy systems by 75% from 2010 to 2020. In support of the SunShot Initiative, the DOE Office of Energy Efficiency and Renewable Energy (EERE) is partnering with the National Oceanic and Atmospheric Administration (NOAA) and solar energy stakeholders to improve solar forecasting. Through a funding opportunity announcement issued in April 2012, DOE is funding two teams - led by the National Center for Atmospheric Research (NCAR) and by IBM - to perform three key activities in order to improve solar forecasts. The teams will: (1) with DOE and NOAA's leadership and significant stakeholder input, develop a standardized set of metrics to evaluate forecast accuracy, and determine the baseline and target values for these metrics; (2) conduct research that yields a transformational improvement in weather models and methods for forecasting solar irradiance and power; and (3) incorporate solar forecasts into the system operations of the electric power grid, and evaluate the impact of forecast accuracy on the economics and reliability of operations using the defined, standard metrics. This paper will present preliminary results on the first activity: the development of a standardized set of metrics, baselines and target values. The results will include a proposed framework for metrics development, key categories of metrics, descriptions of each of the proposed set of specific metrics to measure forecast accuracy, feedback gathered from a range of stakeholders on the metrics, and processes to determine baselines and target values for each metric. The paper will also analyze the temporal and spatial resolutions under which these metrics would apply, and conclude with a summary of the work in progress on solar forecasting activities funded by DOE.
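
    Forecast-accuracy metric sets of this kind commonly include quantities such as mean bias error, mean absolute error, and root-mean-square error. The sketch below computes these three for a hypothetical irradiance series; it is an illustration only and is not the standardized metric set developed by the teams.

```python
# Common forecast-accuracy metrics (MBE, MAE, RMSE) for a hypothetical series of
# global horizontal irradiance observations and forecasts (W/m^2).
observed = [520, 610, 700, 640, 300, 150]
forecast = [500, 650, 690, 600, 360, 140]

n = len(observed)
errors = [f - o for f, o in zip(forecast, observed)]
mbe = sum(errors) / n
mae = sum(abs(e) for e in errors) / n
rmse = (sum(e * e for e in errors) / n) ** 0.5
print(f"MBE = {mbe:.1f} W/m^2, MAE = {mae:.1f} W/m^2, RMSE = {rmse:.1f} W/m^2")
```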

  5. Accuracy assessment of the ERP prediction method based on analysis of 100-year ERP series

    NASA Astrophysics Data System (ADS)

    Malkin, Z.; Tissen, V. M.

    2012-12-01

    A new method has been developed at the Siberian Research Institute of Metrology (SNIIM) for highly accurate prediction of UT1 and polar motion (PM). In this study, a detailed comparison was made of real-time UT1 predictions made in 2006-2011 and PM predictions made in 2009-2011 using the SNIIM method with simultaneous predictions computed at the International Earth Rotation and Reference Systems Service (IERS), USNO. The results show that the proposed method provides better accuracy at different prediction lengths.

  6. Assessing Accuracy of Exchange-Correlation Functionals for the Description of Atomic Excited States

    NASA Astrophysics Data System (ADS)

    Makowski, Marcin; Hanas, Martyna

    2016-09-01

    The performance of exchange-correlation functionals for the description of atomic excitations is investigated. A benchmark set of excited states is constructed and experimental data are compared to Time-Dependent Density Functional Theory (TDDFT) calculations. The benchmark results show that good accuracy may be achieved for the selected group of functionals, and that the quality of the predictions is competitive with computationally more demanding coupled-cluster approaches. Beyond testing the standard TDDFT approaches, some insight is also given into the role of the self-interaction error plaguing DFT calculations and into the adiabatic approximation to the exchange-correlation kernels.

  7. Florida Statewide Assessment Program 1971-72 Technical Report; Section 1: Introduction, Procedures, and Program Recommendations.

    ERIC Educational Resources Information Center

    Haynes, Judy L.; Impara, James C.

    The first section of a four-part technical report of Florida's statewide program for assessing reading-related skills in grades 2 and 4 provides an introduction to the program, a description of procedures used, and recommendations regarding program operation. Program background, design, and responsibility for assessment activities are discussed in…

  8. An Assessment of Error-Correction Procedures for Learners with Autism

    ERIC Educational Resources Information Center

    McGhan, Anna C.; Lerman, Dorothea C.

    2013-01-01

    Prior research indicates that the relative effectiveness of different error-correction procedures may be idiosyncratic across learners, suggesting the potential benefit of an individualized assessment prior to teaching. In this study, we evaluated the reliability and utility of a rapid error-correction assessment to identify the least intrusive,…

  9. The Implicit Relational Assessment Procedure as a Measure of Self-Esteem

    ERIC Educational Resources Information Center

    Timko, C. Alix; England, Erica L.; Herbert, James D.; Forman, Evan M.

    2010-01-01

    Two studies were conducted to pilot the Implicit Relational Assessment Procedure (IRAP) in measuring attitudes toward the self: one related to body image specifically and another assessing the broader construct of self-esteem. Study 1 utilized the IRAP with female college students to examine self-referential beliefs regarding body image. Results…

  10. Assessing the accuracy of the International Classification of Diseases codes to identify abusive head trauma: a feasibility study

    PubMed Central

    Berger, Rachel P; Parks, Sharyn; Fromkin, Janet; Rubin, Pamela; Pecora, Peter J

    2016-01-01

    Objective To assess the accuracy of an International Classification of Diseases (ICD) code-based operational case definition for abusive head trauma (AHT). Methods Subjects were children <5 years of age evaluated for AHT by a hospital-based Child Protection Team (CPT) at a tertiary care paediatric hospital with a completely electronic medical record (EMR) system. Subjects were designated as non-AHT traumatic brain injury (TBI) or AHT based on whether the CPT determined that the injuries were due to AHT. The sensitivity and specificity of the ICD-based definition were calculated. Results There were 223 children evaluated for AHT: 117 AHT and 106 non-AHT TBI. The sensitivity and specificity of the ICD-based operational case definition were 92% (95% CI 85.8 to 96.2) and 96% (95% CI 92.3 to 99.7), respectively. All errors in sensitivity and three of the four specificity errors were due to coder error; one specificity error was a physician error. Conclusions In a paediatric tertiary care hospital with an EMR system, the accuracy of an ICD-based case definition for AHT was high. Additional studies are needed to assess the accuracy of this definition in all types of hospitals in which children with AHT are cared for. PMID:24167034

  11. Accuracy assessment of NOAA gridded daily reference evapotranspiration for the Texas High Plains

    USGS Publications Warehouse

    Moorhead, Jerry; Gowda, Prasanna H.; Hobbins, Michael; Senay, Gabriel; Paul, George; Marek, Thomas; Porter, Dana

    2015-01-01

    The National Oceanic and Atmospheric Administration (NOAA) provides daily reference evapotranspiration (ETref) maps for the contiguous United States using climatic data from the North American Land Data Assimilation System (NLDAS). These data provide large-scale spatial representation of ETref, which is essential for regional scale water resources management. Data used in the development of NOAA daily ETref maps are derived from observations over surfaces that are different from short (grass — ETos) or tall (alfalfa — ETrs) reference crops, often in nonagricultural settings, which carries an unknown discrepancy between assumed and actual conditions. In this study, NOAA daily ETos and ETrs maps were evaluated for accuracy, using observed data from the Texas High Plains Evapotranspiration (TXHPET) network. Daily ETos, ETrs and the climatic data (air temperature, wind speed, and solar radiation) used for calculating ETref were extracted from the NOAA maps for TXHPET locations and compared against ground measurements on reference grass surfaces. NOAA ETref maps generally overestimated the TXHPET observations (1.4 and 2.2 mm/day for ETos and ETrs, respectively), which may be attributed to errors in the NLDAS modeled air temperature and wind speed, to which ETref is most sensitive. Therefore, a bias correction to NLDAS modeled air temperature and wind speed data, or an adjustment to the resulting NOAA ETref, may be needed to improve the accuracy of NOAA ETref maps.
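
    A minimal sketch of the kind of point-to-pixel comparison described above: gridded daily ETref values extracted at station locations are compared with ground observations to give mean bias and RMSE. The helper function and the sample numbers are illustrative assumptions, not the TXHPET evaluation code.

```python
import numpy as np

def compare_gridded_to_station(et_gridded, et_station):
    """Mean bias and RMSE of gridded ETref against station observations (mm/day)."""
    et_gridded = np.asarray(et_gridded, dtype=float)
    et_station = np.asarray(et_station, dtype=float)
    diff = et_gridded - et_station          # positive = gridded product overestimates
    return {"mean_bias": round(float(diff.mean()), 2),
            "rmse": round(float(np.sqrt((diff ** 2).mean())), 2)}

# Hypothetical daily ETos values (mm/day) extracted at one station location.
noaa_etos    = [6.8, 7.9, 9.1, 8.4, 7.2]
station_etos = [5.5, 6.3, 7.6, 7.1, 5.9]
print(compare_gridded_to_station(noaa_etos, station_etos))
```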

  12. Breeding Jatropha curcas by genomic selection: A pilot assessment of the accuracy of predictive models

    PubMed Central

    de Azevedo Peixoto, Leonardo; Laviola, Bruno Galvêas; Alves, Alexandre Alonso; Rosado, Tatiana Barbosa; Bhering, Leonardo Lopes

    2017-01-01

    Genome-wide selection (GWS) is a promising approach for improving the selection accuracy in plant breeding, particularly in species with long life cycles, such as Jatropha. Therefore, the objectives of this study were to estimate the genetic parameters for grain yield (GY) and the weight of 100 seeds (W100S) using restricted maximum likelihood (REML); to compare the performance of GWS methods to predict GY and W100S; and to estimate how many markers are needed to train the GWS model to obtain the maximum accuracy. Eight GWS models were compared in terms of predictive ability. The impact that the marker density had on the predictive ability was investigated using a varying number of markers, from 2 to 1,248. Because the genetic variance between evaluated genotypes was significant, it was possible to obtain selection gain. All of the GWS methods tested in this study can be used to predict GY and W100S in Jatropha. A training model fitted using 1,000 and 800 markers is sufficient to capture the maximum genetic variance and, consequently, maximum prediction ability of GY and W100S, respectively. This study demonstrated the applicability of genome-wide prediction to identify useful genetic sources of GY and W100S for Jatropha breeding. Further research is needed to confirm the applicability of the proposed approach to other complex traits. PMID:28296913
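
    The record above describes comparing genomic prediction models trained on varying numbers of markers. As a hedged illustration of that workflow, the sketch below uses a plain ridge-regression marker model (a simple stand-in for RR-BLUP-type GWS methods, not one of the eight models actually tested) and reports cross-validated predictive ability, i.e. the correlation between predicted and observed phenotypes, for several marker densities on simulated data.

```python
import numpy as np

def ridge_predictive_ability(X, y, n_markers, lam=1.0, k=5, seed=0):
    """k-fold CV predictive ability (correlation of predicted vs observed)
    for a ridge-regression marker model using the first `n_markers` markers."""
    rng = np.random.default_rng(seed)
    X = X[:, :n_markers]
    idx = rng.permutation(len(y))
    preds = np.empty(len(y))
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        Xt, yt = X[train], y[train]
        mu_x, mu_y = Xt.mean(axis=0), yt.mean()
        Xc = Xt - mu_x
        # Ridge solution: beta = (Xc'Xc + lam*I)^-1 Xc'(yt - mu_y)
        beta = np.linalg.solve(Xc.T @ Xc + lam * np.eye(n_markers),
                               Xc.T @ (yt - mu_y))
        preds[fold] = mu_y + (X[fold] - mu_x) @ beta
    return np.corrcoef(preds, y)[0, 1]

# Simulated toy data: 200 genotypes, 1,248 biallelic markers, 40 of them causal.
rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(200, 1248)).astype(float)
effects = np.zeros(1248)
effects[rng.choice(1248, 40, replace=False)] = rng.normal(0, 0.5, 40)
y = X @ effects + rng.normal(0, 1.0, 200)

for m in (100, 400, 800, 1248):
    print(m, "markers -> predictive ability", round(ridge_predictive_ability(X, y, m), 3))
```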

  13. Assessment of Classification Accuracies of SENTINEL-2 and LANDSAT-8 Data for Land Cover / Use Mapping

    NASA Astrophysics Data System (ADS)

    Hale Topaloğlu, Raziye; Sertel, Elif; Musaoğlu, Nebiye

    2016-06-01

    This study aims to compare the classification accuracies of land cover/use maps created from Sentinel-2 and Landsat-8 data. The Istanbul metropolitan city of Turkey, with a population of around 14 million and varied landscape characteristics, was selected as the study area. Water, forest, agricultural areas, grasslands, transport network, urban, airport/industrial units and barren land/mine land cover/use classes adapted from the CORINE nomenclature were used as the main land cover/use classes to identify. To fulfil the aims of this research, recently acquired Sentinel-2 (08/02/2016) and Landsat-8 (22/02/2016) images of Istanbul were obtained, and image pre-processing steps such as atmospheric and geometric correction were applied. Both Sentinel-2 and Landsat-8 images were resampled to 30 m pixel size after geometric correction, and similar spectral bands for both satellites were selected to create a comparable base for these multi-sensor data. Maximum Likelihood (MLC) and Support Vector Machine (SVM) supervised classification methods were applied to both data sets to identify the eight land cover/use classes. An error matrix was created using the same reference points for the Sentinel-2 and Landsat-8 classifications. After classification, accuracy results were compared to find the best approach for creating a current land cover/use map of the region. The results of the MLC and SVM classification methods were compared for both images.
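
    Classification accuracy assessment of the kind described above rests on an error (confusion) matrix built from reference points. The following sketch, with made-up labels for three of the eight classes, shows how overall accuracy and Cohen's kappa are derived from such a matrix; it is illustrative only and not tied to the Sentinel-2/Landsat-8 datasets.

```python
import numpy as np

def error_matrix_stats(reference, predicted, n_classes):
    """Confusion (error) matrix, overall accuracy and Cohen's kappa
    from per-point reference labels and classified labels."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for r, p in zip(reference, predicted):
        cm[r, p] += 1
    n = cm.sum()
    overall = np.trace(cm) / n
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
    kappa = (overall - expected) / (1 - expected)
    return cm, overall, kappa

# Hypothetical reference points for 3 of the 8 classes (0=water, 1=forest, 2=urban).
ref  = [0, 0, 0, 1, 1, 1, 1, 2, 2, 2]
pred = [0, 0, 1, 1, 1, 1, 2, 2, 2, 0]
cm, oa, kappa = error_matrix_stats(ref, pred, 3)
print(cm)
print(f"overall accuracy = {oa:.2f}, kappa = {kappa:.2f}")
```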

  14. Assessing the speed-accuracy trade-off effect on the capacity of information processing.

    PubMed

    Donkin, Chris; Little, Daniel R; Houpt, Joseph W

    2014-06-01

    The ability to trade accuracy for speed is fundamental to human decision making. The speed-accuracy trade-off (SAT) effect has received decades of study, and is well understood in relatively simple decisions: collecting more evidence before making a decision allows one to be more accurate but also slower. The SAT in more complex paradigms has been given less attention, largely due to limits in the models and statistics that can be applied to such tasks. Here, we have conducted the first analysis of the SAT in multiple signal processing, using recently developed technologies for measuring capacity that take into account both response time and choice probability. We show that the primary influence of caution in our redundant-target experiments is on the threshold amount of evidence required to trigger a response. However, in a departure from the usual SAT effect, we found that participants strategically ignored redundant information when they were forced to respond quickly, but only when the additional stimulus was reliably redundant. Interestingly, because the capacity of the system was severely limited on redundant-target trials, ignoring additional targets meant that processing was more efficient when making fast decisions than when making slow and accurate decisions, where participants' limited resources had to be divided between the 2 stimuli.

  15. Breeding Jatropha curcas by genomic selection: A pilot assessment of the accuracy of predictive models.

    PubMed

    Azevedo Peixoto, Leonardo de; Laviola, Bruno Galvêas; Alves, Alexandre Alonso; Rosado, Tatiana Barbosa; Bhering, Leonardo Lopes

    2017-01-01

    Genome-wide selection (GWS) is a promising approach for improving the selection accuracy in plant breeding, particularly in species with long life cycles, such as Jatropha. Therefore, the objectives of this study were to estimate the genetic parameters for grain yield (GY) and the weight of 100 seeds (W100S) using restricted maximum likelihood (REML); to compare the performance of GWS methods to predict GY and W100S; and to estimate how many markers are needed to train the GWS model to obtain the maximum accuracy. Eight GWS models were compared in terms of predictive ability. The impact that the marker density had on the predictive ability was investigated using a varying number of markers, from 2 to 1,248. Because the genetic variance between evaluated genotypes was significant, it was possible to obtain selection gain. All of the GWS methods tested in this study can be used to predict GY and W100S in Jatropha. A training model fitted using 1,000 and 800 markers is sufficient to capture the maximum genetic variance and, consequently, maximum prediction ability of GY and W100S, respectively. This study demonstrated the applicability of genome-wide prediction to identify useful genetic sources of GY and W100S for Jatropha breeding. Further research is needed to confirm the applicability of the proposed approach to other complex traits.

  16. Assessing the accuracy and repeatability of automated photogrammetrically generated digital surface models from unmanned aerial system imagery

    NASA Astrophysics Data System (ADS)

    Chavis, Christopher

    Using commercial digital cameras in conjunction with Unmanned Aerial Systems (UAS) to generate 3-D Digital Surface Models (DSMs) and orthomosaics is emerging as a cost-effective alternative to Light Detection and Ranging (LiDAR). Powerful software applications such as Pix4D and APS can automate the generation of DSM and orthomosaic products from a handful of inputs. However, the accuracy of these models is relatively untested. The objectives of this study were to generate multiple DSM and orthomosaic pairs of the same area using Pix4D and APS from flights of imagery collected with a lightweight UAS. The accuracy of each individual DSM was assessed in addition to the consistency of the method to model one location over a period of time. Finally, this study determined if the DSMs automatically generated using lightweight UAS and commercial digital cameras could be used for detecting changes in elevation and at what scale. Accuracy was determined by comparing DSMs to a series of reference points collected with survey grade GPS. Other GPS points were also used as control points to georeference the products within Pix4D and APS. The effectiveness of the products for change detection was assessed through image differencing and observance of artificially induced, known elevation changes. The vertical accuracy with the optimal data and model is ≈ 25 cm and the highest consistency over repeat flights is a standard deviation of ≈ 5 cm. Elevation change detection based on such UAS imagery and DSM models should be viable for detecting infrastructure change in urban or suburban environments with little dense canopy vegetation.
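
    A common way to express the vertical accuracy quoted above is the RMSE of DSM elevations against surveyed checkpoints, with a 95% accuracy figure taken as 1.96 times the RMSE (the NSSDA convention). The sketch below samples a toy DSM grid at checkpoint locations using nearest-cell lookup; the grid geometry and checkpoint values are invented for illustration and are not the study data.

```python
import numpy as np

def vertical_accuracy(dsm, transform, checkpoints):
    """RMSE, NSSDA-style 95% accuracy (1.96 * RMSE) and standard deviation of a DSM
    against surveyed (x, y, z) checkpoints.  `transform` = (x0, y0, cell_size) of the
    grid's upper-left corner; nearest-cell sampling is used for simplicity."""
    x0, y0, cell = transform
    errors = []
    for x, y, z in checkpoints:
        col = int((x - x0) / cell)
        row = int((y0 - y) / cell)
        errors.append(dsm[row, col] - z)
    errors = np.asarray(errors)
    rmse = float(np.sqrt((errors ** 2).mean()))
    return {"rmse": round(rmse, 3),
            "accuracy_95": round(1.96 * rmse, 3),
            "std": round(float(errors.std(ddof=1)), 3)}

# Toy 4 x 4 DSM (metres) with 1 m cells, upper-left corner at (1000, 2000).
dsm = np.array([[10.1, 10.2, 10.4, 10.3],
                [10.0, 10.1, 10.3, 10.5],
                [ 9.9, 10.0, 10.2, 10.4],
                [ 9.8,  9.9, 10.1, 10.3]])
checkpoints = [(1000.5, 1999.5, 10.30), (1002.5, 1998.5, 10.05), (1003.5, 1996.5, 10.45)]
print(vertical_accuracy(dsm, (1000.0, 2000.0, 1.0), checkpoints))
```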

  17. Assessment of accuracy of adopted centre of mass corrections for the Etalon geodetic satellites

    NASA Astrophysics Data System (ADS)

    Appleby, Graham; Dunn, Peter; Otsubo, Toshimichi; Rodriguez, Jose

    2016-04-01

    Accurate centre-of-mass corrections are key parameters in the analysis of satellite laser ranging observations. In order to meet current accuracy requirements, the vector from the reflection point of a laser retroreflector array to the centre of mass of the orbiting spacecraft must be known with mm-level accuracy. In general, the centre-of-mass correction will be dependent on the characteristics of the target (geometry, construction materials, type of retroreflectors), the hardware employed by the tracking station (laser system, detector type), the intensity of the returned laser pulses, and the post-processing strategy employed to reduce the observations [1]. For the geodetic targets used by the ILRS to produce the SLR contribution to the ITRF, the LAGEOS and Etalon satellite pairs, there are centre-of-mass correction tables available for each tracking station [2]. These values are based on theoretical considerations, empirical determination of the optical response functions of each satellite, and knowledge of the tracking technology and return intensity employed [1]. Here we present results that put into question the accuracy of some of the current values for the centre-of-mass corrections of the Etalon satellites. We have computed weekly reference frame solutions using LAGEOS and Etalon observations for the period 1996-2014, estimating range bias parameters for each satellite type along with station coordinates. Analysis of the range bias time series reveals an unexplained, cm-level positive bias for the Etalon satellites in the case of most stations operating at high energy return levels. The time series of tracking stations that have undergone a transition from different modes of operation provide the evidence pointing to an inadequate centre-of-mass modelling. [1] Otsubo, T., and G.M. Appleby, System-dependent centre-of-mass correction for spherical geodetic satellites, J Geophys. Res., 108(B4), 2201, 2003 [2] Appleby, G.M., and T. Otsubo, Centre of Mass

  18. ON THE ACCURACY OF THE PROPAGATION THEORY AND THE QUALITY OF BACKGROUND OBSERVATIONS IN A SCHUMANN RESONANCE INVERSION PROCEDURE

    NASA Astrophysics Data System (ADS)

    Mushtak, V. C.

    2009-12-01

    Observations of electromagnetic fields in the Schumann resonance (SR) frequency range (5 to 40 Hz) contain information about both the major source of the electromagnetic radiation (repeatedly confirmed to be global lightning activity) and the source-to-observer propagation medium (the Earth-ionosphere waveguide). While the electromagnetic signatures from individual lightning discharges provide preferable experimental material for exploring the medium, the properties of the world-wide lightning process are best reflected in background spectral SR observations. In the latter, electromagnetic contributions from thousands of lightning discharges are accumulated in intervals of about 10-15 minutes - long enough to present a statistically significant (and so theoretically treatable) ensemble of individual flashes, and short enough to reflect the spatial-temporal dynamics of global lightning activity. Thanks to the small (well below 1 dB/Mm) attenuation in the SR range and the accumulated nature of background SR observations, the latter present globally integrated information about lightning activity not available via other (satellite, meteorological) techniques. The most interesting characteristics to be extracted in an inversion procedure are the rates of vertical charge moment change (and their temporal variations) in the major global lightning “chimneys”. The success of such a procedure depends critically on the accuracy of the propagation theory (used to carry out “direct” calculations for the inversion) and the quality of experimental material. Due to the nature of the problem, both factors - the accuracy and the quality - can only be estimated indirectly, which requires specific approaches to assure that the estimates are realistic and more importantly, that the factors could be improved. For the first factor, simulations show that the widely exploited theory of propagation in a uniform (spherically symmetrical) waveguide provides unacceptable (up to

  19. A SUB-PIXEL ACCURACY ASSESSMENT FRAMEWORK FOR DETERMINING LANDSAT TM DERIVED IMPERVIOUS SURFACE ESTIMATES.

    EPA Science Inventory

    The amount of impervious surface in a watershed is a landscape indicator integrating a number of concurrent interactions that influence a watershed's hydrology. Remote sensing data and techniques are viable tools to assess anthropogenic impervious surfaces. However a fundamental ...

  20. Assessment of Required Accuracy of Digital Elevation Data for Hydrologic Modeling

    NASA Technical Reports Server (NTRS)

    Kenward, T.; Lettenmaier, D. P.

    1997-01-01

    The effect of the vertical accuracy of Digital Elevation Models (DEMs) on hydrologic models is evaluated by comparing three DEMs and the resulting hydrologic model predictions applied to a 7.2 sq km USDA-ARS watershed at Mahantango Creek, PA. The high-resolution (5 m) DEM was resampled to a 30 m resolution using a method that constrained the spatial structure of the elevations to be comparable with the USGS and SIR-C DEMs. This resulting 30 m DEM was used as the reference product for subsequent comparisons. Spatial fields of directly derived quantities, such as elevation differences, slope, and contributing area, were compared to the reference product, as were hydrologic model output fields derived using each of the three DEMs at the common 30 m spatial resolution.
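
    The comparison of directly derived quantities mentioned above can be reproduced in a few lines: difference statistics between a test DEM and the reference DEM, and slope grids computed by central differences. The toy 30 m DEMs below are synthetic stand-ins, not the Mahantango Creek data.

```python
import numpy as np

def dem_difference_stats(dem_test, dem_ref):
    """Summary statistics of cell-by-cell elevation differences (m)."""
    diff = dem_test - dem_ref
    return {"mean": round(float(diff.mean()), 2),
            "rmse": round(float(np.sqrt((diff ** 2).mean())), 2),
            "min": round(float(diff.min()), 2),
            "max": round(float(diff.max()), 2)}

def slope_degrees(dem, cell_size):
    """Slope (degrees) from central differences, as one example of a derived field."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# Toy 30 m DEMs standing in for the reference product and a test product.
rng = np.random.default_rng(0)
ref = np.cumsum(rng.normal(0, 0.5, (50, 50)), axis=0) + 300.0
test = ref + rng.normal(0, 1.5, ref.shape)       # roughly 1.5 m of vertical noise
print(dem_difference_stats(test, ref))
print("mean slope difference (deg):",
      round(float(np.abs(slope_degrees(test, 30.0) - slope_degrees(ref, 30.0)).mean()), 2))
```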

  1. Assessment of the labelling accuracy of spanish semipreserved anchovies products by FINS (forensically informative nucleotide sequencing).

    PubMed

    Velasco, Amaya; Aldrey, Anxela; Pérez-Martín, Ricardo I; Sotelo, Carmen G

    2016-06-01

    Anchovies have been traditionally captured and processed for human consumption for millennia. In the case of Spain, ripened and salted anchovies are a delicacy which, in some cases, can reach high commercial values. Although there have been a number of studies presenting DNA methodologies for the identification of anchovies, this is one of the first studies investigating the level of mislabelling in this kind of product in Europe. Sixty-three commercial semipreserved anchovy products were collected in different types of food markets in four Spanish cities to check labelling accuracy. Species determination in these commercial products was performed by sequencing two different cyt-b mitochondrial DNA fragments. Results revealed mislabelling levels higher than 15%, which the authors consider relatively high given the importance of the product. The most frequent substitute species was the Argentine anchovy, Engraulis anchoita, which can be interpreted as an economic fraud.

  2. Assessment of dimensional accuracy of preadjusted metal injection molding orthodontic brackets

    PubMed Central

    Alavi, Shiva; Tajmirriahi, Farnaz

    2016-01-01

    Background: The aim of this study is to evaluate the dimensional accuracy of McLaughlin, Bennett, and Trevisi (MBT) brackets manufactured by two different companies (American Orthodontics and Ortho Organizers) and to determine variations in the tip and torque values incorporated in these products. Materials and Methods: In the present analytical/descriptive study, 64 maxillary right central brackets manufactured by the two companies were selected randomly and evaluated for the accuracy of the torque and angulation values stated by the manufacturers. They were placed in a video measuring machine using special revolvers under them and were positioned so that the light beams were directed on the floor of the slot without the slot walls being seen. The software of the same machine was then used to determine the values for each bracket type. The means of the measurements were determined for each sample and analyzed with independent t-tests and one-sample t-tests. Results: Based on the confidence intervals, at 95% probability the mean tip angles of the maxillary right central brackets of the two brands were 4.1–4.3° and the mean torque angles were 16.39–16.72°. Individual tip values ranged from 3.33–4.98° and torque values from 15.22–18.48°. Conclusion: There were no significant differences in the angulation incorporated into the brackets from the two companies; however, both differed significantly from the tip value of the MBT prescription. For torque, there was a significant difference between the two brands, and the American Orthodontics brackets also differed significantly from the reported 17°. PMID:27857770

  3. Assessment of the relationship between lesion segmentation accuracy and computer-aided diagnosis scheme performance

    NASA Astrophysics Data System (ADS)

    Zheng, Bin; Pu, Jiantao; Park, Sang Cheol; Zuley, Margarita; Gur, David

    2008-03-01

    In this study we randomly selected 250 malignant and 250 benign mass regions as a training dataset. The boundary contours of these regions were manually identified and marked. Twelve image features were computed for each region. An artificial neural network (ANN) was trained as a classifier. To select a specific testing dataset, we applied a topographic multi-layer region growth algorithm to detect boundary contours of 1,903 mass regions in an initial pool of testing regions. All processed regions were sorted based on a size difference ratio between manual and automated segmentation. We selected a testing dataset involving 250 malignant and 250 benign mass regions with larger size difference ratios. Using the area under the ROC curve (AZ value) as the performance index, we investigated the relationship between the accuracy of mass segmentation and the performance of a computer-aided diagnosis (CAD) scheme. CAD performance degrades as the size difference ratio increases. We then developed and tested a hybrid region growth algorithm that combined the topographic region growth with an active contour approach. In this hybrid algorithm, the boundary contour detected by the topographic region growth is used as the initial contour of the active contour algorithm. The algorithm iteratively searches for the optimal region boundaries. A CAD likelihood score of the growth region being a true-positive mass is computed in each iteration. The region growth is automatically terminated once the first maximum CAD score is reached. This hybrid region growth algorithm reduces the size difference ratios between the automatically and manually segmented areas to less than +/-15% for all testing regions, and the testing AZ value increases from 0.63 to 0.90. The results indicate that CAD performance heavily depends on the accuracy of mass segmentation. In order to achieve robust CAD performance, reducing lesion segmentation error is important.

  4. Self Assessment in Schizophrenia: Accuracy of Evaluation of Cognition and Everyday Functioning

    PubMed Central

    Gould, Felicia; McGuire, Laura Stone; Durand, Dante; Sabbag, Samir; Larrauri, Carlos; Patterson, Thomas L.; Twamley, Elizabeth W.; Harvey, Philip D.

    2015-01-01

    Objective Self-assessment deficits, often referred to as impaired insight or unawareness of illness, are well established in people with schizophrenia. There are multiple levels of awareness, including awareness of symptoms, functional deficits, cognitive impairments, and the ability to monitor cognitive and functional performance in an ongoing manner. The present study aimed to evaluate the comparative predictive value of each aspect of awareness on the levels of everyday functioning in people with schizophrenia. Method We examined multiple aspects of self-assessment of functioning in 214 people with schizophrenia. We also collected information on everyday functioning rated by high contact clinicians and examined the importance of self-assessment for the prediction of real world functional outcomes. The relative impact of performance based measures of cognition, functional capacity, and metacognitive performance on everyday functioning was also examined. Results Misestimation of ability emerged as the strongest predictor of real world functioning and exceeded the influences of cognitive performance, functional capacity performance, and performance-based assessment of metacognitive monitoring. The relative contribution of the factors other than self-assessment varied according to which domain of everyday functioning was being examined, but in all cases, accounted for less predictive variance. Conclusions These results underscore the functional impact of misestimating one’s current functioning and relative level of ability. These findings are consistent with the use of insight-focused treatments and compensatory strategies designed to increase self-awareness in multiple functional domains. PMID:25643212

  5. Mass Evolution of Mediterranean, Black, Red, and Caspian Seas from GRACE and Altimetry: Accuracy Assessment and Solution Calibration

    NASA Technical Reports Server (NTRS)

    Loomis, B. D.; Luthcke, S. B.

    2016-01-01

    We present new measurements of mass evolution for the Mediterranean, Black, Red, and Caspian Seas as determined by the NASA Goddard Space Flight Center (GSFC) GRACE time-variable global gravity mascon solutions. These new solutions are compared to sea surface altimetry measurements of sea level anomalies with steric corrections applied. To assess their accuracy, the GRACE and altimetry-derived solutions are applied to the set of forward models used by GSFC for processing the GRACE Level-1B datasets, with the resulting inter-satellite range acceleration residuals providing a useful metric for analyzing solution quality.

  6. Assessing the Accuracy of the Tracer Dilution Method with Atmospheric Dispersion Modeling

    NASA Astrophysics Data System (ADS)

    Taylor, D.; Delkash, M.; Chow, F. K.; Imhoff, P. T.

    2015-12-01

    Landfill methane emissions are difficult to estimate due to limited observations and data uncertainty. The mobile tracer dilution method is a widely used and cost-effective approach for predicting landfill methane emissions. The method uses a tracer gas released on the surface of the landfill and measures the concentrations of both methane and the tracer gas downwind. Mobile measurements are conducted with a gas analyzer mounted on a vehicle to capture transects of both gas plumes. The idea behind the method is that if the measurements are performed far enough downwind, the methane plume from the large area source of the landfill and the tracer plume from a small number of point sources will be sufficiently well-mixed to behave similarly, and the ratio between the concentrations will be a good estimate of the ratio between the two emissions rates. The mobile tracer dilution method is sensitive to different factors of the setup such as placement of the tracer release locations and distance from the landfill to the downwind measurements, which have not been thoroughly examined. In this study, numerical modeling is used as an alternative to field measurements to study the sensitivity of the tracer dilution method and provide estimates of measurement accuracy. Using topography and wind conditions for an actual landfill, a landfill emissions rate is prescribed in the model and compared against the emissions rate predicted by application of the tracer dilution method. Two different methane emissions scenarios are simulated: homogeneous emissions over the entire surface of the landfill, and heterogeneous emissions with a hot spot containing 80% of the total emissions where the daily cover area is located. Numerical modeling of the tracer dilution method is a useful tool for evaluating the method without having the expense and labor commitment of multiple field campaigns. Factors tested include number of tracers, distance between tracers, distance from landfill to transect
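
    The core arithmetic of the tracer dilution method described above is a ratio of integrated above-background plume enhancements, scaled by the known tracer release rate and the molar-mass ratio. The sketch below applies that idea to one invented transect, using a crude median background subtraction; the numbers, the SF6 default for the tracer, and the background treatment are all illustrative assumptions, not the study's procedure.

```python
import numpy as np

def tracer_dilution_emission(ch4_ppb, tracer_ppb, tracer_release_kg_h,
                             mw_ch4=16.04, mw_tracer=146.06):
    """Estimate the CH4 emission rate from one downwind transect.
    Sums the above-background enhancements of methane and tracer across the
    transect and scales the known tracer release rate by their molar ratio,
    converted to mass units.  mw_tracer defaults to SF6 (an assumption)."""
    ch4 = np.asarray(ch4_ppb, float)
    trc = np.asarray(tracer_ppb, float)
    ch4_excess = np.clip(ch4 - np.median(ch4), 0, None)   # crude background removal
    trc_excess = np.clip(trc - np.median(trc), 0, None)
    molar_ratio = ch4_excess.sum() / trc_excess.sum()
    return tracer_release_kg_h * molar_ratio * (mw_ch4 / mw_tracer)

# Hypothetical transect (both gases in ppb): ~1900 ppb CH4 background, plume in the middle.
ch4    = [1900, 1910, 2050, 2400, 2300, 2000, 1905, 1900]
tracer = [0.0,  0.1,  1.0,  2.8,  2.4,  0.9,  0.1,  0.0]
print("estimated CH4 emission (kg/h):",
      round(tracer_dilution_emission(ch4, tracer, tracer_release_kg_h=2.0), 1))
```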

  7. Springback Control in Industrial Bending Operations: Assessing the Accuracy of Three Commercial FEA Codes

    NASA Astrophysics Data System (ADS)

    Welo, Torgeir; Granly, Bjørg M.; Elverum, Christer; Søvik, Odd P.; Sørbø, Steinar

    2011-05-01

    Over the past two decades, a quantum leap has been made in FE technology for metal forming applications, including methods, algorithms, models and hardware capabilities. A myriad of research articles reports on methodologies that provide excellent capabilities in reproducing springback obtained from physical experiments. However, it is felt that we are not yet to the point where current modeling practice provides satisfactory value to tool designers and manufacturing engineers, particularly when the results have to be available before the first piece of tool steel has been cut; the main reasons being lack of accuracy in predicting elastic springback. The main objective of the present work is to validate springback capabilities using a strategy that integrates industrial tool simulation practice with carefully controlled physical experiments conducted in an academic setting. An industry-like (rotary) draw bending machine has been built and equipped with advanced measurement capabilities. Extruded rectangular, hollow aluminum alloy AA6060 sections were heat treated to two different tempers to produce a range of material properties prior to forming into two different bending angles. The selected set-up represents a challenging benchmark due to tight-radius bending and complex contact conditions, meaning that elastic springback is resulting from interaction effects between excessive local cross-sectional distortions and global bending mechanisms. The material properties were obtained by tensile testing, curve-fitting data to a conventional isotropic Ludwik-type material model. The bending process was modeled in three different commercial FE codes following best practice, including LS-Dyna, Stampack and Abaqus (explicit). The springback analyses were done prior to bending tests as would be done in an industrial tool design process. After having completed the bending tests and carefully measured the released bend angle for the different combinations, the results were

  8. Disease severity estimates - effects of rater accuracy and assessments methods for comparing treatments

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Assessment of disease is fundamental to the discipline of plant pathology, and estimates of severity are often made visually. However, it is established that visual estimates can be inaccurate and unreliable. In this study estimates of Septoria leaf blotch on leaves of winter wheat from non-treated ...

  9. 12 CFR 630.5 - Accuracy of reports and assessment of internal control over financial reporting.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... information is true, accurate, and complete to the best of signatories' knowledge and belief. (d) Management... reporting for the System-wide report to investors. The assessment must be conducted during the reporting... CREDIT SYSTEM DISCLOSURE TO INVESTORS IN SYSTEMWIDE AND CONSOLIDATED BANK DEBT OBLIGATIONS OF THE...

  10. Do Students Know What They Know? Exploring the Accuracy of Students' Self-Assessments

    ERIC Educational Resources Information Center

    Lindsey, Beth A.; Nagel, Megan L.

    2015-01-01

    We have conducted an investigation into how well students in introductory science classes (both physics and chemistry) are able to predict which questions they will or will not be able to answer correctly on an upcoming assessment. An examination of the data at the level of students' overall scores reveals results consistent with the…

  11. Assessing posttraumatic stress in military service members: improving efficiency and accuracy.

    PubMed

    Fissette, Caitlin L; Snyder, Douglas K; Balderrama-Durbin, Christina; Balsis, Steve; Cigrang, Jeffrey; Talcott, G Wayne; Tatum, JoLyn; Baker, Monty; Cassidy, Daniel; Sonnek, Scott; Heyman, Richard E; Smith Slep, Amy M

    2014-03-01

    Posttraumatic stress disorder (PTSD) is assessed across many different populations and assessment contexts. However, measures of PTSD symptomatology often are not tailored to meet the needs and demands of these different populations and settings. In order to develop population- and context-specific measures of PTSD it is useful first to examine the item-level functioning of existing assessment methods. One such assessment measure is the 17-item PTSD Checklist-Military version (PCL-M; Weathers, Litz, Herman, Huska, & Keane, 1993). Although the PCL-M is widely used in both military and veteran health-care settings, it is limited by interpretations based on aggregate scores that ignore variability in item endorsement rates and relatedness to PTSD. Based on item response theory, this study conducted 2-parameter logistic analyses of the PCL-M in a sample of 196 service members returning from a yearlong, high-risk deployment to Iraq. Results confirmed substantial variability across items both in terms of their relatedness to PTSD and their likelihood of endorsement at any given level of PTSD. The test information curve for the full 17-item PCL-M peaked sharply at a value of θ = 0.71, reflecting greatest information at approximately the 76th percentile level of underlying PTSD symptom levels in this sample. Implications of findings are discussed as they relate to identifying more efficient, accurate subsets of items tailored to military service members as well as other specific populations and evaluation contexts.
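
    For readers unfamiliar with the 2-parameter logistic (2PL) model used above, the sketch below evaluates item response functions and the resulting test information curve, whose peak corresponds to the theta value at which the checklist is most informative. The 17 discrimination and severity parameters are randomly generated placeholders, not the PCL-M estimates from the study.

```python
import numpy as np

def irf_2pl(theta, a, b):
    """2-parameter logistic item response function P(endorse | theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def test_information(theta, a, b):
    """Test information: sum over items of a^2 * P * (1 - P)."""
    p = irf_2pl(theta[:, None], a[None, :], b[None, :])
    return (a**2 * p * (1 - p)).sum(axis=1)

# Hypothetical discrimination (a) and severity (b) parameters for a 17-item checklist.
rng = np.random.default_rng(0)
a = rng.uniform(0.8, 2.5, 17)
b = rng.normal(0.5, 0.7, 17)

theta = np.linspace(-3, 3, 121)
info = test_information(theta, a, b)
print("theta at peak test information:", round(float(theta[info.argmax()]), 2))
```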

  12. Exploring Writing Accuracy and Writing Complexity as Predictors of High-Stakes State Assessments

    ERIC Educational Resources Information Center

    Edman, Ellie Whitner

    2012-01-01

    The advent of No Child Left Behind led to increased teacher accountability for student performance and placed strict sanctions in place for failure to meet a certain level of performance each year. With instructional time at a premium, it is imperative that educators have brief academic assessments that accurately predict performance on…

  13. Assessing and improving the spatial accuracy in MEG source localization by depth-weighted minimum-norm estimates.

    PubMed

    Lin, Fa-Hsuan; Witzel, Thomas; Ahlfors, Seppo P; Stufflebeam, Steven M; Belliveau, John W; Hämäläinen, Matti S

    2006-05-15

    Cerebral currents responsible for the extra-cranially recorded magnetoencephalography (MEG) data can be estimated by applying a suitable source model. A popular choice is the distributed minimum-norm estimate (MNE) which minimizes the l2-norm of the estimated current. Under the l2-norm constraint, the current estimate is related to the measurements by a linear inverse operator. However, the MNE has a bias towards superficial sources, which can be reduced by applying depth weighting. We studied the effect of depth weighting in MNE using a shift metric. We assessed the localization performance of the depth-weighted MNE as well as depth-weighted noise-normalized MNE solutions under different cortical orientation constraints, source space densities, and signal-to-noise ratios (SNRs) in multiple subjects. We found that MNE with depth weighting parameter between 0.6 and 0.8 showed improved localization accuracy, reducing the mean displacement error from 12 mm to 7 mm. The noise-normalized MNE was insensitive to depth weighting. A similar investigation of EEG data indicated that depth weighting parameter between 2.0 and 5.0 resulted in an improved localization accuracy. The application of depth weighting to auditory and somatosensory experimental data illustrated the beneficial effect of depth weighting on the accuracy of spatiotemporal mapping of neuronal sources.
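
    A compact way to see the effect of depth weighting in the minimum-norm estimate is to build the source covariance with diagonal weights proportional to the lead-field column norms raised to the power -2p, where p is the depth-weighting parameter discussed above. The sketch below is a toy linear-algebra illustration with a random lead field, not the inverse software used in the study; the regularisation scaling is an assumption.

```python
import numpy as np

def depth_weighted_mne(L, data, p=0.8, lam=0.1):
    """Depth-weighted minimum-norm inverse: the source covariance R has diagonal
    weights ||L_i||^(-2p) so deep (weak-leadfield) sources are penalised less;
    M = R L^T (L R L^T + lam^2 * scaled I)^-1, j_hat = M y."""
    col_norms = np.linalg.norm(L, axis=0)
    R = np.diag(col_norms ** (-2.0 * p))
    gram = L @ R @ L.T
    reg = lam**2 * np.trace(gram) / L.shape[0]        # assumed regularisation scaling
    M = R @ L.T @ np.linalg.inv(gram + reg * np.eye(L.shape[0]))
    return M @ data

# Toy problem: 30 sensors, 200 candidate sources with decaying lead-field strength.
rng = np.random.default_rng(0)
n_sensors, n_sources = 30, 200
depth_factor = np.linspace(1.0, 0.1, n_sources)       # shallow -> deep
L = rng.normal(size=(n_sensors, n_sources)) * depth_factor
true_j = np.zeros(n_sources)
true_j[150] = 5.0                                      # one relatively deep source
y = L @ true_j + 0.01 * rng.normal(size=n_sensors)
j_hat = depth_weighted_mne(L, y, p=0.8)
print("estimated peak source index (ideally near 150):", int(np.abs(j_hat).argmax()))
```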

  14. Assessment of the accuracy of plasma shape reconstruction by the Cauchy condition surface method in JT-60SA

    SciTech Connect

    Miyata, Y.; Suzuki, T.; Takechi, M.; Urano, H.; Ide, S.

    2015-07-15

    For the purpose of stable plasma equilibrium control and detailed analysis, it is essential to reconstruct an accurate plasma boundary on the poloidal cross section in tokamak devices. The Cauchy condition surface (CCS) method is a numerical approach for calculating the spatial distribution of the magnetic flux outside a hypothetical surface and reconstructing the plasma boundary from the magnetic measurements located outside the plasma. The accuracy of the plasma shape reconstruction has been assessed by comparing the CCS method and an equilibrium calculation in JT-60SA with a high elongation and triangularity of plasma shape. The CCS, on which both Dirichlet and Neumann conditions are unknown, is defined as a hypothetical surface located inside the real plasma region. The accuracy of the plasma shape reconstruction is sensitive to the CCS free parameters such as the number of unknown parameters and the shape in JT-60SA. It is found that the optimum number of unknown parameters and the size of the CCS that minimizes errors in the reconstructed plasma shape are in proportion to the plasma size. Furthermore, it is shown that the accuracy of the plasma shape reconstruction is greatly improved using the optimum number of unknown parameters and shape of the CCS, and the reachable reconstruction errors in plasma shape and locations of strike points are within the target ranges in JT-60SA.

  15. Assessment of the accuracy of plasma shape reconstruction by the Cauchy condition surface method in JT-60SA

    NASA Astrophysics Data System (ADS)

    Miyata, Y.; Suzuki, T.; Takechi, M.; Urano, H.; Ide, S.

    2015-07-01

    For the purpose of stable plasma equilibrium control and detailed analysis, it is essential to reconstruct an accurate plasma boundary on the poloidal cross section in tokamak devices. The Cauchy condition surface (CCS) method is a numerical approach for calculating the spatial distribution of the magnetic flux outside a hypothetical surface and reconstructing the plasma boundary from the magnetic measurements located outside the plasma. The accuracy of the plasma shape reconstruction has been assessed by comparing the CCS method and an equilibrium calculation in JT-60SA with a high elongation and triangularity of plasma shape. The CCS, on which both Dirichlet and Neumann conditions are unknown, is defined as a hypothetical surface located inside the real plasma region. The accuracy of the plasma shape reconstruction is sensitive to the CCS free parameters such as the number of unknown parameters and the shape in JT-60SA. It is found that the optimum number of unknown parameters and the size of the CCS that minimizes errors in the reconstructed plasma shape are in proportion to the plasma size. Furthermore, it is shown that the accuracy of the plasma shape reconstruction is greatly improved using the optimum number of unknown parameters and shape of the CCS, and the reachable reconstruction errors in plasma shape and locations of strike points are within the target ranges in JT-60SA.

  16. Assessment of the accuracy of plasma shape reconstruction by the Cauchy condition surface method in JT-60SA.

    PubMed

    Miyata, Y; Suzuki, T; Takechi, M; Urano, H; Ide, S

    2015-07-01

    For the purpose of stable plasma equilibrium control and detailed analysis, it is essential to reconstruct an accurate plasma boundary on the poloidal cross section in tokamak devices. The Cauchy condition surface (CCS) method is a numerical approach for calculating the spatial distribution of the magnetic flux outside a hypothetical surface and reconstructing the plasma boundary from the magnetic measurements located outside the plasma. The accuracy of the plasma shape reconstruction has been assessed by comparing the CCS method and an equilibrium calculation in JT-60SA with a high elongation and triangularity of plasma shape. The CCS, on which both Dirichlet and Neumann conditions are unknown, is defined as a hypothetical surface located inside the real plasma region. The accuracy of the plasma shape reconstruction is sensitive to the CCS free parameters such as the number of unknown parameters and the shape in JT-60SA. It is found that the optimum number of unknown parameters and the size of the CCS that minimizes errors in the reconstructed plasma shape are in proportion to the plasma size. Furthermore, it is shown that the accuracy of the plasma shape reconstruction is greatly improved using the optimum number of unknown parameters and shape of the CCS, and the reachable reconstruction errors in plasma shape and locations of strike points are within the target ranges in JT-60SA.

  17. Estimating Orientation Using Magnetic and Inertial Sensors and Different Sensor Fusion Approaches: Accuracy Assessment in Manual and Locomotion Tasks

    PubMed Central

    Bergamini, Elena; Ligorio, Gabriele; Summa, Aurora; Vannozzi, Giuseppe; Cappozzo, Aurelio; Sabatini, Angelo Maria

    2014-01-01

    Magnetic and inertial measurement units are an emerging technology to obtain 3D orientation of body segments in human movement analysis. In this respect, sensor fusion is used to limit the drift errors resulting from the gyroscope data integration by exploiting accelerometer and magnetic aiding sensors. The present study aims at investigating the effectiveness of sensor fusion methods under different experimental conditions. Manual and locomotion tasks, differing in time duration, measurement volume, presence/absence of static phases, and out-of-plane movements, were performed by six subjects, and recorded by one unit located on the forearm or the lower trunk, respectively. Two sensor fusion methods, representative of stochastic (Extended Kalman Filter) and complementary (non-linear observer) filtering, were selected, and their accuracy was assessed in terms of attitude (pitch and roll angles) and heading (yaw angle) errors using stereophotogrammetric data as a reference. The sensor fusion approaches provided significantly more accurate results than gyroscope data integration. Accuracy improved mostly for heading and when the movement exhibited stationary phases and evenly distributed 3D rotations, occurred in a small volume, and lasted longer than approximately 20 s. These results were independent of the specific sensor fusion method used. Practice guidelines for improving the outcome accuracy are provided. PMID:25302810
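
    As a rough illustration of the complementary-filtering idea mentioned above (not the specific non-linear observer or Extended Kalman Filter evaluated in the study), the sketch below blends integrated gyroscope rates with accelerometer inclination to track pitch and roll on simulated data with a deliberate gyro bias; all signals and parameters are invented.

```python
import numpy as np

def complementary_filter(gyro, accel, dt, alpha=0.98):
    """Minimal complementary filter for pitch/roll: blends the integrated gyro
    angle (short-term) with the accelerometer inclination (long-term, drift-free).
    gyro: (N, 2) pitch/roll rates in rad/s; accel: (N, 3) specific force in m/s^2."""
    angles = np.zeros((len(gyro), 2))
    est = np.zeros(2)
    for k, (w, a) in enumerate(zip(gyro, accel)):
        acc_pitch = np.arctan2(-a[0], np.hypot(a[1], a[2]))
        acc_roll = np.arctan2(a[1], a[2])
        est = alpha * (est + w * dt) + (1 - alpha) * np.array([acc_pitch, acc_roll])
        angles[k] = est
    return angles

# Simulated 20 s of a slow 10-degree pitch oscillation sampled at 100 Hz,
# with a constant gyro bias and accelerometer noise added.
dt, n = 0.01, 2000
t = np.arange(n) * dt
true_pitch = np.radians(10) * np.sin(2 * np.pi * 0.1 * t)
gyro = np.column_stack([np.gradient(true_pitch, dt) + 0.02, np.zeros(n)])   # 0.02 rad/s bias
accel = np.column_stack([-9.81 * np.sin(true_pitch),
                         np.zeros(n),
                         9.81 * np.cos(true_pitch)])
accel += np.random.default_rng(0).normal(0, 0.2, (n, 3))
est = complementary_filter(gyro, accel, dt)
rms_deg = np.degrees(np.sqrt(((est[:, 0] - true_pitch) ** 2).mean()))
print("RMS pitch error (deg):", round(float(rms_deg), 2))
```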

  18. Assessment of Completeness and Positional Accuracy of Linear Features in Volunteered Geographic Information (vgi)

    NASA Astrophysics Data System (ADS)

    Eshghi, M.; Alesheikh, A. A.

    2015-12-01

    Recent advances in spatial data collection technologies and online services have dramatically increased the contribution of ordinary people to producing, sharing, and using geographic information. The collection of spatial data by citizens, as well as its dissemination on the internet, has led to a huge source of spatial data termed Volunteered Geographic Information (VGI) by Mike Goodchild. Although VGI has produced previously unavailable data assets and enriched existing ones, its quality can be highly variable and difficult to verify. This presents several challenges to potential end users who are concerned about the validation and quality assurance of the collected data. Almost all existing research assesses the quality of VGI data either by (a) comparing the VGI data with accurate official data, or (b), in cases where there is no access to correct data, by looking for an alternative way to determine the quality of VGI data. In this paper an attempt has been made to develop a useful method to reach this goal. In this process, the positional accuracy of linear features in the OpenStreetMap (OSM) data of Tehran, Iran, has been analysed.

  19. Accuracy Assessments of Cloud Droplet Size Retrievals from Polarized Reflectance Measurements by the Research Scanning Polarimeter

    NASA Technical Reports Server (NTRS)

    Alexandrov, Mikhail Dmitrievic; Cairns, Brian; Emde, Claudia; Ackerman, Andrew S.; vanDiedenhove, Bastiaan

    2012-01-01

    We present an algorithm for the retrieval of cloud droplet size distribution parameters (effective radius and variance) from Research Scanning Polarimeter (RSP) measurements. The RSP is an airborne prototype for the Aerosol Polarimetry Sensor (APS), which was on board the NASA Glory satellite. This instrument measures both polarized and total reflectance in 9 spectral channels with central wavelengths ranging from 410 to 2260 nm. The cloud droplet size retrievals use the polarized reflectance in the scattering angle range between 135° and 165°, where it exhibits the sharply defined structure known as the rain- or cloud-bow. The shape of the rainbow is determined mainly by the single scattering properties of cloud particles. This significantly simplifies both forward modeling and inversions, while also substantially reducing uncertainties caused by the aerosol loading and the possible presence of undetected clouds nearby. In this study we present an accuracy evaluation of our algorithm based on the results of sensitivity tests performed using realistic simulated cloud radiation fields.

  20. Accuracy assessment of building point clouds automatically generated from iphone images

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R.

    2014-06-01

    Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as an input. We register such automatically generated point cloud on a TLS point cloud of the same object to discuss accuracy, advantages and limitations of the iPhone generated point clouds. For the chosen example showcase, we have classified 1.23% of the iPhone point cloud points as outliers, and calculated the mean of the point to point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. Mean (μ) and standard deviation (σ) of roughness histograms are calculated as (μ1 = 0.44 m., σ1 = 0.071 m.) and (μ2 = 0.025 m., σ2 = 0.037 m.) for the iPhone and TLS point clouds respectively. Our experimental results indicate possible usage of the proposed automatic 3D model generation framework for 3D urban map updating, fusion and detail enhancing, quick and real-time change detection purposes. However, further insights should be obtained first on the circumstances that are needed to guarantee a successful point cloud generation from smartphone images.
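
    The point-to-point comparison described above can be sketched with a nearest-neighbour search from the smartphone-derived cloud to the TLS cloud, reporting the mean distance of inliers and the fraction of points flagged as outliers. The two synthetic clouds below only mimic the quoted noise level and outlier percentage; they are not the showcase data.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_stats(cloud, reference, outlier_dist=1.0):
    """Nearest-neighbour (point-to-point) distances from `cloud` to `reference`,
    plus the fraction of points beyond `outlier_dist` (metres) flagged as outliers."""
    tree = cKDTree(reference)
    dist, _ = tree.query(cloud, k=1)
    inliers = dist <= outlier_dist
    return {"mean_distance": round(float(dist[inliers].mean()), 3),
            "outlier_fraction": round(float(1.0 - inliers.mean()), 4)}

# Toy stand-ins: a dense "TLS" wall patch and a noisier, sparser "smartphone" cloud.
rng = np.random.default_rng(0)
tls = np.column_stack([rng.uniform(0, 10, 20000),
                       rng.uniform(0, 3, 20000),
                       np.zeros(20000)])
phone = np.column_stack([rng.uniform(0, 10, 2000),
                         rng.uniform(0, 3, 2000),
                         rng.normal(0, 0.11, 2000)])   # ~0.11 m noise, as quoted above
phone[:25, 2] += 5.0                                    # a few gross outliers
print(cloud_to_cloud_stats(phone, tls))
```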

  1. Accuracy assessment of blind and semi-blind restoration methods for hyperspectral images

    NASA Astrophysics Data System (ADS)

    Zhang, Mo; Vozel, Benoit; Chehdi, Kacem; Uss, Mykhail; Abramov, Sergey; Lukin, Vladimir

    2016-10-01

    Hyperspectral images acquired by remote sensing systems are generally degraded by noise and can sometimes be more severely degraded by blur. When no knowledge is available about the degradations present or the original image, blind restoration methods must be considered. Otherwise, when partial information is available, semi-blind restoration methods can be considered. Numerous semi-blind and quite advanced methods are available in the literature. To get better insights and feedback on the applicability and potential efficiency of a representative set of four recently proposed semi-blind methods, we have performed a comparative study of these methods in objective terms of blur filter and original image error estimation accuracy. In particular, we have paid special attention to the accurate recovery of the original spectral signatures in the spectral dimension. We have analyzed peculiarities and factors restricting the applicability of these methods. Our tests are performed on a synthetic hyperspectral image, degraded with various synthetic blurs (out-of-focus, Gaussian, motion) and with signal-independent noise at typical levels such as those encountered in real hyperspectral images. This synthetic image has been built from various samples from classified areas of a real-life hyperspectral image, in order to benefit from realistic reference spectral signatures to recover after synthetic degradation. Conclusions, practical recommendations and perspectives are drawn from the experimentally obtained results.

  2. An extended dynamometer setup to improve the accuracy of knee joint moment assessment.

    PubMed

    Van Campen, Anke; De Groote, Friedl; Jonkers, Ilse; De Schutter, Joris

    2013-05-01

    This paper analyzes an extended dynamometry setup that aims at obtaining accurate knee joint moments. The main problem of the standard setup is the misalignment of the joint and the dynamometer axes of rotation due to nonrigid fixation, and the determination of the joint axis of rotation by palpation. The proposed approach 1) combines 6-D registration of the contact forces with 3-D motion capturing (which is a contribution to the design of the setup); 2) includes a functional axis of rotation in the model to describe the knee joint (which is a contribution to the modeling); and 3) calculates joint moments by a model-based 3-D inverse dynamic analysis. Through a sensitivity analysis, the influence of the accuracy of all model parameters is evaluated. Dynamics resulting from the extended setup are quantified, and are compared to those provided by the dynamometer. Maximal differences between the 3-D joint moment resulting from the inverse dynamics and measured by the dynamometer were 16.4 N ·m (16.9%) isokinetically and 18.3 N ·m (21.6%) isometrically. The calculated moment is most sensitive to the orientation and location of the axis of rotation. In conclusion, more accurate experimental joint moments are obtained using a model-based 3-D inverse dynamic approach that includes a good estimate of the pose of the joint axis.

  3. Assessment of the accuracy of density functional theory for first principles simulations of water

    NASA Astrophysics Data System (ADS)

    Grossman, J. C.; Schwegler, E.; Draeger, E.; Gygi, F.; Galli, G.

    2004-03-01

    We present a series of Car-Parrinello (CP) molecular dynamics simulations in order to better understand the accuracy of density functional theory for the calculation of the properties of water [1]. Through 10 separate ab initio simulations, each for 20 ps of "production" time, a number of approximations are tested by varying the density functional employed, the fictitious electron mass, μ, in the CP Lagrangian, the system size, and the ionic mass, M (we considered both H2O and D2O). We present the impact of these approximations on properties such as the radial distribution function [g(r)], structure factor [S(k)], diffusion coefficient and dipole moment. Our results show that structural properties may artificially depend on μ, and that in the case of an accurate description of the electronic ground state, and in the absence of proton quantum effects, we obtain an oxygen-oxygen correlation function that is over-structured compared to experiment, and a diffusion coefficient which is approximately 10 times smaller. [1] J.C. Grossman et al., J. Chem. Phys. (in press, 2004).

  4. Accuracy Assessment of Lidar-Derived Digital Terrain Model (dtm) with Different Slope and Canopy Cover in Tropical Forest Region

    NASA Astrophysics Data System (ADS)

    Salleh, M. R. M.; Ismail, Z.; Rahman, M. Z. A.

    2015-10-01

    Airborne Light Detection and Ranging (LiDAR) technology has been widely used in recent years, especially in generating high-accuracy Digital Terrain Models (DTMs). High density and good quality of airborne LiDAR data promise a high quality of DTM. This study focuses on analysing the error associated with the density of vegetation cover (canopy cover) and terrain slope in a LiDAR-derived DTM in a tropical forest environment in Bentong, State of Pahang, Malaysia. The airborne LiDAR data, captured by a Riegl system mounted on an aircraft, can be considered low density. The ground filtering procedure used the adaptive triangulated irregular network (ATIN) algorithm to produce ground points. Next, ground control points (GCPs) were used in generating the reference DTM, this DTM was used for slope classification, and the point clouds belonging to non-ground were then used in determining the relative percentage of canopy cover. The results show that terrain slope has a high correlation with the RMSE of the LiDAR-derived DTM for both study areas (0.993 and 0.870). This is similar to canopy cover, where high correlation values (0.989 and 0.924) were obtained. This indicates that the accuracy of the airborne LiDAR-derived DTM is significantly affected by the terrain slope and canopy cover of the study area.

  5. Increasing the Rigor of Procedural Fidelity Assessment: An Empirical Comparison of Direct Observation and Permanent Product Review Methods

    ERIC Educational Resources Information Center

    Sanetti, Lisa M. Hagermoser; Collier-Meek, Melissa A.

    2014-01-01

    Although it is widely accepted that procedural fidelity data are important for making valid decisions about intervention effectiveness, there is little empirical guidance for researchers and practitioners regarding how to assess procedural fidelity. A first step in moving procedural fidelity assessment research forward is to develop a…

  6. QuickBird and OrbView-3 Geopositional Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Helder, Dennis; Ross, Kenton

    2006-01-01

    Objective: Compare vendor-provided image coordinates with known references visible in the imagery. Approach: Use multiple, well-characterized sites with >40 ground control points (GCPs); sites that are a) Well distributed; b) Accurately surveyed; and c) Easily found in imagery. Perform independent assessments with independent teams. Each team has slightly different measurement techniques and data processing methods. NASA Stennis Space Center. South Dakota State University.

  7. The Accuracy of Intelligence Assessment: Bias, Perception, and Judgement in Analysis and Decision

    DTIC Science & Technology

    1993-03-10

    practical intelligence ethic--not a code of conduct but an ethical way of thinking that forces analysts and decision-makers to ask the right...incorporated it into their varying viewpoints and turned it often into competitive conclusions. Some embraced, some acquiesced, some ignored, some rejected...dilemma. Intelligence officers and decision-makers compete for viewpoints. Intelligence assessments are always potentially competitive decisions. This

  8. Accuracy assessment of planimetric large-scale map data for decision-making

    NASA Astrophysics Data System (ADS)

    Doskocz, Adam

    2016-06-01

    This paper presents decision-making risk estimation based on planimetric large-scale map data, which are data sets or databases useful for creating planimetric maps at scales of 1:5,000 or larger. The studies were conducted on four data sets of large-scale map data. Errors of map data were used for a risk assessment of decision-making about the localization of objects, e.g. for land-use planning in the realization of investments. An analysis was performed for a large statistical sample set of shift vectors of control points, which were identified with the position errors of these points (errors of map data). In this paper, empirical cumulative distribution function models for decision-making risk assessment were established. The established models of the empirical cumulative distribution functions of shift vectors of control points involve polynomial equations. The degree of fit of the polynomial to the empirical data was assessed by the convergence coefficient and by the indicator of the mean relative compatibility of the model. The application of an empirical cumulative distribution function allows an estimation of the probability of the occurrence of position errors of points in a database. The estimated decision-making risk assessment is represented by the probability of the errors of points stored in the database.
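
    The use of an empirical cumulative distribution function for risk estimation, as described above, can be illustrated in a few lines: given a sample of control-point shift-vector lengths, the ECDF yields the probability that a positional error stays within a chosen tolerance, and its complement is the decision-making risk. The shift values and tolerance below are invented for illustration and do not come from the four data sets studied.

```python
import numpy as np

def empirical_cdf(errors):
    """Empirical cumulative distribution function of control-point shift-vector
    lengths; returns a callable giving P(error <= x)."""
    e = np.sort(np.asarray(errors, float))
    def cdf(x):
        return np.searchsorted(e, x, side="right") / e.size
    return cdf

# Hypothetical shift-vector lengths (m) between map positions and a control survey.
rng = np.random.default_rng(0)
shifts = np.abs(rng.normal(0, 0.25, 500))
cdf = empirical_cdf(shifts)

tolerance = 0.5          # allowable positional error for a siting decision (m)
print("P(error <= 0.5 m) =", round(float(cdf(tolerance)), 3))
print("decision-making risk P(error > 0.5 m) =", round(float(1 - cdf(tolerance)), 3))
```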

  9. Assessment of the accuracy of global geodetic satellite laser ranging observations 1993-2013

    NASA Astrophysics Data System (ADS)

    Appleby, Graham; Rodriguez, Jose

    2014-05-01

    We continue efforts to estimate the intrinsic accuracy of range measurements made by the major satellite laser ranging stations of the ILRS Network using normal point observations of the primary geodetic satellites LAGEOS and LAGEOS-II. In a novel, but risky, approach we carry out weekly, loosely constrained, reference frame solutions for satellite initial state vectors, station coordinates and daily EOPs (X-pole, Y-pole and LoD), as well as estimating range bias for all the stations. We apply known range errors a-priori from the table developed and maintained through the efforts of the ILRS Analysis Working Group and apply station- and time-specific satellite centre of mass corrections (Appleby and Otsubo, 2014), both corrections that are currently implemented in the standard ILRS reference frame products. Our approach, to solve simultaneously for station coordinates and possible range bias for all the stations, has the strength that any bias results are independent of the coordinates taken for example from ITRF2008; thus the approach has the potential to discover bias that may have become absorbed primarily in station height had the coordinates been determined on the assumption of zero bias. A serious complication of the approach is that correlations will inevitably exist between station height and range bias. However, for the major stations of the Network, and using LAGEOS and LAGEOS-II observations simultaneously in our weekly solutions, we are developing techniques and testing their sensitivity in performing a partial separation between these parameters at the expense of an increase in the variance of the stations' height time series. In this paper we discuss the results in terms of potential impact on coordinate solutions, including the reference frame scale, and in the context of preparations for ITRF2013.

  10. Assessing the accuracy of approximate treatments of ion hydration based on primitive quasichemical theory

    NASA Astrophysics Data System (ADS)

    Roux, Benoît; Yu, Haibo

    2010-06-01

    Quasichemical theory (QCT) provides a framework that can be used to partition the influence of the solvent surrounding an ion into near and distant contributions. Within QCT, the solvation properties of the ion are expressed as a sum of configurational integrals comprising only the ion and a small number of solvent molecules. QCT adopts a particularly simple form if it is assumed that the clusters undergo only small thermal fluctuations around a well-defined energy minimum and are affected exclusively in a mean-field sense by the surrounding bulk solvent. The fluctuations can then be integrated out via a simple vibrational analysis, leading to a closed-form expression for the solvation free energy of the ion. This constitutes the primitive form of quasichemical theory (pQCT), which is an approximate mathematical formulation aimed at reproducing the results from the full many-body configurational averages of statistical mechanics. While the results of previous pQCT applications are reasonable, the accuracy of the approach has not been fully characterized and its range of validity remains unclear. Here, a direct test of pQCT for a set of ion models is carried out by comparing with the results of free energy simulations with explicit solvent. The influence of the distant surrounding bulk on the cluster comprising the ion and the nearest solvent molecules is treated both with a continuum dielectric approximation and with free energy perturbation molecular dynamics simulations with explicit solvent. The analysis shows that pQCT can provide an accurate framework in the case of a small cation such as Li+. However, the approximation encounters increasing difficulties when applied to larger cations such as Na+, and particularly for K+. This suggests that results from pQCT should be interpreted with caution when comparing ions of different sizes.
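
    For orientation, the primitive QCT free energy is often quoted in the following schematic form (in LaTeX notation, taken from the general QCT literature rather than from this abstract, so the notation is an assumption):

      \mu^{\mathrm{ex}}_{X} \;\approx\; -k_{B}T \,\ln\!\left[K^{(0)}_{n}\,\rho_{W}^{\,n}\right] \;+\; \mu^{\mathrm{ex}}_{XW_{n}} \;-\; n\,\mu^{\mathrm{ex}}_{W},

    where K_n^{(0)} is the gas-phase equilibrium constant for assembling the cluster XW_n from the ion X and n solvent molecules (evaluated from a harmonic vibrational analysis about the energy minimum), \rho_W is the bulk solvent density, and the last two terms are the mean-field solvation free energies of the cluster and of n bulk solvent molecules, treated for example with a dielectric continuum.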

  11. Assessment of the accuracy of ABC/2 variations in traumatic epidural hematoma volume estimation: a retrospective study

    PubMed Central

    Hu, Tingting; Zhang, Zhen

    2016-01-01

    Background. The traumatic epidural hematoma (tEDH) volume is often used to assist in tEDH treatment planning and outcome prediction. ABC/2 is a well-accepted volume estimation method that can be used for tEDH volume estimation. Previous studies have proposed different variations of ABC/2; however, it is unclear which variation will provide a higher accuracy. Given the promising clinical contribution of accurate tEDH volume estimations, we sought to assess the accuracy of several ABC/2 variations in tEDH volume estimation. Methods. The study group comprised 53 patients with tEDH who had undergone non-contrast head computed tomography scans. For each patient, the tEDH volume was automatically estimated by eight ABC/2 variations (four traditional and four newly derived) with an in-house program, and results were compared to those from manual planimetry. Linear regression, the closest value, percentage deviation, and Bland-Altman plot were adopted to comprehensively assess accuracy. Results. Among all ABC/2 variations assessed, the traditional variations y = 0.5 × A1B1C1 (or A2B2C1) and the newly derived variations y = 0.65 × A1B1C1 (or A2B2C1) achieved higher accuracy than the other variations. No significant differences were observed between the estimated volume values generated by these variations and those of planimetry (p > 0.05). Comparatively, the former performed better than the latter in general, with smaller mean percentage deviations (7.28 ± 5.90% and 6.42 ± 5.74% versus 19.12 ± 6.33% and 21.28 ± 6.80%, respectively) and more values closest to planimetry (18/53 and 18/53 versus 2/53 and 0/53, respectively). In addition, deviations of most cases in the former fell within the range of <10% (71.70% and 84.91%, respectively), whereas deviations of most cases in the latter were in the range of 10–20% and >20% (90.57% and 96.23%, respectively). Discussion. In the current study, we adopted an automatic approach to assess the accuracy of several ABC/2 variations
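
    As a concrete illustration of the ABC/2 family of estimators compared in this record, the following Python sketch computes the traditional (0.5) and newly derived (0.65) coefficients and their percentage deviation from a planimetric reference volume. The measurements and the reference volume are made-up example values, not data from the study.

      # Hypothetical CT measurements for one tEDH (centimetres):
      # A = largest hematoma diameter, B = diameter perpendicular to A on the same
      # slice, C = slice thickness multiplied by the number of slices with hematoma.
      A, B, C = 4.2, 1.8, 3.0
      planimetry_volume = 12.5   # hypothetical reference volume in mL

      estimates = {
          "traditional 0.5 x A x B x C": 0.5 * A * B * C,
          "derived 0.65 x A x B x C": 0.65 * A * B * C,
      }

      for name, volume in estimates.items():
          deviation = 100.0 * (volume - planimetry_volume) / planimetry_volume
          print(f"{name}: {volume:.1f} mL ({deviation:+.1f}% vs planimetry)")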

  12. Accuracy and feasibility of video analysis for assessing hamstring flexibility and validity of the sit-and-reach test.

    PubMed

    Mier, Constance M

    2011-12-01

    The accuracy of video analysis of the passive straight-leg raise test (PSLR) and the validity of the sit-and-reach test (SR) were tested in 60 men and women. Computer software measured static hip-joint flexion accurately. High within-session reliability of the PSLR was demonstrated (R > .97). Test-retest (separate days) reliability for the SR was high in men (R = .97) and women (R = .98) and moderate for the PSLR in men (R = .79) and women (R = .89). SR validity (PSLR as criterion) was higher in women (Day 1, r = .69; Day 2, r = .81) than men (Day 1, r = .64; Day 2, r = .66). In conclusion, video analysis is accurate and feasible for assessing static joint angles, the PSLR and SR tests are very reliable methods for assessing flexibility, and the SR validity for hamstring flexibility was found to be moderate in women and low in men.

  13. The diagnostic accuracy of pharmacological stress echocardiography for the assessment of coronary artery disease: a meta-analysis

    PubMed Central

    Picano, Eugenio; Molinaro, Sabrina; Pasanisi, Emilio

    2008-01-01

    Background Recent American Heart Association/American College of Cardiology guidelines state that "dobutamine stress echo has substantially higher sensitivity than vasodilator stress echo for detection of coronary artery stenosis" while the European Society of Cardiology guidelines and the European Association of Echocardiography recommendations conclude that "the two tests have very similar applications". Who is right? Aim To evaluate the diagnostic accuracy of dobutamine versus dipyridamole stress echocardiography through an evidence-based approach. Methods From a PubMed search, we identified all papers with coronary angiographic verification and head-to-head comparison of dobutamine stress echo (40 mcg/kg/min ± atropine) versus dipyridamole stress echo performed with state-of-the-art protocols (either 0.84 mg/kg in 10' plus atropine, or 0.84 mg/kg in 6' without atropine). A total of 5 papers were found. A pooled weighted meta-analysis was performed. Results The 5 analyzed papers recruited 435 patients, 299 with and 136 without angiographically assessed coronary artery disease (quantitatively assessed stenosis > 50%). Dipyridamole and dobutamine showed similar accuracy (87%, 95% confidence intervals, CI, 83–90, vs. 84%, CI, 80–88, p = 0.48), sensitivity (85%, CI 80–89, vs. 86%, CI 78–91, p = 0.81) and specificity (89%, CI 82–94 vs. 86%, CI 75–89, p = 0.15). Conclusion When state-of-the-art protocols are considered, dipyridamole and dobutamine stress echo have similar accuracy, specificity and – most importantly – sensitivity for detection of CAD. European recommendations concluding that "dobutamine and vasodilators (at appropriately high doses) are equally potent ischemic stressors for inducing wall motion abnormalities in presence of a critical coronary artery stenosis" are evidence-based. PMID:18565214
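
    The pooled figures quoted above can, in principle, be reproduced by weighting each study's proportion by its sample size. The sketch below is a generic Python illustration with invented per-study counts; it does not use the five studies actually analyzed in this meta-analysis.

      # Each tuple holds (patients correctly classified, patients tested) for one
      # hypothetical study of a stress echo protocol.
      studies = [(80, 90), (61, 75), (95, 110), (70, 80), (72, 80)]

      def pooled_proportion(counts):
          """Sample-size-weighted pooled proportion with a normal-approximation 95% CI."""
          events = sum(k for k, n in counts)
          total = sum(n for k, n in counts)
          p = events / total
          se = (p * (1 - p) / total) ** 0.5
          return p, (p - 1.96 * se, p + 1.96 * se)

      p, (lower, upper) = pooled_proportion(studies)
      print(f"Pooled accuracy: {100 * p:.0f}% (95% CI {100 * lower:.0f}-{100 * upper:.0f}%)")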

  14. Accuracy of forced oscillation technique to assess lung function in geriatric COPD population

    PubMed Central

    Tse, Hoi Nam; Tseng, Cee Zhung Steven; Wong, King Ying; Yee, Kwok Sang; Ng, Lai Yun

    2016-01-01

    Introduction Performing lung function tests in geriatric patients has never been an easy task. With well-established evidence indicating impaired small airway function and air trapping in geriatric patients with COPD, utilizing the forced oscillation technique (FOT) as a supplementary tool may aid in the assessment of lung function in this population. Aims To study the use of FOT in the assessment of airflow limitation and air trapping in geriatric COPD patients. Study design A cross-sectional study in a public hospital in Hong Kong. ClinicalTrials.gov ID: NCT01553812. Methods Geriatric patients who had spirometry-diagnosed COPD were recruited, with both FOT and plethysmography performed. "Resistance" and "reactance" FOT parameters were compared to plethysmography for the assessment of air trapping and airflow limitation. Results In total, 158 COPD subjects with a mean age of 71.9±0.7 years and a percentage of forced expiratory volume in 1 second of 53.4±1.7% were recruited. FOT values had a good correlation (r=0.4–0.7) with spirometric data. In general, X values (reactance) correlated more strongly with spirometric data than R values (resistance) for airflow limitation (R: r=0.07–0.49 vs X: r=0.61–0.67), small airway function (r=0.05–0.48 vs 0.56–0.65), and lung volume (r=0.12–0.29 vs 0.43–0.49). In addition, resonance frequency (Fres) and frequency dependence (FDep) could well identify the severe type (percentage of forced expiratory volume in 1 second <50%) of COPD with high sensitivity (0.76, 0.71) and specificity (0.72, 0.64) (area under the curve: 0.8 and 0.77, respectively). Moreover, X values could stratify different severities of air trapping, while R values could not. Conclusion FOT may act as a simple and accurate tool in the assessment of severity of airflow limitation, small and central airway function, and air trapping in geriatric patients with COPD who have difficulties performing conventional lung function tests. Moreover, reactance
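
    To make the reported sensitivity, specificity and area-under-the-curve figures concrete, the Python sketch below evaluates a single FOT parameter against a spirometric definition of severe COPD. The resonance-frequency values, the disease labels and the 22 Hz cut-off are invented for illustration and are not taken from the study.

      import numpy as np

      # Hypothetical resonance frequencies (Hz) and severe-COPD labels (FEV1% < 50).
      fres = np.array([24.0, 18.5, 30.2, 15.0, 19.8, 23.3, 33.1, 17.2])
      severe = np.array([1, 0, 1, 0, 1, 0, 1, 0], dtype=bool)

      predicted = fres >= 22.0          # hypothetical cut-off for "severe"

      tp = np.sum(predicted & severe)
      tn = np.sum(~predicted & ~severe)
      fp = np.sum(predicted & ~severe)
      fn = np.sum(~predicted & severe)

      print(f"Sensitivity {tp / (tp + fn):.2f}, specificity {tn / (tn + fp):.2f}")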

  15. Effect of training, education, professional experience, and need for cognition on accuracy of exposure assessment decision-making.

    PubMed

    Vadali, Monika; Ramachandran, Gurumurthy; Banerjee, Sudipto

    2012-04-01

    Results are presented from a study that investigated the effect of characteristics of occupational hygienists relating to educational and professional experience and task-specific experience on the accuracy of occupational exposure judgments. A total of 49 occupational hygienists from six companies participated in the study and 22 tasks were evaluated. Participating companies provided monitoring data on specific tasks. Information on nine educational and professional experience determinants (e.g. educational background, years of occupational hygiene and exposure assessment experience, professional certifications, statistical training and experience, and the 'need for cognition (NFC)', which is a measure of an individual's motivation for thinking) and four task-specific determinants was also collected from each occupational hygienist. Hygienists had a wide range of educational and professional backgrounds for tasks across a range of industries with different workplace and task characteristics. The American Industrial Hygiene Association exposure assessment strategy was used to make exposure judgments on the probability of the 95th percentile of the underlying exposure distribution being located in one of four exposure categories relative to the occupational exposure limit. After reviewing all available job/task/chemical information, hygienists were asked to provide their judgment in probabilistic terms. Both qualitative (judgments without monitoring data) and quantitative judgments (judgments with monitoring data) were recorded. Ninety-three qualitative judgments and 2142 quantitative judgments were obtained. Data interpretation training, with simple rules of thumb for estimating the 95th percentiles of lognormal distributions, was provided to all hygienists. A data interpretation test (DIT) was also administered and judgments were elicited before and after training. General linear models and cumulative logit models were used to analyze the relationship between
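
    The "rule of thumb" for lognormal exposure distributions mentioned in the data interpretation training is usually the relation X95 = GM x GSD^1.645, i.e. the 95th percentile follows from the geometric mean and the geometric standard deviation. The Python sketch below applies it to invented monitoring results and an invented occupational exposure limit; it illustrates the idea rather than the exact procedure used in the study.

      import numpy as np

      samples = np.array([0.12, 0.25, 0.08, 0.31, 0.18])   # hypothetical results, mg/m^3
      oel = 0.5                                            # hypothetical exposure limit, mg/m^3

      log_x = np.log(samples)
      gm = np.exp(log_x.mean())              # geometric mean
      gsd = np.exp(log_x.std(ddof=1))        # geometric standard deviation
      x95 = gm * gsd ** 1.645                # estimated 95th percentile

      print(f"GM = {gm:.3f}, GSD = {gsd:.2f}, X95 = {x95:.3f} mg/m^3, X95/OEL = {x95 / oel:.2f}")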

  16. Field assessments on the accuracy of spherical gauges in rainfall measurements

    NASA Astrophysics Data System (ADS)

    Chang, Mingteh; Harrison, Lee

    2005-02-01

    claims that spherical gauges are effective in reducing wind effects on rainfall measurements. The spherical gauges could greatly improve the accuracy of hydrologic simulations and the efficiency on the designs and management of water resources. They are suitable for large-scale applications.

  17. Validating the Accuracy of Reaction Time Assessment on Computer-Based Tablet Devices.

    PubMed

    Schatz, Philip; Ybarra, Vincent; Leitner, Donald

    2015-08-01

    Computer-based assessment has evolved to tablet-based devices. Despite the availability of tablets and "apps," there is limited research validating their use. We documented timing delays between stimulus presentation and (simulated) touch response on iOS devices (3rd- and 4th-generation Apple iPads) and Android devices (Kindle Fire, Google Nexus, Samsung Galaxy) at response intervals of 100, 250, 500, and 1,000 milliseconds (ms). Results showed significantly greater timing error on Google Nexus and Samsung tablets (81-97 ms) than on Kindle Fire and Apple iPads (27-33 ms). Within Apple devices, iOS 7 obtained significantly lower timing error than iOS 6. On simple reaction time (RT) trials (250 ms), these delays represent 12% to 40% error (30-100 ms), depending on the device; the relative error decreases considerably for choice RT trials (3-5% error at 1,000 ms). These results have implications for serial clinical assessment of RT on tablets, in particular the importance of using the same device across assessments, and underscore the need for calibration of software and hardware.

  18. [International medical graduates in Dutch health care: the new assessment procedure].

    PubMed

    ten Cate, T J; Kooij, L R

    2008-04-12

    On December 1, 2005, a new procedure was introduced in the Netherlands to assess international medical graduates (IMGs) with a diploma acquired outside the European Economic Area (EEA). This procedure includes (a) general tests on the active and passive use of Dutch medical language, English reading proficiency, basic IT skills and knowledge of the Dutch health care system, and (b) a specific set of tests of medical competence, including knowledge of basic sciences, clinical knowledge and clinical skills. IMGs who wish to have their diploma acknowledged and to be registered as physicians are required to complete this assessment. With the introduction of this procedure, the Netherlands has joined a minority of countries inside and outside Europe in setting high standards for intake procedures. It is advocated that all European countries devise such procedures, as a European Directive (2005/36/EC) on the recognition of professional qualifications prohibits the assessment of medical graduates with a diploma that is recognised in another EEA country.

  19. Developing best practices teaching procedures for skinfold assessment: observational examination using the Think Aloud method.

    PubMed

    Holmstrup, Michael E; Verba, Steven D; Lynn, Jeffrey S

    2015-12-01

    Skinfold assessment is valid and economical; however, it has a steep learning curve, and many programs only include one exposure to the technique. Increasing the number of exposures to skinfold assessment within an undergraduate curriculum would likely increase skill proficiency. The present study combined observational and Think Aloud methodologies to quantify procedural and cognitive characteristics of skinfold assessment. It was hypothesized that 1) increased curricular exposure to skinfold assessment would improve proficiency and 2) the combination of an observational and Think Aloud analysis would provide quantifiable areas of emphasis for instructing skinfold assessment. Seventy-five undergraduates with varied curricular exposure performed a seven-site skinfold assessment on a test subject while expressing their thoughts aloud. A trained practitioner recorded procedural observations, with transcripts generated from audio recordings to capture cognitive information. Skinfold measurements were compared with a criterion value, and bias scores were generated. Participants whose total bias fell within ±3.5% of the criterion value were proficient, with the remainder nonproficient. An independent-samples t-test was used to compare procedural and cognitive observations across experience and proficiency groups. Additional curricular exposure improved performance of skinfold assessment in areas such as the measurement of specific sites (e.g., chest, abdomen, and thigh) and procedural (e.g., landmark identification) and cognitive skills (e.g., complete site explanation). Furthermore, the Think Aloud method is a valuable tool for determining curricular strengths and weaknesses with skinfold assessment and as a pedagogical tool for individual instruction and feedback in the classroom.

  20. Environmental Impact Research Program: Visual Resources Assessment Procedure for US Army Corps of Engineers

    DTIC Science & Technology

    1988-03-01

    of line, form, color, texture, and scale. This inventory is completed along with FORM 6--VIEWPOINT ASSESSMENT during the Detailed VIA Procedure...adding an inventory and assessment of design elements, i.e., line, form, color, and texture. This additional information is used to determine the...backlighting or full lighting. Line: The path, real or imagined, that the eye follows when perceiving abrupt differences in form, color, or textures.

  1. Assessing the accuracy of tympanometric evaluation of external auditory canal volume: a scientific study using an ear canal model.

    PubMed

    Al-Hussaini, A; Owens, D; Tomkinson, A

    2011-12-01

    Tympanometric evaluation is routinely used as part of the complete otological examination. During tympanometric examination, evaluation of middle ear pressure and ear canal volume is undertaken. Little is reported regarding the accuracy and precision with which tympanometry evaluates external ear canal volume. This study examines the capability of the tympanometer to accurately evaluate external auditory canal volume in both simple and partially obstructed ear canal models, and assesses its suitability for studies examining the effectiveness of cerumolytics. An ear canal model was designed using simple laboratory equipment, including a 5 ml calibrated clinical syringe (Becton Dickinson, Spain). The ear canal model was attached to the sensing probe of a Kamplex tympanometer (Interacoustics, Denmark). Three basic trials were undertaken: evaluation of the tympanometer in simple canal volume measurement, evaluation of the tympanometer in assessing canal volume with partial canal occlusion at different positions within the model, and evaluation of the tympanometer in assessing canal volume with varying degrees of canal occlusion. In total, 1,290 individual test scenarios were completed over the three arms of the study. At volumes of 1.4 cm(3) or below, a perfect relationship was noted between the actual and tympanometric volumes in the simple model (Spearman's ρ = 1), with progressively weaker agreement as canal volume increased. Bland-Altman plotting confirmed the accuracy of this agreement. In the wax substitute models, tympanometry showed a close relationship (Spearman's ρ > 0.99) with the actual volume present, with worsening error above a volume of 1.4 cm(3). Bland-Altman plotting and precision calculations provided evidence of accuracy. Size and position of the wax substitute had no statistical effect on results [Wilcoxon rank-sum test (WRST) p > 0.99], nor did degree of partial obstruction (WRST p > 0.99). The Kamplex tympanometer

  2. Fire Severity Model Accuracy Using Short-term, Rapid Assessment versus Long-term, Anniversary Date Assessment

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Fires are common in rangelands and after a century of fire suppression, the potential exists for fires to burn with high intensity and severity. In addition, the ability of fires to affect long-term changes in rangelands is considerable and for this reason, assessing fire severity after a fire is cr...

  3. Accuracy Assessment for PPP by Comparing Various Online PPP Service Solutions with Bernese 5.2 Network Solution

    NASA Astrophysics Data System (ADS)

    Ozgur Uygur, Sureyya; Aydin, Cuneyt; Demir, Deniz Oz; Cetin, Seda; Dogan, Ugur

    2016-04-01

    The GNSS precise point positioning (PPP) technique is frequently used for geodetic applications such as monitoring of reference stations and estimation of tropospheric parameters. The technique uses undifferenced GNSS observations along with IGS products to reach a high level of positioning accuracy. The accuracy level depends on the GNSS data quality, the length of the observation session and the quality of the external data products. With the PPP technique it is possible to reach the desired positioning accuracy in the reference frame of the satellite products using data from a single receiver. PPP is provided to users by scientific GNSS processing software packages (such as GIPSY of NASA-JPL and the Bernese Processing Software of AIUB) as well as by several online PPP services: Auto-GIPSY provided by JPL, California Institute of Technology; CSRS-PPP provided by Natural Resources Canada; GAPS provided by the University of New Brunswick; and Magic-PPP provided by GMV. In this study, we assess the accuracy of PPP by comparing the solutions from the online PPP services with Bernese 5.2 network solutions. Seven days (DoY 256-262 in 2015) of GNSS observations with 24-hour session durations, collected on a set of 14 stations of the CORS-TR network in Turkey, were processed in static mode using the above-mentioned PPP services. The averages of the daily coordinates from a Bernese 5.2 static network solution tied to 12 IGS stations were taken as the true coordinates. Our results indicate that the distributions of the north, east and up daily position differences are characterized by means and RMS of 1.9±0.5, 2.1±0.7, 4.7±2.1 mm for CSRS, 1.6±0.6, 1.4±0.8, 5.5±3.9 mm for Auto-GIPSY, 3.0±0.8, 3.0±1.2, 6.0±3.2 mm for Magic GNSS, and 2.1±1.3, 2.8±1.7, 5.0±2.3 mm for GAPS, with respect to the Bernese 5.2 network solution. Keywords: PPP, Online GNSS Service, Bernese, Accuracy
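
    A comparison of the kind reported here, daily PPP coordinates against a network solution summarized as mean and scatter per component, reduces to simple difference statistics. The Python sketch below uses invented north/east/up differences for a single station rather than the CORS-TR data.

      import numpy as np

      # Hypothetical daily differences (mm), PPP minus network solution, one row per day.
      diffs = np.array([
          [1.5, 2.2,  4.1],
          [2.4, 1.8, -3.9],
          [1.1, 2.9,  6.2],
          [2.6, 1.4,  5.5],
          [1.9, 2.3, -4.8],
          [2.2, 2.0,  3.7],
          [1.6, 2.1,  5.0],
      ])

      for name, column in zip(("north", "east", "up"), diffs.T):
          print(f"{name}: mean {column.mean():+.1f} mm, RMS {np.sqrt((column**2).mean()):.1f} mm")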

  4. Accuracy Assessment of Digital Surface Models Based on WorldView-2 and ADS80 Stereo Remote Sensing Data

    PubMed Central

    Hobi, Martina L.; Ginzler, Christian

    2012-01-01

    Digital surface models (DSMs) are widely used in forest science to model the forest canopy. Stereo pairs of very high resolution satellite and digital aerial images are relatively new and their absolute accuracy for DSM generation is largely unknown. For an assessment of these input data, two DSMs based on a WorldView-2 stereo pair and an ADS80 DSM were generated with photogrammetric instruments. Rational polynomial coefficients (RPCs) define the orientation of the WorldView-2 satellite images and can be enhanced with ground control points (GCPs). Thus two WorldView-2 DSMs were distinguished: a WorldView-2 RPCs-only DSM and a WorldView-2 GCP-enhanced RPCs DSM. The accuracy of the three DSMs was estimated with GPS measurements, manual stereo-measurements, and airborne laser scanning data (ALS). With GCP-enhanced RPCs the WorldView-2 image orientation could be optimised to a root mean square error (RMSE) of 0.56 m in planimetry and 0.32 m in height. This improvement in orientation allowed for a vertical median error of −0.24 m for the WorldView-2 GCP-enhanced RPCs DSM in flat terrain. Overall, the DSM based on ADS80 images showed the highest accuracy of the three models with a median error of 0.08 m over bare ground. As the accuracy of a DSM varies with land cover, three classes were distinguished: herb and grass, forests, and artificial areas. The study suggested the ADS80 DSM to best model actual surface height in all three land cover classes, with median errors <1.1 m. The WorldView-2 GCP-enhanced RPCs model achieved good accuracy, too, with median errors of −0.43 m for the herb and grass vegetation and −0.26 m for artificial areas. Forested areas emerged as the most difficult land cover type for height modelling; still, with median errors of −1.85 m for the WorldView-2 GCP-enhanced RPCs model and −1.12 m for the ADS80 model, the input data sets evaluated here are quite promising for forest canopy modelling. PMID:22778645

  5. Do students know what they know? Exploring the accuracy of students' self-assessments

    NASA Astrophysics Data System (ADS)

    Lindsey, Beth A.; Nagel, Megan L.

    2015-12-01

    We have conducted an investigation into how well students in introductory science classes (both physics and chemistry) are able to predict which questions they will or will not be able to answer correctly on an upcoming assessment. An examination of the data at the level of students' overall scores reveals results consistent with the Dunning-Kruger effect, in which low-performing students tend to overestimate their abilities, while high-performing students estimate their abilities more accurately. Similar results have been widely reported in the science education literature. Breaking results out by students' responses to individual questions, however, reveals that students of all ability levels have difficulty distinguishing questions which they are able to answer correctly from those that they are not able to answer correctly. These results have implications for the future study and reporting of students' metacognitive abilities.

  6. Literature Evidence on Live Animal Versus Synthetic Models for Training and Assessing Trauma Resuscitation Procedures.

    PubMed

    Hart, Danielle; McNeil, Mary Ann; Hegarty, Cullen; Rush, Robert; Chipman, Jeffery; Clinton, Joseph; Reihsen, Troy; Sweet, Robert

    2016-01-01

    There are many models currently used for teaching and assessing performance of trauma-related airway, breathing, and hemorrhage procedures. Although many programs use live animal (live tissue [LT]) models, there is a congressional effort to transition to the use of nonanimal-based methods (i.e., simulators, cadavers) for military trainees. We examined the existing literature and compared the efficacy, acceptability, and validity of available models with a focus on comparing LT models with synthetic systems. Literature and Internet searches were conducted to examine current models for seven core trauma procedures. We identified 185 simulator systems. Evidence on acceptability and validity of models was sparse. We found only one underpowered study comparing the performance of learners after training on LT versus simulator models for tube thoracostomy and cricothyrotomy. There is insufficient data-driven evidence to distinguish superior validity of LT or any other model for training or assessment of critical trauma procedures.

  7. Optimized in vitro procedure for assessing the cytocompatibility of magnesium-based biomaterials.

    PubMed

    Jung, Ole; Smeets, Ralf; Porchetta, Dario; Kopp, Alexander; Ptock, Christoph; Müller, Ute; Heiland, Max; Schwade, Max; Behr, Björn; Kröger, Nadja; Kluwe, Lan; Hanken, Henning; Hartjen, Philip

    2015-09-01

    Magnesium (Mg) is a promising biomaterial for degradable implant applications that has been extensively studied in vitro and in vivo in recent years. In this study, we developed a procedure that allows an optimized and uniform in vitro assessment of the cytocompatibility of Mg-based materials while respecting the standard protocol DIN EN ISO 10993-5:2009. The mouse fibroblast line L-929 was chosen as the preferred assay cell line and MEM supplemented with 10% FCS, penicillin/streptomycin and 4 mM L-glutamine as the favored assay medium. The procedure consists of (1) an indirect assessment of effects of soluble Mg corrosion products in material extracts and (2) a direct assessment of the surface compatibility in terms of cell attachment and cytotoxicity originating from active corrosion processes. The indirect assessment allows the quantification of cell proliferation (BrdU assay), viability (XTT assay) as well as cytotoxicity (LDH assay) of the mouse fibroblasts incubated with material extracts. Direct assessment visualizes cells attached to the test materials by means of live-dead staining. The colorimetric assays and the visual evaluation complement each other and the combination of both provides an optimized and simple procedure for assessing the cytocompatibility of Mg-based biomaterials in vitro.
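
    In extract-based testing of this kind, the assay readouts are normalized to an untreated control; ISO 10993-5 treats a relative viability below roughly 70% as indicating cytotoxic potential. The Python sketch below shows that normalization with invented absorbance values; these are not data from this study, and the exact readout handling in the published procedure may differ.

      # Hypothetical XTT absorbance readings (arbitrary units).
      blank = 0.05                        # medium only, no cells
      control = [1.20, 1.15, 1.22]        # cells in plain assay medium
      mg_extract = [1.02, 0.98, 1.05]     # cells in Mg-alloy extract medium

      def mean(values):
          return sum(values) / len(values)

      viability = 100.0 * (mean(mg_extract) - blank) / (mean(control) - blank)
      verdict = "below" if viability < 70 else "above"
      print(f"Relative viability {viability:.0f}%, {verdict} the 70% ISO 10993-5 threshold")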

  8. Assessors' Approaches to Portfolio Assessment in Assessment of Prior Learning Procedures

    ERIC Educational Resources Information Center

    Joosten-ten Brinke, Desiree; Sluijsmans, Dominique M. A.; Jochems, Wim M. G.

    2010-01-01

    In an effort to gain better understanding of the assessment of prior informal and non-formal learning, this article explores assessors' approaches to portfolio assessment. Through this portfolio assessment, candidates had requested exemptions from specific courses within an educational programme or admission to the programme based on their prior…

  9. Accuracy Assessment of a Complex Building 3d Model Reconstructed from Images Acquired with a Low-Cost Uas

    NASA Astrophysics Data System (ADS)

    Oniga, E.; Chirilă, C.; Stătescu, F.

    2017-02-01

    Nowadays, Unmanned Aerial Systems (UASs) are a widely used technique for image acquisition aimed at creating building 3D models, providing a high number of images at very high resolution, or video sequences, in a very short time. Since low-cost UASs are preferred, the accuracy of a building 3D model created using these platforms must be evaluated. For this purpose, the dean's office building of the Faculty of "Hydrotechnical Engineering, Geodesy and Environmental Engineering" of Iasi, Romania, was chosen; it is a complex-shaped building whose roof is formed of two hyperbolic paraboloids. Seven points were placed on the ground around the building, three of them being used as GCPs and the remaining four as check points (CPs) for accuracy assessment. Additionally, the coordinates of 10 natural CPs representing characteristic points of the building were measured with a Leica TCR 405 total station. The building 3D model was created as a point cloud automatically generated from the digital images acquired with the low-cost UAS, using image matching algorithms in several software packages: 3DF Zephyr, Visual SfM, PhotoModeler Scanner and Drone2Map for ArcGIS. Except for the PhotoModeler Scanner software, the interior and exterior orientation parameters were determined simultaneously by solving a self-calibrating bundle adjustment. Based on the UAS point clouds automatically generated with the above-mentioned software, and on GNSS data respectively, the parameters of the east-side hyperbolic paraboloid were calculated using the least squares method and a statistical blunder detection. Then, in order to assess the accuracy of the building 3D model, several comparisons were made for the facades and the roof against reference data considered to have minimal errors: a TLS mesh for the facades and a GNSS mesh for the roof. Finally, the front facade of the building was created in 3D based on its characteristic points using the PhotoModeler Scanner
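
    Fitting the roof's hyperbolic paraboloid by least squares, as described above, becomes a linear estimation problem once the surface is written as a quadric in x and y. The Python sketch below fits a generic quadric z = c0 + c1*x + c2*y + c3*x^2 + c4*y^2 + c5*x*y to invented point-cloud coordinates; the parameterization and the blunder-detection step used in the paper may differ.

      import numpy as np

      # Hypothetical roof points from a UAS point cloud (local coordinates, metres).
      rng = np.random.default_rng(0)
      x = rng.uniform(-5, 5, 200)
      y = rng.uniform(-5, 5, 200)
      z = 0.05 * x**2 - 0.04 * y**2 + rng.normal(0, 0.01, 200)   # noisy saddle surface

      # Design matrix for z = c0 + c1*x + c2*y + c3*x^2 + c4*y^2 + c5*x*y.
      A = np.column_stack([np.ones_like(x), x, y, x**2, y**2, x * y])
      coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)

      residuals = z - A @ coeffs
      print("Estimated coefficients:", np.round(coeffs, 4))
      print(f"RMS residual: {np.sqrt((residuals**2).mean()):.4f} m")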

  10. VAP-CAP: A Procedure to Assess the Visual Functioning of Young Visually Impaired Children.

    ERIC Educational Resources Information Center

    Blanksby, D. C.; Langford, P. E.

    1993-01-01

    This article describes a visual assessment procedure (VAP) which evaluates capacity, attention, and processing (CAP) of infants and preschool children with visual impairments. The two-level battery considers, first, visual capacity and basic visual attention and, second, visual perceptual and cognitive abilities. A theoretical analysis of the…

  11. The Risky Situation: A Procedure for Assessing the Father-Child Activation Relationship

    ERIC Educational Resources Information Center

    Paquette, Daniel; Bigras, Marc

    2010-01-01

    Initial validation data are presented for the Risky Situation (RS), a 20-minute observational procedure designed to assess the father-child activation relationship with children aged 12-18 months. The coding grid, which is simple and easy to use, allows parent-child dyads to be classified into three categories and provides an activation score. By…

  12. Defining Course Outcomes and Assessment Procedures: A Model for Individual Courses.

    ERIC Educational Resources Information Center

    Ward, James K., Jr.; Marabeti, Hilary B.

    A description is provided of Tennessee's Volunteer State Community College's (VSCC's) approach to defining the goals, expected outcomes, and assessment procedures of individual courses, utilizing teacher-developed course instruction manuals and standardized course syllabi. Introductory material explains why and how the approach was developed,…

  13. The Stoplight Task: A Procedure for Assessing Risk Taking in Humans

    ERIC Educational Resources Information Center

    Reilly, Mark P.; Greenwald, Mark K.; Johanson, Chris-Ellyn

    2006-01-01

    The Stoplight Task, a procedure involving a computer analog of a stoplight, was evaluated for assessing risk taking in humans. Seventeen participants earned points later exchangeable for money by completing a response requirement before the red light appeared on a simulated traffic light. The green light signaled to start responding; it changed to…

  14. Refining the Measurement of Axis II: A Q-sort Procedure for Assessing Personality Pathology.

    ERIC Educational Resources Information Center

    Shedler, Jonathan; Westen, Drew

    1998-01-01

    Results from a study involving 153 clinicians who used the new Shedler-Westen Assessment Procedure (a Q-sort approach) and eight patient interviews suggest the usefulness of the SWAP to measure personality disorders and refine categories and criteria according to Axis II of the "Diagnostic and Statistical Manual of Mental Disorders"…

  15. Proposed Planning Procedures: Gaming-Simulation as a Method for Early Assessment.

    ERIC Educational Resources Information Center

    Smit, Peter H.

    1982-01-01

    Examines the use of simulation gaming as a research tool in the early assessment of proposed planning procedures in urban renewal projects. About one-half of the citations in the 36-item bibliography are in Dutch; the remainder are in English. (Author/JJD)

  16. A Choice Procedure to Assess the Aversive Effects of Drugs in Rodents

    ERIC Educational Resources Information Center

    Podlesnik, Christopher A.; Jimenez-Gomez, Corina; Woods, James H.

    2010-01-01

    The goal of this series of experiments was to develop an operant choice procedure to examine rapidly the punishing effects of intravenous drugs in rats. First, the cardiovascular effects of experimenter-administered intravenous histamine, a known aversive drug, were assessed to determine a biologically active dose range. Next, rats responded on…

  17. Statistics, Measures, and Quality Standards for Assessing Digital Reference Library Services: Guidelines and Procedures.

    ERIC Educational Resources Information Center

    McClure, Charles R.; Lankes, R. David; Gross, Melissa; Choltco-Devlin, Beverly

    This manual is a first effort to begin to identify, describe, and develop procedures for assessing various aspects of digital reference service. Its overall purpose is to improve the quality of digital reference services and assist librarians to design and implement better digital reference services. More specifically, its aim is to: assist…

  18. Standardised Observation Analogue Procedure (SOAP) for Assessing Parent and Child Behaviours in Clinical Trials

    ERIC Educational Resources Information Center

    Johnson, Cynthia R.; Butter, Eric M.; Handen, Benjamin L.; Sukhodolsky, Denis G.; Mulick, James; Lecavalier, Luc; Aman, Michael G.; Arnold, Eugene L.; Scahill, Lawrence; Swiezy, Naomi; Sacco, Kelley; Stigler, Kimberly A.; McDougle, Christopher J.

    2009-01-01

    Background: Observational measures of parent and child behaviours have a long history in child psychiatric and psychological intervention research, including the field of autism and developmental disability. We describe the development of the Standardised Observational Analogue Procedure (SOAP) for the assessment of parent-child behaviour before…

  19. Meeting on Common Ground: Assessing Parent-Child Relationships through the Joint Painting Procedure

    ERIC Educational Resources Information Center

    Gavron, Tami

    2013-01-01

    A basic assumption in psychotherapy with children is that the parent-child relationship is central to the child's development. This article describes the Joint Painting Procedure, an art-based assessment for evaluating relationships with respect to the two main developmental tasks of middle childhood: (a) the parent's ability to monitor and…

  20. Assessing Women's Responses to Sexual Threat: Validity of a Virtual Role-Play Procedure

    ERIC Educational Resources Information Center

    Jouriles, Ernest N.; Rowe, Lorelei Simpson; McDonald, Renee; Platt, Cora G.; Gomez, Gabriella S.

    2011-01-01

    This study evaluated the validity of a role-play procedure that uses virtual reality technology to assess women's responses to sexual threat. Forty-eight female undergraduate students were randomly assigned to either a standard, face-to-face role-play (RP) or a virtual role-play (VRP) of a sexually coercive situation. A multimethod assessment…

  1. Intonation Features of the Expression of Emotions in Spanish: Preliminary Study for a Prosody Assessment Procedure

    ERIC Educational Resources Information Center

    Martinez-Castilla, Pastora; Peppe, Susan

    2008-01-01

    This study aimed to find out what intonation features reliably represent the emotions of "liking" as opposed to "disliking" in the Spanish language, with a view to designing a prosody assessment procedure for use with children with speech and language disorders. 18 intonationally different prosodic realisations (tokens) of one word (limon) were…

  2. An Example of the Application of the Assessment and Diagnostic Procedures of a Comprehensive Accountability Plan.

    ERIC Educational Resources Information Center

    Marco, Gary L.

    The assessment and diagnostic procedures of a comprehensive accountability plan were applied to several elementary schools from a large midwestern state. Pretest and posttest Word Knowledge and Reading scores from the Primary II Metropolitan Achievement Test administered in 1970-71 to third-graders were used. These data were used to compute…

  3. The Implicit Relational Assessment Procedure (IRAP) and the Malleability of Ageist Attitudes

    ERIC Educational Resources Information Center

    Cullen, Claire; Barnes-Holmes, Dermot; Barnes-Holmes, Yvonne; Stewart, Ian

    2009-01-01

    The current study examined the malleability of implicit attitudes using the Implicit Relational Assessment Procedure (IRAP). In Experiment 1, "similar" and "opposite" were presented as response options with the sample terms "old people" and "young people" and various positive and negative target stimuli.…

  4. EVALUATION OF ENVIRONMENTAL HAZARD ASSESSMENT PROCEDURES FOR NEAR-COASTAL AREAS OF THE GULF OF MEXICO

    EPA Science Inventory

    Lewis, Michael A. In press. Evaluation of Environmental Hazard Assessment Procedures for Near-Coastal Areas of the Gulf of Mexico (Abstract). To be presented at the Annual Meeting of the Australasian Society of Ecotoxicology, July 2004, Gold Coast, Australia. 1 p. (ERL,GB R98...

  5. The Implicit Relational Assessment Procedure (IRAP) as a Measure of Implicit Relative Preferences: A First Study

    ERIC Educational Resources Information Center

    Power, Patricia; Barnes-Holmes, Dermot; Barnes-Holmes, Yvonne; Stewart, Ian

    2009-01-01

    The Implicit Relational Assessment Procedure (IRAP) was designed to examine implicit beliefs or attitudes. In Experiment 1, response latencies obtained from Irish participants on the IRAP showed a strong preference for Irish over Scottish and American over African. In contrast, responses to explicit Likert measures diverged from the IRAP…

  6. The Implicit Relational Assessment Procedure (IRAP) as a Measure of Spider Fear

    ERIC Educational Resources Information Center

    Nicholson, Emma; Barnes-Holmes, Dermot

    2012-01-01

    A greater understanding of implicit cognition can provide important information regarding the etiology and maintenance of psychological disorders. The current study sought to determine the utility of the Implicit Relational Assessment Procedure (IRAP) as a measure of implicit aversive bias toward spiders in two groups of known variation, high fear…

  7. Accuracy Assessment of Three-dimensional Surface Reconstructions of In vivo Teeth from Cone-beam Computed Tomography

    PubMed Central

    Sang, Yan-Hui; Hu, Hong-Cheng; Lu, Song-He; Wu, Yu-Wei; Li, Wei-Ran; Tang, Zhi-Hui

    2016-01-01

    Background: The accuracy of three-dimensional (3D) reconstructions from cone-beam computed tomography (CBCT) has been particularly important in dentistry, which will affect the effectiveness of diagnosis, treatment plan, and outcome in clinical practice. The aims of this study were to assess the linear, volumetric, and geometric accuracy of 3D reconstructions from CBCT and to investigate the influence of voxel size and CBCT system on the reconstructions results. Methods: Fifty teeth from 18 orthodontic patients were assigned to three groups as NewTom VG 0.15 mm group (NewTom VG; voxel size: 0.15 mm; n = 17), NewTom VG 0.30 mm group (NewTom VG; voxel size: 0.30 mm; n = 16), and VATECH DCTPRO 0.30 mm group (VATECH DCTPRO; voxel size: 0.30 mm; n = 17). The 3D reconstruction models of the teeth were segmented from CBCT data manually using Mimics 18.0 (Materialise Dental, Leuven, Belgium), and the extracted teeth were scanned by 3Shape optical scanner (3Shape A/S, Denmark). Linear and volumetric deviations were separately assessed by comparing the length and volume of the 3D reconstruction model with physical measurement by paired t-test. Geometric deviations were assessed by the root mean square value of the imposed 3D reconstruction and optical models by one-sample t-test. To assess the influence of voxel size and CBCT system on 3D reconstruction, analysis of variance (ANOVA) was used (α = 0.05). Results: The linear, volumetric, and geometric deviations were −0.03 ± 0.48 mm, −5.4 ± 2.8%, and 0.117 ± 0.018 mm for NewTom VG 0.15 mm group; −0.45 ± 0.42 mm, −4.5 ± 3.4%, and 0.116 ± 0.014 mm for NewTom VG 0.30 mm group; and −0.93 ± 0.40 mm, −4.8 ± 5.1%, and 0.194 ± 0.117 mm for VATECH DCTPRO 0.30 mm group, respectively. There were statistically significant differences between groups in terms of linear measurement (P < 0.001), but no significant difference in terms of volumetric measurement (P = 0.774). No statistically significant difference were

  8. Construct measurement quality improves predictive accuracy in violence risk assessment: an illustration using the personality assessment inventory.

    PubMed

    Hendry, Melissa C; Douglas, Kevin S; Winter, Elizabeth A; Edens, John F

    2013-01-01

    Much of the risk assessment literature has focused on the predictive validity of risk assessment tools. However, these tools often comprise a list of risk factors that are themselves complex constructs, and focusing on the quality of measurement of individual risk factors may improve the predictive validity of the tools. The present study illustrates this concern using the Antisocial Features and Aggression scales of the Personality Assessment Inventory (Morey, 1991). In a sample of 1,545 prison inmates and offenders undergoing treatment for substance abuse (85% male), we evaluated (a) the factorial validity of the ANT and AGG scales, (b) the utility of original ANT and AGG scales and newly derived ANT and AGG scales for predicting antisocial outcomes (recidivism and institutional infractions), and (c) whether items with a stronger relationship to the underlying constructs (higher factor loadings) were in turn more strongly related to antisocial outcomes. Confirmatory factor analyses (CFAs) indicated that ANT and AGG items were not structured optimally in these data in terms of correspondence to the subscale structure identified in the PAI manual. Exploratory factor analyses were conducted on a random split-half of the sample to derive optimized alternative factor structures, and cross-validated in the second split-half using CFA. Four-factor models emerged for both the ANT and AGG scales, and, as predicted, the size of item factor loadings was associated with the strength with which items were associated with institutional infractions and community recidivism. This suggests that the quality by which a construct is measured is associated with its predictive strength. Implications for risk assessment are discussed.

  9. Accuracy of qualitative analysis for assessment of skilled baseball pitching technique.

    PubMed

    Nicholls, Rochelle; Fleisig, Glenn; Elliott, Bruce; Lyman, Stephen; Osinski, Edmund

    2003-07-01

    Baseball pitching must be performed with correct technique if injuries are to be avoided and performance maximized. High-speed video analysis is accepted as the most accurate and objective method for evaluation of baseball pitching mechanics. The aim of this research was to develop an equivalent qualitative analysis method for use with standard video equipment. A qualitative analysis protocol (QAP) was developed for 24 kinematic variables identified as important to pitching performance. Twenty male baseball pitchers were videotaped using 60 Hz camcorders, and their technique was evaluated with the QAP by two independent raters. Each pitcher was also assessed using a 6-camera 200 Hz Motion Analysis system (MAS). Four QAP variables (22%) showed significant similarity with MAS results. Inter-rater reliability showed agreement on 33% of QAP variables. It was concluded that a complete and accurate profile of an athlete's pitching mechanics cannot be made using the QAP in its current form, but it is possible that such simple forms of biomechanical analysis could yield accurate results before 3-D methods become obligatory.

  10. A probabilistic seismic risk assessment procedure for nuclear power plants: (II) Application

    USGS Publications Warehouse

    Huang, Y.-N.; Whittaker, A.S.; Luco, N.

    2011-01-01

    This paper presents the procedures and results of intensity- and time-based seismic risk assessments of a sample nuclear power plant (NPP) to demonstrate the risk-assessment methodology proposed in its companion paper. The intensity-based assessments include three sets of sensitivity studies to identify the impact of the following factors on the seismic vulnerability of the sample NPP, namely: (1) the description of fragility curves for primary and secondary components of NPPs, (2) the number of simulations of NPP response required for risk assessment, and (3) the correlation in responses between NPP components. The time-based assessment is performed as a series of intensity-based assessments. The studies illustrate the utility of the response-based fragility curves and the inclusion of the correlation in the responses of NPP components directly in the risk computation. © 2011 Published by Elsevier B.V.

  11. An Assessment of the Accuracy of Admittance and Coherence Estimates Using Synthetic Data

    NASA Astrophysics Data System (ADS)

    Crosby, A.

    2006-12-01

    The estimation of the effective elastic thickness of the lithosphere (T_e) using spectral relationships between gravity and topography has become a controversial topic in recent years. However, one area which has received relatively little attention is the bias in estimates of T_e and the internal loading fraction (F_2) which results from spectral leakage and noise when using the multi-tapered free-air admittance method. In this study, I use grids of synthetic data to assess the magnitude of that bias. I also assess the bias which occurs when T_e within other planets is estimated using the admittance between observed and topographic line-of-sight accelerations of orbiting satellites. I find that leakage can cause the estimated admittance and coherence to be significantly in error, but only if the box in which they are estimated is too small. The definition of `small' depends on the redness of the gravity spectrum. On the Earth, there is minimal error in the estimate of T_e if the admittance between surface gravity and topography is estimated within a box at least 3000-km-wide. When the true T_e is less than 20~km and the true coherence is high, the errors in the estimate of T_e are mostly less than 5~km for all box sizes greater than 1000~km. On the other hand, when the true T_e is greater than 20~km and the box size is 1000~km, the best-fit T_e is likely to be at least 5-10~km less than the true T_e. Even when the true coherence is high, it is not possible to use the free-air admittance to distinguish between real and spurious small fractions of internal loading when the boxes are smaller than 2000~km in size. Furthermore, the trade-off between T_e and F_2 means that even small amounts of leakage can shift the best-fit values of T_e and F_2 by an appreciable amount when the true F_2 is greater than zero. Geological noise in the gravity is caused by subsurface loads, the flexural surface expression of which has been erased by erosion and deposition. I find that
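
    For reference, the admittance and coherence discussed here are conventionally defined from the cross-spectra of the gravity field G and the topography H; the expressions below (in LaTeX notation) follow the standard spectral-analysis literature and are not quoted from this abstract:

      Z(k) = \frac{\langle G(\mathbf{k})\, H^{*}(\mathbf{k}) \rangle}{\langle H(\mathbf{k})\, H^{*}(\mathbf{k}) \rangle},
      \qquad
      \gamma^{2}(k) = \frac{\left|\langle G(\mathbf{k})\, H^{*}(\mathbf{k}) \rangle\right|^{2}}{\langle G(\mathbf{k})\, G^{*}(\mathbf{k}) \rangle\, \langle H(\mathbf{k})\, H^{*}(\mathbf{k}) \rangle},

    where the angle brackets denote averaging over wavenumber annuli (and tapers) and the asterisk denotes complex conjugation. Estimates of T_e and F_2 are obtained by comparing these observed spectra with the predictions of a flexural loading model, which is why leakage and noise in the estimated admittance and coherence bias the recovered parameters.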

  12. Constraining OCT with Knowledge of Device Design Enables High Accuracy Hemodynamic Assessment of Endovascular Implants

    PubMed Central

    Brown, Jonathan; Lopes, Augusto C.; Kunio, Mie; Kolachalama, Vijaya B.; Edelman, Elazer R.

    2016-01-01

    Background Stacking cross-sectional intravascular images permits three-dimensional rendering of endovascular implants, yet introduces between-frame uncertainties that limit characterization of device placement and the hemodynamic microenvironment. In a porcine coronary stent model, we demonstrate enhanced OCT reconstruction with preservation of between-frame features through fusion with angiography and a priori knowledge of stent design. Methods and Results Strut positions were extracted from sequential OCT frames. Reconstruction with standard interpolation generated discontinuous stent structures. By computationally constraining interpolation to known stent skeletons fitted to 3D ‘clouds’ of OCT-Angio-derived struts, implant anatomy was resolved, accurately rendering features from implant diameter and curvature (n = 1 vessel; r² = 0.91 and 0.90, respectively) to individual strut-wall configurations (average displacement error ~15 μm). This framework facilitated hemodynamic simulation (n = 1 vessel), showing the critical importance of accurate anatomic rendering in characterizing both quantitative and basic qualitative flow patterns. Discontinuities with standard approaches systematically introduced noise and bias, poorly capturing regional flow effects. In contrast, the enhanced method preserved multi-scale (local strut to regional stent) flow interactions, demonstrating the impact of regional contexts in defining the hemodynamic consequence of local deployment errors. Conclusion Fusion of planar angiography and knowledge of device design permits enhanced OCT image analysis of in situ tissue-device interactions. Given emerging interests in simulation-derived hemodynamic assessment as surrogate measures of biological risk, such fused modalities offer a new window into patient-specific implant environments. PMID:26906566

  13. NASA Safety Standard: Guidelines and Assessment Procedures for Limiting Orbital Debris

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Collision with orbital debris is a hazard of growing concern as historically accepted practices and procedures have allowed man-made objects to accumulate in orbit. To limit future debris generation, NASA Management Instruction (NMI) 1700.8, 'Policy to Limit Orbital Debris Generation,' was issued in April of 1993. The NMI requires each program to conduct a formal assessment of the potential to generate orbital debris. This document serves as a companion to NMI 1700.8 and provides each NASA program with specific guidelines and assessment methods to assure compliance with the NMI. Each main debris assessment issue (e.g., Post Mission Disposal) is developed in a separate chapter.

  14. Assessing the prediction accuracy of cure in the Cox proportional hazards cure model: an application to breast cancer data.

    PubMed

    Asano, Junichi; Hirakawa, Akihiro; Hamada, Chikuma

    2014-01-01

    A cure rate model is a survival model incorporating the cure rate with the assumption that the population contains both uncured and cured individuals. It is a powerful statistical tool for prognostic studies, especially in cancer. The cure rate is important for making treatment decisions in clinical practice. The proportional hazards (PH) cure model can predict the cure rate for each patient. This contains a logistic regression component for the cure rate and a Cox regression component to estimate the hazard for uncured patients. A measure for quantifying the predictive accuracy of the cure rate estimated by the Cox PH cure model is required, as there has been a lack of previous research in this area. We used the Cox PH cure model for the breast cancer data; however, the area under the receiver operating characteristic curve (AUC) could not be estimated because many patients were censored. In this study, we used imputation-based AUCs to assess the predictive accuracy of the cure rate from the PH cure model. We examined the precision of these AUCs using simulation studies. The results demonstrated that the imputation-based AUCs were estimable and their biases were negligibly small in many cases, although ordinary AUC could not be estimated. Additionally, we introduced the bias-correction method of imputation-based AUCs and found that the bias-corrected estimate successfully compensated the overestimation in the simulation studies. We also illustrated the estimation of the imputation-based AUCs using breast cancer data.
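
    For readers unfamiliar with the model, the proportional hazards mixture cure formulation referred to here is conventionally written as follows (in LaTeX notation, using standard notation from the cure-model literature rather than symbols reproduced from the paper):

      S(t \mid \mathbf{x}, \mathbf{z}) = \pi(\mathbf{x}) + \bigl(1 - \pi(\mathbf{x})\bigr)\, S_{u}(t \mid \mathbf{z}),
      \qquad
      \pi(\mathbf{x}) = \frac{\exp(\mathbf{b}^{\top}\mathbf{x})}{1 + \exp(\mathbf{b}^{\top}\mathbf{x})},
      \qquad
      S_{u}(t \mid \mathbf{z}) = S_{0}(t)^{\exp(\boldsymbol{\beta}^{\top}\mathbf{z})},

    where \pi(\mathbf{x}) is the cure probability from the logistic component and S_{u} is the survival function of uncured patients from the Cox component. The imputation-based AUC then measures how well the estimated cure probability discriminates cured from uncured patients when the cure status of censored patients is unobserved.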

  15. Quantitative assessment of the accuracy of dose calculation using pencil beam and Monte Carlo algorithms and requirements for clinical quality assurance

    SciTech Connect

    Ali, Imad; Ahmad, Salahuddin

    2013-10-01

    To compare the doses calculated using the BrainLAB pencil beam (PB) and Monte Carlo (MC) algorithms for tumors located in various sites including the lung and evaluate quality assurance procedures required for the verification of the accuracy of dose calculation. The dose-calculation accuracy of PB and MC was also assessed quantitatively with measurement using ionization chamber and Gafchromic films placed in solid water and heterogeneous phantoms. The dose was calculated using PB convolution and MC algorithms in the iPlan treatment planning system from BrainLAB. The dose calculation was performed on the patient's computed tomography images with lesions in various treatment sites including 5 lungs, 5 prostates, 4 brains, 2 head and necks, and 2 paraspinal tissues. A combination of conventional, conformal, and intensity-modulated radiation therapy plans was used in dose calculation. The leaf sequence from intensity-modulated radiation therapy plans or beam shapes from conformal plans and monitor units and other planning parameters calculated by the PB were identical for calculating dose with MC. Heterogeneity correction was considered in both PB and MC dose calculations. Dose-volume parameters such as V95 (volume covered by 95% of prescription dose), dose distributions, and gamma analysis were used to evaluate the calculated dose by PB and MC. The measured doses by ionization chamber and EBT GAFCHROMIC film in solid water and heterogeneous phantoms were used to quantitatively assess the accuracy of dose calculated by PB and MC. The dose-volume histograms and dose distributions calculated by PB and MC in the brain, prostate, paraspinal, and head and neck were in good agreement with one another (within 5%) and provided acceptable planning target volume coverage. However, dose distributions of the patients with lung cancer had large discrepancies. For a plan optimized with PB, the dose coverage was shown as clinically acceptable, whereas in reality, the MC showed a

  16. Quantitative assessment of the accuracy of dose calculation using pencil beam and Monte Carlo algorithms and requirements for clinical quality assurance.

    PubMed

    Ali, Imad; Ahmad, Salahuddin

    2013-01-01

    To compare the doses calculated using the BrainLAB pencil beam (PB) and Monte Carlo (MC) algorithms for tumors located in various sites including the lung and evaluate quality assurance procedures required for the verification of the accuracy of dose calculation. The dose-calculation accuracy of PB and MC was also assessed quantitatively with measurement using ionization chamber and Gafchromic films placed in solid water and heterogeneous phantoms. The dose was calculated using PB convolution and MC algorithms in the iPlan treatment planning system from BrainLAB. The dose calculation was performed on the patient's computed tomography images with lesions in various treatment sites including 5 lungs, 5 prostates, 4 brains, 2 head and necks, and 2 paraspinal tissues. A combination of conventional, conformal, and intensity-modulated radiation therapy plans was used in dose calculation. The leaf sequence from intensity-modulated radiation therapy plans or beam shapes from conformal plans and monitor units and other planning parameters calculated by the PB were identical for calculating dose with MC. Heterogeneity correction was considered in both PB and MC dose calculations. Dose-volume parameters such as V95 (volume covered by 95% of prescription dose), dose distributions, and gamma analysis were used to evaluate the calculated dose by PB and MC. The measured doses by ionization chamber and EBT GAFCHROMIC film in solid water and heterogeneous phantoms were used to quantitatively assess the accuracy of dose calculated by PB and MC. The dose-volume histograms and dose distributions calculated by PB and MC in the brain, prostate, paraspinal, and head and neck were in good agreement with one another (within 5%) and provided acceptable planning target volume coverage. However, dose distributions of the patients with lung cancer had large discrepancies. For a plan optimized with PB, the dose coverage was shown as clinically acceptable, whereas in reality, the MC showed a

  17. A procedural skills OSCE: assessing technical and non-technical skills of internal medicine residents.

    PubMed

    Pugh, Debra; Hamstra, Stanley J; Wood, Timothy J; Humphrey-Murto, Susan; Touchie, Claire; Yudkowsky, Rachel; Bordage, Georges

    2015-03-01

    Internists are required to perform a number of procedures that require mastery of technical and non-technical skills; however, formal assessment of these skills is often lacking. The purpose of this study was to develop, implement, and gather validity evidence for a procedural skills objective structured clinical examination (PS-OSCE) for internal medicine (IM) residents to assess their technical and non-technical skills when performing procedures. Thirty-five first- to third-year IM residents participated in a 5-station PS-OSCE, which combined partial task models, standardized patients, and allied health professionals. Formal blueprinting was performed and content experts were used to develop the cases and rating instruments. Examiners underwent a frame-of-reference training session to prepare them for their rater role. Scores were compared across levels of training and experience, and with evaluation data from a non-procedural OSCE (IM-OSCE). Reliability was calculated using generalizability analyses. Reliabilities for the technical and non-technical scores were 0.68 and 0.76, respectively. Third-year residents scored significantly higher than first-year residents on the technical (73.5 vs. 62.2%) and non-technical (83.2 vs. 75.1%) components of the PS-OSCE (p < 0.05). Residents who had performed the procedures more frequently scored higher on three of the five stations (p < 0.05). There was a moderate disattenuated correlation (r = 0.77) between the IM-OSCE and the technical component of the PS-OSCE scores. The PS-OSCE is a feasible method for assessing multiple competencies related to performing procedures and this study provides validity evidence to support its use as an in-training examination.
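
    The disattenuated correlation reported between the IM-OSCE and the technical PS-OSCE component follows the standard correction-for-attenuation formula, sketched below; the observed correlation and the second reliability value in the example are placeholders, not figures taken from the study.

      # Sketch of the disattenuation (correction for attenuation) formula for two
      # imperfectly reliable scores.
      import math

      def disattenuated_correlation(r_observed, reliability_x, reliability_y):
          return r_observed / math.sqrt(reliability_x * reliability_y)

      print(disattenuated_correlation(0.55, 0.68, 0.75))  # ~0.77 with these illustrative inputs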

  18. Assessment of the dosimetric accuracies of CATPhan 504 and CIRS 062 using kV-CBCT for performing direct calculations.

    PubMed

    Annkah, James Kwame; Rosenberg, Ivan; Hindocha, Naina; Moinuddin, Syed Ali; Ricketts, Kate; Adeyemi, Abiodun; Royle, Gary

    2014-07-01

    The dosimetric accuracies of CATPhan 504 and CIRS 062 have been evaluated using the kV-CBCT of Varian TrueBeam linac and Eclipse TPS. The assessment was done using the kV-CBCT as a standalone tool for dosimetric calculations towards Adaptive replanning. Dosimetric calculations were made without altering the HU-ED curves of the planning computed tomography (CT) scanner that is used by the Eclipse TPS. All computations were done using the images and dataset from kV-CBCT while maintaining the HU-ED calibration curve of the planning CT (pCT), assuming pCT was used for the initial treatment plan. Results showed that the CIRS phantom produces doses within ±5% of the CT-based plan while CATPhan 504 produces a variation of ±14% of the CT-based plan.

  19. Assessment of the dosimetric accuracies of CATPhan 504 and CIRS 062 using kV-CBCT for performing direct calculations

    PubMed Central

    Annkah, James Kwame; Rosenberg, Ivan; Hindocha, Naina; Moinuddin, Syed Ali; Ricketts, Kate; Adeyemi, Abiodun; Royle, Gary

    2014-01-01

    The dosimetric accuracies of CATPhan 504 and CIRS 062 have been evaluated using the kV-CBCT of Varian TrueBeam linac and Eclipse TPS. The assessment was done using the kV-CBCT as a standalone tool for dosimetric calculations towards Adaptive replanning. Dosimetric calculations were made without altering the HU-ED curves of the planning computed tomography (CT) scanner that is used by the Eclipse TPS. All computations were done using the images and dataset from kV-CBCT while maintaining the HU-ED calibration curve of the planning CT (pCT), assuming pCT was used for the initial treatment plan. Results showed that the CIRS phantom produces doses within ±5% of the CT-based plan while CATPhan 504 produces a variation of ±14% of the CT-based plan. PMID:25190991

  20. Assessment of arterial stenosis in a flow model with power Doppler angiography: accuracy and observations on blood echogenicity.

    PubMed

    Cloutier, G; Qin, Z; Garcia, D; Soulez, G; Oliva, V; Durand, L G

    2000-11-01

    The objective of the project was to study the influence of various hemodynamic and rheologic factors on the accuracy of 3-D power Doppler angiography (PDA) for quantifying the percentage of area reduction of a stenotic artery along its longitudinal axis. The study was performed with a 3-D power Doppler ultrasound (US) imaging system and an in vitro mock flow model containing a simulated artery with a stenosis of 80% area reduction. Measurements were performed under steady and pulsatile flow conditions by circulating, at different flow rates, four types of fluid (porcine whole blood, porcine whole blood with a US contrast agent, porcine blood cell suspension and porcine blood cell suspension with a US contrast agent). A total of 120 measurements were performed. Computational simulations of the fluid dynamics in the vicinity of the axisymmetrical stenosis were performed with finite-element modeling (FEM) to locate and identify the PDA signal loss due to the wall filter of the US instrument. The performance of three segmentation algorithms used to delineate the vessel lumen on the PDA images was assessed and compared. It is shown that the type of fluid flowing in the phantom affects the echoicity of PDA images and the accuracy of the segmentation algorithms. The type of flow (steady or pulsatile) and the flow rate can also influence the PDA image accuracy, whereas the use of US contrast agent has no significant effect. For the conditions that would correspond to a US scan of a common femoral artery (whole blood flowing at a mean pulsatile flow rate of 450 mL min(-1)), the errors in the percentages of area reduction were 4.3 ± 1.2% before the stenosis, -2.0 ± 1.0% in the stenosis, 11.5 ± 3.1% in the recirculation zone, and 2.8 ± 1.7% after the stenosis, respectively. Based on the simulated blood flow patterns obtained with FEM, the lower accuracy in the recirculation zone can be attributed to the effect of the wall filter that removes low flow velocities. In

  1. Considering the normative, systemic and procedural dimensions in indicator-based sustainability assessments in agriculture

    SciTech Connect

    Binder, Claudia R.; Feola, Giuseppe; Steinberger, Julia K.

    2010-02-15

    This paper develops a framework for evaluating sustainability assessment methods by separately analyzing their normative, systemic and procedural dimensions as suggested by Wiek and Binder [Wiek, A, Binder, C. Solution spaces for decision-making - a sustainability assessment tool for city-regions. Environ Impact Asses Rev 2005, 25: 589-608.]. The framework is then used to characterize indicator-based sustainability assessment methods in agriculture. For a long time, sustainability assessment in agriculture has focused mostly on environmental and technical issues, thus neglecting the economic and, above all, the social aspects of sustainability, the multi-functionality of agriculture and the applicability of the results. In response to these shortcomings, several integrative sustainability assessment methods have been developed for the agricultural sector. This paper reviews seven of these that represent the diversity of tools developed in this area. The reviewed assessment methods can be categorized into three types: (i) top-down farm assessment methods; (ii) top-down regional assessment methods with some stakeholder participation; (iii) bottom-up, integrated participatory or transdisciplinary methods with stakeholder participation throughout the process. The results readily show the trade-offs encountered when selecting an assessment method. A clear, standardized, top-down procedure potentially allows results to be benchmarked and compared across regions and sites. However, this comes at the cost of system specificity. As the top-down methods often have low stakeholder involvement, the application and implementation of the results might be difficult. Our analysis suggests that to include the aspects mentioned above in agricultural sustainability assessment, the bottom-up, integrated participatory or transdisciplinary methods are the most suitable ones.

  2. Pitfalls at the root of facial assessment on photographs: a quantitative study of accuracy in positioning facial landmarks.

    PubMed

    Cummaudo, M; Guerzoni, M; Marasciuolo, L; Gibelli, D; Cigada, A; Obertovà, Z; Ratnayake, M; Poppa, P; Gabriel, P; Ritz-Timme, S; Cattaneo, C

    2013-05-01

    In recent years, facial analysis has gained great interest in forensic anthropology as well. The application of facial landmarks may bring relevant advantages for the analysis of 2D images by measuring distances and extracting quantitative indices. However, this is a complex task which depends upon the variability in positioning facial landmarks. In addition, the literature provides only general indications concerning the reliability of positioning facial landmarks on photographic material, and no study is available concerning the specific errors which may be encountered in such an operation. The aim of this study is to analyze the inter- and intra-observer error in defining facial landmarks on photographs by using software specifically developed for this purpose. Twenty-four operators were requested to define 22 facial landmarks on frontal view photographs and 11 on lateral view images; in addition, three operators repeated the procedure on the same photographs 20 times (24 h apart). In the frontal view, the landmarks with the least dispersion were the pupil, cheilion, endocanthion, and stomion (sto), and the landmarks with the highest dispersion were gonion, zygion, frontotemporale, tragion, and selion (se). In the lateral view, the landmarks with the least dispersion were se, pronasale, subnasale, and sto, whereas the landmarks with the highest dispersion were gnathion, pogonion, and tragion. Results confirm that few anatomical points can be defined with the highest accuracy and show the importance of a preliminary investigation of reliability in positioning facial landmarks.

  3. Diagnostic accuracy of salivary creatinine, urea, and potassium levels to assess dialysis need in renal failure patients

    PubMed Central

    Bagalad, Bhavana S.; Mohankumar, K. P.; Madhushankari, G. S.; Donoghue, Mandana; Kuberappa, Puneeth Horatti

    2017-01-01

    Background: The prevalence of chronic renal failure is increasing because of the rise in chronic debilitating diseases and the progressive ageing of the population. These patients experience accumulation of metabolic byproducts and electrolyte imbalance, which have harmful effects on their health. Timely hemodialysis at regular intervals is a life-saving procedure for these patients. Salivary diagnostics is increasingly used as an alternative to the traditional methods. Thus, the aim of the present study was to determine the diagnostic efficacy of saliva in chronic renal failure patients. Materials and Methods: This case–control study included 82 individuals, of whom 41 were chronic renal failure patients and 41 were age- and sex-matched controls. Blood and saliva were collected and centrifuged. Serum and supernatant saliva were used for biochemical analysis. Serum and salivary urea, creatinine, sodium, potassium, calcium, and phosphorus were evaluated and correlated in chronic renal failure patients using unpaired t-test, Pearson's correlation coefficient, diagnostic validity tests, and receiver operating characteristic curves. Results: When compared to serum, salivary urea, creatinine, sodium, and potassium showed diagnostic accuracies of 93%, 91%, 73%, and 89%, respectively, based on the findings of the study. Conclusion: It can be concluded that salivary investigation is a dependable, noninvasive, noninfectious, simple, and quick method for screening the mineral and metabolite values of high-risk patients and monitoring renal failure patients.
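
    The diagnostic-validity measures cited (accuracy, sensitivity, specificity) follow from a 2x2 confusion table, as in the generic sketch below; the cell counts are invented for illustration and are not the study data.

      # Generic sketch of diagnostic-validity measures from a 2x2 confusion table.
      def diagnostic_validity(tp, fp, tn, fn):
          sensitivity = tp / (tp + fn)
          specificity = tn / (tn + fp)
          accuracy = (tp + tn) / (tp + fp + tn + fn)
          return sensitivity, specificity, accuracy

      # Illustrative counts only (not taken from the study):
      print(diagnostic_validity(tp=38, fp=3, tn=38, fn=3))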

  4. User guide for WIACX: A transonic wind-tunnel wall interference assessment and correction procedure for the NTF

    NASA Technical Reports Server (NTRS)

    Garriz, Javier A.; Haigler, Kara J.

    1992-01-01

    A three dimensional transonic Wind-tunnel Interference Assessment and Correction (WIAC) procedure developed specifically for use in the National Transonic Facility (NTF) at NASA Langley Research Center is discussed. This report is a user manual for the codes comprising the correction procedure. It also includes listings of sample procedures and input files for running a sample case and plotting the results.

  5. Positional Accuracy Assessment of the Openstreetmap Buildings Layer Through Automatic Homologous Pairs Detection: the Method and a Case Study

    NASA Astrophysics Data System (ADS)

    Brovelli, M. A.; Minghini, M.; Molinari, M. E.; Zamboni, G.

    2016-06-01

    OpenStreetMap (OSM) is currently the largest openly licensed collection of geospatial data. As OSM is increasingly exploited in a variety of applications, research has paid great attention to the assessment of its quality. This work focuses on assessing the quality of OSM buildings. While most of the studies available in the literature are limited to the evaluation of OSM building completeness, this work proposes an original approach to assess the positional accuracy of OSM buildings based on comparison with a reference dataset. The comparison relies on a quasi-automated detection of homologous pairs on the two datasets. Based on the homologous pairs found, warping algorithms such as affine transformations and multi-resolution splines can be applied to the OSM buildings to generate a new version having an optimal local match to the reference layer. A quality assessment of the OSM buildings of Milan Municipality (Northern Italy), having an area of about 180 km2, is then presented. After computing some measures of completeness, the algorithm based on homologous points is run using the building layer of the official vector cartography of Milan Municipality as the reference dataset. Approximately 100,000 homologous points are found, which show a systematic translation of about 0.4 m in both the X and Y directions and a mean distance of about 0.8 m between the datasets. Besides its efficiency and high degree of automation, the algorithm generates a warped version of OSM buildings which, having by definition a closer match to the reference buildings, can eventually be integrated in the OSM database.
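
    One possible form of the warping step named above is a 2-D affine transformation fitted to the homologous pairs by least squares; the sketch below is a generic illustration under that assumption, not the authors' implementation, and the coordinates are invented.

      # Minimal sketch: fit and apply a 2-D affine transformation from homologous point pairs.
      import numpy as np

      def fit_affine(src, dst):
          # src, dst: (n, 2) arrays of homologous coordinates (e.g. OSM and reference).
          n = src.shape[0]
          A = np.hstack([src, np.ones((n, 1))])               # rows [x, y, 1]
          params, *_ = np.linalg.lstsq(A, dst, rcond=None)    # (3, 2) affine parameters
          return params

      def apply_affine(params, pts):
          return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ params

      # Example: a pure translation of (0.4, 0.4) m is recovered exactly.
      src = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [50.0, 50.0]])
      dst = src + 0.4
      params = fit_affine(src, dst)
      print(apply_affine(params, src) - dst)   # residuals close to zero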

  6. Dosimetric accuracy assessment of a treatment plan verification system for scanned proton beam radiotherapy: one-year experimental results and Monte Carlo analysis of the involved uncertainties.

    PubMed

    Molinelli, S; Mairani, A; Mirandola, A; Vilches Freixas, G; Tessonnier, T; Giordanengo, S; Parodi, K; Ciocca, M; Orecchia, R

    2013-06-07

    During one year of clinical activity at the Italian National Center for Oncological Hadron Therapy 31 patients were treated with actively scanned proton beams. Results of patient-specific quality assurance procedures are presented here which assess the accuracy of a three-dimensional dose verification technique with the simultaneous use of multiple small-volume ionization chambers. To investigate critical cases of major deviations between treatment planning system (TPS) calculated and measured data points, a Monte Carlo (MC) simulation tool was implemented for plan verification in water. Starting from MC results, the impact of dose calculation, dose delivery and measurement set-up uncertainties on plan verification results was analyzed. All resulting patient-specific quality checks were within the acceptance threshold, which was set at 5% for both mean deviation between measured and calculated doses and standard deviation. The mean deviation between TPS dose calculation and measurement was less than ±3% in 86% of the cases. When all three sources of uncertainty were accounted for, simulated data sets showed a high level of agreement, with mean and maximum absolute deviation lower than 2.5% and 5%, respectively.

  7. Dosimetric accuracy assessment of a treatment plan verification system for scanned proton beam radiotherapy: one-year experimental results and Monte Carlo analysis of the involved uncertainties

    NASA Astrophysics Data System (ADS)

    Molinelli, S.; Mairani, A.; Mirandola, A.; Vilches Freixas, G.; Tessonnier, T.; Giordanengo, S.; Parodi, K.; Ciocca, M.; Orecchia, R.

    2013-06-01

    During one year of clinical activity at the Italian National Center for Oncological Hadron Therapy 31 patients were treated with actively scanned proton beams. Results of patient-specific quality assurance procedures are presented here which assess the accuracy of a three-dimensional dose verification technique with the simultaneous use of multiple small-volume ionization chambers. To investigate critical cases of major deviations between treatment planning system (TPS) calculated and measured data points, a Monte Carlo (MC) simulation tool was implemented for plan verification in water. Starting from MC results, the impact of dose calculation, dose delivery and measurement set-up uncertainties on plan verification results was analyzed. All resulting patient-specific quality checks were within the acceptance threshold, which was set at 5% for both mean deviation between measured and calculated doses and standard deviation. The mean deviation between TPS dose calculation and measurement was less than ±3% in 86% of the cases. When all three sources of uncertainty were accounted for, simulated data sets showed a high level of agreement, with mean and maximum absolute deviation lower than 2.5% and 5%, respectively.

  8. A procedure of landscape services assessment based on mosaics of patches and boundaries.

    PubMed

    Martín de Agar, Pilar; Ortega, Marta; de Pablo, Carlos L

    2016-09-15

    We develop a procedure for assessing the environmental value of landscape mosaics that simultaneously considers the values of land use patches and the values of the boundaries between them. These boundaries indicate the ecological interactions between the patches. A landscape mosaic is defined as a set of patches and the boundaries between them and corresponds to a spatial pattern of ecological interactions. The procedure is performed in two steps: (i) an environmental assessment of land use patches by means of a function that integrates values based on the goods and services the patches provide, and (ii) an environmental valuation of mosaics using a function that integrates the environmental values of their patches and the types and frequencies of the boundaries between them. This procedure allows us to measure how changes in land uses or in their spatial arrangement cause variations in the environmental value of landscape mosaics and therefore in that of the whole landscape. The procedure was tested in the Sierra Norte of Madrid (central Spain). The results show that the environmental values of the landscape depend not only on the land use patches but also on the values associated with the pattern of the boundaries within the mosaics. The results also highlight the importance of the boundaries between land use patches as determinants of the goods and services provided by the landscape.

  9. WebRASP: a server for computing energy scores to assess the accuracy and stability of RNA 3D structures

    PubMed Central

    Norambuena, Tomas; Cares, Jorge F.; Capriotti, Emidio; Melo, Francisco

    2013-01-01

    Summary: The understanding of the biological role of RNA molecules has changed. Although it is widely accepted that RNAs play important regulatory roles without necessarily coding for proteins, the functions of many of these non-coding RNAs are unknown. Thus, determining or modeling the 3D structure of RNA molecules as well as assessing their accuracy and stability has become of great importance for characterizing their functional activity. Here, we introduce a new web application, WebRASP, that uses knowledge-based potentials for scoring RNA structures based on distance-dependent pairwise atomic interactions. This web server allows the users to upload a structure in PDB format, select several options to visualize the structure and calculate the energy profile. The server contains online help, tutorials and links to other related resources. We believe this server will be a useful tool for predicting and assessing the quality of RNA 3D structures. Availability and implementation: The web server is available at http://melolab.org/webrasp. It has been tested on the most popular web browsers and requires Java plugin for Jmol visualization. Contact: fmelo@bio.puc.cl PMID:23929030

  10. Accuracy and Usefulness of Select Methods for Assessing Complete Collection of 24-Hour Urine: A Systematic Review.

    PubMed

    John, Katherine A; Cogswell, Mary E; Campbell, Norm R; Nowson, Caryl A; Legetic, Branka; Hennis, Anselm J M; Patel, Sheena M

    2016-05-01

    Twenty-four-hour urine collection is the recommended method for estimating sodium intake. To investigate the strengths and limitations of methods used to assess completion of 24-hour urine collection, the authors systematically reviewed the literature on the accuracy and usefulness of methods vs para-aminobenzoic acid (PABA) recovery (referent). The percentage of incomplete collections, based on PABA, was 6% to 47% (n=8 studies). The sensitivity and specificity for identifying incomplete collection using creatinine criteria (n=4 studies) were 6% to 63% and 57% to 99.7%, respectively. The most sensitive method for removing incomplete collections was a creatinine index <0.7. In pooled analysis (≥2 studies), mean urine creatinine excretion and volume were higher among participants with complete collection (P<.05), whereas self-reported collection time did not differ by completion status. Compared with participants with incomplete collection, mean 24-hour sodium excretion was 19.6 mmol higher (n=1781 specimens, 5 studies) in patients with complete collection. Sodium excretion may be underestimated by inclusion of incomplete 24-hour urine collections. None of the current approaches reliably assess completion of 24-hour urine collection.
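
    A creatinine index below 0.7 flags a potentially incomplete collection; the sketch below illustrates the idea using a common weight-based rule of thumb for expected creatinine excretion, which is an assumption for illustration and not necessarily the formula used in the reviewed studies.

      # Illustrative sketch: flag a possibly incomplete 24-hour urine collection.
      def expected_creatinine_mmol(weight_kg, sex):
          mg_per_kg = 23.0 if sex == "M" else 18.0   # assumed reference excretion values
          return weight_kg * mg_per_kg / 113.12      # convert mg of creatinine to mmol

      def creatinine_index(measured_mmol, weight_kg, sex):
          return measured_mmol / expected_creatinine_mmol(weight_kg, sex)

      def is_likely_incomplete(measured_mmol, weight_kg, sex, threshold=0.7):
          return creatinine_index(measured_mmol, weight_kg, sex) < threshold

      print(is_likely_incomplete(measured_mmol=7.0, weight_kg=70, sex="M"))  # True for this example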

  11. Accuracy of the third molar index for assessing the legal majority of 18 years in Turkish population.

    PubMed

    Gulsahi, Ayse; De Luca, Stefano; Cehreli, S Burcak; Tirali, R Ebru; Cameriere, Roberto

    2016-09-01

    In the last few years, forced and unregistered child marriage has increased widely in Turkey. The aim of this study was to test the accuracy of the cut-off value of 0.08 for the third molar index (I3M) in assessing the legal adult age of 18 years. Digital panoramic images of 293 Turkish children and young adults (165 girls and 128 boys), aged between 14 and 22 years, were analysed. Age distribution gradually decreases as I3M increases in both girls and boys. For girls, the sensitivity was 85.9% (95% CI 77.1-92.8%) and specificity was 100%. The proportion of correctly classified individuals was 92.7%. For boys, the sensitivity was 94.6% (95% CI 88.1-99.8%) and specificity was 100%. The proportion of correctly classified individuals was 97.6%. The cut-off value of 0.08 is a useful criterion for assessing whether a subject is older than 18 years of age.

  12. Assessment of radiation protection of patients and staff in interventional procedures in four Algerian hospitals.

    PubMed

    Khelassi-Toutaoui, N; Toutaoui, A; Merad, A; Sakhri-Brahimi, Z; Baggoura, B; Mansouri, B

    2016-01-01

    This study aimed to assess patient dosimetry in interventional cardiology (IC) and radiology (IR) and the radiation safety of the medical operating staff. For this purpose, four major Algerian hospitals were investigated. The data collected cover radiation protection tools assigned to the operating staff and measured radiation doses to some selected patient populations. The analysis revealed that lead aprons are systematically worn by the staff but not lead eye glasses, and only a single personal monitoring badge is assigned to the operating staff. Measured doses to patients exhibited large variations in the maximum skin dose (MSD) and in the dose area product (DAP). The mean MSD registered values are as follows: 0.20, 0.14 and 1.28 Gy in endoscopic retrograde cholangiopancreatography (ERCP), coronary angiography (CA) and percutaneous transluminal coronary angioplasty (PTCA) procedures, respectively. In PTCA, doses to 3 out of 22 patients (13.6%) reached the threshold value of 2 Gy. The mean DAP recorded values are as follows: 21.6, 60.1 and 126 Gy cm(2) in ERCP, CA and PTCA procedures, respectively. Mean fluoroscopic times are 2.5, 5 and 15 min in ERCP, CA and PTCA procedures, respectively. The correlation between DAP and MSD is fair in CA (r = 0.62) and poor in PTCA (r = 0.28). Fluoroscopic time was moderately correlated with DAP in CA (r = 0.55) and PTCA (r = 0.61) procedures. Local diagnostic reference levels (DRLs) in CA and PTCA procedures have been proposed. In conclusion, this study stresses the need for continuous patient dose monitoring in interventional procedures, with a special emphasis on IC procedures. Common strategies must be undertaken to substantially reduce radiation doses to both patients and medical staff.
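
    The DAP-MSD and time-DAP relationships quoted above are Pearson correlation coefficients; a minimal sketch with invented paired values follows, purely to show the calculation.

      # Minimal sketch: Pearson correlation between dose-area product and maximum skin dose.
      import numpy as np

      dap = np.array([40.0, 85.0, 120.0, 160.0, 210.0])   # Gy cm^2 (illustrative values)
      msd = np.array([0.5, 0.9, 1.3, 1.1, 2.1])           # Gy (illustrative values)
      r = np.corrcoef(dap, msd)[0, 1]
      print(round(r, 2))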

  13. AN ACCURACY ASSESSMENT OF 1992 LANDSAT-MSS DERIVED LAND COVER FOR THE UPPER SAN PEDRO WATERSHED (U.S./MEXICO)

    EPA Science Inventory

    The utility of Digital Orthophoto Quads (DOQS) in assessing the classification accuracy of land cover derived from Landsat MSS data was investigated. Initially, the suitability of DOQs in distinguishing between different land cover classes was assessed using high-resolution airbo...

  14. A limited assessment of the ASEP human reliability analysis procedure using simulator examination results

    SciTech Connect

    Gore, B.R.; Dukelow, J.S. Jr.; Mitts, T.M.; Nicholson, W.L.

    1995-10-01

    This report presents a limited assessment of the conservatism of the Accident Sequence Evaluation Program (ASEP) human reliability analysis (HRA) procedure described in NUREG/CR-4772. In particular, the ASEP post-accident, post-diagnosis, nominal HRA procedure is assessed within the context of an individual's performance of critical tasks on the simulator portion of requalification examinations administered to nuclear power plant operators. An assessment of the degree to which operator performance during simulator examinations is an accurate reflection of operator performance during actual accident conditions was outside the scope of work for this project; therefore, no direct inference can be made from this report about such performance. The data for this study are derived from simulator examination reports from the NRC requalification examination cycle. A total of 4071 critical tasks were identified, of which 45 had been failed. The ASEP procedure was used to estimate human error probability (HEP) values for critical tasks, and the HEP results were compared with the failure rates observed in the examinations. The ASEP procedure was applied by PNL operator license examiners who supplemented the limited information in the examination reports with expert judgment based upon their extensive simulator examination experience. ASEP analyses were performed for a sample of 162 critical tasks selected randomly from the 4071, and the results were used to characterize the entire population. ASEP analyses were also performed for all of the 45 failed critical tasks. Two tests were performed to assess the bias of the ASEP HEPs compared with the data from the requalification examinations. The first compared the average of the ASEP HEP values with the fraction of the population actually failed and found a statistically significant factor-of-two bias on average.

  15. The accuracy of a patient or parent-administered bleeding assessment tool administered in a paediatric haematology clinic.

    PubMed

    Lang, A T; Sturm, M S; Koch, T; Walsh, M; Grooms, L P; O'Brien, S H

    2014-11-01

    Classifying and describing bleeding symptoms is essential in the diagnosis and management of patients with mild bleeding disorders (MBDs). There has been increased interest in the use of bleeding assessment tools (BATs) to more objectively quantify the presence and severity of bleeding symptoms. To date, the administration of BATs has been performed almost exclusively by clinicians; the accuracy of a parent-proxy BAT has not been studied. Our objective was to determine the accuracy of a parent-administered BAT by measuring the level of agreement between parent and clinician responses to the Condensed MCMDM-1VWD Bleeding Questionnaire. Our cross-sectional study included children 0-21 years presenting to a haematology clinic for initial evaluation of a suspected MBD or follow-up evaluation of a previously diagnosed MBD. The parent/caregiver completed a modified version of the BAT; the clinician separately completed the BAT through interview. The mean parent-report bleeding score (BS) was 6.09 (range: -2 to 25); the mean clinician report BS was 4.54 (range: -1 to 17). The mean percentage of agreement across all bleeding symptoms was 78% (mean κ = 0.40; Gwet's AC1 = 0.74). Eighty percent of the population had an abnormal BS (defined as ≥2) when rated by parents and 76% had an abnormal score when rated by clinicians (86% agreement, κ = 0.59, Gwet's AC1 = 0.79). While parents tended to over-report bleeding as compared to clinicians, overall, BSs were similar between groups. These results lend support for further study of a modified proxy-report BAT as a clinical and research tool.
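
    Percent agreement, Cohen's kappa and Gwet's AC1 for two raters and a binary symptom rating can all be computed from the 2x2 agreement table, as sketched below; the cell counts are invented and are not the study data.

      # Sketch of the agreement statistics named in the abstract, for two raters and a
      # binary rating; a = both positive, b/c = discordant cells, d = both negative.
      def percent_agreement(a, b, c, d):
          return (a + d) / (a + b + c + d)

      def cohens_kappa(a, b, c, d):
          n = a + b + c + d
          p_o = (a + d) / n
          p_e = ((a + b) / n) * ((a + c) / n) + ((c + d) / n) * ((b + d) / n)
          return (p_o - p_e) / (1 - p_e)

      def gwets_ac1(a, b, c, d):
          n = a + b + c + d
          p_o = (a + d) / n
          pi1 = ((a + b) / n + (a + c) / n) / 2      # average "positive" proportion
          p_e = 2 * pi1 * (1 - pi1)
          return (p_o - p_e) / (1 - p_e)

      print(percent_agreement(50, 8, 6, 36), cohens_kappa(50, 8, 6, 36), gwets_ac1(50, 8, 6, 36))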

  16. New models for age estimation and assessment of their accuracy using developing mandibular third molar teeth in a Thai population.

    PubMed

    Duangto, P; Iamaroon, A; Prasitwattanaseree, S; Mahakkanukrauh, P; Janhom, A

    2017-03-01

    Age estimation using developing third molar teeth is considered an important and accurate technique for both clinical and forensic practices. The aims of this study were to establish population-specific reference data, to develop age prediction models using mandibular third molar development, to test the accuracy of the resulting models, and to find the probability of persons being at the age thresholds of legal relevance in a Thai population. A total of 1867 digital panoramic radiographs of Thai individuals aged between 8 and 23 years were selected to assess dental age. The mandibular third molar development was divided into nine stages. The stages were evaluated and each stage was transformed into a development score. Quadratic regression was employed to develop age prediction models. Our results show that males reached mandibular third molar root formation stages earlier than females. The models revealed a high correlation coefficient for both left and right mandibular third molar teeth in both sexes (R = 0.945 and 0.944 in males, R = 0.922 and 0.923 in females, respectively). Furthermore, the accuracy of the resulting models was tested in 374 randomly selected cases and showed low error values between the predicted dental age and the chronological age for both left and right mandibular third molar teeth in both sexes (-0.13 and -0.17 years in males, 0.01 and 0.03 years in females, respectively). In Thai samples, when the mandibular third molar teeth reached stage H, the probability of the person being over 18 years was 100% in both sexes.
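
    A quadratic regression of chronological age on a third-molar development score, as described in general terms above, can be fitted as in the following sketch; the score/age pairs are fabricated for illustration and are not the study data.

      # Hypothetical sketch: quadratic age-prediction model from a development score.
      import numpy as np

      scores = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=float)                 # stage scores
      ages   = np.array([9.5, 11.0, 12.4, 13.8, 15.1, 16.5, 18.0, 19.8, 21.5])    # years (made up)

      coeffs = np.polyfit(scores, ages, deg=2)     # age = a*score^2 + b*score + c
      model = np.poly1d(coeffs)

      predicted = model(scores)
      r = np.corrcoef(predicted, ages)[0, 1]       # correlation coefficient of the fit
      print(coeffs, round(r, 3))
      print(model(7.0))                            # predicted age for a score of 7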

  17. Assessment of the accuracy of coupled cluster perturbation theory for open-shell systems. II. Quadruples expansions.

    PubMed

    Eriksen, Janus J; Matthews, Devin A; Jørgensen, Poul; Gauss, Jürgen

    2016-05-21

    We extend our assessment of the potential of perturbative coupled cluster (CC) expansions for a test set of open-shell atoms and organic radicals to the description of quadruple excitations. Namely, the second- through sixth-order models of the recently proposed CCSDT(Q-n) quadruples series [J. J. Eriksen et al., J. Chem. Phys. 140, 064108 (2014)] are compared to the prominent CCSDT(Q) and ΛCCSDT(Q) models. From a comparison of the models in terms of their recovery of total CC singles, doubles, triples, and quadruples (CCSDTQ) energies, we find that the performance of the CCSDT(Q-n) models is independent of the reference used (unrestricted or restricted (open-shell) Hartree-Fock), in contrast to the CCSDT(Q) and ΛCCSDT(Q) models, for which the accuracy is strongly dependent on the spin of the molecular ground state. By further comparing the ability of the models to recover relative CCSDTQ total atomization energies, the discrepancy between them is found to be even more pronounced, stressing how a balanced description of both closed- and open-shell species-as found in the CCSDT(Q-n) models-is indeed of paramount importance if any perturbative CC model is to be of chemical relevance for high-accuracy applications. In particular, the third-order CCSDT(Q-3) model is found to offer an encouraging alternative to the existing choices of quadruples models used in modern computational thermochemistry, since the model is still only of moderate cost, albeit markedly more costly than, e.g., the CCSDT(Q) and ΛCCSDT(Q) models.

  18. Assessment of the accuracy of coupled cluster perturbation theory for open-shell systems. I. Triples expansions.

    PubMed

    Eriksen, Janus J; Matthews, Devin A; Jørgensen, Poul; Gauss, Jürgen

    2016-05-21

    The accuracy at which total energies of open-shell atoms and organic radicals may be calculated is assessed for selected coupled cluster perturbative triples expansions, all of which augment the coupled cluster singles and doubles (CCSD) energy by a non-iterative correction for the effect of triple excitations. Namely, the second- through sixth-order models of the recently proposed CCSD(T-n) triples series [J. J. Eriksen et al., J. Chem. Phys. 140, 064108 (2014)] are compared to the acclaimed CCSD(T) model for both unrestricted as well as restricted open-shell Hartree-Fock (UHF/ROHF) reference determinants. By comparing UHF- and ROHF-based statistical results for a test set of 18 modest-sized open-shell species with comparable RHF-based results, no behavioral differences are observed for the higher-order models of the CCSD(T-n) series in their correlated descriptions of closed- and open-shell species. In particular, we find that the convergence rate throughout the series towards the coupled cluster singles, doubles, and triples (CCSDT) solution is identical for the two cases. For the CCSD(T) model, on the other hand, not only its numerical consistency, but also its established, yet fortuitous cancellation of errors breaks down in the transition from closed- to open-shell systems. The higher-order CCSD(T-n) models (orders n > 3) thus offer a consistent and significant improvement in accuracy relative to CCSDT over the CCSD(T) model, equally for RHF, UHF, and ROHF reference determinants, albeit at an increased computational cost.

  19. Assessment of the accuracy of coupled cluster perturbation theory for open-shell systems. II. Quadruples expansions

    NASA Astrophysics Data System (ADS)

    Eriksen, Janus J.; Matthews, Devin A.; Jørgensen, Poul; Gauss, Jürgen

    2016-05-01

    We extend our assessment of the potential of perturbative coupled cluster (CC) expansions for a test set of open-shell atoms and organic radicals to the description of quadruple excitations. Namely, the second- through sixth-order models of the recently proposed CCSDT(Q-n) quadruples series [J. J. Eriksen et al., J. Chem. Phys. 140, 064108 (2014)] are compared to the prominent CCSDT(Q) and ΛCCSDT(Q) models. From a comparison of the models in terms of their recovery of total CC singles, doubles, triples, and quadruples (CCSDTQ) energies, we find that the performance of the CCSDT(Q-n) models is independent of the reference used (unrestricted or restricted (open-shell) Hartree-Fock), in contrast to the CCSDT(Q) and ΛCCSDT(Q) models, for which the accuracy is strongly dependent on the spin of the molecular ground state. By further comparing the ability of the models to recover relative CCSDTQ total atomization energies, the discrepancy between them is found to be even more pronounced, stressing how a balanced description of both closed- and open-shell species—as found in the CCSDT(Q-n) models—is indeed of paramount importance if any perturbative CC model is to be of chemical relevance for high-accuracy applications. In particular, the third-order CCSDT(Q-3) model is found to offer an encouraging alternative to the existing choices of quadruples models used in modern computational thermochemistry, since the model is still only of moderate cost, albeit markedly more costly than, e.g., the CCSDT(Q) and ΛCCSDT(Q) models.

  20. Benchmarking an operational procedure for rapid flood mapping and risk assessment in Europe

    NASA Astrophysics Data System (ADS)

    Dottori, Francesco; Salamon, Peter; Kalas, Milan; Bianchi, Alessandra; Feyen, Luc

    2016-04-01

    The development of real-time methods for rapid flood mapping and risk assessment is crucial to improve emergency response and mitigate flood impacts. This work describes the benchmarking of an operational procedure for rapid flood risk assessment based on the flood predictions issued by the European Flood Awareness System (EFAS). The daily forecasts produced for the major European river networks are translated into event-based flood hazard maps using a large map catalogue derived from high-resolution hydrodynamic simulations, based on the hydro-meteorological dataset of EFAS. Flood hazard maps are then combined with exposure and vulnerability information, and the impacts of the forecasted flood events are evaluated in near real-time in terms of flood prone areas, potential economic damage, affected population, infrastructures and cities. Extensive testing of the operational procedure is carried out using the catastrophic floods of May 2014 in Bosnia-Herzegovina, Croatia and Serbia. The reliability of the flood mapping methodology is tested against satellite-derived flood footprints, while ground-based estimates of economic damage and affected population are compared against modelled estimates. We evaluated the skill of flood hazard and risk estimations derived from EFAS flood forecasts with different lead times and combinations. The assessment includes a comparison of several alternative approaches to produce and present the information content, in order to meet the requests of EFAS users. The tests provided good results and showed the potential of the developed real-time operational procedure in helping emergency response and management.
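
    The abstract does not name the skill scores used to compare forecast and satellite-derived flood footprints; the sketch below shows one common option, the critical success index, on a toy grid, purely as an illustration of how such a comparison can be quantified.

      # Illustrative sketch: critical success index between two flood-extent masks.
      import numpy as np

      def critical_success_index(forecast, observed):
          # forecast, observed: boolean arrays marking flooded cells on a common grid.
          hits = np.logical_and(forecast, observed).sum()
          misses = np.logical_and(~forecast, observed).sum()
          false_alarms = np.logical_and(forecast, ~observed).sum()
          return hits / (hits + misses + false_alarms)

      forecast = np.array([[True, True, False], [True, False, False]])
      observed = np.array([[True, False, False], [True, True, False]])
      print(critical_success_index(forecast, observed))   # 0.5 for this toy grid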

  1. [CONTROVERSIES REGARDING THE ACCURACY AND LIMITATIONS OF FROZEN SECTION IN THYROID PATHOLOGY: AN EVIDENCE-BASED ASSESSMENT].

    PubMed

    Stanciu-Pop, C; Pop, F C; Thiry, A; Scagnol, I; Maweja, S; Hamoir, E; Beckers, A; Meurisse, M; Grosu, F; Delvenne, Ph

    2015-12-01

    Palpable thyroid nodules are present clinically in 4-7% of the population and their prevalence increases to 50%-67% when using high-resolution neck ultrasonography. By contrast, thyroid carcinoma (TC) represents only 5-20% of these nodules, which underlines the need for an appropriate approach to avoid unnecessary surgery. Frozen section (FS) has been used for more than 40 years in thyroid surgery to establish the diagnosis of malignancy. However, a controversy persists regarding the accuracy of FS, and its place in thyroid pathology has changed with the emergence of fine-needle aspiration (FNA). A PubMed Medline and SpringerLink search was made covering the period from January 2000 to June 2012 to assess the accuracy of FS, its limitations and indications for the diagnosis of thyroid nodules. Twenty publications encompassing 8567 subjects were included in our study. The average value of TC among thyroid nodules in the analyzed studies was 15.5%. FS ability to detect cancer, expressed by its sensitivity (Ss), was 67.5%. More than two thirds of the authors considered FS useful exclusively in the presence of doubtful FNA and for guiding the surgical extension in cases confirmed as malignant by FNA; however, only 33% accepted FS as a routine examination for the management of thyroid nodules. The influence of FS on the surgical reintervention rate in nodular thyroid pathology was considered to be negligible by most studies, whereas 31% of the authors thought that FS has a favorable benefit by decreasing the number of surgical re-interventions. In conclusion, the role of FS in thyroid pathology evolved from a mandatory component for thyroid surgery to an optional examination after a pre-operative FNA cytology. The accuracy of FS seems to provide no sufficient additional benefit and most experts support its use only in the presence of equivocal or suspicious cytological features, for guiding the surgical extension in cases confirmed as malignant by FNA and for the

  2. Sacramento City College Assessment Center Research Report: Assessment Procedures, Fall 1983 - Fall 1984.

    ERIC Educational Resources Information Center

    Haase, M.; Caffrey, Patrick

    Studies and analyses conducted by the Assessment Center at Sacramento City College (SCC) between fall 1983 and fall 1984 provided the data on SCC's students and services which are presented in this report. Following an overview of the significant findings of the year's research efforts, part I sets forth the purpose of the report and part II…

  3. Study on accuracy and interobserver reliability of the assessment of odontoid fracture union using plain radiographs or CT scans

    PubMed Central

    Kolb, Klaus; Zenner, Juliane; Reynolds, Jeremy; Dvorak, Marcel; Acosta, Frank; Forstner, Rosemarie; Mayer, Michael; Tauber, Mark; Auffarth, Alexander; Kathrein, Anton; Hitzl, Wolfgang

    2009-01-01

    In odontoid fracture research, outcome can be evaluated based on validated questionnaires, based on functional outcome in terms of atlantoaxial and total neck rotation, and based on the treatment-related union rate. Data on clinical and functional outcome are still sparse. In contrast, there is abundant information on union rates, although, frequently the rates differ widely. Odontoid union is the most frequently assessed outcome parameter and therefore it is imperative to investigate the interobserver reliability of fusion assessment using radiographs compared to CT scans. Our objective was to identify the diagnostic accuracy of plain radiographs in detecting union and non-union after odontoid fractures and compare this to CT scans as the standard of reference. Complete sets of biplanar plain radiographs and CT scans of 21 patients treated for odontoid fractures were subjected to interobserver assessment of fusion. Image sets were presented to 18 international observers with a mean experience in fusion assessment of 10.7 years. Patients selected had complete radiographic follow-up at a mean of 63.3 ± 53 months. Mean age of the patients at follow-up was 68.2 years. We calculated interobserver agreement of the diagnostic assessment using radiographs compared to using CT scans, as well as the sensitivity and specificity of the radiographic assessment. Agreement on the fusion status using radiographs compared to CT scans ranged between 62 and 90% depending on the observer. Concerning the assessment of non-union and fusion, the mean specificity was 62% and mean sensitivity was 77%. Statistical analysis revealed an agreement of 80-100% between the biplanar radiographs and the reconstructed CT scans in only 48% of cases. In 50% of the patients assessed, there was an agreement of less than 80%. The mean sensitivity and specificity values indicate that radiographs are not a reliable measure to indicate odontoid fracture union or non-union. Regarding experience in years

  4. A procedure for incorporating spatial variability in ecological risk assessment of Dutch river floodplains.

    PubMed

    Kooistra, L; Leuven, R S; Nienhuis, P H; Wehrens, R; Buydens, L M

    2001-09-01

    Floodplain soils along the river Rhine in the Netherlands show a large spatial variability in pollutant concentrations. For an accurate ecological risk characterization of the river floodplains, this heterogeneity has to be included in the ecological risk assessment. In this paper a procedure is presented that incorporates spatial components of exposure into the risk assessment by linking geographical information systems (GIS) with models that estimate exposure for the most sensitive species of a floodplain. The procedure uses readily available site-specific data and is applicable to a wide range of locations and floodplain management scenarios. The procedure is applied to estimate exposure risks to metals for a typical foodweb in the Afferdensche and Deestsche Waarden floodplain along the river Waal, the main branch of the Rhine in the Netherlands. Spatial variability of pollutants is quantified by overlaying appropriate topographic and soil maps, resulting in the definition of homogeneous pollution units. In addition, GIS is used to include the foraging behavior of the exposed terrestrial organisms. Risk estimates from a probabilistic exposure model were used to construct site-specific risk maps for the floodplain. Based on these maps, recommendations for future management of the floodplain can be made that aim at both ecological rehabilitation and an optimal flood defense.

  5. Pharmacokinetic digital phantoms for accuracy assessment of image-based dosimetry in (177)Lu-DOTATATE peptide receptor radionuclide therapy.

    PubMed

    Brolin, Gustav; Gustafsson, Johan; Ljungberg, Michael; Gleisner, Katarina Sjögreen

    2015-08-07

    Patient-specific image-based dosimetry is considered to be a useful tool to limit toxicity associated with peptide receptor radionuclide therapy (PRRT). To facilitate the establishment and reliability of absorbed-dose response relationships, it is essential to assess the accuracy of dosimetry in clinically realistic scenarios. To this end, we developed pharmacokinetic digital phantoms corresponding to patients treated with (177)Lu-DOTATATE. Three individual voxel phantoms from the XCAT population were generated and assigned a dynamic activity distribution based on a compartment model for (177)Lu-DOTATATE, designed specifically for this purpose. The compartment model was fitted to time-activity data from 10 patients, primarily acquired using quantitative scintillation camera imaging. S values for all phantom source-target combinations were calculated based on Monte-Carlo simulations. Combining the S values and time-activity curves, reference values of the absorbed dose to the phantom kidneys, liver, spleen, tumours and whole-body were calculated. The phantoms were used in a virtual dosimetry study, using Monte-Carlo simulated gamma-camera images and conventional methods for absorbed-dose calculations. The characteristics of the SPECT and WB planar images were found to well represent those of real patient images, capturing the difficulties present in image-based dosimetry. The phantoms are expected to be useful for further studies and optimisation of clinical dosimetry in (177)Lu PRRT.
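
    Combining S values with time-integrated activity follows the standard MIRD-style formalism; the sketch below shows that combination with placeholder numbers, not the phantom reference values from the study.

      # Generic sketch: absorbed dose to a target as the sum over source regions of
      # time-integrated activity multiplied by the corresponding S value.
      import numpy as np

      def time_integrated_activity(times_h, activity_mbq):
          # Trapezoidal integration of a time-activity curve (MBq * h).
          return np.trapz(activity_mbq, times_h)

      def absorbed_dose(tia_by_source, s_values_to_target):
          # tia_by_source: {source: MBq*h}; s_values_to_target: {source: Gy / (MBq*h)}.
          return sum(tia_by_source[s] * s_values_to_target[s] for s in tia_by_source)

      t = np.array([1.0, 24.0, 96.0, 168.0])                   # hours after injection
      kidney_activity = np.array([250.0, 180.0, 60.0, 20.0])   # MBq (illustrative values)
      tia = {"kidney": time_integrated_activity(t, kidney_activity)}
      s = {"kidney": 2.0e-4}                                   # Gy per MBq*h, illustrative self-dose S value
      print(absorbed_dose(tia, s))                             # Gy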

  6. Accuracy of a Low-Cost Novel Computer-Vision Dynamic Movement Assessment: Potential Limitations and Future Directions

    NASA Astrophysics Data System (ADS)

    McGroarty, M.; Giblin, S.; Meldrum, D.; Wetterling, F.

    2016-04-01

    The aim of the study was to perform a preliminary validation of a low-cost markerless motion capture system (CAPTURE) against an industry gold standard (Vicon). Measurements of knee valgus and flexion during the performance of a countermovement jump (CMJ) between CAPTURE and Vicon were compared. After correction algorithms were applied to the raw CAPTURE data, acceptable levels of accuracy and precision were achieved. The knee flexion angle measured for three trials using CAPTURE deviated by -3.8° ± 3° (left) and 1.7° ± 2.8° (right) compared to Vicon. The findings suggest that low-cost markerless motion capture has potential to provide an objective method for assessing lower limb jump and landing mechanics in an applied sports setting. Furthermore, the outcome of the study warrants the need for future research to examine more fully the potential implications of the use of low-cost markerless motion capture in the evaluation of dynamic movement for injury prevention.

  7. Mass evolution of Mediterranean, Black, Red, and Caspian Seas from GRACE and altimetry: accuracy assessment and solution calibration

    NASA Astrophysics Data System (ADS)

    Loomis, B. D.; Luthcke, S. B.

    2017-02-01

    We present new measurements of mass evolution for the Mediterranean, Black, Red, and Caspian Seas as determined by the NASA Goddard Space Flight Center (GSFC) GRACE time-variable global gravity mascon solutions. These new solutions are compared to sea surface altimetry measurements of sea level anomalies with steric corrections applied. To assess their accuracy, the GRACE- and altimetry-derived solutions are applied to the set of forward models used by GSFC for processing the GRACE Level-1B datasets, with the resulting inter-satellite range-acceleration residuals providing a useful metric for analyzing solution quality. We also present a differential correction strategy to calibrate the time series of mass change for each of the seas by establishing the strong linear relationship between differences in the forward modeled mass and the corresponding range-acceleration residuals between the two solutions. These calibrated time series of mass change are directly determined from the range-acceleration residuals, effectively providing regionally-tuned GRACE solutions without the need to form and invert normal equations. Finally, the calibrated GRACE time series are discussed and combined with the steric-corrected sea level anomalies to provide new measurements of the unmodeled steric variability for each of the seas over the span of the GRACE observation record. We apply ensemble empirical mode decomposition (EEMD) to adaptively sort the mass and steric components of sea level anomalies into seasonal, non-seasonal, and long-term temporal scales.

  8. Pharmacokinetic digital phantoms for accuracy assessment of image-based dosimetry in 177Lu-DOTATATE peptide receptor radionuclide therapy

    NASA Astrophysics Data System (ADS)

    Brolin, Gustav; Gustafsson, Johan; Ljungberg, Michael; Sjögreen Gleisner, Katarina

    2015-08-01

    Patient-specific image-based dosimetry is considered to be a useful tool to limit toxicity associated with peptide receptor radionuclide therapy (PRRT). To facilitate the establishment and reliability of absorbed-dose response relationships, it is essential to assess the accuracy of dosimetry in clinically realistic scenarios. To this end, we developed pharmacokinetic digital phantoms corresponding to patients treated with 177Lu-DOTATATE. Three individual voxel phantoms from the XCAT population were generated and assigned a dynamic activity distribution based on a compartment model for 177Lu-DOTATATE, designed specifically for this purpose. The compartment model was fitted to time-activity data from 10 patients, primarily acquired using quantitative scintillation camera imaging. S values for all phantom source-target combinations were calculated based on Monte-Carlo simulations. Combining the S values and time-activity curves, reference values of the absorbed dose to the phantom kidneys, liver, spleen, tumours and whole-body were calculated. The phantoms were used in a virtual dosimetry study, using Monte-Carlo simulated gamma-camera images and conventional methods for absorbed-dose calculations. The characteristics of the SPECT and WB planar images were found to well represent those of real patient images, capturing the difficulties present in image-based dosimetry. The phantoms are expected to be useful for further studies and optimisation of clinical dosimetry in 177Lu PRRT.

  9. Assessment of sucrose and ethanol reinforcement: the across-session breakpoint procedure.

    PubMed

    Czachowski, Cristine L; Legg, Brooke H; Samson, Herman H

    2003-01-01

    We have demonstrated previously that the use of an across-session progressive ratio procedure yields breakpoint values for 10% ethanol (10E) that are stable and comparable to those measured for other drugs of abuse [Alcohol. Clin. Exp. Res. 23 (1999) 1580]. The aims of the present experiment were twofold: (1) to determine whether this procedure is sensitive to changes in reinforcer magnitude using a reinforcer previously demonstrated to affect operant responding in a predictable fashion and (2) to determine whether ethanol reinforcement produced similar changes in behavior. Male Long-Evans rats were trained to respond for either 3% sucrose (3S) or 10E using the sipper tube appetitive/consummatory procedure where the completion of a single response requirement results in access to a liquid solution for 20 min. Three successive breakpoints were determined for this "baseline" solution by increasing the response requirement each day until it was not completed. The concentration of the solutions was then manipulated such that breakpoints for the Sucrose Group were assessed for 1%, 3%, 5% and 10% sucrose, and breakpoints for the Ethanol Group were assessed for 2%, 5%, 10% and 20% ethanol. The concentration manipulation showed that sucrose concentration had a greater impact on seeking and consumption than did ethanol concentration. Breakpoints in the Sucrose Group were highly correlated with sucrose concentration, whereas in the Ethanol Group, breakpoint was unrelated to ethanol concentration. Ethanol intake patterns suggested that pharmacological factors might have been regulating intake, and that when physiologically detectable amounts of ethanol were consumed, there was a dissociation between seeking and intake with slightly elevated ethanol seeking. Overall, the across-session breakpoint procedure confirmed that sweet taste was highly related to seeking and consumption, whereas ethanol-motivated responding may be controlled by different regulatory mechanisms that

  10. PLÉIADES Project: Assessment of Georeferencing Accuracy, Image Quality, Pansharpening Performence and Dsm/dtm Quality

    NASA Astrophysics Data System (ADS)

    Topan, Hüseyin; Cam, Ali; Özendi, Mustafa; Oruç, Murat; Jacobsen, Karsten; Taşkanat, Talha

    2016-06-01

    Pléiades 1A and 1B are twin optical satellites of the Optical and Radar Federated Earth Observation (ORFEO) program jointly run by France and Italy. They are the first European satellites with sub-metre resolution. Airbus DS (formerly Astrium Geo) runs the MyGIC (formerly Pléiades Users Group) program to validate Pléiades images worldwide for various application purposes. The authors conducted three projects: one within this program, a second supported by the BEU Scientific Research Project Program, and a third supported by TÜBİTAK. Georeferencing accuracy, image quality, pansharpening performance and Digital Surface Model/Digital Terrain Model (DSM/DTM) quality were investigated in these projects. For these purposes, triplet panchromatic (50 cm Ground Sampling Distance (GSD)) and VNIR (2 m GSD) Pléiades 1A images were investigated over the Zonguldak test site (Turkey), which is urbanised, mountainous and covered by dense forest. The georeferencing accuracy was estimated with a standard deviation in X and Y (SX, SY) in the range of 0.45 m by bias-corrected Rational Polynomial Coefficient (RPC) orientation, using ~170 Ground Control Points (GCPs). 3D standard deviations of ±0.44 m in X, ±0.51 m in Y and ±1.82 m in Z were reached by bias-corrected RPC orientation, in spite of the very narrow angle of convergence. The image quality was also investigated with respect to effective resolution, Signal-to-Noise Ratio (SNR) and blur coefficient. The effective resolution was estimated with a factor slightly below 1.0, meaning that the image quality corresponds to the nominal resolution of 50 cm. The blur coefficients were between 0.39 and 0.46 for the triplet panchromatic images, indicating satisfactory image quality. The SNR is in the range of other comparable spaceborne images, which may be caused by de-noising of the Pléiades images. The pansharpened images were generated by various methods, and are validated by most common statistical
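
    The georeferencing figures quoted above (SX, SY, SZ) are standard deviations of control/check-point residuals after orientation. A minimal sketch of that kind of accuracy computation follows; the residual generation and all numbers are invented for illustration and are not the project's GCP data.

        import numpy as np

        def accuracy_stats(est_xyz, ref_xyz):
            """Per-axis bias, standard deviation and RMSE of check-point residuals (metres)."""
            resid = est_xyz - ref_xyz                  # residuals after orientation
            bias = resid.mean(axis=0)                  # systematic offset per axis
            std = resid.std(axis=0, ddof=1)            # SX, SY, SZ
            rmse = np.sqrt((resid ** 2).mean(axis=0))  # combines bias and dispersion
            return bias, std, rmse

        # Illustrative residuals for ten check points (not project data)
        rng = np.random.default_rng(0)
        ref = rng.uniform(0, 1000, size=(10, 3))
        est = ref + rng.normal([0.1, -0.05, 0.2], [0.45, 0.5, 1.8], size=(10, 3))
        bias, std, rmse = accuracy_stats(est, ref)
        print("bias:", bias, "std (SX, SY, SZ):", std, "rmse:", rmse)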

  11. Readability and Content Assessment of Informed Consent Forms for Medical Procedures in Croatia

    PubMed Central

    Vučemilo, Luka; Borovečki, Ana

    2015-01-01

    Background A high-quality informed consent form is essential for adequate information transfer between physicians and patients. The current status of medical procedure consent forms in clinical practice in Croatia, specifically in terms of readability and content, is unknown. The aim of this study was to assess the readability and the content of informed consent forms for diagnostic and therapeutic procedures used with patients in Croatia. Methods Fifty-two informed consent forms from six Croatian hospitals at the secondary and tertiary health-care level were tested for reading difficulty using the Simple Measure of Gobbledygook (SMOG) formula adjusted for the Croatian language, and their content was analyzed qualitatively. Results The average SMOG grade of the analyzed informed consent forms was 13.25 (SD 1.59, range 10–19). Content analysis revealed that the informed consent forms included a description of risks in 96% of the cases, benefits in 81%, a description of procedures in 78%, alternatives in 52%, risks and benefits of alternatives in 17%, and risks and benefits of not receiving treatment or undergoing procedures in 13%. Conclusions The readability of the evaluated informed consent forms is not appropriate for the general population in Croatia. In a high proportion of cases, the forms failed to include a description of alternatives, the risks and benefits of alternatives, and the risks and benefits of not receiving treatment or undergoing procedures. Data obtained from this research could help in the development and improvement of informed consent forms in Croatia, especially now that Croatian hospitals are undergoing the accreditation process. PMID:26376183
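
    For context, the standard English-language SMOG grade is computed from the number of polysyllabic words (three or more syllables) in a sample of sentences: grade = 1.0430 * sqrt(polysyllables * 30 / sentences) + 3.1291. The study used a version adjusted for Croatian, which is not reproduced here; the sketch below applies the English constants and a crude vowel-group syllable counter purely as an illustration.

        import re
        from math import sqrt

        def count_syllables(word):
            """Very crude vowel-group heuristic; real SMOG tools use language-specific rules."""
            return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

        def smog_grade(text):
            """Standard English SMOG constants (1.0430, 3.1291); the Croatian adjustment
            used in the study is not reproduced here."""
            sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
            words = re.findall(r"[A-Za-z]+", text)
            polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
            return 1.0430 * sqrt(polysyllables * 30 / len(sentences)) + 3.1291

        print(smog_grade("Before the procedure the anaesthesiologist will evaluate you. "
                         "Complications are uncommon but may include infection or bleeding."))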

  12. Assessment of the accuracy of PPP for very-high-frequency dynamic, satellite positioning and earthquake modeling

    NASA Astrophysics Data System (ADS)

    Moschas, F.; Avallone, A.; Moschonas, N.; Saltogianni, V.; Stiros, S.

    2012-04-01

    With the advent of various GPS/GNSS Point Positioning techniques, it became possible to model the dynamic displacement history of specific points during large, and even rather moderate, earthquakes using satellite positioning with 1 Hz and occasionally 10 Hz sampling. While there is evidence that the obtained data are precise, experience from monitoring of engineering structures such as bridges indicates that GPS/GNSS records are contaminated by coloured noise (mostly background noise), even in the case of differential-type analysis of the satellite signals. This made it necessary to assess the results of different PPP processing using supervised learning techniques. Our work was based on a modification of an experiment first made to assess the potential of GPS to measure oscillations of civil engineering structures. A 10 Hz GNSS antenna-receiver unit was mounted on top of a vertical rod that was fixed to the ground and forced to controlled oscillations. The oscillations were also recorded by a robotic theodolite and an accelerometer, and the whole experiment was video-recorded. A second 10 Hz GNSS antenna-receiver unit was left on stable ground in a nearby position. The rod was forced into semi-static motion (bending) and then left to oscillate freely until still, and the whole movement was recorded by all sensors. GNSS data were analyzed both in kinematic mode and in PPP mode, using GIPSY-OASIS II (http://gipsy-oasis.jpl.nasa.gov) (GPS only) and the PPP CRCS facility (GPS + GLONASS). Recorded PPP and differential kinematic coordinates (apparent displacements) were found to follow the real motion but to be contaminated by long-period noise. In contrast, the short-period component of the apparent PPP displacements, obtained using high-pass filtering, was highly consistent with the real motion, with sub-mm mean deviation, though occasionally contaminated by clipping. The assessment of the very-high-frequency GPS noise will provide useful information
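
    The record does not state which high-pass filter was used to isolate the short-period displacement component; the sketch below assumes a zero-phase Butterworth filter with an arbitrary 0.1 Hz cutoff, applied to a synthetic 10 Hz series that mixes a slow drift with a 2 Hz oscillation.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def high_pass(displacement, fs_hz, cutoff_hz=0.1, order=4):
            """Zero-phase Butterworth high-pass; filter type, order and cutoff are assumptions."""
            b, a = butter(order, cutoff_hz, btype="highpass", fs=fs_hz)
            return filtfilt(b, a, displacement)

        # Illustrative 10 Hz series: long-period drift plus a 2 Hz oscillation (metres)
        fs = 10.0
        t = np.arange(0, 60, 1 / fs)
        series = 0.02 * np.sin(2 * np.pi * 0.01 * t) + 0.001 * np.sin(2 * np.pi * 2.0 * t)
        short_period = high_pass(series, fs)   # retains mainly the 2 Hz component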

  13. Using a novel assessment of procedural proficiency provides medical educators insight into blood pressure measurement

    PubMed Central

    Jensen, Brock; Burkart, Rebecca; Levis, Malorie

    2016-01-01

    Objective This investigation was performed to determine how students in a health sciences program apply and explain techniques of blood pressure measurement using a novel assessment, and how their performance changes with greater curricular exposure. Methods An exploratory, qualitative and quantitative study was conducted using a ‘Think Aloud’ design with protocol analysis. Following familiarization, participants performed the task of measuring blood pressure on a reference subject while stating their thought processes. A trained practitioner recorded each participant’s procedural proficiency using a standardized rubric. There were 112 participants in the study, with varying levels of curricular exposure to blood pressure measurement. Results Four trends were noted. First, a marked increase in procedural proficiency followed by a plateau was observed (e.g. released cuff pressure 2-4 mmHg: 10%, 60%, 83%, 82%). Second, steady improvement across groups was observed (e.g. cuff placed snugly/smoothly on upper arm: 20%, 60%, 81%, and 91%). Other trends included a marked improvement with a subsequent decrease, and an improvement without achieving proficiency (e.g. palpation of the brachial pulse: 5%, 90%, 81%, 68%; appropriately sized cuff: 17%, 40%, 33%, 41%, respectively). Qualitatively, interpretation of the transcripts revealed a need to clarify the way the blood pressure procedure is taught in the curriculum. Conclusions The current investigation provides a snapshot of proficiency in blood pressure assessment across a curriculum and highlights considerations for best instructional practices, including the use of Think Aloud. Consequently, medical educators should use qualitative and quantitative assessments concurrently to determine achievement of blood pressure skill proficiency. PMID:27864919

  14. Incorporating mesh-insensitive structural stress into the fatigue assessment procedure of common structural rules for bulk carriers

    NASA Astrophysics Data System (ADS)

    Kim, Seong-Min; Kim, Myung-Hyun

    2015-01-01

    This study introduces a fatigue assessment procedure using the mesh-insensitive structural stress method, based on the Common Structural Rules (CSR) for Bulk Carriers, that takes into account important factors such as mean stress and thickness effects. The fatigue assessment results of the mesh-insensitive structural stress method were compared with the CSR procedure based on equivalent notch stress at major hot-spot points in the area near the ballast hold of a 180 K bulk carrier. The possibility of implementing the mesh-insensitive structural stress method in the fatigue assessment procedure for ship structures is discussed.
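
    In mesh-insensitive structural stress approaches, the stress normal to a weld toe is typically decomposed into a membrane part and a bending part recovered from line forces and line moments along the weld line (sigma_m = f/t, sigma_b = 6m/t^2). The sketch below shows only that single-location textbook decomposition; it is not the CSR procedure or the authors' implementation, and the numerical values are invented.

        def structural_stress(line_force_n_per_mm, line_moment_nmm_per_mm, thickness_mm):
            """Membrane + bending decomposition at one weld-toe location (MPa)."""
            sigma_m = line_force_n_per_mm / thickness_mm                 # membrane component
            sigma_b = 6.0 * line_moment_nmm_per_mm / thickness_mm ** 2   # bending component
            return sigma_m + sigma_b, sigma_m, sigma_b

        total, membrane, bending = structural_stress(200.0, 1500.0, 12.0)  # illustrative only
        print(f"structural stress = {total:.1f} MPa (membrane {membrane:.1f}, bending {bending:.1f})")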

  15. 26 CFR 301.7429-1 - Review of jeopardy and termination assessment and jeopardy levy procedures; information to taxpayer.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    Review of jeopardy and termination assessment and jeopardy levy procedures; information to taxpayer. Not later than 5 days after the day on which an assessment is made under section 6851(a), 6852(a), 6861(a), or 6862, or a levy is...

  16. Position Paper on the Potential Use of Computerized Testing Procedures for the National Assessment of Educational Progress.

    ERIC Educational Resources Information Center

    Reckase, Mark D.

    The current technology of computerized testing is discussed, and a few comments are made on how such technology might be used for assessing school-related skills as part of the National Assessment of Educational Progress (NAEP). The critical feature of computerized assessment procedures is that the test items are presented in an interactive fashion,…

  17. Diagnostic accuracy of refractometer and Brix refractometer to assess failure of passive transfer in calves: protocol for a systematic review and meta-analysis.

    PubMed

    Buczinski, S; Fecteau, G; Chigerwe, M; Vandeweerd, J M

    2016-06-01

    Calves are highly dependent on colostrum (and antibody) intake because they are born agammaglobulinemic. The transfer of passive immunity in calves can be assessed directly by measuring immunoglobulin G (IgG) or indirectly by refractometry or Brix refractometry. The latter are easier to perform routinely in the field. This paper presents a protocol for a systematic review and meta-analysis to assess the diagnostic accuracy of refractometry or Brix refractometry versus IgG measurement as the reference standard test. With this review protocol, we aim to report refractometer and Brix refractometer accuracy in terms of sensitivity and specificity and to quantify the impact of study characteristics on test accuracy.
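
    Diagnostic accuracy in this setting reduces to a 2x2 comparison between the refractometer result (dichotomised at some cut-off) and the IgG reference standard. The cut-offs and counts in the sketch below are purely illustrative assumptions, not review data.

        def diagnostic_accuracy(tp, fp, fn, tn):
            """Sensitivity and specificity of an index test against a reference standard."""
            sensitivity = tp / (tp + fn)    # proportion of truly positive animals detected
            specificity = tn / (tn + fp)    # proportion of truly negative animals ruled out
            return sensitivity, specificity

        # Hypothetical 2x2 table: Brix below an assumed cut-off flagged as failure of passive
        # transfer, low IgG as the reference standard (all counts are illustrative)
        se, sp = diagnostic_accuracy(tp=45, fp=8, fn=5, tn=92)
        print(f"sensitivity = {se:.2f}, specificity = {sp:.2f}")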

  18. The accuracy of the 24-h activity recall method for assessing sedentary behaviour: the physical activity measurement survey (PAMS) project.

    PubMed

    Kim, Youngwon; Welk, Gregory J

    2017-02-01

    Sedentary behaviour (SB) has emerged as a modifiable risk factor, but little is known about measurement errors in assessing SB. The purpose of this study was to determine the validity of the 24-h Physical Activity Recall (24PAR) relative to the SenseWear Armband (SWA) for assessing SB. Each participant (n = 1485) undertook a series of data collection procedures on two randomly selected days: wearing a SWA for a full 24 h, and then completing the telephone-administered 24PAR the following day to recall the past 24 h of activity. Estimates of total sedentary time (TST) were computed without the inclusion of reported or recorded sleep time. Equivalence testing was used to compare estimates of TST. Analyses from equivalence testing showed no significant equivalence of the 24PAR for TST (90% CI: 443.0 and 457.6 min·day⁻¹) relative to the SWA (equivalence zone: 580.7 and 709.8 min·day⁻¹). Bland-Altman plots indicated that individuals who were extremely or minimally sedentary provided relatively comparable sedentary-time estimates between the 24PAR and the SWA. Overweight/obese and/or older individuals were more likely to underestimate sedentary time than normal-weight and/or younger individuals. Measurement errors of the 24PAR varied by the level of sedentary time and by demographic indicators. This evidence informs future work to develop measurement error models to correct for errors in self-reports.
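
    In the mean-equivalence framework reported here, the index method is declared equivalent only if its 90% confidence interval falls entirely within an equivalence zone around the reference mean; the reported CI (443.0–457.6 min·day⁻¹) lies below the zone (580.7–709.8 min·day⁻¹), hence the negative finding. The sketch below illustrates that check; the ±10% zone width matches the reported bounds but is used here as an assumption, and the data are fabricated, not the PAMS values.

        import numpy as np
        from scipy import stats

        def equivalence_check(index_vals, ref_vals, zone_fraction=0.10, ci=0.90):
            """Is the CI of the index-method mean inside a +/- zone_fraction band
            around the reference mean? (zone width is an assumption)"""
            mean_ref = np.mean(ref_vals)
            zone = (mean_ref * (1 - zone_fraction), mean_ref * (1 + zone_fraction))
            mean_idx = np.mean(index_vals)
            half_width = stats.sem(index_vals) * stats.t.ppf(0.5 + ci / 2, len(index_vals) - 1)
            ci_idx = (mean_idx - half_width, mean_idx + half_width)
            return zone[0] <= ci_idx[0] and ci_idx[1] <= zone[1], ci_idx, zone

        # Illustrative call with fabricated minute totals (not study data)
        rng = np.random.default_rng(1)
        recall = rng.normal(450, 120, 200)      # self-reported TST, min/day
        armband = rng.normal(645, 130, 200)     # monitor-based TST, min/day
        print(equivalence_check(recall, armband)[0])   # -> False (not equivalent)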

  19. A Comparative Analysis of Diagnostic Accuracy of Focused Assessment With Sonography for Trauma Performed by Emergency Medicine and Radiology Residents

    PubMed Central

    Zamani, Majid; Masoumi, Babak; Esmailian, Mehrdad; Habibi, Amin; Khazaei, Mehdi; Mohammadi Esfahani, Mohammad

    2015-01-01

    Background: Focused assessment with sonography in trauma (FAST) is a method for prompt detection of abdominal free fluid in patients with abdominal trauma. Objectives: This study was conducted to compare the diagnostic accuracy of FAST performed by emergency medicine residents (EMRs) and radiology residents (RRs) in detecting peritoneal free fluid. Patients and Methods: Patients triaged in the emergency department with blunt abdominal trauma, high-energy trauma, or multiple trauma underwent a FAST examination by EMRs and RRs, who used the same techniques to obtain the standard views. Ultrasound findings for free fluid in the peritoneal cavity for each patient (positive/negative) were compared with the results of computed tomography, operative exploration, or observation as the final outcome. Results: A total of 138 patients were included in the final analysis. Good diagnostic agreement was noted between the results of FAST scans performed by EMRs and RRs (κ = 0.701, P < 0.001), between the results of EMR-performed FAST and the final outcome (κ = 0.830, P < 0.001), and between the results of RR-performed FAST and the final outcome (κ = 0.795, P < 0.001). No significant differences were noted between EMR- and RR-performed FAST regarding sensitivity (84.6% vs 84.6%), specificity (98.4% vs 97.6%), positive predictive value (84.6% vs 84.6%), and negative predictive value (98.4% vs 98.4%). Conclusions: Trained EMRs, like their fellow RRs, are able to perform FAST scans of high diagnostic value in patients with blunt abdominal trauma. PMID:26756009
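
    The agreement statistic reported above is Cohen's kappa, which corrects the observed agreement between two raters for the agreement expected by chance. A minimal sketch for binary readings follows; the reading vectors are invented for illustration and are not the study data.

        def cohens_kappa(a, b):
            """Cohen's kappa for two raters' binary (positive/negative) readings."""
            n = len(a)
            observed = sum(x == y for x, y in zip(a, b)) / n
            p_a_pos, p_b_pos = sum(a) / n, sum(b) / n
            expected = p_a_pos * p_b_pos + (1 - p_a_pos) * (1 - p_b_pos)
            return (observed - expected) / (1 - expected)

        # Illustrative readings (1 = free fluid seen, 0 = not seen), not study data
        emr = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
        rr  = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]
        print(round(cohens_kappa(emr, rr), 3))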

  20. Assessing the Intraoperative Accuracy of Pedicle Screw Placement by Using a Bone-Mounted Miniature Robot System through Secondary Registration

    PubMed Central

    Wu, Chieh-Hsin; Tsai, Cheng-Yu; Chang, Chih-Hui; Lin, Chih-Lung; Tsai, Tai-Hsin

    2016-01-01

    Introduction Pedicle screws are commonly employed to restore spinal stability and correct deformities. The Renaissance robotic system was developed to improve the accuracy of pedicle screw placement. Purpose In this study, we developed an intraoperative classification system for evaluating the accuracy of pedicle screw placements through secondary registration. Furthermore, we evaluated the benefits of using the Renaissance robotic system in pedicle screw placement and postoperative evaluations. Finally, we examined the factors affecting the accuracy of pedicle screw implantation. Results With the Renaissance robotic system, 98.74% of Kirschner-wire (K-wire) placements deviated <3 mm from the planned trajectory. According to our classification system, robot-guided pedicle screw implantation attained an accuracy of 94.00% before repositioning and 98.74% after repositioning. However, the malposition rate before repositioning was 5.99%; among these placements, 4.73% were immediately repositioned using the robot system and 1.26% were manually repositioned after a failed robot repositioning attempt. Most K-wire entry points deviated caudally and laterally. Conclusion The Renaissance robotic system offers high accuracy in pedicle screw placement. Secondary registration improves accuracy by increasing the precision of positioning; moreover, intraoperative evaluation enables immediate repositioning. Furthermore, the K-wire tends to deviate caudally and laterally from the entry point because of skiving, which is characteristic of robot-assisted pedicle screw placement. PMID:27054360