Sample records for total survey error

  1. Total Survey Error & Institutional Research: A Case Study of the University Experience Survey

    ERIC Educational Resources Information Center

    Whiteley, Sonia

    2014-01-01

    Total Survey Error (TSE) is a component of Total Survey Quality (TSQ) that supports the assessment of the extent to which a survey is "fit-for-purpose". While TSQ looks at a number of dimensions, such as relevance, credibility and accessibility, TSE has a more operational focus on accuracy and minimising errors. Mitigating survey…

  2. Neutrino masses and cosmological parameters from a Euclid-like survey: Markov Chain Monte Carlo forecasts including theoretical errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audren, Benjamin; Lesgourgues, Julien; Bird, Simeon

    2013-01-01

    We present forecasts for the accuracy of determining the parameters of a minimal cosmological model and the total neutrino mass based on combined mock data for a future Euclid-like galaxy survey and Planck. We consider two different galaxy surveys: a spectroscopic redshift survey and a cosmic shear survey. We make use of the Markov Chain Monte Carlo (MCMC) technique and assume two sets of theoretical errors. The first error is meant to account for uncertainties in the modelling of the effect of neutrinos on the non-linear galaxy power spectrum and we assume this error to be fully correlated in Fourier space. The second error is meant to parametrize the overall residual uncertainties in modelling the non-linear galaxy power spectrum at small scales, and is conservatively assumed to be uncorrelated and to increase with the ratio of a given scale to the scale of non-linearity. It hence increases with wavenumber and decreases with redshift. With these two assumptions for the errors and assuming further conservatively that the uncorrelated error rises above 2% at k = 0.4 h/Mpc and z = 0.5, we find that a future Euclid-like cosmic shear/galaxy survey achieves a 1-σ error on M_ν close to 32 meV/25 meV, sufficient for detecting the total neutrino mass with good significance. If the residual uncorrelated error indeed rises rapidly towards smaller scales in the non-linear regime, as we have assumed here, then the data on non-linear scales do not increase the sensitivity to the total neutrino mass. Assuming instead a ten times smaller theoretical error with the same scale dependence, the error on the total neutrino mass decreases moderately from σ(M_ν) = 18 meV to 14 meV when mildly non-linear scales with 0.1 h/Mpc < k < 0.6 h/Mpc are included in the analysis of the galaxy survey data.
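    The scale and redshift dependence of the uncorrelated theoretical error can be illustrated with a toy envelope. This is a hedged sketch: the linear growth with k and the (1 + z) scaling below are assumptions chosen only to reproduce the abstract's reference point of 2% at k = 0.4 h/Mpc and z = 0.5, not the authors' actual parametrization.

    ```python
    # Illustrative (assumed) envelope for the uncorrelated theoretical error on the
    # non-linear power spectrum: grows with wavenumber, shrinks with redshift,
    # normalized to 2% at k = 0.4 h/Mpc, z = 0.5.
    def relative_theory_error(k, z, err_ref=0.02, k_ref=0.4, z_ref=0.5):
        """Relative error sigma_P/P at wavenumber k [h/Mpc] and redshift z."""
        return err_ref * (k / k_ref) * (1.0 + z_ref) / (1.0 + z)

    for k in (0.1, 0.2, 0.4, 0.6):
        print(f"k = {k:.1f} h/Mpc, z = 0.5: sigma_P/P = {relative_theory_error(k, 0.5):.3f}")
    ```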

  3. Measuring coverage in MNCH: total survey error and the interpretation of intervention coverage estimates from household surveys.

    PubMed

    Eisele, Thomas P; Rhoda, Dale A; Cutts, Felicity T; Keating, Joseph; Ren, Ruilin; Barros, Aluisio J D; Arnold, Fred

    2013-01-01

    Nationally representative household surveys are increasingly relied upon to measure maternal, newborn, and child health (MNCH) intervention coverage at the population level in low- and middle-income countries. Surveys are the best tool we have for this purpose and are central to national and global decision making. However, all survey point estimates have a certain level of error (total survey error) comprising sampling and non-sampling error, both of which must be considered when interpreting survey results for decision making. In this review, we discuss the importance of considering these errors when interpreting MNCH intervention coverage estimates derived from household surveys, using relevant examples from national surveys to provide context. Sampling error is usually thought of as the precision of a point estimate and is represented by 95% confidence intervals, which are measurable. Confidence intervals can inform judgments about whether estimated parameters are likely to be different from the real value of a parameter. We recommend, therefore, that confidence intervals for key coverage indicators should always be provided in survey reports. By contrast, the direction and magnitude of non-sampling error is almost always unmeasurable, and therefore unknown. Information error and bias are the most common sources of non-sampling error in household survey estimates and we recommend that they should always be carefully considered when interpreting MNCH intervention coverage based on survey data. Overall, we recommend that future research on measuring MNCH intervention coverage should focus on refining and improving survey-based coverage estimates to develop a better understanding of how results should be interpreted and used.
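    As a hedged illustration of the sampling-error component discussed above, the following sketch computes a 95% confidence interval for a coverage proportion, inflating the variance by an assumed design effect to mimic cluster sampling; the numbers are invented for the example, not taken from any survey.

    ```python
    import math

    # Minimal sketch: 95% CI for an intervention-coverage proportion from a
    # household survey, with an assumed design effect for cluster sampling.
    def coverage_ci(p, n, design_effect=2.0, z=1.96):
        se = math.sqrt(p * (1 - p) / n * design_effect)
        return p - z * se, p + z * se

    low, high = coverage_ci(p=0.62, n=1200, design_effect=2.0)  # illustrative values
    print(f"Coverage 62% (95% CI {low:.1%} to {high:.1%})")
    ```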

  4. Measuring Coverage in MNCH: Total Survey Error and the Interpretation of Intervention Coverage Estimates from Household Surveys

    PubMed Central

    Eisele, Thomas P.; Rhoda, Dale A.; Cutts, Felicity T.; Keating, Joseph; Ren, Ruilin; Barros, Aluisio J. D.; Arnold, Fred

    2013-01-01

    Nationally representative household surveys are increasingly relied upon to measure maternal, newborn, and child health (MNCH) intervention coverage at the population level in low- and middle-income countries. Surveys are the best tool we have for this purpose and are central to national and global decision making. However, all survey point estimates have a certain level of error (total survey error) comprising sampling and non-sampling error, both of which must be considered when interpreting survey results for decision making. In this review, we discuss the importance of considering these errors when interpreting MNCH intervention coverage estimates derived from household surveys, using relevant examples from national surveys to provide context. Sampling error is usually thought of as the precision of a point estimate and is represented by 95% confidence intervals, which are measurable. Confidence intervals can inform judgments about whether estimated parameters are likely to be different from the real value of a parameter. We recommend, therefore, that confidence intervals for key coverage indicators should always be provided in survey reports. By contrast, the direction and magnitude of non-sampling error is almost always unmeasurable, and therefore unknown. Information error and bias are the most common sources of non-sampling error in household survey estimates and we recommend that they should always be carefully considered when interpreting MNCH intervention coverage based on survey data. Overall, we recommend that future research on measuring MNCH intervention coverage should focus on refining and improving survey-based coverage estimates to develop a better understanding of how results should be interpreted and used. PMID:23667331

  5. Sources of Error in Substance Use Prevalence Surveys

    PubMed Central

    Johnson, Timothy P.

    2014-01-01

    Population-based estimates of substance use patterns have been regularly reported now for several decades. Concerns with the quality of the survey methodologies employed to produce those estimates date back almost as far. Those concerns have led to a considerable body of research specifically focused on understanding the nature and consequences of survey-based errors in substance use epidemiology. This paper reviews and summarizes that empirical research by organizing it within a total survey error model framework that considers multiple types of representation and measurement errors. Gaps in our knowledge of error sources in substance use surveys and areas needing future research are also identified. PMID:27437511

  6. Mitigating Errors of Representation: A Practical Case Study of the University Experience Survey

    ERIC Educational Resources Information Center

    Whiteley, Sonia

    2014-01-01

    The Total Survey Error (TSE) paradigm provides a framework that supports the effective planning of research, guides decision making about data collection and contextualises the interpretation and dissemination of findings. TSE also allows researchers to systematically evaluate and improve the design and execution of ongoing survey programs and…

  7. Bathymetric surveys at highway bridges crossing the Missouri and Mississippi Rivers near St. Louis, Missouri, 2010

    USGS Publications Warehouse

    Huizinga, Richard J.

    2011-01-01

    The size of the scour holes observed at the surveyed sites likely was affected by the low to moderate flow conditions on the Missouri and Mississippi Rivers at the time of the surveys. The scour holes likely would be larger during conditions of increased flow. Artifacts of horizontal positioning errors were present in the data, but an analysis of the surveys indicated that most of the bathymetric data have a total propagated error of less than 0.33 foot.

  8. A New Filtering and Smoothing Algorithm for Railway Track Surveying Based on Landmark and IMU/Odometer

    PubMed Central

    Jiang, Qingan; Wu, Wenqi; Jiang, Mingming; Li, Yun

    2017-01-01

    High-accuracy railway track surveying is essential for railway construction and maintenance. Traditional approaches based on total station equipment are not efficient enough, since high-precision surveying frequently requires static measurements. This paper proposes a new filtering and smoothing algorithm based on IMU/odometer and landmark integration for railway track surveying. In order to overcome the difficulty of estimating too many error parameters from too few landmark observations, a new model with completely observable error states is established by combining error terms of the system. Based on covariance analysis, the analytical relationship is established between the railway track surveying accuracy requirements and equivalent gyro drifts, including bias instability and random walk noise. Experimental results show that the accuracy of the new filtering and smoothing algorithm for railway track surveying can reach 1 mm (1σ) when using a Ring Laser Gyroscope (RLG)-based Inertial Measurement Unit (IMU) with gyro bias instability of 0.03°/h and random walk noise of 0.005°/h, while position observations of track control network (CPIII) control points are provided by an optical total station at intervals of about 60 m. The proposed approach satisfies the demands of both high accuracy and work efficiency for railway track surveying. PMID:28629191

  9. Questionnaire-based person trip visualization and its integration to quantitative measurements in Myanmar

    NASA Astrophysics Data System (ADS)

    Kimijiama, S.; Nagai, M.

    2016-06-01

    With the development of telecommunications in Myanmar, person trip surveys are expected to shift from conversational questionnaires to GPS surveys. Integrating historical questionnaire data with GPS surveys and visualizing them is very important for evaluating chronological trip changes against socio-economic and environmental events. The objectives of this paper are to: (a) visualize questionnaire-based person trip data, (b) compare the errors between the questionnaire and GPS data sets with respect to sex and age and (c) assess trip behaviour in time series. In total, 345 individual respondents were selected through random stratification, and person trips were assessed for each using both a questionnaire and a GPS survey. Trip information from the questionnaires, such as destinations, was converted using GIS. The results show that errors between the two data sets in the number of trips, total trip distance and total trip duration are 25.5%, 33.2% and 37.2%, respectively. The smaller errors are found among working-age females, mainly employed in project-related activities generated by foreign investment. Trip distance increased year by year. The study concludes that visualizing questionnaire-based person trip data and integrating them with current quantitative measurements is very useful for exploring historical trip changes and understanding impacts from socio-economic events.
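    A minimal sketch of the kind of questionnaire-versus-GPS comparison reported above, assuming per-respondent totals are already aggregated; the metric names and sample values are placeholders, not survey data.

    ```python
    # Percent difference between questionnaire-reported and GPS-derived trip
    # metrics, relative to the GPS value. Values below are illustrative only.
    def percent_error(questionnaire_total, gps_total):
        return abs(questionnaire_total - gps_total) / gps_total * 100.0

    metrics = {"trips": (3.1, 4.2), "distance_km": (8.5, 12.7), "duration_min": (41.0, 65.3)}
    for name, (q, g) in metrics.items():
        print(f"{name}: {percent_error(q, g):.1f}% error")
    ```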

  10. A Survey Data Quality Strategy: The Institutional Research Perspective. IR Applications, Volume 34

    ERIC Educational Resources Information Center

    Liu, Qin

    2012-01-01

    This discussion constructs a survey data quality strategy for institutional researchers in higher education in light of total survey error theory. It starts with describing the characteristics of institutional research and identifying the gaps in literature regarding survey data quality issues in institutional research and then introduces the…

  11. A Survey Data Quality Strategy: The Institutional Research Perspective

    ERIC Educational Resources Information Center

    Liu, Qin

    2009-01-01

    This paper intends to construct a survey data quality strategy for institutional researchers in higher education in light of total survey error theory. It starts with describing the characteristics of institutional research and identifying the gaps in literature regarding survey data quality issues in institutional research. Then it is followed by…

  12. [Longer working hours of pharmacists in the ward resulted in lower medication-related errors--survey of national university hospitals in Japan].

    PubMed

    Matsubara, Kazuo; Toyama, Akira; Satoh, Hiroshi; Suzuki, Hiroshi; Awaya, Toshio; Tasaki, Yoshikazu; Yasuoka, Toshiaki; Horiuchi, Ryuya

    2011-04-01

    It is obvious that pharmacists play a critical role as risk managers in the healthcare system, especially in medication treatment. Hitherto, there has not been a single multicenter survey report describing the effectiveness of clinical pharmacists in preventing medical errors in hospital wards in Japan. We therefore conducted a 1-month survey (October 1-31, 2009) to elucidate the relationship between the number of errors and the working hours of pharmacists in the ward, and to verify whether the assignment of clinical pharmacists to the ward would prevent medical errors. Questionnaire items for the pharmacists at 42 national university hospitals and a medical institute included the total and respective numbers of medication-related errors, beds and pharmacist working hours in 2 internal medicine and 2 surgical departments in each hospital. Regardless of severity, errors were consecutively reported to the Medical Security and Safety Management Section in each hospital. The analysis of errors revealed that longer working hours of pharmacists in the ward resulted in fewer medication-related errors; this was especially significant in the internal medicine ward (where a variety of drugs were used) compared with the surgical ward. However, the nurse assignment mode (nurse/inpatient ratio of 1:7-10) did not influence the error frequency. The results of this survey strongly indicate that the assignment of clinical pharmacists to the ward is essential for promoting medication safety and efficacy.

  13. Errors in clinical laboratories or errors in laboratory medicine?

    PubMed

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes in pre- and post-examination steps must be minimized to guarantee the total quality of laboratory services.

  14. New Methods for Assessing and Reducing Uncertainty in Microgravity Studies

    NASA Astrophysics Data System (ADS)

    Giniaux, J. M.; Hooper, A. J.; Bagnardi, M.

    2017-12-01

    Microgravity surveying, also known as dynamic or 4D gravimetry, is a time-dependent geophysical method used to detect mass fluctuations within the shallow crust by analysing temporal changes in relative gravity measurements. We present a detailed uncertainty analysis of temporal gravity measurements, considering for the first time all possible error sources, including tilt, errors in drift estimation and timing errors. We find that some error sources that are commonly ignored can have a significant impact on the total error budget, and it is therefore likely that some gravity signals have been misinterpreted in previous studies. Our analysis leads to new methods for reducing some of the uncertainties associated with residual gravity estimation. In particular, we propose different approaches to drift estimation and the free-air correction depending on the survey set-up. We also provide formulae to recalculate uncertainties for past studies and lay out a framework for best practice in future studies. We demonstrate our new approach on volcanic case studies, including Kilauea in Hawaii and Askja in Iceland.

  15. Evaluation of LiDAR-Acquired Bathymetric and Topographic Data Accuracy in Various Hydrogeomorphic Settings in the Lower Boise River, Southwestern Idaho, 2007

    USGS Publications Warehouse

    Skinner, Kenneth D.

    2009-01-01

    Elevation data in riverine environments can be used in various applications for which different levels of accuracy are required. The Experimental Advanced Airborne Research LiDAR (Light Detection and Ranging) - or EAARL - system was used to obtain topographic and bathymetric data along the lower Boise River, southwestern Idaho, for use in hydraulic and habitat modeling. The EAARL data were post-processed into bare earth and bathymetric raster and point datasets. Concurrently with the EAARL data collection, real-time kinematic global positioning system and total station ground-survey data were collected in three areas within the lower Boise River basin to assess the accuracy of the EAARL elevation data in different hydrogeomorphic settings. The accuracies of the EAARL-derived elevation data, determined in open, flat terrain to provide an optimal vertical comparison surface, had root mean square errors ranging from 0.082 to 0.138 m. Accuracies for bank, floodplain, and in-stream bathymetric data had root mean square errors ranging from 0.090 to 0.583 m. The greater root mean square errors for the latter data are the result of high levels of turbidity in the downstream ground-survey area, dense tree canopy, and horizontal location discrepancies between the EAARL and ground-survey data in steeply sloping areas such as riverbanks. The EAARL point to ground-survey comparisons produced results similar to those for the EAARL raster to ground-survey comparisons, indicating that the interpolation of the EAARL points to rasters did not introduce significant additional error. The mean percent error for the wetted cross-sectional areas of the two upstream ground-survey areas was 1 percent. The mean percent error increases to -18 percent if the downstream ground-survey area is included, reflecting the influence of turbidity in that area.
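    The accuracy figures above are root mean square errors of co-located elevation pairs. A minimal sketch of that metric, with placeholder elevations rather than EAARL or ground-survey data:

    ```python
    import numpy as np

    # Root mean square error between remotely sensed elevations and co-located
    # ground-survey elevations. The arrays below are placeholders.
    def rmse(lidar_z, ground_z):
        diff = np.asarray(lidar_z, dtype=float) - np.asarray(ground_z, dtype=float)
        return float(np.sqrt(np.mean(diff ** 2)))

    lidar_z  = [701.12, 700.98, 702.35, 701.77]
    ground_z = [701.05, 701.10, 702.20, 701.90]
    print(f"RMSE = {rmse(lidar_z, ground_z):.3f} m")
    ```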

  16. The effect of short ground vegetation on terrestrial laser scans at a local scale

    NASA Astrophysics Data System (ADS)

    Fan, Lei; Powrie, William; Smethurst, Joel; Atkinson, Peter M.; Einstein, Herbert

    2014-09-01

    Terrestrial laser scanning (TLS) can record a large amount of accurate topographical information with a high spatial accuracy over a relatively short period of time. These features suggest it is a useful tool for topographical survey and surface deformation detection. However, the use of TLS to survey a terrain surface is still challenging in the presence of dense ground vegetation. The bare ground surface may not be illuminated due to signal occlusion caused by vegetation. This paper investigates vegetation-induced elevation error in TLS surveys at a local scale and its spatial pattern. An open, relatively flat area vegetated with dense grass was surveyed repeatedly under several scan conditions. A total station was used to establish an accurate representation of the bare ground surface. Local-highest-point and local-lowest-point filters were applied to the point clouds acquired for deriving vegetation height and vegetation-induced elevation error, respectively. The effects of various factors (for example, vegetation height, edge effects, incidence angle, scan resolution and location) on the error caused by vegetation are discussed. The results are of use in the planning and interpretation of TLS surveys of vegetated areas.

  17. A 1400-MHz survey of 1478 Abell clusters of galaxies

    NASA Technical Reports Server (NTRS)

    Owen, F. N.; White, R. A.; Hilldrup, K. C.; Hanisch, R. J.

    1982-01-01

    Observations of 1478 Abell clusters of galaxies with the NRAO 91-m telescope at 1400 MHz are reported. The measured beam shape was deconvolved from the measured source Gaussian fits in order to estimate the source size and position angle. All detected sources within 0.5 corrected Abell cluster radii are listed, including the cluster number, richness class, distance class, magnitude of the tenth brightest galaxy, redshift estimate, corrected cluster radius in arcmin, right ascension and error, declination and error, total flux density and error, and angular structure for each source.

  18. ACSPRI 2014 4th International Social Science Methodology Conference Report

    DTIC Science & Technology

    2015-04-01

    Validity, trustworthiness and rigour: quality and the idea of qualitative research. Journal of Advanced Nursing, 304-310. Spencer, L., Ritchie, J. ... increasing data quality; the Total Survey Error framework; multi-modal on-line surveying; quality frameworks for assessing qualitative research; and ... provided an overview of the current perspectives on causal claims in qualitative research. Three approaches to generating plausible causal

  19. Experimental investigation of false positive errors in auditory species occurrence surveys

    USGS Publications Warehouse

    Miller, David A.W.; Weir, Linda A.; McClintock, Brett T.; Grant, Evan H. Campbell; Bailey, Larissa L.; Simons, Theodore R.

    2012-01-01

    False positive errors are a significant component of many ecological data sets, which in combination with false negative errors, can lead to severe biases in conclusions about ecological systems. We present results of a field experiment where observers recorded observations for known combinations of electronically broadcast calling anurans under conditions mimicking field surveys to determine species occurrence. Our objectives were to characterize false positive error probabilities for auditory methods based on a large number of observers, to determine if targeted instruction could be used to reduce false positive error rates, and to establish useful predictors of among-observer and among-species differences in error rates. We recruited 31 observers, ranging in abilities from novice to expert, that recorded detections for 12 species during 180 calling trials (66,960 total observations). All observers made multiple false positive errors and on average 8.1% of recorded detections in the experiment were false positive errors. Additional instruction had only minor effects on error rates. After instruction, false positive error probabilities decreased by 16% for treatment individuals compared to controls with broad confidence interval overlap of 0 (95% CI: -46 to 30%). This coincided with an increase in false negative errors due to the treatment (26%; -3 to 61%). Differences among observers in false positive and in false negative error rates were best predicted by scores from an online test and a self-assessment of observer ability completed prior to the field experiment. In contrast, years of experience conducting call surveys was a weak predictor of error rates. False positive errors were also more common for species that were played more frequently, but were not related to the dominant spectral frequency of the call. Our results corroborate other work that demonstrates false positives are a significant component of species occurrence data collected by auditory methods. Instructing observers to only report detections they are completely certain are correct is not sufficient to eliminate errors. As a result, analytical methods that account for false positive errors will be needed, and independent testing of observer ability is a useful predictor for among-observer variation in observation error rates.
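    A hedged sketch of how false positive and false negative rates can be tallied when the broadcast (true) species composition of each trial is known; the observation tuples below are synthetic, not the experiment's data.

    ```python
    # Each observation pairs a truth flag (species actually broadcast?) with an
    # observer's detection flag. Rates are computed over the relevant denominators.
    def error_rates(observations):
        fp = sum(1 for truth, detected in observations if detected and not truth)
        fn = sum(1 for truth, detected in observations if truth and not detected)
        absent = sum(1 for truth, _ in observations if not truth)
        present = sum(1 for truth, _ in observations if truth)
        return fp / absent, fn / present

    obs = [(True, True), (True, False), (False, True), (False, False)] * 25  # synthetic
    fp_rate, fn_rate = error_rates(obs)
    print(f"false positive rate = {fp_rate:.2%}, false negative rate = {fn_rate:.2%}")
    ```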

  20. Experimental flights using a small unmanned aircraft system for mapping emergent sandbars

    USGS Publications Warehouse

    Kinzel, Paul J.; Bauer, Mark A.; Feller, Mark R.; Holmquist-Johnson, Christopher; Preston, Todd

    2015-01-01

    The US Geological Survey and Parallel Inc. conducted experimental flights with the Tarantula Hawk (T-Hawk) unmanned aircraft system (UAS) at the Dyer and Cottonwood Ranch properties located along reaches of the Platte River near Overton, Nebraska, in July 2013. We equipped the T-Hawk UAS platform with a consumer-grade digital camera to collect imagery of emergent sandbars in the reaches and used photogrammetric software and surveyed control points to generate orthophotographs and digital elevation models (DEMs) of the reaches. To optimize the image alignment process, we retained and/or eliminated tie points based on their relative errors and spatial resolution, thereby minimizing the total error in the project. Additionally, we collected seven transects that traversed emergent sandbars concurrently with global positioning system location data to evaluate the accuracy of the UAS survey methodology. The root mean square errors for the elevation of emergent points along each transect across the DEMs ranged from 0.04 to 0.12 m. If adequate survey control is established, a UAS combined with photogrammetry software shows promise for accurate monitoring of emergent sandbar morphology and river management activities in short (1–2 km) river reaches.

  1. Do surveys with paper and electronic devices differ in quality and cost? Experience from the Rufiji Health and demographic surveillance system in Tanzania.

    PubMed

    Mukasa, Oscar; Mushi, Hildegalda P; Maire, Nicolas; Ross, Amanda; de Savigny, Don

    2017-01-01

    Data entry at the point of collection using mobile electronic devices may make data-handling processes more efficient and cost-effective, but there is little literature to document and quantify gains, especially for longitudinal surveillance systems. The objective was to examine the potential of mobile electronic devices compared with paper-based tools in health data collection. Using data from 961 households from the Rufiji Household and Demographic Survey in Tanzania, the quality and costs of data collected on paper forms and electronic devices were compared. We also documented, using qualitative approaches, the views of field workers, whom we called 'enumerators', and household members on the use of both methods. Existing administrative records were combined with logistics expenditure measured directly from comparison households to approximate annual costs per 1,000 households surveyed. Errors were detected in 17% (166) of households for the paper records and 2% (15) for the electronic records (p < 0.001). There were differences in the types of errors (p = 0.03). Of the errors occurring, a higher proportion were due to accuracy in paper surveys (79%, 95% CI: 72%, 86%) compared with electronic surveys (58%, 95% CI: 29%, 87%). Errors in electronic surveys were more likely to be related to completeness (32%, 95% CI 12%, 56%) than in paper surveys (11%, 95% CI: 7%, 17%). The median duration of the interviews ('enumeration') per household was 9.4 minutes (90% central range 6.4, 12.2) for paper and 8.3 (6.1, 12.0) for electronic surveys (p = 0.001). Surveys using electronic tools, compared with paper-based tools, were less costly by 28% for recurrent and 19% for total costs. Although there were technical problems with electronic devices, there was good acceptance of both methods by enumerators and members of the community. Our findings support the use of mobile electronic devices for large-scale longitudinal surveys in resource-limited settings.
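    The headline quality comparison above (166/961 versus 15/961 households with errors) can be reproduced in flavor with a simple two-proportion z-test; this is an illustrative sketch, not necessarily the authors' exact analysis.

    ```python
    import math

    # Two-proportion z-test comparing households with at least one error under
    # paper versus electronic capture (counts taken from the abstract).
    def two_proportion_z(x1, n1, x2, n2):
        p1, p2 = x1 / n1, x2 / n2
        p_pool = (x1 + x2) / (n1 + n2)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
        return (p1 - p2) / se

    z = two_proportion_z(166, 961, 15, 961)
    print(f"z = {z:.1f}")  # far beyond 3.29, consistent with p < 0.001
    ```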

  2. Introduction to the Application of Web-Based Surveys.

    ERIC Educational Resources Information Center

    Timmerman, Annemarie

    This paper discusses some basic assumptions and issues concerning web-based surveys. Discussion includes: assumptions regarding cost and ease of use; disadvantages of web-based surveys, concerning the inability to compensate for four common errors of survey research: coverage error, sampling error, measurement error and nonresponse error; and…

  3. Effect of survey design and catch rate estimation on total catch estimates in Chinook salmon fisheries

    USGS Publications Warehouse

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2012-01-01

    Roving–roving and roving–access creel surveys are the primary techniques used to obtain information on harvest of Chinook salmon Oncorhynchus tshawytscha in Idaho sport fisheries. Once interviews are conducted using roving–roving or roving–access survey designs, mean catch rate can be estimated with the ratio-of-means (ROM) estimator, the mean-of-ratios (MOR) estimator, or the MOR estimator with exclusion of short-duration (≤0.5 h) trips. Our objective was to examine the relative bias and precision of total catch estimates obtained from use of the two survey designs and three catch rate estimators for Idaho Chinook salmon fisheries. Information on angling populations was obtained by direct visual observation of portions of Chinook salmon fisheries in three Idaho river systems over an 18-d period. Based on data from the angling populations, Monte Carlo simulations were performed to evaluate the properties of the catch rate estimators and survey designs. Among the three estimators, the ROM estimator provided the most accurate and precise estimates of mean catch rate and total catch for both roving–roving and roving–access surveys. On average, the root mean square error of simulated total catch estimates was 1.42 times greater and relative bias was 160.13 times greater for roving–roving surveys than for roving–access surveys. Length-of-stay bias and nonstationary catch rates in roving–roving surveys both appeared to affect catch rate and total catch estimates. Our results suggest that use of the ROM estimator in combination with an estimate of angler effort provided the least biased and most precise estimates of total catch for both survey designs. However, roving–access surveys were more accurate than roving–roving surveys for Chinook salmon fisheries in Idaho.
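    A minimal sketch of the three catch-rate estimators compared above, applied to (hours fished, fish caught) interview pairs and scaled by an independent effort estimate; all values are placeholders.

    ```python
    import numpy as np

    # trips: list of (hours fished, fish caught) pairs from angler interviews.
    def ratio_of_means(trips):
        hours = sum(h for h, _ in trips)
        catch = sum(c for _, c in trips)
        return catch / hours                      # ROM: total catch / total hours

    def mean_of_ratios(trips, min_hours=0.0):
        rates = [c / h for h, c in trips if h > min_hours]
        return float(np.mean(rates))              # MOR: mean of per-trip rates

    trips = [(0.5, 0), (1.5, 1), (3.0, 2), (4.0, 1), (2.0, 0)]   # placeholder interviews
    effort_hours = 5200.0                                        # assumed effort estimate
    for name, rate in [("ROM", ratio_of_means(trips)),
                       ("MOR", mean_of_ratios(trips)),
                       ("MOR excl. <=0.5 h", mean_of_ratios(trips, min_hours=0.5))]:
        print(f"{name}: catch rate {rate:.3f} fish/h, total catch ~ {rate * effort_hours:.0f}")
    ```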

  4. Measurement of spine and total body mineral by dual-photon absorptiometry

    NASA Technical Reports Server (NTRS)

    Mazess, R. B.; Young, D.

    1983-01-01

    The use of Gd-153 dual-photon absorptiometry at 43 and 100 keV to measure individual-bone and total-body bone minerals is discussed in a survey of recent studies on humans, phantoms, and monkeys. Precision errors as low as 1 percent have been achieved in vivo, suggesting the use of sequential measurements in studies of immobilization and space-flight effects.

  5. Analysis of elevation changes detected from multi-temporal LiDAR surveys in forested landslide terrain in western Oregon

    USGS Publications Warehouse

    Burns, W.J.; Coe, J.A.; Kaya, B.S.; Ma, Liwang

    2010-01-01

    We examined elevation changes detected from two successive sets of Light Detection and Ranging (LiDAR) data in the northern Coast Range of Oregon. The first set of LiDAR data was acquired during leaf-on conditions and the second set during leaf-off conditions. We were able to successfully identify and map active landslides using a differential digital elevation model (DEM) created from the two LiDAR data sets, but this required the use of thresholds (0.50 and 0.75 m) to remove noise from the differential elevation data, visual pattern recognition of landslide-induced elevation changes, and supplemental QuickBird satellite imagery. After mapping, we field-verified 88 percent of the landslides that we had mapped with high confidence, but we could not detect active landslides with elevation changes of less than 0.50 m. Volumetric calculations showed that a total of about 18,100 m3 of material was missing from landslide areas, probably as a result of systematic negative elevation errors in the differential DEM and as a result of removal of material by erosion and transport. We also examined the accuracies of 285 leaf-off LiDAR elevations at four landslide sites using Global Positioning System and total station surveys. A comparison of LiDAR and survey data indicated an overall root mean square error of 0.50 m, a maximum error of 2.21 m, and a systematic error of 0.09 m. LiDAR ground-point densities were lowest in areas with young conifer forests and deciduous vegetation, which resulted in extensive interpolations of elevations in the leaf-on, bare-earth DEM. For optimal use of multi-temporal LiDAR data in forested areas, we recommend that all data sets be flown during leaf-off seasons.
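    A hedged sketch of the DEM-differencing workflow described above: subtract successive DEMs, suppress changes below a noise threshold, and integrate the remainder to a volume. The grid, threshold handling, and synthetic landslide are assumptions for illustration, not the authors' processing chain.

    ```python
    import numpy as np

    # Difference two gridded DEMs, zero out changes below a noise threshold, and
    # convert the remaining elevation change to a net volume using the cell area.
    def landslide_volume_change(dem_new, dem_old, cell_size_m=1.0, threshold_m=0.5):
        diff = dem_new - dem_old
        diff[np.abs(diff) < threshold_m] = 0.0          # remove sub-threshold noise
        return float(diff.sum() * cell_size_m ** 2)     # net volume change, m^3

    rng = np.random.default_rng(0)
    dem_old = rng.normal(100.0, 0.1, size=(50, 50))     # synthetic terrain
    dem_new = dem_old + rng.normal(0.0, 0.1, size=(50, 50))
    dem_new[10:20, 10:20] -= 1.2                        # synthetic landslide evacuation
    print(f"net volume change: {landslide_volume_change(dem_new, dem_old):.0f} m^3")
    ```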

  6. Evaluation of airborne topographic lidar for quantifying beach changes

    USGS Publications Warehouse

    Sallenger, A.H.; Krabill, W.B.; Swift, R.N.; Brock, J.; List, J.; Hansen, M.; Holman, R.A.; Manizade, S.; Sontag, J.; Meredith, A.; Morgan, K.; Yunkel, J.K.; Frederick, E.B.; Stockdon, H.

    2003-01-01

    A scanning airborne topographic lidar was evaluated for its ability to quantify beach topography and changes during the Sandy Duck experiment in 1997 along the North Carolina coast. Elevation estimates, acquired with NASA's Airborne Topographic Mapper (ATM), were compared to elevations measured with three types of ground-based measurements - 1) differential GPS equipped all-terrain vehicle (ATV) that surveyed a 3-km reach of beach from the shoreline to the dune, 2) GPS antenna mounted on a stadia rod used to intensely survey a different 100 m reach of beach, and 3) a second GPS-equipped ATV that surveyed a 70-km-long transect along the coast. Over 40,000 individual intercomparisons between ATM and ground surveys were calculated. RMS vertical differences associated with the ATM when compared to ground measurements ranged from 13 to 19 cm. Considering all of the intercomparisons together, RMS ??? 15 cm. This RMS error represents a total error for individual elevation estimates including uncertainties associated with random and mean errors. The latter was the largest source of error and was attributed to drift in differential GPS. The ??? 15 cm vertical accuracy of the ATM is adequate to resolve beach-change signals typical of the impact of storms. For example, ATM surveys of Assateague Island (spanning the border of MD and VA) prior to and immediately following a severe northeaster showed vertical beach changes in places greater than 2 m, much greater than expected errors associated with the ATM. A major asset of airborne lidar is the high spatial data density. Measurements of elevation are acquired every few m2 over regional scales of hundreds of kilometers. Hence, many scales of beach morphology and change can be resolved, from beach cusps tens of meters in wavelength to entire coastal cells comprising tens to hundreds of kilometers of coast. Topographic lidars similar to the ATM are becoming increasingly available from commercial vendors and should, in the future, be widely used in beach surveying.

  7. A comparison of acoustic monitoring methods for common anurans of the northeastern United States

    USGS Publications Warehouse

    Brauer, Corinne; Donovan, Therese; Mickey, Ruth M.; Katz, Jonathan; Mitchell, Brian R.

    2016-01-01

    Many anuran monitoring programs now include autonomous recording units (ARUs). These devices collect audio data for extended periods of time with little maintenance and at sites where traditional call surveys might be difficult. Additionally, computer software programs have grown increasingly accurate at automatically identifying the calls of species. However, increased automation may cause increased error. We collected 435 min of audio data with 2 types of ARUs at 10 wetland sites in Vermont and New York, USA, from 1 May to 1 July 2010. For each minute, we determined presence or absence of 4 anuran species (Hyla versicolor, Pseudacris crucifer, Anaxyrus americanus, and Lithobates clamitans) using 1) traditional human identification versus 2) computer-mediated identification with software package, Song Scope® (Wildlife Acoustics, Concord, MA). Detections were compared with a data set consisting of verified calls in order to quantify false positive, false negative, true positive, and true negative rates. Multinomial logistic regression analysis revealed a strong (P < 0.001) 3-way interaction between the ARU recorder type, identification method, and focal species, as well as a trend in the main effect of rain (P = 0.059). Overall, human surveyors had the lowest total error rate (<2%) compared with 18–31% total errors with automated methods. Total error rates varied by species, ranging from 4% for A. americanus to 26% for L. clamitans. The presence of rain may reduce false negative rates. For survey minutes where anurans were known to be calling, the odds of a false negative were increased when fewer individuals of the same species were calling.

  8. Survey of Radar Refraction Error Corrections

    DTIC Science & Technology

    2016-11-01

    Document front matter only: RCC Document 266-16, Survey of Radar Refraction Error Corrections, prepared by the Electronic Trajectory Measurements Group, November 2016 (Distribution A).

  9. Characterization of measurement errors using structure-from-motion and photogrammetry to measure marine habitat structural complexity.

    PubMed

    Bryson, Mitch; Ferrari, Renata; Figueira, Will; Pizarro, Oscar; Madin, Josh; Williams, Stefan; Byrne, Maria

    2017-08-01

    Habitat structural complexity is one of the most important factors in determining the makeup of biological communities. Recent advances in structure-from-motion and photogrammetry have resulted in a proliferation of 3D digital representations of habitats from which structural complexity can be measured. Little attention has been paid to quantifying the measurement errors associated with these techniques, including the variability of results under different surveying and environmental conditions. Such errors have the potential to confound studies that compare habitat complexity over space and time. This study evaluated the accuracy, precision, and bias in measurements of marine habitat structural complexity derived from structure-from-motion and photogrammetric measurements using repeated surveys of artificial reefs (with known structure) as well as natural coral reefs. We quantified measurement errors as a function of survey image coverage, actual surface rugosity, and the morphological community composition of the habitat-forming organisms (reef corals). Our results indicated that measurements could be biased by up to 7.5% of the total observed ranges of structural complexity based on the environmental conditions present during any particular survey. Positive relationships were found between measurement errors and actual complexity, and the strength of these relationships was increased when coral morphology and abundance were also used as predictors. The numerous advantages of structure-from-motion and photogrammetry techniques for quantifying and investigating marine habitats will mean that they are likely to replace traditional measurement techniques (e.g., chain-and-tape). To this end, our results have important implications for data collection and the interpretation of measurements when examining changes in habitat complexity using structure-from-motion and photogrammetry.
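    A minimal sketch of the error metrics named in the abstract (bias, precision, and accuracy) computed from repeated surveys of a structure with known complexity; the rugosity values are placeholders.

    ```python
    import numpy as np

    # bias = mean error, precision = SD of repeated measurements, accuracy = RMSE
    # against the known (true) value of the surveyed structure.
    def error_summary(measured, true_value):
        measured = np.asarray(measured, dtype=float)
        bias = float(np.mean(measured - true_value))
        precision = float(np.std(measured, ddof=1))
        accuracy = float(np.sqrt(np.mean((measured - true_value) ** 2)))
        return bias, precision, accuracy

    repeated_rugosity = [1.42, 1.47, 1.40, 1.45, 1.43]   # placeholder repeated surveys
    bias, precision, accuracy = error_summary(repeated_rugosity, true_value=1.38)
    print(f"bias={bias:+.3f}, precision(SD)={precision:.3f}, RMSE={accuracy:.3f}")
    ```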

  10. Validation Relaxation: A Quality Assurance Strategy for Electronic Data Collection

    PubMed Central

    Gordon, Nicholas; Griffiths, Thomas; Kraemer, John D; Siedner, Mark J

    2017-01-01

    Background: The use of mobile devices for data collection in developing world settings is becoming increasingly common and may offer advantages in data collection quality and efficiency relative to paper-based methods. However, mobile data collection systems can hamper many standard quality assurance techniques due to the lack of a hardcopy backup of data. Consequently, mobile health data collection platforms have the potential to generate datasets that appear valid, but are susceptible to unidentified database design flaws, areas of miscomprehension by enumerators, and data recording errors. Objective: We describe the design and evaluation of a strategy for estimating data error rates and assessing enumerator performance during electronic data collection, which we term “validation relaxation.” Validation relaxation involves the intentional omission of data validation features for select questions to allow for data recording errors to be committed, detected, and monitored. Methods: We analyzed data collected during a cluster sample population survey in rural Liberia using an electronic data collection system (Open Data Kit). We first developed a classification scheme for types of detectable errors and validation alterations required to detect them. We then implemented the following validation relaxation techniques to enable data error conduct and detection: intentional redundancy, removal of “required” constraint, and illogical response combinations. This allowed for up to 11 identifiable errors to be made per survey. The error rate was defined as the total number of errors committed divided by the number of potential errors. We summarized crude error rates and estimated changes in error rates over time for both individuals and the entire program using logistic regression. Results: The aggregate error rate was 1.60% (125/7817). Error rates did not differ significantly between enumerators (P=.51), but decreased for the cohort with increasing days of application use, from 2.3% at survey start (95% CI 1.8%-2.8%) to 0.6% at day 45 (95% CI 0.3%-0.9%; OR=0.969; P<.001). The highest error rate (84/618, 13.6%) occurred for an intentional redundancy question for a birthdate field, which was repeated in separate sections of the survey. We found low error rates (0.0% to 3.1%) for all other possible errors. Conclusions: A strategy of removing validation rules on electronic data capture platforms can be used to create a set of detectable data errors, which can subsequently be used to assess group and individual enumerator error rates, their trends over time, and categories of data collection that require further training or additional quality control measures. This strategy may be particularly useful for identifying individual enumerators or systematic data errors that are responsive to enumerator training and is best applied to questions for which errors cannot be prevented through training or software design alone. Validation relaxation should be considered as a component of a holistic data quality assurance strategy. PMID:28821474
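    A small sketch of the error-rate definition used above (errors committed divided by error opportunities), with synthetic per-day tallies to show how a downward trend over days of application use could be inspected; only the aggregate 125/7817 figure comes from the abstract, the daily numbers are invented.

    ```python
    # Crude error rate: errors committed divided by potential (detectable) errors.
    def error_rate(errors_committed, potential_errors):
        return errors_committed / potential_errors

    print(f"aggregate error rate: {error_rate(125, 7817):.2%}")  # 1.60% as reported

    # Synthetic per-day tallies (errors, opportunities) to eyeball a learning trend
    # like the one the authors modelled with logistic regression.
    daily = {1: (23, 1000), 15: (14, 1000), 30: (9, 1000), 45: (6, 1000)}
    for day, (errs, opportunities) in sorted(daily.items()):
        print(f"day {day:2d}: {errs / opportunities:.1%}")
    ```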

  11. Validation Relaxation: A Quality Assurance Strategy for Electronic Data Collection.

    PubMed

    Kenny, Avi; Gordon, Nicholas; Griffiths, Thomas; Kraemer, John D; Siedner, Mark J

    2017-08-18

    The use of mobile devices for data collection in developing world settings is becoming increasingly common and may offer advantages in data collection quality and efficiency relative to paper-based methods. However, mobile data collection systems can hamper many standard quality assurance techniques due to the lack of a hardcopy backup of data. Consequently, mobile health data collection platforms have the potential to generate datasets that appear valid, but are susceptible to unidentified database design flaws, areas of miscomprehension by enumerators, and data recording errors. We describe the design and evaluation of a strategy for estimating data error rates and assessing enumerator performance during electronic data collection, which we term "validation relaxation." Validation relaxation involves the intentional omission of data validation features for select questions to allow for data recording errors to be committed, detected, and monitored. We analyzed data collected during a cluster sample population survey in rural Liberia using an electronic data collection system (Open Data Kit). We first developed a classification scheme for types of detectable errors and validation alterations required to detect them. We then implemented the following validation relaxation techniques to enable data error conduct and detection: intentional redundancy, removal of "required" constraint, and illogical response combinations. This allowed for up to 11 identifiable errors to be made per survey. The error rate was defined as the total number of errors committed divided by the number of potential errors. We summarized crude error rates and estimated changes in error rates over time for both individuals and the entire program using logistic regression. The aggregate error rate was 1.60% (125/7817). Error rates did not differ significantly between enumerators (P=.51), but decreased for the cohort with increasing days of application use, from 2.3% at survey start (95% CI 1.8%-2.8%) to 0.6% at day 45 (95% CI 0.3%-0.9%; OR=0.969; P<.001). The highest error rate (84/618, 13.6%) occurred for an intentional redundancy question for a birthdate field, which was repeated in separate sections of the survey. We found low error rates (0.0% to 3.1%) for all other possible errors. A strategy of removing validation rules on electronic data capture platforms can be used to create a set of detectable data errors, which can subsequently be used to assess group and individual enumerator error rates, their trends over time, and categories of data collection that require further training or additional quality control measures. This strategy may be particularly useful for identifying individual enumerators or systematic data errors that are responsive to enumerator training and is best applied to questions for which errors cannot be prevented through training or software design alone. Validation relaxation should be considered as a component of a holistic data quality assurance strategy. ©Avi Kenny, Nicholas Gordon, Thomas Griffiths, John D Kraemer, Mark J Siedner. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 18.08.2017.

  12. Evaluation of airborne topographic lidar for quantifying beach changes

    USGS Publications Warehouse

    2003-01-01

    A scanning airborne topographic lidar was evaluated for its ability to quantify beach topography and changes during the Sandy Duck experiment in 1997 along the North Carolina coast. Elevation estimates, acquired with NASA's Airborne Topographic Mapper (ATM), were compared to elevations measured with three types of ground-based measurements - 1) differential GPS equipped all-terrain vehicle (ATV) that surveyed a 3-km reach of beach from the shoreline to the dune, 2) GPS antenna mounted on a stadia rod used to intensely survey a different 100 m reach of beach, and 3) a second GPS-equipped ATV that surveyed a 70-km-long transect along the coast. Over 40,000 individual intercomparisons between ATM and ground surveys were calculated. RMS vertical differences associated with the ATM when compared to ground measurements ranged from 13 to 19 cm. Considering all of the intercomparisons together, RMS ≃ 15 cm. This RMS error represents a total error for individual elevation estimates including uncertainties associated with random and mean errors. The latter was the largest source of error and was attributed to drift in differential GPS. The ≃ 15 cm vertical accuracy of the ATM is adequate to resolve beach-change signals typical of the impact of storms. For example, ATM surveys of Assateague Island (spanning the border of MD and VA) prior to and immediately following a severe northeaster showed vertical beach changes in places greater than 2 m, much greater than expected errors associated with the ATM. A major asset of airborne lidar is the high spatial data density. Measurements of elevation are acquired every few m2 over regional scales of hundreds of kilometers. Hence, many scales of beach morphology and change can be resolved, from beach cusps tens of meters in wavelength to entire coastal cells comprising tens to hundreds of kilometers of coast. Topographic lidars similar to the ATM are becoming increasingly available from commercial vendors and should, in the future, be widely used in beach surveying.

  13. The DiskMass Survey. II. Error Budget

    NASA Astrophysics Data System (ADS)

    Bershady, Matthew A.; Verheijen, Marc A. W.; Westfall, Kyle B.; Andersen, David R.; Swaters, Rob A.; Martinsson, Thomas

    2010-06-01

    We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ_*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface-brightness, and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ_*^disk), and disk maximality (F_{*,max}^disk ≡ V_{*,max}^disk / V_c). Random and systematic errors in these quantities for individual galaxies will be ~25%, while survey precision for sample quartiles is reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.

  14. SEDS: The Spitzer Extended Deep Survey. Survey Design, Photometry, and Deep IRAC Source Counts

    NASA Technical Reports Server (NTRS)

    Ashby, M. L. N.; Willner, S. P.; Fazio, G. G.; Huang, J.-S.; Arendt, A.; Barmby, P.; Barro, G.; Bell, E. F.; Bouwens, R.; Cattaneo, A.; et al.

    2013-01-01

    The Spitzer Extended Deep Survey (SEDS) is a very deep infrared survey within five well-known extragalactic science fields: the UKIDSS Ultra-Deep Survey, the Extended Chandra Deep Field South, COSMOS, the Hubble Deep Field North, and the Extended Groth Strip. SEDS covers a total area of 1.46 deg^2 to a depth of 26 AB mag (3σ) in both of the warm Infrared Array Camera (IRAC) bands at 3.6 and 4.5 micron. Because of its uniform depth of coverage in so many widely-separated fields, SEDS is subject to roughly 25% smaller errors due to cosmic variance than a single-field survey of the same size. SEDS was designed to detect and characterize galaxies from intermediate to high redshifts (z = 2-7) with a built-in means of assessing the impact of cosmic variance on the individual fields. Because the full SEDS depth was accumulated in at least three separate visits to each field, typically with six-month intervals between visits, SEDS also furnishes an opportunity to assess the infrared variability of faint objects. This paper describes the SEDS survey design, processing, and publicly-available data products. Deep IRAC counts for the more than 300,000 galaxies detected by SEDS are consistent with models based on known galaxy populations. Discrete IRAC sources contribute 5.6 ± 1.0 and 4.4 ± 0.8 nW m^-2 sr^-1 at 3.6 and 4.5 micron to the diffuse cosmic infrared background (CIB). IRAC sources cannot contribute more than half of the total CIB flux estimated from DIRBE data. Barring an unexpected error in the DIRBE flux estimates, half the CIB flux must therefore come from a diffuse component.

  15. Stellar Color Regression: A Spectroscopy-based Method for Color Calibration to a Few Millimagnitude Accuracy and the Recalibration of Stripe 82

    NASA Astrophysics Data System (ADS)

    Yuan, Haibo; Liu, Xiaowei; Xiang, Maosheng; Huang, Yang; Zhang, Huihua; Chen, Bingqiu

    2015-02-01

    In this paper we propose a spectroscopy-based stellar color regression (SCR) method to perform accurate color calibration for modern imaging surveys, taking advantage of millions of stellar spectra now available. The method is straightforward, insensitive to systematic errors in the spectroscopically determined stellar atmospheric parameters, applicable to regions that are effectively covered by spectroscopic surveys, and capable of delivering an accuracy of a few millimagnitudes for color calibration. As an illustration, we have applied the method to the Sloan Digital Sky Survey (SDSS) Stripe 82 data. With a total number of 23,759 spectroscopically targeted stars, we have mapped out the small but strongly correlated color zero-point errors present in the photometric catalog of Stripe 82, and we improve the color calibration by a factor of two to three. Our study also reveals some small but significant magnitude dependence errors in the z band for some charge-coupled devices (CCDs). Such errors are likely to be present in all the SDSS photometric data. Our results are compared with those from a completely independent test based on the intrinsic colors of red galaxies presented by Ivezić et al. The comparison, as well as other tests, shows that the SCR method has achieved a color calibration internally consistent at a level of about 5 mmag in u - g, 3 mmag in g - r, and 2 mmag in r - i and i - z. Given the power of the SCR method, we discuss briefly the potential benefits by applying the method to existing, ongoing, and upcoming imaging surveys.
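    A conceptual sketch of the SCR idea as described above, not the authors' pipeline: fit a color versus spectroscopic-parameter relation on a calibrated reference sample, then take the median residual of a target field as its color zero-point offset. The color-temperature relation, noise levels, and injected offset below are all synthetic assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def synthetic_field(n, zero_point_error):
        """Synthetic stars: spectroscopic Teff plus an observed g-r color."""
        teff = rng.uniform(4500, 7500, n)
        true_gr = 2.4 - 2.8e-4 * teff                     # assumed color-Teff relation
        obs_gr = true_gr + zero_point_error + rng.normal(0, 0.02, n)
        return teff, obs_gr

    # Reference field with calibrated photometry; target field with unknown offset.
    ref_teff, ref_gr = synthetic_field(2000, 0.0)
    tgt_teff, tgt_gr = synthetic_field(500, 0.012)

    coeffs = np.polyfit(ref_teff, ref_gr, 1)              # color vs. spectroscopic parameter
    residual = tgt_gr - np.polyval(coeffs, tgt_teff)
    print(f"recovered zero-point offset: {np.median(residual) * 1000:.1f} mmag")
    ```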

  16. A STUDY ON REASONS OF ERRORS OF OLD SURVEY MAPS IN CADASTRAL SYSTEM

    NASA Astrophysics Data System (ADS)

    Yanase, Norihiko

    This paper explicates the sources of errors in survey maps that were made in the 19th century. The present cadastral system rests on registers and survey maps compiled for the reform of the land taxation system in the Meiji era. Many Japanese may assume that such errors, amounting to several percent to more than ten percent of the area shown in the survey maps, arose from poor survey technique by farmers, deliberately lengthened measures used to avoid heavy tax, careless official checks and other deceptions. Based on an analysis of old survey regulations, the history of map making and studies of the cadastral system, the author maintains that such errors, called nawa-nobi, were in fact lawful under the survey regulations of the time. In addition, a further source of survey map errors should be pointed out: the subdivision system that allowed parcels to be approved without a real survey, together with the disposal of state property based on inadequate surveys.

  17. A vignette study to examine health care professionals' attitudes towards patient involvement in error prevention.

    PubMed

    Schwappach, David L B; Frank, Olga; Davis, Rachel E

    2013-10-01

    Various authorities recommend the participation of patients in promoting patient safety, but little is known about health care professionals' (HCPs') attitudes towards patients' involvement in safety-related behaviours. To investigate how HCPs evaluate patients' behaviours and HCP responses to patient involvement in the behaviour, relative to different aspects of the patient, the involved HCP and the potential error. Cross-sectional fractional factorial survey with seven factors embedded in two error scenarios (missed hand hygiene, medication error). Each survey included two randomized vignettes that described the potential error, a patient's reaction to that error and the HCP response to the patient. Twelve hospitals in Switzerland. A total of 1141 HCPs (response rate 45%). Approval of patients' behaviour, HCP response to the patient, anticipated effects on the patient-HCP relationship, HCPs' support for being asked the question, affective response to the vignettes. Outcomes were measured on 7-point scales. Approval of patients' safety-related interventions was generally high and largely affected by patients' behaviour and correct identification of error. Anticipated effects on the patient-HCP relationship were much less positive, little correlated with approval of patients' behaviour and were mainly determined by the HCP response to intervening patients. HCPs expressed more favourable attitudes towards patients intervening about a medication error than about hand sanitation. This study provides the first insights into predictors of HCPs' attitudes towards patient engagement in safety. Future research is however required to assess the generalizability of the findings into practice before training can be designed to address critical issues. © 2012 John Wiley & Sons Ltd.

  18. Man Versus Machine: Comparing Double Data Entry and Optical Mark Recognition for Processing CAHPS Survey Data.

    PubMed

    Fifolt, Matthew; Blackburn, Justin; Rhodes, David J; Gillespie, Shemeka; Bennett, Aleena; Wolff, Paul; Rucks, Andrew

    Historically, double data entry (DDE) has been considered the criterion standard for minimizing data entry errors. However, previous studies considered data entry alternatives through the limited lens of data accuracy. This study supplies information regarding data accuracy, operational efficiency, and cost for DDE and Optical Mark Recognition (OMR) for processing the Consumer Assessment of Healthcare Providers and Systems 5.0 survey. To assess data accuracy, we compared error rates for DDE and OMR by dividing the number of surveys that were arbitrated by the total number of surveys processed for each method. To assess operational efficiency, we tallied the cost of data entry for DDE and OMR after survey receipt. Costs were calculated on the basis of personnel, depreciation for capital equipment, and costs of noncapital equipment. The cost savings attributed to this method were negated by the operational efficiency of OMR. There was a statistically significant difference in arbitration rates between DDE and OMR; however, this statistical significance did not translate into practical significance. The potential benefits of DDE in terms of data accuracy did not outweigh the operational efficiency and thereby financial savings of OMR.
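
    As a minimal sketch of the comparison described above, the snippet below computes the arbitration (error) rate for each data-entry method as the number of arbitrated surveys divided by the number of surveys processed, and tests whether the two rates differ with a chi-square test on the resulting 2 x 2 table. The counts are hypothetical placeholders, not the study's data, and the test requires SciPy.

from scipy.stats import chi2_contingency

# hypothetical counts (not the study's data)
dde_arbitrated, dde_total = 45, 500
omr_arbitrated, omr_total = 70, 500

dde_rate = dde_arbitrated / dde_total
omr_rate = omr_arbitrated / omr_total
print(f"DDE arbitration rate: {dde_rate:.1%}, OMR arbitration rate: {omr_rate:.1%}")

# 2 x 2 table: rows = method, columns = arbitrated / not arbitrated
table = [[dde_arbitrated, dde_total - dde_arbitrated],
         [omr_arbitrated, omr_total - omr_arbitrated]]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")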

  19. Types of Possible Survey Errors in Estimates Published in the Weekly Natural Gas Storage Report

    EIA Publications

    2016-01-01

    This document lists types of potential errors in EIA estimates published in the WNGSR. Survey errors are an unavoidable aspect of data collection. Error is inherent in all collected data, regardless of the source of the data and the care and competence of data collectors. The type and extent of error depends on the type and characteristics of the survey.

  20. Errors and error rates in surgical pathology: an Association of Directors of Anatomic and Surgical Pathology survey.

    PubMed

    Cooper, Kumarasen

    2006-05-01

    This survey on errors in surgical pathology was commissioned by the Association of Directors of Anatomic and Surgical Pathology Council to explore broad perceptions and definitions of error in surgical pathology among its membership and to get some estimate of the perceived frequency of such errors. Overall, 41 laboratories were surveyed, with 34 responding to a confidential questionnaire. Six small, 13 medium, and 10 large laboratories (based on specimen volume), predominantly located in the United States, were surveyed (the remaining 5 laboratories did not provide this particular information). The survey questions, responses, and associated comments are presented. It is clear from this survey that we lack uniformity and consistency with respect to terminology, definitions, and the identification/documentation of errors in surgical pathology. An appeal is made for the urgent need to reach some consensus in order to address these discrepancies as we prepare to combat the issue of errors in surgical pathology.

  1. Testing the Accuracy of Aerial Surveys for Large Mammals: An Experiment with African Savanna Elephants (Loxodonta africana).

    PubMed

    Schlossberg, Scott; Chase, Michael J; Griffin, Curtice R

    2016-01-01

    Accurate counts of animals are critical for prioritizing conservation efforts. Past research, however, suggests that observers on aerial surveys may fail to detect all individuals of the target species present in the survey area. Such errors could bias population estimates low and confound trend estimation. We used two approaches to assess the accuracy of aerial surveys for African savanna elephants (Loxodonta africana) in northern Botswana. First, we used double-observer sampling, in which two observers make observations on the same herds, to estimate detectability of elephants and determine what variables affect it. Second, we compared total counts, a complete survey of the entire study area, against sample counts, in which only a portion of the study area is sampled. Total counts are often considered a complete census, so comparing total counts against sample counts can help to determine if sample counts are underestimating elephant numbers. We estimated that observers detected only 76 ± 2% (SE) of elephant herds and 87 ± 1% of individual elephants present in survey strips. Detectability increased strongly with elephant herd size. Out of the four observers used in total, one observer had a lower detection probability than the other three, and detectability was higher in the rear row of seats than the front. The habitat immediately adjacent to animals also affected detectability, with detection more likely in more open habitats. Total counts were not statistically distinguishable from sample counts. Because, however, the double-observer samples revealed that observers missed 13% of elephants, we conclude that total counts may be undercounting elephants as well. These results suggest that elephant population estimates from both sample and total counts are biased low. Because factors such as observer and habitat affected detectability of elephants, comparisons of elephant populations across time or space may be confounded. We encourage survey teams to incorporate detectability analysis in all aerial surveys for mammals.
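
    A standard way to turn double-observer counts into detection probabilities is a Lincoln-Petersen-style estimator: herds seen by both observers, herds seen only by the first observer and herds seen only by the second yield per-observer detection probabilities and an estimate of the herds missed by both. The sketch below illustrates that generic estimator with made-up counts; it is not the authors' exact analysis, which also modelled covariates such as herd size, observer and habitat.

def double_observer_estimates(both, only_a, only_b):
    """Lincoln-Petersen-style detectability estimates from double-observer counts.

    both   -- herds detected by both observers
    only_a -- herds detected by observer A but missed by B
    only_b -- herds detected by observer B but missed by A
    """
    p_a = both / (both + only_b)          # A's detection probability, judged against B's detections
    p_b = both / (both + only_a)          # B's detection probability, judged against A's detections
    p_combined = 1.0 - (1.0 - p_a) * (1.0 - p_b)
    n_seen = both + only_a + only_b
    n_hat = (both + only_a) * (both + only_b) / both   # estimated herds actually present
    return p_a, p_b, p_combined, n_seen, n_hat

# hypothetical counts, not the study's data
p_a, p_b, p_comb, seen, n_hat = double_observer_estimates(both=152, only_a=31, only_b=24)
print(f"p(A) = {p_a:.2f}, p(B) = {p_b:.2f}, combined = {p_comb:.2f}")
print(f"herds seen = {seen}, estimated herds present = {n_hat:.0f}")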

  2. Testing the Accuracy of Aerial Surveys for Large Mammals: An Experiment with African Savanna Elephants (Loxodonta africana)

    PubMed Central

    Schlossberg, Scott; Chase, Michael J.; Griffin, Curtice R.

    2016-01-01

    Accurate counts of animals are critical for prioritizing conservation efforts. Past research, however, suggests that observers on aerial surveys may fail to detect all individuals of the target species present in the survey area. Such errors could bias population estimates low and confound trend estimation. We used two approaches to assess the accuracy of aerial surveys for African savanna elephants (Loxodonta africana) in northern Botswana. First, we used double-observer sampling, in which two observers make observations on the same herds, to estimate detectability of elephants and determine what variables affect it. Second, we compared total counts, a complete survey of the entire study area, against sample counts, in which only a portion of the study area is sampled. Total counts are often considered a complete census, so comparing total counts against sample counts can help to determine if sample counts are underestimating elephant numbers. We estimated that observers detected only 76 ± 2% (SE) of elephant herds and 87 ± 1% of individual elephants present in survey strips. Detectability increased strongly with elephant herd size. Out of the four observers used in total, one observer had a lower detection probability than the other three, and detectability was higher in the rear row of seats than the front. The habitat immediately adjacent to animals also affected detectability, with detection more likely in more open habitats. Total counts were not statistically distinguishable from sample counts. Because, however, the double-observer samples revealed that observers missed 13% of elephants, we conclude that total counts may be undercounting elephants as well. These results suggest that elephant population estimates from both sample and total counts are biased low. Because factors such as observer and habitat affected detectability of elephants, comparisons of elephant populations across time or space may be confounded. We encourage survey teams to incorporate detectability analysis in all aerial surveys for mammals. PMID:27755570

  3. Planck 2013 results. VII. HFI time response and beams

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J. J.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Bowyer, J. W.; Bridges, M.; Bucher, M.; Burigana, C.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chary, R.-R.; Chiang, H. C.; Chiang, L.-Y.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Galeotta, S.; Ganga, K.; Giard, M.; Giraud-Héraud, Y.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Haissinski, J.; Hansen, F. K.; Hanson, D.; Harrison, D.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hou, Z.; Hovest, W.; Huffenberger, K. M.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Laureijs, R. J.; Lawrence, C. R.; Leonardi, R.; Leroy, C.; Lesgourgues, J.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; MacTavish, C. J.; Maffei, B.; Mandolesi, N.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Matsumura, T.; Matthai, F.; Mazzotta, P.; McGehee, P.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Osborne, S.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polegre, A. M.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rusholme, B.; Sandri, M.; Santos, D.; Sauvé, A.; Savini, G.; Scott, D.; Shellard, E. P. S.; Spencer, L. D.; Starck, J.-L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Terenzi, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; Yvon, D.; Zacchei, A.; Zonca, A.

    2014-11-01

    This paper characterizes the effective beams, the effective beam window functions and the associated errors for the Planck High Frequency Instrument (HFI) detectors. The effective beam is the angular response including the effect of the optics, detectors, data processing and the scan strategy. The window function is the representation of this beam in the harmonic domain which is required to recover an unbiased measurement of the cosmic microwave background angular power spectrum. The HFI is a scanning instrument and its effective beams are the convolution of: a) the optical response of the telescope and feeds; b) the processing of the time-ordered data and deconvolution of the bolometric and electronic transfer function; and c) the merging of several surveys to produce maps. The time response transfer functions are measured using observations of Jupiter and Saturn and by minimizing survey difference residuals. The scanning beam is the post-deconvolution angular response of the instrument, and is characterized with observations of Mars. The main beam solid angles are determined to better than 0.5% at each HFI frequency band. Observations of Jupiter and Saturn limit near sidelobes (within 5°) to about 0.1% of the total solid angle. Time response residuals remain as long tails in the scanning beams, but contribute less than 0.1% of the total solid angle. The bias and uncertainty in the beam products are estimated using ensembles of simulated planet observations that include the impact of instrumental noise and known systematic effects. The correlation structure of these ensembles is well-described by five error eigenmodes that are sub-dominant to sample variance and instrumental noise in the harmonic domain. A suite of consistency tests provide confidence that the error model represents a sufficient description of the data. The total error in the effective beam window functions is below 1% at 100 GHz up to multipole ℓ ~ 1500, and below 0.5% at 143 and 217 GHz up to ℓ ~ 2000.

  4. STELLAR COLOR REGRESSION: A SPECTROSCOPY-BASED METHOD FOR COLOR CALIBRATION TO A FEW MILLIMAGNITUDE ACCURACY AND THE RECALIBRATION OF STRIPE 82

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, Haibo; Liu, Xiaowei; Xiang, Maosheng

    In this paper we propose a spectroscopy-based stellar color regression (SCR) method to perform accurate color calibration for modern imaging surveys, taking advantage of millions of stellar spectra now available. The method is straightforward, insensitive to systematic errors in the spectroscopically determined stellar atmospheric parameters, applicable to regions that are effectively covered by spectroscopic surveys, and capable of delivering an accuracy of a few millimagnitudes for color calibration. As an illustration, we have applied the method to the Sloan Digital Sky Survey (SDSS) Stripe 82 data. With a total number of 23,759 spectroscopically targeted stars, we have mapped out the small but strongly correlated color zero-point errors present in the photometric catalog of Stripe 82, and we improve the color calibration by a factor of two to three. Our study also reveals some small but significant magnitude dependence errors in the z band for some charge-coupled devices (CCDs). Such errors are likely to be present in all the SDSS photometric data. Our results are compared with those from a completely independent test based on the intrinsic colors of red galaxies presented by Ivezić et al. The comparison, as well as other tests, shows that the SCR method has achieved a color calibration internally consistent at a level of about 5 mmag in u – g, 3 mmag in g – r, and 2 mmag in r – i and i – z. Given the power of the SCR method, we briefly discuss the potential benefits of applying the method to existing, ongoing, and upcoming imaging surveys.

  5. Field evaluation of distance-estimation error during wetland-dependent bird surveys

    USGS Publications Warehouse

    Nadeau, Christopher P.; Conway, Courtney J.

    2012-01-01

    Context: The most common methods to estimate detection probability during avian point-count surveys involve recording a distance between the survey point and individual birds detected during the survey period. Accurately measuring or estimating distance is an important assumption of these methods; however, this assumption is rarely tested in the context of aural avian point-count surveys. Aims: We expand on recent bird-simulation studies to document the error associated with estimating distance to calling birds in a wetland ecosystem. Methods: We used two approaches to estimate the error associated with five surveyors' distance estimates between the survey point and calling birds, and to determine the factors that affect a surveyor's ability to estimate distance. Key results: We observed biased and imprecise distance estimates when estimating distance to simulated birds in a point-count scenario (mean error = -9 m, s.d. of error = 47 m) and when estimating distances to real birds during field trials (mean error = 39 m, s.d. of error = 79 m). The amount of bias and precision in distance estimates differed among surveyors; surveyors with more training and experience were less biased and more precise when estimating distance to both real and simulated birds. Three environmental factors were important in explaining the error associated with distance estimates, including the measured distance from the bird to the surveyor, the volume of the call and the species of bird. Surveyors tended to make large overestimations of distance to birds close to the survey point, which is an especially serious error in distance sampling. Conclusions: Our results suggest that distance-estimation error is prevalent, but surveyor training may be the easiest way to reduce distance-estimation error. Implications: The present study has demonstrated how relatively simple field trials can be used to estimate the error associated with distance estimates used to estimate detection probability during avian point-count surveys. Evaluating distance-estimation errors will allow investigators to better evaluate the accuracy of avian density and trend estimates. Moreover, investigators who evaluate distance-estimation errors could employ recently developed models to incorporate distance-estimation error into analyses. We encourage further development of such models, including their incorporation into distance-analysis software.
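
    Quantifying bias and precision from field trials of this kind amounts to summarising the signed differences between estimated and true distances: the mean signed error is the bias and its standard deviation the precision. A minimal sketch with made-up observations rather than the study's data:

import statistics

# (estimated_distance_m, true_distance_m) pairs -- hypothetical field-trial data
trials = [(60, 45), (110, 150), (30, 38), (95, 70), (200, 210), (55, 40)]

errors = [est - true for est, true in trials]
bias = statistics.mean(errors)            # mean signed error
precision = statistics.stdev(errors)      # spread of the errors
print(f"bias = {bias:.1f} m, s.d. of error = {precision:.1f} m")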

  6. Validity of mail survey data on bagged waterfowl

    USGS Publications Warehouse

    Atwood, E.L.

    1956-01-01

    Knowledge of the pattern of occurrence and characteristics of response errors obtained during an investigation of the validity of post-season surveys of hunters was used to advantage to devise a two-step method for removing the response-bias errors from the raw survey data. The method was tested on data with known errors and found to have a high efficiency in reducing the effect of response-bias errors. The development of this method for removing the effect of the response-bias errors, and its application to post-season hunter-take survey data, increased the reliability of the data from below the point of practical management significance up to the approximate reliability limits corresponding to the sampling errors.

  7. A Test for Anchoring and Yea-Saying in Experimental Consumption Data.

    PubMed

    van Soest, Arthur; Hurd, Michael

    2008-01-01

    We analyze experimental survey data, with a random split into respondents who get an open-ended question on the amount of total family consumption (with follow-up unfolding brackets of the form "Is consumption $X or more?" for those who answer "don't know" or "refuse") and respondents who are immediately directed to unfolding brackets. In both cases, the entry point of the unfolding bracket sequence is randomized. Allowing for any type of selection into answering the open-ended or bracket questions, a nonparametric test is developed for errors in the answers to the first bracket question that are different from the usual reporting errors that will also affect open-ended answers. Two types of errors are considered explicitly: anchoring and yea-saying. Data are collected in the 1995 wave of the Assets and Health Dynamics survey, which is representative of the population in the United States that is 70 years and older. We reject the joint hypothesis of no anchoring and no yea-saying. Once yea-saying is taken into account, we find no evidence of anchoring at the entry point.

  8. A Test for Anchoring and Yea-Saying in Experimental Consumption Data

    PubMed Central

    van Soest, Arthur; Hurd, Michael

    2017-01-01

    We analyze experimental survey data, with a random split into respondents who get an open-ended question on the amount of total family consumption (with follow-up unfolding brackets of the form “Is consumption $X or more?” for those who answer “don’t know” or “refuse”) and respondents who are immediately directed to unfolding brackets. In both cases, the entry point of the unfolding bracket sequence is randomized. Allowing for any type of selection into answering the open-ended or bracket questions, a nonparametric test is developed for errors in the answers to the first bracket question that are different from the usual reporting errors that will also affect open-ended answers. Two types of errors are considered explicitly: anchoring and yea-saying. Data are collected in the 1995 wave of the Assets and Health Dynamics survey, which is representative of the population in the United States that is 70 years and older. We reject the joint hypothesis of no anchoring and no yea-saying. Once yea-saying is taken into account, we find no evidence of anchoring at the entry point. PMID:29056797

  9. A measurement error model for physical activity level as measured by a questionnaire with application to the 1999-2006 NHANES questionnaire.

    PubMed

    Tooze, Janet A; Troiano, Richard P; Carroll, Raymond J; Moshfegh, Alanna J; Freedman, Laurence S

    2013-06-01

    Systematic investigations into the structure of measurement error of physical activity questionnaires are lacking. We propose a measurement error model for a physical activity questionnaire that uses physical activity level (the ratio of total energy expenditure to basal energy expenditure) to relate questionnaire-based reports of physical activity level to true physical activity levels. The 1999-2006 National Health and Nutrition Examination Survey physical activity questionnaire was administered to 433 participants aged 40-69 years in the Observing Protein and Energy Nutrition (OPEN) Study (Maryland, 1999-2000). Valid estimates of participants' total energy expenditure were also available from doubly labeled water, and basal energy expenditure was estimated from an equation; the ratio of those measures estimated true physical activity level ("truth"). We present a measurement error model that accommodates the mixture of errors that arise from assuming a classical measurement error model for doubly labeled water and a Berkson error model for the equation used to estimate basal energy expenditure. The method was then applied to the OPEN Study. Correlations between the questionnaire-based physical activity level and truth were modest (r = 0.32-0.41); attenuation factors (0.43-0.73) indicate that the use of questionnaire-based physical activity level would lead to attenuated estimates of effect size. Results suggest that sample sizes for estimating relationships between physical activity level and disease should be inflated, and that regression calibration can be used to provide measurement error-adjusted estimates of relationships between physical activity and disease.
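
    The attenuation factors reported above have a simple interpretation: if Q is the questionnaire-based physical activity level and T the true level, the attenuation factor is the slope of the regression of T on Q, and a naive exposure-disease coefficient estimated with Q can be corrected by dividing by that slope (simple regression calibration). The sketch below illustrates the calculation on synthetic data; it is not the OPEN Study analysis, and all numbers are assumptions.

import numpy as np

rng = np.random.default_rng(1)
n = 433
truth = rng.normal(1.7, 0.2, n)                               # "true" physical activity level
questionnaire = 0.5 + 0.6 * truth + rng.normal(0, 0.25, n)    # error-prone questionnaire report

# attenuation factor: slope of the regression of truth on the questionnaire value
lam = np.cov(questionnaire, truth)[0, 1] / np.var(questionnaire, ddof=1)
print(f"attenuation factor ≈ {lam:.2f}")

# simple regression calibration: inflate a naive coefficient estimated with Q
beta_naive = 0.12      # hypothetical estimate from a disease model that used Q directly
beta_corrected = beta_naive / lam
print(f"naive beta = {beta_naive}, calibration-corrected beta ≈ {beta_corrected:.2f}")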

  10. Medication errors room: a simulation to assess the medical, nursing and pharmacy staffs' ability to identify errors related to the medication-use system.

    PubMed

    Daupin, Johanne; Atkinson, Suzanne; Bédard, Pascal; Pelchat, Véronique; Lebel, Denis; Bussières, Jean-François

    2016-12-01

    The medication-use system in hospitals is very complex. To improve the health professionals' awareness of the risks of errors related to the medication-use system, a simulation of medication errors was created. The main objective was to assess the medical, nursing and pharmacy staffs' ability to identify errors related to the medication-use system using a simulation. The secondary objective was to assess their level of satisfaction. This descriptive cross-sectional study was conducted in a 500-bed mother-and-child university hospital. A multidisciplinary group set up 30 situations and replicated a patient room and a care unit pharmacy. All hospital staff, including nurses, physicians, pharmacists and pharmacy technicians, were invited. Participants had to detect if a situation contained an error and fill out a response grid. They also answered a satisfaction survey. The simulation was run over a total of 100 hours. A total of 230 professionals visited the simulation, 207 handed in a response grid and 136 answered the satisfaction survey. The participants' overall rate of correct answers was 67.5% ± 13.3% (4073/6036). Among the least detected errors were situations involving a Y-site infusion incompatibility, an oral syringe preparation and the patient's identification. Participants largely considered the simulation effective in identifying incorrect practices (132/136, 97.8%) and relevant to their practice (129/136, 95.6%). Most of them (114/136; 84.4%) intended to change their practices in view of their exposure to the simulation. We implemented a realistic simulation of medication-use system errors in a mother-child hospital, with a wide audience. This simulation was an effective, relevant and innovative tool to raise the health care professionals' awareness of critical processes. © 2016 John Wiley & Sons, Ltd.

  11. Refractive error in school children in an urban and rural setting in Cambodia.

    PubMed

    Gao, Zoe; Meng, Ngy; Muecke, James; Chan, Weng Onn; Piseth, Horm; Kong, Aimee; Jnguyenphamhh, Theresa; Dehghan, Yalda; Selva, Dinesh; Casson, Robert; Ang, Kim

    2012-02-01

    To assess the prevalence of refractive error in schoolchildren aged 12-14 years in urban and rural settings in Cambodia's Phnom Penh and Kandal provinces. Ten schools from Phnom Penh Province and 26 schools from Kandal Province were randomly selected and surveyed in October 2010. Children were examined by teams of Australian and Cambodian optometrists, ophthalmic nurses and ophthalmologists who performed visual acuity (VA) testing and cycloplegic refraction. A total of 5527 children were included in the study. The prevalence of uncorrected, presenting and best-corrected VA ≤ 6/12 in the better eye were 2.48% (95% confidence interval [CI] 2.02-2.83%), 1.90% (95% CI 1.52-2.24%) and 0.36% (95% CI 0.20-0.52%), respectively; 43 children presented with glasses whilst a total of 315 glasses were dispensed. The total prevalence of refractive error was 6.57% (95% CI 5.91-7.22%), but there was a significant difference between urban (13.7%, 95% CI 12.2-15.2%) and rural (2.5%, 95% CI 2.03-3.07%) schools (P < 0.0001). Refractive error accounted for 91.2% of visually impaired eyes, cataract for 1.7%, and other causes for 7.1%. Myopia (spherical equivalent ≤ -0.50 diopters [D] in either eye) was associated with increased age, female gender and urban schooling. The prevalence of refractive error was significantly higher in urban Phnom Penh schools than rural schools in Kandal Province. The prevalence of refractive error, particularly myopia was relatively low compared to previous reports in Asia. The majority of children did not have appropriate correction with spectacles, highlighting the need for more effective screening and optical intervention.
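
    Prevalence figures of this kind are proportions with binomial confidence intervals; the simplest version is the normal-approximation (Wald) interval sketched below. The counts are illustrative assumptions, not the study's tallies, and the study's own intervals may additionally account for the survey design.

import math

def prevalence_with_ci(cases, n, z=1.96):
    """Proportion with a normal-approximation (Wald) 95% confidence interval."""
    p = cases / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, max(p - half_width, 0.0), p + half_width

p, lo, hi = prevalence_with_ci(cases=248, n=10000)   # hypothetical counts
print(f"prevalence = {p:.2%} (95% CI {lo:.2%}-{hi:.2%})")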

  12. Effects of Reynolds number on orifice induced pressure error

    NASA Technical Reports Server (NTRS)

    Plentovich, E. B.; Gloss, B. B.

    1982-01-01

    Data previously reported for orifice induced pressure errors are extended to the case of higher Reynolds number flows, and a remedy is presented in the form of a porous metal plug for the orifice. Test orifices with apertures 0.330, 0.660, and 1.321 cm in diam. were fabricated on a flat plate for trials in the NASA Langley wind tunnel at Mach numbers 0.40-0.72. A boundary layer survey rake was also mounted on the flat plate to allow measurement of the total boundary layer pressures at the orifices. At the high Reynolds number flows studied, the orifice induced pressure error was found to be a function of the ratio of the orifice diameter to the boundary layer thickness. The error was effectively eliminated by the insertion of a porous metal disc set flush with the orifice outside surface.

  13. Data collection outcomes comparing paper forms with PDA forms in an office-based patient survey.

    PubMed

    Galliher, James M; Stewart, Thomas V; Pathak, Paramod K; Werner, James J; Dickinson, L Miriam; Hickner, John M

    2008-01-01

    We compared the completeness of data collection using paper forms and using electronic forms loaded on handheld computers in an office-based patient interview survey conducted within the American Academy of Family Physicians National Research Network. We asked 19 medical assistants and nurses in family practices to administer a survey about pneumococcal immunizations to 60 older adults each, 30 using paper forms and 30 using electronic forms on handheld computers. By random assignment, the interviewers used either the paper or electronic form first. Using multilevel analyses adjusted for patient characteristics and clustering of forms by practice, we analyzed the completeness of the data. A total of 1,003 of the expected 1,140 forms were returned to the data center. The overall return rate was better for paper forms (537 of 570, 94%) than for electronic forms (466 of 570, 82%) because of technical difficulties experienced with electronic data collection and stolen or lost handheld computers. Errors of omission on the returned forms, however, were more common using paper forms. Of the returned forms, only 3% of those gathered electronically had errors of omission, compared with 35% of those gathered on paper. Similarly, only 0.04% of total survey items were missing on the electronic forms, compared with 3.5% of the survey items using paper forms. Although handheld computers produced more complete data than the paper method for the returned forms, they were not superior because of the large amount of missing data due to technical difficulties with the hand-held computers or loss or theft. Other hardware solutions, such as tablet computers or cell phones linked via a wireless network directly to a Web site, may be better electronic solutions for the future.

  14. Nurses' attitudes and perceived barriers to the reporting of medication administration errors.

    PubMed

    Yung, Hai-Peng; Yu, Shu; Chu, Chi; Hou, I-Ching; Tang, Fu-In

    2016-07-01

    (1) To explore the attitudes and perceived barriers to reporting medication administration errors and (2) to understand the characteristics of error reports and nurses' feelings about them. Under-reporting of medication administration errors is a global concern related to the safety of patient care. Understanding nurses' attitudes and perceived barriers to error reporting is the initial step to increasing the reporting rate. A cross-sectional, descriptive survey with a self-administered questionnaire was completed by the nurses of a medical centre hospital in Taiwan. A total of 306 nurses participated in the study. Nurses' attitudes towards medication administration error reporting were inclined towards the positive. The major perceived barrier was fear of the consequences after reporting. The results demonstrated that 88.9% of medication administration errors were reported orally, whereas 19.0% were reported through the hospital internet system. Self-recrimination was the common feeling of nurses after committing a medication administration error. Even if hospital management encourages errors to be reported without recrimination, nurses' attitudes toward medication administration error reporting are not very positive and fear is the most prominent barrier contributing to underreporting. Nursing managers should establish anonymous reporting systems and counselling classes to create a secure atmosphere to reduce nurses' fear and provide incentives to encourage reporting. © 2016 John Wiley & Sons Ltd.

  15. Measurement error associated with surveys of fish abundance in Lake Michigan

    USGS Publications Warehouse

    Krause, Ann E.; Hayes, Daniel B.; Bence, James R.; Madenjian, Charles P.; Stedman, Ralph M.

    2002-01-01

    In fisheries, imprecise measurements in catch data from surveys add uncertainty to the results of fishery stock assessments. The USGS Great Lakes Science Center (GLSC) began to survey the fall fish community of Lake Michigan in 1962 with bottom trawls. The measurement error was evaluated at the level of individual tows for nine fish species collected in this survey by applying a measurement-error regression model to replicated trawl data. It was found that the estimates of measurement-error variance ranged from 0.37 (deepwater sculpin, Myoxocephalus thompsoni) to 1.23 (alewife, Alosa pseudoharengus) on a logarithmic scale, corresponding to coefficients of variation of 66% to 156%. The estimates appeared to increase with the range of temperature occupied by the fish species. This association may be a result of the variability in the fall thermal structure of the lake. The estimates may also be influenced by other factors, such as pelagic behavior and schooling. Measurement error might be reduced by surveying the fish community during other seasons and/or by using additional technologies, such as acoustics. Measurement-error estimates should be considered when interpreting results of assessments that use abundance information from USGS-GLSC surveys of Lake Michigan and could be used if the survey design was altered. This study is the first to report estimates of measurement-error variance associated with this survey.
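
    The correspondence quoted above between a measurement-error variance on the logarithmic scale and a coefficient of variation follows from lognormal error, CV = sqrt(exp(sigma^2) - 1), assuming the variances are on the natural-log scale. A quick check of the two endpoints reported for this survey:

import math

def log_variance_to_cv(sigma_sq):
    """Coefficient of variation implied by a lognormal error variance on the log scale."""
    return math.sqrt(math.exp(sigma_sq) - 1.0)

for species, sigma_sq in [("deepwater sculpin", 0.37), ("alewife", 1.23)]:
    print(f"{species}: sigma^2 = {sigma_sq} -> CV ≈ {log_variance_to_cv(sigma_sq):.0%}")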

  16. Factors correlated with traffic accidents as a basis for evaluating Advanced Driver Assistance Systems.

    PubMed

    Staubach, Maria

    2009-09-01

    This study aims to identify factors which influence and cause errors in traffic accidents and to use these as a basis for information to guide the application and design of driver assistance systems. A total of 474 accidents were examined in depth for this study by means of a psychological survey, data from accident reports, and technical reconstruction information. An error analysis was subsequently carried out, taking into account the driver, environment, and vehicle sub-systems. Results showed that all accidents were influenced by errors as a consequence of distraction and reduced activity. For crossroad accidents, there were further errors resulting from sight obstruction, masked stimuli, focus errors, and law infringements. Lane departure crashes were additionally caused by errors as a result of masked stimuli, law infringements, expectation errors as well as objective and action slips, while same direction accidents occurred additionally because of focus errors, expectation errors, and objective and action slips. Most accidents were influenced by multiple factors. There is a safety potential for Advanced Driver Assistance Systems (ADAS), which support the driver in information assimilation and help to avoid distraction and reduced activity. The design of the ADAS is dependent on the specific influencing factors of the accident type.

  17. Cosmological baryonic and matter densities from 600000 SDSS luminous red galaxies with photometric redshifts

    NASA Astrophysics Data System (ADS)

    Blake, Chris; Collister, Adrian; Bridle, Sarah; Lahav, Ofer

    2007-02-01

    We analyse MegaZ-LRG, a photometric-redshift catalogue of luminous red galaxies (LRGs) based on the imaging data of the Sloan Digital Sky Survey (SDSS) 4th Data Release. MegaZ-LRG, presented in a companion paper, contains >10^6 photometric redshifts derived with ANNZ, an artificial neural network method, constrained by a spectroscopic subsample of ~13000 galaxies obtained by the 2dF-SDSS LRG and Quasar (2SLAQ) survey. The catalogue spans the redshift range 0.4 < z < 0.7 with an rms redshift error σ_z ~ 0.03(1 + z), covering 5914 deg^2 to map out a total cosmic volume 2.5 h^-3 Gpc^3. In this study we use the most reliable 600000 photometric redshifts to measure the large-scale structure using two methods: (1) a spherical harmonic analysis in redshift slices, and (2) a direct reconstruction of the spatial clustering pattern using Fourier techniques. We present the first cosmological parameter fits to galaxy angular power spectra from a photometric-redshift survey. Combining the redshift slices with appropriate covariances, we determine best-fitting values for the matter density Ω_m and baryon density Ω_b of Ω_m h = 0.195 +/- 0.023 and Ω_b/Ω_m = 0.16 +/- 0.036 (with the Hubble parameter h = 0.75 and scalar index of primordial fluctuations n_scalar = 1 held fixed). These results are in agreement with and independent of the latest studies of the cosmic microwave background radiation, and their precision is comparable to analyses of contemporary spectroscopic-redshift surveys. We perform an extensive series of tests which conclude that our power spectrum measurements are robust against potential systematic photometric errors in the catalogue. We conclude that photometric-redshift surveys are competitive with spectroscopic surveys for measuring cosmological parameters in the simplest `vanilla' models. Future deep imaging surveys have great potential for further improvement, provided that systematic errors can be controlled.

  18. Ubiquitous Total Station Development using Smartphone, RSSI and Laser Sensor providing service to Ubi-GIS

    NASA Astrophysics Data System (ADS)

    Shoushtari, M. A.; Sadeghi-Niaraki, H.

    2014-10-01

    The growing trend of technological advances and Micro Electro Mechanical Systems (MEMS) is aimed at making human life more intelligent; accordingly, the ubiquitous computing approach was proposed by Mark Weiser. This paper proposes a ubiquitous surveying solution for the geomatics and surveying field. Ubiquitous surveying provides cost-effective, smart and widely available surveying techniques, whereas traditional surveying equipment is expensive and of limited availability, especially for indoor and everyday surveying jobs. To build a smart surveying instrument, several information technology methods and tools are used, such as the triangle method, the Received Signal Strength Indicator (RSSI) method and a laser sensor. Combined with standard surveying equations, these introduce a modern surveying instrument, called the Ubi-Total Station, which also employs the sensors embedded in a smartphone and a mobile stand. RSSI-based localization and the triangle method are simple and well-known techniques for predicting the position of an unknown node in indoor environments, although additional measures are required to reach sufficient accuracy. The main goal of this paper is to introduce the Ubiquitous Total Station as a development in smart and ubiquitous GIS. To make surveying equipment usable by the general public, the instrument was designed and implemented: a conceptual model of the smartphone-based system was designed and, based on this model, an Android application was developed as a first prototype. The evaluation shows that the absolute errors in the X and Y coordinates are 0.028 and 0.057 m, respectively, and an RMSE of 0.26 was calculated for the RSSI distance measurements. The high price of traditional equipment and its requirement for professional surveyors has given way to intelligent surveying; in the suggested system, smartphones can be used as tools for positioning and coordinating the geometric information of objects.
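
    RSSI-based ranging of the kind used here is commonly modelled with a log-distance path-loss relation, d = d0 * 10^((P0 - RSSI) / (10 n)), where P0 is the received power at the reference distance d0 and n is the path-loss exponent. The sketch below is a generic illustration of that model, not the calibration used for the Ubi-Total Station; all parameter values and readings are assumptions.

def rssi_to_distance(rssi_dbm, p0_dbm=-45.0, d0_m=1.0, path_loss_exp=2.2):
    """Distance implied by a log-distance path-loss model (all parameters assumed)."""
    return d0_m * 10 ** ((p0_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def rmse(estimates, truths):
    return (sum((e - t) ** 2 for e, t in zip(estimates, truths)) / len(truths)) ** 0.5

# hypothetical readings: (measured RSSI in dBm, true distance in m)
readings = [(-52, 2.5), (-60, 4.2), (-66, 10.0), (-71, 13.5)]
estimates = [rssi_to_distance(rssi) for rssi, _ in readings]
print([round(d, 1) for d in estimates])
print(f"RMSE = {rmse(estimates, [t for _, t in readings]):.2f} m")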

  19. Development and implementation of a human accuracy program in patient foodservice.

    PubMed

    Eden, S H; Wood, S M; Ptak, K M

    1987-04-01

    For many years, industry has utilized the concept of human error rates to monitor and minimize human errors in the production process. A consistent quality-controlled product increases consumer satisfaction and repeat purchase of product. Administrative dietitians have applied the concept of human error rates (the number of errors divided by the number of opportunities for error) at four hospitals, with a total bed capacity of 788, within a tertiary-care medical center. Human error rate was used to monitor and evaluate trayline employee performance and to evaluate layout and tasks of trayline stations, in addition to evaluating employees in patient service areas. Long-term employees initially opposed the error rate system with some hostility and resentment, while newer employees accepted the system. All employees now believe that the constant feedback given by supervisors enhances their self-esteem and productivity. Employee error rates are monitored daily and are used to counsel employees when necessary; they are also utilized during annual performance evaluation. Average daily error rates for a facility staffed by new employees decreased from 7% to an acceptable 3%. In a facility staffed by long-term employees, the error rate increased, reflecting improper error documentation. Patient satisfaction surveys reveal that satisfaction with tray accuracy increased from 88% to 92% in the facility staffed by long-term employees and has remained above the 90% standard in the facility staffed by new employees.

  20. Characterisation of false-positive observations in botanical surveys

    PubMed Central

    2017-01-01

    Errors in botanical surveying are a common problem. The presence of a species is easily overlooked, leading to false absences, while misidentifications and other mistakes lead to false-positive observations. While it is common knowledge that these errors occur, there are few data that can be used to quantify and describe these errors. Here we characterise false-positive errors for a controlled set of surveys conducted as part of a field identification test of botanical skill. Surveys were conducted at sites with a verified list of vascular plant species. The candidates were asked to list all the species they could identify in a defined botanically rich area. They were told beforehand that their final score would be the sum of the correct species they listed, but false-positive errors counted against their overall grade. The number of errors varied considerably between people; some people created a high proportion of false-positive errors, but these people were scattered across all skill levels. Therefore, a person's ability to correctly identify a large number of species is not a safeguard against the generation of false-positive errors. There was no phylogenetic pattern to falsely observed species; however, rare species are more likely to be false positives, as are species from species-rich genera. Raising the threshold for the acceptance of an observation reduced false-positive observations dramatically, but at the expense of more false-negative errors. False-positive errors are higher in field surveying of plants than many people may appreciate. Greater stringency is required before accepting species as present at a site, particularly for rare species. Combining multiple surveys resolves the problem, but requires a considerable increase in effort to achieve the same sensitivity as a single survey. Therefore, other methods should be used to raise the threshold for the acceptance of a species. For example, digital data input systems that can verify, feedback and inform the user are likely to reduce false-positive errors significantly. PMID:28533972
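
    The trade-off described above, in which raising the acceptance threshold suppresses false positives at the cost of false negatives, can be made concrete with a simple binomial sketch. If a single survey falsely records an absent species with probability f and detects a present species with probability p, then requiring a species to be recorded in at least m of k independent surveys changes both error rates as below. The rates used are illustrative assumptions, not values from the study.

from math import comb

def at_least_m_of_k(prob_single, m, k):
    """Probability of at least m 'successes' in k independent surveys."""
    return sum(comb(k, i) * prob_single**i * (1 - prob_single)**(k - i) for i in range(m, k + 1))

false_pos_single = 0.05   # assumed per-survey false-positive rate for an absent species
detect_single = 0.7       # assumed per-survey detection rate for a present species
k = 3                     # number of independent surveys combined

for m in range(1, k + 1):
    fp = at_least_m_of_k(false_pos_single, m, k)
    fn = 1 - at_least_m_of_k(detect_single, m, k)
    print(f"accept if recorded in >= {m} of {k} surveys: false-positive {fp:.3f}, false-negative {fn:.3f}")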

  1. PREVALENCE OF UNCORRECTED REFRACTIVE ERRORS IN ADULTS AGED 30 YEARS AND ABOVE IN A RURAL POPULATION IN PAKISTAN.

    PubMed

    Abdullah, Ayesha S; Jadoon, Milhammad Zahid; Akram, Mohammad; Awan, Zahid Hussain; Azam, Mohammad; Safdar, Mohammad; Nigar, Mohammad

    2015-01-01

    Uncorrected refractive errors are a leading cause of visual disability globally. This population-based study was done to estimate the prevalence of uncorrected refractive errors in adults aged 30 years and above in the village of Pawakah, Khyber Pakhtunkhwa (KPK), Pakistan. It was a cross-sectional survey in which 1000 individuals were included randomly. All the individuals were screened for uncorrected refractive errors and those whose visual acuity (VA) was found to be less than 6/6 were refracted. Those in whom refraction was found to be unsatisfactory (i.e., a best-corrected visual acuity of <6/6) underwent further examination to establish the cause of the subnormal vision. A total of 917 subjects participated in the survey (response rate 92%). The prevalence of uncorrected refractive errors was found to be 23.97% among males and 20% among females. The prevalence of visually disabling refractive errors was 6.89% in males and 5.71% in females. The prevalence was seen to increase with age, with maximum prevalence in the 51-60 years age group. Hypermetropia (10.14%) was found to be the commonest refractive error, followed by myopia (6.00%) and astigmatism (5.6%). The prevalence of presbyopia was 57.5% (60.45% in males and 55.23% in females). Poor affordability was the commonest barrier to the use of spectacles, followed by unawareness. Cataract was the commonest reason for impaired vision after refractive correction. The prevalence of blindness was 1.96% (1.53% in males and 2.28% in females) in this community, with cataract as the commonest cause. Despite being the most easily avoidable cause of subnormal vision, uncorrected refractive errors still account for a major proportion of the burden of decreased vision in this area. Effective measures for the screening and affordable correction of uncorrected refractive errors need to be incorporated into the health care delivery system.

  2. The potential of small unmanned aircraft systems and structure-from-motion for topographic surveys: A test of emerging integrated approaches at Cwm Idwal, North Wales

    NASA Astrophysics Data System (ADS)

    Tonkin, T. N.; Midgley, N. G.; Graham, D. J.; Labadz, J. C.

    2014-12-01

    Novel topographic survey methods that integrate both structure-from-motion (SfM) photogrammetry and small unmanned aircraft systems (sUAS) are a rapidly evolving investigative technique. Due to the diverse range of survey configurations available and the infancy of these new methods, further research is required. Here, the accuracy, precision and potential applications of this approach are investigated. A total of 543 images of the Cwm Idwal moraine-mound complex were captured from a light (< 5 kg) semi-autonomous multi-rotor unmanned aircraft system using a consumer-grade 18 MP compact digital camera. The images were used to produce a DSM (digital surface model) of the moraines. The DSM is in good agreement with 7761 total station survey points providing a total vertical RMSE value of 0.517 m and vertical RMSE values as low as 0.200 m for less densely vegetated areas of the DSM. High-precision topographic data can be acquired rapidly using this technique with the resulting DSMs and orthorectified aerial imagery at sub-decimetre resolutions. Positional errors on the total station dataset, vegetation and steep terrain are identified as the causes of vertical disagreement. Whilst this aerial survey approach is advocated for use in a range of geomorphological settings, care must be taken to ensure that adequate ground control is applied to give a high degree of accuracy.
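
    Vertical accuracy figures of this kind are obtained by differencing DSM elevations against independent check points (here, total station observations) and taking the root mean square of the residuals; the mean residual separately indicates any systematic offset. A minimal sketch with made-up checkpoint pairs:

import math

# (dsm_elevation_m, total_station_elevation_m) at the same checkpoint -- hypothetical values
checkpoints = [(312.45, 312.20), (315.10, 315.62), (309.88, 309.70), (320.02, 319.55)]

residuals = [dsm - ref for dsm, ref in checkpoints]
rmse = math.sqrt(sum(r * r for r in residuals) / len(residuals))
mean_error = sum(residuals) / len(residuals)
print(f"vertical RMSE = {rmse:.3f} m, mean error (bias) = {mean_error:.3f} m")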

  3. Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances

    PubMed Central

    Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip

    2015-01-01

    In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. A standard deviation of the measured distance is the accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5–50 m) because it is generally known that this accuracy cannot be increased by simply repeating the measurement because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments’ results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%. PMID:26258777
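
    The calibration idea, determining each distance meter's systematic error as a function of distance and subtracting it, can be sketched as fitting a smooth correction curve to the residuals (measured minus reference distance) observed on a calibration baseline. The functional form below, an additive constant plus a short-period cyclic term of the kind often seen in phase-measuring EDMs, is an assumption for illustration and not the correction function derived in the paper.

import numpy as np

def fit_cyclic_correction(distance_m, residual_mm, unit_length_m=10.0):
    """Fit residual ≈ a + b*sin(2*pi*d/U) + c*cos(2*pi*d/U) by least squares."""
    phase = 2.0 * np.pi * distance_m / unit_length_m
    design = np.column_stack([np.ones_like(distance_m), np.sin(phase), np.cos(phase)])
    coeffs, *_ = np.linalg.lstsq(design, residual_mm, rcond=None)
    return coeffs

def apply_correction(distance_m, coeffs, unit_length_m=10.0):
    phase = 2.0 * np.pi * distance_m / unit_length_m
    return coeffs[0] + coeffs[1] * np.sin(phase) + coeffs[2] * np.cos(phase)

# synthetic calibration baseline: 5-50 m, residuals with a 0.8 mm cyclic error plus noise
rng = np.random.default_rng(2)
d = np.linspace(5, 50, 90)
resid = 0.4 + 0.8 * np.sin(2 * np.pi * d / 10.0) + rng.normal(0, 0.5, d.size)

coeffs = fit_cyclic_correction(d, resid)
corrected = resid - apply_correction(d, coeffs)
print(f"s.d. before correction: {resid.std(ddof=1):.2f} mm, after: {corrected.std(ddof=1):.2f} mm")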

  4. Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances.

    PubMed

    Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip

    2015-08-06

    In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. A standard deviation of the measured distance is the accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5-50 m) because it is generally known that this accuracy cannot be increased by simply repeating the measurement because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments' results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%.

  5. Measurement error in the Liebowitz Social Anxiety Scale: results from a general adult population in Japan.

    PubMed

    Takada, Koki; Takahashi, Kana; Hirao, Kazuki

    2018-01-17

    Although the self-report version of the Liebowitz Social Anxiety Scale (LSAS) is frequently used to measure social anxiety, data are lacking on the smallest detectable change (SDC), an important index of measurement error. We therefore aimed to determine the SDC of the LSAS. Japanese adults aged 20-69 years were invited from a panel managed by a nationwide internet research agency. We then conducted a test-retest internet survey with a two-week interval to estimate the SDC at the individual (SDC_ind) and group (SDC_group) levels. The analysis included 1300 participants. The SDC_ind and SDC_group for the total fear subscale (scoring range: 0-72) were 23.52 points (32.7%) and 0.65 points (0.9%), respectively. The SDC_ind and SDC_group for the total avoidance subscale (scoring range: 0-72) were 32.43 points (45.0%) and 0.90 points (1.2%), respectively. The SDC_ind and SDC_group for the overall total score (scoring range: 0-144) were 45.90 points (31.9%) and 1.27 points (0.9%), respectively. Measurement error is large and indicates the potential for major problems when attempting to use the LSAS to detect changes at the individual level. These results should be considered when using the LSAS as a measure of treatment change.
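
    The smallest detectable change is conventionally derived from the standard error of measurement (SEM) of the test-retest data as SDC_ind = 1.96 * sqrt(2) * SEM, with SDC_group = SDC_ind / sqrt(n); the group-level values above are consistent with this formula for n = 1300. A minimal sketch of the calculation, working back from the fear-subscale value rather than from the study's raw data:

import math

def sdc_individual(sem):
    """Smallest detectable change at the individual level (95% confidence)."""
    return 1.96 * math.sqrt(2.0) * sem

def sdc_group(sdc_ind, n):
    """Smallest detectable change at the group level for a sample of size n."""
    return sdc_ind / math.sqrt(n)

# SEM chosen so that the individual-level SDC reproduces the fear-subscale value above
sem_fear = 23.52 / (1.96 * math.sqrt(2.0))
sdc_ind = sdc_individual(sem_fear)
print(f"SEM ≈ {sem_fear:.2f} points, SDC_ind ≈ {sdc_ind:.2f} points")
print(f"SDC_group (n = 1300) ≈ {sdc_group(sdc_ind, 1300):.2f} points")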

  6. Medication errors in chemotherapy preparation and administration: a survey conducted among oncology nurses in Turkey.

    PubMed

    Ulas, Arife; Silay, Kamile; Akinci, Sema; Dede, Didem Sener; Akinci, Muhammed Bulent; Sendur, Mehmet Ali Nahit; Cubukcu, Erdem; Coskun, Hasan Senol; Degirmenci, Mustafa; Utkan, Gungor; Ozdemir, Nuriye; Isikdogan, Abdurrahman; Buyukcelik, Abdullah; Inanc, Mevlude; Bilici, Ahmet; Odabasi, Hatice; Cihan, Sener; Avci, Nilufer; Yalcin, Bulent

    2015-01-01

    Medication errors in oncology may cause severe clinical problems due to low therapeutic indices and high toxicity of chemotherapeutic agents. We aimed to investigate unintentional medication errors and underlying factors during chemotherapy preparation and administration, based on a systematic survey conducted to reflect oncology nurses' experience. This study was conducted in 18 adult chemotherapy units with the volunteer participation of 206 nurses. A survey was developed by the primary investigators, and medication errors (MAEs) were defined as preventable errors during the prescription, ordering, preparation or administration of medication. The survey consisted of 4 parts: demographic features of nurses; workload of chemotherapy units; errors and their estimated monthly number during chemotherapy preparation and administration; and evaluation of the possible factors responsible for MEs. The survey was conducted by face-to-face interview and data analyses were performed with descriptive statistics. Chi-square or Fisher exact tests were used for a comparative analysis of categorical data. Some 83.4% of the 210 nurses reported one or more than one error during chemotherapy preparation and administration. Prescribing or ordering of wrong doses by physicians (65.7%) and noncompliance with administration sequences during chemotherapy administration (50.5%) were the most common errors. The most common estimated average monthly error was not following the administration sequence of the chemotherapeutic agents (4.1 times/month, range 1-20). The most important underlying reasons for medication errors were heavy workload (49.7%) and an insufficient number of staff (36.5%). Our findings suggest that the probability of medication error is very high during chemotherapy preparation and administration, the most common involving prescribing and ordering errors. Further studies must address strategies to minimize medication errors in patients receiving chemotherapy, determine sufficient protective measures and establish multistep control mechanisms.

  7. Validation of the Family Inpatient Communication Survey.

    PubMed

    Torke, Alexia M; Monahan, Patrick; Callahan, Christopher M; Helft, Paul R; Sachs, Greg A; Wocial, Lucia D; Slaven, James E; Montz, Kianna; Inger, Lev; Burke, Emily S

    2017-01-01

    Although many family members who make surrogate decisions report problems with communication, there is no validated instrument to accurately measure surrogate/clinician communication for older adults in the acute hospital setting. The objective of this study was to validate a survey of surrogate-rated communication quality in the hospital that would be useful to clinicians, researchers, and health systems. After expert review and cognitive interviewing (n = 10 surrogates), we enrolled 350 surrogates (250 development sample and 100 validation sample) of hospitalized adults aged 65 years and older from three hospitals in one metropolitan area. The communication survey and a measure of decision quality were administered within hospital days 3 and 10. Mental health and satisfaction measures were administered six to eight weeks later. Factor analysis showed support for both one-factor (Total Communication) and two-factor models (Information and Emotional Support). Item reduction led to a final 30-item scale. For the validation sample, internal reliability (Cronbach's alpha) was 0.96 (total), 0.94 (Information), and 0.90 (Emotional Support). Confirmatory factor analysis fit statistics were adequate (one-factor model, comparative fit index = 0.981, root mean square error of approximation = 0.62, weighted root mean square residual = 1.011; two-factor model comparative fit index = 0.984, root mean square error of approximation = 0.055, weighted root mean square residual = 0.930). Total score and subscales showed significant associations with the Decision Conflict Scale (Pearson correlation -0.43, P < 0.001 for total score). Emotional Support was associated with improved mental health outcomes at six to eight weeks, such as anxiety (-0.19 P < 0.001), and Information was associated with satisfaction with the hospital stay (0.49, P < 0.001). The survey shows high reliability and validity in measuring communication experiences for hospital surrogates. The scale has promise for measurement of communication quality and is predictive of important outcomes, such as surrogate satisfaction and well-being. Copyright © 2016 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
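
    Internal reliability figures such as those above are typically Cronbach's alpha, computed from the item variances and the variance of the total score as alpha = k/(k-1) * (1 - sum of item variances / variance of total). A minimal sketch with synthetic item responses rather than the study's data:

import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = survey items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# synthetic responses: 200 respondents, 30 items driven by one latent factor
rng = np.random.default_rng(3)
latent = rng.normal(0, 1, (200, 1))
responses = 4 + latent + rng.normal(0, 0.8, (200, 30))   # 30 correlated items
print(f"Cronbach's alpha ≈ {cronbach_alpha(responses):.2f}")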

  8. Head office commitment to quality-related event reporting in community pharmacy

    PubMed Central

    Scobie, Andrea C.; Boyle, Todd A.; MacKinnon, Neil J.; Mahaffey, Thomas

    2012-01-01

    Background: This research explores how perceptions of head office commitment to quality-related event (QRE) reporting differ between pharmacy staff type and between pharmacies with high and low QRE reporting and learning performance. QREs include known, alleged or suspected medication errors that reach the patient as well as medication errors that are intercepted prior to dispensing. Methods: A survey questionnaire was mailed in the spring of 2010 to 427 pharmacy managers, pharmacists and pharmacy technicians in Nova Scotia. Nonparametric statistics were used to determine differences based on pharmacy staff type and pharmacy performance. Content analysis was used to analyze the responses to open-ended survey questions. Results: A total of 210 surveys were returned, for a response rate of 49.2%. However, the current study used only the subgroup of pharmacy staff who self-reported working at a chain pharmacy, for a total of 124 usable questionnaires. The results showed that community pharmacies viewed head office commitment to QRE reporting as an area to improve. In general, high-performing pharmacies ranked head office commitment higher than low-performing pharmacies. Discussion: One possible reason why high-performing pharmacies ranked the variables higher may be that increased levels of head office support for QRE processes have led these pharmacies to adopt and commit to QRE processes and thus increase their performance. Conclusion: Demonstrated commitment to QRE reporting, ongoing encouragement and targeted messages to staff could be important steps for head office to increase QRE reporting and learning in community pharmacies. PMID:23509532

  9. Head office commitment to quality-related event reporting in community pharmacy.

    PubMed

    Scobie, Andrea C; Boyle, Todd A; Mackinnon, Neil J; Mahaffey, Thomas

    2012-05-01

    This research explores how perceptions of head office commitment to quality-related event (QRE) reporting differ between pharmacy staff type and between pharmacies with high and low QRE reporting and learning performance. QREs include known, alleged or suspected medication errors that reach the patient as well as medication errors that are intercepted prior to dispensing. A survey questionnaire was mailed in the spring of 2010 to 427 pharmacy managers, pharmacists and pharmacy technicians in Nova Scotia. Nonparametric statistics were used to determine differences based on pharmacy staff type and pharmacy performance. Content analysis was used to analyze the responses to open-ended survey questions. A total of 210 surveys were returned, for a response rate of 49.2%. However, the current study used only the subgroup of pharmacy staff who self-reported working at a chain pharmacy, for a total of 124 usable questionnaires. The results showed that community pharmacies viewed head office commitment to QRE reporting as an area to improve. In general, high-performing pharmacies ranked head office commitment higher than low-performing pharmacies. One possible reason why high-performing pharmacies ranked the variables higher may be that increased levels of head office support for QRE processes have led these pharmacies to adopt and commit to QRE processes and thus increase their performance. Demonstrated commitment to QRE reporting, ongoing encouragement and targeted messages to staff could be important steps for head office to increase QRE reporting and learning in community pharmacies.

  10. Factors associated with disclosure of medical errors by housestaff.

    PubMed

    Kronman, Andrea C; Paasche-Orlow, Michael; Orlander, Jay D

    2012-04-01

    Attributes of the organisational culture of residency training programmes may impact patient safety. Training environments are complex, composed of clinical teams, residency programmes, and clinical units. We examined the relationship between residents' perceptions of their training environment and disclosure of or apology for their worst error. Anonymous, self-administered surveys were distributed to Medicine and Surgery residents at Boston Medical Center in 2005. Surveys asked residents to describe their worst medical error, and to answer selected questions from validated surveys measuring elements of working environments that promote learning from error. Subscales measured the microenvironments of the clinical team, residency programme, and clinical unit. Univariate and bivariate statistical analyses examined relationships between trainee characteristics, their perceived learning environment(s), and their responses to the error. Out of 109 surveys distributed to residents, 99 surveys were returned (91% overall response rate), two incomplete surveys were excluded, leaving 97: 61% internal medicine, 39% surgery, 59% male residents. While 31% reported apologising for the situation associated with the error, only 17% reported disclosing the error to patients and/or family. More male residents disclosed the error than female residents (p=0.04). Surgery residents scored higher on the subscales of safety culture pertaining to the residency programme (p=0.02) and managerial commitment to safety (p=0.05). Our Medical Culture Summary score was positively associated with disclosure (p=0.04) and apology (p=0.05). Factors in the learning environments of residents are associated with responses to medical errors. Organisational safety culture can be measured, and used to evaluate environmental attributes of clinical training that are associated with disclosure of, and apology for, medical error.

  11. Do Survey Data Estimate Earnings Inequality Correctly? Measurement Errors among Black and White Male Workers

    ERIC Educational Resources Information Center

    Kim, ChangHwan; Tamborini, Christopher R.

    2012-01-01

    Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…

  12. Sleep quality, but not quantity, is associated with self-perceived minor error rates among emergency department nurses.

    PubMed

    Weaver, Amy L; Stutzman, Sonja E; Supnet, Charlene; Olson, DaiWai M

    2016-03-01

    The emergency department (ED) is demanding and high risk. Sleep quantity has been hypothesized to impact patient care. This study investigated the hypothesis that fatigue and impaired mentation, due to sleep disturbance and shortened overall sleeping hours, would lead to increased nursing errors. This is a prospective observational study of 30 ED nurses using a self-administered survey and sleep architecture measured by wrist actigraphy as predictors of self-reported error rates. An actigraphy device was worn prior to working a 12-hour shift and nurses completed the Pittsburgh Sleep Quality Index (PSQI). Error rates were reported on a visual analog scale at the end of a 12-hour shift. The PSQI responses indicated that 73.3% of subjects had poor sleep quality. Lower sleep quality measured by actigraphy (hours asleep/hours in bed) was associated with higher self-perceived minor errors. Sleep quantity (total hours slept) was not associated with minor, moderate, or severe errors. Our study found that ED nurses' sleep quality immediately prior to working a 12-hour shift is more predictive of error than sleep quantity. These results present evidence that a "good night's sleep" prior to working a nursing shift in the ED is beneficial for reducing minor errors. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Influence of survey strategy and interpolation model on DEM quality

    NASA Astrophysics Data System (ADS)

    Heritage, George L.; Milan, David J.; Large, Andrew R. G.; Fuller, Ian C.

    2009-11-01

    Accurate characterisation of morphology is critical to many studies in the field of geomorphology, particularly those dealing with changes over time. Digital elevation models (DEMs) are commonly used to represent morphology in three dimensions. The quality of the DEM is largely a function of the accuracy of individual survey points, field survey strategy, and the method of interpolation. Recommendations concerning field survey strategy and appropriate methods of interpolation are currently lacking. Furthermore, the majority of studies to date consider error to be uniform across a surface. This study quantifies survey strategy and interpolation error for a gravel bar on the River Nent, Blagill, Cumbria, UK. Five sampling strategies were compared: (i) cross section; (ii) bar outline only; (iii) bar and chute outline; (iv) bar and chute outline with spot heights; and (v) aerial LiDAR equivalent, derived from degraded terrestrial laser scan (TLS) data. Digital Elevation Models were then produced using five different common interpolation algorithms. Each resultant DEM was differentiated from a terrestrial laser scan of the gravel bar surface in order to define the spatial distribution of vertical and volumetric error. Overall triangulation with linear interpolation (TIN) or point kriging appeared to provide the best interpolators for the bar surface. Lowest error on average was found for the simulated aerial LiDAR survey strategy, regardless of interpolation technique. However, comparably low errors were also found for the bar-chute-spot sampling strategy when TINs or point kriging was used as the interpolator. The magnitude of the errors between survey strategy exceeded those found between interpolation technique for a specific survey strategy. Strong relationships between local surface topographic variation (as defined by the standard deviation of vertical elevations in a 0.2-m diameter moving window), and DEM errors were also found, with much greater errors found at slope breaks such as bank edges. A series of curves are presented that demonstrate these relationships for each interpolation and survey strategy. The simulated aerial LiDAR data set displayed the lowest errors across the flatter surfaces; however, sharp slope breaks are better modelled by the morphologically based survey strategy. The curves presented have general application to spatially distributed data of river beds and may be applied to standard deviation grids to predict spatial error within a surface, depending upon sampling strategy and interpolation algorithm.
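
    The comparison above can be roughly illustrated in code. The following Python sketch uses a synthetic surface and hypothetical random spot heights (not the River Nent data) to build DEMs with several scipy interpolators and report each DEM's vertical RMSE against a dense reference grid standing in for the TLS survey.

      import numpy as np
      from scipy.interpolate import griddata

      rng = np.random.default_rng(2)

      # Dense reference grid standing in for the terrestrial laser scan (synthetic surface).
      gx, gy = np.meshgrid(np.linspace(0, 10, 200), np.linspace(0, 10, 200))
      reference = 0.3 * np.sin(gx) + 0.1 * gy          # hypothetical elevations (m)

      # Sparse field survey: random spot heights standing in for one sampling strategy.
      pts = rng.uniform(0, 10, size=(400, 2))
      z = 0.3 * np.sin(pts[:, 0]) + 0.1 * pts[:, 1]

      for method in ("nearest", "linear", "cubic"):     # "linear" behaves like a TIN
          dem = griddata(pts, z, (gx, gy), method=method)
          err = dem - reference
          rmse = np.sqrt(np.nanmean(err ** 2))          # NaNs fall outside the convex hull
          print(f"{method:8s} vertical RMSE = {rmse:.4f} m")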

  14. The statistical validity of nursing home survey findings.

    PubMed

    Woolley, Douglas C

    2011-11-01

    The Medicare nursing home survey is a high-stakes process whose findings greatly affect nursing homes, their current and potential residents, and the communities they serve. Therefore, survey findings must achieve high validity. This study looked at the validity of one key assessment made during a nursing home survey: the observation of the rate of errors in administration of medications to residents (med-pass). The design was a statistical analysis of the case under study and of alternative hypothetical cases; the setting was a skilled nursing home affiliated with a local medical school, and the participants were the nursing home administrators and the medical director in this observational study. The measurements were the probability that state nursing home surveyors make a Type I or Type II error in observing med-pass error rates, based on the current case and on a series of postulated med-pass error rates. In the common situation such as our case, where med-pass errors occur at slightly above a 5% rate after 50 observations and therefore trigger a citation, the chance that the true rate remains above 5% after a large number of observations is just above 50%. If the true med-pass error rate were as high as 10%, and the survey team wished to achieve 75% accuracy in determining that a citation was appropriate, they would have to make more than 200 med-pass observations. In the more common situation where med-pass errors are closer to 5%, the team would have to observe more than 2000 med-passes to achieve even a modest 75% accuracy in their determinations. In settings where error rates are low, large numbers of observations of an activity must be made to reach acceptable validity of estimates for the true rates of errors. In observing key nursing home functions with current methodology, the State Medicare nursing home survey process does not adhere to well-known principles of valid error determination. Alternate approaches in survey methodology are discussed. Copyright © 2011 American Medical Directors Association. Published by Elsevier Inc. All rights reserved.
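
    The scale of the problem can be sketched with a simple calculation. The following Python fragment is not the paper's method; it assumes a flat Beta(1, 1) prior and asks how confident an observer can be that the true med-pass error rate exceeds the 5% citation threshold after a given number of observations, showing how many observations are needed before that confidence becomes high.

      from scipy.stats import beta

      def prob_true_rate_above(threshold, k_errors, n_observed):
          """Posterior P(true error rate > threshold) under a uniform Beta(1, 1) prior (assumption)."""
          return beta.sf(threshold, 1 + k_errors, 1 + n_observed - k_errors)

      # 3 errors in 50 observations is "slightly above" the 5% citation threshold,
      # yet the posterior still leaves appreciable probability that the true rate is below it.
      print(prob_true_rate_above(0.05, 3, 50))

      # The same observed 6% rate over roughly 2000 observations gives far more confidence.
      print(prob_true_rate_above(0.05, 120, 2000))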

  15. The first Extreme Ultraviolet Explorer source catalog

    NASA Technical Reports Server (NTRS)

    Bowyer, S.; Lieu, R.; Lampton, M.; Lewis, J.; Wu, X.; Drake, J. J.; Malina, R. F.

    1994-01-01

    The Extreme Ultraviolet Explorer (EUVE) has conducted an all-sky survey to locate and identify point sources of emission in four extreme ultraviolet wavelength bands centered at approximately 100, 200, 400, and 600 A. A companion deep survey of a strip along half the ecliptic plane was simultaneously conducted. In this catalog we report the sources found in these surveys using rigorously defined criteria uniformly applied to the data set. These are the first surveys to be made in the three longer wavelength bands, and a substantial number of sources were detected in these bands. We present a number of statistical diagnostics of the surveys, including their source counts, their sensitivities, and their positional error distributions. We provide a separate list of those sources reported in the EUVE Bright Source List which did not meet our criteria for inclusion in our primary list. We also provide improved count rate and position estimates for a majority of these sources based on the improved methodology used in this paper. In total, this catalog lists 410 point sources, of which 372 have plausible optical, ultraviolet, or X-ray identifications, which are also listed.

  16. Knowledge of healthcare professionals about medication errors in hospitals

    PubMed Central

    Abdel-Latif, Mohamed M. M.

    2016-01-01

    Context: Medication errors are the most common types of medical errors in hospitals and a leading cause of morbidity and mortality among patients. Aims: The aim of the present study was to assess the knowledge of healthcare professionals about medication errors in hospitals. Settings and Design: A self-administered questionnaire was distributed to randomly selected healthcare professionals in eight hospitals in Madinah, Saudi Arabia. Subjects and Methods: An 18-item survey was designed and comprised questions on demographic data, knowledge of medication errors, availability of reporting systems in hospitals, attitudes toward error reporting, and causes of medication errors. Statistical Analysis Used: Data were analyzed with Statistical Package for the Social Sciences software Version 17. Results: A total of 323 healthcare professionals completed the questionnaire (64.6% response rate): 138 (42.72%) physicians, 34 (10.53%) pharmacists, and 151 (46.75%) nurses. A majority of the participants had good knowledge of the medication error concept and its dangers to patients. Only 68.7% of them were aware of reporting systems in hospitals. Healthcare professionals revealed that there was no clear mechanism available for reporting errors in most hospitals. Prescribing (46.5%) and administration (29%) errors were the main causes of errors. The drugs most frequently involved in medication errors were anti-hypertensives, antidiabetics, antibiotics, digoxin, and insulin. Conclusions: This study revealed differences in awareness among healthcare professionals toward medication errors in hospitals. The poor knowledge about medication errors emphasizes the urgent necessity to adopt appropriate measures to raise awareness about medication errors in Saudi hospitals. PMID:27330261

  17. Uncorrected refractive errors and spectacle utilisation rate in Tehran: the unmet need

    PubMed Central

    Fotouhi, A; Hashemi, H; Raissi, B; Mohammad, K

    2006-01-01

    Aim To determine the prevalence of the met and unmet need for spectacles and their associated factors in the population of Tehran. Methods 6497 Tehran citizens were enrolled through random cluster sampling and were invited to a clinic for an interview and ophthalmic examination. 4354 (70.3%) participated in the survey, and refraction measurement results of 4353 people aged 5 years and over are presented. The unmet need for spectacles was defined as the proportion of people who did not use spectacles despite a correctable visual acuity of worse than 20/40 in the better eye. Results The need for spectacles in the studied population, standardised for age and sex, was 14.1% (95% confidence interval (CI), 12.8% to 15.4%). This need was met with appropriate spectacles in 416 people (9.3% of the total sample), while it was unmet in 230 people, representing 4.8% of the total sample population (95% CI, 4.1% to 5.4%). The spectacle coverage rate (met need/(met need + unmet need)) was 66.0%. Multivariate logistic regression showed that variables of age, education, and type of refractive error were associated with lack of spectacle correction. There was an increase in the unmet need with older age, lesser education, and myopia. Conclusion This survey determined the met and unmet need for spectacles in a Tehran population. It also identified high risk groups with uncorrected refractive errors to guide intervention programmes for the society. While the study showed the unmet need for spectacles and its determinants, more extensive studies towards the causes of unmet need are recommended. PMID:16488929

  18. Evaluating a Modular Design Approach to Collecting Survey Data Using Text Messages

    PubMed Central

    West, Brady T.; Ghimire, Dirgha; Axinn, William G.

    2015-01-01

    This article presents analyses of data from a pilot study in Nepal that was designed to provide an initial examination of the errors and costs associated with an innovative methodology for survey data collection. We embedded a randomized experiment within a long-standing panel survey, collecting data on a small number of items with varying sensitivity from a probability sample of 450 young Nepalese adults. Survey items ranged from simple demographics to indicators of substance abuse and mental health problems. Sampled adults were randomly assigned to one of three different modes of data collection: 1) a standard one-time telephone interview, 2) a “single sitting” back-and-forth interview with an interviewer using text messaging, and 3) an interview using text messages within a modular design framework (which generally involves breaking the survey response task into distinct parts over a short period of time). Respondents in the modular group were asked to respond (via text message exchanges with an interviewer) to only one question on a given day, rather than complete the entire survey. Both bivariate and multivariate analyses demonstrate that the two text messaging modes increased the probability of disclosing sensitive information relative to the telephone mode, and that respondents in the modular design group, while responding less frequently, found the survey to be significantly easier. Further, those who responded in the modular group were not unique in terms of available covariates, suggesting that the reduced item response rates only introduced limited nonresponse bias. Future research should consider enhancing this methodology, applying it with other modes of data collection (e. g., web surveys), and continuously evaluating its effectiveness from a total survey error perspective. PMID:26322137

  19. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, T. S.

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes, when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane, can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.

  20. The computation of equating errors in international surveys in education.

    PubMed

    Monseur, Christian; Berezner, Alla

    2007-01-01

    Since the IEA's Third International Mathematics and Science Study, one of the major objectives of international surveys in education has been to report trends in achievement. The names of the two current IEA surveys reflect this growing interest: Trends in International Mathematics and Science Study (TIMSS) and Progress in International Reading Literacy Study (PIRLS). Similarly a central concern of the OECD's PISA is with trends in outcomes over time. To facilitate trend analyses these studies link their tests using common item equating in conjunction with item response modelling methods. IEA and PISA policies differ in terms of reporting the error associated with trends. In IEA surveys, the standard errors of the trend estimates do not include the uncertainty associated with the linking step while PISA does include a linking error component in the standard errors of trend estimates. In other words, PISA implicitly acknowledges that trend estimates partly depend on the selected common items, while the IEA's surveys do not recognise this source of error. Failing to recognise the linking error leads to an underestimation of the standard errors and thus increases the Type I error rate, thereby resulting in reporting of significant changes in achievement when in fact these are not significant. The growing interest of policy makers in trend indicators and the impact of the evaluation of educational reforms appear to be incompatible with such underestimation. However, the procedure implemented by PISA raises a few issues about the underlying assumptions for the computation of the equating error. After a brief introduction, this paper will describe the procedure PISA implemented to compute the linking error. The underlying assumptions of this procedure will then be discussed. Finally an alternative method based on replication techniques will be presented, based on a simulation study and then applied to the PISA 2000 data.
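
    For readers unfamiliar with the linking-error idea, the following Python sketch (hypothetical item parameters, not PISA data) computes one common formulation of the linking error - the variability of item-by-item difficulty shifts between two cycles over the number of link items - together with a jackknife version, a replication technique in the spirit of the alternative the paper discusses.

      import numpy as np

      rng = np.random.default_rng(3)
      n_items = 20

      # Hypothetical difficulty estimates (logits) for the same link items in two cycles.
      diff_cycle1 = rng.normal(0.0, 1.0, n_items)
      diff_cycle2 = diff_cycle1 + rng.normal(0.05, 0.10, n_items)   # small item-by-cycle shifts

      shifts = diff_cycle2 - diff_cycle1

      # Common formulation: standard deviation of the item shifts over sqrt(number of link items).
      linking_error = shifts.std(ddof=1) / np.sqrt(n_items)
      print(f"linking error (logits): {linking_error:.3f}")

      # Jackknife over link items; for a simple mean this reproduces the formula above,
      # but the replication approach generalizes to more complex linking designs.
      jack = np.array([np.delete(shifts, i).mean() for i in range(n_items)])
      jk_error = np.sqrt((n_items - 1) / n_items * np.sum((jack - jack.mean()) ** 2))
      print(f"jackknife linking error: {jk_error:.3f}")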

  1. The total satellite population of the Milky Way

    NASA Astrophysics Data System (ADS)

    Newton, Oliver; Cautun, Marius; Jenkins, Adrian; Frenk, Carlos S.; Helly, John C.

    2018-05-01

    The total number and luminosity function of the population of dwarf galaxies of the Milky Way (MW) provide important constraints on the nature of the dark matter and on the astrophysics of galaxy formation at low masses. However, only a partial census of this population exists because of the flux limits and restricted sky coverage of existing Galactic surveys. We combine the sample of satellites recently discovered by the Dark Energy Survey (DES) survey with the satellites found in Sloan Digital Sky Survey (SDSS) Data Release 9 (together these surveys cover nearly half the sky) to estimate the total luminosity function of satellites down to MV = 0. We apply a new Bayesian inference method in which we assume that the radial distribution of satellites independently of absolute magnitude follows that of subhaloes selected according to their peak maximum circular velocity. We find that there should be at least 124^{+40}_{-27}(68% CL, statistical error) satellites brighter than MV = 0 within 300kpc of the Sun. As a result of our use of new data and better simulations, and a more robust statistical method, we infer a much smaller population of satellites than reported in previous studies using earlier SDSS data only; we also address an underestimation of the uncertainties in earlier work by accounting for stochastic effects. We find that the inferred number of faint satellites depends only weakly on the assumed mass of the MW halo and we provide scaling relations to extend our results to different assumed halo masses and outer radii. We predict that half of our estimated total satellite population of the MW should be detected by the Large Synoptic Survey Telescope (LSST). The code implementing our estimation method is available online.†

  2. Secondary analysis of national survey datasets.

    PubMed

    Boo, Sunjoo; Froelicher, Erika Sivarajan

    2013-06-01

    This paper describes the methodological issues associated with secondary analysis of large national survey datasets. Issues about survey sampling, data collection, and non-response and missing data in terms of methodological validity and reliability are discussed. Although reanalyzing large national survey datasets is an expedient and cost-efficient way of producing nursing knowledge, successful investigations require a methodological consideration of the intrinsic limitations of secondary survey analysis. Nursing researchers using existing national survey datasets should understand potential sources of error associated with survey sampling, data collection, and non-response and missing data. Although it is impossible to eliminate all potential errors, researchers using existing national survey datasets must be aware of the possible influence of errors on the results of the analyses. © 2012 The Authors. Japan Journal of Nursing Science © 2012 Japan Academy of Nursing Science.

  3. Simulation of streamflow, evapotranspiration, and groundwater recharge in the Lower Frio River watershed, south Texas, 1961-2008

    USGS Publications Warehouse

    Lizarraga, Joy S.; Ockerman, Darwin J.

    2011-01-01

    The U.S. Geological Survey, in cooperation with the U.S. Army Corps of Engineers, Fort Worth District; the City of Corpus Christi; the Guadalupe-Blanco River Authority; the San Antonio River Authority; and the San Antonio Water System, configured, calibrated, and tested a watershed model for a study area consisting of about 5,490 mi2 of the Frio River watershed in south Texas. The purpose of the model is to contribute to the understanding of watershed processes and hydrologic conditions in the lower Frio River watershed. The model simulates streamflow, evapotranspiration (ET), and groundwater recharge by using a numerical representation of physical characteristics of the landscape, and meteorological and streamflow data. Additional time-series inputs to the model include wastewater-treatment-plant discharges, surface-water withdrawals, and estimated groundwater inflow from Leona Springs. Model simulations of streamflow, ET, and groundwater recharge were done for various periods of record depending upon available measured data for input and comparison, starting as early as 1961. Because of the large size of the study area, the lower Frio River watershed was divided into 12 subwatersheds; separate Hydrological Simulation Program-FORTRAN models were developed for each subwatershed. Simulation of the overall study area involved running simulations in downstream order. Output from the model was summarized by subwatershed, point locations, reservoir reaches, and the Carrizo-Wilcox aquifer outcrop. Four long-term U.S. Geological Survey streamflow-gaging stations and two short-term streamflow-gaging stations were used for streamflow model calibration and testing with data from 1991-2008. Calibration was based on data from 2000-08, and testing was based on data from 1991-99. Choke Canyon Reservoir stage data from 1992-2008 and monthly evaporation estimates from 1999-2008 also were used for model calibration. Additionally, 2006-08 ET data from a U.S. Geological Survey meteorological station in Medina County were used for calibration. Streamflow and ET calibration were considered good or very good. For the 2000-08 calibration period, total simulated flow volume and the flow volume of the highest 10 percent of simulated daily flows were calibrated to within about 10 percent of measured volumes at six U.S. Geological Survey streamflow-gaging stations. The flow volume of the lowest 50 percent of daily flows was not simulated as accurately but represented a small percent of the total flow volume. The model-fit efficiency for the weekly mean streamflow during the calibration periods ranged from 0.60 to 0.91, and the root mean square error ranged from 16 to 271 percent of the mean flow rate. The simulated total flow volumes during the testing periods at the long-term gaging stations exceeded the measured total flow volumes by approximately 22 to 50 percent at three stations and were within 7 percent of the measured total flow volumes at one station. For the longer 1961-2008 simulation period at the long-term stations, simulated total flow volumes were within about 3 to 18 percent of measured total flow volumes. The calibrations made by using Choke Canyon reservoir volume for 1992-2008, reservoir evaporation for 1999-2008, and ET in Medina County for 2006-08, are considered very good. Model limitations include possible errors related to model conceptualization and parameter variability, lack of data to better quantify certain model inputs, and measurement errors. 
Uncertainty regarding the degree to which available rainfall data represent actual rainfall is potentially the most serious source of measurement error. A sensitivity analysis was performed for the Upper San Miguel subwatershed model to show the effect of changes to model parameters on the estimated mean recharge, ET, and surface runoff from that part of the Carrizo-Wilcox aquifer outcrop. Simulated recharge was most sensitive to the changes in the lower-zone ET (LZ

  4. Simulation of quantity and quality of storm runoff for urban catchments in Fresno, California

    USGS Publications Warehouse

    Guay, J.R.; Smith, P.E.

    1988-01-01

    Rainfall-runoff models were developed for a multiple-dwelling residential catchment (2 applications), a single-dwelling residential catchment, and a commercial catchment in Fresno, California, using the U.S. Geological Survey Distributed Routing Rainfall-Runoff Model (DR3M-II). A runoff-quality model also was developed at the commercial catchment using the Survey's Multiple-Event Urban Runoff Quality model (DR3M-qual). The purpose of this study was: (1) to demonstrate the capabilities of the two models for use in designing storm drains, estimating the frequency of storm runoff loads, and evaluating the effectiveness of street sweeping on an urban drainage catchment; and (2) to determine the simulation accuracies of these models. Simulation errors of the two models were summarized as the median absolute deviation in percent (mad) between measured and simulated values. Calibration and verification mad errors for runoff volumes and peak discharges ranged from 14 to 20%. The estimated annual storm-runoff loads, in pounds/acre of effective impervious area, that could occur once every hundred years at the commercial catchment were 95 for dissolved solids, 1.6 for dissolved nitrite plus nitrate, 0.31 for total recoverable lead, and 120 for suspended sediment. Calibration and verification mad errors for the above constituents ranged from 11 to 54%. (USGS)

  5. The impact of safety organizing, trusted leadership, and care pathways on reported medication errors in hospital nursing units.

    PubMed

    Vogus, Timothy J; Sutcliffe, Kathleen M

    2011-01-01

    Prior research has found that safety organizing behaviors of registered nurses (RNs) positively impact patient safety. However, although we know that organizational practices often have more powerful effects when combined with other mutually reinforcing practices, little research exists on the joint benefits of safety organizing and other contextual factors believed to foster safety. Specifically, we examined the benefits of bundling safety organizing with leadership (trust in manager) and design (use of care pathways) factors on reported medication errors. Participants were 1033 RNs and 78 nurse managers in 78 emergency, internal medicine, intensive care, and surgery nursing units in 10 acute-care hospitals in Indiana, Iowa, Maryland, Michigan, and Ohio who completed questionnaires between December 2003 and June 2004. The design was a cross-sectional analysis of medication errors reported to the hospital incident reporting system for the 6 months after administration of the survey, linked to survey data on safety organizing, trust in manager, use of care pathways, and RN characteristics and staffing. Multilevel Poisson regression analyses indicated that the benefits of safety organizing on reported medication errors were amplified when paired with high levels of trust in manager or the use of care pathways. Safety organizing plays a key role in improving patient safety on hospital nursing units, especially when bundled with other organizational components of a safety supportive system.
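
    The interaction ("bundling") idea can be sketched with simulated data. The following Python fragment uses hypothetical unit-level variable names and a single-level Poisson regression with statsmodels; it is only an illustration of testing whether two practices jointly reduce error counts, and omits the multilevel structure of the published analysis.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(4)
      n_units = 78

      # Simulated unit-level data (hypothetical variable names, not the study data).
      df = pd.DataFrame({
          "safety_organizing": rng.normal(0, 1, n_units),
          "trust_in_manager": rng.normal(0, 1, n_units),
      })
      rate = np.exp(1.0 - 0.3 * df["safety_organizing"]
                    - 0.2 * df["safety_organizing"] * df["trust_in_manager"])
      df["errors"] = rng.poisson(rate)

      # Poisson regression with an interaction term; an amplification effect corresponds
      # to a negative safety_organizing:trust_in_manager coefficient (fewer reported
      # errors when both factors are high together).
      model = smf.glm("errors ~ safety_organizing * trust_in_manager",
                      data=df, family=sm.families.Poisson()).fit()
      print(model.summary())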

  6. Design and Analysis of Cognitive Interviews for Comparative Multinational Testing

    PubMed Central

    Fitzgerald, Rory; Padilla, José-Luis; Willson, Stephanie; Widdop, Sally; Caspar, Rachel; Dimov, Martin; Gray, Michelle; Nunes, Cátia; Prüfer, Peter; Schöbi, Nicole; Schoua-Glusberg, Alisú

    2011-01-01

    This article summarizes the work of the Comparative Cognitive Testing Workgroup, an international coalition of survey methodologists interested in developing an evidence-based methodology for examining the comparability of survey questions within cross-cultural or multinational contexts. To meet this objective, it was necessary to ensure that the cognitive interviewing (CI) method itself did not introduce method bias. Therefore, the workgroup first identified specific characteristics inherent in CI methodology that could undermine the comparability of CI evidence. The group then developed and implemented a protocol addressing those issues. In total, 135 cognitive interviews were conducted by participating countries. Through the process, the group identified various interpretive patterns resulting from sociocultural and language-related differences among countries as well as other patterns of error that would impede comparability of survey data. PMID:29081719

  7. 3-D Survey Applied to Industrial Archaeology by Tls Methodology

    NASA Astrophysics Data System (ADS)

    Monego, M.; Fabris, M.; Menin, A.; Achilli, V.

    2017-05-01

    This work describes the three-dimensional survey of the "Ex Stazione Frigorifera Specializzata": initially used for agricultural storage, over the years it was put to different uses until it fell into complete neglect. The historical relevance and the architectural heritage that this building represents have led to the start of a recent renovation and functional restoration project. In this regard, a global 3-D survey was necessary, based on the application and integration of different geomatic methodologies (mainly terrestrial laser scanning, classical topography, and GNSS). The acquisition of point clouds was performed using different laser scanners, with time-of-flight (TOF) and phase-shift technologies for the distance measurements. The topographic reference network, needed to align the scans in the same system, was measured with a total station. For the complete survey of the building, 122 scans were acquired and 346 targets were measured from 79 vertices of the reference network. Moreover, 3 vertices were measured with GNSS methodology in order to georeference the network. For the detailed survey of the machine room, 14 scans with 23 targets were acquired. The 3-D global model of the building has less than one centimeter of alignment error (for the machine room the alignment error is not greater than 6 mm) and was used to extract products such as longitudinal and transversal sections, plans, architectural perspectives, and virtual scans. A complete spatial knowledge of the building is obtained from the processed data, providing basic information for the restoration project, structural analysis, and industrial and architectural heritage valorization.

  8. The effect of changes to question order on the prevalence of 'sufficient' physical activity in an Australian population survey.

    PubMed

    Hanley, Christine; Duncan, Mitch J; Mummery, W Kerry

    2013-03-01

    Population surveys are frequently used to assess prevalence, correlates and health benefits of physical activity. However, nonsampling errors, such as question order effects, in surveys may lead to imprecision in self-reported physical activity. This study examined the impact of modified question order in a commonly used physical activity questionnaire on the prevalence of sufficient physical activity. Data were obtained from a telephone survey of adults living in Queensland, Australia. A total of 1243 adults participated in the computer-assisted telephone interview (CATI) survey conducted in July 2008, which included the Active Australia Questionnaire (AAQ) presented in traditional or modified order. Binary logistic regression analyses were used to examine relationships between question order and physical activity outcomes. Significant relationships were found between question order and sufficient activity, recreational walking, moderate activity, vigorous activity, and total activity. Respondents who received the AAQ in modified order were more likely to be categorized as sufficiently active (OR = 1.28, 95% CI 1.01-1.60). This study highlights the importance of question order for estimates of self-reported physical activity, showing that changes in question order can lead to an increase in the proportion of participants classified as sufficiently active.

  9. [Safety Culture in Orthopaedic Surgery and Trauma Surgery - Where Are We Today?]

    PubMed

    Münzberg, Matthias; Rüsseler, Miriam; Egerth, Martin; Doepfer, Anna Katharina; Mutschler, Manuel; Stange, Richard; Bouillon, Bertil; Kladny, Bernd; Hoffmann, Reinhard

    2018-06-05

    The development of a new safety culture in orthopaedics and trauma surgery needs to be based on the knowledge of the status quo. The objective of this research was therefore to perform a survey of orthopaedic and trauma surgeons to achieve a subjective assessment of the frequency and causes of "insecurities" or errors in daily practice. Based on current literature, an online questionnaire was created by a team of experts (26 questions total) and was sent via e-mail to all active members of a medical society (DGOU) in April 2015. This was followed by two reminder e-mails. The survey was completed in May 2015. The results were transmitted electronically, anonymously and voluntarily into a database and evaluated by univariate analyses. 799 active members took part in the survey. 65% of the interviewed people stated that they noticed mistakes in their own clinical work environment at least once a week. The main reasons for these mistakes were "time pressure", "lack of communication", "lack of staff" and "stress". Technical mistakes or lack of knowledge were not of primary importance. The survey indicated that errors in orthopaedics and trauma surgery are observed regularly. "Human factors" were regarded as a major cause. In order to develop a new safety culture in orthopaedics and trauma surgery, new approaches must focus on the human factor. Georg Thieme Verlag KG Stuttgart · New York.

  10. A survey of the prevalence of refractive errors among children in lower primary schools in Kampala district.

    PubMed

    Kawuma, Medi; Mayeku, Robert

    2002-08-01

    Refractive errors are a known cause of visual impairment and may cause blindness worldwide. In children, refractive errors may prevent those afflicted from progressing with their studies. In Uganda, as in many developing countries, there is no established vision-screening programme for children on commencement of school, so that those with early onset of such errors will have many years of poor vision. Overall, there is limited information on refractive errors among children in Africa. The objectives were to determine the prevalence of refractive errors among school children attending lower primary in Kampala district, the frequency of the various types of refractive errors, and their relationship to sex and ethnicity. This was a cross-sectional descriptive study conducted in Kampala district, Uganda. A total of 623 children aged between 6 and 9 years had visual acuity testing done at school using the same protocol; of these, 301 (48.3%) were boys and 322 (51.7%) girls. Seventy-three children had a significant refractive error of +/-0.50 or worse in one or both eyes, giving a prevalence of 11.6%, and the commonest single refractive error was astigmatism, which accounted for 52% of all errors. This was followed by hypermetropia, and myopia was the least common. Significant refractive errors occur among primary school children aged 6 to 9 years at a prevalence of approximately 12%. Therefore, there is a need for regular and simple vision testing in primary school children, at least at the commencement of school, so as to detect those who may suffer from these disabilities.

  11. The use of a tablet computer to complete the DASH questionnaire.

    PubMed

    Dy, Christopher J; Schmicker, Thomas; Tran, Quynh; Chadwick, Brian; Daluiski, Aaron

    2012-12-01

    To determine whether electronic self-administration of the Disabilities of the Arm, Shoulder, and Hand (DASH) questionnaire using a tablet computer increased completion rate compared with paper self-administration. We gave the DASH in self-administered paper form to 222 new patients in a single hand surgeon's practice. After a washout period of 5 weeks, we gave the DASH in self-administered tablet computer form to 264 new patients. A maximum of 3 questions could be omitted before the questionnaire was considered unscorable. We reviewed the submitted surveys to determine the number of scorable questionnaires and the number of omitted questions in each survey. We completed univariate analysis and regression modeling to determine the influence of survey administration type on respondent error while controlling for patient age and sex. Of the 486 total surveys, 60 (12%) were not scorable. A significantly higher proportion of the paper surveys (24%) were unscorable compared with electronic surveys (2%), with significantly more questions omitted in each paper survey (2.6 ± 4.4 questions) than in each electronic survey (0.1 ± 0.8 questions). Logistic regression analysis revealed survey administration mode to be significantly associated with DASH scorability while controlling for age and sex, with electronic survey administration being 14 times more likely than paper administration to yield a scorable DASH. In our retrospective series, electronic self-administration of the DASH decreased the number of omitted questions and yielded a higher number of scorable questionnaires. Prospective, randomized evaluation is needed to better delineate the effect of survey administration on respondent error. Administration of the DASH with a tablet computer may be beneficial for both clinical and research endeavors to increase completion rate and to gain other benefits from electronic data capture. Copyright © 2012 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  12. The Effect of Random Error on Diagnostic Accuracy Illustrated with the Anthropometric Diagnosis of Malnutrition

    PubMed Central

    2016-01-01

    Background It is often thought that random measurement error has a minor effect upon the results of an epidemiological survey. Theoretically, errors of measurement should always increase the spread of a distribution. Defining an illness by having a measurement outside an established healthy range will lead to an inflated prevalence of that condition if there are measurement errors. Methods and results A Monte Carlo simulation was conducted of anthropometric assessment of children with malnutrition. Random errors of increasing magnitude were imposed upon the simulated populations and showed an increase in the standard deviation that became exponentially greater with the magnitude of the error. The potential magnitude of the resulting error in reported prevalence of malnutrition was compared with published international data and found to be of sufficient magnitude to make a number of surveys, and the numerous reports and analyses that used these data, unreliable. Conclusions The effect of random error in public health surveys, and in the data upon which diagnostic cut-off points are derived to define “health”, has been underestimated. Even quite modest random errors can more than double the reported prevalence of conditions such as malnutrition. Increasing sample size does not address this problem, and may even result in less accurate estimates. More attention needs to be paid to the selection, calibration and maintenance of instruments, measurer selection, training and supervision, routine estimation of the likely magnitude of errors using standardization tests, use of statistical likelihood of error to exclude data from analysis, and full reporting of these procedures in order to judge the reliability of survey reports. PMID:28030627
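
    The mechanism the paper describes can be reproduced in a few lines. The following Python sketch assumes a normally distributed "true" z-score and additive normal measurement error (purely illustrative values, not the paper's simulation) and shows how the prevalence of a cut-off-defined condition inflates as random error grows.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000
      true_z = rng.normal(loc=-0.5, scale=1.0, size=n)    # hypothetical weight-for-height z-scores
      cutoff = -2.0                                        # e.g. z < -2 classified as malnourished

      for error_sd in (0.0, 0.2, 0.4, 0.8):
          observed = true_z + rng.normal(0.0, error_sd, size=n)   # add random measurement error
          prevalence = np.mean(observed < cutoff)
          print(f"error SD {error_sd:.1f}: observed prevalence {prevalence:.1%}")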

  13. Error disclosure: a new domain for safety culture assessment.

    PubMed

    Etchegaray, Jason M; Gallagher, Thomas H; Bell, Sigall K; Dunlap, Ben; Thomas, Eric J

    2012-07-01

    To (1) develop and test survey items that measure error disclosure culture, (2) examine relationships among error disclosure culture, teamwork culture and safety culture and (3) establish predictive validity for survey items measuring error disclosure culture. All clinical faculty from six health institutions (four medical schools, one cancer centre and one health science centre) in The University of Texas System were invited to anonymously complete an electronic survey containing questions about safety culture and error disclosure. The authors found two factors to measure error disclosure culture: one factor is focused on the general culture of error disclosure and the second factor is focused on trust. Both error disclosure culture factors were unique from safety culture and teamwork culture (correlations were less than r=0.85). Also, error disclosure general culture and error disclosure trust culture predicted intent to disclose a hypothetical error to a patient (r=0.25, p<0.001 and r=0.16, p<0.001, respectively) while teamwork and safety culture did not predict such an intent (r=0.09, p=NS and r=0.12, p=NS). Those who received prior error disclosure training reported significantly higher levels of error disclosure general culture (t=3.7, p<0.05) and error disclosure trust culture (t=2.9, p<0.05). The authors created and validated a new measure of error disclosure culture that predicts intent to disclose an error better than other measures of healthcare culture. This measure fills an existing gap in organisational assessments by assessing transparent communication after medical error, an important aspect of culture.

  14. Catastrophic photometric redshift errors: Weak-lensing survey requirements

    DOE PAGES

    Bernstein, Gary; Huterer, Dragan

    2010-01-11

    We study the sensitivity of weak lensing surveys to the effects of catastrophic redshift errors - cases where the true redshift is misestimated by a significant amount. To compute the biases in cosmological parameters, we adopt an efficient linearized analysis where the redshift errors are directly related to shifts in the weak lensing convergence power spectra. We estimate the number N_spec of unbiased spectroscopic redshifts needed to determine the catastrophic error rate well enough that biases in cosmological parameters are below statistical errors of weak lensing tomography. While the straightforward estimate of N_spec is ~10^6, we find that using only the photometric redshifts with z ≤ 2.5 leads to a drastic reduction in N_spec to ~30,000 while negligibly increasing statistical errors in dark energy parameters. Therefore, the size of spectroscopic survey needed to control catastrophic errors is similar to that previously deemed necessary to constrain the core of the z_s – z_p distribution. We also study the efficacy of the recent proposal to measure redshift errors by cross-correlation between the photo-z and spectroscopic samples. We find that this method requires ~10% a priori knowledge of the bias and stochasticity of the outlier population, and is also easily confounded by lensing magnification bias. The cross-correlation method is therefore unlikely to supplant the need for a complete spectroscopic redshift survey of the source population.

  15. Using snowball sampling method with nurses to understand medication administration errors.

    PubMed

    Sheu, Shuh-Jen; Wei, Ien-Lan; Chen, Ching-Huey; Yu, Shu; Tang, Fu-In

    2009-02-01

    We aimed to encourage nurses to release information about drug administration errors to increase understanding of error-related circumstances and to identify high-alert situations. Drug administration errors represent the majority of medication errors, but errors are underreported. Effective ways are lacking to encourage nurses to actively report errors. Snowball sampling was conducted to recruit participants. A semi-structured questionnaire was used to record types of error, hospital and nurse backgrounds, patient consequences, error discovery mechanisms and reporting rates. Eighty-five nurses participated, reporting 328 administration errors (259 actual, 69 near misses). Most errors occurred in medical surgical wards of teaching hospitals, during day shifts, committed by nurses working fewer than two years. Leading errors were wrong drugs and doses, each accounting for about one-third of total errors. Among 259 actual errors, 83.8% resulted in no adverse effects; among remaining 16.2%, 6.6% had mild consequences and 9.6% had serious consequences (severe reaction, coma, death). Actual errors and near misses were discovered mainly through double-check procedures by colleagues and nurses responsible for errors; reporting rates were 62.5% (162/259) vs. 50.7% (35/69) and only 3.5% (9/259) vs. 0% (0/69) were disclosed to patients and families. High-alert situations included administration of 15% KCl, insulin and Pitocin; using intravenous pumps; and implementation of cardiopulmonary resuscitation (CPR). Snowball sampling proved to be an effective way to encourage nurses to release details concerning medication errors. Using empirical data, we identified high-alert situations. Strategies for reducing drug administration errors by nurses are suggested. Survey results suggest that nurses should double check medication administration in known high-alert situations. Nursing management can use snowball sampling to gather error details from nurses in a non-reprimanding atmosphere, helping to establish standard operational procedures for known high-alert situations.

  16. Monitoring and reporting of preanalytical errors in laboratory medicine: the UK situation.

    PubMed

    Cornes, Michael P; Atherton, Jennifer; Pourmahram, Ghazaleh; Borthwick, Hazel; Kyle, Betty; West, Jamie; Costelloe, Seán J

    2016-03-01

    Most errors in the clinical laboratory occur in the preanalytical phase. This study aimed to comprehensively describe the prevalence and nature of preanalytical quality monitoring practices in UK clinical laboratories. A survey was sent on behalf of the Association for Clinical Biochemistry and Laboratory Medicine Preanalytical Working Group (ACB-WG-PA) to all heads of department of clinical laboratories in the UK. The survey captured data on the analytical platform and Laboratory Information Management System in use and on which preanalytical errors were recorded and how they were classified, and gauged interest in an external quality assurance scheme for preanalytical errors. Of the 157 laboratories asked to participate, responses were received from 104 (66.2%). Laboratory error rates were recorded per number of specimens, rather than per number of requests, in 51% of respondents. Aside from serum indices for haemolysis, icterus and lipaemia, which were measured in 80% of laboratories, the most common errors recorded, in laboratories that record preanalytical errors, were booking-in errors (70.1%) and sample mislabelling (56.9%). Of the laboratories surveyed, 95.9% expressed an interest in guidance on recording preanalytical error and 91.8% expressed interest in an external quality assurance scheme. This survey observes wide variation in the definition, repertoire and collection methods for preanalytical errors in the UK. The data indicate substantial interest in improving preanalytical data collection. The ACB-WG-PA aims to produce guidance and support for laboratories to standardize preanalytical data collection and to help establish and validate an external quality assurance scheme for interlaboratory comparison. © The Author(s) 2015.

  17. Medical errors and uncertainty in primary healthcare: A comparative study of coping strategies among young and experienced GPs

    PubMed Central

    Kuikka, Liisa; Pitkälä, Kaisu

    2014-01-01

    Abstract Objective. To study coping differences between young and experienced GPs in primary care who experience medical errors and uncertainty. Design. Questionnaire-based survey (self-assessment) conducted in 2011. Setting. Finnish primary practice offices in Southern Finland. Subjects. Finnish GPs engaged in primary health care from two different respondent groups: young (working experience ≤ 5years, n = 85) and experienced (working experience > 5 years, n = 80). Main outcome measures. Outcome measures included experiences and attitudes expressed by the included participants towards medical errors and tolerance of uncertainty, their coping strategies, and factors that may influence (positively or negatively) sources of errors. Results. In total, 165/244 GPs responded (response rate: 68%). Young GPs expressed significantly more often fear of committing a medical error (70.2% vs. 48.1%, p = 0.004) and admitted more often than experienced GPs that they had committed a medical error during the past year (83.5% vs. 68.8%, p = 0.026). Young GPs were less prone to apologize to a patient for an error (44.7% vs. 65.0%, p = 0.009) and found, more often than their more experienced colleagues, on-site consultations and electronic databases useful for avoiding mistakes. Conclusion. Experienced GPs seem to better tolerate uncertainty and also seem to fear medical errors less than their young colleagues. Young and more experienced GPs use different coping strategies for dealing with medical errors. Implications. When GPs become more experienced, they seem to get better at coping with medical errors. Means to support these skills should be studied in future research. PMID:24914458

  18. Measurement Error Calibration in Mixed-Mode Sample Surveys

    ERIC Educational Resources Information Center

    Buelens, Bart; van den Brakel, Jan A.

    2015-01-01

    Mixed-mode surveys are known to be susceptible to mode-dependent selection and measurement effects, collectively referred to as mode effects. The use of different data collection modes within the same survey may reduce selectivity of the overall response but is characterized by measurement errors differing across modes. Inference in sample surveys…

  19. [Comparison study on sampling methods of Oncomelania hupensis snail survey in marshland schistosomiasis epidemic areas in China].

    PubMed

    An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang

    2016-06-29

    The aim was to optimize and simplify the survey method for Oncomelania hupensis snails in marshland regions endemic for schistosomiasis, and to increase the precision, efficiency and economy of the snail survey. A 50 m × 50 m quadrat experimental field was selected in the Chayegang marshland near Henghu farm in the Poyang Lake region, and a whole-coverage method was adopted to survey the snails. Simple random sampling, systematic sampling and stratified random sampling were then applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes for the simple random, systematic and stratified random sampling methods were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.221 7, 0.302 4 and 0.047 8, respectively. Spatial stratified sampling with altitude as the stratum variable is an efficient approach of lower cost and higher precision for the snail survey.
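
    The comparison above is easy to reproduce in outline. The Python sketch below draws simple random, systematic and stratified samples from a simulated 50 m × 50 m snail-count grid and reports the absolute and relative error of each estimate against the known grid mean; the counts, strata and sample sizes are illustrative assumptions, not the study's field data.

```python
import random

random.seed(1)

# Hypothetical 50 m x 50 m quadrat divided into 2500 one-metre frames, each
# holding a simulated snail count; the count distribution is purely illustrative.
ROWS = COLS = 50
grid = [[random.choices([0, 1, 2, 5], weights=[70, 20, 8, 2])[0]
         for _ in range(COLS)] for _ in range(ROWS)]
cells = [grid[r][c] for r in range(ROWS) for c in range(COLS)]
true_mean = sum(cells) / len(cells)

def simple_random(n):
    """Simple random sample of n frames without replacement."""
    return random.sample(cells, n)

def systematic(n):
    """Systematic sample: every k-th frame from a random start."""
    k = len(cells) // n
    start = random.randrange(k)
    return cells[start::k][:n]

def stratified(n, n_strata=5):
    """Stratified random sample: row bands serve as strata, standing in for
    the altitude strata used in the study."""
    per_stratum = n // n_strata
    rows_per = ROWS // n_strata
    sample = []
    for s in range(n_strata):
        stratum = [grid[r][c]
                   for r in range(s * rows_per, (s + 1) * rows_per)
                   for c in range(COLS)]
        sample.extend(random.sample(stratum, per_stratum))
    return sample

for name, draw, n in [("simple random", simple_random, 300),
                      ("systematic", systematic, 300),
                      ("stratified", stratified, 225)]:
    est = sum(draw(n)) / n
    abs_err = abs(est - true_mean)
    rel_err = abs_err / true_mean * 100
    print(f"{name:14s} n={n:3d}  absolute error={abs_err:.4f}  relative error={rel_err:.1f}%")
```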

  20. Error management in blood establishments: results of eight years of experience (2003–2010) at the Croatian Institute of Transfusion Medicine

    PubMed Central

    Vuk, Tomislav; Barišić, Marijan; Očić, Tihomir; Mihaljević, Ivanka; Šarlija, Dorotea; Jukić, Irena

    2012-01-01

    Background. Continuous and efficient error management, including procedures from error detection to their resolution and prevention, is an important part of quality management in blood establishments. At the Croatian Institute of Transfusion Medicine (CITM), error management has been systematically performed since 2003. Materials and methods. Data derived from error management at the CITM during an 8-year period (2003–2010) formed the basis of this study. Throughout the study period, errors were reported to the Department of Quality Assurance. In addition to surveys and the necessary corrective activities, errors were analysed and classified according to the Medical Event Reporting System for Transfusion Medicine (MERS-TM). Results. During the study period, a total of 2,068 errors were recorded, including 1,778 (86.0%) in blood bank activities and 290 (14.0%) in blood transfusion services. As many as 1,744 (84.3%) errors were detected before issue of the product or service. Among the 324 errors identified upon release from the CITM, 163 (50.3%) errors were detected by customers and reported as complaints. In only five cases was an error detected after blood product transfusion, and none had harmful consequences for the patients. All errors were, therefore, evaluated as “near miss” and “no harm” events. Fifty-two (2.5%) errors were evaluated as high-risk events. With regard to blood bank activities, the highest proportion of errors occurred in the processes of labelling (27.1%) and blood collection (23.7%). With regard to blood transfusion services, errors related to blood product issuing prevailed (24.5%). Conclusion. This study shows that comprehensive management of errors, including near miss errors, can generate data on the functioning of transfusion services, which is a precondition for implementation of efficient corrective and preventive actions that will ensure further improvement of the quality and safety of transfusion treatment. PMID:22395352

  1. How Radiation Oncologists Would Disclose Errors: Results of a Survey of Radiation Oncologists and Trainees

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Suzanne B., E-mail: Suzannne.evans@yale.edu; Yu, James B.; Chagpar, Anees

    2012-10-01

    Purpose: To analyze error disclosure attitudes of radiation oncologists and to correlate error disclosure beliefs with survey-assessed disclosure behavior. Methods and Materials: With institutional review board exemption, an anonymous online survey was devised. An email invitation was sent to radiation oncologists (American Society for Radiation Oncology [ASTRO] gold medal winners, program directors and chair persons of academic institutions, and former ASTRO lecturers) and residents. A disclosure score was calculated based on the number of full, partial, or no-disclosure responses chosen for the vignette-based questions, and correlation was attempted with attitudes toward error disclosure. Results: The survey received 176 responses: 94.8% of respondents considered themselves more likely to disclose in the setting of a serious medical error; 72.7% of respondents did not feel it mattered who was responsible for the error in deciding to disclose, and 3.9% felt more likely to disclose if someone else was responsible; 38.0% of respondents felt that disclosure increased the likelihood of a lawsuit, and 32.4% felt disclosure decreased the likelihood of lawsuit; 71.6% of respondents felt near misses should not be disclosed; 51.7% thought that minor errors should not be disclosed; 64.7% viewed disclosure as an opportunity for forgiveness from the patient; and 44.6% considered the patient's level of confidence in them to be a factor in disclosure. For a scenario that could be considered a non-harmful error, 78.9% of respondents would not contact the family. Respondents with high disclosure scores were more likely to feel that disclosure was an opportunity for forgiveness (P=.003) and to have never seen major medical errors (P=.004). Conclusions: The surveyed radiation oncologists chose to respond with full disclosure at a high rate, although ideal disclosure practices were not uniformly adhered to beyond the initial decision to disclose the occurrence of the error.

  2. The Impact of Information Culture on Patient Safety Outcomes

    PubMed Central

    Mikkonen, Santtu; Saranto, Kaija; Bates, David W.

    2017-01-01

    Background. An organization’s information culture and information management practices create conditions for processing patient information in hospitals. Information management incidents are failures that could lead to adverse events for the patient if they are not detected. Objectives. To test a theoretical model that links information culture in acute care hospitals to information management incidents and patient safety outcomes. Methods. Reason’s model for the stages of development of organizational accidents was applied. Study data were collected from a cross-sectional survey of 909 RNs who work in medical or surgical units at 32 acute care hospitals in Finland. Structural equation modeling was used to assess how well the hypothesized model fit the study data. Results. Fit indices indicated a good fit for the model. In total, 18 of the 32 paths tested were statistically significant. Documentation errors had the strongest total effect on patient safety outcomes. Organizational guidance positively affected information availability and utilization of electronic patient records, whereas the latter had the strongest total effect on the reduction of information delays. Conclusions. Patient safety outcomes are associated with information management incidents and information culture. Further, the dimensions of the information culture create work conditions that generate errors in hospitals. PMID:28272647

  3. The Impact of Information Culture on Patient Safety Outcomes. Development of a Structural Equation Model.

    PubMed

    Jylhä, Virpi; Mikkonen, Santtu; Saranto, Kaija; Bates, David W

    2017-03-08

    An organization's information culture and information management practices create conditions for processing patient information in hospitals. Information management incidents are failures that could lead to adverse events for the patient if they are not detected. To test a theoretical model that links information culture in acute care hospitals to information management incidents and patient safety outcomes. Reason's model for the stages of development of organizational accidents was applied. Study data were collected from a cross-sectional survey of 909 RNs who work in medical or surgical units at 32 acute care hospitals in Finland. Structural equation modeling was used to assess how well the hypothesized model fit the study data. Fit indices indicated a good fit for the model. In total, 18 of the 32 paths tested were statistically significant. Documentation errors had the strongest total effect on patient safety outcomes. Organizational guidance positively affected information availability and utilization of electronic patient records, whereas the latter had the strongest total effect on the reduction of information delays. Patient safety outcomes are associated with information management incidents and information culture. Further, the dimensions of the information culture create work conditions that generate errors in hospitals.

  4. Pediatric residents' decision-making around disclosing and reporting adverse events: the importance of social context.

    PubMed

    Coffey, Maitreya; Thomson, Kelly; Tallett, Susan; Matlow, Anne

    2010-10-01

    Although experts advise disclosing medical errors to patients, individual physicians' different levels of knowledge and comfort suggest a gap between recommendations and practice. This study explored pediatric residents' knowledge and attitudes about disclosure. In 2006, the authors of this single-center, mixed-methods study surveyed 64 pediatric residents at the University of Toronto and then held three focus groups with a total of 24 of those residents. Thirty-seven (58%) residents completed questionnaires. Most agreed that medical errors are one of the most serious problems in health care, that errors should be disclosed, and that disclosure would be difficult. When shown a scenario involving a medical error, over 90% correctly identified the error, but only 40% would definitely disclose it. Most would apologize, but far fewer would acknowledge harm if it occurred or use the word "mistake." Most had witnessed or performed a disclosure, but only 40% reported receiving teaching on disclosure. Most reported experiencing negative effects of errors, including anxiety and reduced confidence. Data from the focus groups emphasized the extent to which residents consider contextual information when making decisions around disclosure. Themes included their or their team's degree of responsibility for the error versus others, quality of team relationships, training level, existence of social boundaries, and their position within a hierarchy. These findings add to the understanding of facilitators and inhibitors of error disclosure and reporting. The influence of social context warrants further study and should be considered in medical curriculum design and hospital guideline implementation.

  5. Seepage investigation and selected hydrologic data for the Escalante River drainage basin, Garfield and Kane Counties, Utah, 1909-2002

    USGS Publications Warehouse

    Wilberg, Dale E.; Stolp, Bernard J.

    2005-01-01

    This report contains the results of an October 2001 seepage investigation conducted along a reach of the Escalante River in Utah extending from the U.S. Geological Survey streamflow-gaging station near Escalante to the mouth of Stevens Canyon. Discharge was measured at 16 individual sites along 15 consecutive reaches. Total reach length was about 86 miles. A reconnaissance-level sampling of water for tritium and chlorofluorocarbons was also done. In addition, hydrologic and water-quality data previously collected and published by the U.S. Geological Survey for the 2,020-square-mile Escalante River drainage basin were compiled and are presented in 12 tables. These data were collected from 64 surface-water sites and 28 springs from 1909 to 2002. None of the 15 consecutive reaches along the Escalante River had a measured loss or gain that exceeded the measurement error. All discharge measurements taken during the seepage investigation were assigned a qualitative rating of accuracy that ranged from 5 percent to greater than 8 percent of the actual flow. Summing the potential error for each measurement and dividing by the maximum of either the upstream discharge and any tributary inflow, or the downstream discharge, determined the normalized error for a reach. This was compared to the computed loss or gain that also was normalized to the maximum discharge. A loss or gain for a specified reach is considered significant when the loss or gain (normalized percentage difference) is greater than the measurement error (normalized percentage error). The percentage difference and percentage error were normalized to allow comparison between reaches with different amounts of discharge. The plate that accompanies the report is 36" by 40" and can be printed in 16 tiles, 8.5 by 11 inches. An index for the tiles is located on the lower left-hand side of the plate. Using Adobe Acrobat, the plate can be viewed independent of the report; all Acrobat functions are available.
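
    The reach-significance test described above reduces to a short calculation. The following Python sketch applies it to a single hypothetical reach (all discharge values and accuracy ratings are invented for illustration): a gain or loss is flagged as significant only when the normalized percentage difference exceeds the normalized percentage error.

```python
def reach_significance(upstream_cfs, tributary_cfs, downstream_cfs,
                       upstream_err_pct, tributary_err_pct, downstream_err_pct):
    """Illustrative version of the reach test: a loss or gain is significant
    only if the normalized percentage difference exceeds the normalized
    percentage error. All inputs are hypothetical values."""
    inflow = upstream_cfs + tributary_cfs
    max_q = max(inflow, downstream_cfs)

    # Sum the potential error of each measurement and normalize by the larger
    # of total inflow or downstream discharge.
    total_err = (upstream_cfs * upstream_err_pct / 100
                 + tributary_cfs * tributary_err_pct / 100
                 + downstream_cfs * downstream_err_pct / 100)
    norm_err_pct = total_err / max_q * 100

    # Normalize the computed loss (negative) or gain (positive) the same way.
    diff = downstream_cfs - inflow
    norm_diff_pct = diff / max_q * 100

    significant = abs(norm_diff_pct) > norm_err_pct
    return norm_diff_pct, norm_err_pct, significant

# Hypothetical reach: 12.0 cfs upstream, 0.5 cfs tributary inflow, 11.8 cfs
# downstream, with measurements rated at 5-8 percent accuracy.
print(reach_significance(12.0, 0.5, 11.8, 5, 8, 5))
```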

  6. Narratives of Response Error from Cognitive Interviews of Survey Questions about Normative Behavior

    ERIC Educational Resources Information Center

    Brenner, Philip S.

    2017-01-01

    That rates of normative behaviors produced by sample surveys are higher than actual behavior warrants is well evidenced in the research literature. Less well understood is the source of this error. Twenty-five cognitive interviews were conducted to probe responses to a set of common, conventional survey questions about one such normative behavior:…

  7. Rates and patterns of surface deformation from laser scanning following the South Napa earthquake, California

    USGS Publications Warehouse

    DeLong, Stephen B.; Lienkaemper, James J.; Pickering, Alexandra J; Avdievitch, Nikita N.

    2015-01-01

    The A.D. 2014 M6.0 South Napa earthquake, despite its moderate magnitude, caused significant damage to the Napa Valley in northern California (USA). Surface rupture occurred along several mapped and unmapped faults. Field observations following the earthquake indicated that the magnitude of postseismic surface slip was likely to approach or exceed the maximum coseismic surface slip and as such presented ongoing hazard to infrastructure. Using a laser scanner, we monitored postseismic deformation in three dimensions through time along 0.5 km of the main surface rupture. A key component of this study is the demonstration of proper alignment of repeat surveys using point cloud–based methods that minimize error imposed by both local survey errors and global navigation satellite system georeferencing errors. Using solid modeling of natural and cultural features, we quantify dextral postseismic displacement at several hundred points near the main fault trace. We also quantify total dextral displacement of initially straight cultural features. Total dextral displacement from both coseismic displacement and the first 2.5 d of postseismic displacement ranges from 0.22 to 0.29 m. This range increased to 0.33–0.42 m at 59 d post-earthquake. Furthermore, we estimate up to 0.15 m of vertical deformation during the first 2.5 d post-earthquake, which then increased by ∼0.02 m at 59 d post-earthquake. This vertical deformation is not expressed as a distinct step or scarp at the fault trace but rather as a broad up-to-the-west zone of increasing elevation change spanning the fault trace over several tens of meters, challenging common notions about fault scarp development in strike-slip systems. Integrating these analyses provides three-dimensional mapping of surface deformation and identifies spatial variability in slip along the main fault trace that we attribute to distributed slip via subtle block rotation. These results indicate the benefits of laser scanner surveys along active faults and demonstrate that fine-scale variability in fault slip has been missed by traditional earthquake response methods.

  8. How Big of a Problem is Analytic Error in Secondary Analyses of Survey Data?

    PubMed

    West, Brady T; Sakshaug, Joseph W; Aurelien, Guy Alain S

    2016-01-01

    Secondary analyses of survey data collected from large probability samples of persons or establishments further scientific progress in many fields. The complex design features of these samples improve data collection efficiency, but also require analysts to account for these features when conducting analysis. Unfortunately, many secondary analysts from fields outside of statistics, biostatistics, and survey methodology do not have adequate training in this area, and as a result may apply incorrect statistical methods when analyzing these survey data sets. This in turn could lead to the publication of incorrect inferences based on the survey data that effectively negate the resources dedicated to these surveys. In this article, we build on the results of a preliminary meta-analysis of 100 peer-reviewed journal articles presenting analyses of data from a variety of national health surveys, which suggested that analytic errors may be extremely prevalent in these types of investigations. We first perform a meta-analysis of a stratified random sample of 145 additional research products analyzing survey data from the Scientists and Engineers Statistical Data System (SESTAT), which describes features of the U.S. Science and Engineering workforce, and examine trends in the prevalence of analytic error across the decades used to stratify the sample. We once again find that analytic errors appear to be quite prevalent in these studies. Next, we present several example analyses of real SESTAT data, and demonstrate that a failure to perform these analyses correctly can result in substantially biased estimates with standard errors that do not adequately reflect complex sample design features. Collectively, the results of this investigation suggest that reviewers of this type of research need to pay much closer attention to the analytic methods employed by researchers attempting to publish or present secondary analyses of survey data.
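
    As a minimal illustration of the analytic error discussed above, the sketch below (with made-up data) contrasts a naive simple-random-sample analysis with a design-based analysis that uses the survey weights and strata; the weighted mean and its stratified, Taylor-linearized standard error differ noticeably from the naive values.

```python
import math

# Hypothetical stratified sample: (stratum, sampling weight, outcome value).
data = [
    ("A", 10.0, 1.0), ("A", 10.0, 0.0), ("A", 10.0, 1.0), ("A", 10.0, 1.0),
    ("B", 2.0, 0.0), ("B", 2.0, 0.0), ("B", 2.0, 1.0), ("B", 2.0, 0.0),
]

# Naive analysis: treat the data as an unweighted simple random sample.
y = [v for _, _, v in data]
naive_mean = sum(y) / len(y)
naive_se = math.sqrt(sum((v - naive_mean) ** 2 for v in y)
                     / (len(y) - 1) / len(y))

# Design-based analysis: weighted mean, with a stratified, with-replacement
# variance estimator applied to the Taylor-linearized residuals of the mean.
w_total = sum(w for _, w, _ in data)
wt_mean = sum(w * v for _, w, v in data) / w_total

var = 0.0
for stratum in {s for s, _, _ in data}:
    z = [w * (v - wt_mean) for s, w, v in data if s == stratum]  # weighted residuals
    n_h = len(z)
    zbar = sum(z) / n_h
    var += n_h / (n_h - 1) * sum((zi - zbar) ** 2 for zi in z)
design_se = math.sqrt(var) / w_total

print(f"naive:  mean={naive_mean:.3f}  SE={naive_se:.3f}")
print(f"design: mean={wt_mean:.3f}  SE={design_se:.3f}")
```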

  9. How Big of a Problem is Analytic Error in Secondary Analyses of Survey Data?

    PubMed Central

    West, Brady T.; Sakshaug, Joseph W.; Aurelien, Guy Alain S.

    2016-01-01

    Secondary analyses of survey data collected from large probability samples of persons or establishments further scientific progress in many fields. The complex design features of these samples improve data collection efficiency, but also require analysts to account for these features when conducting analysis. Unfortunately, many secondary analysts from fields outside of statistics, biostatistics, and survey methodology do not have adequate training in this area, and as a result may apply incorrect statistical methods when analyzing these survey data sets. This in turn could lead to the publication of incorrect inferences based on the survey data that effectively negate the resources dedicated to these surveys. In this article, we build on the results of a preliminary meta-analysis of 100 peer-reviewed journal articles presenting analyses of data from a variety of national health surveys, which suggested that analytic errors may be extremely prevalent in these types of investigations. We first perform a meta-analysis of a stratified random sample of 145 additional research products analyzing survey data from the Scientists and Engineers Statistical Data System (SESTAT), which describes features of the U.S. Science and Engineering workforce, and examine trends in the prevalence of analytic error across the decades used to stratify the sample. We once again find that analytic errors appear to be quite prevalent in these studies. Next, we present several example analyses of real SESTAT data, and demonstrate that a failure to perform these analyses correctly can result in substantially biased estimates with standard errors that do not adequately reflect complex sample design features. Collectively, the results of this investigation suggest that reviewers of this type of research need to pay much closer attention to the analytic methods employed by researchers attempting to publish or present secondary analyses of survey data. PMID:27355817

  10. IRIS-S - Extending geodetic very long baseline interferometry observations to the Southern Hemisphere

    NASA Astrophysics Data System (ADS)

    Carter, W. E.; Robertson, D. S.; Nothnagel, A.; Nicolson, G. D.; Schuh, H.

    1988-12-01

    High-accuracy geodetic very long baseline interferometry measurements between the African, Eurasian, and North American plates have been analyzed to determine the terrestrial coordinates of the Hartebeesthoek observatory to better than 10 cm, to determine the celestial coordinates of eight Southern Hemisphere radio sources with milliarcsecond (mas) accuracy, and to derive quasi-independent polar motion, UT1, and nutation time series. Comparison of the earth orientation time series with ongoing International Radio Interferometric Surveying project values shows agreement at about the 1 mas level in polar motion and nutation and 0.1 ms of time in UT1. Given the independence of the observing sessions and the unlikeliness of common systematic error sources, this level of agreement serves to bound the total errors in both measurement series.

  11. Questionnaire surveys of dentists on radiology

    PubMed Central

    Shelley, AM; Brunton, P; Horner, K

    2012-01-01

    Objectives Survey by questionnaire is a widely used research method in dental radiology. A major concern in reviews of questionnaires is non-response. The objectives of this study were to review questionnaire studies in dental radiology with regard to potential survey errors and to develop recommendations to assist future researchers. Methods A literature search with the software search package PubMed was used to obtain internet-based access to Medline through the website www.ncbi.nlm.nih.gov/pubmed. A search of the English language peer-reviewed literature was conducted of all published studies, with no restriction on date. The search strategy found articles with dates from 1983 to 2010. The medical subject heading terms used were “questionnaire”, “dental radiology” and “dental radiography”. The reference sections of articles retrieved by this method were hand-searched in order to identify further relevant papers. Reviews, commentaries and relevant studies from the wider literature were also included. Results 53 questionnaire studies were identified in the dental literature that concerned dental radiography and included a report of response rate. These were all published between 1983 and 2010. In total, 87 articles are referred to in this review, including the 53 dental radiology studies. Other cited articles include reviews, commentaries and examples of studies outside dental radiology where they are germane to the arguments presented. Conclusions Non-response is only one of four broad areas of error to which questionnaire surveys are subject. This review considers coverage, sampling and measurement, as well as non-response. Recommendations are made to assist future research that uses questionnaire surveys. PMID:22517994

  12. RCSLenS: The Red Cluster Sequence Lensing Survey

    NASA Astrophysics Data System (ADS)

    Hildebrandt, H.; Choi, A.; Heymans, C.; Blake, C.; Erben, T.; Miller, L.; Nakajima, R.; van Waerbeke, L.; Viola, M.; Buddendiek, A.; Harnois-Déraps, J.; Hojjati, A.; Joachimi, B.; Joudaki, S.; Kitching, T. D.; Wolf, C.; Gwyn, S.; Johnson, N.; Kuijken, K.; Sheikhbahaee, Z.; Tudorica, A.; Yee, H. K. C.

    2016-11-01

    We present the Red Cluster Sequence Lensing Survey (RCSLenS), an application of the methods developed for the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) to the ∼785 deg², multi-band imaging data of the Red-sequence Cluster Survey 2. This project represents the largest public, sub-arcsecond seeing, multi-band survey to date that is suited for weak gravitational lensing measurements. With a careful assessment of systematic errors in shape measurements and photometric redshifts, we extend the use of this data set to allow cross-correlation analyses between weak lensing observables and other data sets. We describe the imaging data, the data reduction, masking, multi-colour photometry, photometric redshifts, shape measurements, tests for systematic errors, and a blinding scheme to allow for more objective measurements. In total, we analyse 761 pointings with r-band coverage, which constitutes our lensing sample. Residual large-scale B-mode systematics prevent the use of this shear catalogue for cosmic shear science. The effective number density of lensing sources over an unmasked area of 571.7 deg² and down to a magnitude limit of r ∼ 24.5 is 8.1 galaxies per arcmin² (weighted: 5.5 arcmin⁻²) distributed over 14 patches on the sky. Photometric redshifts based on four-band griz data are available for 513 pointings covering an unmasked area of 383.5 deg². We present weak lensing mass reconstructions of some example clusters as well as the full survey representing the largest areas that have been mapped in this way. All our data products are publicly available through the Canadian Astronomy Data Centre at http://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/community/rcslens/query.html in a format very similar to the CFHTLenS data release.

  13. Quantification of LiDAR measurement uncertainty through propagation of errors due to sensor sub-systems and terrain morphology

    NASA Astrophysics Data System (ADS)

    Goulden, T.; Hopkinson, C.

    2013-12-01

    The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessments of management decisions based on LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information for the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor-advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories: 1) sensor sub-system errors, 2) terrain influences, and 3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude or laser beam incidence angle increase. An experimental survey over a flat and paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors from 5 cm at a nadir scan orientation to 8 cm at scan edges, for an aircraft altitude of 1200 m and a half scan angle of 15°. In a survey with the same sensor at a highly sloped glacial basin site absent of vegetation, modeled vertical errors reached over 2 m. Validation of the error models within the glacial environment, over three separate flight lines, respectively showed 100%, 85%, and 75% of elevation residuals falling below error predictions. Future work in LiDAR sensor measurement uncertainty must focus on the development of vegetative error models to create more robust error prediction algorithms. To achieve this objective, comprehensive empirical exploratory analysis is recommended to relate vegetative parameters to observed errors.
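
    A toy version of the sensor sub-system error propagation can be written as a short Monte Carlo simulation. The sketch below simplifies the geometry to range plus scan angle in two dimensions and uses assumed (not manufacturer-specified) error magnitudes; it reproduces only the qualitative behaviour reported above, with modelled vertical error growing as the scan angle increases.

```python
import math
import random

random.seed(0)

def vertical_error_sd(altitude_m, scan_angle_deg,
                      range_sd_m=0.02, angle_sd_deg=0.005, gps_z_sd_m=0.03,
                      n=20000):
    """Monte Carlo estimate of the vertical error standard deviation for one
    pulse, given assumed ranger, scanner/IMU and GPS error magnitudes."""
    theta = math.radians(scan_angle_deg)
    rng = altitude_m / math.cos(theta)          # nominal slant range
    z_nom = rng * math.cos(theta)               # nominal height below aircraft
    errs = []
    for _ in range(n):
        r = rng + random.gauss(0.0, range_sd_m)                     # ranger noise
        a = theta + math.radians(random.gauss(0.0, angle_sd_deg))   # angle noise
        dz_gps = random.gauss(0.0, gps_z_sd_m)                      # GPS height noise
        errs.append(r * math.cos(a) + dz_gps - z_nom)
    mean = sum(errs) / n
    return math.sqrt(sum((e - mean) ** 2 for e in errs) / (n - 1))

for angle in (0, 7.5, 15):
    print(f"scan angle {angle:4.1f} deg -> modelled vertical sigma "
          f"{vertical_error_sd(1200, angle):.3f} m")
```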

  14. Trachoma, cataracts and uncorrected refractive error are still important contributors to visual morbidity in two remote indigenous communities of the Northern Territory, Australia.

    PubMed

    Wright, Heathcote R; Keeffe, Jill E; Taylor, Hugh R

    2009-08-01

    To assess the contribution of trachoma, cataract and refractive error to visual morbidity among Indigenous adults living in two remote communities of the Northern Territory. Cross-sectional survey of all adults aged 40 and over within a desert and coastal community. Visual acuity, clinical signs of trachoma using the simplified WHO grading system and assessment of cataract through a non-dilated pupil. Two hundred and sixty individuals over the age of 40 years participated in the study. The prevalence of visual impairment (<6/12) was 17%. The prevalence of blindness (<3/60) was 2%, 40-fold higher than seen in an urban Australian population when adjusted for age. In total, 78% of adults who grew up in a desert community had trachomatous scarring compared with 26% of those who grew up in a coastal community (P ≤ 0.001). In the desert community the prevalence of trachomatous trichiasis was 10% and corneal opacity was 6%. No trachomatous trichiasis or corneal opacity was seen in the coastal community. Trachoma, cataract and uncorrected refractive error remain significant contributors to visual morbidity in at least two remote indigenous communities. A wider survey is required to determine if these findings represent a more widespread pattern and existing eye care services may need to be re-assessed to determine the cause of this unmet need.

  15. Quantifying uncertainty in carbon and nutrient pools of coarse woody debris

    NASA Astrophysics Data System (ADS)

    See, C. R.; Campbell, J. L.; Fraver, S.; Domke, G. M.; Harmon, M. E.; Knoepp, J. D.; Woodall, C. W.

    2016-12-01

    Woody detritus constitutes a major pool of both carbon and nutrients in forested ecosystems. Estimating coarse wood stocks relies on many assumptions, even when full surveys are conducted. Researchers rarely report error in coarse wood pool estimates, despite the importance to ecosystem budgets and modelling efforts. To date, no study has attempted a comprehensive assessment of error rates and uncertainty inherent in the estimation of this pool. Here, we use Monte Carlo analysis to propagate the error associated with the major sources of uncertainty present in the calculation of coarse wood carbon and nutrient (i.e., N, P, K, Ca, Mg, Na) pools. We also evaluate individual sources of error to identify the importance of each source of uncertainty in our estimates. We quantify sampling error by comparing the three most common field methods used to survey coarse wood (two transect methods and a whole-plot survey). We quantify the measurement error associated with length and diameter measurement, and technician error in species identification and decay class using plots surveyed by multiple technicians. We use previously published values of model error for the four most common methods of volume estimation: Smalian's, conical frustum, conic paraboloid, and average-of-ends. We also use previously published values for error in the collapse ratio (cross-sectional height/width) of decayed logs that serves as a surrogate for the volume remaining. We consider sampling error in chemical concentration and density for all decay classes, using distributions from both published and unpublished studies. Analytical uncertainty is calculated using standard reference plant material from the National Institute of Standards. Our results suggest that technician error in decay classification can have a large effect on uncertainty, since many of the error distributions included in the calculation (e.g. density, chemical concentration, volume-model selection, collapse ratio) are decay-class specific.
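
    The propagation approach described above can be illustrated with a minimal Monte Carlo sketch for a single log: diameter, length, density and carbon-fraction errors are drawn at random and pushed through Smalian's volume formula to give a carbon estimate with an uncertainty. All nominal values and error magnitudes below are assumptions for illustration, not values from the study.

```python
import math
import random

random.seed(42)

def log_carbon_mc(d1_cm, d2_cm, length_m, density_g_cm3, c_frac,
                  d_sd=0.5, len_sd=0.02, dens_rel_sd=0.15, c_sd=0.01,
                  n=50000):
    """Monte Carlo propagation of measurement and sampling error to the carbon
    content (kg C) of one log; returns (mean, standard deviation)."""
    draws = []
    for _ in range(n):
        d1 = random.gauss(d1_cm, d_sd) / 100.0      # end diameters with error, m
        d2 = random.gauss(d2_cm, d_sd) / 100.0
        length = random.gauss(length_m, len_sd)
        # Smalian's formula: mean of the two end cross-sectional areas times length.
        vol = (math.pi / 8.0) * (d1 ** 2 + d2 ** 2) * length               # m^3
        dens = random.gauss(density_g_cm3, density_g_cm3 * dens_rel_sd) * 1000  # kg m^-3
        cf = random.gauss(c_frac, c_sd)                                    # carbon fraction
        draws.append(vol * dens * cf)                                      # kg C
    mean = sum(draws) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in draws) / (n - 1))
    return mean, sd

mean_c, sd_c = log_carbon_mc(d1_cm=30, d2_cm=24, length_m=6,
                             density_g_cm3=0.35, c_frac=0.48)
print(f"carbon: {mean_c:.1f} kg +/- {sd_c:.1f} kg (1 sigma)")
```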

  16. A Search for TRUTH in Student Responses to Selected Survey Items. AIR 1993 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Takalkar, Pradnya; And Others

    This study compared 4,594 student responses from three different surveys of incoming students at the University of South Florida (USF) with data from Florida's State University System (SUS) admissions files to determine what proportion of error occurs in the survey responses. Specifically, the study investigated the amount of measurement error in…

  17. Laboratory errors and patient safety.

    PubMed

    Miligy, Dawlat A

    2015-01-01

    Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Therefore, programs designed to identify and reduce laboratory errors, as well as specific strategies, are required to minimize these errors and improve patient safety. The purpose of this paper is to identify some of the laboratory errors commonly encountered in laboratory work, their hazards for patient health care, and some measures and recommendations to minimize or eliminate these errors. The laboratory errors encountered during May 2008 were recorded and statistically evaluated (using simple percent distribution) in the laboratory department of one of the private hospitals in Egypt. Errors were classified according to the laboratory phases and according to their implication for patient health. Data obtained from 1,600 testing procedures revealed a total of 14 errors (0.87 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of the testing cycle (representing 35.7 and 50 percent, respectively, of total errors), while errors in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors had no significant implication for patient health, being detected before test reports were submitted to the patients. On the other hand, errors in reports that had already been submitted to patients and reached the physician represented 14.3 percent of total errors, and only 7.1 percent of the errors could have had an impact on patient diagnosis. The findings of this study are consistent with those published from the USA and other countries, which shows that laboratory problems are universal and need general standardization and benchmarking measures. These are the first such data published from Arab countries evaluating encountered laboratory errors, and they underscore the need for universal standardization and benchmarking measures to control laboratory work.

  18. Quality requirements for veterinary hematology analyzers in small animals-a survey about veterinary experts' requirements and objective evaluation of analyzer performance based on a meta-analysis of method validation studies: bench top hematology analyzer.

    PubMed

    Cook, Andrea M; Moritz, Andreas; Freeman, Kathleen P; Bauer, Natali

    2016-09-01

    Scarce information exists about quality requirements and objective evaluation of the performance of large veterinary bench top hematology analyzers. The study aimed to compare the observed total error (TEobs) derived from a meta-analysis of published method validation data with the total allowable error (TEa) for veterinary hematology variables in small animals based on experts' opinions. Ideally, TEobs should be < TEa. An online survey was sent to veterinary experts in clinical pathology and small animal internal medicine, asking them to provide the maximal allowable deviation from a given result for each variable. Percent TEa = (allowable median deviation/clinical threshold) * 100%. Second, TEobs for 3 laser-based bench top hematology analyzers (ADVIA 2120, Sysmex XT2000iV, and CellDyn 3500) was calculated based on method validation studies published between 2005 and 2013 (n = 4). Percent TEobs = 2 * CV (%) + bias (%). The CV was derived from published studies except for the ADVIA 2120 (internal data), and bias was estimated from the regression equation. A total of 41 veterinary experts (19 diplomates, 8 residents, 10 postgraduate students, 4 anonymous specialists) responded. The proposed range of TEa was wide, but generally ≤ 20%. TEobs was < TEa for all variables and analyzers except for canine and feline HGB (high bias, low CV) and platelet counts (high bias, high CV). Overall, veterinary bench top analyzers fulfilled experts' requirements except for HGB, due to method-related bias, and platelet counts, due to known preanalytic/analytic issues. © 2016 American Society for Veterinary Clinical Pathology.
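
    The two quoted formulas are straightforward to apply. The snippet below encodes them and evaluates a hypothetical haemoglobin example (the numbers are invented, not taken from the cited validation studies); a high bias with a low CV produces TEobs > TEa, mirroring the HGB result described above.

```python
def total_allowable_error(allowable_median_deviation, clinical_threshold):
    """TEa (%) = (allowable median deviation / clinical threshold) * 100."""
    return allowable_median_deviation / clinical_threshold * 100.0

def observed_total_error(cv_pct, bias_pct):
    """TEobs (%) = 2 * CV (%) + bias (%)."""
    return 2.0 * cv_pct + abs(bias_pct)

# Hypothetical haemoglobin example: experts accept a 10 g/L deviation at a
# decision threshold of 100 g/L; a validation study reports CV = 1.5% and
# bias = 8%.
tea = total_allowable_error(10, 100)        # 10%
teobs = observed_total_error(1.5, 8.0)      # 11%
print(f"TEa = {tea:.1f}%  TEobs = {teobs:.1f}%  acceptable: {teobs < tea}")
```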

  19. Combining structure-from-motion derived point clouds from satellites and unmanned aircraft systems images with ground-truth data to create high-resolution digital elevation models

    NASA Astrophysics Data System (ADS)

    Palaseanu, M.; Thatcher, C.; Danielson, J.; Gesch, D. B.; Poppenga, S.; Kottermair, M.; Jalandoni, A.; Carlson, E.

    2016-12-01

    Coastal topographic and bathymetric (topobathymetric) data with high spatial resolution (1-meter or better) and high vertical accuracy are needed to assess the vulnerability of Pacific Islands to climate change impacts, including sea level rise. According to the Intergovernmental Panel on Climate Change reports, low-lying atolls in the Pacific Ocean are extremely vulnerable to king tide events, storm surge, tsunamis, and sea-level rise. The lack of coastal topobathymetric data has been identified as a critical data gap for climate vulnerability and adaptation efforts in the Republic of the Marshall Islands (RMI). For Majuro Atoll, home to the largest city of RMI, the only elevation dataset currently available is the Shuttle Radar Topography Mission data which has a 30-meter spatial resolution and 16-meter vertical accuracy (expressed as linear error at 90%). To generate high-resolution digital elevation models (DEMs) in the RMI, elevation information and photographic imagery have been collected from field surveys using GNSS/total station and unmanned aerial vehicles for Structure-from-Motion (SfM) point cloud generation. Digital Globe WorldView II imagery was processed to create SfM point clouds to fill in gaps in the point cloud derived from the higher resolution UAS photos. The combined point cloud data is filtered and classified to bare-earth and georeferenced using the GNSS data acquired on roads and along survey transects perpendicular to the coast. A total station was used to collect elevation data under tree canopies where heavy vegetation cover blocked the view of GNSS satellites. A subset of the GPS / total station data was set aside for error assessment of the resulting DEM.

  20. The Trojan Lifetime Champions Health Survey: development, validity, and reliability.

    PubMed

    Sorenson, Shawn C; Romano, Russell; Scholefield, Robin M; Schroeder, E Todd; Azen, Stanley P; Salem, George J

    2015-04-01

    Self-report questionnaires are an important method of evaluating lifespan health, exercise, and health-related quality of life (HRQL) outcomes among elite, competitive athletes. Few instruments, however, have undergone formal characterization of their psychometric properties within this population. To evaluate the validity and reliability of a novel health and exercise questionnaire, the Trojan Lifetime Champions (TLC) Health Survey. Descriptive laboratory study. A large National Collegiate Athletic Association Division I university. A total of 63 university alumni (age range, 24 to 84 years), including former varsity collegiate athletes and a control group of nonathletes. Participants completed the TLC Health Survey twice at a mean interval of 23 days with randomization to the paper or electronic version of the instrument. Content validity, feasibility of administration, test-retest reliability, parallel-form reliability between paper and electronic forms, and estimates of systematic and typical error versus differences of clinical interest were assessed across a broad range of health, exercise, and HRQL measures. Correlation coefficients, including intraclass correlation coefficients (ICCs) for continuous variables and κ agreement statistics for ordinal variables, for test-retest reliability averaged 0.86, 0.90, 0.80, and 0.74 for HRQL, lifetime health, recent health, and exercise variables, respectively. Correlation coefficients, again ICCs and κ, for parallel-form reliability (ie, equivalence) between paper and electronic versions averaged 0.90, 0.85, 0.85, and 0.81 for HRQL, lifetime health, recent health, and exercise variables, respectively. Typical measurement error was less than the a priori thresholds of clinical interest, and we found minimal evidence of systematic test-retest error. We found strong evidence of content validity, convergent construct validity with the Short-Form 12 Version 2 HRQL instrument, and feasibility of administration in an elite, competitive athletic population. These data suggest that the TLC Health Survey is a valid and reliable instrument for assessing lifetime and recent health, exercise, and HRQL, among elite competitive athletes. Generalizability of the instrument may be enhanced by additional, larger-scale studies in diverse populations.

  1. On the recovery of the local group motion from galaxy redshift surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nusser, Adi; Davis, Marc; Branchini, Enzo, E-mail: adi@physics.technion.ac.il, E-mail: mdavis@berkeley.edu, E-mail: branchin@fis.uniroma3.it

    2014-06-20

    There is an ∼150 km s⁻¹ discrepancy between the measured motion of the Local Group (LG) of galaxies with respect to the cosmic microwave background and the linear theory prediction based on the gravitational force field of the large-scale structure in full-sky redshift surveys. We perform a variety of tests which show that the LG motion cannot be recovered to better than 150-200 km s⁻¹ in amplitude and within ≈10° in direction. The tests rely on catalogs of mock galaxies identified in the Millennium simulation using semi-analytic galaxy formation models. We compare these results to the Ks = 11.75 Two-Mass Galaxy Redshift Survey, which provides the deepest and most complete all-sky spatial distribution of galaxies with spectroscopic redshifts available thus far. In our analysis, we use a new concise relation for deriving the LG motion and bulk flow from the true distribution of galaxies in redshift space. Our results show that the main source of uncertainty is the small effective depth of surveys like the Two-Mass Redshift Survey (2MRS), which prevents a proper sampling of the large-scale structure beyond ∼100 h⁻¹ Mpc. Deeper redshift surveys are needed to reach the 'convergence scale' of ≈250 h⁻¹ Mpc in a ΛCDM universe. Deeper surveys would also mitigate the impact of the 'Kaiser rocket' which, in a survey like 2MRS, remains a significant source of uncertainty. Thanks to the quiet and moderate density environment of the LG, purely dynamical uncertainties of the linear predictions are subdominant at the level of ∼90 km s⁻¹. Finally, we show that deviations from linear galaxy biasing and shot noise errors provide a minor contribution to the total error budget.

  2. A national physician survey of diagnostic error in paediatrics.

    PubMed

    Perrem, Lucy M; Fanshawe, Thomas R; Sharif, Farhana; Plüddemann, Annette; O'Neill, Michael B

    2016-10-01

    This cross-sectional survey explored paediatric physician perspectives regarding diagnostic errors. All paediatric consultants and specialist registrars in Ireland were invited to participate in this anonymous online survey. The response rate for the study was 54% (n = 127). Respondents had a median of 9 years' clinical experience (interquartile range (IQR) 4-20 years). A diagnostic error was reported at least monthly by 19 (15.0%) respondents. Consultants reported significantly fewer diagnostic errors than trainees (p value = 0.01). Cognitive error was the top-ranked contributing factor to diagnostic error, with incomplete history and examination considered to be the principal cognitive error. Seeking a second opinion and close follow-up of patients to ensure that the diagnosis is correct were the highest-ranked, clinician-based solutions to diagnostic error. Inadequate staffing levels and excessive workload were the most highly ranked system-related and situational factors. Increased access to and availability of consultants and experts was the most highly ranked system-based solution to diagnostic error. We found a low level of self-perceived diagnostic error in an experienced group of paediatricians, at variance with the literature and warranting further clarification. The results identify perceptions on the major cognitive, system-related and situational factors contributing to diagnostic error and also key preventative strategies. What is Known: • Diagnostic errors are an important source of preventable patient harm and have an estimated incidence of 10-15%. • They are multifactorial in origin and include cognitive, system-related and situational factors. What is New: • We identified a low rate of self-perceived diagnostic error in contrast to the existing literature. • Incomplete history and examination, inadequate staffing levels and excessive workload are cited as the principal contributing factors to diagnostic error in this study.

  3. THE MOST MASSIVE GALAXIES AT 3.0 ≤ z < 4.0 IN THE NEWFIRM MEDIUM-BAND SURVEY: PROPERTIES AND IMPROVED CONSTRAINTS ON THE STELLAR MASS FUNCTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marchesini, Danilo; Whitaker, Katherine E.; Brammer, Gabriel

    2010-12-10

    We use the optical to mid-infrared coverage of the NEWFIRM Medium-Band Survey (NMBS) to characterize, for the first time, the properties of a mass-complete sample of 14 galaxies at 3.0 ≤ z < 4.0 with M_star > 2.5 × 10^11 M_sun, and to derive significantly more accurate measurements of the high-mass end of the stellar mass function (SMF) of galaxies at 3.0 ≤ z < 4.0. The accurate photometric redshifts and well-sampled spectral energy distributions (SEDs) provided by the NMBS combined with the large surveyed area result in significantly reduced contributions from photometric redshift errors and cosmic variance to the total error budget of the SMF. The typical very massive galaxy at 3.0 ≤ z < 4.0 is red and faint in the observer's optical, with a median r-band magnitude of ⟨r_tot⟩ = 26.1 and median rest-frame U − V color of ⟨U − V⟩ = 1.6. About 60% of the mass-complete sample has optical colors satisfying either the U- or the B-dropout color criteria, although ∼50% of these galaxies have r > 25.5. We find that ∼30% of the sample has star formation rates (SFRs) from SED modeling consistent with zero, although SFRs of up to ∼1-18 M_sun yr^-1 are also allowed within 1σ. However, >80% of the sample is detected at 24 μm, resulting in total infrared luminosities in the range (0.5-4.0) × 10^13 L_sun. This implies the presence of either dust-enshrouded starburst activity (with SFRs of 600-4300 M_sun yr^-1) and/or highly obscured active galactic nuclei (AGNs). The contribution of galaxies with M_star > 2.5 × 10^11 M_sun to the total stellar mass budget at 3.0 ≤ z < 4.0 is ∼8 (+13/−3)%. Compared to recent estimates of the stellar mass density in galaxies with M_star ∼ 10^9-10^11 M_sun at z ∼ 5 and z ∼ 6, we find an evolution by a factor of 2-7 and 3-22 from z ∼ 5 and z ∼ 6, respectively, to z = 3.5. The previously found disagreement at the high-mass end between observed and model-predicted SMFs is now significant at the 3σ level when only random uncertainties are considered. However, systematic uncertainties dominate the total error budget, with errors up to a factor of ∼8 in the densities at the high-mass end, bringing the observed SMF into marginal agreement with the predicted SMF. Additional systematic uncertainties on the high-mass end could be potentially introduced by either (1) the intense star formation and/or the very common AGN activity as inferred from the MIPS 24 μm detections, and/or (2) contamination by a significant population of massive, old, and dusty galaxies at z ∼ 2.6.

  4. Use of units of measurement error in anthropometric comparisons.

    PubMed

    Lucas, Teghan; Henneberg, Maciej

    2017-09-01

    Anthropometrists attempt to minimise measurement errors; however, errors cannot be eliminated entirely, and currently they are simply reported rather than incorporated into analyses of anthropometric data. This study proposes a method that incorporates measurement errors into reported values, replacing metric units with 'units of technical error of measurement (TEM)', and applies it to forensics, industrial anthropometry and biological variation. The USA armed forces anthropometric survey (ANSUR) contains 132 anthropometric dimensions of 3982 individuals. Concepts of duplication and Euclidean distance calculations were applied to the forensic-style identification of individuals in this survey. The National Size and Shape Survey of Australia contains 65 anthropometric measurements of 1265 women. This sample was used to show how a woman's body measurements expressed in TEM could be 'matched' to standard clothing sizes. Euclidean distances show that two sets of repeated anthropometric measurements of the same person cannot be matched (> 0) on measurements expressed in millimetres but can in units of TEM (= 0). Only 81 women can fit into any standard clothing size when matched using centimetres; with units of TEM, 1944 women fit. The proposed method can be applied to all fields that use anthropometry. Units of TEM are considered a more reliable unit of measurement for comparisons.
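
    The matching idea can be sketched briefly: convert each dimension to whole units of its TEM, then compare records by Euclidean distance. The dimensions, TEM values and measurement sessions below are invented for illustration and the rounding rule is an assumption, but the example shows the effect reported above, with a non-zero distance in millimetres collapsing to zero in units of TEM.

```python
import math

# Hypothetical TEM values (in mm) for three dimensions.
TEM_MM = {"stature": 6.0, "chest_girth": 9.0, "arm_length": 4.0}

def to_tem_units(record_mm):
    """Convert a dict of measurements in mm to whole units of TEM."""
    return {k: round(v / TEM_MM[k]) for k, v in record_mm.items()}

def euclidean(a, b):
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

# Two repeated measurement sessions of the same (hypothetical) person.
session1 = {"stature": 1722.0, "chest_girth": 940.0, "arm_length": 601.0}
session2 = {"stature": 1724.0, "chest_girth": 936.0, "arm_length": 600.0}

print("distance in mm:       ", round(euclidean(session1, session2), 1))
print("distance in TEM units:", euclidean(to_tem_units(session1), to_tem_units(session2)))
```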

  5. Experimental investigation of observation error in anuran call surveys

    USGS Publications Warehouse

    McClintock, B.T.; Bailey, L.L.; Pollock, K.H.; Simons, T.R.

    2010-01-01

    Occupancy models that account for imperfect detection are often used to monitor anuran and songbird species occurrence. However, presence-absence data arising from auditory detections may be more prone to observation error (e.g., false-positive detections) than are sampling approaches utilizing physical captures or sightings of individuals. We conducted realistic, replicated field experiments using a remote broadcasting system to simulate simple anuran call surveys and to investigate potential factors affecting observation error in these studies. Distance, time, ambient noise, and observer abilities were the most important factors explaining false-negative detections. Distance and observer ability were the best overall predictors of false-positive errors, but ambient noise and competing species also affected error rates for some species. False-positive errors made up 5% of all positive detections, with individual observers exhibiting false-positive rates between 0.5% and 14%. Previous research suggests false-positive errors of these magnitudes would induce substantial positive biases in standard estimators of species occurrence, and we recommend practices to mitigate for false positives when developing occupancy monitoring protocols that rely on auditory detections. These recommendations include additional observer training, limiting the number of target species, and establishing distance and ambient noise thresholds during surveys. © 2010 The Wildlife Society.
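
    A crude simulation makes the direction of this bias easy to see. The sketch below uses a naive "detected at least once" occupancy estimate rather than the formal occupancy models discussed in the study, and all parameter values are invented; even so, adding a 5% false-positive rate pushes the naive estimate from below the true occupancy to above it.

```python
import random

random.seed(3)

def naive_occupancy(psi=0.4, p_detect=0.5, p_false=0.05,
                    n_sites=2000, n_visits=3):
    """Proportion of simulated sites with at least one detection across visits."""
    detected_sites = 0
    for _ in range(n_sites):
        occupied = random.random() < psi
        detected = any(
            random.random() < (p_detect if occupied else p_false)
            for _ in range(n_visits)
        )
        detected_sites += detected
    return detected_sites / n_sites

print("true occupancy: 0.40")
print("naive estimate, no false positives:", round(naive_occupancy(p_false=0.0), 3))
print("naive estimate, 5% false positives:", round(naive_occupancy(p_false=0.05), 3))
```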

  6. Risk managers, physicians, and disclosure of harmful medical errors.

    PubMed

    Loren, David J; Garbutt, Jane; Dunagan, W Claiborne; Bommarito, Kerry M; Ebers, Alison G; Levinson, Wendy; Waterman, Amy D; Fraser, Victoria J; Summy, Elizabeth A; Gallagher, Thomas H

    2010-03-01

    Physicians are encouraged to disclose medical errors to patients, which often requires close collaboration between physicians and risk managers. An anonymous national survey of 2,988 healthcare facility-based risk managers was conducted between November 2004 and March 2005, and results were compared with those of a previous survey (conducted between July 2003 and March 2004) of 1,311 medical physicians in Washington and Missouri. Both surveys included an error-disclosure scenario for an obvious and a less obvious error with scripted response options. More risk managers than physicians were aware that an error-reporting system was present at their hospital (81% versus 39%, p < .001) and believed that mechanisms to inform physicians about errors in their hospital were adequate (51% versus 17%, p < .001). More risk managers than physicians strongly agreed that serious errors should be disclosed to patients (70% versus 49%, p < .001). Across both error scenarios, risk managers were more likely than physicians to definitely recommend that the error be disclosed (76% versus 50%, p < .001) and to provide full details about how the error would be prevented in the future (62% versus 51%, p < .001). However, physicians were more likely than risk managers to provide a full apology recognizing the harm caused by the error (39% versus 21%, p < .001). Risk managers have more favorable attitudes about disclosing errors to patients compared with physicians but are less supportive of providing a full apology. These differences may create conflicts between risk managers and physicians regarding disclosure. Health care institutions should promote greater collaboration between these two key participants in disclosure conversations.

  7. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    DOE PAGES

    Li, T. S.; DePoy, D. L.; Marshall, J. L.; ...

    2016-06-01

    Here, we report that meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. In conclusion, the residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.

  8. An empirical model for global earthquake fatality estimation

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David

    2010-01-01

    We analyzed mortality rates of earthquakes worldwide and developed a country/region-specific empirical model for earthquake fatality estimation within the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is defined as the total number killed divided by the total population exposed at a specific shaking intensity level. The total fatalities for a given earthquake are estimated by multiplying the number of people exposed at each shaking intensity level by the fatality rate for that level and then summing over all relevant shaking intensities. The fatality rate is expressed in terms of a two-parameter lognormal cumulative distribution function of shaking intensity. The parameters are obtained for each country or region by minimizing the residual error in hindcasting the total shaking-related deaths from earthquakes recorded between 1973 and 2007. A new global regionalization scheme is used to combine the fatality data across different countries with similar vulnerability traits.
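
    Expressed as code, the expected-fatality calculation described above is a sum over intensity bins of exposed population times a lognormal-CDF fatality rate. This is a minimal sketch: the exposure figures and the parameters theta and beta below are invented for illustration, not calibrated PAGER coefficients.

```python
import math

def lognormal_cdf(x, theta, beta):
    """Two-parameter lognormal CDF: P = Phi(ln(x / theta) / beta)."""
    return 0.5 * (1.0 + math.erf(math.log(x / theta) / (beta * math.sqrt(2.0))))

def expected_fatalities(exposure_by_mmi, theta, beta):
    """Sum over shaking-intensity bins of (population exposed) x (fatality rate at that intensity)."""
    return sum(pop * lognormal_cdf(mmi, theta, beta) for mmi, pop in exposure_by_mmi.items())

# Hypothetical exposure (people per Modified Mercalli Intensity bin) and country parameters.
exposure = {6.0: 500_000, 7.0: 200_000, 8.0: 50_000, 9.0: 5_000}
theta, beta = 14.0, 0.25   # illustrative values only

print(f"Estimated shaking-related deaths: {expected_fatalities(exposure, theta, beta):,.0f}")
```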

  9. Fibromyalgia survey criteria are associated with increased postoperative opioid consumption in women undergoing hysterectomy.

    PubMed

    Janda, Allison M; As-Sanie, Sawsan; Rajala, Baskar; Tsodikov, Alex; Moser, Stephanie E; Clauw, Daniel J; Brummett, Chad M

    2015-05-01

    The current study was designed to test the hypothesis that the fibromyalgia survey criteria would be directly associated with increased opioid consumption after hysterectomy even when accounting for other factors previously described as being predictive for acute postoperative pain. Two hundred eight adult patients undergoing hysterectomy between October 2011 and December 2013 were phenotyped preoperatively with the use of validated self-reported questionnaires including the 2011 fibromyalgia survey criteria, measures of pain severity and descriptors, psychological measures, preoperative opioid use, and health information. The primary outcome was the total postoperative opioid consumption converted to oral morphine equivalents. Higher fibromyalgia survey scores were significantly associated with worse preoperative pain characteristics, including higher pain severity, more neuropathic pain, greater psychological distress, and more preoperative opioid use. In a multivariate linear regression model, the fibromyalgia survey score was independently associated with increased postoperative opioid consumption, with an increase of 7-mg oral morphine equivalents for every 1-point increase on the 31-point measure (Estimate, 7.0; Standard Error, 1.7; P < 0.0001). In addition to the fibromyalgia survey score, multivariate analysis showed that more severe medical comorbidity, catastrophizing, laparotomy surgical approach, and preoperative opioid use were also predictive of increased postoperative opioid consumption. As was previously demonstrated in a total knee and hip arthroplasty cohort, this study demonstrated that increased fibromyalgia survey scores were predictive of postoperative opioid consumption in the posthysterectomy surgical population during their hospital stay. By demonstrating the generalizability in a second surgical cohort, these data suggest that patients with fibromyalgia-like characteristics may require a tailored perioperative analgesic regimen.

  10. Automatic-repeat-request error control schemes

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.; Miller, M. J.

    1983-01-01

    Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
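
    As a concrete illustration of detection-plus-retransmission, the sketch below implements a toy stop-and-wait ARQ loop over a binary symmetric channel. It uses CRC-32 for error detection rather than a purpose-chosen linear block code, and the frame layout, channel model, and parameter values are assumptions made for the example.

```python
import random
import zlib

def make_frame(payload: bytes) -> bytes:
    """Append a CRC-32 checksum so the receiver can detect transmission errors."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def channel(frame: bytes, bit_error_rate: float) -> bytes:
    """Flip each bit independently with the given probability (toy binary symmetric channel)."""
    out = bytearray(frame)
    for i in range(len(out) * 8):
        if random.random() < bit_error_rate:
            out[i // 8] ^= 1 << (i % 8)
    return bytes(out)

def receive(frame: bytes):
    """Return the payload if the CRC checks out, otherwise None (which triggers a retransmission)."""
    payload, crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    return payload if zlib.crc32(payload) == crc else None

def stop_and_wait(payload: bytes, bit_error_rate=1e-3, max_tries=20):
    """Retransmit until the receiver accepts the frame; return the number of attempts used."""
    for attempt in range(1, max_tries + 1):
        if receive(channel(make_frame(payload), bit_error_rate)) is not None:
            return attempt
    raise RuntimeError("gave up after max_tries")

random.seed(1)
print("delivered after", stop_and_wait(b"error control by detection and retransmission"), "attempt(s)")
```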

  11. A Spatio-Temporally Explicit Random Encounter Model for Large-Scale Population Surveys

    PubMed Central

    Jousimo, Jussi; Ovaskainen, Otso

    2016-01-01

    Random encounter models can be used to estimate population abundance from indirect data collected by non-invasive sampling methods, such as track counts or camera-trap data. The classical Formozov–Malyshev–Pereleshin (FMP) estimator converts track counts into an estimate of mean population density, assuming that data on the daily movement distances of the animals are available. We utilize generalized linear models with spatio-temporal error structures to extend the FMP estimator into a flexible Bayesian modelling approach that estimates not only total population size, but also spatio-temporal variation in population density. We also introduce a weighting scheme to estimate density on habitats that are not covered by survey transects, assuming that movement data on a subset of individuals is available. We test the performance of spatio-temporal and temporal approaches by a simulation study mimicking the Finnish winter track count survey. The results illustrate how the spatio-temporal modelling approach is able to borrow information from observations made on neighboring locations and times when estimating population density, and that spatio-temporal and temporal smoothing models can provide improved estimates of total population size compared to the FMP method. PMID:27611683
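
    For orientation, the classical FMP estimator that the Bayesian approach generalizes is usually written as D = (pi / 2) x (track crossings) / (transect length x mean daily movement distance); the exact presentation of the constant varies between sources, and the numbers below are invented, so treat this as a minimal sketch rather than the paper's model.

```python
import math

def fmp_density(crossings: float, transect_length_km: float, daily_movement_km: float) -> float:
    """
    Classical Formozov-Malyshev-Pereleshin estimator (sketch): animals per km^2 from the
    number of track crossings counted along survey transects over ~24 h of track accumulation.
    D = (pi / 2) * crossings / (transect length * mean daily movement distance).
    """
    return (math.pi / 2.0) * crossings / (transect_length_km * daily_movement_km)

# Hypothetical winter track count: 58 crossings along 120 km of transect,
# with movement data suggesting a 4.5 km mean daily movement distance.
print(f"Estimated density: {fmp_density(58, 120.0, 4.5):.3f} animals / km^2")
```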

  12. Bathymetric surveying with GPS and heave, pitch, and roll compensation

    USGS Publications Warehouse

    Work, P.A.; Hansen, M.; Rogers, W.E.

    1998-01-01

    Field and laboratory tests of a shipborne hydrographic survey system were conducted. The system consists of two 12-channel GPS receivers (one on-board, one fixed on shore), a digital acoustic fathometer, and a digital heave-pitch-roll (HPR) recorder. Laboratory tests of the HPR recorder and fathometer are documented. Results of field tests of the isolated GPS system and then of the entire suite of instruments are presented. A method for data reduction is developed to account for vertical errors introduced by roll and pitch of the survey vessel, which can be substantial (decimeters). The GPS vertical position data are found to be reliable to 2-3 cm and the fathometer to 5 cm in the laboratory. The field test of the complete system in shallow water (<2 m) indicates absolute vertical accuracy of 10-20 cm. Much of this error is attributed to the fathometer. Careful surveying and equipment setup can minimize systematic error and yield much smaller average errors.
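
    The vertical reduction described above can be sketched in a few lines. The geometry here is deliberately simplified (lever arms between antenna and transducer, beam geometry, and refraction are ignored), so the function names and offsets are assumptions for illustration rather than the paper's data-reduction method.

```python
import math

def corrected_bottom_elevation(gps_antenna_elev_m, antenna_to_transducer_m,
                               measured_depth_m, roll_deg, pitch_deg, heave_m=0.0):
    """
    Simplified sounding reduction: the fathometer measures a slant range along the transducer
    axis, so roll and pitch reduce the true vertical depth by roughly cos(roll) * cos(pitch);
    GPS gives the antenna elevation directly, and heave (if not already absorbed by the GPS
    vertical solution) is removed.
    """
    roll, pitch = math.radians(roll_deg), math.radians(pitch_deg)
    vertical_depth = measured_depth_m * math.cos(roll) * math.cos(pitch)
    transducer_elev = gps_antenna_elev_m - antenna_to_transducer_m - heave_m
    return transducer_elev - vertical_depth

# Hypothetical shallow-water sounding: 1.8 m raw depth with 8 deg of roll and 4 deg of pitch.
print(f"Bottom elevation: {corrected_bottom_elevation(3.25, 2.10, 1.8, 8.0, 4.0):.3f} m")
```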

  13. Modeling Water Temperature in the Yakima River, Washington, from Roza Diversion Dam to Prosser Dam, 2005-06

    USGS Publications Warehouse

    Voss, Frank D.; Curran, Christopher A.; Mastin, Mark C.

    2008-01-01

    A mechanistic water-temperature model was constructed by the U.S. Geological Survey for use by the Bureau of Reclamation for studying the effect of potential water management decisions on water temperature in the Yakima River between Roza and Prosser, Washington. Flow and water temperature data for model input were obtained from the Bureau of Reclamation Hydromet database and from measurements collected by the U.S. Geological Survey during field trips in autumn 2005. Shading data for the model were collected by the U.S. Geological Survey in autumn 2006. The model was calibrated with data collected from April 1 through October 31, 2005, and tested with data collected from April 1 through October 31, 2006. Sensitivity analysis results showed that for the parameters tested, daily maximum water temperature was most sensitive to changes in air temperature and solar radiation. Root mean squared error for the five sites used for model calibration ranged from 1.3 to 1.9 degrees Celsius (°C) and mean error ranged from -1.3 to 1.6°C. The root mean squared error for the five sites used for testing simulation ranged from 1.6 to 2.2°C and mean error ranged from 0.1 to 1.3°C. The accuracy of the stream temperatures estimated by the model is limited by four errors (model error, data error, parameter error, and user error).
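
    The two skill metrics quoted above are standard and easy to reproduce; the temperature values below are invented, so this is only a sketch of how root mean squared error and mean (signed) error are computed from paired simulated and observed series.

```python
import math

def rmse(simulated, observed):
    """Root mean squared error between simulated and observed daily maximum temperatures."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(simulated, observed)) / len(observed))

def mean_error(simulated, observed):
    """Mean (signed) error; positive values mean the model runs warm on average."""
    return sum(s - o for s, o in zip(simulated, observed)) / len(observed)

# Hypothetical daily maximum water temperatures (deg C) at one calibration site.
sim = [18.2, 19.1, 20.4, 21.0, 19.6]
obs = [17.0, 18.8, 21.5, 20.1, 18.2]
print(f"RMSE = {rmse(sim, obs):.2f} deg C, mean error = {mean_error(sim, obs):.2f} deg C")
```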

  14. A two-phase sampling survey for nonresponse and its paradata to correct nonresponse bias in a health surveillance survey.

    PubMed

    Santin, G; Bénézet, L; Geoffroy-Perez, B; Bouyer, J; Guéguen, A

    2017-02-01

    The decline in participation rates in surveys, including epidemiological surveillance surveys, has become a real concern since it may increase nonresponse bias. The aim of this study is to estimate the contribution of a complementary survey among a subsample of nonrespondents, and the additional contribution of paradata, in correcting for nonresponse bias in an occupational health surveillance survey. In 2010, 10,000 workers were randomly selected and sent a postal questionnaire. Sociodemographic data were available for the whole sample. After data collection of the questionnaires, a complementary survey among a random subsample of 500 nonrespondents was performed using a questionnaire administered by an interviewer. Paradata were collected for the complete subsample of the complementary survey. Nonresponse bias in the initial sample and in the combined samples was assessed using variables from administrative databases available for the whole sample and not subject to differential measurement errors. Corrected prevalences, obtained by a reweighting technique, were estimated first using the initial survey alone and then using the initial and complementary surveys combined, under several assumptions regarding the missing data process. Results were compared by computing relative errors. The response rates of the initial and complementary surveys were 23.6% and 62.6%, respectively. For the initial and the combined surveys, the relative errors decreased after correction for nonresponse on sociodemographic variables. For the combined surveys without paradata, relative errors decreased compared with the initial survey. The contribution of the paradata was weak. When a complex descriptive survey has a low response rate, a short complementary survey among nonrespondents, with a protocol that aims to maximize the response rate, is useful. The contribution of sociodemographic variables in correcting for nonresponse bias is important, whereas the additional contribution of paradata in correcting for nonresponse bias is questionable. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
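
    The basic idea behind the reweighting correction can be sketched as a weighting-class adjustment: within cells defined by variables known for the whole sample, respondents' design weights are inflated by the inverse of the cell response rate. This is a generic sketch of the technique, not the authors' exact estimator (which also combines the initial and complementary surveys under explicit missing-data assumptions); the data and field names are invented.

```python
from collections import defaultdict

def reweight_for_nonresponse(sample):
    """
    Weighting-class nonresponse adjustment (minimal sketch): within each cell defined by a
    variable known for the whole sample (here a single 'stratum' key), respondents' design
    weights are multiplied by total cell weight / responding cell weight.
    Each unit is a dict: {'stratum': ..., 'weight': ..., 'responded': ..., 'y': ...}.
    """
    totals, resp_totals = defaultdict(float), defaultdict(float)
    for u in sample:
        totals[u["stratum"]] += u["weight"]
        if u["responded"]:
            resp_totals[u["stratum"]] += u["weight"]
    adjusted = []
    for u in sample:
        if u["responded"]:
            factor = totals[u["stratum"]] / resp_totals[u["stratum"]]
            adjusted.append({**u, "weight": u["weight"] * factor})
    return adjusted

# Hypothetical data: manual workers respond less often and have a higher exposure prevalence.
sample = (
    [{"stratum": "manual", "weight": 1.0, "responded": i < 2, "y": 1} for i in range(10)]
    + [{"stratum": "office", "weight": 1.0, "responded": i < 6, "y": 0} for i in range(10)]
)
resp = reweight_for_nonresponse(sample)
prev = sum(u["weight"] * u["y"] for u in resp) / sum(u["weight"] for u in resp)
print(f"Nonresponse-corrected prevalence: {prev:.2f}")   # versus a naive respondent-only 0.25
```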

  15. Nonresponse and Underreporting Errors Increase over the Data Collection Week Based on Paradata from the National Household Food Acquisition and Purchase Survey.

    PubMed

    Hu, Mengyao; Gremel, Garrett W; Kirlin, John A; West, Brady T

    2017-05-01

    Background: Food acquisition diary surveys are important for studying food expenditures, factors affecting food acquisition decisions, and relations between these decisions with selected measures of health (e.g., body mass index, self-reported health). However, to our knowledge, no studies have evaluated the errors associated with these diary surveys, which can bias survey estimates and research findings. The use of paradata, which has been largely ignored in previous literature on diary surveys, could be useful for studying errors in these surveys. Objective: We used paradata to assess survey errors in the National Household Food Acquisition and Purchase Survey (FoodAPS). Methods: To evaluate the patterns of nonresponse over the diary period, we fit a multinomial logistic regression model to data from this 1-wk diary survey. We also assessed factors influencing respondents' probability of reporting food acquisition events during the diary process by using logistic regression models. Finally, with the use of an ordinal regression model, we studied factors influencing respondents' perceived ease of participation in the survey. Results: As the diary period progressed, nonresponse increased, especially for those starting the survey on Friday (where the odds of a refusal increased by 12% with each fielding day). The odds of reporting food acquisition events also decreased by 6% with each additional fielding day. Similarly, the odds of reporting ≥1 food-away-from-home event (i.e., meals, snacks, and drinks obtained outside the home) decreased significantly over the fielding period. Male respondents, larger households, households that eat together less often, and households with frequent guests reported a significantly more difficult time getting household members to participate, as did non-English-speaking households and households currently experiencing difficult financial conditions. Conclusions: Nonresponse and underreporting of food acquisition events tended to increase in the FoodAPS as data collection proceeded. This analysis of paradata available in the FoodAPS revealed these errors and suggests methodologic improvements for future food acquisition surveys. © 2017 American Society for Nutrition.

  16. Measurement error in earnings data: Using a mixture model approach to combine survey and register data.

    PubMed

    Meijer, Erik; Rohwedder, Susann; Wansbeek, Tom

    2012-01-01

    Survey data on earnings tend to contain measurement error. Administrative data are superior in principle, but they are worthless in case of a mismatch. We develop methods for prediction in mixture factor analysis models that combine both data sources to arrive at a single earnings figure. We apply the methods to a Swedish data set. Our results show that register earnings data perform poorly if there is a (small) probability of a mismatch. Survey earnings data are more reliable, despite their measurement error. Predictors that combine both and take conditional class probabilities into account outperform all other predictors.

  17. Early self-managed focal sensorimotor rehabilitative training enhances functional mobility and sensorimotor function in patients following total knee replacement: a controlled clinical trial.

    PubMed

    Moutzouri, Maria; Gleeson, Nigel; Coutts, Fiona; Tsepis, Elias; John, Gliatis

    2018-02-01

    To assess the effects of early self-managed focal sensorimotor training compared to functional exercise training after total knee replacement on functional mobility and sensorimotor function. A single-blind controlled clinical trial. University Hospital of Rion, Greece. A total of 52 participants following total knee replacement. The primary outcome was the Timed Up and Go Test and the secondary outcomes were balance, joint position error, the Knee Outcome Survey Activities of Daily Living Scale, and pain. Patients were assessed on three separate occasions (presurgery, 8 weeks post surgery, and 14 weeks post surgery). Participants were randomized to either focal sensorimotor exercise training (experimental group) or functional exercise training (control group). Both groups received a 12-week home-based programme prescribed for 3-5 sessions/week (35-45 minutes). Consistently greater improvements (F(2,98) = 4.3 to 24.8; P < 0.05) in group mean scores favour the experimental group compared to the control group: Timed Up and Go (7.8 ± 2.9 seconds vs. 4.6 ± 2.6 seconds); balance (2.1 ± 0.9° vs. 0.7 ± 1.2°); joint position error (13.8 ± 7.3° vs. 6.2 ± 9.1°); Knee Outcome Survey Activities of Daily Living Scale (44.2 ± 11.3 vs. 26.1 ± 11.4); and pain (5.9 ± 1.3 cm vs. 4.6 ± 1.1 cm). Patterns of improvement for the experimental group over time were represented by a relative effect size range of 1.3-6.5. Overall, the magnitude of improvements in functional mobility and sensorimotor function endorses using focal sensorimotor training as an effective mode of rehabilitation following knee replacement.

  18. Questions for Surveys

    PubMed Central

    Schaeffer, Nora Cate; Dykema, Jennifer

    2011-01-01

    We begin with a look back at the field to identify themes of recent research that we expect to continue to occupy researchers in the future. As part of this overview, we characterize the themes and topics examined in research about measurement and survey questions published in Public Opinion Quarterly in the past decade. We then characterize the field more broadly by highlighting topics that we expect to continue or to grow in importance, including the relationship between survey questions and the total survey error perspective, cognitive versus interactional approaches, interviewing practices, mode and technology, visual aspects of question design, and culture. Considering avenues for future research, we advocate for a decision-oriented framework for thinking about survey questions and their characteristics. The approach we propose distinguishes among various aspects of question characteristics, including question topic, question type and response dimension, conceptualization and operationalization of the target object, question structure, question form, response categories, question implementation, and question wording. Thinking about question characteristics more systematically would allow study designs to take into account relationships among these characteristics and identify gaps in current knowledge. PMID:24970951

  19. Comparison of survey and photogrammetry methods to position gravity data, Yucca Mountain, Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ponce, D.A.; Wu, S.S.C.; Spielman, J.B.

    1985-12-31

    Locations of gravity stations at Yucca Mountain, Nevada, were determined by a survey using an electronic distance-measuring device and by a photogrammetric method. The data from both methods were compared to determine if horizontal and vertical coordinates developed from photogrammetry are sufficiently accurate to position gravity data at the site. The results show that elevations from the photogrammetric data have a mean difference of 0.57 ± 0.70 m when compared with those of the surveyed data. Comparison of the horizontal control shows that the two methods agreed to within 0.01 minute. At a latitude of 45°, an error of 0.01 minute (18 m) corresponds to a gravity anomaly error of 0.015 mGal. Bouguer gravity anomalies are most sensitive to errors in elevation, thus elevation is the determining factor for use of photogrammetric or survey methods to position gravity data. Because gravity station positions are difficult to locate on aerial photographs, photogrammetric positions are not always exactly at the gravity station; therefore, large disagreements may appear when comparing electronic and photogrammetric measurements. A mean photogrammetric elevation error of 0.57 m corresponds to a gravity anomaly error of 0.11 mGal. Errors of 0.11 mGal are too large for high-precision or detailed gravity measurements but acceptable for regional work. 1 ref., 2 figs., 4 tabs.
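
    The quoted conversions from positioning error to gravity anomaly error follow from standard gradients, and a quick back-of-the-envelope check reproduces them. The constants below are the commonly used free-air, Bouguer-slab, and latitude gradients; exact values vary slightly between formulations, so treat this as a sketch rather than the authors' computation.

```python
import math

FREE_AIR_MGAL_PER_M = 0.3086        # free-air gradient
BOUGUER_SLAB_MGAL_PER_M = 0.1119    # Bouguer slab correction for crustal density ~2.67 g/cm^3
LATITUDE_COEFF_MGAL_PER_KM = 0.812  # d(g_normal)/ds ~ 0.812 * sin(2 * latitude) mGal per km north

elev_error_m = 0.57
lat_error_m = 18.0                  # 0.01 arc-minute of latitude
lat_deg = 45.0

# Bouguer anomaly error from an elevation error: combined free-air minus slab gradient.
bouguer_anomaly_error = elev_error_m * (FREE_AIR_MGAL_PER_M - BOUGUER_SLAB_MGAL_PER_M)
# Anomaly error from a north-south position error via the normal-gravity latitude gradient.
latitude_error = (lat_error_m / 1000.0) * LATITUDE_COEFF_MGAL_PER_KM * math.sin(math.radians(2 * lat_deg))

print(f"Elevation error of {elev_error_m} m -> ~{bouguer_anomaly_error:.2f} mGal")
print(f"Position error of {lat_error_m} m  -> ~{latitude_error:.3f} mGal")
```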

  20. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
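
    A propagation-of-error analysis of this kind amounts to summing the variances of the balance terms (plus covariance terms for interactions) and asking which term dominates. The sketch below uses invented standard deviations, not the Skylab values, purely to illustrate the bookkeeping.

```python
import math

def balance_error(std_devs, covariances=None):
    """
    Propagation of error for a balance formed as a sum/difference of terms:
    Var(balance) = sum of term variances + 2 * sum of the (signed) covariances.
    `std_devs` maps term name -> standard deviation; `covariances` maps term pairs -> covariance.
    """
    var = sum(s ** 2 for s in std_devs.values())
    var += 2.0 * sum((covariances or {}).values())
    return math.sqrt(var)

# Hypothetical daily standard deviations (g/day) for water-balance terms; covariances omitted.
terms = {"intake": 40.0, "urine": 30.0, "fecal": 10.0, "evaporative": 25.0, "body_mass_change": 120.0}
total = balance_error(terms)
print(f"Total balance error ~ {total:.0f} g/day")
print(f"Share of variance from the body-mass term: {terms['body_mass_change'] ** 2 / total ** 2:.0%}")
```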

  1. Status of pelagic prey fishes and pelagic macroinvertebrates in Lake Michigan, 2008

    USGS Publications Warehouse

    Warner, David M.; Claramunt, Randall M.; Holuszko, Jeffrey D.; Desorcie, Timothy J.

    2009-01-01

    Acoustic surveys were conducted in late summer/early fall during the years 1992-1996 and 2001-2008 to estimate pelagic prey fish biomass in Lake Michigan. Midwater trawling during the surveys provided a measure of species and size composition of the fish community for use in scaling acoustic data and providing species-specific abundance estimates. In 2005, we began sampling Mysis diluviana during the survey. The 2008 survey provided data from 24 acoustic transects (734 km), 33 midwater tows, and 39 mysid tows. Mean total prey fish biomass was 15.3 kg/ha (relative standard error, RSE = 7.6%) or ~82 kilotonnes (kt, 1,000 metric tons), which was 1.9 times higher than the estimate for 2007 but 78% lower than the long-term mean. The increase from 2007 was because of increased biomass of age-1 and age-3 alewife. The 2008 alewife year-class contributed ~12% of total alewife biomass (11.0 kg/ha, RSE = 9.0%), while the 2007 and 2005 alewife year-classes contributed ~33% and 35%, respectively. In 2008, alewife comprised 72% of total biomass, while rainbow smelt and bloater were 11 and 17% of total biomass, respectively. Rainbow smelt biomass in 2008 (1.6 kg/ha, RSE = 10.6%) was identical to the biomass in 2007 (1.6 kg/ha). Bloater biomass was again much lower (2.6 kg/ha, RSE = 15.2%) than in the 1990s, but mean density of small bloater in 2008 (534 fish/ha, RSE = 10.9%) was the highest observed in any acoustic survey on record. Prey fish biomass remained well below the Fish Community Objectives target of 500-800 kt and only alewife and small bloater are above or near long-term mean biomass levels. Mysis diluviana remains relatively abundant. Mean density ranged from 185 ind./m2 (RSE = 6.8%) in 2005 to 112 ind./m2 (RSE = 5.1%) in 2007, but there was not a statistically significant difference among years.
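
    The RSE values quoted throughout are simply the standard error of the mean expressed as a percentage of the mean. The sketch below shows the calculation for a set of invented per-transect densities, treating transects as a simple random sample and ignoring any transect weighting the survey design may use.

```python
import statistics

def mean_and_rse(transect_values):
    """Sample mean and relative standard error (standard error as a percentage of the mean)."""
    m = statistics.mean(transect_values)
    se = statistics.stdev(transect_values) / len(transect_values) ** 0.5
    return m, 100.0 * se / m

# Hypothetical per-transect biomass densities (kg/ha) from an acoustic survey.
density = [14.1, 16.8, 12.5, 18.0, 15.2, 13.9, 17.4, 14.6]
m, rse = mean_and_rse(density)
print(f"Mean biomass = {m:.1f} kg/ha (RSE = {rse:.1f}%)")
```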

  2. Prevalence and Risk Factors for Refractive Errors: Korean National Health and Nutrition Examination Survey 2008-2011

    PubMed Central

    Kim, Eun Chul; Morgan, Ian G.; Kakizaki, Hirohiko; Kang, Seungbum; Jee, Donghyun

    2013-01-01

    Purpose To examine the prevalence and risk factors of refractive errors in a representative Korean population aged 20 years old or older. Methods A total of 23,392 people aged 20+ years were selected for the Korean National Health and Nutrition Survey 2008–2011, using stratified, multistage, clustered sampling. Refractive error was measured by autorefraction without cycloplegia, and interviews were performed regarding associated risk factors including gender, age, height, education level, parent's education level, economic status, light exposure time, and current smoking history. Results Of 23,392 participants, refractive errors were examined in 22,562 persons, including 21,356 subjects with phakic eyes. The overall prevalences of myopia (< -0.5 D), high myopia (< -6.0 D), and hyperopia (> 0.5 D) were 48.1% (95% confidence interval [CI], 47.4–48.8), 4.0% (CI, 3.7–4.3), and 24.2% (CI, 23.6–24.8), respectively. The prevalence of myopia sharply decreased from 78.9% (CI, 77.4–80.4) in 20–29 year olds to 16.1% (CI, 14.9–17.3) in 60–69 year olds. In multivariable logistic regression analyses restricted to subjects aged 40+ years, myopia was associated with younger age (odds ratio [OR], 0.94; 95% Confidence Interval [CI], 0.93-0.94, p < 0.001), education level of university or higher (OR, 2.31; CI, 1.97–2.71, p < 0.001), and shorter sunlight exposure time (OR, 0.84; CI, 0.76–0.93, p = 0.002). Conclusions This study provides the first representative population-based data on refractive error for Korean adults. The prevalence of myopia in Korean adults in 40+ years (34.7%) was comparable to that in other Asian countries. These results show that the younger generations in Korea are much more myopic than previous generations, and that important factors associated with this increase are increased education levels and reduced sunlight exposures. PMID:24224049

  3. Prevalence and risk factors for refractive errors: Korean National Health and Nutrition Examination Survey 2008-2011.

    PubMed

    Kim, Eun Chul; Morgan, Ian G; Kakizaki, Hirohiko; Kang, Seungbum; Jee, Donghyun

    2013-01-01

    To examine the prevalence and risk factors of refractive errors in a representative Korean population aged 20 years old or older. A total of 23,392 people aged 20+ years were selected for the Korean National Health and Nutrition Survey 2008-2011, using stratified, multistage, clustered sampling. Refractive error was measured by autorefraction without cycloplegia, and interviews were performed regarding associated risk factors including gender, age, height, education level, parent's education level, economic status, light exposure time, and current smoking history. Of 23,392 participants, refractive errors were examined in 22,562 persons, including 21,356 subjects with phakic eyes. The overall prevalences of myopia (< -0.5 D), high myopia (< -6.0 D), and hyperopia (> 0.5 D) were 48.1% (95% confidence interval [CI], 47.4-48.8), 4.0% (CI, 3.7-4.3), and 24.2% (CI, 23.6-24.8), respectively. The prevalence of myopia sharply decreased from 78.9% (CI, 77.4-80.4) in 20-29 year olds to 16.1% (CI, 14.9-17.3) in 60-69 year olds. In multivariable logistic regression analyses restricted to subjects aged 40+ years, myopia was associated with younger age (odds ratio [OR], 0.94; 95% Confidence Interval [CI], 0.93-0.94, p < 0.001), education level of university or higher (OR, 2.31; CI, 1.97-2.71, p < 0.001), and shorter sunlight exposure time (OR, 0.84; CI, 0.76-0.93, p = 0.002). This study provides the first representative population-based data on refractive error for Korean adults. The prevalence of myopia in Korean adults in 40+ years (34.7%) was comparable to that in other Asian countries. These results show that the younger generations in Korea are much more myopic than previous generations, and that important factors associated with this increase are increased education levels and reduced sunlight exposures.

  4. Physician Preferences to Communicate Neuropsychological Results: Comparison of Qualitative Descriptors and a Proposal to Reduce Communication Errors.

    PubMed

    Schoenberg, Mike R; Osborn, Katie E; Mahone, E Mark; Feigon, Maia; Roth, Robert M; Pliskin, Neil H

    2017-11-08

    Errors in communication are a leading cause of medical errors. A potential source of error in communicating neuropsychological results is confusion in the qualitative descriptors used to describe standardized neuropsychological data. This study sought to evaluate the extent to which medical consumers of neuropsychological assessments believed that results/findings were not clearly communicated. In addition, preference data for a variety of qualitative descriptors commonly used to communicate normative neuropsychological test scores were obtained. Preference data were obtained for five qualitative descriptor systems as part of a larger 36-item internet-based survey of physician satisfaction with neuropsychological services. A new qualitative descriptor system termed the Simplified Qualitative Classification System (Q-Simple) was proposed to reduce the potential for communication errors using seven terms: very superior, superior, high average, average, low average, borderline, and abnormal/impaired. A non-random convenience sample of 605 clinicians identified from four United States academic medical centers from January 1, 2015 through January 7, 2016 were invited to participate. A total of 182 surveys were completed. A minority of clinicians (12.5%) indicated that neuropsychological study results were not clearly communicated. When communicating neuropsychological standardized scores, the two most preferred qualitative descriptor systems were that of Heaton and colleagues (Comprehensive norms for an extended Halstead-Reitan battery: Demographic corrections, research findings, and clinical applications. Odessa, TX: Psychological Assessment Resources) (26%) and the newly proposed Q-Simple system (22%). Initial findings highlight the need to improve and standardize communication of neuropsychological results. These data offer initial guidance for preferred terms to communicate test results and form a foundation for more standardized practice among neuropsychologists. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  5. Perceptions of the 2011 ACGME duty hour requirements among residents in all core programs at a large academic medical center.

    PubMed

    Sandefur, Benjamin J; Shewmaker, Diana M; Lohse, Christine M; Rose, Steven H; Colletti, James E

    2017-11-10

    The Accreditation Council for Graduate Medical Education (ACGME) implemented revisions to resident duty hour requirements (DHRs) in 2011 to improve patient safety and resident well-being. Perceptions of DHRs have been reported to vary by training stage and specialty among internal medicine and general surgery residents. The authors explored perceptions of DHRs among all residents at a large academic medical center. The authors administered an anonymous cross-sectional survey about DHRs to residents enrolled in all ACGME-accredited core residency programs at their institution. Residents were categorized as medical and pediatric, surgery, or other. In total, 736 residents representing 24 core specialty residency programs were surveyed. The authors received responses from 495 residents (67%). A majority reported satisfaction (78%) with DHRs and believed DHRs positively affect their training (73%). Residents in surgical specialties and in advanced stages of training were significantly less likely to view DHRs favorably. Most respondents believed fatigue contributes to errors (89%) and DHRs reduce both fatigue (80%) and performance of clinical duties while fatigued (74%). A minority of respondents (37%) believed that DHRs decrease medical errors. This finding may reflect beliefs that handovers contribute more to errors than fatigue (41%). Negative perceived effects included diminished patient familiarity and continuity of care (62%) and diminished clinical educational experiences for residents (41%). A majority of residents reported satisfaction with the 2011 DHRs, although satisfaction was significantly less among residents in surgical specialties and those in advanced stages of training.

  6. [Refractive errors among schoolchildren in the central region of Togo].

    PubMed

    Nonon Saa, K B; Atobian, K; Banla, M; Rédah, T; Maneh, N; Walser, A

    2013-11-01

    Untreated refractive errors represent the main visual impairment in the world but also the easiest to avoid. The goal of this survey is to use clinical and epidemiological data to efficiently plan distribution of corrective glasses in a project supported by the Swiss Red Cross in the central region of Togo. To achieve this goal, 66 primary schools were identified randomly in the catchment area of the project. The teachers at these schools were previously trained to test visual acuity (VA). The schoolchildren referred by these teachers were examined by eye care professionals. The schoolchildren with ametropia (VA ≤ 7/10 in at least one eye) underwent cycloplegic autorefraction. Of a total of 19,252 registered schoolchildren, 13,039 underwent VA testing by the teachers (participation rate = 68%). Among them, 366 cases of ametropia were identified (prevalence about 3%). The average age of the schoolchildren examined was 10.7 ± 2.3 years, with a sex ratio of 1.06. Autorefraction, performed for 37% of the schoolchildren with ametropia, allowed them to be classified into three groups: hyperopia (4%), myopia (5%), and astigmatism of all types (91%). Regardless of the type of ametropia, the degree of severity was mild in 88%. The results of this survey have highlighted the importance of the teachers' contribution to eye care education in the struggle against refractive errors within the school environment, as well as helping to efficiently plan actions against ametropia. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  7. The Surveillance Error Grid

    PubMed Central

    Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B.; Kirkman, M. Sue; Kovatchev, Boris

    2014-01-01

    Introduction: Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. Methods: A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. Results: SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to the data plotted on the CEG and PEG produced risk estimates that were more granular and reflective of a continuously increasing risk scale. Discussion: The SEG is a modern metric for clinical risk assessments of BG monitor errors that assigns a unique risk score to each monitor data point when compared to a reference value. The SEG allows the clinical accuracy of a BG monitor to be portrayed in many ways, including as the percentages of data points falling into custom-defined risk zones. For modeled data the SEG, compared with the CEG and PEG, allows greater precision for quantifying risk, especially when the risks are low. This tool will be useful to allow regulators and manufacturers to monitor and evaluate glucose monitor performance in their surveillance programs. PMID:25562886

  8. Simplified methods for computing total sediment discharge with the modified Einstein procedure

    USGS Publications Warehouse

    Colby, Bruce R.; Hubbell, David Wellington

    1961-01-01

    A procedure was presented in 1950 by H. A. Einstein for computing the total discharge of sediment particles of sizes that are in appreciable quantities in the stream bed. This procedure was modified by the U.S. Geological Survey and adapted to computing the total sediment discharge of a stream on the basis of samples of bed sediment, depth-integrated samples of suspended sediment, streamflow measurements, and water temperature. This paper gives simplified methods for computing total sediment discharge by the modified Einstein procedure. Each of four nomographs appreciably simplifies a major step in the computations. Within the stated limitations, use of the nomographs introduces much less error than is present in either the basic data or the theories on which the computations of total sediment discharge are based. The results are nearly as accurate mathematically as those that could be obtained from the longer and more complex arithmetic and algebraic computations of the Einstein procedure.

  9. The surveillance error grid.

    PubMed

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to the data plotted on the CEG and PEG produced risk estimates that were more granular and reflective of a continuously increasing risk scale. The SEG is a modern metric for clinical risk assessments of BG monitor errors that assigns a unique risk score to each monitor data point when compared to a reference value. The SEG allows the clinical accuracy of a BG monitor to be portrayed in many ways, including as the percentages of data points falling into custom-defined risk zones. For modeled data the SEG, compared with the CEG and PEG, allows greater precision for quantifying risk, especially when the risks are low. This tool will be useful to allow regulators and manufacturers to monitor and evaluate glucose monitor performance in their surveillance programs. © 2014 Diabetes Technology Society.

  10. Policies on documentation and disciplinary action in hospital pharmacies after a medication error.

    PubMed

    Bauman, A N; Pedersen, C A; Schommer, J C; Griffith, N L

    2001-06-15

    Hospital pharmacies were surveyed about policies on medication error documentation and actions taken against pharmacists involved in an error. The survey was mailed to 500 randomly selected hospital pharmacy directors in the United States. Data were collected on the existence of medication error reporting policies, what types of errors were documented and how, and hospital demographics. The response rate was 28%. Virtually all of the hospitals had policies and procedures for medication error reporting. Most commonly, documentation of oral and written reprimand was placed in the personnel file of a pharmacist involved in an error. One sixth of respondents had no policy on documentation or disciplinary action in the event of an error. Approximately one fourth of respondents reported that suspension or termination had been used as a form of disciplinary action; legal action was rarely used. Many respondents said errors that caused harm (42%) or death (40%) to the patient were documented in the personnel file, but 34% of hospitals did not document errors in the personnel file regardless of error type. Nearly three fourths of respondents differentiated between errors caught and not caught before a medication leaves the pharmacy and between errors caught and not caught before administration to the patient. More emphasis is needed on documentation of medication errors in hospital pharmacies.

  11. An International Survey of Industrial Applications of Formal Methods. Volume 2. Case Studies

    DTIC Science & Technology

    1993-09-30

    Excerpts from the case studies note that error rates were claimed to be below the industrial average and that errors were minimal to fix.

  12. Oriented Scintillation Spectrometer Experiment (OSSE). Revision A. Volume 1

    DTIC Science & Technology

    1988-05-19

    Table-of-contents fragments only: system-level environmental tests, proof model structure tests (including the proof model modal survey), alignment error budgets for the field of view and rotation axis, and proof model static load tests.

  13. Flux Sampling Errors for Aircraft and Towers

    NASA Technical Reports Server (NTRS)

    Mahrt, Larry

    1998-01-01

    Various errors and influences leading to differences between tower- and aircraft-measured fluxes are surveyed. This survey is motivated by reports in the literature that aircraft fluxes are sometimes smaller than tower-measured fluxes. Both tower and aircraft flux errors are larger with surface heterogeneity due to several independent effects. Surface heterogeneity may cause tower flux errors to increase with decreasing wind speed. Techniques to assess flux sampling error are reviewed. Such error estimates suffer various degrees of inapplicability in real geophysical time series due to nonstationarity of tower time series (or inhomogeneity of aircraft data). A new measure for nonstationarity is developed that eliminates assumptions on the form of the nonstationarity inherent in previous methods. When this nonstationarity measure becomes large, the surface energy imbalance increases sharply. Finally, strategies for obtaining adequate flux sampling using repeated aircraft passes and grid patterns are outlined.

  14. Quantifying acoustic doppler current profiler discharge uncertainty: A Monte Carlo based tool for moving-boat measurements

    USGS Publications Warehouse

    Mueller, David S.

    2017-01-01

    This paper presents a method using Monte Carlo simulations for assessing uncertainty of moving-boat acoustic Doppler current profiler (ADCP) discharge measurements using a software tool known as QUant, which was developed for this purpose. Analysis was performed on 10 data sets from four Water Survey of Canada gauging stations in order to evaluate the relative contribution of a range of error sources to the total estimated uncertainty. The factors that differed among data sets included the fraction of unmeasured discharge relative to the total discharge, flow nonuniformity, and operator decisions about instrument programming and measurement cross section. As anticipated, it was found that the estimated uncertainty is dominated by uncertainty of the discharge in the unmeasured areas, highlighting the importance of appropriate selection of the site, the instrument, and the user inputs required to estimate the unmeasured discharge. The main contributor to uncertainty was invalid data, but spatial inhomogeneity in water velocity and bottom-track velocity also contributed, as did variation in the edge velocity, uncertainty in the edge distances, edge coefficients, and the top and bottom extrapolation methods. To a lesser extent, spatial inhomogeneity in the bottom depth also contributed to the total uncertainty, as did uncertainty in the ADCP draft at shallow sites. The estimated uncertainties from QUant can be used to assess the adequacy of standard operating procedures. They also provide quantitative feedback to the ADCP operators about the quality of their measurements, indicating which parameters are contributing most to uncertainty, and perhaps even highlighting ways in which uncertainty can be reduced. Additionally, QUant can be used to account for self-dependent error sources such as heading errors, which are a function of heading. The results demonstrate the importance of a Monte Carlo method tool such as QUant for quantifying random and bias errors when evaluating the uncertainty of moving-boat ADCP measurements.
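
    The core idea of the Monte Carlo approach can be shown in a few lines: perturb the input quantities with assumed error distributions, recompute the total discharge many times, and summarize the spread. The sketch below is only an illustration of that idea, not the QUant algorithm, and the discharge values, error magnitudes, and function names are invented.

```python
import random
import statistics

def monte_carlo_discharge(measured_q, unmeasured_fraction, sigma_measured, sigma_unmeasured, n=20_000):
    """
    Minimal Monte Carlo uncertainty sketch: perturb the measured portion of the discharge and the
    estimated unmeasured portion (edges, top, bottom) with assumed relative errors, then report the
    mean total discharge and its coefficient of variation across the replicates.
    """
    unmeasured_q = measured_q * unmeasured_fraction / (1.0 - unmeasured_fraction)
    totals = []
    for _ in range(n):
        q_meas = measured_q * (1.0 + random.gauss(0.0, sigma_measured))
        q_unmeas = unmeasured_q * (1.0 + random.gauss(0.0, sigma_unmeasured))
        totals.append(q_meas + q_unmeas)
    mean_q = statistics.mean(totals)
    return mean_q, 100.0 * statistics.stdev(totals) / mean_q

random.seed(0)
mean_q, cv = monte_carlo_discharge(measured_q=85.0, unmeasured_fraction=0.35,
                                    sigma_measured=0.01, sigma_unmeasured=0.08)
print(f"Total discharge ~ {mean_q:.1f} m^3/s, expanded (2-sigma) uncertainty ~ {2 * cv:.1f}%")
```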

  15. Usual intake of added sugars and lipid profiles among the U.S. adolescents: National Health and Nutrition Examination Survey, 2005-2010.

    PubMed

    Zhang, Zefeng; Gillespie, Cathleen; Welsh, Jean A; Hu, Frank B; Yang, Quanhe

    2015-03-01

    Although studies suggest that higher consumption of added sugars is associated with cardiovascular risk factors in adolescents, none have adjusted for measurement errors or examined its association with the risk of dyslipidemia. We analyzed data of 4,047 adolescents aged 12-19 years from the 2005-2010 National Health and Nutrition Examination Survey, a nationally representative, cross-sectional survey. We estimated the usual percentage of calories (%kcal) from added sugars using up to two 24-hour dietary recalls and the National Cancer Institute method to account for measurement error. The average usual %kcal from added sugars was 16.0%. Most adolescents (88.0%) had usual intake of ≥10% of total energy, and 5.5% had usual intake of ≥25% of total energy. After adjustment for potential confounders, usual %kcal from added sugars was inversely associated with high-density lipoprotein (HDL) and positively associated with triglycerides (TGs), TG-to-HDL ratio, and total cholesterol (TC) to HDL ratio. Comparing the lowest and highest quintiles of intake, HDLs were 49.5 (95% confidence interval [CI], 47.4-51.6) and 46.4 mg/dL (95% CI, 45.2-47.6; p = .009), TGs were 85.6 (95% CI, 75.5-95.6) and 101.2 mg/dL (95% CI, 88.7-113.8; p = .037), TG to HDL ratios were 2.28 (95% CI, 1.84-2.70) and 2.73 (95% CI, 2.11-3.32; p = .017), and TC to HDL ratios were 3.41 (95% CI, 3.03-3.79) and 3.70 (95% CI, 3.24-4.15; p = .028), respectively. Comparing the highest and lowest quintiles of intake, adjusted odds ratio of dyslipidemia was 1.41 (95% CI, 1.01-1.95). The patterns were consistent across sex, race/ethnicity, and body mass index subgroups. No association was found for TC, low-density lipoprotein, and non-HDL cholesterol. Most U.S. adolescents consumed more added sugars than recommended for heart health. Usual intake of added sugars was significantly associated with several measures of lipid profiles. Published by Elsevier Inc.

  16. Undergraduate medical students' perceptions and intentions regarding patient safety during clinical clerkship.

    PubMed

    Lee, Hoo-Yeon; Hahm, Myung-Il; Lee, Sang Gyu

    2018-04-04

    The purpose of this study was to examine undergraduate medical students' perceptions and intentions regarding patient safety during clinical clerkships. This was a cross-sectional study administered in face-to-face interviews using a modified version of the Medical Student Safety Attitudes and Professionalism Survey (MSSAPS) at three colleges of medicine in Korea. We assessed medical students' perceptions of the cultures ('safety', 'teamwork', and 'error disclosure'), 'behavioural intentions' concerning patient safety issues, and 'overall patient safety'. Confirmatory factor analysis and Spearman's correlation analyses were performed. In total, 194 (91.9%) of the 211 third-year undergraduate students participated. 78% of medical students reported that the quality of care received by patients was impacted by teamwork during clinical rotations. Regarding error disclosure, positive scores ranged from 10% to 74%. Except for one question asking whether the disclosure of medical errors was an important component of patient safety (74%), the percentages of positive scores for all the other questions were below 20%. 41.2% of medical students intended to disclose a medical error if they saw one committed by another team member. Many students had difficulty speaking up about medical errors. Error disclosure guidelines and educational efforts aimed at developing sophisticated communication skills are needed. This study may serve as a reference for other institutions planning patient safety education in their curricula. Assessing student perceptions of safety culture can provide clerkship directors and clinical service chiefs with information that enhances the educational environment and promotes patient safety.

  17. Global magnitude of visual impairment caused by uncorrected refractive errors in 2004

    PubMed Central

    Pascolini, Donatella; Mariotti, Silvio P; Pokharel, Gopal P

    2008-01-01

    Abstract Estimates of the prevalence of visual impairment caused by uncorrected refractive errors in 2004 have been determined at regional and global levels for people aged 5 years and over from recent published and unpublished surveys. The estimates were based on the prevalence of visual acuity of less than 6/18 in the better eye with the currently available refractive correction that could be improved to equal to or better than 6/18 by refraction or pinhole. A total of 153 million people (range of uncertainty: 123 million to 184 million) are estimated to be visually impaired from uncorrected refractive errors, of whom eight million are blind. This cause of visual impairment has been overlooked in previous estimates that were based on best-corrected vision. Combined with the 161 million people visually impaired estimated in 2002 according to best-corrected vision, 314 million people are visually impaired from all causes: uncorrected refractive errors become the main cause of low vision and the second cause of blindness. Uncorrected refractive errors can hamper performance at school, reduce employability and productivity, and generally impair quality of life. Yet the correction of refractive errors with appropriate spectacles is among the most cost-effective interventions in eye health care. The results presented in this paper help to unearth a formerly hidden problem of public health dimensions and promote policy development and implementation, programmatic decision-making and corrective interventions, as well as stimulate research. PMID:18235892

  18. Statistical analysis of modeling error in structural dynamic systems

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, J. D.

    1990-01-01

    The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.

  19. Computer-aided field editing in DHS: the Turkey experiment.

    PubMed

    1995-01-01

    A study comparing field editing using a Notebook computer, computer-aided field editing (CAFE), with that done manually in the standard manner, during the 1993 Demographic and Health Survey (DHS) in Turkey, demonstrated that there was less missing data and a lower mean number of errors for teams using CAFE. 6 of 13 teams used CAFE in the Turkey experiment; the computers were equipped with Integrated System for Survey Analysis (ISSA) software for editing the DHS questionnaires. The CAFE teams completed 2466 out of 8619 household questionnaires and 1886 out of 6649 individual questionnaires. The CAFE team editor entered data into the computer and marked any detected errors on the questionnaire; the errors were then corrected by the editor, in the field, based on other responses in the questionnaire, or on corrections made by the interviewer to which the questionnaire was returned. Errors in questionnaires edited manually are not identified until they are sent to the survey office for data processing, when it is too late to ask for clarification from respondents. There was one area where the error rate was higher for CAFE teams; the CAFE editors paid less attention to errors presented as warnings only.

  20. National survey on internal quality control for tumour markers in clinical laboratories in China.

    PubMed

    Wang, Wei; Zhong, Kun; Yuan, Shuai; He, Falin; Du, Yuxuan; Hu, Zhehui; Wang, Zhiguo

    2018-06-15

    This survey was initiated to obtain knowledge of the current state of internal quality control (IQC) practice for tumour markers (TMs) in China and to identify the most appropriate quality specifications. It was a current-status survey in which IQC information was collected via online questionnaires. All 1821 clinical laboratories that participated in the 2016 TMs external quality assessment (EQA) programme were enrolled. The imprecision evaluation criteria were the minimal, desirable, and optimal allowable imprecisions based on biological variation, together with 1/3 total allowable error (TEa) and 1/4 TEa. A total of 1628 laboratories answered the questionnaires (89%). The coefficients of variation (CVs) of the IQC of participating laboratories varied greatly, from 1% (5th percentile) to 13% (95th percentile). For both types of CVs, more than 82% (82-91%) of participating laboratories met 1/3 TEa, except for CA 19-9. The percentiles of current CVs were smaller than those of cumulative CVs. A total of 1240 laboratories (76%) reported the measurement principles and systems used. Electrochemiluminescence was the most commonly used principle (45%) and had the smallest CVs. The performance of laboratories for TMs IQC has yet to be improved. On the basis of the obtained results, 1/3 TEa would be a realistic and attainable quality specification for TMs IQC in clinical laboratories in China.

  1. Testing the Recognition and Perception of Errors in Context

    ERIC Educational Resources Information Center

    Brandenburg, Laura C.

    2015-01-01

    This study tests the recognition of errors in context and whether the presence of errors affects the reader's perception of the writer's ethos. In an experimental, posttest only design, participants were randomly assigned a memo to read in an online survey: one version with errors and one version without. Of the six intentional errors in version…

  2. A circadian rhythm in skill-based errors in aviation maintenance.

    PubMed

    Hobbs, Alan; Williamson, Ann; Van Dongen, Hans P A

    2010-07-01

    In workplaces where activity continues around the clock, human error has been observed to exhibit a circadian rhythm, with a characteristic peak in the early hours of the morning. Errors are commonly distinguished by the nature of the underlying cognitive failure, particularly the level of intentionality involved in the erroneous action. The Skill-Rule-Knowledge (SRK) framework of Rasmussen is used widely in the study of industrial errors and accidents. The SRK framework describes three fundamental types of error, according to whether behavior is under the control of practiced sensori-motor skill routines with minimal conscious awareness; is guided by implicit or explicit rules or expertise; or where the planning of actions requires the conscious application of domain knowledge. Up to now, examinations of circadian patterns of industrial errors have not distinguished between different types of error. Consequently, it is not clear whether all types of error exhibit the same circadian rhythm. A survey was distributed to aircraft maintenance personnel in Australia. Personnel were invited to anonymously report a safety incident and were prompted to describe, in detail, the human involvement (if any) that contributed to it. A total of 402 airline maintenance personnel reported an incident, providing 369 descriptions of human error in which the time of the incident was reported and sufficient detail was available to analyze the error. Errors were categorized using a modified version of the SRK framework, in which errors are categorized as skill-based, rule-based, or knowledge-based, or as procedure violations. An independent check confirmed that the SRK framework had been applied with sufficient consistency and reliability. Skill-based errors were the most common form of error, followed by procedure violations, rule-based errors, and knowledge-based errors. The frequency of errors was adjusted for the estimated proportion of workers present at work during each hour of the day, and the 24 h pattern of each error type was examined. Skill-based errors exhibited a significant circadian rhythm, being most prevalent in the early hours of the morning. Variation in the frequency of rule-based errors, knowledge-based errors, and procedure violations over the 24 h did not reach statistical significance. The results suggest that during the early hours of the morning, maintenance technicians are at heightened risk of "absent minded" errors involving failures to execute action plans as intended.

  3. Comparing errors in Medicaid reporting across surveys: evidence to date.

    PubMed

    Call, Kathleen T; Davern, Michael E; Klerman, Jacob A; Lynch, Victoria

    2013-04-01

    To synthesize evidence on the accuracy of Medicaid reporting across state and federal surveys. All available validation studies. Compare results from existing research to understand variation in reporting across surveys. Synthesize all available studies validating survey reports of Medicaid coverage. Across all surveys, reporting some type of insurance coverage is better than reporting Medicaid specifically. Therefore, estimates of uninsurance are less biased than estimates of specific sources of coverage. The CPS stands out as being particularly inaccurate. Measuring health insurance coverage is prone to some level of error, yet survey overstatements of uninsurance are modest in most surveys. Accounting for all forms of bias is complex. Researchers should consider adjusting estimates of Medicaid and uninsurance in surveys prone to high levels of misreporting. © Health Research and Educational Trust.

  4. Methods for increasing cooperation rates for surveys of family forest owners

    Treesearch

    Brett J. Butler; Jaketon H. Hewes; Mary L. Tyrrell; Sarah M. Butler

    2016-01-01

    To maximize the representativeness of results from surveys, coverage, sampling, nonresponse, measurement, and analysis errors must be minimized. Although not a cure-all, one approach for mitigating nonresponse errors is to maximize cooperation rates. In this study, personalizing mailings, token financial incentives, and the use of real stamps were tested for their...

  5. Are Divorce Studies Trustworthy? The Effects of Survey Nonresponse and Response Errors

    ERIC Educational Resources Information Center

    Mitchell, Colter

    2010-01-01

    Researchers rely on relationship data to measure the multifaceted nature of families. This article speaks to relationship data quality by examining the ramifications of different types of error on divorce estimates, models predicting divorce behavior, and models employing divorce as a predictor. Comparing matched survey and divorce certificate…

  6. The Calibration and error analysis of Shallow water (less than 100m) Multibeam Echo-Sounding System

    NASA Astrophysics Data System (ADS)

    Lin, M.

    2016-12-01

    Multibeam echo-sounders (MBES) have been developed to gather bathymetric and acoustic data for more efficient and more exact mapping of the oceans. This gain in efficiency does not come without drawbacks: the finer the resolution of remote sensing instruments, the harder they are to calibrate. This is the case for multibeam echo-sounding systems (MBES). We are no longer dealing with sounding lines between which the bathymetry must be interpolated to produce consistent representations of the seafloor; we now need to match together strips (swaths) of totally ensonified seabed. As a consequence, misalignment and time-lag problems emerge as artifacts in the bathymetry from adjacent or overlapping swaths, particularly when operating in shallow water. More importantly, one must still verify that bathymetric data meet the accuracy requirements. This paper aims to summarize the system integration involved with MBES and to identify the various sources of error pertaining to shallow water surveys (100 m and less). A systematic method for the calibration of shallow water MBES is proposed and presented as a set of field procedures. The procedures aim at detecting, quantifying, and correcting systematic instrumental and installation errors; calibrating for variations of the speed of sound in the water column, which are natural in origin, is not addressed in this document. The data used in calibration are compared against International Hydrographic Organization (IHO) and other related standards. The aim is to build a model for the study area that can calibrate the errors attributable to the instruments: a patch-test procedure is constructed to identify the possible sources of error in the sounding data and to calculate the correction values needed to compensate for them. In practice, the problem to be solved is the four patch-test corrections in the Hypack system (1. roll, 2. GPS latency, 3. pitch, 4. yaw). Because these four corrections affect each other, each survey line is run separately for calibration; the GPS latency correction synchronizes the GPS with the echo sounder. With this procedure, future studies of shallower portions of an area can obtain more accurate sounding values and support more detailed research.

  7. Comparison of Low Cost Photogrammetric Survey with TLS and Leica Pegasus Backpack 3D Models

    NASA Astrophysics Data System (ADS)

    Masiero, A.; Fissore, F.; Guarnieri, A.; Piragnolo, M.; Vettore, A.

    2017-11-01

    This paper considers Leica backpack and photogrammetric surveys of a mediaeval bastion in Padua, Italy. Furthermore, a terrestrial laser scanning (TLS) survey is considered in order to provide a state-of-the-art reconstruction of the bastion. Although control points are typically used to avoid deformations in photogrammetric surveys and to ensure correct scaling of the reconstruction, a different approach is considered in this paper: this work is part of a project aiming at the development of a system exploiting ultra-wide band (UWB) devices to provide correct scaling of the reconstruction. In particular, low cost Pozyx UWB devices are used to estimate camera positions during image acquisitions. Then, in order to obtain a metric reconstruction, the scale factor of the photogrammetric survey is estimated by comparing camera positions obtained from UWB measurements with those obtained from the photogrammetric reconstruction. Compared with the TLS survey, the considered photogrammetric model of the bastion results in an RMSE of 21.9 cm, an average error of 13.4 cm, and a standard deviation of 13.5 cm. Excluding the final part of the bastion left wing, where the presence of several poles makes reconstruction more difficult, the RMSE fitting error is 17.3 cm, the average error 11.5 cm, and the standard deviation 9.5 cm. Comparison of the Leica backpack and TLS surveys instead leads to an average error of 4.7 cm and a standard deviation of 0.6 cm (4.2 cm and 0.3 cm, respectively, when excluding the final part of the left wing).
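
    The scaling step described above can be thought of as estimating a single scale factor that relates the unscaled photogrammetric camera positions to the metric UWB positions, after which model-to-reference differences are summarized by RMSE, mean error, and standard deviation. The following is a minimal sketch under those assumptions, not the authors' implementation.

```python
# Hedged sketch (not the authors' implementation): estimate the metric scale of an
# unscaled photogrammetric reconstruction from UWB-derived camera positions, then
# summarize point-to-model differences with RMSE, mean error and standard deviation.
import numpy as np

def estimate_scale(uwb_positions, sfm_positions):
    """Scale factor from the ratio of RMS spreads about the centroids (rotation-invariant)."""
    uwb_c = uwb_positions - uwb_positions.mean(axis=0)
    sfm_c = sfm_positions - sfm_positions.mean(axis=0)
    return np.sqrt((uwb_c ** 2).sum() / (sfm_c ** 2).sum())

def error_stats(residuals):
    """RMSE, mean absolute error and standard deviation of point-to-model residuals."""
    rmse = np.sqrt((residuals ** 2).mean())
    return rmse, np.abs(residuals).mean(), residuals.std()

# Toy example: recover the scale relating synthetic SfM camera positions to UWB positions.
rng = np.random.default_rng(0)
uwb = rng.uniform(0, 30, size=(12, 3))          # metric camera positions from UWB (m)
sfm = uwb / 7.5 + rng.normal(0, 0.02, (12, 3))  # same cameras in arbitrary model units
print("estimated scale:", estimate_scale(uwb, sfm))  # close to 7.5
```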

  8. Integrated Sachs-Wolfe map reconstruction in the presence of systematic errors

    NASA Astrophysics Data System (ADS)

    Weaverdyck, Noah; Muir, Jessica; Huterer, Dragan

    2018-02-01

    The decay of gravitational potentials in the presence of dark energy leads to an additional, late-time contribution to anisotropies in the cosmic microwave background (CMB) at large angular scales. The imprint of this so-called integrated Sachs-Wolfe (ISW) effect to the CMB angular power spectrum has been detected and studied in detail, but reconstructing its spatial contributions to the CMB map, which would offer the tantalizing possibility of separating the early- from the late-time contributions to CMB temperature fluctuations, is more challenging. Here, we study the technique for reconstructing the ISW map based on information from galaxy surveys and focus in particular on how its accuracy is impacted by the presence of photometric calibration errors in input galaxy maps, which were previously found to be a dominant contaminant for ISW signal estimation. We find that both including tomographic information from a single survey and using data from multiple, complementary galaxy surveys improve the reconstruction by mitigating the impact of spurious power contributions from calibration errors. A high-fidelity reconstruction further requires one to account for the contribution of calibration errors to the observed galaxy power spectrum in the model used to construct the ISW estimator. We find that if the photometric calibration errors in galaxy surveys can be independently controlled at the level required to obtain unbiased dark energy constraints, then it is possible to reconstruct ISW maps with excellent accuracy using a combination of maps from two galaxy surveys with properties similar to Euclid and SPHEREx.

  9. SU-E-T-192: FMEA Severity Scores - Do We Really Know?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tonigan, J; Johnson, J; Kry, S

    2014-06-01

    Purpose: Failure modes and effects analysis (FMEA) is a subjective risk mitigation technique that has not been applied to physics-specific quality management practices. There is a need for quantitative FMEA data as called for in the literature. This work focuses specifically on quantifying FMEA severity scores for physics components of IMRT delivery and comparing them to subjective scores. Methods: Eleven physical failure modes (FMs) for head and neck IMRT dose calculation and delivery are examined near commonly accepted tolerance criteria levels. Phantom treatment planning studies and dosimetry measurements (requiring decommissioning in several cases) are performed to determine the magnitude of dose delivery errors for the FMs (i.e., severity of the FM). Resultant quantitative severity scores are compared to FMEA scores obtained through an international survey and focus group studies. Results: Physical measurements for six FMs have resulted in significant PTV dose errors of up to 4.3% as well as significant distance-to-agreement errors of close to 1 mm between PTV and OAR. Of the 129 survey responses, the vast majority of the responders used Varian machines with Pinnacle and Eclipse planning systems. The average years of experience was 17, yet familiarity with FMEA was less than expected. The survey shows that the perceived magnitude of dose delivery errors varies widely, in some cases with a 50% difference in the expected dose delivery error amongst respondents. Substantial variance is also seen for all FMs in the occurrence, detectability, and severity scores assigned, with average variance values of 5.5, 4.6, and 2.2, respectively. For the MLC positional FM (2 mm), the survey shows an average expected dose error of 7.6% (range 0–50%) compared to the 2% error seen in measurement. An analysis of the rankings in the survey, the treatment planning studies, and the quantitative value comparison will be presented. Conclusion: The resultant quantitative severity scores will expand the utility of FMEA for radiotherapy and verify the accuracy of FMEA results compared to the highly variable subjective scores.

  10. Quality assuring HIV point of care testing using whole blood samples.

    PubMed

    Dare-Smith, Raellene; Badrick, Tony; Cunningham, Philip; Kesson, Alison; Badman, Susan

    2016-08-01

    The Royal College of Pathologists Australasia Quality Assurance Programs (RCPAQAP), have offered dedicated external quality assurance (EQA) for HIV point of care testing (PoCT) since 2011. Prior to this, EQA for these tests was available within the comprehensive human immunodeficiency virus (HIV) module. EQA testing for HIV has typically involved the supply of serum or plasma, while in the clinic or community based settings HIV PoCT is generally performed using whole blood obtained by capillary finger-stick collection. RCPAQAP has offered EQA for HIV PoCT using stabilised whole blood since 2014. A total of eight surveys have been undertaken over a period of 2 years from 2014 to 2015. Of the 962 responses received, the overall consensus rate was found to be 98% (941/962). A total of 21 errors were detected. The majority of errors were attributable to false reactive HIV p24 antigen results (9/21, 43%), followed by false reactive HIV antibody results (8/21, 38%). There were 4/21 (19%) false negative HIV antibody results and no false negative HIV p24 antigen results reported. Overall performance was observed to vary minimally between surveys, from a low of 94% up to 99% concordant. Encouraging levels of testing proficiency for HIV PoCT are indicated by these data, but they also confirm the need for HIV PoCT sites to participate in external quality assurance programs to ensure the ongoing provision of high quality patient care. Copyright © 2016 Royal College of Pathologists of Australasia. All rights reserved.

  11. Using a Delphi Method to Identify Human Factors Contributing to Nursing Errors.

    PubMed

    Roth, Cheryl; Brewer, Melanie; Wieck, K Lynn

    2017-07-01

    The purpose of this study was to identify human factors associated with nursing errors. Using a Delphi technique, this study used feedback from a panel of nurse experts (n = 25) on an initial qualitative survey questionnaire followed by summarizing the results with feedback and confirmation. Synthesized factors regarding causes of errors were incorporated into a quantitative Likert-type scale, and the original expert panel participants were queried a second time to validate responses. The list identified 24 items as most common causes of nursing errors, including swamping and errors made by others that nurses are expected to recognize and fix. The responses provided a consensus top 10 errors list based on means with heavy workload and fatigue at the top of the list. The use of the Delphi survey established consensus and developed a platform upon which future study of nursing errors can evolve as a link to future solutions. This list of human factors in nursing errors should serve to stimulate dialogue among nurses about how to prevent errors and improve outcomes. Human and system failures have been the subject of an abundance of research, yet nursing errors continue to occur. © 2016 Wiley Periodicals, Inc.

  12. Evaluation of the performance of a micromethod for measuring urinary iodine by using six sigma quality metrics.

    PubMed

    Hussain, Husniza; Khalid, Norhayati Mustafa; Selamat, Rusidah; Wan Nazaimoon, Wan Mohamud

    2013-09-01

    The urinary iodine micromethod (UIMM) is a modification of the conventional method and its performance needs evaluation. UIMM performance was evaluated using the method validation and 2008 Iodine Deficiency Disorders survey data obtained from four urinary iodine (UI) laboratories. Method acceptability tests and Sigma quality metrics were determined using total allowable errors (TEas) set by two external quality assurance (EQA) providers. UIMM obeyed various method acceptability test criteria with some discrepancies at low concentrations. Method validation data calculated against the UI Quality Program (TUIQP) TEas showed that the Sigma metrics were at 2.75, 1.80, and 3.80 for 51±15.50 µg/L, 108±32.40 µg/L, and 149±38.60 µg/L UI, respectively. External quality control (EQC) data showed that the performance of the laboratories was within Sigma metrics of 0.85-1.12, 1.57-4.36, and 1.46-4.98 at 46.91±7.05 µg/L, 135.14±13.53 µg/L, and 238.58±17.90 µg/L, respectively. No laboratory showed a calculated total error (TEcalc)
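
    For context, the six sigma quality metric used in such evaluations is commonly computed as sigma = (TEa - |bias|) / CV, with all quantities expressed in percent. A minimal sketch follows; the numbers are placeholders, not data from this study.

```python
# Hedged sketch of the common six sigma quality metric used in laboratory medicine:
# sigma = (TEa - |bias|) / CV, with TEa, bias and CV all expressed in percent.
def sigma_metric(tea_percent, bias_percent, cv_percent):
    if cv_percent <= 0:
        raise ValueError("CV must be positive")
    return (tea_percent - abs(bias_percent)) / cv_percent

# Illustrative placeholder values only (not results from the urinary iodine study).
print(round(sigma_metric(tea_percent=30.0, bias_percent=5.0, cv_percent=9.0), 2))  # 2.78
```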

  13. Status and trends in the Lake Superior fish community, 2014

    USGS Publications Warehouse

    Vinson, Mark; Evrard, Lori M.; Gorman, Owen T.; Yule, Daniel

    2015-01-01

    In 2014, the Lake Superior fish community was sampled with daytime bottom trawls at 73 nearshore and 30 offshore stations. Spring and summer water temperatures were the coldest measured for the period of record for the surveys. In the nearshore zone, a total of 15,372 individuals from 28 species or morphotypes were collected. Nearshore lakewide mean biomass was 6.9 kg/ha, which was higher than that observed in the past few years, but below the long-term average of 9.2 kg/ha. In the offshore zone, a total of 12,462 individuals from 11 species were collected lakewide. Offshore lakewide mean biomass was 6.6 kg/ha. The mean of the three previous years was 8.6 kg/ha. We collected larval Coregonus in surface trawls at 94 locations and estimated a lakewide average density of 577 fish/ha with a total lakewide population estimate of 14.2 billion (standard error ± 30 million).

  14. Associations between communication climate and the frequency of medical error reporting among pharmacists within an inpatient setting.

    PubMed

    Patterson, Mark E; Pace, Heather A; Fincham, Jack E

    2013-09-01

    Although error-reporting systems enable hospitals to accurately track safety climate through the identification of adverse events, these systems may be underused within a work climate of poor communication. The objective of this analysis is to identify the extent to which perceived communication climate among hospital pharmacists impacts medical error reporting rates. This cross-sectional study used survey responses from more than 5000 pharmacists responding to the 2010 Hospital Survey on Patient Safety Culture (HSOPSC). Two composite scores were constructed for "communication openness" and "feedback and communication about error," respectively. Error reporting frequency was defined from the survey question, "In the past 12 months, how many event reports have you filled out and submitted?" Multivariable logistic regressions were used to estimate the likelihood of medical error reporting conditional upon communication openness or feedback levels, controlling for pharmacist years of experience, hospital geographic region, and ownership status. Pharmacists with higher communication openness scores compared with lower scores were 40% more likely to have filed or submitted a medical error report in the past 12 months (OR, 1.4; 95% CI, 1.1-1.7; P = 0.004). In contrast, pharmacists with higher communication feedback scores were not any more likely than those with lower scores to have filed or submitted a medical error report in the past 12 months (OR, 1.0; 95% CI, 0.8-1.3; P = 0.97). Hospital work climates that encourage pharmacists to communicate freely about problems related to patient safety are conducive to medical error reporting. The presence of feedback infrastructures about error may not be sufficient to induce error-reporting behavior.

  15. Comparing Errors in Medicaid Reporting across Surveys: Evidence to Date

    PubMed Central

    Call, Kathleen T; Davern, Michael E; Klerman, Jacob A; Lynch, Victoria

    2013-01-01

    Objective To synthesize evidence on the accuracy of Medicaid reporting across state and federal surveys. Data Sources All available validation studies. Study Design Compare results from existing research to understand variation in reporting across surveys. Data Collection Methods Synthesize all available studies validating survey reports of Medicaid coverage. Principal Findings Across all surveys, reporting some type of insurance coverage is better than reporting Medicaid specifically. Therefore, estimates of uninsurance are less biased than estimates of specific sources of coverage. The CPS stands out as being particularly inaccurate. Conclusions Measuring health insurance coverage is prone to some level of error, yet survey overstatements of uninsurance are modest in most surveys. Accounting for all forms of bias is complex. Researchers should consider adjusting estimates of Medicaid and uninsurance in surveys prone to high levels of misreporting. PMID:22816493

  16. A constant altitude flight survey method for mapping atmospheric ambient pressures and systematic radar errors

    NASA Technical Reports Server (NTRS)

    Larson, T. J.; Ehernberger, L. J.

    1985-01-01

    The flight test technique described uses controlled survey runs to determine horizontal atmospheric pressure variations and systematic altitude errors that result from space positioning measurements. The survey data can be used not only for improved air data calibrations, but also for studies of atmospheric structure and space positioning accuracy performance. The examples presented cover a wide range of radar tracking conditions for both subsonic and supersonic flight to an altitude of 42,000 ft.

  17. The effect of photometric redshift uncertainties on galaxy clustering and baryonic acoustic oscillations

    NASA Astrophysics Data System (ADS)

    Chaves-Montero, Jonás; Angulo, Raúl E.; Hernández-Monteagudo, Carlos

    2018-07-01

    In the upcoming era of high-precision galaxy surveys, it becomes necessary to understand the impact of redshift uncertainties on cosmological observables. In this paper we explore the effect of sub-percent photometric redshift errors (photo-z errors) on galaxy clustering and baryonic acoustic oscillations (BAOs). Using analytic expressions and results from 1000 N-body simulations, we show how photo-z errors modify the amplitude of moments of the 2D power spectrum, their variances, the amplitude of BAOs, and the cosmological information in them. We find that (a) photo-z errors suppress the clustering on small scales, increasing the relative importance of shot noise, and thus reducing the interval of scales available for BAO analyses; (b) photo-z errors decrease the smearing of BAOs due to non-linear redshift-space distortions (RSDs) by giving less weight to line-of-sight modes; and (c) photo-z errors (and small-scale RSD) induce a scale dependence on the information encoded in the BAO scale, and that reduces the constraining power on the Hubble parameter. Using these findings, we propose a template that extracts unbiased cosmological information from samples with photo-z errors with respect to cases without them. Finally, we provide analytic expressions to forecast the precision in measuring the BAO scale, showing that spectro-photometric surveys will measure the expansion history of the Universe with a precision competitive to that of spectroscopic surveys.

  18. The effect of photometric redshift uncertainties on galaxy clustering and baryonic acoustic oscillations

    NASA Astrophysics Data System (ADS)

    Chaves-Montero, Jonás; Angulo, Raúl E.; Hernández-Monteagudo, Carlos

    2018-04-01

    In the upcoming era of high-precision galaxy surveys, it becomes necessary to understand the impact of redshift uncertainties on cosmological observables. In this paper we explore the effect of sub-percent photometric redshift errors (photo-z errors) on galaxy clustering and baryonic acoustic oscillations (BAO). Using analytic expressions and results from 1 000 N-body simulations, we show how photo-z errors modify the amplitude of moments of the 2D power spectrum, their variances, the amplitude of BAO, and the cosmological information in them. We find that: a) photo-z errors suppress the clustering on small scales, increasing the relative importance of shot noise, and thus reducing the interval of scales available for BAO analyses; b) photo-z errors decrease the smearing of BAO due to non-linear redshift-space distortions (RSD) by giving less weight to line-of-sight modes; and c) photo-z errors (and small-scale RSD) induce a scale dependence on the information encoded in the BAO scale, and that reduces the constraining power on the Hubble parameter. Using these findings, we propose a template that extracts unbiased cosmological information from samples with photo-z errors with respect to cases without them. Finally, we provide analytic expressions to forecast the precision in measuring the BAO scale, showing that spectro-photometric surveys will measure the expansion history of the Universe with a precision competitive to that of spectroscopic surveys.

  19. Survey and Method for Determination of Trajectory Predictor Requirements

    NASA Technical Reports Server (NTRS)

    Rentas, Tamika L.; Green, Steven M.; Cate, Karen Tung

    2009-01-01

    A survey of air-traffic-management researchers, representing a broad range of automation applications, was conducted to document trajectory-predictor requirements for future decision-support systems. Results indicated that the researchers were unable to articulate a basic set of trajectory-prediction requirements for their automation concepts. Survey responses showed the need to establish a process to help developers determine the trajectory-predictor-performance requirements for their concepts. Two methods for determining trajectory-predictor requirements are introduced. A fast-time simulation method is discussed that captures the sensitivity of a concept to the performance of its trajectory-prediction capability. A characterization method is proposed to provide quicker, yet less precise results, based on analysis and simulation to characterize the trajectory-prediction errors associated with key modeling options for a specific concept. Concept developers can then identify the relative sizes of errors associated with key modeling options, and qualitatively determine which options lead to significant errors. The characterization method is demonstrated for a case study involving future airport surface traffic management automation. Of the top four sources of error, results indicated that the error associated with accelerations to and from turn speeds was unacceptable, the error associated with the turn path model was acceptable, and the error associated with taxi-speed estimation was of concern and needed a higher fidelity concept simulation to obtain a more precise result.

  20. Feedback controlled optics with wavefront compensation

    NASA Technical Reports Server (NTRS)

    Breckenridge, William G. (Inventor); Redding, David C. (Inventor)

    1993-01-01

    The sensitivity model of a complex optical system obtained by linear ray tracing is used to compute a control gain matrix by imposing the mathematical condition for minimizing the total wavefront error at the optical system's exit pupil. The most recent deformations or error states of the controlled segments or optical surfaces of the system are then assembled as an error vector, and the error vector is transformed by the control gain matrix to produce the exact control variables which will minimize the total wavefront error at the exit pupil of the optical system. These exact control variables are then applied to the actuators controlling the various optical surfaces in the system causing the immediate reduction in total wavefront error observed at the exit pupil of the optical system.
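
    A highly simplified sketch of such a linear wavefront-control update is given below; the least-squares gain and the matrix shapes are illustrative assumptions based on the description above, not the patented implementation.

```python
# Hedged sketch of a linear wavefront-control update: a gain matrix derived from the
# optical sensitivity model maps the assembled error state to actuator commands that
# minimize the residual wavefront error at the exit pupil in a least-squares sense.
import numpy as np

def control_gain(sensitivity):
    """Least-squares gain, i.e. the pseudo-inverse of the sensitivity model A."""
    return np.linalg.pinv(sensitivity)

def control_update(gain, error_vector):
    """Actuator commands that counteract the current error state."""
    return -gain @ error_vector

# Toy example with an assumed 6-mode wavefront and 3 actuators.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 3))   # sensitivity: wavefront response per unit actuator command
e = rng.normal(size=6)        # assembled wavefront error state at the exit pupil
u = control_update(control_gain(A), e)
print("residual wavefront RMS:", np.linalg.norm(e + A @ u) / np.sqrt(e.size))
```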

  1. The SAMI Galaxy Survey: can we trust aperture corrections to predict star formation?

    NASA Astrophysics Data System (ADS)

    Richards, S. N.; Bryant, J. J.; Croom, S. M.; Hopkins, A. M.; Schaefer, A. L.; Bland-Hawthorn, J.; Allen, J. T.; Brough, S.; Cecil, G.; Cortese, L.; Fogarty, L. M. R.; Gunawardhana, M. L. P.; Goodwin, M.; Green, A. W.; Ho, I.-T.; Kewley, L. J.; Konstantopoulos, I. S.; Lawrence, J. S.; Lorente, N. P. F.; Medling, A. M.; Owers, M. S.; Sharp, R.; Sweet, S. M.; Taylor, E. N.

    2016-01-01

    In the low-redshift Universe (z < 0.3), our view of galaxy evolution is primarily based on fibre optic spectroscopy surveys. Elaborate methods have been developed to address aperture effects when fixed aperture sizes only probe the inner regions for galaxies of ever decreasing redshift or increasing physical size. These aperture corrections rely on assumptions about the physical properties of galaxies. The adequacy of these aperture corrections can be tested with integral-field spectroscopic data. We use integral-field spectra drawn from 1212 galaxies observed as part of the SAMI Galaxy Survey to investigate the validity of two aperture correction methods that attempt to estimate a galaxy's total instantaneous star formation rate. We show that biases arise when assuming that instantaneous star formation is traced by broad-band imaging, and when the aperture correction is built only from spectra of the nuclear region of galaxies. These biases may be significant depending on the selection criteria of a survey sample. Understanding the sensitivities of these aperture corrections is essential for correct handling of systematic errors in galaxy evolution studies.

  2. Experience with the ULISS-30 inertial survey system for local geodetic and cadastral network control

    NASA Astrophysics Data System (ADS)

    Forsberg, Rene

    1991-09-01

    The capability of the recently developed SAGEM ULISS-30 inertial survey system for performing local surveys at high accuracies has been tested in a field campaign carried out in November 1989 on the island of Fyn, Denmark, in cooperation with the Swedish National Land Survey. In the test, a number of lines between existing national geodetic control points were surveyed, along with points in the less reliably determined cadastral network, forming an irregular network pattern of 10-15 km extent. The survey involved frequent offset measurements (up to 50-100 m) with an ISS-integrated total station. The profile geometries were not particularly suited for inertial surveys, with narrow and rather winding roads necessitating frequent vehicle turns. In addition to the pure inertial surveys, a kinematic GPS/inertial test was also carried out, using a pair of Ashtech L-XII receivers. The inertial survey results, analyzed with a smoothing algorithm utilizing common points on forward/backward runs, indicate that 5-cm accuracies are possible on reasonably straight profiles of 5 km length, corresponding to a 10 ppm “best-case” accuracy for double-run traverses. On longer, more winding traverses, error levels of 10-20 cm are typical. To handle the inertial data optimally, proper network adjustments are required. A discussion of suitable adjustment models of both conventional and collocation type is included in the paper.

  3. 45 CFR 265.7 - How will we determine if the State is meeting the quarterly reporting requirements?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... computational errors and are internally consistent (e.g., items that should add to totals do so); (3) The State... from computational errors and are internally consistent (e.g., items that should add to totals do so... from computational errors and are internally consistent (e.g., items that should add to totals do so...

  4. Structural Validation of a French Food Frequency Questionnaire of 94 Items.

    PubMed

    Gazan, Rozenn; Vieux, Florent; Darmon, Nicole; Maillot, Matthieu

    2017-01-01

    Food frequency questionnaires (FFQs) are used to estimate the usual food and nutrient intakes over a period of time. Such estimates can suffer from measurement errors, either due to bias induced by respondents' answers or to errors induced by the structure of the questionnaire (e.g., using a limited number of food items and an aggregated food database with average portion sizes). The "structural validation" presented in this study aims to isolate and quantify the impact of the inherent structure of a FFQ on the estimation of food and nutrient intakes, independently of respondents' perception of the questionnaire. A semi-quantitative FFQ (n = 94 items, including 50 items with questions on portion sizes) and an associated aggregated food composition database (named the item-composition database) were developed, based on the self-reported weekly dietary records of 1918 adults (18-79 years old) in the French Individual and National Dietary Survey 2 (INCA2), and the French CIQUAL 2013 food-composition database of all the foods (n = 1342 foods) declared as consumed in the population. Reference intakes of foods ("REF_FOOD") and nutrients ("REF_NUT") were calculated for each adult using the food-composition database and the amounts of foods self-reported in his/her dietary record. Then, answers to the FFQ were simulated for each adult based on his/her self-reported dietary record. "FFQ_FOOD" and "FFQ_NUT" intakes were estimated using the simulated answers and the item-composition database. Measurement errors (in %), Spearman correlations, and cross-classification were used to compare "REF_FOOD" with "FFQ_FOOD" and "REF_NUT" with "FFQ_NUT". Compared to "REF_NUT", the "FFQ_NUT" total quantity and total energy intake were underestimated on average by 198 g/day and 666 kJ/day, respectively. "FFQ_FOOD" intakes were well estimated for starches, underestimated for most of the subgroups, and overestimated for some subgroups, in particular vegetables. Underestimation was mainly due to the use of average portion sizes, leading to an underestimation of most nutrients, except free sugars, which were overestimated. The "structural validation" approach of simulating answers to a FFQ based on a reference dietary survey is innovative and pragmatic and allows the error induced by the simplification of the collection method to be quantified.

  5. Patient Safety Culture and the Second Victim Phenomenon: Connecting Culture to Staff Distress in Nurses.

    PubMed

    Quillivan, Rebecca R; Burlison, Jonathan D; Browne, Emily K; Scott, Susan D; Hoffman, James M

    2016-08-01

    Second victim experiences can affect the wellbeing of health care providers and compromise patient safety. Many factors associated with improved coping after patient safety event involvement are also components of a strong patient safety culture, so that supportive patient safety cultures may reduce second victim-related trauma. A cross-sectional survey study was conducted to assess the influence of patient safety culture on second victim-related distress. The Agency for Healthcare Research and Quality (AHRQ) Hospital Survey on Patient Safety Culture (HSOPSC) and the Second Victim Experience and Support Tool (SVEST), which was developed to assess organizational support and personal and professional distress after involvement in a patient safety event, were administered to nurses involved in direct patient care. Of 358 nurses at a specialized pediatric hospital, 169 (47.2%) completed both surveys. Hierarchical linear regression demonstrated that the patient safety culture survey dimension nonpunitive response to error was significantly associated with reductions in the second victim survey dimensions psychological, physical, and professional distress (p < 0.001). As a mediator, organizational support fully explained the nonpunitive response to error-physical distress and nonpunitive response to error-professional distress relationships and partially explained the nonpunitive response to error-psychological distress relationship. The results suggest that punitive safety cultures may contribute to self-reported perceptions of second victim-related psychological, physical, and professional distress, which could reflect a lack of organizational support. Reducing punitive response to error and encouraging supportive coworker, supervisor, and institutional interactions may be useful strategies to manage the severity of second victim experiences.

  6. Poster - 49: Assessment of Synchrony respiratory compensation error for CyberKnife liver treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Ming; Cygler,

    The goal of this work is to quantify respiratory motion compensation errors for liver tumor patients treated by the CyberKnife system with Synchrony tracking, to identify patients with the smallest tracking errors, and to eventually help coach patients' breathing patterns to minimize dose delivery errors. The accuracy of CyberKnife Synchrony respiratory motion compensation was assessed for 37 patients treated for liver lesions by analyzing data from system logfiles. A predictive model is used to modulate the direction of individual beams during dose delivery based on the positions of internally implanted fiducials determined using an orthogonal x-ray imaging system and the current location of LED external markers. For each x-ray pair acquired, system logfiles report the prediction error, the difference between the measured and predicted fiducial positions, and the delivery error, which is an estimate of the statistical error in the model overcoming the latency between x-ray acquisition and robotic repositioning. The total error was calculated at the time of each x-ray pair, for the number of treatment fractions and the number of patients, giving the average respiratory motion compensation error in three dimensions. The 99th percentile for the total radial error is 3.85 mm, with the highest contribution of 2.79 mm in the superior/inferior (S/I) direction. The absolute mean compensation error is 1.78 mm radially with a 1.27 mm contribution in the S/I direction. Regions of high total error may provide insight into features predicting groups of patients with larger or smaller total errors.

  7. The Trojan Lifetime Champions Health Survey: Development, Validity, and Reliability

    PubMed Central

    Sorenson, Shawn C.; Romano, Russell; Scholefield, Robin M.; Schroeder, E. Todd; Azen, Stanley P.; Salem, George J.

    2015-01-01

    Context Self-report questionnaires are an important method of evaluating lifespan health, exercise, and health-related quality of life (HRQL) outcomes among elite, competitive athletes. Few instruments, however, have undergone formal characterization of their psychometric properties within this population. Objective To evaluate the validity and reliability of a novel health and exercise questionnaire, the Trojan Lifetime Champions (TLC) Health Survey. Design Descriptive laboratory study. Setting A large National Collegiate Athletic Association Division I university. Patients or Other Participants A total of 63 university alumni (age range, 24 to 84 years), including former varsity collegiate athletes and a control group of nonathletes. Intervention(s) Participants completed the TLC Health Survey twice at a mean interval of 23 days with randomization to the paper or electronic version of the instrument. Main Outcome Measure(s) Content validity, feasibility of administration, test-retest reliability, parallel-form reliability between paper and electronic forms, and estimates of systematic and typical error versus differences of clinical interest were assessed across a broad range of health, exercise, and HRQL measures. Results Correlation coefficients, including intraclass correlation coefficients (ICCs) for continuous variables and κ agreement statistics for ordinal variables, for test-retest reliability averaged 0.86, 0.90, 0.80, and 0.74 for HRQL, lifetime health, recent health, and exercise variables, respectively. Correlation coefficients, again ICCs and κ, for parallel-form reliability (ie, equivalence) between paper and electronic versions averaged 0.90, 0.85, 0.85, and 0.81 for HRQL, lifetime health, recent health, and exercise variables, respectively. Typical measurement error was less than the a priori thresholds of clinical interest, and we found minimal evidence of systematic test-retest error. We found strong evidence of content validity, convergent construct validity with the Short-Form 12 Version 2 HRQL instrument, and feasibility of administration in an elite, competitive athletic population. Conclusions These data suggest that the TLC Health Survey is a valid and reliable instrument for assessing lifetime and recent health, exercise, and HRQL, among elite competitive athletes. Generalizability of the instrument may be enhanced by additional, larger-scale studies in diverse populations. PMID:25611315

  8. Long-term care physical environments--effect on medication errors.

    PubMed

    Mahmood, Atiya; Chaudhury, Habib; Gaumont, Alana; Rust, Tiana

    2012-01-01

    Few studies examine physical environmental factors and their effects on staff health, effectiveness, work errors and job satisfaction. To address this gap, this study aims to examine environmental features and their role in medication and nursing errors in long-term care facilities. A mixed methodological strategy was used. Data were collected via focus groups, observing medication preparation and administration, and a nursing staff survey in four facilities. The paper reveals that, during the medication preparation phase, physical design, such as medication room layout, is a major source of potential errors. During medication administration, social environment is more likely to contribute to errors. Interruptions, noise and staff shortages were particular problems. The survey's relatively small sample size needs to be considered when interpreting the findings. Also, actual error data could not be included as existing records were incomplete. The study offers several relatively low-cost recommendations to help staff reduce medication errors. Physical environmental factors are important when addressing measures to reduce errors. The findings of this study underscore the fact that the physical environment's influence on the possibility of medication errors is often neglected. This study contributes to the scarce empirical literature examining the relationship between physical design and patient safety.

  9. Last Year Your Answer Was… : The Impact of Dependent Interviewing Wording and Survey Factors on Reporting of Change

    ERIC Educational Resources Information Center

    Al Baghal, Tarek

    2017-01-01

    Prior studies suggest memories are potentially error prone. Proactive dependent interviewing (PDI) is a possible method to reduce errors in reports of change in longitudinal studies, reminding respondents of previous answers while asking if there has been any change since the last survey. However, little research has been conducted on the impact…

  10. Accuracy assessment of the global TanDEM-X Digital Elevation Model with GPS data

    NASA Astrophysics Data System (ADS)

    Wessel, Birgit; Huber, Martin; Wohlfart, Christian; Marschalk, Ursula; Kosmann, Detlev; Roth, Achim

    2018-05-01

    The primary goal of the German TanDEM-X mission is the generation of a highly accurate and global Digital Elevation Model (DEM) with global accuracies of at least 10 m absolute height error (linear 90% error). The global TanDEM-X DEM acquired with single-pass SAR interferometry was finished in September 2016. This paper provides a unique accuracy assessment of the final TanDEM-X global DEM using two different GPS point reference data sets, which are distributed across all continents, to fully characterize the absolute height error. Firstly, the absolute vertical accuracy is examined by about three million globally distributed kinematic GPS (KGPS) points derived from 19 KGPS tracks covering a total length of about 66,000 km. Secondly, a comparison is performed with more than 23,000 "GPS on Bench Marks" (GPS-on-BM) points provided by the US National Geodetic Survey (NGS) scattered across 14 different land cover types of the US National Land Cover Database (NLCD). Both GPS comparisons prove an absolute vertical mean error of the TanDEM-X DEM smaller than ±0.20 m, a Root Mean Square Error (RMSE) smaller than 1.4 m, and an excellent absolute 90% linear height error below 2 m. The RMSE values are sensitive to land cover types. For low vegetation the RMSE is ±1.1 m, whereas it is slightly higher for developed areas (±1.4 m) and for forests (±1.8 m). This validation confirms an outstanding absolute height error at the 90% confidence level of the global TanDEM-X DEM, outperforming the requirement by a factor of five. Due to its extensive and globally distributed reference data sets, this study is of considerable interest for scientific and commercial applications.
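
    As an illustration of the accuracy measures quoted above, the short sketch below computes the mean error, RMSE, and 90% linear height error (LE90) from a set of DEM-minus-GPS height differences; the input data are synthetic placeholders, not TanDEM-X results.

```python
# Hedged sketch: vertical accuracy metrics of a DEM against GPS reference heights.
# Inputs are height differences (DEM minus GPS) in metres; the values are synthetic.
import numpy as np

def vertical_accuracy(dh):
    return {
        "mean_error_m": dh.mean(),                # bias
        "rmse_m": np.sqrt((dh ** 2).mean()),      # root mean square error
        "le90_m": np.percentile(np.abs(dh), 90),  # 90% linear height error
    }

rng = np.random.default_rng(1)
dh = rng.normal(loc=0.1, scale=1.2, size=10_000)  # synthetic DEM-minus-GPS differences
print(vertical_accuracy(dh))
```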

  11. Population-based survey of refractive error among school-aged children in rural northern China: the Heilongjiang eye study.

    PubMed

    Li, Zhijian; Xu, Keke; Wu, Shubin; Lv, Jia; Jin, Di; Song, Zhen; Wang, Zhongliang; Liu, Ping

    2014-01-01

    The prevalence of refractive error in the north of China is unknown. The study aimed to estimate the prevalence and associated factors of refractive error in school-aged children in a rural area of northern China. Cross-sectional study. The cluster random sampling method was used to select the sample. A total of 1700 subjects of 5 to 18 years of age were examined. All participants underwent ophthalmic evaluation. Refraction was performed under cycloplegia. Association of refractive errors with age, sex, and education was analysed. The main outcome measures were the prevalence rates of refractive error among school-aged children. Of the 1700 responders, 1675 were eligible. The prevalence of uncorrected, presenting, and best-corrected visual acuity of 20/40 or worse in the better eye was 6.3%, 3.0%, and 1.2%, respectively. The prevalence of myopia was 5.0% (84/1675, 95% CI, 4.8%-5.4%) and of hyperopia was 1.6% (27/1675, 95% CI, 1.0%-2.2%). Astigmatism was evident in 2.0% of the subjects. Myopia increased with increasing age, whereas hyperopia and astigmatism were associated with younger age. Myopia, hyperopia, and astigmatism were more common in females. We also found that the prevalence of refractive error was associated with education. Myopia and astigmatism were more common in those with higher degrees of education. This report has provided details of the refractive status in a rural school-aged population. Although the prevalence of refractive errors is relatively low in this population, the unmet need for spectacle correction remains a significant challenge for refractive eye-care services. © 2013 Royal Australian and New Zealand College of Ophthalmologists.

  12. Height-Error Analysis for the FAA-Air Force Replacement Radar Program (FARR)

    DTIC Science & Technology

    1991-08-01

    Figure 1-7. Climatology errors by month: percent frequency table of error by month (January through December).

  13. Investigation of misfiled cases in the PACS environment and a solution to prevent filing errors for chest radiographs.

    PubMed

    Morishita, Junji; Watanabe, Hideyuki; Katsuragawa, Shigehiko; Oda, Nobuhiro; Sukenobu, Yoshiharu; Okazaki, Hiroko; Nakata, Hajime; Doi, Kunio

    2005-01-01

    The aim of the study was to survey misfiled cases in a picture archiving and communication system environment at two hospitals and to demonstrate the potential usefulness of an automated patient recognition method for posteroanterior chest radiographs based on a template-matching technique designed to prevent filing errors. We surveyed misfiled cases obtained from different modalities in one hospital for 25 months, and misfiled cases of chest radiographs in another hospital for 17 months. For investigating the usefulness of an automated patient recognition and identification method for chest radiographs, a prospective study has been completed in clinical settings at the latter hospital. The total numbers of misfiled cases for different modalities in one hospital and for chest radiographs in another hospital were 327 and 22, respectively. The misfiled cases in the two hospitals were mainly the result of human errors (eg, incorrect manual entries of patient information, incorrect usage of identification cards in which an identification card for the previous patient was used for the next patient's image acquisition). The prospective study indicated the usefulness of the computerized method for discovering misfiled cases with a high performance (ie, an 86.4% correct warning rate for different patients and 1.5% incorrect warning rate for the same patients). We confirmed the occurrence of misfiled cases in the two hospitals. The automated patient recognition and identification method for chest radiographs would be useful in preventing wrong images from being stored in the picture archiving and communication system environment.

  14. Modelling vertical error in LiDAR-derived digital elevation models

    NASA Astrophysics Data System (ADS)

    Aguilar, Fernando J.; Mills, Jon P.; Delgado, Jorge; Aguilar, Manuel A.; Negreiros, J. G.; Pérez, José L.

    2010-01-01

    A hybrid theoretical-empirical model has been developed for modelling the error in LiDAR-derived digital elevation models (DEMs) of non-open terrain. The theoretical component seeks to model the propagation of the sample data error (SDE), i.e. the error from light detection and ranging (LiDAR) data capture of ground sampled points in open terrain, towards interpolated points. The interpolation methods used for infilling gaps may produce a non-negligible error that is referred to as gridding error. In this case, interpolation is performed using an inverse distance weighting (IDW) method with the local support of the five closest neighbours, although it would be possible to utilize other interpolation methods. The empirical component refers to what is known as "information loss". This is the error purely due to modelling the continuous terrain surface from only a discrete number of points plus the error arising from the interpolation process. The SDE must be previously calculated from a suitable number of check points located in open terrain and assumes that the LiDAR point density was sufficiently high to neglect the gridding error. For model calibration, data for 29 study sites, 200×200 m in size, belonging to different areas around Almeria province, south-east Spain, were acquired by means of stereo photogrammetric methods. The developed methodology was validated against two different LiDAR datasets. The first dataset used was an Ordnance Survey (OS) LiDAR survey carried out over a region of Bristol in the UK. The second dataset was an area located at Gador mountain range, south of Almería province, Spain. Both terrain slope and sampling density were incorporated in the empirical component through the calibration phase, resulting in a very good agreement between predicted and observed data (R2 = 0.9856 ; p < 0.001). In validation, Bristol observed vertical errors, corresponding to different LiDAR point densities, offered a reasonably good fit to the predicted errors. Even better results were achieved in the more rugged morphology of the Gador mountain range dataset. The findings presented in this article could be used as a guide for the selection of appropriate operational parameters (essentially point density in order to optimize survey cost), in projects related to LiDAR survey in non-open terrain, for instance those projects dealing with forestry applications.
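
    As a point of reference for the gridding step described above, a minimal sketch of inverse distance weighting with the five closest neighbours follows; it is a generic illustration, not the authors' code.

```python
# Hedged sketch: inverse distance weighting (IDW) of ground heights using the five
# closest sample points, mirroring the gridding step described in the abstract.
import numpy as np
from scipy.spatial import cKDTree

def idw_interpolate(xy_samples, z_samples, xy_targets, k=5, power=2.0):
    """Interpolate heights at xy_targets from scattered (xy_samples, z_samples)."""
    dist, idx = cKDTree(xy_samples).query(xy_targets, k=k)
    dist = np.maximum(dist, 1e-12)                 # guard against zero distances
    weights = 1.0 / dist ** power
    weights /= weights.sum(axis=1, keepdims=True)
    return (weights * z_samples[idx]).sum(axis=1)

# Tiny synthetic example on a 200 m x 200 m tile.
rng = np.random.default_rng(2)
pts = rng.uniform(0, 200, size=(500, 2))
z = np.sin(pts[:, 0] / 50.0) * 10.0 + 0.05 * pts[:, 1]
print(idw_interpolate(pts, z, np.array([[100.0, 100.0], [20.0, 180.0]])))
```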

  15. The association between frequency of self-reported medical errors and anesthesia trainee supervision: a survey of United States anesthesiology residents-in-training.

    PubMed

    De Oliveira, Gildasio S; Rahmani, Rod; Fitzgerald, Paul C; Chang, Ray; McCarthy, Robert J

    2013-04-01

    Poor supervision of physician trainees can be detrimental not only to resident education but also to patient care and safety. Inadequate supervision has been associated with more frequent deaths of patients under the care of junior residents. We hypothesized that residents reporting more medical errors would also report lower quality of supervision scores than those reporting fewer medical errors. The primary objective of this study was to evaluate the association between the frequency of medical errors reported by residents and their perceived quality of faculty supervision. A cross-sectional nationwide survey was sent to 1000 residents randomly selected from anesthesiology training departments across the United States. Residents from 122 residency programs were invited to participate; the median (interquartile range) number per institution was 7 (4-11). Participants were asked to complete a survey assessing demography, perceived quality of faculty supervision, and perceived causes of inadequate supervision. Responses to the statements "I perform procedures for which I am not properly trained," "I make mistakes that have negative consequences for the patient," and "I have made a medication error (drug or incorrect dose) in the last year" were used to assess error rates. Average supervision scores were determined using the De Oliveira Filho et al. scale and compared among the frequency of self-reported error categories using the Kruskal-Wallis test. Six hundred four residents responded to the survey (60.4%). Forty-five (7.5%) of the respondents reported performing procedures for which they were not properly trained, 24 (4%) reported having made mistakes with negative consequences to patients, and 16 (3%) reported medication errors in the last year having occurred multiple times or often. Supervision scores were inversely correlated with the frequency of reported errors for all 3 questions evaluating errors. At a cutoff value of 3, supervision scores demonstrated an overall accuracy (area under the curve) (99% confidence interval) of 0.81 (0.73-0.86), 0.89 (0.77-0.95), and 0.93 (0.77-0.98) for predicting a response of multiple times or often to the questions of performing procedures for which they were not properly trained, mistakes with negative consequences to patients, and medication errors in the last year, respectively. Anesthesiology trainees who reported a greater incidence of medical errors with negative consequences to patients and drug errors also reported lower scores for supervision by faculty. Our findings suggest that further studies of the association between supervision and patient safety are warranted. (Anesth Analg 2013;116:892-7).

  16. Quantifying and reducing statistical uncertainty in sample-based health program costing studies in low- and middle-income countries.

    PubMed

    Rivera-Rodriguez, Claudia L; Resch, Stephen; Haneuse, Sebastien

    2018-01-01

    In many low- and middle-income countries, the costs of delivering public health programs such as for HIV/AIDS, nutrition, and immunization are not routinely tracked. A number of recent studies have sought to estimate program costs on the basis of detailed information collected on a subsample of facilities. While unbiased estimates can be obtained via accurate measurement and appropriate analyses, they are subject to statistical uncertainty. Quantification of this uncertainty, for example, via standard errors and/or 95% confidence intervals, provides important contextual information for decision-makers and for the design of future costing studies. While other forms of uncertainty, such as that due to model misspecification, are considered and can be investigated through sensitivity analyses, statistical uncertainty is often not reported in studies estimating the total program costs. This may be due to a lack of awareness/understanding of (1) the technical details regarding uncertainty estimation and (2) the availability of software with which to calculate uncertainty for estimators resulting from complex surveys. We provide an overview of statistical uncertainty in the context of complex costing surveys, emphasizing the various potential specific sources that contribute to overall uncertainty. We describe how analysts can compute measures of uncertainty, either via appropriately derived formulae or through resampling techniques such as the bootstrap. We also provide an overview of calibration as a means of using additional auxiliary information that is readily available for the entire program, such as the total number of doses administered, to decrease uncertainty and thereby improve decision-making and the planning of future studies. A recent study of the national program for routine immunization in Honduras shows that uncertainty can be reduced by using information available prior to the study. This method can not only be used when estimating the total cost of delivering established health programs but also to decrease uncertainty when the interest lies in assessing the incremental effect of an intervention. Measures of statistical uncertainty associated with survey-based estimates of program costs, such as standard errors and 95% confidence intervals, provide important contextual information for health policy decision-making and key inputs for the design of future costing studies. Such measures are often not reported, possibly because of technical challenges associated with their calculation and a lack of awareness of appropriate software. Modern statistical analysis methods for survey data, such as calibration, provide a means to exploit additional information that is readily available but was not used in the design of the study to significantly improve the estimation of total cost through the reduction of statistical uncertainty.
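
    To make the resampling idea concrete, the sketch below computes a bootstrap standard error and 95% confidence interval for a total-cost estimate scaled up from a facility subsample; the cost figures and the simple expansion estimator are illustrative assumptions, not data or methods from the Honduras study.

```python
# Hedged sketch: bootstrap standard error and 95% CI for a total programme cost
# estimated from a simple random subsample of facilities (illustrative data only).
import numpy as np

def bootstrap_total_cost(sample_costs, n_facilities_total, n_boot=5000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(sample_costs)
    # Simple expansion estimator: scale the resampled mean up to all facilities.
    totals = np.array([
        n_facilities_total * rng.choice(sample_costs, size=n, replace=True).mean()
        for _ in range(n_boot)
    ])
    point = n_facilities_total * np.mean(sample_costs)
    return point, totals.std(ddof=1), np.percentile(totals, [2.5, 97.5])

costs = np.array([12_400, 9_800, 15_100, 11_250, 13_900, 8_700, 10_500])  # USD per sampled facility
point, se, ci = bootstrap_total_cost(costs, n_facilities_total=120)
print(f"total cost ~ {point:,.0f} USD, bootstrap SE {se:,.0f}, 95% CI {ci}")
```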

  17. Quantifying and reducing statistical uncertainty in sample-based health program costing studies in low- and middle-income countries

    PubMed Central

    Resch, Stephen

    2018-01-01

    Objectives: In many low- and middle-income countries, the costs of delivering public health programs such as for HIV/AIDS, nutrition, and immunization are not routinely tracked. A number of recent studies have sought to estimate program costs on the basis of detailed information collected on a subsample of facilities. While unbiased estimates can be obtained via accurate measurement and appropriate analyses, they are subject to statistical uncertainty. Quantification of this uncertainty, for example, via standard errors and/or 95% confidence intervals, provides important contextual information for decision-makers and for the design of future costing studies. While other forms of uncertainty, such as that due to model misspecification, are considered and can be investigated through sensitivity analyses, statistical uncertainty is often not reported in studies estimating the total program costs. This may be due to a lack of awareness/understanding of (1) the technical details regarding uncertainty estimation and (2) the availability of software with which to calculate uncertainty for estimators resulting from complex surveys. We provide an overview of statistical uncertainty in the context of complex costing surveys, emphasizing the various potential specific sources that contribute to overall uncertainty. Methods: We describe how analysts can compute measures of uncertainty, either via appropriately derived formulae or through resampling techniques such as the bootstrap. We also provide an overview of calibration as a means of using additional auxiliary information that is readily available for the entire program, such as the total number of doses administered, to decrease uncertainty and thereby improve decision-making and the planning of future studies. Results: A recent study of the national program for routine immunization in Honduras shows that uncertainty can be reduced by using information available prior to the study. This method can not only be used when estimating the total cost of delivering established health programs but also to decrease uncertainty when the interest lies in assessing the incremental effect of an intervention. Conclusion: Measures of statistical uncertainty associated with survey-based estimates of program costs, such as standard errors and 95% confidence intervals, provide important contextual information for health policy decision-making and key inputs for the design of future costing studies. Such measures are often not reported, possibly because of technical challenges associated with their calculation and a lack of awareness of appropriate software. Modern statistical analysis methods for survey data, such as calibration, provide a means to exploit additional information that is readily available but was not used in the design of the study to significantly improve the estimation of total cost through the reduction of statistical uncertainty. PMID:29636964

  18. Quantifying Variations in Airborne Gravity Data Quality Due to Aircraft Selection with the Gravity for the Re-Definition of the American Vertical Datum Project

    NASA Astrophysics Data System (ADS)

    Youngman, M.; Weil, C.; Salisbury, T.; Villarreal, C.

    2015-12-01

    The U.S. National Geodetic Survey is collecting airborne gravity with the Gravity for the Redefinition of the American Vertical Datum (GRAV-D) project to produce a geoid supporting heights accurate to 2 centimeters, where possible, with a modernized U.S. vertical datum in 2022. Targeting 15.6 million square kilometers, the GRAV-D project is unprecedented in its scope of consistently collected airborne gravity data across the entire U.S. and its holdings. Currently, over 42% of data collection has been completed by 42 surveys (field campaigns) covering 34 completed blocks (data collection areas). The large amount of data available offers a unique opportunity to evaluate the causes of data quality variation from survey to survey. Two metrics were chosen as a basis for comparing the quality of each survey/block: (1) total crossover error (i.e., the difference in gravity recorded at all locations where flight lines cross) and (2) the statistical difference of the airborne gravity from the EGM2008 global model. We have determined that the aircraft used for surveying contributes significantly to the variation in data quality. This paper will further expand upon that recent work, using statistical analysis to determine the contribution of aircraft selection to data quality while taking into account other variables such as differences in survey setup or weather conditions during surveying.
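
    The first metric is easy to state concretely: at each point where two flight lines cross, the gravity values recorded by the two lines are differenced, and the differences are summarized for the block. A minimal sketch with invented numbers:

      import numpy as np

      # Hypothetical gravity disturbances (mGal) recorded by the two flight lines
      # that intersect at each of six crossover points in a survey block.
      line_a = np.array([12.4, -3.1, 7.8, 0.5, 22.0, -9.7])
      line_b = np.array([13.0, -2.6, 7.1, 1.4, 21.2, -10.3])

      diff = line_a - line_b                      # crossover differences
      mean_diff = diff.mean()                     # bias between lines
      rms_error = np.sqrt(np.mean(diff ** 2))     # total crossover error for the block

      print(f"mean crossover difference: {mean_diff:+.2f} mGal, RMS crossover error: {rms_error:.2f} mGal")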

  19. Estimating the magnitude of peak flows for streams in Kentucky for selected recurrence intervals

    USGS Publications Warehouse

    Hodgkins, Glenn A.; Martin, Gary R.

    2003-01-01

    This report gives estimates of, and presents techniques for estimating, the magnitude of peak flows for streams in Kentucky for recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years. A flowchart in this report guides the user to the appropriate estimates and (or) estimating techniques for a site on a specific stream. Estimates of peak flows are given for 222 U.S. Geological Survey streamflow-gaging stations in Kentucky. In the development of the peak-flow estimates at gaging stations, a new generalized skew coefficient was calculated for the State. This single statewide value of 0.011 (with a standard error of prediction of 0.520) is more appropriate for Kentucky than the national skew isoline map in Bulletin 17B of the Interagency Advisory Committee on Water Data. Regression equations are presented for estimating the peak flows on ungaged, unregulated streams in rural drainage basins. The equations were developed by use of generalized-least-squares regression procedures at 187 U.S. Geological Survey gaging stations in Kentucky and 51 stations in surrounding States. Kentucky was divided into seven flood regions. Total drainage area is used in the final regression equations as the sole explanatory variable, except in Regions 1 and 4 where main-channel slope also was used. The smallest average standard errors of prediction were in Region 3 (from -13.1 to +15.0 percent) and the largest average standard errors of prediction were in Region 5 (from -37.6 to +60.3 percent). One section of this report describes techniques for estimating peak flows for ungaged sites on gaged, unregulated streams in rural drainage basins. Another section references two previous U.S. Geological Survey reports for peak-flow estimates on ungaged, unregulated, urban streams. Estimating peak flows at ungaged sites on regulated streams is beyond the scope of this report, because peak flows on regulated streams are dependent upon variable human activities.
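
    The regional regression equations referred to above are typically of the log-log form log10(Q_T) = a + b*log10(A), i.e., Q_T = 10^a * A^b with drainage area A as the explanatory variable. The sketch below evaluates such an equation for a hypothetical ungaged basin; the coefficients are invented for illustration and are not the values published in the report.

      def peak_flow(drainage_area_mi2, a, b):
          """Peak flow Q_T (ft^3/s) from a log-log regional regression: Q_T = 10**a * A**b."""
          return 10 ** a * drainage_area_mi2 ** b

      # Hypothetical coefficients for a 100-year flood in one region (not the report's values).
      a_100, b_100 = 2.6, 0.75
      area = 42.0                                  # drainage area of the ungaged basin, mi^2
      print(f"estimated Q100 for {area} mi^2: {peak_flow(area, a_100, b_100):,.0f} ft^3/s")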

  20. Tectonic motion site survey of the National Radio Astronomy Observatory, Green Bank, West Virginia

    NASA Technical Reports Server (NTRS)

    Webster, W. J., Jr.; Allenby, R. J.; Hutton, L. K.; Lowman, P. D., Jr.; Tiedemann, H. A.

    1979-01-01

    A geological and geophysical site survey was made of the area around the National Radio Astronomy Observatory (NRAO) to determine whether there are at present local tectonic movements that could introduce significant errors to Very Long Baseline Interferometry (VLBI) geodetic measurements. The site survey consisted of a literature search, photogeologic mapping with Landsat and Skylab photographs, a field reconnaissance, and installation of a seismometer at the NRAO. It is concluded that local tectonic movement will not contribute significantly to VLBI errors. It is recommended that similar site surveys be made of all locations used for VLBI or laser ranging.

  1. VizieR Online Data Catalog: ROSAT detected quasars. I. (Brinkmann+ 1997)

    NASA Astrophysics Data System (ADS)

    Brinkmann, W.; Yuan, W.

    1996-09-01

    We have compiled a sample of all quasars with measured radio emission from the Veron-Cetty - Veron catalogue (1993, VV93) detected by ROSAT in the All-Sky Survey (RASS, Voges 1992), as targets of pointed observations, or as serendipitous sources from pointed observations as publicly available from the ROSAT point source catalogue (ROSAT-SRC, Voges et al. 1995). The total number of ROSAT detected radio quasars from the above three sources is 654 objects; 69 of the objects are classified as radio-quiet using the defining line at a radio-loudness of 1.0, and 10 objects have no classification. The 5GHz data are from the 87GB radio survey, the NED database, or from the Veron-Cetty - Veron catalogue. The power law indices and their errors are estimated from the two hardness ratios given by the SASS assuming Galactic absorption. The X-ray flux densities in the ROSAT band (0.1-2.4keV) are calculated from the count rates using the energy to counts conversion factor for power law spectra and Galactic absorption. For the photon index we use the value obtained for an individual source if the estimated 1 sigma error is smaller than 0.5; otherwise, we use the mean value of 2.14. (1 data file).

  2. The 2.3 GHz continuum survey of the GEM project

    NASA Astrophysics Data System (ADS)

    Tello, C.; Villela, T.; Torres, S.; Bersanelli, M.; Smoot, G. F.; Ferreira, I. S.; Cingoz, A.; Lamb, J.; Barbosa, D.; Perez-Becker, D.; Ricciardi, S.; Currivan, J. A.; Platania, P.; Maino, D.

    2013-08-01

    Context. Determining the spectral and spatial characteristics of the radio continuum of our Galaxy is an experimentally challenging endeavour for improving our understanding of the astrophysics of the interstellar medium. This knowledge has also become of paramount significance for cosmology, since Galactic emission is the main source of astrophysical contamination in measurements of the cosmic microwave background (CMB) radiation on large angular scales. Aims: We present a partial-sky survey of the radio continuum at 2.3GHz within the scope of the Galactic Emission Mapping (GEM) project, an observational program conceived and developed to reveal the large-scale properties of Galactic synchrotron radiation through a set of self-consistent surveys of the radio continuum between 408MHz and 10GHz. Methods: The GEM experiment uses a portable and double-shielded 5.5-m radiotelescope in altazimuthal configuration to map 60-degree-wide declination bands from different observational sites by circularly scanning the sky at zenithal angles of 30° from a constantly rotating platform. The observations were accomplished with a total power receiver, whose front-end high electron mobility transistor (HEMT) amplifier was matched directly to a cylindrical horn at the prime focus of the parabolic reflector. The Moon was used to calibrate the antenna temperature scale and the preparation of the map required direct subtraction and destriping algorithms to remove ground contamination as the most significant source of systematic error. Results: We used 484 h of total intensity observations from two locations in Colombia and Brazil to yield 66% sky coverage from to . The observations in Colombia were obtained with a horizontal HPBW of and a vertical HPBW of . The pointing accuracy was and the RMS sensitivity was 11.42 mK. The observations in Brazil were obtained with a horizontal HPBW of and a vertical HPBW of . The pointing accuracy was and the RMS sensitivity was 8.24 mK. The zero-level uncertainty of the combined survey is 103mK with a temperature scale error of 5% after direct correlation with the Rhodes/HartRAO survey at 2326MHz on a T-T plot. Conclusions: The sky brightness distribution into regions of low and high emission in the GEM survey is consistent with the appearance of a transition region as seen in the Haslam 408MHz and WMAP K-band surveys. Preliminary results also show that the temperature spectral index between 408MHz and the 2.3GHz band of the GEM survey has a weak spatial correlation with these regions; but it steepens significantly from high to low emission regions with respect to the WMAP K-band survey. The survey is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/556/A1

  3. Reducing Uncertainty in the American Community Survey through Data-Driven Regionalization

    PubMed Central

    Spielman, Seth E.; Folch, David C.

    2015-01-01

    The American Community Survey (ACS) is the largest survey of US households and is the principal source for neighborhood scale information about the US population and economy. The ACS is used to allocate billions in federal spending and is a critical input to social scientific research in the US. However, estimates from the ACS can be highly unreliable. For example, in over 72% of census tracts, the estimated number of children under 5 in poverty has a margin of error greater than the estimate. Uncertainty of this magnitude complicates the use of social data in policy making, research, and governance. This article presents a heuristic spatial optimization algorithm that is capable of reducing the margins of error in survey data via the creation of new composite geographies, a process called regionalization. Regionalization is a complex combinatorial problem. Here rather than focusing on the technical aspects of regionalization we demonstrate how to use a purpose built open source regionalization algorithm to process survey data in order to reduce the margins of error to a user-specified threshold. PMID:25723176

  4. Reducing uncertainty in the american community survey through data-driven regionalization.

    PubMed

    Spielman, Seth E; Folch, David C

    2015-01-01

    The American Community Survey (ACS) is the largest survey of US households and is the principal source for neighborhood scale information about the US population and economy. The ACS is used to allocate billions in federal spending and is a critical input to social scientific research in the US. However, estimates from the ACS can be highly unreliable. For example, in over 72% of census tracts, the estimated number of children under 5 in poverty has a margin of error greater than the estimate. Uncertainty of this magnitude complicates the use of social data in policy making, research, and governance. This article presents a heuristic spatial optimization algorithm that is capable of reducing the margins of error in survey data via the creation of new composite geographies, a process called regionalization. Regionalization is a complex combinatorial problem. Here rather than focusing on the technical aspects of regionalization we demonstrate how to use a purpose built open source regionalization algorithm to process survey data in order to reduce the margins of error to a user-specified threshold.
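
    The margin-of-error arithmetic behind this kind of regionalization is simple: when non-overlapping tracts are merged, the Census Bureau's standard approximation combines their margins of error as the square root of the sum of squared MOEs. The sketch below, using hypothetical tract values, greedily merges tracts until the relative margin of error falls below a user-chosen threshold; a real regionalization algorithm such as the one described above would also enforce spatial contiguity and search over many possible groupings.

      import math

      # Hypothetical tract-level estimates and margins of error (children under 5 in poverty).
      tracts = [(120, 95), (80, 70), (150, 110), (60, 55), (200, 130)]   # (estimate, MOE)

      def combine(group):
          est = sum(e for e, _ in group)
          moe = math.sqrt(sum(m ** 2 for _, m in group))    # Census Bureau approximation for sums
          return est, moe

      threshold = 0.5            # target relative MOE (MOE / estimate) for a merged region
      regions, current = [], []
      for tract in tracts:
          current.append(tract)
          est, moe = combine(current)
          if est > 0 and moe / est <= threshold:
              regions.append((est, moe))
              current = []
      if current:                # leftover tracts that never reached the threshold
          regions.append(combine(current))

      for est, moe in regions:
          print(f"region estimate {est} +/- {moe:.0f} (relative MOE {moe / est:.2f})")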

  5. Calibrating photometric redshifts of luminous red galaxies

    DOE PAGES

    Padmanabhan, Nikhil; Budavari, Tamas; Schlegel, David J.; ...

    2005-05-01

    We discuss the construction of a photometric redshift catalogue of luminous red galaxies (LRGs) from the Sloan Digital Sky Survey (SDSS), emphasizing the principal steps necessary for constructing such a catalogue: (i) photometrically selecting the sample, (ii) measuring photometric redshifts and their error distributions, and (iii) estimating the true redshift distribution. We compare two photometric redshift algorithms for these data and find that they give comparable results. Calibrating against the SDSS and SDSS–2dF (Two Degree Field) spectroscopic surveys, we find that the photometric redshift accuracy is σ ~ 0.03 for redshifts less than 0.55 and worsens at higher redshift (~ 0.06 for z < 0.7). These errors are caused by photometric scatter, as well as systematic errors in the templates, filter curves and photometric zero-points. We also parametrize the photometric redshift error distribution with a sum of Gaussians and use this model to deconvolve the errors from the measured photometric redshift distribution to estimate the true redshift distribution. We pay special attention to the stability of this deconvolution, regularizing the method with a prior on the smoothness of the true redshift distribution. The methods that we develop are applicable to general photometric redshift surveys.
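
    One way to reproduce the "sum of Gaussians" step described above is to fit a Gaussian mixture to the photometric-minus-spectroscopic redshift residuals of a calibration sample; the fitted weights, means, and widths then define the error model that is deconvolved from the observed photo-z distribution. The sketch below performs only the fitting step on synthetic residuals (scikit-learn assumed available) and is not the authors' pipeline.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)

      # Synthetic calibration sample: a narrow core plus a broader tail of photo-z errors.
      dz_core = rng.normal(0.00, 0.03, size=4000)
      dz_tail = rng.normal(0.02, 0.08, size=600)
      dz = np.concatenate([dz_core, dz_tail]).reshape(-1, 1)   # z_phot - z_spec

      gmm = GaussianMixture(n_components=2, random_state=0).fit(dz)
      for w, mu, var in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()):
          print(f"weight {w:.2f}  mean {mu:+.3f}  sigma {np.sqrt(var):.3f}")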

  6. Status of pelagic prey fishes in Lake Michigan, 2013

    USGS Publications Warehouse

    Warner, David M.; Farha, Steven A.; O'Brien, Timothy P.; Ogilvie, Lynn; Claramunt, Randall M.; Hanson, Dale

    2014-01-01

    Acoustic surveys were conducted in late summer/early fall during the years 1992-1996 and 2001-2013 to estimate pelagic prey fish biomass in Lake Michigan. Midwater trawling during the surveys as well as target strength provided a measure of species and size composition of the fish community for use in scaling acoustic data and providing species-specific abundance estimates. The 2013 survey consisted of 27 acoustic transects (546 km total) and 31 midwater trawl tows. Mean prey fish biomass was 6.1 kg/ha (relative standard error, RSE = 11%) or 29.6 kilotonnes (kt = 1,000 metric tons), which was similar to the estimate in 2012 (31.1 kt) and 23.5% of the long-term (18 years) mean. The numeric density of the 2013 alewife year class was 6% of the time series average and this year-class contributed 4% of total alewife biomass (5.2 kg/ha, RSE = 12%). Alewife ≥age-1 comprised 96% of alewife biomass. In 2013, alewife comprised 86% of total prey fish biomass, while rainbow smelt and bloater were 4 and 10% of total biomass, respectively. Rainbow smelt biomass in 2013 (0.24 kg/ha, RSE = 17%) was essentially identical to the rainbow smelt biomass in 2012 and was 6% of the long term mean. Bloater biomass in 2013 was 0.6 kg/ha, only half the 2012 biomass, and 6% of the long term mean. Mean density of small bloater in 2013 (29 fish/ha, RSE = 29%) was lower than peak values observed in 2007-2009 and was 23% of the time series mean. In 2013, pelagic prey fish biomass in Lake Michigan was similar to Lake Huron, but pelagic community composition differs in the two lakes, with Lake Huron dominated by bloater.

  7. Generalized site occupancy models allowing for false positive and false negative errors

    USGS Publications Warehouse

    Royle, J. Andrew; Link, W.A.

    2006-01-01

    Site occupancy models have been developed that allow for imperfect species detection or "false negative" observations. Such models have become widely adopted in surveys of many taxa. The most fundamental assumption underlying these models is that "false positive" errors are not possible. That is, one cannot detect a species where it does not occur. However, such errors are possible in many sampling situations for a number of reasons, and even low false positive error rates can induce extreme bias in estimates of site occupancy when they are not accounted for. In this paper, we develop a model for site occupancy that allows for both false negative and false positive error rates. This model can be represented as a two-component finite mixture model and can be easily fitted using freely available software. We provide an analysis of avian survey data using the proposed model and present results of a brief simulation study evaluating the performance of the maximum-likelihood estimator and the naive estimator in the presence of false positive errors.
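
    The two-component finite mixture mentioned above has a compact marginal likelihood: with occupancy probability psi, per-survey detection probability p11 at occupied sites, and false positive probability p10 at unoccupied sites, a site with y detections out of J visits contributes psi*Binom(y; J, p11) + (1 - psi)*Binom(y; J, p10). The sketch below fits that likelihood to simulated detection histories by maximum likelihood; it is a bare-bones illustration, not the authors' software.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import expit        # inverse logit
      from scipy.stats import binom

      rng = np.random.default_rng(1)
      J, n_sites = 5, 300
      psi_true, p11_true, p10_true = 0.6, 0.5, 0.05

      occupied = rng.random(n_sites) < psi_true
      y = rng.binomial(J, np.where(occupied, p11_true, p10_true))   # detections per site

      def nll(theta):
          psi, p11, p10 = expit(theta)       # keep all three probabilities in (0, 1)
          like = psi * binom.pmf(y, J, p11) + (1 - psi) * binom.pmf(y, J, p10)
          return -np.sum(np.log(like + 1e-300))

      # Starting values place p11 above p10 so the two components keep their intended roles.
      fit = minimize(nll, x0=np.array([0.0, 0.5, -2.0]), method="Nelder-Mead")
      psi_hat, p11_hat, p10_hat = expit(fit.x)
      print(f"psi = {psi_hat:.2f}, p11 = {p11_hat:.2f}, p10 = {p10_hat:.2f}")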

  8. Constraining neutrino properties with a Euclid-like galaxy cluster survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cerbolini, M. Costanzi Alunno; Sartoris, B.; Borgani, S.

    2013-06-01

    We perform a forecast analysis on how well a Euclid-like photometric galaxy cluster survey will constrain the total neutrino mass and effective number of neutrino species. We base our analysis on the Monte Carlo Markov Chains technique by combining information from cluster number counts and cluster power spectrum. We find that combining cluster data with Cosmic Microwave Background (CMB) measurements from Planck improves by more than an order of magnitude the constraint on neutrino masses compared to each probe used independently. For the ΛCDM+m{sub ν} model, the 2σ upper limit on total neutrino mass shifts from Σm{sub ν} < 0.35 eV using cluster data alone to Σm{sub ν} < 0.031 eV when combined with Planck data. When a non-standard scenario with N{sub eff}≠3.046 number of neutrino species is considered, we estimate an upper limit of N{sub eff} < 3.14 (95%CL), while the bounds on neutrino mass are relaxed to Σm{sub ν} < 0.040 eV. This accuracy would be sufficient for a 2σ detection of neutrino mass even in the minimal normal hierarchy scenario (Σm{sub ν} ≅ 0.05 eV). In addition to the extended ΛCDM+m{sub ν}+N{sub eff} model we also consider scenarios with a constant dark energy equation of state and a non-vanishing curvature. When these models are considered, the error on Σm{sub ν} is only slightly affected, while there is a larger impact of the order of ∼ 15% and ∼ 20% respectively on the 2σ error bar of N{sub eff} with respect to the standard case. To assess the effect of an uncertain knowledge of the relation between cluster mass and optical richness, we also treat the ΛCDM+m{sub ν}+N{sub eff} case with free nuisance parameters, which parameterize the uncertainties on the cluster mass determination. Adopting the over-conservative assumption of no prior knowledge of the nuisance parameters, the loss of information from cluster number counts leads to a large degradation of neutrino constraints. In particular, the upper bounds for Σm{sub ν} are relaxed by a factor larger than two, Σm{sub ν} < 0.083 eV (95%CL), hence compromising the possibility of detecting the total neutrino mass with good significance. We thus confirm the potential that a large optical/near-IR cluster survey, like that to be carried out by Euclid, could have in constraining neutrino properties, and we stress the importance of a robust measurement of masses, e.g. from weak lensing within the Euclid survey, in order to fully exploit the cosmological information carried by such a survey.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, T. S.; DePoy, D. L.; Marshall, J. L.

    Here, we report that meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. In conclusion, the residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.

  10. Relationship Between Patients' Perceptions of Care Quality and Health Care Errors in 11 Countries: A Secondary Data Analysis.

    PubMed

    Hincapie, Ana L; Slack, Marion; Malone, Daniel C; MacKinnon, Neil J; Warholak, Terri L

    2016-01-01

    Patients may be the most reliable reporters of some aspects of the health care process; their perspectives should be considered when pursuing changes to improve patient safety. The authors evaluated the association between patients' perceived health care quality and self-reported medical, medication, and laboratory errors in a multinational sample. The analysis was conducted using the 2010 Commonwealth Fund International Health Policy Survey, a multinational consumer survey conducted in 11 countries. Quality of care was measured by a multifaceted construct developed using Rasch techniques. After adjusting for potentially important confounding variables, an increase in respondents' perceptions of care coordination decreased the odds of self-reporting medical errors, medication errors, and laboratory errors (P < .001). As health care stakeholders continue to search for initiatives that improve care experiences and outcomes, this study's results emphasize the importance of guaranteeing integrated care.

  11. Error management for musicians: an interdisciplinary conceptual framework

    PubMed Central

    Kruse-Weber, Silke; Parncutt, Richard

    2014-01-01

    Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians’ generally negative attitude toward errors and the tendency to aim for flawless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error) and error management (during and after the error) are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly – or not at all. A more constructive, creative, and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of such an approach. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey-relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for further music education and musicians at all levels. PMID:25120501

  12. Error management for musicians: an interdisciplinary conceptual framework.

    PubMed

    Kruse-Weber, Silke; Parncutt, Richard

    2014-01-01

    Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians' generally negative attitude toward errors and the tendency to aim for flawless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error) and error management (during and after the error) are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly - or not at all. A more constructive, creative, and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of such an approach. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey-relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for further music education and musicians at all levels.

  13. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND... rates, which is defined as the percentage of cases with an error (expressed as the total number of cases with an error compared to the total number of cases); the percentage of cases with an improper payment...

  14. 3D Volume and Morphology of Perennial Cave Ice and Related Geomorphological Models at Scăriloara Ice Cave, Romania, from Structure from Motion, Ground Penetrating Radar and Total Station Surveys

    NASA Astrophysics Data System (ADS)

    Hubbard, J.; Onac, B. P.; Kruse, S.; Forray, F. L.

    2017-12-01

    Research at Scăriloara Ice Cave has proceeded for over 150 years, primarily driven by the presence and paleoclimatic importance of the large perennial ice block and various ice speleothems located within its galleries. Previous observations of the ice block led to rudimentary volume estimates of 70,000 to 120,000 cubic meters (m3), prospectively placing it as one of the world's largest cave ice deposits. The cave morphology and the surface of the ice block are now recreated in a total station survey-validated 3D model, produced using Structure from Motion (SfM) software. With the total station survey and the novel use of ArcGIS tools, the SfM validation process is drastically simplified to produce a scaled, georeferenced, and photo-texturized 3D model of the cave environment with a root-mean-square error (RMSE) of 0.24 m. Furthermore, ground penetrating radar data were collected and spatially oriented with the total station survey to recreate the ice block basal surface and were combined with the SfM model to create a model of the ice block itself. The resulting ice block model has a volume of over 118,000 m3 with an uncertainty of 9.5%, with additional volumes left un-surveyed. The varying elevation of the ice block basal surface model reflects specific features of the cave roof, such as areas of enlargement, shafts, and potential joints, which offer further validation and inform theories on cave and ice genesis. Specifically, a large depression area was identified as a potential area of initial ice growth. Finally, an ice thickness map was produced that will aid in the design of future ice coring projects. This methodology presents a powerful means to observe and accurately characterize and measure cave and cave ice morphologies with ease and affordability. Results further establish the significance of Scăriloara's ice block to paleoclimate research, provide insights into cave and ice block genesis, and aid future study design.
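
    The validation step described above boils down to comparing control-point coordinates in the georeferenced SfM model with the same points measured by the total station and reporting a root-mean-square error. A minimal version with made-up coordinates:

      import numpy as np

      # Hypothetical 3D control points (metres): total station survey vs. georeferenced SfM model.
      total_station = np.array([[10.00, 4.50, 2.10],
                                [22.35, 7.80, 1.95],
                                [31.10, 2.25, 3.40],
                                [18.60, 9.15, 2.75]])
      sfm_model = total_station + np.random.default_rng(3).normal(0.0, 0.15, total_station.shape)

      residuals = np.linalg.norm(sfm_model - total_station, axis=1)   # 3D misfit per control point
      rmse = np.sqrt(np.mean(residuals ** 2))
      print(f"RMSE of SfM model against total station control: {rmse:.2f} m")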

  15. Personal protective equipment for the Ebola virus disease: A comparison of 2 training programs.

    PubMed

    Casalino, Enrique; Astocondor, Eugenio; Sanchez, Juan Carlos; Díaz-Santana, David Enrique; Del Aguila, Carlos; Carrillo, Juan Pablo

    2015-12-01

    Personal protective equipment (PPE) for preventing Ebola virus disease (EVD) includes basic PPE (B-PPE) and enhanced PPE (E-PPE). Our aim was to compare conventional training programs (CTPs) and reinforced training programs (RTPs) on the use of B-PPE and E-PPE. Four groups were created, designated CTP-B, CTP-E, RTP-B, and RTP-E. All groups received the same theoretical training, followed by 3 practical training sessions. A total of 120 students were included (30 per group). In all 4 groups, the frequency and number of total errors and critical errors decreased significantly over the course of the training sessions (P < .01). The RTP was associated with a greater reduction in the number of total errors and critical errors (P < .0001). During the third training session, we noted an error frequency of 7%-43%, a critical error frequency of 3%-40%, 0.3-1.5 total errors, and 0.1-0.8 critical errors per student. The B-PPE groups had the fewest errors and critical errors (P < .0001). Our results indicate that both training methods improved the student's proficiency, that B-PPE appears to be easier to use than E-PPE, that the RTP achieved better proficiency for both PPE types, and that a number of students are still potentially at risk for EVD contamination despite the improvements observed during the training. Copyright © 2015 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  16. Variability in Threshold for Medication Error Reporting Between Physicians, Nurses, Pharmacists, and Families.

    PubMed

    Keefer, Patricia; Kidwell, Kelley; Lengyel, Candice; Warrier, Kavita; Wagner, Deborah

    2017-01-01

    Voluntary medication error reporting is an imperfect resource used to improve the quality of medication administration. It requires judgment by front-line staff to determine how to report enough to identify opportunities to improve patients' safety but not jeopardize that safety by creating a culture of "report fatigue." This study aims to provide information on the interpretability of medication errors and the variability between the subgroups of caregivers in the hospital setting. Survey participants included nurses, physicians (trainees and graduates), patients/families, and pharmacists across a large academic health system, including an attached free-standing pediatric hospital. Demographics and survey questions were collected and analyzed using Fisher's exact test in SAS v9.3. Statistically significant variability existed between the four groups for a majority of the questions. This included all cases designated as administration errors and many, but not all, cases of prescribing events. Commentary provided in the free-text portion of the survey was sub-analyzed and found to be associated with medication allergy reporting and lack of education surrounding report characteristics. There is significant variability in the threshold to report specific medication errors in the hospital setting. More work needs to be done to further improve the education surrounding error reporting in hospitals for all noted subgroups. Copyright © Bentham Science Publishers.

  17. Self-calibration of photometric redshift scatter in weak-lensing surveys

    DOE PAGES

    Zhang, Pengjie; Pen, Ue -Li; Bernstein, Gary

    2010-06-11

    Photo-z errors, especially catastrophic errors, are a major uncertainty for precision weak lensing cosmology. We find that the shear-(galaxy number) density and density-density cross correlation measurements between photo-z bins, available from the same lensing surveys, contain valuable information for self-calibration of the scattering probabilities between the true-z and photo-z bins. The self-calibration technique we propose does not rely on cosmological priors or parameterization of the photo-z probability distribution function, and preserves all of the cosmological information available from shear-shear measurement. We estimate the calibration accuracy through the Fisher matrix formalism. We find that, for advanced lensing surveys such as the planned stage IV surveys, the rate of photo-z outliers can be determined with statistical uncertainties of 0.01-1% for z < 2 galaxies. Among the several sources of calibration error that we identify and investigate, the galaxy distribution bias is likely the most dominant systematic error, whereby photo-z outliers have different redshift distributions and/or bias than non-outliers from the same bin. This bias affects all photo-z calibration techniques based on correlation measurements. As a result, galaxy bias variations of O(0.1) produce biases in photo-z outlier rates similar to the statistical errors of our method, so this galaxy distribution bias may bias the reconstructed scatters at the several-σ level, but is unlikely to completely invalidate the self-calibration technique.

  18. Psychometric assessment of a scale to measure bonding workplace social capital

    PubMed Central

    Tsutsumi, Akizumi; Inoue, Akiomi; Odagiri, Yuko

    2017-01-01

    Objectives Workplace social capital (WSC) has attracted increasing attention as an organizational and psychosocial factor related to worker health. This study aimed to assess the psychometric properties of a newly developed WSC scale for use in work environments, where bonding social capital is important. Methods We assessed the psychometric properties of a newly developed 6-item scale to measure bonding WSC using two data sources. Participants were 1,650 randomly selected workers who completed an online survey. Exploratory factor analyses were conducted. We examined the item–item and item–total correlations, internal consistency, and associations between scale scores and a previous 8-item measure of WSC. We evaluated test–retest reliability by repeating the survey with 900 of the respondents 2 weeks later. The overall scale reliability was quantified by an intraclass coefficient and the standard error of measurement. We evaluated convergent validity by examining the association with several relevant workplace psychosocial factors using a dataset from workers employed by an electrical components company (n = 2,975). Results The scale was unidimensional. The item–item and item–total correlations ranged from 0.52 to 0.78 (p < 0.01) and from 0.79 to 0.89 (p < 0.01), respectively. Internal consistency was good (Cronbach’s α coefficient: 0.93). The correlation with the 8-item scale indicated high criterion validity (r = 0.81) and the scale showed high test–retest reliability (r = 0.74, p < 0.01). The intraclass coefficient and standard error of measurement were 0.74 (95% confidence intervals: 0.71–0.77) and 4.04 (95% confidence intervals: 1.86–6.20), respectively. Correlations with relevant workplace psychosocial factors showed convergent validity. Conclusions The results confirmed that the newly developed WSC scale has adequate psychometric properties. PMID:28662058
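
    For readers who want to reproduce the internal-consistency step, Cronbach's alpha for a k-item scale is alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A short sketch on simulated item responses (not the study's data):

      import numpy as np

      def cronbach_alpha(items):
          """items: (n_respondents, k_items) array of item scores."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_var = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_var / total_var)

      rng = np.random.default_rng(7)
      trait = rng.normal(0.0, 1.0, size=200)                             # latent bonding WSC
      responses = trait[:, None] + rng.normal(0.0, 0.8, size=(200, 6))   # six correlated items
      print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")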

  19. Evaluation of the U.S. Geological Survey Landsat burned area essential climate variable across the conterminous U.S. using commercial high-resolution imagery

    USGS Publications Warehouse

    Vanderhoof, Melanie; Brunner, Nicole M.; Beal, Yen-Ju G.; Hawbaker, Todd J.

    2017-01-01

    The U.S. Geological Survey has produced the Landsat Burned Area Essential Climate Variable (BAECV) product for the conterminous United States (CONUS), which provides wall-to-wall annual maps of burned area at 30 m resolution (1984–2015). Validation is a critical component in the generation of such remotely sensed products. Previous efforts to validate the BAECV relied on a reference dataset derived from Landsat, which was effective in evaluating the product across its timespan but did not allow for consideration of inaccuracies imposed by the Landsat sensor itself. In this effort, the BAECV was validated using 286 high-resolution images, collected from GeoEye-1, QuickBird-2, Worldview-2 and RapidEye satellites. A disproportionate sampling strategy was utilized to ensure enough burned area pixels were collected. Errors of omission and commission for burned area averaged 22 ± 4% and 48 ± 3%, respectively, across CONUS. Errors were lowest across the western U.S. The elevated error of commission relative to omission was largely driven by patterns in the Great Plains, which saw low errors of omission (13 ± 13%) but high errors of commission (70 ± 5%), and potentially by a region-growing function included in the BAECV algorithm. While the BAECV reliably detected agricultural fires in the Great Plains, it frequently mapped tilled areas or areas with low vegetation as burned. Landscape metrics were calculated for individual fire events to assess the influence of image resolution (2 m, 30 m and 500 m) on mapping fire heterogeneity. As the spatial detail of imagery increased, fire events were mapped in a patchier manner with greater patch and edge densities, and shape complexity, which can influence estimates of total greenhouse gas emissions and rates of vegetation recovery. The increasing number of satellites collecting high-resolution imagery and rapid improvements in the frequency with which imagery is being collected mean greater opportunities to utilize these sources of imagery for Landsat product validation.
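
    Errors of omission and commission for the burned class follow directly from a confusion matrix that cross-tabulates the BAECV map against the high-resolution reference data: omission error is the share of reference burned area the product misses, and commission error is the share of mapped burned area not confirmed by the reference. A sketch with pixel counts invented to roughly echo the CONUS averages reported above:

      # Confusion matrix for the "burned" class (pixel counts invented for illustration):
      #                      reference burned   reference unburned
      #  mapped burned             tp = 780            fp = 720
      #  mapped unburned           fn = 220            tn = 98280
      tp, fp, fn = 780, 720, 220

      omission = fn / (tp + fn)        # burned pixels the product missed
      commission = fp / (tp + fp)      # mapped burned pixels not burned in the reference

      print(f"omission error: {omission:.1%}, commission error: {commission:.1%}")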

  20. Physician attitudes and practices related to voluntary error and near-miss reporting.

    PubMed

    Smith, Koren S; Harris, Kendra M; Potters, Louis; Sharma, Rajiv; Mutic, Sasa; Gay, Hiram A; Wright, Jean; Samuels, Michael; Ye, Xiaobu; Ford, Eric; Terezakis, Stephanie

    2014-09-01

    Incident learning systems are important tools to improve patient safety in radiation oncology, but physician participation in these systems is poor. To understand reporting practices and attitudes, a survey was sent to staff members of four large academic radiation oncology centers, all of which have in-house reporting systems. Institutional review board approval was obtained to send a survey to employees including physicians, dosimetrists, nurses, physicists, and radiation therapists. The survey evaluated barriers to reporting, perceptions of errors, and reporting practices. The responses of physicians were compared with those of other professional groups. There were 274 respondents to the survey, with a response rate of 81.3%. Physicians and other staff agreed that errors and near-misses were happening in their clinics (93.8% v 88.7%, respectively) and that they have a responsibility to report (97% overall). Physicians were significantly less likely to report minor near-misses (P = .001) and minor errors (P = .024) than other groups. Physicians were significantly more concerned about getting colleagues in trouble (P = .015), liability (P = .009), effect on departmental reputation (P = .006), and embarrassment (P < .001) than their colleagues. Regression analysis identified embarrassment among physicians as a critical barrier. If not embarrassed, participants were 2.5 and 4.5 times more likely to report minor errors and major near-miss events, respectively. All members of the radiation oncology team observe errors and near-misses. Physicians, however, are significantly less likely to report events than other colleagues. There are important, specific barriers to physician reporting that need to be addressed to encourage reporting and create a fair culture around reporting. Copyright © 2014 by American Society of Clinical Oncology.

  1. Error assessment of local tie vectors in space geodesy

    NASA Astrophysics Data System (ADS)

    Falkenberg, Jana; Heinkelmann, Robert; Schuh, Harald

    2014-05-01

    For the computation of the ITRF, the data of the geometric space-geodetic techniques on co-location sites are combined. The combination increases the redundancy and offers the possibility to utilize the strengths of each technique while mitigating their weaknesses. To enable the combination of co-located techniques, each technique needs to have a well-defined geometric reference point. Linking the geometric reference points enables the combination of the technique-specific coordinates into a multi-technique site coordinate. The vectors between these reference points are called "local ties". The realization of local ties is usually reached by local surveys of the distances and/or angles between the reference points. Identified temporal variations of the reference points are considered in the local tie determination only indirectly by assuming a mean position. Finally, the local ties measured in the local surveying network are to be transformed into the ITRF, the global geocentric equatorial coordinate system of the space-geodetic techniques. The current IERS procedure for the combination of the space-geodetic techniques includes the local tie vectors with an error floor of three millimeters plus a distance dependent component. This error floor, however, significantly underestimates the real uncertainty of the local tie determination. To fulfill the GGOS goals of 1 mm position and 0.1 mm/yr velocity accuracy, an accuracy of the local tie will be mandatory at the sub-mm level, which is currently not achievable. To assess the local tie effects on ITRF computations, the error sources must be investigated so that they can be realistically assessed and accounted for. Hence, a reasonable estimate of all the included errors of the various local ties is needed. An appropriate estimate could also improve the separation of local tie error and technique-specific error contributions to uncertainties and thus help assess the accuracy of space-geodetic techniques. Our investigations concern the simulation of the error contribution of each component of the local tie definition and determination. A closer look into the models of reference point definition, of accessibility, of measurement, and of transformation is necessary to properly model the error of the local tie. The effect of temporal variations on the local ties will be studied as well. The transformation of the local survey into the ITRF can be assumed to be the largest error contributor, in particular the orientation of the local surveying network to the ITRF.

  2. Compensation, benefits, and satisfaction: the Longitudinal Emergency Medical Technician Demographic Study (LEADS) Project.

    PubMed

    Brown, William E; Dawson, Drew; Levine, Roger

    2003-01-01

    To determine the compensation, benefit package, and level of satisfaction with the benefits of nationally registered emergency medical technicians (NREMTs) in 2001. The Longitudinal EMT Attribute Demographic Study (LEADS) Project included an 18-question snapshot survey on compensation with the 2001 core survey. This survey was sent to 4,835 randomly selected NREMTs. A total of 1,718 NREMT-Basics and NREMT-Paramedics, from 1,317 different postal zip codes, responded to the survey. Most NREMTs in the survey (86% of the compensated NREMT-Basics and 85% of the compensated NREMT-Paramedics) were employed primarily as patient care providers. For their emergency medical services (EMS) work in the previous 12 months, compensated NREMT-Basics had mean earnings of 18,324 US dollars (standard error, 978 US dollars) and compensated NREMT-Paramedics had mean earnings of 34,654 US dollars (standard error, 646 US dollars). At least 26% of compensated NREMT-Basics and 9% of compensated NREMT-Paramedics had no health insurance. The majority of compensated NREMTs (62% of the Basics and 57% of the Paramedics) reported their retirement plans were not adequate to meet their financial needs. EMTs are not satisfied with the appreciation and recognition they receive from EMS employers. About one-third (35% of the compensated NREMT-Basics and 30% of the compensated NREMT-Paramedics) were not satisfied with all of the benefits they receive from their EMS employer. Nearly all (94% of both compensated NREMT-Basics and NREMT-Paramedics) believed that EMTs should be paid more for the job that they do. The adequacy of EMT compensation and benefit packages is an area of concern. It is not unreasonable to believe that these factors are associated with EMT retention and attrition. Additional longitudinal EMT information on compensation and benefits is anticipated to determine the extent to which compensation and benefits are factors in EMT retention.

  3. Learning from Errors: Critical Incident Reporting in Nursing

    ERIC Educational Resources Information Center

    Gartmeier, Martin; Ottl, Eva; Bauer, Johannes; Berberat, Pascal Oliver

    2017-01-01

    Purpose: The purpose of this paper is to conceptualize error reporting as a strategy for informal workplace learning and investigate nurses' error reporting cost/benefit evaluations and associated behaviors. Design/methodology/approach: A longitudinal survey study was carried out in a hospital setting with two measurements (time 1 [t1]:…

  4. Seasonal variability of stratospheric methane: implications for constraining tropospheric methane budgets using total column observations

    NASA Astrophysics Data System (ADS)

    Saad, Katherine M.; Wunch, Debra; Deutscher, Nicholas M.; Griffith, David W. T.; Hase, Frank; De Mazière, Martine; Notholt, Justus; Pollard, David F.; Roehl, Coleen M.; Schneider, Matthias; Sussmann, Ralf; Warneke, Thorsten; Wennberg, Paul O.

    2016-11-01

    Global and regional methane budgets are markedly uncertain. Conventionally, estimates of methane sources are derived by bridging emissions inventories with atmospheric observations employing chemical transport models. The accuracy of this approach requires correctly simulating advection and chemical loss such that modeled methane concentrations scale with surface fluxes. When total column measurements are assimilated into this framework, modeled stratospheric methane introduces additional potential for error. To evaluate the impact of such errors, we compare Total Carbon Column Observing Network (TCCON) and GEOS-Chem total and tropospheric column-averaged dry-air mole fractions of methane. We find that the model's stratospheric contribution to the total column is insensitive to perturbations to the seasonality or distribution of tropospheric emissions or loss. In the Northern Hemisphere, we identify disagreement between the measured and modeled stratospheric contribution, which increases as the tropopause altitude decreases, and a temporal phase lag in the model's tropospheric seasonality driven by transport errors. Within the context of GEOS-Chem, we find that the errors in tropospheric advection partially compensate for the stratospheric methane errors, masking inconsistencies between the modeled and measured tropospheric methane. These seasonally varying errors alias into source attributions resulting from model inversions. In particular, we suggest that the tropospheric phase lag error leads to large misdiagnoses of wetland emissions in the high latitudes of the Northern Hemisphere.

  5. Problems with small area surveys: lensing covariance of supernova distance measurements.

    PubMed

    Cooray, Asantha; Huterer, Dragan; Holz, Daniel E

    2006-01-20

    While luminosity distances from type Ia supernovae (SNe) are a powerful probe of cosmology, the accuracy with which these distances can be measured is limited by cosmic magnification due to gravitational lensing by the intervening large-scale structure. Spatial clustering of foreground mass leads to correlated errors in SNe distances. By including the full covariance matrix of SNe, we show that future wide-field surveys will remain largely unaffected by lensing correlations. However, "pencil beam" surveys, and those with narrow (but possibly long) fields of view, can be strongly affected. For a survey with 30 arcmin mean separation between SNe, lensing covariance leads to a approximately 45% increase in the expected errors in dark energy parameters.

  6. Unmodeled observation error induces bias when inferring patterns and dynamics of species occurrence via aural detections

    USGS Publications Warehouse

    McClintock, Brett T.; Bailey, Larissa L.; Pollock, Kenneth H.; Simons, Theodore R.

    2010-01-01

    The recent surge in the development and application of species occurrence models has been associated with an acknowledgment among ecologists that species are detected imperfectly due to observation error. Standard models now allow unbiased estimation of occupancy probability when false negative detections occur, but this is conditional on no false positive detections and sufficient incorporation of explanatory variables for the false negative detection process. These assumptions are likely reasonable in many circumstances, but there is mounting evidence that false positive errors and detection probability heterogeneity may be much more prevalent in studies relying on auditory cues for species detection (e.g., songbird or calling amphibian surveys). We used field survey data from a simulated calling anuran system of known occupancy state to investigate the biases induced by these errors in dynamic models of species occurrence. Despite the participation of expert observers in simplified field conditions, both false positive errors and site detection probability heterogeneity were extensive for most species in the survey. We found that even low levels of false positive errors, constituting as little as 1% of all detections, can cause severe overestimation of site occupancy, colonization, and local extinction probabilities. Further, unmodeled detection probability heterogeneity induced substantial underestimation of occupancy and overestimation of colonization and local extinction probabilities. Completely spurious relationships between species occurrence and explanatory variables were also found. Such misleading inferences would likely have deleterious implications for conservation and management programs. We contend that all forms of observation error, including false positive errors and heterogeneous detection probabilities, must be incorporated into the estimation framework to facilitate reliable inferences about occupancy and its associated vital rate parameters.

  7. The Error in Total Error Reduction

    PubMed Central

    Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.

    2013-01-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modelling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. PMID:23891930
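
    The distinction at issue can be stated as two update rules. Under total error reduction (the Rescorla-Wagner family), every cue present on a trial is adjusted in proportion to one shared discrepancy between the outcome and the summed prediction of all present cues; under local error reduction, each cue is adjusted against the discrepancy between the outcome and its own prediction. A compact sketch (not the authors' simulation code) for two cues trained in compound:

      import numpy as np

      def train(n_trials, rule, alpha=0.1, lam=1.0):
          """Two cues A and B always presented together and followed by the outcome."""
          w = np.zeros(2)                        # associative strengths of A and B
          present = np.array([1.0, 1.0])
          for _ in range(n_trials):
              if rule == "TER":                  # total error: one shared prediction error
                  w += alpha * present * (lam - present @ w)
              else:                              # LER: each cue has its own prediction error
                  w += alpha * present * (lam - w)
          return w

      print("TER weights:", np.round(train(200, "TER"), 2))   # cues share the credit (~0.5 each)
      print("LER weights:", np.round(train(200, "LER"), 2))   # each cue approaches ~1.0 on its own

    With a shared error term the two cues split the available associative strength, whereas with local error terms each cue approaches the outcome value on its own; this difference in asymptotic weights is the kind of behavioral signature on which the two model families are compared.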

  8. A critical review of field techniques employed in the survey of large woody debris in river corridors: a central European perspective.

    PubMed

    Máčka, Zdeněk; Krejčí, Lukáš; Loučková, Blanka; Peterková, Lucie

    2011-10-01

    In forested watersheds, large woody debris (LWD) is an integral component of river channels and floodplains. Fallen trees have a significant impact on physical and ecological processes in fluvial ecosystems. An enormous body of literature concerning LWD in river corridors is currently available. However, synthesis and statistical treatment of the published data are hampered by the heterogeneity of methodological approaches. Likewise, the precision and accuracy of data arising out of published surveys have yet to be assessed. For this review, a literature scrutiny of 100 randomly selected research papers was made to examine the most frequently surveyed LWD variables and field procedures. Some 29 variables arose for individual LWD pieces, and 15 variables for wood accumulations. The literature survey revealed a large variability in field procedures for LWD surveys. In many studies (32), description of field procedure proved less than adequate, rendering the results impossible to reproduce in comparable fashion by other researchers. This contribution identifies the main methodological problems and sources of error associated with the mapping and measurement of the most frequently surveyed variables of LWD, both as individual pieces and in accumulations. The discussion stems from our own field experience with LWD survey in river systems of various geomorphic styles and types of riparian vegetation in the Czech Republic in the 2004-10 period. We modelled variability in terms of LWD number, volume, and biomass for three geomorphologically contrasting river systems. The results appeared to be sensitive, in the main, to sampling strategy and prevailing field conditions; less variability was produced by errors of measurement. Finally, we propose a comprehensive standard field procedure for LWD surveyors, including a total of 20 variables describing spatial position, structural characteristics and the functions and dynamics of LWD. However, resources are only rarely available for highly time-demanding surveys. We therefore include a set of core LWD metrics for routine baseline surveys of individual LWD pieces (diameter, length, rootwad size, preservation of branches and rootwad, geomorphological/ecological function, stability/mobility) and wood accumulations (number of LWD pieces, geometrical dimensions, channel blockage, wood/air ratio), which may provide useful background information for river management, hydromorphological assessment, habitat evaluation, and inter-regional comparisons.

  9. Usual Intake of Added Sugars and Lipid Profiles Among the U.S. Adolescents: National Health and Nutrition Examination Survey, 2005–2010

    PubMed Central

    Zhang, Zefeng; Gillespie, Cathleen; Welsh, Jean A.; Hu, Frank B.; Yang, Quanhe

    2015-01-01

    Purpose Although studies suggest that higher consumption of added sugars is associated with cardiovascular risk factors in adolescents, none have adjusted for measurement errors or examined its association with the risk of dyslipidemia. Methods We analyzed data of 4,047 adolescents aged 12–19 years from the 2005–2010 National Health and Nutrition Examination Survey, a nationally representative, cross-sectional survey. We estimated the usual percentage of calories (%kcal) from added sugars using up to two 24-hour dietary recalls and the National Cancer Institute method to account for measurement error. Results The average usual %kcal from added sugars was 16.0%. Most adolescents (88.0%) had usual intake of ≥10% of total energy, and 5.5% had usual intake of ≥25% of total energy. After adjustment for potential confounders, usual %kcal from added sugars was inversely associated with high-density lipoprotein (HDL) and positively associated with triglycerides (TGs), TG-to-HDL ratio, and total cholesterol (TC) to HDL ratio. Comparing the lowest and highest quintiles of intake, HDLs were 49.5 (95% confidence interval [CI], 47.4–51.6) and 46.4 mg/dL(95% CI, 45.2–47.6; p = .009), TGs were 85.6 (95% CI, 75.5–95.6) and 101.2 mg/dL(95% CI, 88.7–113.8; p = .037), TG to HDL ratios were 2.28 (95% CI, 1.84–2.70) and 2.73 (95% CI, 2.11–3.32; p = .017), and TC to HDL ratios were 3.41 (95% CI, 3.03–3.79) and 3.70 (95% CI, 3.24–4.15; p = .028), respectively. Comparing the highest and lowest quintiles of intake, adjusted odds ratio of dyslipidemia was 1.41 (95% CI, 1.01–1.95). The patterns were consistent across sex, race/ethnicity, and body mass index subgroups. No association was found for TC, low-density lipoprotein, and non-HDL cholesterol. Conclusions Most U.S. adolescents consumed more added sugars than recommended for heart health. Usual intake of added sugars was significantly associated with several measures of lipid profiles. PMID:25703323

  10. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China).

    PubMed

    Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng; Pan, Xiong; Xu, Xinyu

    2017-05-25

    Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Errors caused by the airborne gravimeter sensors and by rough flight conditions cannot be completely eliminated. The precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mgal. A major obstacle in using airborne gravimetry is the error caused by the downward continuation. To improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since there are no high-accuracy surface gravity data available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The analysis results show that the proposed semi-parametric method combined with regularization is effective in addressing such modelling problems.
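
    As a rough illustration of why downward continuation amplifies noise and how regularization can stabilize it, the sketch below works in a simplified 1-D, flat-Earth Fourier-domain setting with a plain Tikhonov filter. This is only a generic stand-in, not the semi-parametric method proposed in the paper; the profile length, flight altitude, noise level and regularization parameter are all assumed for the example.

        import numpy as np

        # Upward continuation damps each wavenumber k by exp(-k*h); naive downward
        # continuation multiplies by exp(+k*h), which amplifies high-frequency noise.
        # Tikhonov regularization replaces 1/U(k) with U(k)/(U(k)**2 + alpha).
        def downward_continue(signal_at_altitude, dx, h, alpha=0.0):
            n = signal_at_altitude.size
            k = np.abs(2 * np.pi * np.fft.fftfreq(n, d=dx))   # wavenumber magnitude
            U = np.exp(-k * h)                                # upward-continuation operator
            F = np.fft.fft(signal_at_altitude)
            Fd = F / U if alpha == 0.0 else F * U / (U**2 + alpha)
            return np.real(np.fft.ifft(Fd))

        # Synthetic experiment: smooth "ground truth" anomaly, upward-continued to 4 km,
        # contaminated with 3 mGal white noise (roughly the precision quoted above).
        rng = np.random.default_rng(0)
        x = np.linspace(0, 200e3, 512)                        # 200 km profile, metres
        dx = x[1] - x[0]
        truth = 30 * np.exp(-((x - 100e3) / 20e3) ** 2)       # mGal
        h = 4000.0                                            # flight altitude, metres
        k = np.abs(2 * np.pi * np.fft.fftfreq(x.size, d=dx))
        at_altitude = np.real(np.fft.ifft(np.fft.fft(truth) * np.exp(-k * h)))
        noisy = at_altitude + rng.normal(0, 3.0, x.size)

        naive = downward_continue(noisy, dx, h, alpha=0.0)
        regular = downward_continue(noisy, dx, h, alpha=1e-2)
        print("RMS error, naive      :", np.sqrt(np.mean((naive - truth) ** 2)))
        print("RMS error, regularized:", np.sqrt(np.mean((regular - truth) ** 2)))

    Running the sketch shows the unregularized inverse blowing up by many orders of magnitude while the regularized estimate stays close to the true anomaly, which is the basic behaviour the paper's more elaborate scheme is designed to control.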

  11. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China)

    PubMed Central

    Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng; Pan, Xiong; Xu, Xinyu

    2017-01-01

    Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Errors caused by the airborne gravimeter sensors and by rough flight conditions cannot be completely eliminated. The precision of the gravity disturbances generated by airborne gravimetry is around 3–5 mgal. A major obstacle in using airborne gravimetry is the error caused by the downward continuation. To improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since there are no high-accuracy surface gravity data available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The analysis results show that the proposed semi-parametric method combined with regularization is effective in addressing such modelling problems. PMID:28587086

  12. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China)

    NASA Astrophysics Data System (ADS)

    Zhao, Q.

    2017-12-01

    Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Errors caused by the airborne gravimeter sensors and by rough flight conditions cannot be completely eliminated. The precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mgal. A major obstacle in using airborne gravimetry is the error caused by the downward continuation. To improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since there are no high-accuracy surface gravity data available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The analysis results show that the proposed semi-parametric method combined with regularization is effective in addressing such modelling problems.

  13. Short-term leprosy forecasting from an expert opinion survey.

    PubMed

    Deiner, Michael S; Worden, Lee; Rittel, Alex; Ackley, Sarah F; Liu, Fengchen; Blum, Laura; Scott, James C; Lietman, Thomas M; Porco, Travis C

    2017-01-01

    We conducted an expert survey of leprosy (Hansen's Disease) and neglected tropical disease experts in February 2016. Experts were asked to forecast the next year of reported cases for the world, for the top three countries, and for selected states and territories of India. A total of 103 respondents answered at least one forecasting question. We elicited lower and upper confidence bounds. Comparing these results to regression and exponential smoothing, we found no evidence that any forecasting method outperformed the others. We found evidence that experts who believed it was more likely to achieve global interruption of transmission goals and disability reduction goals had higher error scores for India and Indonesia, but lower for Brazil. Even for a disease whose epidemiology changes on a slow time scale, forecasting exercises such as we conducted are simple and practical. We believe they can be used on a routine basis in public health.
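
    For readers unfamiliar with the statistical baselines mentioned, the sketch below shows simple exponential smoothing used as a one-year-ahead forecast. The case counts and the smoothing constant are hypothetical, so this is only a schematic of the comparison, not the authors' actual models or data.

        # Minimal sketch of simple exponential smoothing as a one-step-ahead forecast,
        # one of the statistical baselines mentioned above. Counts are hypothetical.
        def exponential_smoothing_forecast(series, alpha=0.5):
            """Smooth the whole series and return the one-step-ahead forecast."""
            level = series[0]
            for y in series[1:]:
                level = alpha * y + (1 - alpha) * level
            return level

        reported_cases = [230000, 225000, 221000, 218000, 216000, 214000]  # hypothetical
        forecast = exponential_smoothing_forecast(reported_cases, alpha=0.5)
        print("Forecast for next year:", round(forecast))

        # An absolute-error score against the later observed count can then be used to
        # compare experts and statistical methods on the same footing.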

  14. Short-term leprosy forecasting from an expert opinion survey

    PubMed Central

    Deiner, Michael S.; Worden, Lee; Rittel, Alex; Ackley, Sarah F.; Liu, Fengchen; Blum, Laura; Scott, James C.; Lietman, Thomas M.

    2017-01-01

    We conducted an expert survey of leprosy (Hansen’s Disease) and neglected tropical disease experts in February 2016. Experts were asked to forecast the next year of reported cases for the world, for the top three countries, and for selected states and territories of India. A total of 103 respondents answered at least one forecasting question. We elicited lower and upper confidence bounds. Comparing these results to regression and exponential smoothing, we found no evidence that any forecasting method outperformed the others. We found evidence that experts who believed it was more likely to achieve global interruption of transmission goals and disability reduction goals had higher error scores for India and Indonesia, but lower for Brazil. Even for a disease whose epidemiology changes on a slow time scale, forecasting exercises such as we conducted are simple and practical. We believe they can be used on a routine basis in public health. PMID:28813531

  15. The clustering of the SDSS-IV extended Baryon Oscillation Spectroscopic Survey DR14 quasar sample: structure growth rate measurement from the anisotropic quasar power spectrum in the redshift range 0.8 < z < 2.2

    NASA Astrophysics Data System (ADS)

    Gil-Marín, Héctor; Guy, Julien; Zarrouk, Pauline; Burtin, Etienne; Chuang, Chia-Hsun; Percival, Will J.; Ross, Ashley J.; Ruggeri, Rossana; Tojerio, Rita; Zhao, Gong-Bo; Wang, Yuting; Bautista, Julian; Hou, Jiamin; Sánchez, Ariel G.; Pâris, Isabelle; Baumgarten, Falk; Brownstein, Joel R.; Dawson, Kyle S.; Eftekharzadeh, Sarah; González-Pérez, Violeta; Habib, Salman; Heitmann, Katrin; Myers, Adam D.; Rossi, Graziano; Schneider, Donald P.; Seo, Hee-Jong; Tinker, Jeremy L.; Zhao, Cheng

    2018-06-01

    We analyse the clustering of the Sloan Digital Sky Survey IV extended Baryon Oscillation Spectroscopic Survey Data Release 14 quasar sample (DR14Q). We measure the redshift space distortions using the power-spectrum monopole, quadrupole, and hexadecapole inferred from 148 659 quasars between redshifts 0.8 and 2.2, covering a total sky footprint of 2112.9 deg2. We constrain the logarithmic growth of structure times the amplitude of dark matter density fluctuations, fσ8, and the Alcock-Paczynski dilation scales that allow constraints to be placed on the angular diameter distance DA(z) and the Hubble H(z) parameter. At the effective redshift of zeff = 1.52, fσ8(zeff) = 0.420 ± 0.076, H(z_eff)=[162± 12] (r_s^fid/r_s) {km s}^{-1} Mpc^{-1}, and D_A(z_eff)=[1.85± 0.11]× 10^3 (r_s/r_s^fid) Mpc, where rs is the comoving sound horizon at the baryon drag epoch and the superscript `fid' stands for its fiducial value. The errors take into account the full error budget, including systematics and statistical contributions. These results are in full agreement with the current Λ-Cold Dark Matter cosmological model inferred from Planck measurements. Finally, we compare our measurements with other eBOSS companion papers and find excellent agreement, demonstrating the consistency and complementarity of the different methods used for analysing the data.

  16. Truth or consequences: the intertemporal consistency of adolescent self-report on the Youth Risk Behavior Survey.

    PubMed

    Rosenbaum, Janet E

    2009-06-01

    Surveys are the primary information source about adolescents' health risk behaviors, but adolescents may not report their behaviors accurately. Survey data are used for formulating adolescent health policy, and inaccurate data can cause mistakes in policy creation and evaluation. The author used test-retest data from the Youth Risk Behavior Survey (United States, 2000) to compare adolescents' responses to 72 questions about their risk behaviors at a 2-week interval. Each question was evaluated for prevalence change and 3 measures of unreliability: inconsistency (retraction and apparent initiation), agreement measured as tetrachoric correlation, and estimated error due to inconsistency assessed with a Bayesian method. Results showed that adolescents report their sex, drug, alcohol, and tobacco histories more consistently than other risk behaviors in a 2-week period, in contrast to their tendency over longer intervals. Compared with other Youth Risk Behavior Survey topics, most sex, drug, alcohol, and tobacco items had stable prevalence estimates, higher average agreement, and lower estimated measurement error. Adolescents reported their weight control behaviors more unreliably than other behaviors, which is particularly problematic because of the increased investment in adolescent obesity research and reliance on annual surveys for surveillance and policy evaluation. Most weight control items had unstable prevalence estimates, lower average agreement, and greater estimated measurement error than other topics.
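
    The agreement measure referred to above is the tetrachoric correlation. The sketch below computes a widely used closed-form approximation to it (the cosine-pi formula) together with retraction and apparent-initiation rates from a 2×2 test-retest table; the cell counts are hypothetical, and the approximation stands in for the full estimators actually used in the study.

        import math

        # a = yes/yes, b = yes/no (retraction), c = no/yes (apparent initiation), d = no/no
        # for one yes/no risk-behavior item asked twice, two weeks apart.
        def tetrachoric_approx(a, b, c, d):
            """Cosine-pi approximation to the tetrachoric correlation."""
            if b == 0 or c == 0:               # no discordant pairs -> perfect agreement
                return 1.0
            return math.cos(math.pi / (1 + math.sqrt((a * d) / (b * c))))

        def inconsistency_rates(a, b, c, d):
            n = a + b + c + d
            return {"retraction": b / n, "apparent_initiation": c / n,
                    "prevalence_t1": (a + b) / n, "prevalence_t2": (a + c) / n}

        # Hypothetical 2x2 table for a single survey item.
        a, b, c, d = 120, 8, 10, 862
        print("tetrachoric ~", round(tetrachoric_approx(a, b, c, d), 3))
        print(inconsistency_rates(a, b, c, d))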

  17. Automatic readout micrometer

    DOEpatents

    Lauritzen, Ted

    1982-01-01

    A measuring system is disclosed for surveying and very accurately positioning objects with respect to a reference line. A principal use of this surveying system is for accurately aligning the electromagnets which direct a particle beam emitted from a particle accelerator. Prior art surveying systems require highly skilled surveyors. Prior art systems include, for example, optical surveying systems which are susceptible to operator reading errors, and celestial navigation-type surveying systems, with their inherent complexities. The present invention provides an automatic readout micrometer which can very accurately measure distances. The invention has a simplicity of operation which practically eliminates the possibilities of operator optical reading error, owing to the elimination of traditional optical alignments for making measurements. The invention has an extendable arm which carries a laser surveying target. The extendable arm can be continuously positioned over its entire length of travel by either a coarse or fine adjustment without having the fine adjustment outrun the coarse adjustment until a reference laser beam is centered on the target as indicated by a digital readout. The length of the micrometer can then be accurately and automatically read by a computer and compared with a standardized set of alignment measurements. Due to its construction, the micrometer eliminates any errors due to temperature changes when the system is operated within a standard operating temperature range.

  18. Automatic readout micrometer

    DOEpatents

    Lauritzen, T.

    A measuring system is described for surveying and very accurately positioning objects with respect to a reference line. A principal use of this surveying system is for accurately aligning the electromagnets which direct a particle beam emitted from a particle accelerator. Prior art surveying systems require highly skilled surveyors. Prior art systems include, for example, optical surveying systems which are susceptible to operator reading errors, and celestial navigation-type surveying systems, with their inherent complexities. The present invention provides an automatic readout micrometer which can very accurately measure distances. The invention has a simplicity of operation which practically eliminates the possibilities of operator optical reading error, owing to the elimination of traditional optical alignments for making measurements. The invention has an extendable arm which carries a laser surveying target. The extendable arm can be continuously positioned over its entire length of travel by either a coarse or fine adjustment without having the fine adjustment outrun the coarse adjustment until a reference laser beam is centered on the target as indicated by a digital readout. The length of the micrometer can then be accurately and automatically read by a computer and compared with a standardized set of alignment measurements. Due to its construction, the micrometer eliminates any errors due to temperature changes when the system is operated within a standard operating temperature range.

  19. Biases in Planet Occurrence Caused by Unresolved Binaries in Transit Surveys

    NASA Astrophysics Data System (ADS)

    Bouma, L. G.; Masuda, Kento; Winn, Joshua N.

    2018-06-01

    Wide-field surveys for transiting planets, such as the NASA Kepler and TESS missions, are usually conducted without knowing which stars have binary companions. Unresolved and unrecognized binaries give rise to systematic errors in planet occurrence rates, including misclassified planets and mistakes in completeness corrections. The individual errors can have different signs, making it difficult to anticipate the net effect on inferred occurrence rates. Here, we use simplified models of signal-to-noise limited transit surveys to try to clarify the situation. We derive a formula for the apparent occurrence rate density measured by an observer who falsely assumes all stars are single. The formula depends on the binary fraction, the mass function of the secondary stars, and the true occurrence of planets around primaries, secondaries, and single stars. It also takes into account the Malmquist bias by which binaries are over-represented in flux-limited samples. Application of the formula to an idealized Kepler-like survey shows that for planets larger than 2 R⊕, the net systematic error is of order 5%. In particular, unrecognized binaries are unlikely to be the reason for the apparent discrepancies between hot-Jupiter occurrence rates measured in different surveys. For smaller planets the errors are potentially larger: the occurrence of Earth-sized planets could be overestimated by as much as 50%. We also show that whenever high-resolution imaging reveals a transit host star to be a binary, the planet is usually more likely to orbit the primary star than the secondary star.

  20. Imputing Risk Tolerance From Survey Responses

    PubMed Central

    Kimball, Miles S.; Sahm, Claudia R.; Shapiro, Matthew D.

    2010-01-01

    Economic theory assigns a central role to risk preferences. This article develops a measure of relative risk tolerance using responses to hypothetical income gambles in the Health and Retirement Study. In contrast to most survey measures that produce an ordinal metric, this article shows how to construct a cardinal proxy for the risk tolerance of each survey respondent. The article also shows how to account for measurement error in estimating this proxy and how to obtain consistent regression estimates despite the measurement error. The risk tolerance proxy is shown to explain differences in asset allocation across households. PMID:20407599

  1. The FIRST Survey: Faint Images of the Radio Sky at Twenty Centimeters

    NASA Astrophysics Data System (ADS)

    Becker, Robert H.; White, Richard L.; Helfand, David J.

    1995-09-01

    The FIRST survey to produce Faint Images of the Radio Sky at Twenty centimeters is now underway using the NRAO Very Large Array. We describe here the scientific motivation for a large-area sky survey at radio frequencies which has a sensitivity and angular resolution comparable to the Palomar Observatory Sky Survey, and we recount the history that led to the current survey project. The technical design of the survey is covered in detail, including a description and justification of the grid pattern chosen, the rationale behind the integration time and angular resolution selected, and a summary of the other considerations which informed our planning for the project. A comprehensive description of the automated data analysis pipeline we have developed is presented. We also report here the results of the first year of FIRST observations. A total of 144 hr of time in 1993 April and May was used for a variety of tests, as well as to cover an initial strip of the survey extending between 07h 15m and 16h 30m in a 2°.8 wide declination zone passing through the local zenith (28.2 < δ < 31.0). A total of 2153 individual pointings yielded an image database containing 1039 merged images 46'.5 × 34'.5 in extent with 1".8 pixels and a typical rms of 0.13 mJy. A catalog derived from this 300 deg² region contains 28,000 radio sources. We have performed extensive tests on the images and source list in order to establish the photometric and astrometric accuracy of these data products. We find systematic astrometric errors of < 0".05; individual sources down to the 1 mJy survey flux density threshold have 90% confidence error circles with radii of < 1". CLEAN bias introduces a systematic underestimate of point-source flux densities of ˜0.25 mJy; the bias is more severe for extended sources. Nonetheless, a comparison with a published deep survey field demonstrates that we successfully detect 39/49 sources with integrated flux densities greater than 0.75 mJy, including 19 of 20 sources above 2.0 mJy; the sources not detected are known to be very extended and so have surface brightnesses well below our threshold. With 480 hr of observing time committed for each of the next three B-configuration periods, FIRST will complete nearly one-half of its goal of covering the 10,000 deg² of the north Galactic cap scheduled for inclusion in the Sloan Digital Sky Survey. All of the FIRST data (raw visibilities, self-calibrated UV data sets, individual pointing maps, final merged images, source catalogs, and individual source images) are being placed in the public domain as soon as they are verified; all of the 1993 data are now available through the NRAO and/or the STScI archive. We conclude with a brief summary of the scientific significance of FIRST, which represents an improvement by a factor of 50 in both angular resolution and sensitivity over the best available large area radio surveys.

  2. Early neurological and cognitive impairments in subclinical cerebrovascular disease.

    PubMed

    Atanassova, Penka A; Massaldjieva, Radka I; Dimitrov, Borislav D; Aleksandrov, Aleksandar S; Semerdjieva, Maria A; Tsvetkova, Silvia B; Chalakova, Nedka T; Chompalov, Kostadin A

    2016-01-01

    Subclinical cerebrovascular disease (SCVD) is an important public health problem with demonstrated prognostic significance for stroke, future cognitive decline, and progression to dementia. Detecting silent SCVD as early as possible in normally functioning adults of at-risk age is very important for both clinicians and scientists. Seventy-seven adult volunteers, recruited during the years 2005-2007, with mean age 58.7 (standard deviation 5.9) years, were assessed by four subtests from the Cambridge Neuropsychological Test Automated Battery (CANTAB)-Eclipse cognitive assessment system. We used a questionnaire survey for the presence of cerebrovascular risk factors (CVRFs) such as arterial hypertension, smoking and dyslipidemia, among others, as well as instrumental (Doppler examination) and neurological magnetic resonance imaging (MRI) procedures. Descriptive statistics, comparison (t-test, Chi-square) and univariate methods were used, followed by multifactor logistic regression and receiver operating characteristic analyses. The risk factor questionnaire revealed nonspecific symptoms in 44 (67.7%) of the subjects. In 42 (64.6%) of all 65 subjects, we found at least one of the conventional CVRFs. Abnormal findings from the extra- and trans-cranial Doppler examination were established in 38 (58.5%) of all studied volunteers. Thirty-four subjects had brain MRI (52.3%), and abnormal findings were found in 12 (35.3%) of them. Two of the four subtests of the CANTAB tool appeared to be potentially promising predictors of the outcome, as found in the univariate analysis (spatial working memory 1 [SWM1] total errors; intra-extra dimensional set 1 [IED1] total errors [adjusted]; IED2 total trials [adjusted]). We established that the best accuracy of 82.5% was achieved by a multifactor interaction logistic regression model, with the role of CVRFs and the combined CANTAB predictor "IED total ratio (errors/trials) × SWM1 total errors" (P = 0.006). Our results have contributed to the hypothesis that it is possible to identify, by noninvasive methods, subjects of at-risk age who have a mild degree of cognitive impairment and to establish the significant relationship of this impairment with existing CVRFs, nonspecific symptoms and subclinical abnormal brain Doppler/MRI findings. We created a combined neuropsychological predictor that was able to clearly distinguish between the presence and absence of abnormal Doppler/MRI findings. This pilot prognostic model showed a relatively high accuracy of >80%; therefore, the predictors may serve as biomarkers for SCVD in subjects of at-risk age (51-65 years).
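
    A minimal sketch of the kind of multifactor interaction logistic model described above (CVRF status combined with the "IED total ratio × SWM1 total errors" predictor) is given below. The data are simulated and the coefficients are invented, so it only illustrates the modelling approach, not the study's model or results.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        # Simulated cohort: CVRF indicator, CANTAB scores, and an outcome generated
        # from an assumed (invented) logistic relationship.
        rng = np.random.default_rng(0)
        n = 65
        cvrf = rng.integers(0, 2, n)                       # any conventional risk factor
        ied_ratio = rng.uniform(0.05, 0.5, n)              # IED total errors / total trials
        swm1_errors = rng.poisson(12, n)
        combined = ied_ratio * swm1_errors                 # combined CANTAB predictor
        logit = -3 + 1.2 * cvrf + 0.5 * combined
        abnormal = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # abnormal Doppler/MRI finding

        # Logistic regression with a CVRF x CANTAB interaction term.
        X = np.column_stack([cvrf, combined, cvrf * combined])
        model = LogisticRegression().fit(X, abnormal)
        auc = roc_auc_score(abnormal, model.predict_proba(X)[:, 1])
        print(f"in-sample accuracy {model.score(X, abnormal):.3f}, ROC AUC {auc:.3f}")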

  3. Estimating the size of the MSM populations for 38 European countries by calculating the survey-surveillance discrepancies (SSD) between self-reported new HIV diagnoses from the European MSM internet survey (EMIS) and surveillance-reported HIV diagnoses among MSM in 2009

    PubMed Central

    2013-01-01

    Background Comparison of rates of newly diagnosed HIV infections among MSM across countries is challenging for a variety of reasons, including the unknown size of MSM populations. In this paper we propose a method of triangulating surveillance data with data collected in a pan-European MSM Internet Survey (EMIS) to estimate the sizes of the national MSM populations and the rates at which HIV is being diagnosed amongst them by calculating survey-surveillance discrepancies (SSD) as a measure of selection biases of survey participants. Methods In 2010, the first EMIS collected self-reported data on HIV diagnoses among more than 180,000 MSM in 38 countries of Europe. These data were compared with data from national HIV surveillance systems to explore possible sampling and reporting biases in the two approaches. The Survey-Surveillance Discrepancy (SSD) represents the ratio of survey members diagnosed in 2009 (HIVsvy) to total survey members (Nsvy), divided by the ratio of surveillance reports of diagnoses in 2009 (HIVpop) to the estimated total MSM population (Npop). As differences in household internet access may be a key component of survey selection biases, we analysed the relationship between household internet access and SSD in countries conducting consecutive MSM internet surveys at different time points with increasing levels of internet access. The empirically defined SSD was used to calculate the respective MSM population sizes (Npop), using the formula Npop = HIVpop*Nsvy*SSD/HIVsvy. Results Survey-surveillance discrepancies for consecutive MSM internet surveys between 2003 and 2010 with different levels of household internet access were best described by a potential equation, with high SSD at low internet access, declining to a level around 2 with broad access. The lowest SSD was calculated for the Netherlands with 1.8, the highest for Moldova with 9.0. Taking the best available estimate for surveillance reports of HIV diagnoses among MSM in 2009 (HIVpop), the relative MSM population sizes were between 0.03% and 5.6% of the adult male population aged 15–64. The correlation between recently diagnosed (2009) HIV in EMIS participants and HIV diagnosed among MSM in 2009 as reported in the national surveillance systems was very high (R2 = 0.88) when using the calculated MSM population size. Conclusions Npop and HIVpop were unreliably low for several countries. We discuss and identify possible measurement errors for countries with calculated MSM population sizes above 3% and below 1% of the adult male population. In most cases the number of new HIV diagnoses in MSM in the surveillance system appears too low. In some cases, measurement errors may be due to small EMIS sample sizes. It must be assumed that the SSD is modified by country-specific factors. Comparison of community-based survey data with surveillance data suggests only minor sampling biases in the former that – except for a few countries - do not seriously distort inter-country comparability, despite large variations in participation rates across countries. Internet surveys are useful complements to national surveillance systems, highlighting deficiencies and allowing estimates of the range of newly diagnosed infections among MSM in countries where surveillance systems fail to accurately provide such data. PMID:24088198
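
    The population-size formula quoted above can be applied directly; the sketch below simply transcribes it, using illustrative numbers rather than actual EMIS or surveillance figures.

        # Direct transcription of the formula given above,
        #   Npop = HIVpop * Nsvy * SSD / HIVsvy,
        # with illustrative (not actual EMIS) inputs.
        def msm_population_size(hiv_pop, n_svy, hiv_svy, ssd):
            """Estimated national MSM population size from survey-surveillance triangulation."""
            return hiv_pop * n_svy * ssd / hiv_svy

        def survey_surveillance_discrepancy(hiv_svy, n_svy, hiv_pop, n_pop):
            """SSD = (HIVsvy/Nsvy) / (HIVpop/Npop), the relative excess of recent diagnoses among survey members."""
            return (hiv_svy / n_svy) / (hiv_pop / n_pop)

        # Hypothetical country: 450 surveillance-reported MSM diagnoses in 2009,
        # 5,000 EMIS respondents of whom 60 reported a 2009 diagnosis, SSD taken as 2.0.
        n_pop = msm_population_size(hiv_pop=450, n_svy=5000, hiv_svy=60, ssd=2.0)
        print(f"Estimated MSM population: {n_pop:,.0f}")   # 75,000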

  4. Comparing errors in ED computer-assisted vs conventional pediatric drug dosing and administration.

    PubMed

    Yamamoto, Loren; Kanemori, Joan

    2010-06-01

    Compared to fixed-dose single-vial drug administration in adults, pediatric drug dosing and administration requires a series of calculations, all of which are potentially error prone. The purpose of this study is to compare error rates and task completion times for common pediatric medication scenarios using computer program assistance vs conventional methods. Two versions of a 4-part paper-based test were developed. Each part consisted of a set of medication administration and/or dosing tasks. Emergency department and pediatric intensive care unit nurse volunteers completed these tasks using both methods (sequence assigned to start with a conventional or a computer-assisted approach). Completion times, errors, and the reason for the error were recorded. Thirty-eight nurses completed the study. Summing across all 4 parts, the mean conventional total time was 1243 seconds vs the mean computer program total time of 879 seconds (P < .001). The conventional manual method had a mean of 1.8 errors vs the computer program with a mean of 0.7 errors (P < .001). Of the 97 total errors, 36 were due to misreading the drug concentration on the label, 34 were due to calculation errors, and 8 were due to misplaced decimals. Of the 36 label interpretation errors, 18 (50%) occurred with digoxin or insulin. Computerized assistance reduced errors and the time required for drug administration calculations. A pattern of errors emerged, showing that reading/interpreting certain drug labels was more error prone. Optimizing the layout of drug labels could reduce the error rate for error-prone labels.

  5. Clinical biochemistry laboratory rejection rates due to various types of preanalytical errors.

    PubMed

    Atay, Aysenur; Demir, Leyla; Cuhadar, Serap; Saglam, Gulcan; Unal, Hulya; Aksun, Saliha; Arslan, Banu; Ozkan, Asuman; Sutcu, Recep

    2014-01-01

    Preanalytical errors, occurring anywhere in the process from the initial test request to the admission of specimens to the laboratory, cause the rejection of samples. The aim of this study was to better explain the reasons for rejected samples and their rates in certain test groups in our laboratory. This preliminary study examined the samples rejected over a one-year period, based on the rates and types of inappropriateness. Test requests and blood samples of clinical chemistry, immunoassay, hematology, glycated hemoglobin, coagulation and erythrocyte sedimentation rate test units were evaluated. Types of inappropriateness were evaluated as follows: improperly labelled samples, hemolysed specimens, clotted specimens, insufficient specimen volume and total request errors. A total of 5,183,582 test requests from 1,035,743 blood collection tubes were considered. The total rejection rate was 0.65%. The rejection rate of the coagulation group (2.28%) was significantly higher than that of the other test groups (P < 0.001), with insufficient specimen volume accounting for 1.38%. Rejection rates due to hemolysis, clotted specimens and insufficient specimen volume were found to be 8%, 24% and 34%, respectively. Total request errors, particularly unintelligible requests, accounted for 32% of the total for inpatients. The errors were especially attributable to unintelligible or inappropriate test requests, improperly labelled samples from inpatients, and blood drawing errors, especially insufficient specimen volume, in the coagulation test group. Further studies should be performed after corrective and preventive actions to detect a possible decrease in rejected samples.

  6. Status of pelagic prey fishes in Lake Michigan, 2012

    USGS Publications Warehouse

    Warner, David M.; O'Brien, Timothy P.; Farha, Steve A.; Claramunt, Randall M.; Hanson, Dale

    2012-01-01

    Acoustic surveys were conducted in late summer/early fall during the years 1992-1996 and 2001-2012 to estimate pelagic prey fish biomass in Lake Michigan. Midwater trawling during the surveys as well as target strength provided a measure of species and size composition of the fish community for use in scaling acoustic data and providing species-specific abundance estimates. The 2012 survey consisted of 26 acoustic transects (576 km total) and 31 midwater tows. Mean total prey fish biomass was 6.4 kg/ha (relative standard error, RSE = 15%) or 31 kilotonnes (kt = 1,000 metric tons), which was 1.5 times the estimate for 2011 and 22% of the long-term mean. The increase from 2011 resulted from increased biomass of age-0 alewife, age-1 or older alewife, and large bloater. The abundance of the 2012 alewife year class was similar to the average, and this year-class contributed 35% of total alewife biomass (4.9 kg/ha, RSE = 17%), while the 2010 alewife year-class contributed 58%. The 2010 year class made up 89% of age-1 or older alewife biomass. In 2012, alewife comprised 77% of total prey fish biomass, while rainbow smelt and bloater were 4 and 19% of total biomass, respectively. Rainbow smelt biomass in 2012 (0.25 kg/ha, RSE = 17%) was 40% of the rainbow smelt biomass in 2011 and 5% of the long term mean. Bloater biomass was much lower (1.2 kg/ha, RSE = 12%) than in the 1990s, and mean density of small bloater in 2012 (191 fish/ha, RSE = 24%) was lower than peak values observed in 2007-2009. In 2012, pelagic prey fish biomass in Lake Michigan was similar to Lake Superior and Lake Huron. Prey fish biomass remained well below the Fish Community Objectives target of 500-800 kt, and key native species remain absent or rare.

  7. Guidelines and recommendations for household and external travel surveys.

    DOT National Transportation Integrated Search

    2010-03-01

    The Texas Department of Transportation has a comprehensive ongoing travel survey program. Research under RMC : 0-5711 examined areas within two select travel surveys concerning quality control issues involved in data collection : and sampling error i...

  8. Improving accuracy in household and external travel surveys.

    DOT National Transportation Integrated Search

    2010-01-01

    The Texas Department of Transportation has a comprehensive on-going travel survey program. This research examines areas within two select travel surveys concerning quality control issues involved in data collection and sampling error in the data caus...

  9. A relational leadership perspective on unit-level safety climate.

    PubMed

    Thompson, Debra N; Hoffman, Leslie A; Sereika, Susan M; Lorenz, Holly L; Wolf, Gail A; Burns, Helen K; Minnier, Tamra E; Ramanujam, Rangaraj

    2011-11-01

    This study compared nursing staff perceptions of safety climate in clinical units characterized by high and low ratings of leader-member exchange (LMX) and explored characteristics that might account for differences. Frontline nursing leaders' actions are critical to ensure patient safety. Specific leadership behaviors to achieve this goal are underexamined. The LMX perspective has shown promise in nonhealthcare settings as a means to explain safety climate perceptions. Cross-sectional survey of staff (n = 711) and unit directors from 34 inpatient units in an academic medical center was conducted. Significant differences were found between high and low LMX scoring units on supervisor safety expectations, organizational learning-continuous improvement, total communication, feedback and communication about errors, and nonpunitive response to errors. The LMX perspective can be used to identify differences in perceptions of safety climate among nursing staff. Future studies are needed to identify strategies to improve staff safety attitudes and behaviors.

  10. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Funds and State Matching and Maintenance-of-Effort (MOE Funds): (1) Percentage of cases with an error... cases in the sample with an error compared to the total number of cases in the sample; (2) Percentage of cases with an improper payment (both over and under payments), expressed as the total number of cases in...

  11. Total ozone trend significance from space time variability of daily Dobson data

    NASA Technical Reports Server (NTRS)

    Wilcox, R. W.

    1981-01-01

    Estimates of standard errors of total ozone time and area means, as derived from ozone's natural temporal and spatial variability and autocorrelation in middle latitudes determined from daily Dobson data are presented. Assessing the significance of apparent total ozone trends is equivalent to assessing the standard error of the means. Standard errors of time averages depend on the temporal variability and correlation of the averaged parameter. Trend detectability is discussed, both for the present network and for satellite measurements.
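
    One common way to express the dependence of a time-average's standard error on temporal variability and autocorrelation is the AR(1) effective-sample-size adjustment, n_eff = n(1 - ρ)/(1 + ρ). The sketch below applies that generic formula to a synthetic daily ozone series; it is an illustration of the idea, not necessarily the exact formulation used in the report.

        import numpy as np

        def autocorrelated_standard_error(values):
            """SE of the mean of an autocorrelated series via the AR(1) effective sample size."""
            values = np.asarray(values, dtype=float)
            n = values.size
            sigma = values.std(ddof=1)
            rho = np.corrcoef(values[:-1], values[1:])[0, 1]   # lag-1 autocorrelation
            rho = max(0.0, rho)                                # guard against negative estimates
            n_eff = n * (1 - rho) / (1 + rho)
            return sigma / np.sqrt(n_eff), rho, n_eff

        # Synthetic daily total-ozone series (Dobson units) with day-to-day persistence.
        rng = np.random.default_rng(1)
        ozone = np.full(365, 320.0)
        for t in range(1, 365):
            ozone[t] = 320 + 0.8 * (ozone[t - 1] - 320) + rng.normal(0, 10)

        se, rho, n_eff = autocorrelated_standard_error(ozone)
        print(f"lag-1 autocorrelation {rho:.2f}, effective n {n_eff:.0f}, SE of annual mean {se:.2f} DU")

    The point mirrored from the abstract is that persistence shrinks the effective number of independent days, so the standard error of a time mean, and hence the detectability of a trend, is larger than a naive sigma/sqrt(n) would suggest.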

  12. Relationships between autofocus methods for SAR and self-survey techniques for SONAR. [Synthetic Aperture Radar (SAR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wahl, D.E.; Jakowatz, C.V. Jr.; Ghiglia, D.C.

    1991-01-01

    Autofocus methods in SAR and self-survey techniques in SONAR have a common mathematical basis in that they both involve estimation and correction of phase errors introduced by sensor position uncertainties. Time delay estimation and correlation methods have been shown to be effective in solving the self-survey problem for towed SONAR arrays. Since it can be shown that platform motion errors introduce similar time-delay estimation problems in SAR imaging, the question arises as to whether such techniques could be effectively employed for autofocus of SAR imagery. With a simple mathematical model for motion errors in SAR, we will show why such correlation/time-delay techniques are not nearly as effective as established SAR autofocus algorithms such as phase gradient autofocus or sub-aperture based methods. This analysis forms an important bridge between signal processing methodologies for SAR and SONAR. 5 refs., 4 figs.

  13. OBSERVATIONS OF BINARY STARS WITH THE DIFFERENTIAL SPECKLE SURVEY INSTRUMENT. III. MEASURES BELOW THE DIFFRACTION LIMIT OF THE WIYN TELESCOPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horch, Elliott P.; Van Altena, William F.; Howell, Steve B.

    2011-06-15

    In this paper, we study the ability of CCD- and electron-multiplying-CCD-based speckle imaging to obtain reliable astrometry and photometry of binary stars below the diffraction limit of the WIYN 3.5 m Telescope. We present a total of 120 measures of binary stars, 75 of which are below the diffraction limit. The measures are divided into two groups that have different measurement accuracy and precision. The first group is composed of standard speckle observations, that is, a sequence of speckle images taken in a single filter, while the second group consists of paired observations where the two observations are taken on the same observing run and in different filters. The more recent paired observations were taken simultaneously with the Differential Speckle Survey Instrument, which is a two-channel speckle imaging system. In comparing our results to the ephemeris positions of binaries with known orbits, we find that paired observations provide the opportunity to identify cases of systematic error in separation below the diffraction limit and after removing these from consideration, we obtain a linear measurement uncertainty of 3-4 mas. However, if observations are unpaired or if two observations taken in the same filter are paired, it becomes harder to identify cases of systematic error, presumably because the largest source of this error is residual atmospheric dispersion, which is color dependent. When observations are unpaired, we find that it is unwise to report separations below approximately 20 mas, as these are most susceptible to this effect. Using the final results obtained, we are able to update two older orbits in the literature and present preliminary orbits for three systems that were discovered by Hipparcos.

  14. Cone-Probe Rake Design and Calibration for Supersonic Wind Tunnel Models

    NASA Technical Reports Server (NTRS)

    Won, Mark J.

    1999-01-01

    A series of experimental investigations were conducted at the NASA Langley Unitary Plan Wind Tunnel (UPWT) to calibrate cone-probe rakes designed to measure the flow field on 1-2% scale, high-speed wind tunnel models from Mach 2.15 to 2.4. The rakes were developed from a previous design that exhibited unfavorable measurement characteristics caused by a high probe spatial density and flow blockage from the rake body. Calibration parameters included Mach number, total pressure recovery, and flow angularity. Reference conditions were determined from a localized UPWT test section flow survey using a 10° supersonic wedge probe. Test section Mach number and total pressure were determined using a novel iterative technique that accounted for boundary layer effects on the wedge surface. Cone-probe measurements were correlated to the surveyed flow conditions using analytical functions and recursive algorithms that resolved Mach number, pressure recovery, and flow angle to within ±0.01, ±1% and ±0.1°, respectively, for angles of attack and sideslip between ±8°. Uncertainty estimates indicated the overall cone-probe calibration accuracy was strongly influenced by the propagation of measurement error into the calculated results.

  15. A novel color vision test for detection of diabetic macular edema.

    PubMed

    Shin, Young Joo; Park, Kyu Hyung; Hwang, Jeong-Min; Wee, Won Ryang; Lee, Jin Hak; Lee, In Bum; Hyon, Joon Young

    2014-01-02

    To determine the sensitivity of the Seoul National University (SNU) computerized color vision test for detecting diabetic macular edema. From May to September 2003, a total of 73 eyes of 73 patients with diabetes mellitus were examined using the SNU computerized color vision test and optical coherence tomography (OCT). Color deficiency was quantified as the total error score on the SNU test and as error scores for each of four color quadrants corresponding to yellows (Q1), greens (Q2), blues (Q3), and reds (Q4). SNU error scores were assessed as a function of OCT foveal thickness and total macular volume (TMV). The error scores in Q1, Q2, Q3, and Q4 measured by the SNU color vision test increased with foveal thickness (P < 0.05), whereas they were not correlated with TMV. Total error scores, the summation of Q1 and Q3, the summation of Q2 and Q4, and blue-yellow (B-Y) error scores were significantly correlated with foveal thickness (P < 0.05), but not with TMV. The observed correlation between SNU color test error scores and foveal thickness indicates that the SNU test may be useful for detection and monitoring of diabetic macular edema.

  16. Prevalence and cost of hospital medical errors in the general and elderly United States populations.

    PubMed

    Mallow, Peter J; Pandya, Bhavik; Horblyuk, Ruslan; Kaplan, Harold S

    2013-12-01

    The primary objective of this study was to quantify the differences in the prevalence rate and costs of hospital medical errors between the general population and an elderly population aged ≥65 years. Methods from an actuarial study of medical errors were modified to identify medical errors in the Premier Hospital Database using data from 2009. Visits with more than four medical errors were removed from the population to avoid over-estimation of cost. Prevalence rates were calculated based on the total number of inpatient visits. There were 3,466,596 total inpatient visits in 2009. Of these, 1,230,836 (36%) occurred in people aged ≥ 65. The prevalence rate was 49 medical errors per 1000 inpatient visits in the general cohort and 79 medical errors per 1000 inpatient visits for the elderly cohort. The top 10 medical errors accounted for more than 80% of the total in the general cohort and the 65+ cohort. The most costly medical error for the general population was postoperative infection ($569,287,000). Pressure ulcers were most costly ($347,166,257) in the elderly population. This study was conducted with a hospital administrative database, and assumptions were necessary to identify medical errors in the database. Further, there was no method to identify errors of omission or misdiagnoses within the database. This study indicates that prevalence of hospital medical errors for the elderly is greater than the general population and the associated cost of medical errors in the elderly population is quite substantial. Hospitals which further focus their attention on medical errors in the elderly population may see a significant reduction in costs due to medical errors as a disproportionate percentage of medical errors occur in this age group.

  17. Disclosing Medical Errors to Patients: Attitudes and Practices of Physicians and Trainees

    PubMed Central

    Jones, Elizabeth W.; Wu, Barry J.; Forman-Hoffman, Valerie L.; Levi, Benjamin H.; Rosenthal, Gary E.

    2007-01-01

    BACKGROUND Disclosing errors to patients is an important part of patient care, but the prevalence of disclosure, and factors affecting it, are poorly understood. OBJECTIVE To survey physicians and trainees about their practices and attitudes regarding error disclosure to patients. DESIGN AND PARTICIPANTS Survey of faculty physicians, resident physicians, and medical students in Midwest, Mid-Atlantic, and Northeast regions of the United States. MEASUREMENTS Actual error disclosure; hypothetical error disclosure; attitudes toward disclosure; demographic factors. RESULTS Responses were received from 538 participants (response rate = 77%). Almost all faculty and residents responded that they would disclose a hypothetical error resulting in minor (97%) or major (93%) harm to a patient. However, only 41% of faculty and residents had disclosed an actual minor error (resulting in prolonged treatment or discomfort), and only 5% had disclosed an actual major error (resulting in disability or death). Moreover, 19% acknowledged not disclosing an actual minor error and 4% acknowledged not disclosing an actual major error. Experience with malpractice litigation was not associated with less actual or hypothetical error disclosure. Faculty were more likely than residents and students to disclose a hypothetical error and less concerned about possible negative consequences of disclosure. Several attitudes were associated with greater likelihood of hypothetical disclosure, including the belief that disclosure is right even if it comes at a significant personal cost. CONCLUSIONS There appears to be a gap between physicians’ attitudes and practices regarding error disclosure. Willingness to disclose errors was associated with higher training level and a variety of patient-centered attitudes, and it was not lessened by previous exposure to malpractice litigation. PMID:17473944

  18. How Trainees Would Disclose Medical Errors: Educational Implications for Training Programs

    PubMed Central

    White, Andrew A.; Bell, Sigall K.; Krauss, Melissa J; Garbutt, Jane; Dunagan, W. Claiborne; Fraser, Victoria J.; Levinson, Wendy; Larson, Eric B.; Gallagher, Thomas H.

    2012-01-01

    Background Disclosing harmful errors to patients is recommended, but appears to be uncommon. Understanding how trainees disclose errors and how those practices evolve during training could help educators design programs to address this gap. Purpose To determine how trainees would disclose medical errors. Methods A survey of 758 trainees (488 students and 270 residents) in internal medicine at two academic medical centers. Surveys depicted one of two harmful error scenarios that varied by how apparent the error would be to the patient. We measured attitudes and disclosure content using scripted responses. Results Trainees reported their intent to disclose the error as “definitely” (43%) “probably” (47%) “only if asked by patient” (9%), and “definitely not” (1%). Trainees were more likely to disclose obvious errors in comparison with ones patients were unlikely to recognize (55% vs. 30%, P<0.01). Respondents varied widely in what information they would disclose. Fifty percent of trainees chose statements explicitly stating an error occurred rather than only an adverse event. Regarding apologies, trainees were split between a general expression of regret (52%) and an explicit apology (46%). Respondents at higher levels of training were less likely to use explicit apologies (Trend P<0.01). Prior disclosure training was associated with increased willingness to disclose errors (OR 1.40, P=0.03). Conclusions Trainees may not be prepared to disclose medical errors to patients, and worrisome trends in trainee apology practices were observed across levels of training. Medical educators should intensify efforts to enhance trainees’ skills at meeting patients’ expectations for open disclosure of harmful medical errors. PMID:21401685

  19. Outpatient Prescribing Errors and the Impact of Computerized Prescribing

    PubMed Central

    Gandhi, Tejal K; Weingart, Saul N; Seger, Andrew C; Borus, Joshua; Burdick, Elisabeth; Poon, Eric G; Leape, Lucian L; Bates, David W

    2005-01-01

    Background Medication errors are common among inpatients and many are preventable with computerized prescribing. Relatively little is known about outpatient prescribing errors or the impact of computerized prescribing in this setting. Objective To assess the rates, types, and severity of outpatient prescribing errors and understand the potential impact of computerized prescribing. Design Prospective cohort study in 4 adult primary care practices in Boston using prescription review, patient survey, and chart review to identify medication errors, potential adverse drug events (ADEs) and preventable ADEs. Participants Outpatients over age 18 who received a prescription from 24 participating physicians. Results We screened 1879 prescriptions from 1202 patients, and completed 661 surveys (response rate 55%). Of the prescriptions, 143 (7.6%; 95% confidence interval (CI) 6.4% to 8.8%) contained a prescribing error. Three errors led to preventable ADEs and 62 (43%; 3% of all prescriptions) had potential for patient injury (potential ADEs); 1 was potentially life-threatening (2%) and 15 were serious (24%). Errors in frequency (n=77, 54%) and dose (n=26, 18%) were common. The rates of medication errors and potential ADEs were not significantly different at basic computerized prescribing sites (4.3% vs 11.0%, P=.31; 2.6% vs 4.0%, P=.16) compared to handwritten sites. Advanced checks (including dose and frequency checking) could have prevented 95% of potential ADEs. Conclusions Prescribing errors occurred in 7.6% of outpatient prescriptions and many could have harmed patients. Basic computerized prescribing systems may not be adequate to reduce errors. More advanced systems with dose and frequency checking are likely needed to prevent potentially harmful errors. PMID:16117752

  20. Estimating regression coefficients from clustered samples: Sampling errors and optimum sample allocation

    NASA Technical Reports Server (NTRS)

    Kalton, G.

    1983-01-01

    A number of surveys were conducted to study the relationship between the level of aircraft or traffic noise exposure experienced by people living in a particular area and their annoyance with it. These surveys generally employ a clustered sample design which affects the precision of the survey estimates. Regression analysis of annoyance on noise measures and other variables is often an important component of the survey analysis. Formulae are presented for estimating the standard errors of regression coefficients and ratio of regression coefficients that are applicable with a two- or three-stage clustered sample design. Using a simple cost function, they also determine the optimum allocation of the sample across the stages of the sample design for the estimation of a regression coefficient.
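
    A first-order way to see how clustering inflates standard errors is the usual design effect, deff = 1 + (m - 1)ρ, where m is the average cluster size and ρ the intraclass correlation. The sketch below applies that standard adjustment to invented numbers; the paper's formulae for regression coefficients and for optimum allocation are more detailed than this.

        import math

        def clustered_standard_error(se_srs, avg_cluster_size, icc):
            """Inflate a simple-random-sampling SE by the square root of the design effect."""
            deff = 1 + (avg_cluster_size - 1) * icc
            return se_srs * math.sqrt(deff), deff

        # Hypothetical noise-annoyance survey: 50 clusters of 12 respondents, ICC = 0.05.
        se_srs = 0.020                      # standard error under simple random sampling
        se_clu, deff = clustered_standard_error(se_srs, avg_cluster_size=12, icc=0.05)
        print(f"design effect {deff:.2f}, clustered SE {se_clu:.4f}")   # deff = 1.55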

  1. Effects of Simplifying Choice Tasks on Estimates of Taste Heterogeneity in Stated-Choice Surveys

    PubMed Central

    Johnson, F. Reed; Ozdemir, Semra; Phillips, Kathryn A

    2011-01-01

    Researchers usually employ orthogonal arrays or D-optimal designs with little or no attribute overlap in stated-choice surveys. The challenge is to balance statistical efficiency and respondent burden to minimize the overall error in the survey responses. This study examined whether simplifying the choice task, by using a design with more overlap, provides advantages over standard minimum-overlap methods. We administered two designs for eliciting HIV test preferences to split samples. Surveys were undertaken at four HIV testing locations in San Francisco, California. Personal characteristics had different effects on willingness to pay for the two treatments, and gains in statistical efficiency in the minimal-overlap version more than compensated for possible imprecision from increased measurement error. PMID:19880234

  2. Total error shift patterns for daily CT on rails image-guided radiotherapy to the prostate bed

    PubMed Central

    2011-01-01

    Background To evaluate the daily total error shift patterns on post-prostatectomy patients undergoing image guided radiotherapy (IGRT) with a diagnostic quality computer tomography (CT) on rails system. Methods A total of 17 consecutive post-prostatectomy patients receiving adjuvant or salvage IMRT using CT-on-rails IGRT were analyzed. The prostate bed's daily total error shifts were evaluated for a total of 661 CT scans. Results In the right-left, cranial-caudal, and posterior-anterior directions, 11.5%, 9.2%, and 6.5% of the 661 scans required no position adjustments; 75.3%, 66.1%, and 56.8% required a shift of 1 - 5 mm; 11.5%, 20.9%, and 31.2% required a shift of 6 - 10 mm; and 1.7%, 3.8%, and 5.5% required a shift of more than 10 mm, respectively. There was evidence of correlation between the x and y, x and z, and y and z axes in 3, 3, and 3 of 17 patients, respectively. Univariate (ANOVA) analysis showed that the total error pattern was random in the x, y, and z axis for 10, 5, and 2 of 17 patients, respectively, and systematic for the rest. Multivariate (MANOVA) analysis showed that the (x,y), (x,z), (y,z), and (x, y, z) total error pattern was random in 5, 1, 1, and 1 of 17 patients, respectively, and systematic for the rest. Conclusions The overall daily total error shift pattern for these 17 patients simulated with an empty bladder, and treated with CT on rails IGRT was predominantly systematic. Despite this, the temporal vector trends showed complex behaviors and unpredictable changes in magnitude and direction. These findings highlight the importance of using daily IGRT in post-prostatectomy patients. PMID:22024279

  3. Optimising 4-D surface change detection: an approach for capturing rockfall magnitude-frequency

    NASA Astrophysics Data System (ADS)

    Williams, Jack G.; Rosser, Nick J.; Hardy, Richard J.; Brain, Matthew J.; Afana, Ashraf A.

    2018-02-01

    We present a monitoring technique tailored to analysing change from near-continuously collected, high-resolution 3-D data. Our aim is to fully characterise geomorphological change typified by an event magnitude-frequency relationship that adheres to an inverse power law or similar. While recent advances in monitoring have enabled changes in volume across more than 7 orders of magnitude to be captured, event frequency is commonly assumed to be interchangeable with the time-averaged event numbers between successive surveys. Where events coincide, or coalesce, or where the mechanisms driving change are not spatially independent, apparent event frequency must be partially determined by survey interval. The data reported have been obtained from a permanently installed terrestrial laser scanner, which permits an increased frequency of surveys. Surveying from a single position raises challenges, given the single viewpoint onto a complex surface and the need for computational efficiency associated with handling a large time series of 3-D data. A workflow is presented that optimises the detection of change by filtering and aligning scans to improve repeatability. An adaptation of the M3C2 algorithm is used to detect 3-D change to overcome data inconsistencies between scans. Individual rockfall geometries are then extracted and the associated volumetric errors modelled. The utility of this approach is demonstrated using a dataset of ~9 × 10³ surveys acquired at ~1 h intervals over 10 months. The magnitude-frequency distribution of rockfall volumes generated is shown to be sensitive to monitoring frequency. Using a 1 h interval between surveys, rather than 30 days, the volume contribution from small (< 0.1 m³) rockfalls increases from 67 to 98% of the total, and the number of individual rockfalls observed increases by over 3 orders of magnitude. High-frequency monitoring therefore holds considerable implications for magnitude-frequency derivatives, such as hazard return intervals and erosion rates. As such, while high-frequency monitoring has potential to describe short-term controls on geomorphological change and more realistic magnitude-frequency relationships, the assessment of longer-term erosion rates may be more suited to less-frequent data collection with lower accumulative errors.
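
    To make the magnitude-frequency fitting concrete, the sketch below estimates a power-law exponent from synthetic rockfall volumes with a maximum-likelihood (Hill-type) estimator and crudely mimics how a coarse survey interval coalesces events into fewer, larger apparent rockfalls. The volumes, interval counts and exponent are assumptions for illustration, not the monitored dataset described above.

        import numpy as np

        def powerlaw_exponent(volumes, v_min):
            """Maximum-likelihood (Hill-type) exponent for p(v) ~ v**(-alpha), v >= v_min."""
            v = np.asarray(volumes, dtype=float)
            v = v[v >= v_min]
            return 1 + v.size / np.sum(np.log(v / v_min))

        def summarise(volumes, v_min, label):
            v = np.asarray(volumes, dtype=float)
            small_share = v[v < 0.1].sum() / v.sum()   # volume share from < 0.1 m^3 events
            print(f"{label}: {v.size} events, alpha ~ {powerlaw_exponent(v, v_min):.2f}, "
                  f"small-rockfall volume share {100 * small_share:.0f}%")

        rng = np.random.default_rng(2)
        v_min, alpha_true, n_events = 1e-3, 2.0, 20000
        volumes = v_min * (1 - rng.random(n_events)) ** (-1 / (alpha_true - 1))   # Pareto draws
        summarise(volumes, v_min, "high-frequency catalogue")

        # Coarse interval: events falling in the same interval (and, implicitly, the same
        # patch of cliff) merge into a single apparent rockfall.
        groups = rng.integers(0, 1500, n_events)
        merged = np.bincount(groups, weights=volumes, minlength=1500)
        summarise(merged[merged > 0], v_min, "coarse-interval catalogue")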

  4. SKA weak lensing- II. Simulated performance and survey design considerations

    NASA Astrophysics Data System (ADS)

    Bonaldi, Anna; Harrison, Ian; Camera, Stefano; Brown, Michael L.

    2016-12-01

    We construct a pipeline for simulating weak lensing cosmology surveys with the Square Kilometre Array (SKA), taking as inputs telescope sensitivity curves; correlated source flux, size and redshift distributions; a simple ionospheric model; source redshift and ellipticity measurement errors. We then use this simulation pipeline to optimize a 2-yr weak lensing survey performed with the first deployment of the SKA (SKA1). Our assessments are based on the total signal to noise of the recovered shear power spectra, a metric that we find to correlate very well with a standard dark energy figure of merit. We first consider the choice of frequency band, trading off increases in number counts at lower frequencies against poorer resolution; our analysis strongly prefers the higher frequency Band 2 (950-1760 MHz) channel of the SKA-MID telescope to the lower frequency Band 1 (350-1050 MHz). Best results would be obtained by allowing the centre of Band 2 to shift towards lower frequency, around 1.1 GHz. We then move on to consider survey size, finding that an area of 5000 deg² is optimal for most SKA1 instrumental configurations. Finally, we forecast the performance of a weak lensing survey with the second deployment of the SKA. The increased survey size (3π steradian) and sensitivity improve both the signal to noise and the dark energy metrics by two orders of magnitude.

  5. Recommended survey designs for occupancy modelling using motion-activated cameras: insights from empirical wildlife data

    PubMed Central

    Lewis, Jesse S.; Gerber, Brian D.

    2014-01-01

    Motion-activated cameras are a versatile tool that wildlife biologists can use for sampling wild animal populations to estimate species occurrence. Occupancy modelling provides a flexible framework for the analysis of these data, explicitly recognizing that, given a species occupies an area, the probability of detecting it is often less than one. Despite the number of studies using camera data in an occupancy framework, there is only limited guidance from the scientific literature about survey design trade-offs when using motion-activated cameras. A fuller understanding of these trade-offs will allow researchers to maximise available resources and determine whether the objectives of a monitoring program or research study are achievable. We use an empirical dataset collected from 40 cameras deployed across 160 km² of the Western Slope of Colorado, USA to explore how survey effort (number of cameras deployed and the length of sampling period) affects the accuracy and precision (i.e., error) of the occupancy estimate for ten mammal and three virtual species. We do this using a simulation approach where species occupancy and detection parameters were informed by empirical data from motion-activated cameras. A total of 54 survey designs were considered by varying combinations of sites (10–120 cameras) and occasions (20–120 survey days). Our findings demonstrate that increasing total sampling effort generally decreases error associated with the occupancy estimate, but changing the number of sites or sampling duration can have very different results, depending on whether a species is spatially common or rare (occupancy = ψ) and easy or hard to detect when available (detection probability = p). For rare species with a low probability of detection (i.e., raccoon and spotted skunk) the required survey effort includes maximizing the number of sites and the number of survey days, often to a level that may be logistically unrealistic for many studies. For common species with low detection (i.e., bobcat and coyote) the most efficient sampling approach was to increase the number of occasions (survey days). However, for common species that are moderately detectable (i.e., cottontail rabbit and mule deer), occupancy could reliably be estimated with comparatively low numbers of cameras over a short sampling period. We provide general guidelines for reliably estimating occupancy across a range of terrestrial species (rare to common: ψ = 0.175–0.970, and low to moderate detectability: p = 0.003–0.200) using motion-activated cameras. Wildlife researchers/managers with limited knowledge of the relative abundance and likelihood of detection of a particular species can apply these guidelines regardless of location. We emphasize the importance of prior biological knowledge, defined objectives and detailed planning (e.g., simulating different study-design scenarios) for designing effective monitoring programs and research studies. PMID:25210658
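
    A minimal sketch of the kind of design simulation described above: single-season occupancy data are simulated for a chosen number of sites and occasions, occupancy is re-estimated by maximum likelihood, and the error of the estimate is summarised across replicates. The parameter values, optimiser set-up, and design grid are illustrative assumptions, not the authors' actual simulation code.

```python
# Minimal sketch: how survey effort (sites x occasions) affects the error of a
# single-season occupancy estimate with constant occupancy (psi) and detection (p).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def simulate(psi, p, n_sites, n_occasions):
    z = rng.random(n_sites) < psi                        # latent occupancy state
    y = (rng.random((n_sites, n_occasions)) < p) & z[:, None]
    return y.astype(int)                                 # detection histories

def neg_log_lik(theta, y):
    psi, p = 1.0 / (1.0 + np.exp(-theta))                # logit -> probability
    k = y.shape[1]
    d = y.sum(axis=1)                                    # detections per site
    lik = np.where(d > 0,
                   psi * p**d * (1 - p)**(k - d),        # detected at least once
                   psi * (1 - p)**k + (1 - psi))         # never detected
    return -np.sum(np.log(lik))

def estimate_psi(y):
    res = minimize(neg_log_lik, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
    return 1.0 / (1.0 + np.exp(-res.x[0]))

def design_rmse(psi, p, n_sites, n_occasions, n_sims=200):
    est = [estimate_psi(simulate(psi, p, n_sites, n_occasions)) for _ in range(n_sims)]
    return np.sqrt(np.mean((np.array(est) - psi) ** 2))

# rare/hard-to-detect versus common/moderately detectable species
for psi, p in [(0.2, 0.05), (0.8, 0.2)]:
    for sites, days in [(40, 30), (40, 120), (120, 30)]:
        print(f"psi={psi}, p={p}, sites={sites}, days={days}: "
              f"RMSE={design_rmse(psi, p, sites, days):.3f}")
```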

  6. Automated drug dispensing system reduces medication errors in an intensive care setting.

    PubMed

    Chapuis, Claire; Roustit, Matthieu; Bal, Gaëlle; Schwebel, Carole; Pansu, Pascal; David-Tchouda, Sandra; Foroni, Luc; Calop, Jean; Timsit, Jean-François; Allenet, Benoît; Bosson, Jean-Luc; Bedouch, Pierrick

    2010-12-01

    We aimed to assess the impact of an automated dispensing system on the incidence of medication errors related to picking, preparation, and administration of drugs in a medical intensive care unit. We also evaluated the clinical significance of such errors and user satisfaction. Preintervention and postintervention study involving a control and an intervention medical intensive care unit. Two medical intensive care units in the same department of a 2,000-bed university hospital. Adult medical intensive care patients. After a 2-month observation period, we implemented an automated dispensing system in one of the units (study unit) chosen randomly, with the other unit being the control. The overall error rate was expressed as a percentage of total opportunities for error. The severity of errors was classified according to National Coordinating Council for Medication Error Reporting and Prevention categories by an expert committee. User satisfaction was assessed through self-administered questionnaires completed by nurses. A total of 1,476 medications for 115 patients were observed. After automated dispensing system implementation, we observed a reduced percentage of total opportunities for error in the study compared to the control unit (13.5% and 18.6%, respectively; p<.05); however, no significant difference was observed before automated dispensing system implementation (20.4% and 19.3%, respectively; not significant). Before-and-after comparisons in the study unit also showed a significantly reduced percentage of total opportunities for error (20.4% and 13.5%; p<.01). An analysis of detailed opportunities for error showed a significant impact of the automated dispensing system in reducing preparation errors (p<.05). Most errors caused no harm (National Coordinating Council for Medication Error Reporting and Prevention category C). The automated dispensing system did not reduce errors causing harm. Finally, the mean for working conditions improved from 1.0±0.8 to 2.5±0.8 on the four-point Likert scale. The implementation of an automated dispensing system reduced overall medication errors related to picking, preparation, and administration of drugs in the intensive care unit. Furthermore, most nurses favored the new drug dispensation organization.

  7. The error in total error reduction.

    PubMed

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
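
    A minimal sketch of the contrast described above, assuming a Rescorla-Wagner-style implementation of total error reduction and a cue-specific implementation of local error reduction; the learning rate, trial structure, and number of trials are illustrative, not taken from the reviewed models.

```python
# Minimal sketch contrasting total error reduction (TER, Rescorla-Wagner style)
# with local error reduction (LER) on a reinforced compound (AB+) trial sequence.
import numpy as np

def train(update, n_trials=100, alpha=0.1):
    w = np.zeros(2)                         # associative strengths of cues A and B
    x = np.array([1.0, 1.0])                # both cues present on every trial
    outcome = 1.0                           # trial is reinforced
    for _ in range(n_trials):
        w = update(w, x, outcome, alpha)
    return w

def ter_update(w, x, outcome, alpha):
    # TER: every cue learns from the discrepancy between the outcome and the
    # summed prediction of all cues present on the trial.
    total_error = outcome - np.dot(w, x)
    return w + alpha * total_error * x

def ler_update(w, x, outcome, alpha):
    # LER: each cue learns from the discrepancy between the outcome and its own
    # prediction, ignoring the other cues present on the trial.
    local_error = outcome - w * x
    return w + alpha * local_error * x

print("TER weights after AB+ training:", train(ter_update))  # shared, sum to ~1
print("LER weights after AB+ training:", train(ler_update))  # each approaches ~1
```

    Under the TER rule the two cues compete for a fixed amount of associative strength, whereas under the LER rule each cue's weight converges on the outcome independently; that difference is what the authors' model comparison turns on.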

  8. Food Insecurity and Health Care Expenditures in the United States, 2011-2013.

    PubMed

    Berkowitz, Seth A; Basu, Sanjay; Meigs, James B; Seligman, Hilary K

    2018-06-01

    To determine whether food insecurity, limited or uncertain food access owing to cost, is associated with greater health care expenditures. Nationally representative sample of the civilian noninstitutionalized population of the United States (2011 National Health Interview Survey [NHIS] linked to 2012-2013 Medical Expenditure Panel Survey [MEPS]). Longitudinal retrospective cohort. A total of 16,663 individuals underwent assessment of food insecurity, using the 10-item adult 30-day food security module, in the 2011 NHIS. Their total health care expenditures in 2012 and 2013 were recorded in MEPS. Expenditure data were analyzed using zero-inflated negative binomial regression and adjusted for age, gender, race/ethnicity, education, income, insurance, and residence area. Fourteen percent of individuals reported food insecurity, representing 41,616,255 Americans. Mean annualized total expenditures were $4,113 (standard error $115); 9.2 percent of all individuals had no health care expenditures. In multivariable analyses, those with food insecurity had significantly greater estimated mean annualized health care expenditures ($6,072 vs. $4,208, p < .0001), an extra $1,863 in health care expenditure per year, or $77.5 billion in additional health care expenditure annually. Food insecurity was associated with greater subsequent health care expenditures. Future studies should determine whether food insecurity interventions can improve health and reduce health care costs. © Health Research and Educational Trust.

  9. Test-Retest Reliability of the Adaptive Chemistry Assessment Survey for Teachers: Measurement Error and Alternatives to Correlation

    ERIC Educational Resources Information Center

    Harshman, Jordan; Yezierski, Ellen

    2016-01-01

    Determining the error of measurement is a necessity for researchers engaged in bench chemistry, chemistry education research (CER), and a multitude of other fields. Discussions regarding what constructs measurement error entails and how to best measure them have occurred, but the critiques about traditional measures have yielded few alternatives.…

  10. Diagnostic, pharmacy-based, and self-reported health measures in risk equalization models.

    PubMed

    Stam, Pieter J A; van Vliet, René C J A; van de Ven, Wynand P M M

    2010-05-01

    Current research on the added value of self-reported health measures for risk equalization modeling does not include all types of self-reported health measures; and/or is compared with a limited set of medically diagnosed or pharmacy-based diseases; and/or is limited to specific populations of high-risk individuals. The objective of our study is to determine the predictive power of all types of self-reported health measures for prospective modeling of health care expenditures in a general population of adult Dutch sickness fund enrollees, given that pharmacy and diagnostic data from administrative records are already included in the risk equalization formula. We used 4 models of 2002 total, inpatient and outpatient expenditures to evaluate the separate and combined predictive ability of 2 kinds of data: (1) Pharmacy-based (PCGs) and Diagnosis-based (DCGs) Cost Groups and (2) summarized self-reported health information. Model performance is measured at the total population level using R2 and mean absolute prediction error; also, by examining mean discrepancies between model-predicted and actual expenditures (ie, expected over- or undercompensation) for members of potentially "mispriced" subgroups. These subgroups are identified by self-reports from prior-year health surveys and utilization and expenditure data from 5 preceding years. Subjects were 18,617 respondents to a health survey, held among a stratified sample of adult members of the largest Dutch sickness fund in 2002, with an overrepresentation of people in poor health. The data were extracted from a claims database and a health survey. The claims-based data are the outcomes of total, inpatient, and outpatient annualized expenditures in 2002; age, gender, PCGs, DCGs in 2001; and health care expenditures and hospitalizations during the years 1997 to 2001. The SF-36, Organization for Economic Cooperation and Development items, and long-term diseases and conditions were collected by a special purpose health survey conducted in the last quarter of 2001. Out-of-sample R2 equals 17.2%, 2.6%, and 32.4% for the models of total, inpatient and outpatient expenditures including PCGs, DCGs, and self-reported health measures. Self-reported health measures contribute less to predictive power than PCGs and DCGs. PCGs and DCGs also predict better than self-reported health measures for people with top 25% total expenditures or hospitalizations in each year during a 5-year period. On the other hand, self-reported health measures are better predictors than PCGs and DCGs for people without any top 25% expenditures during the 5-year period, for switchers, and for most subgroups of relatively unhealthy people defined by self-reported health measures. Among the set of self-reported health measures, the SF-36 adds most to predictive power in terms of R2, mean absolute prediction error, and for almost all studied subgroups. It is concluded that the self-reported health measures make an independent contribution to forecasting health care expenditures, even if the prediction model already includes diagnostic and pharmacy-based information currently used in Dutch risk equalization models.

  11. UCAC3 Proper Motion Survey. I. Discovery of New Proper Motion Stars in UCAC3 With 0.40″/yr > μ ≥ 0.18″/yr Between Declinations -90° and -47°

    DTIC Science & Technology

    2010-09-01

    overlooked during previous SCR and other searches. The Two-Micron All Sky Survey (2MASS) was used to probe for and reduce systematic errors in UCAC CCD ... of 50–200 mas, when compared to 2MASS data. For a detailed description of the derived UCAC3 proper motions see Zacharias et al. (2010). An effort was ... meeting the declination and proper motion survey limits, all stars (1) must be in the 2MASS catalog with an e2mpho (2MASS photometry error) less than

  12. Error, stress, and teamwork in medicine and aviation: cross sectional surveys

    NASA Technical Reports Server (NTRS)

    Sexton, J. B.; Thomas, E. J.; Helmreich, R. L.

    2000-01-01

    OBJECTIVES: To survey operating theatre and intensive care unit staff about attitudes concerning error, stress, and teamwork and to compare these attitudes with those of airline cockpit crew. DESIGN: Cross sectional surveys. SETTING: Urban teaching and non-teaching hospitals in the United States, Israel, Germany, Switzerland, and Italy. Major airlines around the world. PARTICIPANTS: 1033 doctors, nurses, fellows, and residents working in operating theatres and intensive care units and over 30 000 cockpit crew members (captains, first officers, and second officers). MAIN OUTCOME MEASURES: Perceptions of error, stress, and teamwork. RESULTS: Pilots were least likely to deny the effects of fatigue on performance (26% v 70% of consultant surgeons and 47% of consultant anaesthetists). Most pilots (97%) and intensive care staff (94%) rejected steep hierarchies (in which senior team members are not open to input from junior members), but only 55% of consultant surgeons rejected such hierarchies. High levels of teamwork with consultant surgeons were reported by 73% of surgical residents, 64% of consultant surgeons, 39% of anaesthesia consultants, 28% of surgical nurses, 25% of anaesthetic nurses, and 10% of anaesthetic residents. Only a third of staff reported that errors are handled appropriately at their hospital. A third of intensive care staff did not acknowledge that they make errors. Over half of intensive care staff reported that they find it difficult to discuss mistakes. CONCLUSIONS: Medical staff reported that error is important but difficult to discuss and not handled well in their hospital. Barriers to discussing error are more important since medical staff seem to deny the effect of stress and fatigue on performance. Further problems include differing perceptions of teamwork among team members and reluctance of senior theatre staff to accept input from junior members.

  13. Pediatric crisis resource management training improves emergency medicine trainees' perceived ability to manage emergencies and ability to identify teamwork errors.

    PubMed

    Bank, Ilana; Snell, Linda; Bhanji, Farhan

    2014-12-01

    Improved pediatric crisis resource management (CRM) training is needed in emergency medicine residencies because of the variable nature of exposure to critically ill pediatric patients during training. We created a short, needs-based pediatric CRM simulation workshop with postactivity follow-up to determine retention of CRM knowledge. Our aims were to provide a realistic learning experience for residents and to help the learners recognize common errors in teamwork and improve their perceived abilities to manage ill pediatric patients. Residents participated in a 4-hour objectives-based workshop derived from a formal needs assessment. To quantify their subjective abilities to manage pediatric cases, the residents completed a postworkshop survey (with a retrospective precomponent to assess perceived change). Ability to identify CRM errors was determined via a written assessment of scripted errors in a prerecorded video observed before and 1 month after completion of the workshop. Fifteen of the 16 eligible emergency medicine residents (postgraduate year 1-5) attended the workshop and completed the surveys. There were significant differences in 15 of 16 retrospective pre to post survey items using the Wilcoxon rank sum test for non-parametric data. These included ability to be an effective team leader in general (P < 0.008), delegating tasks appropriately (P < 0.009), and ability to ensure closed-loop communication (P < 0.008). There was a significant improvement in identification of CRM errors through the use of the video assessment from 3 of the 12 CRM errors to 7 of the 12 CRM errors (P < 0.006). The pediatric CRM simulation-based workshop improved the residents' self-perceptions of their pediatric CRM abilities and improved their performance on a video assessment task.

  14. Error, stress, and teamwork in medicine and aviation: cross sectional surveys

    PubMed Central

    Sexton, J Bryan; Thomas, Eric J; Helmreich, Robert L

    2000-01-01

    Objectives: To survey operating theatre and intensive care unit staff about attitudes concerning error, stress, and teamwork and to compare these attitudes with those of airline cockpit crew. Design: Cross sectional surveys. Setting: Urban teaching and non-teaching hospitals in the United States, Israel, Germany, Switzerland, and Italy. Major airlines around the world. Participants: 1033 doctors, nurses, fellows, and residents working in operating theatres and intensive care units and over 30 000 cockpit crew members (captains, first officers, and second officers). Main outcome measures: Perceptions of error, stress, and teamwork. Results: Pilots were least likely to deny the effects of fatigue on performance (26% v 70% of consultant surgeons and 47% of consultant anaesthetists). Most pilots (97%) and intensive care staff (94%) rejected steep hierarchies (in which senior team members are not open to input from junior members), but only 55% of consultant surgeons rejected such hierarchies. High levels of teamwork with consultant surgeons were reported by 73% of surgical residents, 64% of consultant surgeons, 39% of anaesthesia consultants, 28% of surgical nurses, 25% of anaesthetic nurses, and 10% of anaesthetic residents. Only a third of staff reported that errors are handled appropriately at their hospital. A third of intensive care staff did not acknowledge that they make errors. Over half of intensive care staff reported that they find it difficult to discuss mistakes. Conclusions: Medical staff reported that error is important but difficult to discuss and not handled well in their hospital. Barriers to discussing error are more important since medical staff seem to deny the effect of stress and fatigue on performance. Further problems include differing perceptions of teamwork among team members and reluctance of senior theatre staff to accept input from junior members. PMID:10720356

  15. Evaluation of LiDAR-acquired bathymetric and topographic data accuracy in various hydrogeomorphic settings in the Deadwood and South Fork Boise Rivers, West-Central Idaho, 2007

    USGS Publications Warehouse

    Skinner, Kenneth D.

    2011-01-01

    High-quality elevation data in riverine environments are important for fisheries management applications and the accuracy of such data needs to be determined for its proper application. The Experimental Advanced Airborne Research LiDAR (Light Detection and Ranging), or EAARL, system was used to obtain topographic and bathymetric data along the Deadwood and South Fork Boise Rivers in west-central Idaho. The EAARL data were post-processed into bare earth and bathymetric raster and point datasets. Concurrently with the EAARL surveys, real-time kinematic global positioning system surveys were made in three areas along each of the rivers to assess the accuracy of the EAARL elevation data in different hydrogeomorphic settings. The accuracies of the EAARL-derived raster elevation values, determined in open, flat terrain, to provide an optimal vertical comparison surface, had root mean square errors ranging from 0.134 to 0.347 m. Accuracies in the elevation values for the stream hydrogeomorphic settings had root mean square errors ranging from 0.251 to 0.782 m. The greater root mean square errors for the latter data are the result of complex hydrogeomorphic environments within the streams, such as submerged aquatic macrophytes and air bubble entrainment; and those along the banks, such as boulders, woody debris, and steep slopes. These complex environments reduce the accuracy of EAARL bathymetric and topographic measurements. Steep banks emphasize the horizontal location discrepancies between the EAARL and ground-survey data and may not be good representations of vertical accuracy. The EAARL point to ground-survey comparisons produced results with slightly higher but similar root mean square errors than those for the EAARL raster to ground-survey comparisons, emphasizing the minimized horizontal offset by using interpolated values from the raster dataset at the exact location of the ground-survey point as opposed to an actual EAARL point within a 1-meter distance. The average error for the wetted stream channel surface areas was -0.5 percent, while the average error for the wetted stream channel volume was -8.3 percent. The volume of the wetted river channel was underestimated by an average of 31 percent in half of the survey areas, and overestimated by an average of 14 percent in the remainder of the survey areas. The EAARL system is an efficient way to obtain topographic and bathymetric data in large areas of remote terrain. The elevation accuracy of the EAARL system varies throughout the area depending upon the hydrogeomorphic setting, preventing the use of a single accuracy value to describe the EAARL system. The elevation accuracy variations should be kept in mind when using the data, such as for hydraulic modeling or aquatic habitat assessments.

  16. Aged-care nurses in rural Tasmanian clinical settings more likely to think hypothetical medication error would be reported and disclosed compared to hospital and community nurses.

    PubMed

    Carnes, Debra; Kilpatrick, Sue; Iedema, Rick

    2015-12-01

    This study aims to determine the likelihood that rural nurses perceive a hypothetical medication error would be reported in their workplace. The study employed a cross-sectional survey using hypothetical error scenarios with varying levels of harm, set in clinical settings in rural Tasmania. Participants were registered and enrolled nurses, from whom 116 eligible surveys were received. The main outcome was the frequency of responses indicating the likelihood that a severe, moderate or near-miss (no harm) scenario would 'always' be reported or disclosed. Eighty per cent of nurses viewed a severe error would 'always' be reported, 64.8% a moderate error and 45.7% a near-miss error. In regards to disclosure, 54.7% felt this was 'always' likely to occur for a severe error, 44.8% for a moderate error and 26.4% for a near miss. Across all levels of severity, aged-care nurses were more likely than nurses in other settings to view error to 'always' be reported (ranging from 72-96%, P = 0.010-0.042) and disclosed (68-88%, P = 0.000). Those in a management role were more likely to view error to 'always' be disclosed compared to those in a clinical role (50-77.3%, P = 0.008-0.024). Further research in rural clinical settings is needed to improve the understanding of error management and disclosure. © 2015 The Authors. Australian Journal of Rural Health published by Wiley Publishing Asia Pty Ltd on behalf of National Rural Health Alliance.

  17. Grizzly Bear Noninvasive Genetic Tagging Surveys: Estimating the Magnitude of Missed Detections.

    PubMed

    Fisher, Jason T; Heim, Nicole; Code, Sandra; Paczkowski, John

    2016-01-01

    Sound wildlife conservation decisions require sound information, and scientists increasingly rely on remotely collected data over large spatial scales, such as noninvasive genetic tagging (NGT). Grizzly bears (Ursus arctos), for example, are difficult to study at population scales except with noninvasive data, and NGT via hair trapping informs management over much of grizzly bears' range. Considerable statistical effort has gone into estimating sources of heterogeneity, but detection error, which arises when a visiting bear fails to leave a hair sample, has not been independently estimated. We used camera traps to survey grizzly bear occurrence at fixed hair traps and multi-method hierarchical occupancy models to estimate the probability that a visiting bear actually leaves a hair sample with viable DNA. We surveyed grizzly bears via hair trapping and camera trapping for 8 monthly surveys at 50 (2012) and 76 (2013) sites in the Rocky Mountains of Alberta, Canada. We used multi-method occupancy models to estimate site occupancy, probability of detection, and conditional occupancy at a hair trap. We tested the prediction that detection error in NGT studies could be induced by temporal variability within season, leading to underestimation of occupancy. NGT via hair trapping consistently underestimated grizzly bear occupancy at a site when compared to camera trapping. At best occupancy was underestimated by 50%; at worst, by 95%. Probability of false absence was reduced through successive surveys, but this mainly accounts for error imparted by movement among repeated surveys, not necessarily missed detections by extant bears. The implications of missed detections and biased occupancy estimates for density estimation, which forms the crux of management plans, require consideration. We suggest hair-trap NGT studies should estimate and correct detection error using independent survey methods such as cameras, to ensure the reliability of the data upon which species management and conservation actions are based.

  18. Error decomposition and estimation of inherent optical properties.

    PubMed

    Salama, Mhd Suhyb; Stein, Alfred

    2009-09-10

    We describe a methodology to quantify and separate the errors of inherent optical properties (IOPs) derived from ocean-color model inversion. Their total error is decomposed into three different sources, namely, model approximations and inversion, sensor noise, and atmospheric correction. Prior information on plausible ranges of observation, sensor noise, and inversion goodness-of-fit are employed to derive the posterior probability distribution of the IOPs. The relative contribution of each error component to the total error budget of the IOPs, all being of stochastic nature, is then quantified. The method is validated with the International Ocean Colour Coordinating Group (IOCCG) data set and the NASA bio-Optical Marine Algorithm Data set (NOMAD). The derived errors are close to the known values with correlation coefficients of 60-90% and 67-90% for IOCCG and NOMAD data sets, respectively. Model-induced errors inherent to the derived IOPs are between 10% and 57% of the total error, whereas atmospheric-induced errors are in general above 43% and up to 90% for both data sets. The proposed method is applied to synthesized and in situ measured populations of IOPs. The mean relative errors of the derived values are between 2% and 20%. A specific error table to the Medium Resolution Imaging Spectrometer (MERIS) sensor is constructed. It serves as a benchmark to evaluate the performance of the atmospheric correction method and to compute atmospheric-induced errors. Our method has a better performance and is more appropriate to estimate actual errors of ocean-color derived products than the previously suggested methods. Moreover, it is generic and can be applied to quantify the error of any derived biogeophysical parameter regardless of the used derivation.
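
    A minimal sketch of the budgeting idea behind the decomposition, assuming the three error sources are independent and zero-mean so their variances add; the standard deviations are illustrative placeholders, not values derived in the paper.

```python
# Minimal sketch of an error budget: with independent, zero-mean error sources,
# variances add, and each source's share of the total budget is its variance
# divided by the sum of all variances.
import numpy as np

sigma = {
    "model approximation and inversion": 0.04,
    "sensor noise": 0.02,
    "atmospheric correction": 0.06,
}

total_var = sum(s**2 for s in sigma.values())
print(f"total error (1-sigma): {np.sqrt(total_var):.3f}")
for name, s in sigma.items():
    print(f"  {name}: {100 * s**2 / total_var:.1f}% of the total error budget")
```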

  19. A novel validation and calibration method for motion capture systems based on micro-triangulation.

    PubMed

    Nagymáté, Gergely; Tuchband, Tamás; Kiss, Rita M

    2018-06-06

    Motion capture systems are widely used to measure human kinematics. Nevertheless, users must consider system errors when evaluating their results. Most validation techniques for these systems are based on relative distance and displacement measurements. In contrast, our study aimed to analyse the absolute volume accuracy of optical motion capture systems by means of engineering surveying reference measurement of the marker coordinates (uncertainty: 0.75 mm). The method is exemplified on an 18-camera OptiTrack Flex13 motion capture system. The absolute accuracy was defined by the root mean square error (RMSE) between the coordinates measured by the camera system and by engineering surveying (micro-triangulation). The original RMSE of 1.82 mm, caused by scaling error, was reduced to 0.77 mm, while the correlation of errors with their distance from the origin fell from 0.855 to 0.209. A simpler but less accurate absolute accuracy compensation method, using a tape measure over large distances, was also tested, which resulted in similar scaling compensation compared to the surveying method or direct wand size compensation by a high precision 3D scanner. The presented validation methods can be less precise in some respects as compared to previous techniques, but they address an error type that has not been and cannot be studied with the previous validation methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
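
    A minimal sketch of the absolute-accuracy metric used above: the RMSE of point-to-point distances between marker coordinates from the camera system and from the surveying reference. The coordinates below are illustrative, and any rigid or similarity alignment between the two frames is assumed to have been applied already.

```python
# Minimal sketch: RMSE of Euclidean distances between two (N, 3) coordinate sets.
import numpy as np

def coordinate_rmse(measured, reference):
    """RMSE of point-to-point distances between two (N, 3) arrays, in metres."""
    d = np.linalg.norm(measured - reference, axis=1)
    return np.sqrt(np.mean(d**2))

reference = np.array([[0.000, 0.000, 0.000],
                      [1.000, 0.000, 0.000],
                      [1.000, 1.000, 0.500]])
measured = reference + np.array([[ 0.001, -0.002,  0.000],
                                 [ 0.002,  0.001, -0.001],
                                 [-0.001,  0.000,  0.002]])
print(f"RMSE: {coordinate_rmse(measured, reference) * 1000:.2f} mm")
```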

  20. Large-scale retrospective evaluation of regulated liquid chromatography-mass spectrometry bioanalysis projects using different total error approaches.

    PubMed

    Tan, Aimin; Saffaj, Taoufiq; Musuku, Adrien; Awaiye, Kayode; Ihssane, Bouchaib; Jhilal, Fayçal; Sosse, Saad Alaoui; Trabelsi, Fethi

    2015-03-01

    The current approach in regulated LC-MS bioanalysis, which evaluates the precision and trueness of an assay separately, has long been criticized for inadequate balancing of lab-customer risks. Accordingly, different total error approaches have been proposed. The aims of this research were to evaluate the aforementioned risks in reality and the difference among four common total error approaches (β-expectation, β-content, uncertainty, and risk profile) through retrospective analysis of regulated LC-MS projects. Twenty-eight projects (14 validations and 14 productions) were randomly selected from two GLP bioanalytical laboratories, which represent a wide variety of assays. The results show that the risk of accepting unacceptable batches did exist with the current approach (9% and 4% of the evaluated QC levels failed for validation and production, respectively). The fact that the risk was not widespread was only because the precision and bias of modern LC-MS assays are usually much better than the minimum regulatory requirements. Despite minor differences in magnitude, very similar accuracy profiles and/or conclusions were obtained from the four different total error approaches. High correlation was even observed in the width of bias intervals. For example, the mean width of SFSTP's β-expectation is 1.10-fold (CV=7.6%) of that of Saffaj-Ihssane's uncertainty approach, while the latter is 1.13-fold (CV=6.0%) of that of Hoffman-Kringle's β-content approach. To conclude, the risk of accepting unacceptable batches was real with the current approach, suggesting that total error approaches should be used instead. Moreover, any of the four total error approaches may be used because of their overall similarity. Lastly, the difficulties/obstacles associated with the application of total error approaches in routine analysis and their desirable future improvements are discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
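
    A minimal sketch of one of the four approaches compared above: a β-expectation tolerance interval of the SFSTP accuracy-profile type, computed from QC relative errors at a single concentration level and checked against ±15% acceptance limits. The QC values are illustrative, and a full accuracy profile would pool intra- and inter-run variance components rather than use this simple single-run form.

```python
# Minimal sketch of a beta-expectation tolerance interval for QC relative errors
# at one concentration level, checked against +/-15% acceptance limits.
import numpy as np
from scipy import stats

relative_errors = np.array([-3.1, 1.8, 0.4, -2.2, 5.0, -1.5, 2.9, 0.8])  # % of nominal
beta = 0.95
acceptance = 15.0                                   # +/-15% acceptance limits

n = relative_errors.size
bias = relative_errors.mean()
s = relative_errors.std(ddof=1)
k = stats.t.ppf((1 + beta) / 2, df=n - 1) * np.sqrt(1 + 1 / n)

low, high = bias - k * s, bias + k * s
print(f"beta-expectation interval: [{low:.1f}%, {high:.1f}%]")
print("acceptable" if (-acceptance <= low and high <= acceptance) else "not acceptable")
```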

  1. Total Dose Effects on Error Rates in Linear Bipolar Systems

    NASA Technical Reports Server (NTRS)

    Buchner, Stephen; McMorrow, Dale; Bernard, Muriel; Roche, Nicholas; Dusseau, Laurent

    2007-01-01

    The shapes of single event transients in linear bipolar circuits are distorted by exposure to total ionizing dose radiation. Some transients become broader and others become narrower. Such distortions may affect SET system error rates in a radiation environment. If the transients are broadened by TID, the error rate could increase during the course of a mission, a possibility that has implications for hardness assurance.

  2. Association of resident fatigue and distress with perceived medical errors.

    PubMed

    West, Colin P; Tan, Angelina D; Habermann, Thomas M; Sloan, Jeff A; Shanafelt, Tait D

    2009-09-23

    Fatigue and distress have been separately shown to be associated with medical errors. The contribution of each factor when assessed simultaneously is unknown. To determine the association of fatigue and distress with self-perceived major medical errors among resident physicians using validated metrics. Prospective longitudinal cohort study of categorical and preliminary internal medicine residents at Mayo Clinic, Rochester, Minnesota. Data were provided by 380 of 430 eligible residents (88.3%). Participants began training from 2003 to 2008 and completed surveys quarterly through February 2009. Surveys included self-assessment of medical errors, linear analog self-assessment of overall quality of life (QOL) and fatigue, the Maslach Burnout Inventory, the PRIME-MD depression screening instrument, and the Epworth Sleepiness Scale. Frequency of self-perceived, self-defined major medical errors was recorded. Associations of fatigue, QOL, burnout, and symptoms of depression with a subsequently reported major medical error were determined using generalized estimating equations for repeated measures. The mean response rate to individual surveys was 67.5%. Of the 356 participants providing error data (93.7%), 139 (39%) reported making at least 1 major medical error during the study period. In univariate analyses, there was an association of subsequent self-reported error with the Epworth Sleepiness Scale score (odds ratio [OR], 1.10 per unit increase; 95% confidence interval [CI], 1.03-1.16; P = .002) and fatigue score (OR, 1.14 per unit increase; 95% CI, 1.08-1.21; P < .001). Subsequent error was also associated with burnout (ORs per 1-unit change: depersonalization OR, 1.09; 95% CI, 1.05-1.12; P < .001; emotional exhaustion OR, 1.06; 95% CI, 1.04-1.08; P < .001; lower personal accomplishment OR, 0.94; 95% CI, 0.92-0.97; P < .001), a positive depression screen (OR, 2.56; 95% CI, 1.76-3.72; P < .001), and overall QOL (OR, 0.84 per unit increase; 95% CI, 0.79-0.91; P < .001). Fatigue and distress variables remained statistically significant when modeled together with little change in the point estimates of effect. Sleepiness and distress, when modeled together, showed little change in point estimates of effect, but sleepiness no longer had a statistically significant association with errors when adjusted for burnout or depression. Among internal medicine residents, higher levels of fatigue and distress are independently associated with self-perceived medical errors.

  3. A comparison of registration errors with imageless computer navigation during MIS total knee arthroplasty versus standard incision total knee arthroplasty: a cadaveric study.

    PubMed

    Davis, Edward T; Pagkalos, Joseph; Gallie, Price A M; Macgroarty, Kelly; Waddell, James P; Schemitsch, Emil H

    2015-01-01

    Optimal component alignment in total knee arthroplasty has been associated with better functional outcome as well as improved implant longevity. The ability to align components optimally during minimally invasive (MIS) total knee replacement (TKR) has been a cause of concern. Computer navigation is a useful aid in achieving the desired alignment although it is limited by the error during the manual registration of landmarks. Our study aims to compare the registration process error between a standard and an MIS surgical approach. We hypothesized that performing the registration process via an MIS approach would increase the registration error. Five fresh frozen lower limbs were routinely prepared and draped. The registration process was performed through an MIS approach. This was then extended to the standard approach and the registration was performed again. Two surgeons performed the registration process five times with each approach. Performing the registration process through the MIS approach was not associated with higher error compared to the standard approach in the alignment parameters of interest. This finding rejects our hypothesis. Image-free navigated MIS TKR does not appear to carry higher risk of component malalignment due to the registration process error. Navigation can be used during MIS TKR to improve alignment without reduced accuracy due to the approach.

  4. Bayes plus Brass: Estimating Total Fertility for Many Small Areas from Sparse Census Data

    PubMed Central

    Schmertmann, Carl P.; Cavenaghi, Suzana M.; Assunção, Renato M.; Potter, Joseph E.

    2013-01-01

    Small-area fertility estimates are valuable for analysing demographic change, and important for local planning and population projection. In countries lacking complete vital registration, however, small-area estimates are possible only from sparse survey or census data that are potentially unreliable. Such estimation requires new methods for old problems: procedures must be automated if thousands of estimates are required, they must deal with extreme sampling variability in many areas, and they should also incorporate corrections for possible data errors. We present a two-step algorithm for estimating total fertility in such circumstances, and we illustrate by applying the method to 2000 Brazilian Census data for over five thousand municipalities. Our proposed algorithm first smoothes local age-specific rates using Empirical Bayes methods, and then applies a new variant of Brass’s P/F parity correction procedure that is robust under conditions of rapid fertility decline. PMID:24143946
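
    A minimal sketch of the first step only: Empirical Bayes shrinkage of noisy small-area rates toward the overall rate, with the shrinkage weight set by a method-of-moments split of between-area and sampling variance. The counts, exposures, and this simple shrinkage form are illustrative assumptions; the authors' estimator for age-specific rates and the subsequent Brass P/F correction step are not reproduced here.

```python
# Minimal sketch: Empirical Bayes shrinkage of small-area rates toward the
# overall rate, weighted by between-area vs. sampling variance.
import numpy as np

events = np.array([2, 15, 0, 40, 5])                      # e.g. births recorded per area
exposure = np.array([50.0, 300.0, 20.0, 900.0, 120.0])    # e.g. woman-years of exposure

raw_rate = events / exposure
overall = events.sum() / exposure.sum()

sampling_var = overall / exposure                # Poisson approximation per area
between_var = max(raw_rate.var() - sampling_var.mean(), 1e-8)

weight = between_var / (between_var + sampling_var)   # weight given to the local rate
eb_rate = weight * raw_rate + (1 - weight) * overall

for area, (raw, eb) in enumerate(zip(raw_rate, eb_rate)):
    print(f"area {area}: raw rate {raw:.4f} -> EB rate {eb:.4f}")
```

    Areas with little exposure are pulled strongly toward the overall rate, while well-observed areas keep rates close to their raw values, which is the behaviour that makes the estimates usable for thousands of municipalities at once.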

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, T. S.; DePoy, D. L.; Marshall, J. L.

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey’s stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. The residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.

  6. Effects of vertical distribution of water vapor and temperature on total column water vapor retrieval error

    NASA Technical Reports Server (NTRS)

    Sun, Jielun

    1993-01-01

    Results are presented of a test of the physically based total column water vapor retrieval algorithm of Wentz (1992) for sensitivity to realistic vertical distributions of temperature and water vapor. The ECMWF monthly averaged temperature and humidity fields are used to simulate the spatial pattern of systematic retrieval error of total column water vapor due to this sensitivity. The estimated systematic error is within 0.1 g/sq cm over about 70 percent of the global ocean area; systematic errors greater than 0.3 g/sq cm are expected to exist only over a few well-defined regions, about 3 percent of the global oceans, assuming that the global mean value is unbiased.

  7. Development of a refractive error quality of life scale for Thai adults (the REQ-Thai).

    PubMed

    Sukhawarn, Roongthip; Wiratchai, Nonglak; Tatsanavivat, Pyatat; Pitiyanuwat, Somwung; Kanato, Manop; Srivannaboon, Sabong; Guyatt, Gordon H

    2011-08-01

    To develop a scale for measuring refractive error quality of life (QOL) for Thai adults. The full survey comprised 424 respondents from 5 medical centers in Bangkok and from 3 medical centers in Chiangmai, Songkla and KhonKaen provinces. Participants were emmetropes and persons with refractive correction with visual acuity of 20/30 or better. An item reduction process was employed by combining 3 methods: expert opinion, the impact method and item-total correlation. Classical reliability testing and validity testing, including convergent, discriminative and construct validity, were performed. The developed questionnaire comprised 87 items in 6 dimensions: 1) quality of vision, 2) visual function, 3) social function, 4) psychological function, 5) symptoms and 6) refractive correction problems. Items are rated on a 5-level Likert scale. The Cronbach's Alpha coefficients of its dimensions ranged from 0.756 to 0.979. All validity tests showed the instrument to be valid, and construct validity was confirmed by confirmatory factor analysis. A short-version questionnaire comprising 48 items, with good reliability and validity, was also developed. This is the first validated instrument for measuring refractive error quality of life for Thai adults that was developed with strong research methodology and a large sample size.

  8. Calculating sediment discharge from a highway construction site in central Pennsylvania

    USGS Publications Warehouse

    Reed, L.A.; Ward, J.R.; Wetzel, K.L.

    1985-01-01

    The Pennsylvania Department of Transportation, the Federal Highway Administration, and the U.S. Geological Survey have cooperated in a study to evaluate two methods of predicting sediment yields during highway construction. Sediment yields were calculated using the Universal Soil Loss and the Younkin Sediment Prediction Equations. Results were compared to the actual measured values, and standard errors and coefficients of correlation were calculated. Sediment discharge from the construction area was determined for storms that occurred during construction of Interstate 81 in a 0.38-square-mile basin near Harrisburg, Pennsylvania. Precipitation data tabulated included total rainfall, maximum 30-minute rainfall, kinetic energy, and the erosive index of the precipitation. Highway construction data tabulated included the area disturbed by clearing and grubbing, the area in cuts and fills, the average depths of cuts and fills, the area seeded and mulched, and the area paved. Using the Universal Soil Loss Equation, sediment discharge from the construction area was calculated for storms. The standard error of estimate was 0.40 (about 105 percent), and the coefficient of correlation was 0.79. Sediment discharge from the construction area was also calculated using the Younkin Equation. The standard error of estimate of 0.42 (about 110 percent) and the coefficient of correlation of 0.77 are comparable to those from the Universal Soil Loss Equation.
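
    A minimal sketch of the Universal Soil Loss Equation in its standard form, A = R · K · LS · C · P, applied to a single storm on a disturbed area; the factor values are illustrative placeholders, not the values used for the Interstate 81 site.

```python
# Minimal sketch of the Universal Soil Loss Equation, A = R * K * LS * C * P.
def usle_soil_loss(R, K, LS, C, P):
    """Soil loss A (tons/acre) from rainfall erosivity R, soil erodibility K,
    slope length-steepness factor LS, cover-management factor C, and
    support-practice factor P."""
    return R * K * LS * C * P

# Illustrative values for one storm on a bare, freshly graded cut slope.
storm_loss = usle_soil_loss(R=25.0, K=0.32, LS=2.5, C=1.0, P=1.0)
print(f"estimated soil loss for the storm: {storm_loss:.1f} tons/acre")
```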

  9. Impact of methodological "shortcuts" in conducting public health surveys: Results from a vaccination coverage survey

    PubMed Central

    Luman, Elizabeth T; Sablan, Mariana; Stokley, Shannon; McCauley, Mary M; Shaw, Kate M

    2008-01-01

    Background Lack of methodological rigor can cause survey error, leading to biased results and suboptimal public health response. This study focused on the potential impact of 3 methodological "shortcuts" pertaining to field surveys: relying on a single source for critical data, failing to repeatedly visit households to improve response rates, and excluding remote areas. Methods In a vaccination coverage survey of young children conducted in the Commonwealth of the Northern Mariana Islands in July 2005, 3 sources of vaccination information were used, multiple follow-up visits were made, and all inhabited areas were included in the sampling frame. Results are calculated with and without these strategies. Results Most children had at least 2 sources of data; vaccination coverage estimated from any single source was substantially lower than from all sources combined. Eligibility was ascertained for 79% of households after the initial visit and for 94% of households after follow-up visits; vaccination coverage rates were similar with and without follow-up. Coverage among children on remote islands differed substantially from that of their counterparts on the main island indicating a programmatic need for locality-specific information; excluding remote islands from the survey would have had little effect on overall estimates due to small populations and divergent results. Conclusion Strategies to reduce sources of survey error should be maximized in public health surveys. The impact of the 3 strategies illustrated here will vary depending on the primary outcomes of interest and local situations. Survey limitations such as potential for error should be well-documented, and the likely direction and magnitude of bias should be considered. PMID:18371195

  10. The Eating Motivation Survey: results from the USA, India and Germany.

    PubMed

    Sproesser, Gudrun; Ruby, Matthew B; Arbit, Naomi; Rozin, Paul; Schupp, Harald T; Renner, Britta

    2018-02-01

    Research has shown that there is a large variety of different motives underlying why people eat what they eat, which can be assessed with The Eating Motivation Survey (TEMS). The present study investigates the consistency and measurement invariance of the fifteen basic motives included in TEMS in countries with greatly differing eating environments. The fifteen-factor structure of TEMS (brief version: forty-six items) was tested in confirmatory factor analyses. An online survey was conducted. US-American, Indian and German adults (total N = 749) took part. Despite the complexity of the model, fit indices indicated a reasonable model fit (for the total sample: χ²/df = 4.03; standardized root-mean-squared residual (SRMR) = 0.063; root-mean-square error of approximation (RMSEA) = 0.064 (95% CI 0.062, 0.066)). Only the comparative fit index (CFI) was below the recommended threshold (for the total sample: CFI = 0.84). Altogether, 181 out of 184 item loadings were above the recommended threshold of 0.30. Furthermore, the factorial structure of TEMS was invariant across countries with respect to factor configuration and factor loadings (configural v. metric invariance model: ΔCFI = 0.009; ΔRMSEA = 0.001; ΔSRMR = 0.001). Moreover, forty-three out of forty-six items showed invariant intercepts across countries. The fifteen-factor structure of TEMS was, in general, confirmed across countries despite marked differences in eating environments. Moreover, latent means of fourteen out of fifteen motive factors can be compared across countries in future studies. This is a first step towards determining generalizability of the fifteen basic eating motives of TEMS across eating environments.

  11. The accuracy of the 24-h activity recall method for assessing sedentary behaviour: the physical activity measurement survey (PAMS) project.

    PubMed

    Kim, Youngwon; Welk, Gregory J

    2017-02-01

    Sedentary behaviour (SB) has emerged as a modifiable risk factor, but little is known about measurement errors of SB. The purpose of this study was to determine the validity of the 24-h Physical Activity Recall (24PAR) relative to the SenseWear Armband (SWA) for assessing SB. Each participant (n = 1485) undertook a series of data collection procedures on two randomly selected days: wearing a SWA for a full 24 h, and then completing the telephone-administered 24PAR the following day to recall the past 24-h activities. Estimates of total sedentary time (TST) were computed without the inclusion of reported or recorded sleep time. Equivalence testing was used to compare estimates of TST. Analyses from equivalence testing showed no significant equivalence of 24PAR for TST (90% CI: 443.0 and 457.6 min·day⁻¹) relative to SWA (equivalence zone: 580.7 and 709.8 min·day⁻¹). Bland-Altman plots indicated that individuals who were extremely or minimally sedentary provided relatively comparable sedentary time between 24PAR and SWA. Overweight/obese and/or older individuals were more likely to underestimate sedentary time than normal weight and/or younger individuals. Measurement errors of 24PAR varied by the level of sedentary time and demographic indicators. This evidence informs future work to develop measurement error models to correct for errors of self-reports.
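
    A minimal sketch of the equivalence logic reported above: the self-report estimate is treated as equivalent to the monitor only if its 90% CI for mean sedentary time falls entirely within the equivalence zone around the monitor's mean. The CI and zone are the values quoted in the abstract; reducing the test to this simple interval check is an assumption about how the procedure was operationalised.

```python
# Minimal sketch of an equivalence check: is the 90% CI inside the zone?
ci_low, ci_high = 443.0, 457.6        # 90% CI for 24PAR mean TST (min/day)
zone_low, zone_high = 580.7, 709.8    # equivalence zone around the SWA mean (min/day)

equivalent = zone_low <= ci_low and ci_high <= zone_high
print("equivalent" if equivalent else "not equivalent")   # -> "not equivalent"
```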

  12. An educational and audit tool to reduce prescribing error in intensive care.

    PubMed

    Thomas, A N; Boxall, E M; Laha, S K; Day, A J; Grundy, D

    2008-10-01

    To reduce prescribing errors in an intensive care unit by providing prescriber education in tutorials, ward-based teaching and feedback in 3-monthly cycles with each new group of trainee medical staff. Prescribing audits were conducted three times in each 3-month cycle, once pretraining, once post-training and a final audit after 6 weeks. The audit information was fed back to prescribers with their correct prescribing rates, rates for individual error types and total error rates together with anonymised information about other prescribers' error rates. The percentage of prescriptions with errors decreased over each 3-month cycle (pretraining 25%, 19% (one missing data point); post-training 23%, 6%, 11%; final audit 7%, 3%, 5%; p<0.0005). The total number of prescriptions and error rates varied widely between trainees (data collection one; cycle two: range of prescriptions written: 1-61, median 18; error rate: 0-100%; median: 15%). Prescriber education and feedback reduce manual prescribing errors in intensive care.

  13. An Analysis of College Students' Attitudes towards Error Correction in EFL Context

    ERIC Educational Resources Information Center

    Zhu, Honglin

    2010-01-01

    This article is based on a survey on the attitudes towards the error correction by their teachers in the process of teaching and learning and it is intended to improve the language teachers' understanding of the nature of error correction. Based on the analysis, the article expounds some principles and techniques that can be applied in the process…

  14. Quantifying the morphodynamics of river restoration schemes using Unmanned Aerial Vehicles (UAVs)

    NASA Astrophysics Data System (ADS)

    Williams, Richard; Byrne, Patrick; Gilles, Eric; Hart, John; Hoey, Trevor; Maniatis, George; Moir, Hamish; Reid, Helen; Ves, Nikolas

    2017-04-01

    River restoration schemes are particularly sensitive to morphological adjustment during the first set of high-flow events that they are subjected to. Quantifying elevation change associated with morphological adjustment can contribute to improved adaptive decision making to ensure river restoration scheme objectives are achieved. To date, the relatively high cost, technical demands and challenging logistics associated with acquiring repeat, high-resolution topographic surveys have presented a significant barrier to monitoring the three-dimensional morphodynamics of river restoration schemes. The availability of low-cost, consumer-grade Unmanned Aerial Vehicles that are capable of acquiring imagery for processing using Structure-from-Motion Multi-View Stereo (SfM MVS) photogrammetry has the potential to transform surveys of the morphodynamics of river restoration schemes. Application guidance does, however, need to be developed to fully exploit the advances of UAV technology and SfM MVS processing techniques. In particular, there is a need to quantify the effect of the number and spatial distribution of ground targets on vertical error. This is particularly significant because vertical errors propagate when mapping morphological change, and thus determine the evidence that is available for decision making. This presentation reports results from a study that investigated how the number and spatial distribution of targets influenced vertical error, and then used the findings to determine survey protocols for a monitoring campaign that has quantified morphological change across a number of restoration schemes. At the Swindale river restoration scheme, Cumbria, England, 31 targets were distributed across a 700 m long reach and the centre of each target was surveyed using RTK-GPS. Using the targets as Ground Control Points (GCPs) or checkpoints, they were divided into three different spatial patterns (centre, edge and random) and used for processing images acquired from a SenseFly Swinglet CAM UAV with a Canon IXUS 240 HS camera. Results indicate that if targets were distributed centrally then vertical distortions would be most notable in the outer region of the processing domain; if an edge pattern was used then vertical errors were greatest in the central region of the processing domain; if targets were distributed randomly then errors were more evenly distributed. For this optimal random layout, vertical errors were lowest when 15 to 23 targets were used as GCPs. The best solution achieved planimetric (XY) errors of 0.006 m and vertical (Z) errors of 0.05 m. This result was used to determine target density and distribution for repeat surveys on two other restoration schemes, Whit Beck (Cumbria, England) and Allt Lorgy (Highlands, Scotland). These repeat surveys have been processed to produce DEMs of Difference (DoDs). The DoDs have been used to quantify the spatial distribution of erosion and deposition of these schemes due to high-flow events. Broader interpretation enables insight into patterns of morphological sensitivity that are related to scheme design.
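
    A minimal sketch of the DEM of Difference step mentioned at the end of the abstract, using a common convention (not necessarily the authors' exact workflow) in which change smaller than the propagated vertical error of the two surveys is masked before summing erosion and deposition volumes. The synthetic DEMs, per-survey errors, and cell size are illustrative assumptions.

```python
# Minimal sketch: DEM of Difference (DoD) with a propagated-error threshold.
import numpy as np

rng = np.random.default_rng(0)
dem_t1 = 100.0 + rng.normal(0.0, 0.5, size=(200, 200))       # survey 1 elevations (m)
dem_t2 = dem_t1 + rng.normal(0.0, 0.05, size=dem_t1.shape)   # survey 2, noise only
dem_t2[50:80, 50:120] -= 0.30                                # simulated erosion patch
dem_t2[120:150, 30:90] += 0.25                               # simulated deposition patch

sigma_z1, sigma_z2 = 0.05, 0.05          # vertical error of each survey (m)
cell_area = 0.5 * 0.5                    # m^2 per grid cell

dod = dem_t2 - dem_t1
lod = 1.96 * np.sqrt(sigma_z1**2 + sigma_z2**2)   # 95% minimum level of detection
significant = np.abs(dod) > lod

deposition = dod[significant & (dod > 0)].sum() * cell_area
erosion = -dod[significant & (dod < 0)].sum() * cell_area
print(f"level of detection: {lod:.3f} m")
print(f"deposition: {deposition:.1f} m^3, erosion: {erosion:.1f} m^3")
```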

  15. Factors controlling volume errors through 2D gully erosion assessment: guidelines for optimal survey design

    NASA Astrophysics Data System (ADS)

    Castillo, Carlos; Pérez, Rafael

    2017-04-01

    The assessment of gully erosion volumes is essential for the quantification of soil losses derived from this relevant degradation process. Traditionally, 2D and 3D approaches have been applied for this purpose (Casalí et al., 2006). Although innovative 3D approaches have recently been proposed for gully volume quantification, a renewed interest can be found in the literature regarding the useful information that cross-section analysis still provides in gully erosion research. Moreover, methods based on 2D approaches can be the most cost-effective option in many situations, such as preliminary studies with low accuracy requirements or surveys under time or budget constraints. The main aim of this work is to examine the key factors controlling volume error variability in 2D gully assessment by means of a stochastic experiment involving a Monte Carlo analysis over synthetic gully profiles, in order to 1) contribute to a better understanding of the drivers and magnitude of the uncertainty of 2D gully erosion surveys and 2) provide guidelines for optimal survey designs. Owing to the stochastic properties of error generation in 2D volume assessment, a statistical approach was followed to generate a large and representative set of gully reach configurations to evaluate quantitatively the influence of the main factors controlling the uncertainty of the volume assessment. For this purpose, a simulation algorithm was written in Matlab® code, involving the following stages: generation of synthetic gully area profiles with different degrees of complexity (characterised by the cross-section variability); simulation of field measurements characterised by a survey intensity and the precision of the measurement method; and quantification of the volume error uncertainty as a function of the key factors. In this communication we present the relationships between volume error and the studied factors and propose guidelines for 2D field surveys based on the minimal survey densities required to achieve a certain accuracy, given the cross-sectional variability of a gully and the measurement method applied. References: Casalí, J., Loizu, J., Campo, M.A., De Santisteban, L.M., Alvarez-Mozos, J., 2006. Accuracy of methods for field assessment of rill and ephemeral gully erosion. Catena 67, 128-138. doi:10.1016/j.catena.2006.03.005
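
    The authors' simulation was written in Matlab; the following is only a minimal Python re-sketch of the same three stages (synthetic area profiles, simulated coarse and noisy measurements, volume-error statistics). The function names, the 2 m² mean cross-sectional area and the noise settings are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(42)

def trap_integral(y, x):
    """Trapezoidal integral of y(x)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def simulate_volume_error(reach_length=200.0, survey_spacing=10.0,
                          variability=0.5, meas_cv=0.05, n_runs=1000):
    """Monte Carlo estimate of the relative volume error of a 2D gully survey.

    A synthetic "true" profile of cross-sectional area (m^2) along the reach
    is generated with tunable roughness (`variability`); each simulated survey
    samples it every `survey_spacing` metres with multiplicative measurement
    noise of coefficient of variation `meas_cv`.
    """
    x_true = np.linspace(0.0, reach_length, 2001)
    rel_err = np.empty(n_runs)
    for i in range(n_runs):
        # Smooth random area profile around a 2 m^2 mean
        rough = np.convolve(rng.standard_normal(x_true.size),
                            np.ones(51) / 51.0, mode="same")
        area_true = 2.0 * np.exp(variability * rough)
        vol_true = trap_integral(area_true, x_true)

        # Simulated field survey: coarse cross-sections plus measurement noise
        x_surv = np.arange(0.0, reach_length + 1e-9, survey_spacing)
        area_surv = np.interp(x_surv, x_true, area_true)
        area_surv *= 1.0 + meas_cv * rng.standard_normal(area_surv.size)
        vol_surv = trap_integral(area_surv, x_surv)

        rel_err[i] = (vol_surv - vol_true) / vol_true
    return np.mean(np.abs(rel_err)), np.std(rel_err)

print(simulate_volume_error(survey_spacing=20.0))   # coarser survey, larger error
```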

  16. Evaluation of causes and frequency of medication errors during information technology downtime.

    PubMed

    Hanuscak, Tara L; Szeinbach, Sheryl L; Seoane-Vazquez, Enrique; Reichert, Brendan J; McCluskey, Charles F

    2009-06-15

    The causes and frequency of medication errors occurring during information technology downtime were evaluated. Individuals from a convenience sample of 78 hospitals who were directly responsible for supporting and maintaining clinical information systems (CISs) and automated dispensing systems (ADSs) were surveyed using an online tool between February 2007 and May 2007 to determine whether medication errors were reported during periods of system downtime. The errors were classified using the National Coordinating Council for Medication Error Reporting and Prevention severity scoring index. The percentage of respondents reporting downtime was estimated. Of the 78 eligible hospitals, 32 respondents with CIS and ADS responsibilities completed the online survey, for a response rate of 41%. For computerized prescriber order entry, patch installations and system upgrades caused an average downtime response of 57% over a 12-month period. Lost interfaces and interface malfunctions were reported for centralized and decentralized ADSs, with average downtime responses of 34% and 29%, respectively. The average downtime response was 31% for software malfunctions linked to clinical decision-support systems. Although patient harm did not result from 30 (54%) medication errors, the potential for harm was present for 9 (16%) of these errors. Medication errors occurred during CIS and ADS downtime despite the availability of backup systems and standard protocols to handle periods of system downtime. Efforts should be directed at reducing the frequency and length of downtime in order to minimize medication errors during such periods.

  17. Evolution of errors in the altimetric bathymetry model used by Google Earth and GEBCO

    NASA Astrophysics Data System (ADS)

    Marks, K. M.; Smith, W. H. F.; Sandwell, D. T.

    2010-09-01

    We analyze errors in the global bathymetry models of Smith and Sandwell that combine satellite altimetry with acoustic soundings and shorelines to estimate depths. Versions of these models have been incorporated into Google Earth and the General Bathymetric Chart of the Oceans (GEBCO). We use Japan Agency for Marine-Earth Science and Technology (JAMSTEC) multibeam surveys not previously incorporated into the models as "ground truth" to compare against model versions 7.2 through 12.1, defining vertical differences as "errors." Overall error statistics improve over time: 50th percentile errors declined from 57 to 55 to 49 m, and 90th percentile errors declined from 257 to 235 to 219 m, in versions 8.2, 11.1 and 12.1. This improvement is partly due to an increasing number of soundings incorporated into successive models, and partly to improvements in the satellite gravity model. Inspection of specific sites reveals that changes in the algorithms used to interpolate across survey gaps with altimetry have affected some errors. Versions 9.1 through 11.1 show a bias in the scaling from gravity in milliGals to topography in meters that affected the 15-160 km wavelength band. Regionally averaged (>160 km wavelength) depths have accumulated error over successive versions 9 through 11. These problems have been mitigated in version 12.1, which shows no systematic variation of errors with depth. Even so, version 12.1 is in some respects not as good as version 8.2, which employed a different algorithm.
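
    A minimal sketch of the error statistic used above: co-located model and multibeam depths are differenced and summarised by percentiles of the absolute difference. The values below are purely illustrative.

```python
import numpy as np

def error_percentiles(model_depth, survey_depth, percentiles=(50, 90)):
    """Summarise vertical differences between a bathymetry model and multibeam
    'ground truth' depths at the same locations (metres) as percentiles of the
    absolute error."""
    err = np.abs(np.asarray(model_depth, float) - np.asarray(survey_depth, float))
    return dict(zip(percentiles, np.percentile(err, percentiles)))

# Illustrative depths only (metres)
print(error_percentiles([4000, 3500, 5200, 2100], [3950, 3560, 5150, 2080]))
```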

  18. Error in total ozone measurements arising from aerosol attenuation

    NASA Technical Reports Server (NTRS)

    Thomas, R. W. L.; Basher, R. E.

    1979-01-01

    A generalized least squares method for deducing both total ozone and aerosol extinction spectrum parameters from Dobson spectrophotometer measurements was developed. An error analysis applied to this system indicates that there is little advantage to additional measurements once a sufficient number of line pairs have been employed to solve for the selected detail in the attenuation model. It is shown that when there is a predominance of small particles (less than about 0.35 microns in diameter) the total ozone from the standard AD system is too high by about one percent. When larger particles are present the derived total ozone may be an overestimate or an underestimate but serious errors occur only for narrow polydispersions.
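
    The abstract does not reproduce the retrieval equations; below is only the generic generalized-least-squares estimator it refers to, with a toy design matrix standing in for the actual model relating Dobson line-pair attenuations to ozone and aerosol terms.

```python
import numpy as np

def gls_fit(X, y, Sigma):
    """Generalized least-squares estimate of b in y = X b + e, where the
    errors e have covariance Sigma.  Returns the estimate and its covariance.
    This is the generic estimator only, not the Dobson-specific model."""
    Si = np.linalg.inv(Sigma)
    cov_b = np.linalg.inv(X.T @ Si @ X)
    b_hat = cov_b @ (X.T @ Si @ y)
    return b_hat, cov_b

# Toy example: two parameters (say, an ozone amount and an aerosol slope term)
# observed through four hypothetical line pairs with correlated noise.
X = np.array([[1.0, 0.2], [1.0, 0.5], [1.0, 0.8], [1.0, 1.1]])
Sigma = 0.01 * (np.eye(4) + 0.3 * (np.ones((4, 4)) - np.eye(4)))
y = X @ np.array([0.30, -0.05]) + np.array([0.01, -0.005, 0.0, 0.004])
b, cov = gls_fit(X, y, Sigma)
print(b, np.sqrt(np.diag(cov)))
```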

  19. Analyzing Complex Survey Data.

    ERIC Educational Resources Information Center

    Rodgers-Farmer, Antoinette Y.; Davis, Diane

    2001-01-01

    Uses data from the 1994 AIDS Knowledge and Attitudes Supplement to the National Health Interview Survey (NHIS) to illustrate that biased point estimates, inappropriate standard errors, and misleading tests of significance can result from using traditional software packages, such as SPSS or SAS, for complex survey analysis. (BF)
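
    As a sketch of what design-based analysis means here, the function below computes a weighted mean with a Taylor-linearized ("ultimate cluster") variance that respects strata and primary sampling units; ignoring these features is what makes naive standard errors misleading. This is a simplified illustration with invented toy data, not a substitute for survey software.

```python
import numpy as np
from collections import defaultdict

def svy_mean(y, w, strata, psu):
    """Design-based mean and standard error for complex survey data.

    Weighted mean with a with-replacement Taylor-linearized variance: the
    linearized contributions are totalled by PSU and their between-PSU
    variability is summed over strata.
    """
    y, w = np.asarray(y, float), np.asarray(w, float)
    wsum = w.sum()
    mean = np.sum(w * y) / wsum
    z = w * (y - mean) / wsum                      # linearized contributions

    var = 0.0
    for h in set(strata):
        idx = [i for i, s in enumerate(strata) if s == h]
        psu_tot = defaultdict(float)
        for i in idx:
            psu_tot[psu[i]] += z[i]
        t = np.array(list(psu_tot.values()))
        n_h = len(t)
        if n_h > 1:
            var += n_h / (n_h - 1) * np.sum((t - t.mean()) ** 2)
    return mean, np.sqrt(var)

# Toy data: 2 strata, 2 PSUs each
y      = [1, 0, 1, 1, 0, 1, 0, 0]
w      = [2, 2, 1, 1, 3, 3, 1, 1]
strata = ["A", "A", "A", "A", "B", "B", "B", "B"]
psu    = [1, 1, 2, 2, 3, 3, 4, 4]
print(svy_mean(y, w, strata, psu))
```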

  20. Empirically Defined Patterns of Executive Function Deficits in Schizophrenia and Their Relation to Everyday Functioning: A Person-Centered Approach

    PubMed Central

    Iampietro, Mary; Giovannetti, Tania; Drabick, Deborah A. G.; Kessler, Rachel K.

    2013-01-01

    Executive function (EF) deficits in schizophrenia (SZ) are well documented, although much less is known about patterns of EF deficits and their association to differential impairments in everyday functioning. The present study empirically defined SZ groups based on measures of various EF abilities and then compared these EF groups on everyday action errors. Participants (n=45) completed various subtests from the Delis–Kaplan Executive Function System (D-KEFS) and the Naturalistic Action Test (NAT), a performance-based measure of everyday action that yields scores reflecting total errors and a range of different error types (e.g., omission, perseveration). Results of a latent class analysis revealed three distinct EF groups, characterized by (a) multiple EF deficits, (b) relatively spared EF, and (c) perseverative responding. Follow-up analyses revealed that the classes differed significantly on NAT total errors, total commission errors, and total perseveration errors; the two classes with EF impairment performed comparably on the NAT but performed worse than the class with relatively spared EF. In sum, people with SZ demonstrate variable patterns of EF deficits, and distinct aspects of these EF deficit patterns (i.e., poor mental control abilities) may be associated with everyday functioning capabilities. PMID:23035705

  1. Sampling Errors in Monthly Rainfall Totals for TRMM and SSM/I, Based on Statistics of Retrieved Rain Rates and Simple Models

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates arising from changes in rain statistics due 1) to evolution of the official algorithms used to process the data, and 2) to differences from other remote sensing systems such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.
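
    A deliberately simplified stand-in for the idea of a sampling error: if per-visit area-average rain rates are treated as roughly independent samples of the monthly mean, the standard error of that mean gives a crude estimate. The actual method models the rain-rate statistics and temporal correlation; the gamma-distributed visits below are invented.

```python
import numpy as np

def sampling_error_estimate(visit_rain_rates, n_effective=None):
    """Rough sampling-error estimate for a monthly area-average rain rate.

    Treats per-visit area-averaged rain rates as samples of the monthly mean;
    if visits are correlated in time, pass an effective sample size
    n_effective smaller than the number of visits.
    """
    r = np.asarray(visit_rain_rates, float)
    n = n_effective if n_effective is not None else r.size
    return r.std(ddof=1) / np.sqrt(n)   # standard error of the monthly mean

# 30 hypothetical satellite visits in a month (mm/h area averages)
rng = np.random.default_rng(0)
visits = rng.gamma(shape=0.5, scale=0.4, size=30)
print(f"monthly mean = {visits.mean():.3f} mm/h, "
      f"sampling error ~ {sampling_error_estimate(visits):.3f} mm/h")
```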

  2. Nurses' systems thinking competency, medical error reporting, and the occurrence of adverse events: a cross-sectional study.

    PubMed

    Hwang, Jee-In; Park, Hyeoun-Ae

    2017-12-01

    Healthcare professionals' systems thinking is emphasized for patient safety. The aim was to report nurses' systems thinking competency and its relationship with medical error reporting and the occurrence of adverse events. A cross-sectional survey using a previously validated Systems Thinking Scale (STS) was conducted. Nurses from two teaching hospitals were invited to participate in the survey. There were 407 (60.3%) completed surveys. The mean STS score was 54.5 (SD 7.3) out of 80. Nurses with higher STS scores were more likely to report medical errors (odds ratio (OR) = 1.05; 95% confidence interval (CI) = 1.02-1.08) and were less likely to be involved in the occurrence of adverse events (OR = 0.96; 95% CI = 0.93-0.98). Nurses showed moderate systems thinking competency. Systems thinking was a significant factor associated with patient safety. Impact Statement: The findings of this study highlight the importance of enhancing nurses' systems thinking capacity to promote patient safety.

  3. Comparing Dark Energy Survey and HST–CLASH observations of the galaxy cluster RXC J2248.7-4431: implications for stellar mass versus dark matter

    DOE PAGES

    Palmese, A.; Lahav, O.; Banerji, M.; ...

    2016-08-20

    We derive the stellar mass fraction in the galaxy cluster RXC J2248.7-4431 observed with the Dark Energy Survey (DES) during the Science Verification period. We compare the stellar mass results from DES (5 filters) with those from the Hubble Space Telescope CLASH (17 filters). When the cluster spectroscopic redshift is assumed, we show that stellar masses from DES can be estimated within 25% of CLASH values. We compute the stellar mass contribution coming from red and blue galaxies, and study the relation between stellar mass and the underlying dark matter using weak lensing studies with DES and CLASH. An analysis of the radial profiles of the DES total and stellar mass yields a stellar-to-total fraction of f* = (7.0 ± 2.2) × 10^-3 within a radius of r_200c ~ 3 Mpc. Our analysis also includes a comparison of photometric redshifts and star/galaxy separation efficiency for both datasets. We conclude that space-based small field imaging can be used to calibrate the galaxy properties in DES for the much wider field of view. The technique developed to derive the stellar mass fraction in galaxy clusters can be applied to the ~100,000 clusters that will be observed within this survey. The stacking of all the DES clusters would reduce the errors on f* estimates and deduce important information about galaxy evolution.

  4. Estimates of reservoir methane emissions based on a spatially ...

    EPA Pesticide Factsheets

    Global estimates of methane (CH4) emissions from reservoirs are poorly constrained, partly due to the challenges of accounting for intra-reservoir spatial variability. Reservoir-scale emission rates are often estimated by extrapolating from measurements made at a few locations; however, error and bias associated with this approach can be large and difficult to quantify. Here we use a generalized random tessellation stratified (GRTS) survey design to generate estimates of central tendency and variance at multiple spatial scales in a reservoir. GRTS survey designs are probabilistic and spatially balanced, which eliminates bias associated with expert judgment in site selection. GRTS surveys also allow for variance estimates that account for spatial pattern in emission rates. Total CH4 emission rates (i.e., the sum of ebullition and diffusive emissions) were 4.8 (±2.1), 33.0 (±10.7), and 8.3 (±2.2) mg CH4 m-2 h-1 in open waters, tributary-associated areas, and the entire reservoir, respectively, for the period in August 2014 during which 115 sites were sampled across a 7.98 km2 reservoir in southwestern Ohio, USA. Tributary areas occupy 12% of the reservoir surface, but were the source of 41% of total CH4 emissions, highlighting the importance of riverine-lacustrine transition zones. Ebullition accounted for >90% of CH4 emission at all spatial scales. Confidence interval estimates that incorporated spatial pattern in CH4 emissions were up to 29% narrower than when spatial independence was assumed.

  6. Instrument Pointing Capabilities: Past, Present, and Future

    NASA Technical Reports Server (NTRS)

    Blackmore, Lars; Murray, Emmanuell; Scharf, Daniel P.; Aung, Mimi; Bayard, David; Brugarolas, Paul; Hadaegh, Fred; Lee, Allan; Milman, Mark; Sirlin, Sam; hide

    2011-01-01

    This paper surveys the instrument pointing capabilities of past, present and future space telescopes and interferometers. As an important aspect of this survey, we present a taxonomy for "apples-to-apples" comparisons of pointing performances. First, pointing errors are defined relative to either an inertial frame or a celestial target. Pointing error can then be further sub-divided into DC, that is, steady-state, and AC components. We refer to the magnitude of the DC error relative to the inertial frame as absolute pointing accuracy, and we refer to the magnitude of the DC error relative to a celestial target as relative pointing accuracy. The magnitude of the AC error is referred to as pointing stability. While an AC/DC partition is not new, we leverage previous work by some of the authors to quantitatively clarify and compare varying definitions of jitter and time-window averages. With this taxonomy and for sixteen past, present, and future missions, pointing accuracies and stabilities, both required and achieved, are presented. In addition, we describe the attitude control technologies used to achieve these pointing performances and, for future missions, those planned to achieve them.
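
    A small sketch of the taxonomy's quantities, assuming pointing-error time series are available; the DC/AC split follows the definitions above, and the synthetic series is invented for illustration.

```python
import numpy as np

def pointing_metrics(err_inertial, err_target):
    """Decompose pointing-error time series (e.g. arcsec) into the DC and AC
    terms of the taxonomy: absolute accuracy (DC error w.r.t. the inertial
    frame), relative accuracy (DC error w.r.t. the celestial target), and
    stability (RMS of the AC component about the target-relative DC value)."""
    err_inertial = np.asarray(err_inertial, float)
    err_target = np.asarray(err_target, float)
    absolute_accuracy = abs(err_inertial.mean())   # DC, inertial frame
    relative_accuracy = abs(err_target.mean())     # DC, celestial target
    stability = err_target.std()                   # AC (RMS about the DC value)
    return absolute_accuracy, relative_accuracy, stability

# Synthetic 1 Hz error series: 0.3" bias w.r.t. the target plus 0.1" jitter
rng = np.random.default_rng(1)
e_target = 0.3 + 0.1 * rng.standard_normal(600)
e_inertial = e_target + 0.5                        # extra offset w.r.t. inertial frame
print(pointing_metrics(e_inertial, e_target))
```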

  7. Pediatric Anesthesiology Fellows' Perception of Quality of Attending Supervision and Medical Errors.

    PubMed

    Benzon, Hubert A; Hajduk, John; De Oliveira, Gildasio; Suresh, Santhanam; Nizamuddin, Sarah L; McCarthy, Robert; Jagannathan, Narasimhan

    2018-02-01

    Appropriate supervision has been shown to reduce medical errors in anesthesiology residents and other trainees across various specialties. Nonetheless, supervision of pediatric anesthesiology fellows has yet to be evaluated. The main objective of this survey investigation was to evaluate supervision of pediatric anesthesiology fellows in the United States. We hypothesized that there was an indirect association between perceived quality of faculty supervision of pediatric anesthesiology fellow trainees and the frequency of medical errors reported. A survey of pediatric fellows from 53 pediatric anesthesiology fellowship programs in the United States was performed. The primary outcome was the frequency of self-reported errors by fellows, and the primary independent variable was supervision scores. Questions also assessed barriers to effective faculty supervision. One hundred seventy-six pediatric anesthesiology fellows were invited to participate, and 104 (59%) responded to the survey. Nine of 103 (9%; 95% confidence interval [CI], 4%-16%) respondents reported performing procedures, on >1 occasion, for which they were not properly trained. Thirteen of 101 (13%; 95% CI, 7%-21%) reported making >1 mistake with negative consequences to patients, and 23 of 104 (22%; 95% CI, 15%-31%) reported >1 medication error in the last year. There were no differences in median (interquartile range) supervision scores between fellows who reported >1 medication error compared to those reporting ≤1 error (3.4 [3.0-3.7] vs 3.4 [3.1-3.7]; median difference, 0; 99% CI, -0.3 to 0.3; P = .96). Similarly, there were no differences in those who reported >1 mistake with negative patient consequences, 3.3 (3.0-3.7), compared with those who did not report mistakes with negative patient consequences (3.4 [3.3-3.7]; median difference, 0.1; 99% CI, -0.2 to 0.6; P = .35). We detected a high rate of self-reported medication errors in pediatric anesthesiology fellows in the United States. Interestingly, fellows' perception of quality of faculty supervision was not associated with the frequency of reported errors. The current results with a narrow CI suggest the need to evaluate other potential factors that can be associated with the high frequency of reported errors by pediatric fellows (eg, fatigue, burnout). The identification of factors that lead to medical errors by pediatric anesthesiology fellows should be a main research priority to improve both trainee education and best practices of pediatric anesthesia.

  8. Conducting Web-Based Surveys. ERIC Digest.

    ERIC Educational Resources Information Center

    Solomon, David J.

    Web-based surveying is very attractive for many reasons, including reducing the time and cost of conducting a survey and avoiding the often error-prone and tedious task of data entry. At this time, Web-based surveys should still be used with caution. The biggest concern at present is coverage bias, or bias resulting from sampled people either not…

  9. Improved characterisation of measurement errors in electrical resistivity tomography (ERT) surveys

    NASA Astrophysics Data System (ADS)

    Tso, C. H. M.; Binley, A. M.; Kuras, O.; Graham, J.

    2016-12-01

    Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe a statistical model of data errors before inversion. Wrongly prescribed error levels can lead to over- or under-fitting of data, yet commonly used models of measurement error are relatively simplistic. With heightened interest in uncertainty estimation across hydrogeophysics, better characterisation and treatment of measurement errors are needed to provide more reliable estimates of uncertainty. We have analysed two time-lapse electrical resistivity tomography (ERT) datasets; one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24 h timeframe, while the other is a year-long cross-borehole survey at a UK nuclear site with over 50,000 daily measurements. Our study included the characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and covariance analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used. This agrees with speculation in previous literature that ERT errors could be somewhat correlated. Based on these findings, we develop a new error model that allows grouping based on electrode number in addition to fitting a linear model to transfer resistance. The new model fits the observed measurement errors better and gives superior inversion and uncertainty estimates in synthetic examples. It is robust because it groups errors based on the numbers of the four electrodes used to make each measurement. The new model can be readily applied to the diagonal data weighting matrix commonly used in classical inversion methods, as well as to the data covariance matrix in the Bayesian inversion framework. We demonstrate its application using extensive ERT monitoring datasets from the two aforementioned sites.
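
    A hedged sketch of the grouping idea only: direct/reciprocal error magnitudes are fitted against transfer resistance separately for groups defined by the electrodes used. The grouping key, the simple linear form and the toy data below are assumptions for illustration; the published model's exact parameterisation may differ.

```python
import numpy as np
from collections import defaultdict

def fit_error_model(resistance, reciprocal_error, electrodes):
    """Fit a per-group linear error model |e| = a + b*|R| to direct/reciprocal
    ERT measurements, grouping measurements by the electrodes used."""
    groups = defaultdict(list)
    for R, e, elec in zip(resistance, reciprocal_error, electrodes):
        # Illustrative grouping key: lowest electrode number of the quadruple
        groups[min(elec)].append((abs(R), abs(e)))
    models = {}
    for g, pairs in groups.items():
        R, e = np.array(pairs).T
        b, a = np.polyfit(R, e, 1)          # slope, intercept
        models[g] = (a, b)
    return models

# Toy data: (transfer resistance in ohm, reciprocal error in ohm, electrode quadruple)
data = [(10.0, 0.05, (1, 2, 3, 4)), (50.0, 0.20, (1, 2, 5, 6)),
        (12.0, 0.06, (7, 8, 9, 10)), (60.0, 0.30, (7, 8, 11, 12))]
R, e, elec = zip(*data)
print(fit_error_model(R, e, elec))
```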

  10. A method of treating the non-grey error in total emittance measurements

    NASA Technical Reports Server (NTRS)

    Heaney, J. B.; Henninger, J. H.

    1971-01-01

    In techniques for the rapid determination of total emittance, the sample is generally exposed to surroundings that are at a different temperature than the sample's surface. When the infrared reflectance of the surface is spectrally selective, these techniques introduce an error into the total emittance values. Surfaces of aluminum overcoated with oxides of various thicknesses fall into this class. Because they are often used as temperature control coatings on satellites, their emittances must be accurately known. The magnitude of the error was calculated for Alzak and silicon oxide-coated aluminum and was shown to depend on the thickness of the oxide coating. The results demonstrate that, because the magnitude of the error is thickness-dependent, it is generally impossible or impractical to eliminate it by calibrating the measuring device.
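
    A rough numerical illustration of the non-grey error, assuming an invented spectrally selective emissivity curve (not the actual Alzak data): the "apparent" total emittance weights the spectral emissivity with the Planck function of the colder surroundings instead of the sample, so the two Planck-weighted averages differ.

```python
import numpy as np

def planck(lam_um, T):
    """Blackbody spectral radiance (arbitrary units), wavelength in micrometres."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    lam = lam_um * 1e-6
    return 1.0 / (lam**5 * (np.exp(h * c / (lam * k * T)) - 1.0))

def total_emittance(eps_spectral, lam_um, T):
    """Planck-weighted total emittance at temperature T."""
    B = planck(lam_um, T)
    dlam = np.gradient(lam_um)
    return np.sum(eps_spectral * B * dlam) / np.sum(B * dlam)

# Invented spectrally selective surface with an absorption band near 9 um
lam = np.linspace(2.0, 50.0, 2000)                      # micrometres
eps = 0.1 + 0.7 * np.exp(-((lam - 9.0) / 3.0) ** 2)

T_sample, T_surroundings = 300.0, 77.0
true_eps = total_emittance(eps, lam, T_sample)
apparent_eps = total_emittance(eps, lam, T_surroundings)   # wrong Planck weighting
print(f"true {true_eps:.3f} vs apparent {apparent_eps:.3f} "
      f"-> non-grey error {apparent_eps - true_eps:+.3f}")
```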

  11. Optimising UAV topographic surveys processed with structure-from-motion: Ground control quality, quantity and bundle adjustment

    NASA Astrophysics Data System (ADS)

    James, Mike R.; Robson, Stuart; d'Oleire-Oltmanns, Sebastian; Niethammer, Uwe

    2016-04-01

    Structure-from-motion (SfM) algorithms are greatly facilitating the production of detailed topographic models based on images collected by unmanned aerial vehicles (UAVs). However, SfM-based software does not generally provide the rigorous photogrammetric analysis required to fully understand survey quality. Consequently, error related to problems in control point data or the distribution of control points can remain undiscovered. Even if these errors are not large in magnitude, they can be systematic, and thus have strong implications for the use of products such as digital elevation models (DEMs) and orthophotos. Here, we develop a Monte Carlo approach to (1) improve the accuracy of products when SfM-based processing is used and (2) reduce the associated field effort by identifying suitable lower density deployments of ground control points. The method highlights over-parameterisation during camera self-calibration and provides enhanced insight into control point performance when rigorous error metrics are not available. Processing was implemented using commonly used SfM-based software (Agisoft PhotoScan), which we augment with semi-automated and automated GCP image measurement. We apply the Monte Carlo method to two contrasting case studies - an erosion gully survey (Taurodont, Morocco) carried out with a fixed-wing UAV, and an active landslide survey (Super-Sauze, France) acquired using a manually controlled quadcopter. The results highlight the differences in the control requirements for the two sites, and we explore the implications for future surveys. We illustrate DEM sensitivity to critical processing parameters and show how the use of appropriate parameter values increases DEM repeatability and reduces the spatial variability of error due to processing artefacts.
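
    A sketch of the Monte Carlo bookkeeping only, assuming a callable that returns check-point vertical residuals for a given control/check split; in the actual workflow that callable would be a full SfM bundle adjustment and DEM comparison, which is not reproduced here. All names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def monte_carlo_gcp(processing_fn, n_targets=31, n_control=15, n_draws=100):
    """Monte Carlo assessment of ground-control deployments.

    For each draw, a random subset of targets is used as control and the rest
    as independent check points; `processing_fn(control_idx, check_idx)` must
    return the vertical residuals at the check points (in a real workflow this
    is where the SfM processing would be re-run with that control set).
    """
    rms_z = []
    for _ in range(n_draws):
        idx = rng.permutation(n_targets)
        control, check = idx[:n_control], idx[n_control:]
        resid_z = processing_fn(control, check)
        rms_z.append(np.sqrt(np.mean(np.square(resid_z))))
    return np.mean(rms_z), np.std(rms_z)

# Placeholder standing in for a full SfM re-processing run (5 cm vertical noise)
def fake_processing(control_idx, check_idx):
    return rng.normal(0.0, 0.05, size=len(check_idx))

print(monte_carlo_gcp(fake_processing))
```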

  12. Patient Safety Culture and the Second Victim Phenomenon: Connecting Culture to Staff Distress in Nurses

    PubMed Central

    Quillivan, Rebecca R.; Burlison, Jonathan D.; Browne, Emily K.; Scott, Susan D.; Hoffman, James M.

    2017-01-01

    Background Second victim experiences can affect the well-being of healthcare providers and compromise patient safety. Many factors associated with improved coping after patient safety event involvement are also components of a strong patient safety culture, suggesting that supportive patient safety cultures may reduce second victim–related trauma. A cross-sectional survey study was conducted to assess the influence of patient safety culture on second victim–related distress, in which associations among patient safety culture dimensions, organizational support, and second victim distress were investigated. Methods The Agency for Healthcare Research and Quality (AHRQ) Hospital Survey on Patient Safety Culture (HSOPSC) and the Second Victim Experience and Support Tool (SVEST), which was developed to assess organizational support and personal and professional distress after involvement in a patient safety event, were administered to nurses involved in direct patient care. Results Of 358 nurses, 155 (41%) responded, of whom 144 completed both surveys. Hierarchical linear regression demonstrated that the patient safety culture survey dimension nonpunitive response to errors was significantly associated with reductions in the second victim survey dimensions of psychological, physical, and professional distress (p <.001). As a mediator, organizational support fully explained the nonpunitive response to errors–physical distress and nonpunitive response to errors–professional distress relationships and partially explained the nonpunitive response to errors–psychological distress relationship. Conclusions A nonpunitive response to errors may mitigate the negative effects of involvement in a patient safety event by encouraging supportive interactions. Also, perceptions of second victim–related distress may be less severe when hospital cultures are characterized by a nonpunitive response to errors. Reducing punitive responses to error and encouraging supportive coworker, supervisor, and institutional interactions may be useful strategies to manage the severity of second victim experiences. PMID:27456420

  13. Developing and Validating a Tablet Version of an Illness Explanatory Model Interview for a Public Health Survey in Pune, India

    PubMed Central

    Giduthuri, Joseph G.; Maire, Nicolas; Joseph, Saju; Kudale, Abhay; Schaetti, Christian; Sundaram, Neisha; Schindler, Christian; Weiss, Mitchell G.

    2014-01-01

    Background Mobile electronic devices are replacing paper-based instruments and questionnaires for epidemiological and public health research. The elimination of a data-entry step after an interview is a notable advantage over paper, saving investigator time, decreasing the time lags in managing and analyzing data, and potentially improving the data quality by removing the error-prone data-entry step. Research has not yet provided adequate evidence, however, to substantiate the claim of fewer errors for computerized interviews. Methodology We developed an Android-based illness explanatory interview for influenza vaccine acceptance and tested the instrument in a field study in Pune, India, for feasibility and acceptability. Error rates for tablet and paper were compared with reference to the voice recording of the interview as gold standard to assess discrepancies. We also examined the preference of interviewers for the classical paper-based or the electronic version of the interview and compared the costs of research with both data collection devices. Results In 95 interviews with household respondents, total error rates with paper and tablet devices were nearly the same (2.01% and 1.99% respectively). Most interviewers indicated no preference for a particular device; but those with a preference opted for tablets. The initial investment in tablet-based interviews was higher compared to paper, while the recurring costs per interview were lower with the use of tablets. Conclusion An Android-based tablet version of a complex interview was developed and successfully validated. Advantages were not compromised by increased errors, and field research assistants with a preference preferred the Android device. Use of tablets may be more costly than paper for small samples and less costly for large studies. PMID:25233212

  14. Toward Successful Implementation of Speech Recognition Technology: A Survey of SRT Utilization Issues in Healthcare Settings.

    PubMed

    Clarke, Martina A; King, Joshua L; Kim, Min Soon

    2015-07-01

    To evaluate physician utilization of speech recognition technology (SRT) for medical documentation in two hospitals. A quantitative survey was used to collect data in the areas of practice, electronic equipment used for documentation, documentation created after providing care, and overall thoughts about and satisfaction with the SRT. The survey sample was from one rural and one urban facility in central Missouri. In addition, qualitative interviews were conducted with a chief medical officer and a physician champion regarding implementation issues, training, choice of SRT, and outcomes from their perspective. Seventy-one (60%) of the anticipated 125 surveys were returned. A total of 16 (23%) participants were practicing in internal medicine and 9 (13%) were practicing in family medicine. Fifty-six (79%) participants used a desktop computer and 14 (20%) used a laptop. SRT products from Nuance were the dominant SRT, used by 59 participants (83%). Windows operating systems (Microsoft, Redmond, WA) were used by more than 58 (82%) of the survey respondents. With regard to user experience, 42 (59%) participants experienced spelling and grammatical errors, 15 (21%) encountered clinical inaccuracy, 9 (13%) experienced word substitution, and 4 (6%) experienced misleading medical information. This study shows critical issues of inconsistency, unreliability, and dissatisfaction in the functionality and usability of SRT. This merits further attention to improve the functionality and usability of SRT for better adoption within varying healthcare settings.

  15. The Gulliver Effect: The Impact of Error in an Elephantine Subpopulation on Estimates for Lilliputian Subpopulations

    ERIC Educational Resources Information Center

    Micceri, Theodore; Parasher, Pradnya; Waugh, Gordon W.; Herreid, Charlene

    2009-01-01

    An extensive review of the research literature and a study comparing over 36,000 survey responses with archival true scores indicated that one should expect a minimum of at least three percent random error for the least ambiguous of self-report measures. The Gulliver Effect occurs when a small proportion of error in a sizable subpopulation exerts…

  16. Beyond the Total Score: A Preliminary Investigation into the Types of Phonological Awareness Errors Made by First Graders

    ERIC Educational Resources Information Center

    Hayward, Denyse V.; Annable, Caitlin D.; Fung, Jennifer E.; Williamson, Robert D.; Lovell-Johnston, Meridith A.; Phillips, Linda M.

    2017-01-01

    Current phonological awareness assessment procedures consider only the total score a child achieves. Such an approach may result in children who achieve the same total score receiving the same instruction even though the configuration of their errors represent fundamental knowledge differences. The purpose of this study was to develop a tool for…

  17. Tilt Error in Cryospheric Surface Radiation Measurements at High Latitudes: A Model Study

    NASA Astrophysics Data System (ADS)

    Bogren, W.; Kylling, A.; Burkhart, J. F.

    2015-12-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in-situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response foreoptic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 nm to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1°, 3°, and 5° can respectively introduce up to 2.6%, 7.7%, and 12.8% error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo.
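
    For the direct-beam component alone, the tilt error follows from incidence-angle geometry; the sketch below reproduces only that geometry (it ignores the diffuse component and spectral weighting, so its numbers differ somewhat from the modelled totals quoted above).

```python
import numpy as np

def direct_tilt_error(sza_deg, tilt_deg, rel_azimuth_deg):
    """Relative error in the direct-beam irradiance measured by a sensor tilted
    by `tilt_deg` from horizontal, for a solar zenith angle `sza_deg` and a
    sun-minus-tilt azimuth difference `rel_azimuth_deg` (degrees).
    Only the direct component is modelled; diffuse effects are ignored."""
    sza, tilt, daz = np.radians([sza_deg, tilt_deg, rel_azimuth_deg])
    cos_inc = np.cos(tilt) * np.cos(sza) + np.sin(tilt) * np.sin(sza) * np.cos(daz)
    return cos_inc / np.cos(sza) - 1.0

# SZA = 60 deg, sensor tilted 5 deg directly towards the sun:
# roughly +15% error on the direct beam alone
print(f"{direct_tilt_error(60, 5, 0):+.1%}")
```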

  18. A comparison of multiple indicator kriging and area-to-point Poisson kriging for mapping patterns of herbivore species abundance in Kruger National Park, South Africa

    PubMed Central

    Kerry, Ruth; Goovaerts, Pierre; Smit, Izak P.J.; Ingram, Ben R.

    2015-01-01

    Kruger National Park (KNP), South Africa, provides protected habitats for the unique animals of the African savannah. For the past 40 years, annual aerial surveys of herbivores have been conducted to aid management decisions based on (1) the spatial distribution of species throughout the park and (2) total species populations in a year. The surveys are extremely time consuming and costly. For many years, the whole park was surveyed, but in 1998 a transect survey approach was adopted. This is cheaper and less time consuming but leaves gaps in the data spatially. Also the distance method currently employed by the park only gives estimates of total species populations but not their spatial distribution. We compare the ability of multiple indicator kriging and area-to-point Poisson kriging to accurately map species distribution in the park. A leave-one-out cross-validation approach indicates that multiple indicator kriging makes poor estimates of the number of animals, particularly the few large counts, as the indicator variograms for such high thresholds are pure nugget. Poisson kriging was applied to the prediction of two types of abundance data: spatial density and proportion of a given species. Both Poisson approaches had standardized mean absolute errors (St. MAEs) of animal counts at least an order of magnitude lower than multiple indicator kriging. The spatial density, Poisson approach (1), gave the lowest St. MAEs for the most abundant species and the proportion, Poisson approach (2), did for the least abundant species. Incorporating environmental data into Poisson approach (2) further reduced St. MAEs. PMID:25729318

  20. Educational intervention together with an on-line quality control program achieve recommended analytical goals for bedside blood glucose monitoring in a 1200-bed university hospital.

    PubMed

    Sánchez-Margalet, Víctor; Rodriguez-Oliva, Manuel; Sánchez-Pozo, Cristina; Fernández-Gallardo, María Francisca; Goberna, Raimundo

    2005-01-01

    Portable meters for blood glucose concentrations are used at the patient's bedside, as well as by patients for self-monitoring of blood glucose. Even though most devices have important technological advances that decrease operator error, the analytical goals proposed for the performance of glucose meters have recently been changed by the American Diabetes Association (ADA) to reach <5% analytical error and <7.9% total error. We studied 80 meters throughout the Virgen Macarena Hospital and found that most devices had performance errors higher than 10%. The aim of the present study was to establish a new system to control portable glucose meters, together with an educational program for nurses, in a 1200-bed University Hospital to achieve the recommended analytical goals, so that we could improve the quality of diabetes care. We used portable glucose meters connected on-line to the laboratory after an educational program for nurses with responsibilities in point-of-care testing. We evaluated the system by assessing the total error of the glucometers using high- and low-level glucose control solutions. In a period of 6 months, we collected data from 5642 control samples obtained by 14 devices (Precision PCx) directly from the control program (QC manager). The average total error for the low-level glucose control (2.77 mmol/l) was 6.3% (range 5.5-7.6%), and even lower for the high-level glucose control (16.66 mmol/l), at 4.8% (range 4.1-6.5%). In conclusion, the performance of glucose meters used in our University Hospital with more than 1000 beds not only improved after the intervention, but the meters also achieved the analytical goals of the suggested ADA/National Academy of Clinical Biochemistry criteria for total error (<7.9% in the range 2.77-16.66 mmol/l glucose) and the optimal total error for high glucose concentrations of <5%, which will improve the quality of care of our patients.

  1. Medication Administration Practices of School Nurses.

    ERIC Educational Resources Information Center

    McCarthy, Ann Marie; Kelly, Michael W.; Reed, David

    2000-01-01

    Assessed medication administration practices among school nurses, surveying members of the National Association of School Nurses. Respondents were extremely concerned about medication administration. Errors in administering medications were reported by 48.5 percent of respondents, with missed doses the most common error. Most nurses followed…

  2. Adapting the Healthy Eating Index 2010 for the Canadian Population: Evidence from the Canadian Community Health Survey

    PubMed Central

    Ng, Alena Praneet; L’Abbé, Mary R.

    2017-01-01

    The Healthy Eating Index (HEI) is a diet quality index shown to be associated with reduced chronic disease risk. Older versions of the HEI have been adapted for Canadian populations; however, no Canadian modification of the Healthy Eating Index-2010 (HEI-2010) has been made. The aims of this study were: (a) to develop a Canadian adaptation of the HEI-2010 (i.e., Healthy Eating Index-Canada 2010 (HEI-C 2010)) by adapting the recommendations of the HEI-2010 to Canada’s Food Guide (CFG) 2007; (b) to evaluate the validity and reliability of the HEI-C 2010; and (c) to examine relationships between HEI-C 2010 scores with diet quality and the likelihood of being obese. Data from 12,805 participants (≥18 years) were obtained from the Canadian Community Health Survey Cycle 2.2. Weighted multivariate logistic regression was used to test the association between compliance to the HEI-C 2010 recommendations and the likelihood of being obese, adjusting for errors in self-reported dietary data. The total mean error-corrected HEI-C 2010 score was 50.85 ± 0.35 out of 100. Principal component analysis confirmed multidimensionality of the HEI-C 2010, while Cronbach’s α = 0.78 demonstrated internal reliability. Participants in the fourth quartile of the HEI-C 2010 with the healthiest diets were less likely to consume refined grains and empty calories and more likely to consume beneficial nutrients and foods (p-trend < 0.0001). Lower adherence to the index recommendations was inversely associated with the likelihood of being obese; this association strengthened after correction for measurement error (Odds Ratio: 1.41; 95% Confidence Interval: 1.17–1.71). Closer adherence to Canada’s Food Guide 2007 assessed through the HEI-C 2010 was associated with improved diet quality and reductions in the likelihood of obesity when energy intake and measurement errors were taken into account. Consideration of energy requirements and energy density in future updates of Canada’s Food Guide are important and necessary to ensure broader application and usability of dietary quality indexes developed based on this national nutrition guideline. PMID:28825674

  3. Adapting the Healthy Eating Index 2010 for the Canadian Population: Evidence from the Canadian National Nutrition Survey.

    PubMed

    Jessri, Mahsa; Ng, Alena Praneet; L'Abbé, Mary R

    2017-08-21

    The Healthy Eating Index (HEI) is a diet quality index shown to be associated with reduced chronic disease risk. Older versions of the HEI have been adapted for Canadian populations; however, no Canadian modification of the Healthy Eating Index-2010 (HEI-2010) has been made. The aims of this study were: (a) to develop a Canadian adaptation of the HEI-2010 (i.e., Healthy Eating Index-Canada 2010 (HEI-C 2010)) by adapting the recommendations of the HEI-2010 to Canada's Food Guide (CFG) 2007; (b) to evaluate the validity and reliability of the HEI-C 2010; and (c) to examine relationships between HEI-C 2010 scores with diet quality and the likelihood of being obese. Data from 12,805 participants (≥18 years) were obtained from the Canadian Community Health Survey Cycle 2.2. Weighted multivariate logistic regression was used to test the association between compliance to the HEI-C 2010 recommendations and the likelihood of being obese, adjusting for errors in self-reported dietary data. The total mean error-corrected HEI-C 2010 score was 50.85 ± 0.35 out of 100. Principal component analysis confirmed multidimensionality of the HEI-C 2010, while Cronbach's α = 0.78 demonstrated internal reliability. Participants in the fourth quartile of the HEI-C 2010 with the healthiest diets were less likely to consume refined grains and empty calories and more likely to consume beneficial nutrients and foods (p-trend < 0.0001). Lower adherence to the index recommendations was inversely associated with the likelihood of being obese; this association strengthened after correction for measurement error (Odds Ratio: 1.41; 95% Confidence Interval: 1.17-1.71). Closer adherence to Canada's Food Guide 2007 assessed through the HEI-C 2010 was associated with improved diet quality and reductions in the likelihood of obesity when energy intake and measurement errors were taken into account. Consideration of energy requirements and energy density in future updates of Canada's Food Guide are important and necessary to ensure broader application and usability of dietary quality indexes developed based on this national nutrition guideline.

  4. Accounting for nonsampling error in estimates of HIV epidemic trends from antenatal clinic sentinel surveillance

    PubMed Central

    Eaton, Jeffrey W.; Bao, Le

    2017-01-01

    Objectives The aim of the study was to propose and demonstrate an approach to allow additional nonsampling uncertainty about HIV prevalence measured at antenatal clinic sentinel surveillance (ANC-SS) in model-based inferences about trends in HIV incidence and prevalence. Design Mathematical model fitted to surveillance data with Bayesian inference. Methods We introduce a variance inflation parameter σ²_infl that accounts for the uncertainty of nonsampling errors in ANC-SS prevalence. It is additive to the sampling error variance. Three approaches are tested for estimating σ²_infl using ANC-SS and household survey data from 40 subnational regions in nine countries in sub-Saharan Africa, as defined in the UNAIDS 2016 estimates. Methods were compared using in-sample fit and out-of-sample prediction of ANC-SS data, fit to household survey prevalence data, and the computational implications. Results Introducing the additional variance parameter σ²_infl increased the error variance around ANC-SS prevalence observations by a median of 2.7 times (interquartile range 1.9–3.8). Using only sampling error in ANC-SS prevalence (σ²_infl = 0), coverage of 95% prediction intervals was 69% in out-of-sample prediction tests. This increased to 90% after introducing the additional variance parameter σ²_infl. The revised probabilistic model improved model fit to household survey prevalence and increased epidemic uncertainty intervals most during the early epidemic period before 2005. Estimating σ²_infl did not increase the computational cost of model fitting. Conclusions We recommend estimating nonsampling error in ANC-SS as an additional parameter in Bayesian inference using the Estimation and Projection Package (EPP) model. This approach may prove useful for incorporating other data sources, such as routine prevalence from prevention of mother-to-child transmission testing, into future epidemic estimates. PMID:28296801
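
    A schematic version of the variance-inflation idea, not the EPP code: the non-sampling variance σ²_infl is simply added to the (logit-scale, delta-method) sampling variance inside a Gaussian likelihood. All numbers below are invented.

```python
import numpy as np

def anc_loglik(logit_prev_model, observed_prev, n_sampled, sigma2_infl=0.0):
    """Gaussian log-likelihood for ANC-SS prevalence on the logit scale with an
    additive non-sampling variance term sigma2_infl.

    Total variance = sampling variance (delta method on the logit scale)
    + sigma2_infl; setting sigma2_infl = 0 recovers the sampling-error-only
    model.  A schematic stand-in for the EPP likelihood.
    """
    p = np.asarray(observed_prev, float)
    n = np.asarray(n_sampled, float)
    sampling_var_logit = 1.0 / (n * p * (1.0 - p))   # var of logit(p-hat)
    total_var = sampling_var_logit + sigma2_infl
    resid = np.log(p / (1.0 - p)) - logit_prev_model
    return float(np.sum(-0.5 * (np.log(2 * np.pi * total_var)
                                + resid**2 / total_var)))

# Example: the model predicts 20% prevalence; a clinic observes 24% from n = 300
model_logit = np.log(0.2 / 0.8)
print(anc_loglik(model_logit, [0.24], [300], sigma2_infl=0.0))
print(anc_loglik(model_logit, [0.24], [300], sigma2_infl=0.1))
```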

  5. Are health care provider organizations ready to tackle diagnostic error? A survey of Leapfrog-participating hospitals.

    PubMed

    Newman-Toker, David E; Austin, J Matthew; Derk, Jordan; Danforth, Melissa; Graber, Mark L

    2017-06-27

    A 2015 National Academy of Medicine report on improving diagnosis in health care made recommendations for direct action by hospitals and health systems. Little is known about how health care provider organizations are addressing diagnostic safety/quality. This study is an anonymous online survey of safety professionals from US hospitals and health systems in July-August 2016. The survey was sent to those attending a Leapfrog Group webinar on misdiagnosis (n=188). The instrument was focused on knowledge, attitudes, and capability to address diagnostic errors at the institutional level. Overall, 61 (32%) responded, including community hospitals (42%), integrated health networks (25%), and academic centers (21%). Awareness was high, but commitment and capability were low (31% of leaders understand the problem; 28% have sufficient safety resources; and 25% have made diagnosis a top institutional safety priority). Ongoing efforts to improve diagnostic safety were sparse and mostly included root cause analysis and peer review feedback around diagnostic errors. The top three barriers to addressing diagnostic error were lack of awareness of the problem, lack of measures of diagnostic accuracy and error, and lack of feedback on diagnostic performance. The top two tools viewed as critically important for locally tackling the problem were routine feedback on diagnostic performance and culture change to emphasize diagnostic safety. Although hospitals and health systems appear to be aware of diagnostic errors as a major safety imperative, most organizations (even those that appear to be making a strong commitment to patient safety) are not yet doing much to improve diagnosis. Going forward, efforts to activate health care organizations will be essential to improving diagnostic safety.

  6. Statistical and systematic errors in the measurement of weak-lensing Minkowski functionals: Application to the Canada-France-Hawaii Lensing Survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shirasaki, Masato; Yoshida, Naoki, E-mail: masato.shirasaki@utap.phys.s.u-tokyo.ac.jp

    2014-05-01

    The measurement of cosmic shear using weak gravitational lensing is a challenging task that involves a number of complicated procedures. We study in detail the systematic errors in the measurement of weak-lensing Minkowski Functionals (MFs). Specifically, we focus on systematics associated with galaxy shape measurements, photometric redshift errors, and shear calibration correction. We first generate mock weak-lensing catalogs that directly incorporate the actual observational characteristics of the Canada-France-Hawaii Lensing Survey (CFHTLenS). We then perform a Fisher analysis using the large set of mock catalogs for various cosmological models. We find that the statistical error associated with the observational effects degrades the cosmological parameter constraints by a factor of a few. The Subaru Hyper Suprime-Cam (HSC) survey with a sky coverage of ~1400 deg² will constrain the dark energy equation-of-state parameter with an error of Δw_0 ~ 0.25 by the lensing MFs alone, but biases induced by the systematics can be comparable to the 1σ error. We conclude that the lensing MFs are powerful statistics beyond the two-point statistics only if well-calibrated measurement of both the redshifts and the shapes of source galaxies is performed. Finally, we analyze the CFHTLenS data to explore the ability of the MFs to break degeneracies between a few cosmological parameters. Using a combined analysis of the MFs and the shear correlation function, we derive the matter density Ω_m0 = 0.256 +0.054/−0.046.

  7. An Empirical State Error Covariance Matrix Orbit Determination Example

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix that fully includes all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether or not the source is anticipated. It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the problem studied here, a truth model making use of gravity with spherical, J2 and J4 terms plus a standard exponential-type atmosphere with simple diurnal and random walk components is used. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements with simple normally distributed measurement errors, and are assumed to have full horizon-to-horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors; no investigation of specific orbital elements is undertaken. The total vector analyses examine the chi-square values of the error in the difference between the estimated state and the true modeled state using both the empirical and theoretical error covariance matrices for each scenario.
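
    One plausible reading of the construction described above, expressed as a sketch: scale the theoretical weighted-least-squares covariance by the average weighted residual variance, so that unmodelled errors visible in the residuals inflate the state covariance. The matrices below are toy values, not the paper's scenarios.

```python
import numpy as np

def empirical_covariance(H, W, residuals):
    """Empirical state error covariance for a weighted least-squares fit.

    The theoretical covariance is (H^T W H)^{-1}; here it is scaled by the
    average weighted residual variance J = r^T W r / m, so that unmodelled
    errors appearing in the residuals inflate the covariance.
    """
    H, W = np.asarray(H, float), np.asarray(W, float)
    r = np.asarray(residuals, float)
    m = r.size
    P_theory = np.linalg.inv(H.T @ W @ H)
    J_avg = (r @ W @ r) / m
    return J_avg * P_theory

# Toy example: 2 states, 4 range-like measurements
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
W = np.diag([1.0, 1.0, 0.5, 0.5])
residuals = np.array([0.3, -0.2, 0.5, -0.4])   # larger than the assumed noise
print(empirical_covariance(H, W, residuals))
```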

  8. Comparative analysis of data base management systems

    NASA Technical Reports Server (NTRS)

    Smith, R.

    1983-01-01

    A study to determine whether the Remote File Inquiry (RFI) system would handle the future requirements of the user community is discussed. RFI is a locally written and locally maintained on-line query/update package. The current and future on-line requirements of the user community were studied, with additional consideration given to the types of data structuring the users required. The survey indicated that the features of greatest benefit were: sort, subtotals, totals, record selection, storage of queries, global updating, and the ability to page break. The major deficiencies were: only one level of hierarchy, excessive response time, software unreliability, difficulty in adding, deleting, and modifying records, complicated error messages, and the lack of ability to perform interfield comparisons. Missing features users required were: formatted screens, interfield comparisons, interfield arithmetic, multiple file access, security, and data integrity. The survey team recommended that Kennedy Space Center move forward to state-of-the-art software: a Data Base Management System that is thoroughly tested and easy to implement and use.

  9. Relationship between long working hours and depression: a 3-year longitudinal study of clerical workers.

    PubMed

    Amagasa, Takashi; Nakayama, Takeo

    2013-08-01

    To clarify how long working hours affect the likelihood of current and future depression. Using data from four repeated measurements collected from 218 clerical workers, four models associating work-related factors with the depressive mood scale were established. The final model was constructed after comparing and testing the goodness-of-fit index using structural equation modeling. Multiple logistic regression analysis was also performed. The final model showed the best fit (normed fit index = 0.908; goodness-of-fit index = 0.936; root-mean-square error of approximation = 0.018). Its standardized total effect indicated that long working hours affected depression at the time of evaluation and 1 to 3 years later. The odds ratio for depression risk was 14.7 in employees who did not work long hours according to the initial survey but did according to the second survey. Long working hours increase current and future risks of depression.

  10. The South African fertility decline: Evidence from two censuses and a Demographic and Health Survey.

    PubMed

    Moultrie, Tom A; Timaeus, Ian M

    2003-11-01

    Inadequate data and apartheid policies have meant that, until recently, most demographers have not had the opportunity to investigate the level of, and trend in, the fertility of South African women. The 1996 South Africa Census and the 1998 Demographic and Health Survey provide the first widely available and nationally representative demographic data on South Africa since 1970. Using these data, this paper describes the South African fertility decline from 1955 to 1996. Having identified and adjusted for several errors in the 1996 Census data, the paper argues that total fertility at that time was 3.2 children per woman nationally, and 3.5 children per woman for African South Africans. These levels are lower than in any other sub-Saharan African country. We show also that fertility in South Africa has been falling since the 1960s. Thus, fertility transition predates the establishment of a family planning programme in the country in 1974.

  11. Derivation of formulas for root-mean-square errors in location, orientation, and shape in triangulation solution of an elongated object in space

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1974-01-01

    Formulas are derived for the root-mean-square (rms) displacement, slope, and curvature errors in an azimuth-elevation image trace of an elongated object in space, as functions of the number and spacing of the input data points and the rms elevation error in the individual input data points from a single observation station. Also, formulas are derived for the total rms displacement, slope, and curvature error vectors in the triangulation solution of an elongated object in space due to the rms displacement, slope, and curvature errors, respectively, in the azimuth-elevation image traces from different observation stations. The total rms displacement, slope, and curvature error vectors provide useful measure numbers for determining the relative merits of two or more different triangulation procedures applicable to elongated objects in space.
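
    The derived formulas themselves are not reproduced in the abstract; the sketch below only illustrates the simplest possible combination rule, adding per-station rms error contributions in quadrature under an assumption of independent station errors, and is not the paper's derivation.

        import numpy as np

        def total_rms_error(per_station_rms):
            """Root-sum-square combination of per-station rms displacement
            (or slope, or curvature) error contributions, assuming the
            station errors are independent."""
            contributions = np.asarray(per_station_rms, dtype=float)
            return float(np.sqrt(np.sum(contributions ** 2)))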

  12. Residents' Ratings of Their Clinical Supervision and Their Self-Reported Medical Errors: Analysis of Data From 2009.

    PubMed

    Baldwin, DeWitt C; Daugherty, Steven R; Ryan, Patrick M; Yaghmour, Nicholas A; Philibert, Ingrid

    2018-04-01

    Medical errors and patient safety are major concerns for the medical and medical education communities. Improving clinical supervision for residents is important in avoiding errors, yet little is known about how residents perceive the adequacy of their supervision and how this relates to medical errors and other education outcomes, such as learning and satisfaction. We analyzed data from a 2009 survey of residents in 4 large specialties regarding the adequacy and quality of supervision they receive as well as associations with self-reported data on medical errors and residents' perceptions of their learning environment. Residents' reports of working without adequate supervision were lower than data from a 1999 survey for all 4 specialties, and residents were least likely to rate "lack of supervision" as a problem. While few residents reported that they received inadequate supervision, problems with supervision were negatively correlated with sufficient time for clinical activities, overall ratings of the residency experience, and attending physicians as a source of learning. Problems with supervision were positively correlated with resident reports that they had made a significant medical error, had been belittled or humiliated, or had observed others falsifying medical records. Although working without supervision was not a pervasive problem in 2009, when it happened, it appeared to have negative consequences. The association between inadequate supervision and medical errors is of particular concern.

  13. Gamma-ray burster counterparts - Radio

    NASA Technical Reports Server (NTRS)

    Schaefer, Bradley E.; Cline, Thomas L.; Desai, U. D.; Teegarden, B. J.; Atteia, J.-L.; Barat, C.; Estulin, I. V.; Evans, W. D.; Fenimore, E. E.; Hurley, K.

    1989-01-01

    Many observers and theorists have suggested that gamma-ray bursters (GRBs) are related to highly magnetized, rotating neutron stars, in which case an analogy with pulsars implies that GRBs would be prodigious emitters of polarized radio emission during quiescence. The paper reports on a survey, conducted with the Very Large Array radio telescope, of 10 small GRB error regions for quiescent radio emission at wavelengths of 2, 6, and 20 cm. The sensitivity of the survey varied from 0.1 to 0.8 mJy. The observations did indeed reveal four radio sources inside the GRB error regions.

  14. Implementing a mixed-mode design for collecting administrative records: striking a balance between quality and burden

    EIA Publications

    2012-01-01

    RECS relies on actual records from energy suppliers to produce robust survey estimates of household energy consumption and expenditures. During the RECS Energy Supplier Survey (ESS), energy billing records are collected from the companies that supply electricity, natural gas, fuel oil/kerosene, and propane (LPG) to the interviewed households. As Federal agencies expand the use of administrative records to enhance, replace, or evaluate survey data, EIA has explored more flexible, reliable and efficient techniques to collect energy billing records. The ESS has historically been a mail-administered survey, but EIA introduced web data collection with the 2009 RECS ESS. In that survey, energy suppliers self-selected their reporting mode among several options: standardized paper form, on-line fillable form or spreadsheet, or failing all else, a nonstandard format of their choosing. In this paper, EIA describes where reporting mode appears to influence the data quality. We detail the reporting modes, the embedded and post-hoc quality control and consistency checks that were performed, the extent of detectable errors, and the methods used for correcting data errors. We explore by mode the levels of unit and item nonresponse, number of errors, and corrections made to the data. In summary, we find notable differences in data quality between modes and analyze where the benefits of offering these new modes outweigh the "costs".

  15. Tilt error in cryospheric surface radiation measurements at high latitudes: a model study

    NASA Astrophysics Data System (ADS)

    Bogren, Wiley Steven; Faulkner Burkhart, John; Kylling, Arve

    2016-03-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can introduce up to 2.7%, 8.1%, and 13.5% error, respectively, into the measured irradiance, and similar errors into the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can also persist in integrated daily irradiance and albedo. Simulations including a cloud layer demonstrate decreasing tilt error with increasing cloud optical depth.
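
    The dominant, direct-beam part of the tilt error can be sketched with plain geometry: a level sensor should record the direct-normal irradiance scaled by the cosine of the solar zenith angle, whereas a tilted sensor records it scaled by the cosine of the actual incidence angle. The sketch below implements only that geometric ratio and ignores diffuse radiation and spectral weighting, so its numbers will differ somewhat from the full simulation quoted above.

        import numpy as np

        def direct_beam_tilt_error(sza_deg, tilt_deg, sun_az_deg, tilt_az_deg):
            """Relative error in the direct-beam irradiance measured by a
            cosine-response sensor tilted by tilt_deg, compared with a
            perfectly level sensor (geometry only)."""
            sza, tilt = np.radians(sza_deg), np.radians(tilt_deg)
            daz = np.radians(sun_az_deg - tilt_az_deg)
            cos_inc = (np.cos(sza) * np.cos(tilt)
                       + np.sin(sza) * np.sin(tilt) * np.cos(daz))
            return cos_inc / np.cos(sza) - 1.0

        # worst case: sensor tilted directly toward the sun at 60 deg solar zenith
        for tilt in (1, 3, 5):
            print(tilt, direct_beam_tilt_error(60, tilt, 180, 180))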

  16. Concomitant prescribing and dispensing errors at a Brazilian hospital: a descriptive study

    PubMed Central

    Silva, Maria das Dores Graciano; Rosa, Mário Borges; Franklin, Bryony Dean; Reis, Adriano Max Moreira; Anchieta, Lêni Márcia; Mota, Joaquim Antônio César

    2011-01-01

    OBJECTIVE: To analyze the prevalence and types of prescribing and dispensing errors occurring with high-alert medications and to propose preventive measures to avoid errors with these medications. INTRODUCTION: The prevalence of adverse events in health care has increased, and medication errors are probably the most common cause of these events. Pediatric patients are known to be a high-risk group and are an important target in medication error prevention. METHODS: Observers collected data on prescribing and dispensing errors occurring with high-alert medications for pediatric inpatients in a university hospital. In addition to classifying the types of error that occurred, we identified cases of concomitant prescribing and dispensing errors. RESULTS: One or more prescribing errors, totaling 1,632 errors, were found in 632 (89.6%) of the 705 high-alert medications that were prescribed and dispensed. We also identified at least one dispensing error in each high-alert medication dispensed, totaling 1,707 errors. Among these dispensing errors, 723 (42.4%) content errors occurred concomitantly with the prescribing errors. A subset of dispensing errors may have occurred because of poor prescription quality. The observed concomitancy should be examined carefully because improvements in the prescribing process could potentially prevent these problems. CONCLUSION: The system of drug prescribing and dispensing at the hospital investigated in this study should be improved by incorporating the best practices of medication safety and preventing medication errors. High-alert medications may be used as triggers for improving the safety of the drug-utilization system. PMID:22012039

  17. Hospital-based transfusion error tracking from 2005 to 2010: identifying the key errors threatening patient transfusion safety.

    PubMed

    Maskens, Carolyn; Downie, Helen; Wendt, Alison; Lima, Ana; Merkley, Lisa; Lin, Yulia; Callum, Jeannie

    2014-01-01

    This report provides a comprehensive analysis of transfusion errors occurring at a large teaching hospital and aims to determine key errors that are threatening transfusion safety, despite implementation of safety measures. Errors were prospectively identified from 2005 to 2010. Error data were coded on a secure online database called the Transfusion Error Surveillance System. Errors were defined as any deviation from established standard operating procedures. Errors were identified by clinical and laboratory staff. Denominator data for volume of activity were used to calculate rates. A total of 15,134 errors were reported with a median number of 215 errors per month (range, 85-334). Overall, 9083 (60%) errors occurred on the transfusion service and 6051 (40%) on the clinical services. In total, 23 errors resulted in patient harm: 21 of these errors occurred on the clinical services and two in the transfusion service. Of the 23 harm events, 21 involved inappropriate use of blood. Errors with no harm were 657 times more common than events that caused harm. The most common high-severity clinical errors were sample labeling (37.5%) and inappropriate ordering of blood (28.8%). The most common high-severity error in the transfusion service was sample accepted despite not meeting acceptance criteria (18.3%). The cost of product and component loss due to errors was $593,337. Errors occurred at every point in the transfusion process, with the greatest potential risk of patient harm resulting from inappropriate ordering of blood products and errors in sample labeling. © 2013 American Association of Blood Banks (CME).

  18. Survey Data for Geomagnetic Field Modelling

    NASA Technical Reports Server (NTRS)

    Barraclough, D. R.; Macmillan, S.

    1992-01-01

    The survey data discussed here are based on observations made relatively recently at points on land. A special subset of land survey data consists of those made at specially designated sites known as repeat stations. This class of data will be discussed in another part of this document (Barton, 1991b), so only the briefest of references will be made to repeat stations here. This discussion of 'ordinary' land survey data begins with a description of the spatial and temporal distributions of available survey data based on observations made since 1900. (The reason for this rather arbitrary choice of cut-off date is that this was the value used in the production of the computer file of magnetic survey data (land, sea, air, satellite, rocket) that is the primary source of data for geomagnetic main-field modeling). This is followed by a description of the various types of error to which these survey data are, or may be, subject and a discussion of the likely effects of such errors on field models produced from the data. Finally, there is a short section on the availability of geomagnetic survey data, which also describes how the data files are maintained.

  19. Optimization of multimagnetometer systems on a spacecraft

    NASA Technical Reports Server (NTRS)

    Neubauer, F. M.

    1975-01-01

    The problem of optimizing the position of magnetometers along a boom of given length to yield a minimized total error is investigated. The discussion is limited to at most four magnetometers, which seems to be a practical limit due to weight, power, and financial considerations. The outlined error analysis is applied to some illustrative cases. The optimal magnetometer locations, for which the total error is minimum, are computed for given boom length, instrument errors, and very conservative magnetic field models characteristic for spacecraft with only a restricted or ineffective magnetic cleanliness program. It is shown that the error contribution by the magnetometer inaccuracy is increased as the number of magnetometers is increased, whereas the spacecraft field uncertainty is diminished by an appreciably larger amount.
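
    The trade-off described above can be illustrated with a brute-force search: model the residual spacecraft field as an unmodeled term that falls off faster than the fitted dipole, propagate both the instrument noise and that model error through the ambient-field estimator, and scan candidate sensor positions along the boom. All constants (boom length, noise level, spacecraft-field magnitude) and the scalar, dipole-plus-quadrupole simplification below are assumptions for illustration, not the paper's error model.

        import numpy as np
        from itertools import combinations

        BOOM = 4.0         # boom length, m (assumed)
        SIGMA_INST = 0.1   # per-magnetometer instrument error, nT (assumed)
        QUAD = 50.0        # unmodeled spacecraft field at 1 m, falling off as r**-4, nT (assumed)

        def ambient_field_errors(r):
            """Fit B(r) = B_amb + m * r**-3 by least squares and return the
            instrument-noise and model-error contributions to the B_amb estimate."""
            A = np.column_stack([np.ones_like(r), r ** -3.0])
            c = np.linalg.pinv(A)[0]                 # row mapping measurements -> B_amb
            inst = SIGMA_INST * np.linalg.norm(c)    # random instrument contribution
            model = abs(c @ (QUAD * r ** -4.0))      # bias from the unmodeled r**-4 term
            return inst, model

        grid = np.linspace(0.5, BOOM, 36)
        best = min(((np.hypot(*ambient_field_errors(np.array(p))), p)
                    for p in combinations(grid, 2)), key=lambda t: t[0])
        print("best total error %.3f nT at positions %s" % best)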

  20. Development and Application of Regression Models for Estimating Nutrient Concentrations in Streams of the Conterminous United States, 1992-2001

    USGS Publications Warehouse

    Spahr, Norman E.; Mueller, David K.; Wolock, David M.; Hitt, Kerie J.; Gronberg, JoAnn M.

    2010-01-01

    Data collected for the U.S. Geological Survey National Water-Quality Assessment program from 1992-2001 were used to investigate the relations between nutrient concentrations and nutrient sources, hydrology, and basin characteristics. Regression models were developed to estimate annual flow-weighted concentrations of total nitrogen and total phosphorus using explanatory variables derived from currently available national ancillary data. Different total-nitrogen regression models were used for agricultural (25 percent or more of basin area classified as agricultural land use) and nonagricultural basins. Atmospheric, fertilizer, and manure inputs of nitrogen, percent sand in soil, subsurface drainage, overland flow, mean annual precipitation, and percent undeveloped area were significant variables in the agricultural basin total nitrogen model. Significant explanatory variables in the nonagricultural total nitrogen model were total nonpoint-source nitrogen input (sum of nitrogen from manure, fertilizer, and atmospheric deposition), population density, mean annual runoff, and percent base flow. The concentrations of nutrients derived from regression (CONDOR) models were applied to drainage basins associated with the U.S. Environmental Protection Agency (USEPA) River Reach File (RF1) to predict flow-weighted mean annual total nitrogen concentrations for the conterminous United States. The majority of stream miles in the Nation have predicted concentrations less than 5 milligrams per liter. Concentrations greater than 5 milligrams per liter were predicted for a broad area extending from Ohio to eastern Nebraska, areas spatially associated with greater application of fertilizer and manure. Probabilities that mean annual total-nitrogen concentrations exceed the USEPA regional nutrient criteria were determined by incorporating model prediction uncertainty. In all nutrient regions where criteria have been established, there is at least a 50 percent probability of exceeding the criteria in more than half of the stream miles. Dividing calibration sites into agricultural and nonagricultural groups did not improve the explanatory capability for total phosphorus models. The group of explanatory variables that yielded the lowest model error for mean annual total phosphorus concentrations includes phosphorus input from manure, population density, amounts of range land and forest land, percent sand in soil, and percent base flow. However, the large unexplained variability and associated model error precluded the use of the total phosphorus model for nationwide extrapolations.
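
    A regression of this general type can be sketched as a log-linear model fitted by least squares, with the fitted residual error reused to state the probability that a basin exceeds a nutrient criterion. The explanatory variables, the lognormal residual assumption, and the function names below are illustrative; this is not the published CONDOR model.

        import numpy as np
        from math import erf, log, sqrt

        def fit_tn_model(X, conc):
            """Fit log(concentration) = b0 + X @ b by ordinary least squares;
            X holds basin attributes (one column per explanatory variable)."""
            A = np.column_stack([np.ones(len(conc)), X])
            b, *_ = np.linalg.lstsq(A, np.log(conc), rcond=None)
            resid = np.log(conc) - A @ b
            se = np.sqrt(resid @ resid / (len(conc) - A.shape[1]))
            return b, se

        def exceedance_probability(x_new, b, se, criterion):
            """Probability that the predicted concentration exceeds a criterion,
            treating the model residuals as lognormal."""
            mu = b[0] + float(np.dot(x_new, b[1:]))
            z = (log(criterion) - mu) / se
            return 0.5 * (1.0 - erf(z / sqrt(2.0)))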

  1. Lexico-Semantic Errors of the Learners of English: A Survey of Standard Seven Keiyo-Speaking Primary School Pupils in Keiyo District, Kenya

    ERIC Educational Resources Information Center

    Jeptarus, Kipsamo E.; Ngene, Patrick K.

    2016-01-01

    The purpose of this research was to study the Lexico-semantic errors of the Keiyo-speaking standard seven primary school learners of English as a Second Language (ESL) in Keiyo District, Kenya. This study was guided by two related theories: Error Analysis Theory/Approach by Corder (1971) which approaches L2 learning through a detailed analysis of…

  2. Self-reported medical, medication and laboratory error in eight countries: risk factors for chronically ill adults.

    PubMed

    Scobie, Andrea

    2011-04-01

    To identify risk factors associated with self-reported medical, medication and laboratory error in eight countries. The Commonwealth Fund's 2008 International Health Policy Survey of chronically ill patients in eight countries. None. A multi-country telephone survey was conducted between 3 March and 30 May 2008 with patients in Australia, Canada, France, Germany, the Netherlands, New Zealand, the UK and the USA who self-reported being chronically ill. A bivariate analysis was performed to determine significant explanatory variables of medical, medication and laboratory error (P < 0.01) for inclusion in a binary logistic regression model. The final regression model included eight risk factors for self-reported error: age 65 and under, education level of some college or less, presence of two or more chronic conditions, high prescription drug use (four+ drugs), four or more doctors seen within 2 years, a care coordination problem, poor doctor-patient communication and use of an emergency department. Risk factors with the greatest ability to predict experiencing an error encompassed issues with coordination of care and provider knowledge of a patient's medical history. The identification of these risk factors could help policymakers and organizations to proactively reduce the likelihood of error through greater examination of system- and organization-level practices.
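
    The modelling step described above amounts to a binary logistic regression of self-reported error on the candidate risk factors, with adjusted odds ratios obtained by exponentiating the coefficients. A minimal sketch follows; the data frame and column names are hypothetical placeholders for the survey variables, not the study's actual coding.

        import numpy as np
        import statsmodels.api as sm

        RISK_FACTORS = ["age_65_or_under", "some_college_or_less", "two_plus_conditions",
                        "four_plus_drugs", "four_plus_doctors", "care_coordination_problem",
                        "poor_communication", "ed_visit"]

        def error_risk_model(df):
            """df: pandas DataFrame with 0/1 risk-factor indicators and a 0/1
            outcome column 'reported_error'. Returns the fit and odds ratios."""
            X = sm.add_constant(df[RISK_FACTORS])
            fit = sm.Logit(df["reported_error"], X).fit(disp=False)
            return fit, np.exp(fit.params)      # adjusted odds ratio per factor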

  3. Continental-Scale Mapping of Adelie Penguin Colonies from Landsat Imagery

    NASA Technical Reports Server (NTRS)

    Schwaller, Mathew R.; Southwell, Colin; Emmerson, Louise

    2013-01-01

    Breeding distribution of the Adélie penguin, Pygoscelis adeliae, was surveyed with Landsat-7 Enhanced Thematic Mapper Plus (ETM+) data in an area covering approximately 330° of longitude along the coastline of Antarctica. An algorithm was designed to minimize radiometric noise and to retrieve Adélie penguin colony location and spatial extent from the ETM+ data. In all, 9143 individual pixels were classified as belonging to an Adélie penguin colony class out of the entire dataset of 195 ETM+ scenes, where the dimension of each pixel is 30 m by 30 m, and each scene is approximately 180 km by 180 km. Pixel clustering identified a total of 187 individual Adélie penguin colonies, ranging in size from a single pixel (900 sq m) to a maximum of 875 pixels (0.788 sq km). Colony retrievals have a very low error of commission, on the order of 1% or less, and the error of omission was estimated to be 3% to 4% by population, based on comparisons with direct observations from surveys across east Antarctica. Thus, the Landsat retrievals successfully located Adélie penguin colonies that accounted for 96 to 97% of the regional population used as ground truth. Geographic coordinates and the spatial extent of each colony retrieved from the Landsat data are publicly available. Regional analysis found several areas where the Landsat retrievals suggest populations that are significantly larger than published estimates. Six Adélie penguin colonies were found that are believed to be previously unreported in the literature.
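
    The pixel-clustering step can be sketched with a standard connected-component labelling of the classified image; the radiometric noise suppression and classification themselves are not reproduced here, and the use of 8-connectivity is an assumption.

        import numpy as np
        from scipy import ndimage

        def colony_inventory(colony_mask, pixel_area_m2=900.0):
            """Group contiguous colony-class pixels into individual colonies.
            colony_mask: 2-D boolean array from the spectral classification."""
            structure = np.ones((3, 3), dtype=bool)            # 8-connected clustering
            labels, n_colonies = ndimage.label(colony_mask, structure=structure)
            counts = np.bincount(labels.ravel())[1:]            # pixels per colony (skip background)
            areas_km2 = counts * pixel_area_m2 / 1e6
            return n_colonies, counts, areas_km2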

  4. Low aerial imagery - an assessment of georeferencing errors and the potential for use in environmental inventory

    NASA Astrophysics Data System (ADS)

    Smaczyński, Maciej; Medyńska-Gulij, Beata

    2017-06-01

    Unmanned aerial vehicles are increasingly being used in close range photogrammetry. Real-time observation of the Earth's surface and the photogrammetric images obtained are used as material for surveying and environmental inventory. The following study was conducted on a small area (approximately 1 ha). In such cases, the classical method of topographic mapping is not accurate enough, while the geodetic method of topographic surveying is an overly precise measurement technique for the purpose of inventorying natural environment components. This study proposes using unmanned aerial vehicle technology and tying the obtained images to a control point network established with the aid of GNSS technology. Georeferencing the acquired images and using them to create a photogrammetric model of the studied area enabled calculations that yielded a total root mean square error below 9 cm. Comparing the real lengths of the vectors connecting the control points with their lengths calculated from the photogrammetric model made it possible to confirm the calculated RMSE and demonstrate the usefulness of UAV technology for observing terrain components for the purpose of environmental inventory. Such environmental components include, among others, elements of road infrastructure and green areas, but also changes in the location of moving pedestrians and vehicles, as well as other changes in the natural environment that are not registered on classical base maps or topographic maps.
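
    The vector-length check described above, comparing surveyed control-point distances against the same distances taken from the photogrammetric model, can be sketched as follows; the coordinate arrays are assumed inputs, and the paper's full accuracy assessment is broader than this single statistic.

        import numpy as np

        def vector_length_rmse(p_survey, p_model):
            """RMSE of inter-point distances between GNSS-surveyed control points
            (p_survey) and the corresponding photogrammetric-model points
            (p_model); both are (n, 3) arrays in metres."""
            pairs = np.triu_indices(len(p_survey), k=1)
            d_survey = np.linalg.norm(p_survey[:, None] - p_survey[None, :], axis=-1)[pairs]
            d_model = np.linalg.norm(p_model[:, None] - p_model[None, :], axis=-1)[pairs]
            return float(np.sqrt(np.mean((d_survey - d_model) ** 2)))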

  5. HUBBLE SPACE TELESCOPE/ADVANCED CAMERA FOR SURVEYS OBSERVATIONS OF EUROPA'S ATMOSPHERIC ULTRAVIOLET EMISSION AT EASTERN ELONGATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saur, Joachim; Roth, Lorenz; Schilling, Nico

    2011-09-10

    We report results of a Hubble Space Telescope (HST) campaign with the Advanced Camera for Surveys to observe Europa at eastern elongation, i.e., Europa's leading side, on 2008 June 29. With five consecutive HST orbits, we constrain Europa's atmospheric O I 1304 Å and O I 1356 Å emissions using the prism PR130L. The total emissions of both oxygen multiplets range between 132 ± 14 and 226 ± 14 Rayleigh. An additional systematic error with values on the same order as the statistical errors may be due to uncertainties in modeling the reflected light from Europa's surface. The total emission also shows a clear dependence on Europa's position with respect to Jupiter's magnetospheric plasma sheet. We derive a lower limit for the O2 column density of 6 × 10^18 m^-2. Previous observations of Europa's atmosphere with the Space Telescope Imaging Spectrograph in 1999 of Europa's trailing side show an enigmatic surplus of radiation on the anti-Jovian side within the disk of Europa. With emission from a radially symmetric atmosphere as a reference, we searched for an anti-Jovian versus sub-Jovian asymmetry with respect to the central meridian on the leading side and found none. Likewise, we searched for departures from a radially symmetric atmospheric emission and found an emission surplus centered around 90 deg west longitude, for which plausible mechanisms exist. Previous work on the possibility of plumes on Europa due to tidally driven shear heating identified the longitudes with the strongest local strain rates, which might be consistent with the longitudes of maximum UV emission. Alternatively, asymmetries in Europa's UV emission can also be caused by inhomogeneous surface properties, an optically thick atmospheric contribution of atomic oxygen, and/or by Europa's complex plasma interaction with Jupiter's magnetosphere.

  6. Seeing in the Dark: Weak Lensing from the Sloan Digital Sky Survey

    NASA Astrophysics Data System (ADS)

    Huff, Eric Michael

    Statistical weak lensing by large-scale structure (cosmic shear) is a promising cosmological tool, which has motivated the design of several large upcoming astronomical surveys. This Thesis presents a measurement of cosmic shear using coadded Sloan Digital Sky Survey (SDSS) imaging in 168 square degrees of the equatorial region, with r < 23.5 and i < 22.5, a source number density of 2.2 per arcmin^2, and a median redshift of z_med = 0.52. These coadds were generated using a new rounding kernel method that was intended to minimize systematic errors in the lensing measurement due to coherent PSF anisotropies that are otherwise prevalent in the SDSS imaging data. Measurements of cosmic shear out to angular separations of 2 degrees are presented, along with systematics tests of the catalog generation and shear measurement steps that demonstrate that these results are dominated by statistical rather than systematic errors. Assuming a cosmological model corresponding to WMAP7 (Komatsu et al., 2011) and allowing only the amplitude of matter fluctuations σ8 to vary, the best-fit value of the amplitude of matter fluctuations is σ8 = 0.636 +0.109/-0.154 (1σ); without systematic errors this would be σ8 = 0.636 +0.099/-0.137 (1σ). Assuming a flat ΛCDM model, the combined constraints with WMAP7 are σ8 = 0.784 +0.028/-0.026 (1σ). The 2σ error range is 14 percent smaller than WMAP7 alone. Aside from the intrinsic value of such cosmological constraints from the growth of structure, some important lessons are identified for upcoming surveys that may face similar issues when combining multi-epoch data to measure cosmic shear. Motivated by the challenges faced in the cosmic shear measurement, two new lensing probes are suggested for increasing the available weak lensing signal. Both use galaxy scaling relations to control for scatter in lensing observables. The first employs a version of the well-known fundamental plane relation for early-type galaxies. This modified "photometric fundamental plane" replaces velocity dispersions with photometric galaxy properties, thus obviating the need for spectroscopic data. We present the first detection of magnification using this method by applying it to photometric catalogs from the Sloan Digital Sky Survey. This analysis shows that the derived magnification signal is comparable to that available from conventional methods using gravitational shear. We suppress the dominant sources of systematic error and discuss modest improvements that may allow this method to equal or even surpass the signal-to-noise achievable with shear. Moreover, some of the dominant sources of systematic error are substantially different from those of shear-based techniques. The second outlines an idea for using the optical Tully-Fisher relation to dramatically improve the signal-to-noise and systematic error control for shear measurements. The expected error properties and potential advantages of such a measurement are proposed, and a pilot study is suggested in order to test the viability of Tully-Fisher weak lensing in the context of the forthcoming generation of large spectroscopic surveys.

  7. Reliability, precision, and measurement in the context of data from ability tests, surveys, and assessments

    NASA Astrophysics Data System (ADS)

    Fisher, W. P., Jr.; Elbaum, B.; Coulter, A.

    2010-07-01

    Reliability coefficients indicate the proportion of total variance attributable to differences among measures separated along a quantitative continuum by a testing, survey, or assessment instrument. Reliability is usually considered to be influenced by both the internal consistency of a data set and the number of items, though textbooks and research papers rarely evaluate the extent to which these factors independently affect the data in question. Probabilistic formulations of the requirements for unidimensional measurement separate consistency from error by modelling individual response processes instead of group-level variation. The utility of this separation is illustrated via analyses of small sets of simulated data, and of subsets of data from a 78-item survey of over 2,500 parents of children with disabilities. Measurement reliability ultimately concerns the structural invariance specified in models requiring sufficient statistics, parameter separation, unidimensionality, and other qualities that historically have made quantification simple, practical, and convenient for end users. The paper concludes with suggestions for a research program aimed at focusing measurement research more on the calibration and wide dissemination of tools applicable to individuals, and less on the statistical study of inter-variable relations in large data sets.
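
    The opening point, that a conventional reliability coefficient mixes internal consistency with test length, can be made concrete with the classical-test-theory formulas below (Cronbach's alpha and the Spearman-Brown projection). This is an illustration of that point only; the probabilistic, individual-level models discussed in the paper separate these influences differently.

        import numpy as np

        def cronbach_alpha(items):
            """items: n_persons x n_items array of scored responses."""
            k = items.shape[1]
            item_var = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1.0 - item_var / total_var)

        def spearman_brown(alpha, length_factor):
            """Projected reliability if the instrument were lengthened by
            length_factor, holding item consistency fixed."""
            return length_factor * alpha / (1.0 + (length_factor - 1.0) * alpha)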

  8. Artificial intelligence techniques: An efficient new approach to challenge the assessment of complex clinical fields such as airway clearance techniques in patients with cystic fibrosis?

    PubMed

    Slavici, Titus; Almajan, Bogdan

    2013-04-01

    To construct an artificial intelligence application to assist untrained physiotherapists in determining the appropriate physiotherapy exercises to improve the quality of life of patients with cystic fibrosis. A total of 42 children (21 boys and 21 girls), age range 6-18 years, participated in a clinical survey between 2001 and 2005. Data collected during the clinical survey were entered into a neural network in order to correlate the health state indicators of the patients and the type of physiotherapy exercise to be followed. Cross-validation of the network was carried out by comparing the health state indicators achieved after following a certain physiotherapy exercise and the health state indicators predicted by the network. The lifestyle and health state indicators of the survey participants improved. The network predicted the health state indicators of the participants with an accuracy of 93%. The results of the cross-validation test were within the error margins of the real-life indicators. Using data on the clinical state of individuals with cystic fibrosis, it is possible to determine the most effective type of physiotherapy exercise for improving overall health state indicators.

  9. Medication knowledge, certainty, and risk of errors in health care: a cross-sectional study

    PubMed Central

    2011-01-01

    Background Medication errors are often involved in reported adverse events. Drug therapy, prescribed by physicians, is mostly carried out by nurses, who are expected to master all aspects of medication. Research has revealed the need for improved knowledge in drug dose calculation, and medication knowledge as a whole is poorly investigated. The purpose of this survey was to study registered nurses' medication knowledge, certainty and estimated risk of errors, and to explore factors associated with good results. Methods Nurses from hospitals and primary health care establishments were invited to carry out a multiple-choice test in pharmacology, drug management and drug dose calculations (score range 0-14). Self-estimated certainty in each answer was recorded, graded from 0 = very uncertain to 3 = very certain. Background characteristics and sense of coping were recorded. Risk of error was estimated by combining knowledge and certainty scores. The results are presented as mean (±SD). Results Two-hundred and three registered nurses participated (including 16 males), aged 42.0 (9.3) years with a working experience of 12.4 (9.2) years. Knowledge scores in pharmacology, drug management and drug dose calculations were 10.3 (1.6), 7.5 (1.6), and 11.2 (2.0), respectively, and certainty scores were 1.8 (0.4), 1.9 (0.5), and 2.0 (0.6), respectively. Fifteen percent of the total answers showed a high risk of error, with 25% in drug management. Independent factors associated with high medication knowledge were working in hospitals (p < 0.001), postgraduate specialization (p = 0.01) and completion of courses in drug management (p < 0.01). Conclusions Medication knowledge was found to be unsatisfactory among practicing nurses, with a significant risk for medication errors. The study revealed a need to improve the nurses' basic knowledge, especially when referring to drug management. PMID:21791106

  10. Evaluation of Preanalytical Quality Indicators by Six Sigma and Pareto's Principle.

    PubMed

    Kulkarni, Sweta; Ramesh, R; Srinivasan, A R; Silvia, C R Wilma Delphine

    2018-01-01

    Preanalytical steps are the major source of error in the clinical laboratory. Analytical errors can be corrected by quality control procedures, but there is a need for stringent quality checks in the preanalytical area, as these processes occur outside the laboratory. The sigma value depicts the performance of the laboratory and its quality measures. Hence, in the present study, Six Sigma and the Pareto principle were applied to preanalytical quality indicators to evaluate clinical biochemistry laboratory performance. This observational study was carried out for a period of 1 year, from November 2015 to November 2016. A total of 144,208 samples and 54,265 test requisition forms were screened for preanalytical errors such as missing patient information or sample collection details on forms and hemolysed, lipemic, inappropriate, or insufficient samples; the total number of errors was calculated and converted into defects per million and the sigma scale. A Pareto chart was drawn using the total number of errors and cumulative percentage. In 75% of test requisition forms the diagnosis was not mentioned, giving a sigma value of 0.9; for other errors such as sample receiving time, stat designation, and type of sample, the sigma values were 2.9, 2.6, and 2.8, respectively. For insufficient sample and improper ratio of blood to anticoagulant the sigma value was 4.3. The Pareto chart shows that, out of the 80% of errors in requisition forms, 20% is contributed by missing information such as the diagnosis. The development of quality indicators and the application of Six Sigma and the Pareto principle are quality measures by which not only the preanalytical phase but the total testing process can be improved.
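
    The conversion from raw error counts to defects per million opportunities and a sigma value can be sketched as follows; the 1.5-sigma shift used here is the conventional short-term adjustment, so exact figures depend on the convention and rounding adopted by the laboratory.

        from scipy.stats import norm

        def sigma_level(defects, opportunities):
            """Defects per million opportunities and the corresponding sigma
            value (with the conventional 1.5-sigma shift)."""
            dpmo = 1e6 * defects / opportunities
            return dpmo, norm.ppf(1.0 - dpmo / 1e6) + 1.5

        # illustration with the figure quoted above: diagnosis missing on 75% of
        # the 54,265 requisition forms screened
        print(sigma_level(0.75 * 54265, 54265))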

  11. The relative importance of noise level and number of events on human reactions to noise: Community survey findings and study methods

    NASA Technical Reports Server (NTRS)

    Fields, J. M.

    1980-01-01

    The data from seven surveys of community response to environmental noise are reanalyzed to assess the relative influence of peak noise levels and the numbers of noise events on human response. The surveys do not agree on the value of the tradeoff between the effects of noise level and numbers of events. The value of the tradeoff cannot be confidently specified in any survey because the tradeoff estimate may have a large standard error of estimate and because the tradeoff estimate may be seriously biased by unknown noise measurement errors. Some evidence suggests a decrease in annoyance with very high numbers of noise events but this evidence is not strong enough to lead to the rejection of the conventionally accepted assumption that annoyance is related to a log transformation of the number of noise events.

  12. Improved characterisation and modelling of measurement errors in electrical resistivity tomography (ERT) surveys

    NASA Astrophysics Data System (ADS)

    Tso, Chak-Hau Michael; Kuras, Oliver; Wilkinson, Paul B.; Uhlemann, Sebastian; Chambers, Jonathan E.; Meldrum, Philip I.; Graham, James; Sherlock, Emma F.; Binley, Andrew

    2017-11-01

    Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe or assume a statistical model of data errors before inversion. Wrongly prescribed errors can lead to over- or under-fitting of data; however, the derivation of models of data errors is often neglected. With the heightening interest in uncertainty estimation within hydrogeophysics, better characterisation and treatment of measurement errors is needed to provide improved image appraisal. Here we focus on the role of measurement errors in electrical resistivity tomography (ERT). We have analysed two time-lapse ERT datasets: one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24 h timeframe; the other is a two-year-long cross-borehole survey at a UK nuclear site with 246 sets of over 50,000 measurements. Our study includes the characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and correlation coefficient analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used, i.e. errors may not be uncorrelated as often assumed. Based on these findings, we develop a new error model that allows grouping based on electrode number in addition to fitting a linear model to transfer resistance. The new model explains the observed measurement errors better and shows superior inversion results and uncertainty estimates in synthetic examples. It is robust, because it groups errors together based on the electrodes used to make the measurements. The new model can be readily applied to the diagonal data weighting matrix widely used in common inversion methods, as well as to the data covariance matrix in a Bayesian inversion framework. We demonstrate its application using extensive ERT monitoring datasets from the two aforementioned sites.
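
    One plausible reading of the error-model construction described above, using normal-reciprocal differences as the error estimate and fitting a linear model to transfer resistance separately for groups of measurements that share an electrode, is sketched below; the published model's exact grouping, weighting, and fitting procedure may differ.

        import numpy as np
        from collections import defaultdict

        def grouped_error_model(abmn, r_normal, r_reciprocal):
            """Fit error = a + b * |R| per electrode group.
            abmn: (n, 4) array of electrode numbers for each measurement;
            r_normal, r_reciprocal: normal and reciprocal transfer resistances."""
            err = np.abs(r_normal - r_reciprocal)
            mag = np.abs(0.5 * (r_normal + r_reciprocal))
            groups = defaultdict(list)
            for i, quad in enumerate(abmn):
                for electrode in quad:          # a measurement joins each of its electrodes' groups
                    groups[int(electrode)].append(i)
            models = {}
            for electrode, idx in groups.items():
                slope, intercept = np.polyfit(mag[idx], err[idx], 1)
                models[electrode] = (intercept, slope)
            return models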

  13. The Pearson-Readhead Survey of Compact Extragalactic Radio Sources from Space. I. The Images

    NASA Astrophysics Data System (ADS)

    Lister, M. L.; Tingay, S. J.; Murphy, D. W.; Piner, B. G.; Jones, D. L.; Preston, R. A.

    2001-06-01

    We present images from a space-VLBI survey using the facilities of the VLBI Space Observatory Programme (VSOP), drawing our sample from the well-studied Pearson-Readhead survey of extragalactic radio sources. Our survey has taken advantage of long space-VLBI baselines and large arrays of ground antennas, such as the Very Long Baseline Array and European VLBI Network, to obtain high-resolution images of 27 active galactic nuclei and to measure the core brightness temperatures of these sources more accurately than is possible from the ground. A detailed analysis of the source properties is given in accompanying papers. We have also performed an extensive series of simulations to investigate the errors in VSOP images caused by the relatively large holes in the (u,v)-plane when sources are observed near the orbit normal direction. We find that while the nominal dynamic range (defined as the ratio of map peak to off-source error) often exceeds 1000:1, the true dynamic range (map peak to on-source error) is only about 30:1 for relatively complex core-jet sources. For sources dominated by a strong point source, this value rises to approximately 100:1. We find the true dynamic range to be a relatively weak function of the difference in position angle (P.A.) between the jet P.A. and u-v coverage major axis P.A. For regions with low signal-to-noise ratios, typically located down the jet away from the core, large errors can occur, causing spurious features in VSOP images that should be interpreted with caution.

  14. Mind the Mode: Differences in Paper vs. Web-Based Survey Modes Among Women With Cancer.

    PubMed

    Hagan, Teresa L; Belcher, Sarah M; Donovan, Heidi S

    2017-09-01

    Researchers administering surveys seek to balance data quality, sources of error, and practical concerns when selecting an administration mode. Rarely are decisions about survey administration based on the background of study participants, although socio-demographic characteristics like age, education, and race may contribute to participants' (non)responses. In this study, we describe differences in paper- and web-based surveys administered in a national cancer survivor study of women with a history of cancer to compare the ability of each survey administrative mode to provide quality, generalizable data. We compared paper- and web-based survey data by socio-demographic characteristics of respondents, missing data rates, scores on primary outcome measure, and administrative costs and time using descriptive statistics, tests of mean group differences, and linear regression. Our findings indicate that more potentially vulnerable patients preferred paper questionnaires and that data quality, responses, and costs significantly varied by mode and participants' demographic information. We provide targeted suggestions for researchers conducting survey research to reduce survey error and increase generalizability of study results to the patient population of interest. Researchers must carefully weigh the pros and cons of survey administration modes to ensure a representative sample and high-quality data. Copyright © 2017 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  15. Collection, Processing and Accuracy of Mobile Terrestrial Lidar Survey Data in the Coastal Environment

    DTIC Science & Technology

    2017-04-01

    ERDC/CHL TR-17-5, Coastal Field Data Collection Program: Collection, Processing, and Accuracy of Mobile Terrestrial Lidar Survey Data in the Coastal Environment. Nicholas J. Spore and Katherine L. Brodie, Field Research Facility, U.S. Army Engineer Research and Development Center. From the abstract (truncated): "... value to a mobile lidar survey may misrepresent some of the spatially variable error throughout the survey, and further work should incorporate full ..."

  17. Influence of measurement error on Maxwell's demon

    NASA Astrophysics Data System (ADS)

    Sørdal, Vegard; Bergli, Joakim; Galperin, Y. M.

    2017-06-01

    In any general cycle of measurement, feedback, and erasure, the measurement will reduce the entropy of the system when information about the state is obtained, while erasure, according to Landauer's principle, is accompanied by a corresponding increase in entropy due to the compression of logical and physical phase space. The total process can in principle be fully reversible. A measurement error reduces the information obtained and the entropy decrease in the system. The erasure still gives the same increase in entropy, so the total process is irreversible. Another consequence of measurement error is that suboptimal feedback is applied, which further increases the entropy production if the protocol is not adapted to the expected error rate. We consider the effect of measurement error on a realistic single-electron box Szilard engine, and we find the optimal protocol for the cycle as a function of the desired power P and error rate ε.
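
    The entropy bookkeeping described above can be illustrated with the information-theoretic bound for a single-bit Szilard engine: with a symmetric measurement error rate, the information gained per cycle is ln 2 minus the binary entropy of the error, so after paying the Landauer cost of erasing one unbiased stored bit the net budget is negative whenever the error is nonzero. This is an idealized Sagawa-Ueda-type bound, not the single-electron-box protocol analysed in the paper.

        import numpy as np

        def binary_entropy(eps):
            eps = np.clip(eps, 1e-12, 1 - 1e-12)
            return -(eps * np.log(eps) + (1 - eps) * np.log(1 - eps))

        def net_work_budget(eps, kT=1.0):
            """Upper bound on extractable work minus the minimum erasure cost,
            per cycle, for a one-bit Szilard engine with error rate eps."""
            info = np.log(2.0) - binary_entropy(eps)   # mutual information gained
            w_extract_max = kT * info                  # best case, error-adapted feedback
            w_erase_min = kT * np.log(2.0)             # erase one unbiased stored bit
            return w_extract_max - w_erase_min         # <= 0; zero only when eps == 0

        for eps in (0.0, 0.05, 0.2):
            print(eps, net_work_budget(eps))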

  18. Characterizing Macro Scale Patterns Of Uncertainty For Improved Operational Flood Forecasting Over The Conterminous United States

    NASA Astrophysics Data System (ADS)

    Vergara, H. J.; Kirstetter, P.; Gourley, J. J.; Flamig, Z.; Hong, Y.

    2015-12-01

    The macro-scale patterns of simulated streamflow errors are studied in order to characterize uncertainty in a hydrologic modeling system forced with the Multi-Radar/Multi-Sensor (MRMS; http://mrms.ou.edu) quantitative precipitation estimates for flood forecasting over the Conterminous United States (CONUS). The hydrologic model is the centerpiece of the Flooded Locations And Simulated Hydrograph (FLASH; http://flash.ou.edu) real-time system and is implemented at 1-km/5-min resolution to generate estimates of streamflow. Data from the CONUS-wide stream gauge network of the United States Geological Survey (USGS) were used as a reference to evaluate discrepancies with the hydrological model predictions. Streamflow errors were studied at the event scale, with particular focus on peak flow magnitude and timing. A total of 2,680 catchments over CONUS and 75,496 events from a 10-year period are used for the simulation diagnostic analysis. Associations between streamflow errors and geophysical factors were explored and modeled. It is found that hydro-climatic factors and radar coverage could explain significant underestimation of peak flow in regions of complex terrain. Furthermore, the statistical modeling of peak flow errors shows that other geophysical factors, such as basin geomorphometry, pedology, and land cover/use, could also provide explanatory information. Results from this research demonstrate the utility of uncertainty characterization in providing guidance to improve model adequacy, parameter estimates, and input quality control. Likewise, the characterization of uncertainty enables probabilistic flood forecasting that can be extended to ungauged locations.

  19. Ozone Trend Detectability

    NASA Technical Reports Server (NTRS)

    Campbell, J. W. (Editor)

    1981-01-01

    The detection of anthropogenic disturbances in the Earth's ozone layer was studied. Two topics were addressed: (1) the level at which a trend in total ozone can be detected by existing data sources; and (2) empirical evidence bearing on the prediction of depletion in total ozone. Error sources are identified. The predictability of climatological series, whether empirical models can be trusted, and how errors in the Dobson total ozone data affect trend detectability are discussed.

  20. Monitoring Fine-Grained Sediment in the Colorado River Ecosystem, Arizona - Control Network and Conventional Survey Techniques

    USGS Publications Warehouse

    Hazel, Joseph E.; Kaplinski, Matt; Parnell, Roderic A.; Kohl, Keith; Schmidt, John C.

    2008-01-01

    In 2002, fine-grained sediment (sand, silt, and clay) monitoring in the Colorado River downstream from Glen Canyon Dam was initiated to survey channel topography at scales previously unobtainable in this canyon setting. This report presents the methods used to establish the high-resolution global positioning system (GPS) control network required for this effort as well as the conventional surveying techniques used in the study. Using simultaneous, dual-frequency GPS vector-based methods, the network points were determined to have positioning accuracies of less than 0.03 meters (m) and ellipsoidal height accuracies of between 0.01 and 0.10 m at a 95-percent degree of confidence. We also assessed network point quality with repeated, electronic (optical) total-station observations at 39 points for a total of 362 measurements; the mean range was 0.022 m horizontally and 0.13 m vertically at a 95-percent confidence interval. These results indicate that the control network is of sufficient spatial and vertical accuracy for the collection of airborne and subaerial remote-sensing data and the integration of these data in a geographic information system on a repeatable basis without anomalies. The monitoring methods were employed in up to 11 discrete reaches over various time intervals. The reaches varied from 1.3 to 6.4 kilometers in length. Field results from surveys in 2000, 2002, and 2004 are described, during which conventional surveying was used to collect more than 3000 points per day. Ground points were used as checkpoints and to supplement areas just below or above the water surface, where remote-sensing data are not collected or are subject to greater error. An accuracy of ±0.05 m was identified as the minimum precision of individual ground points. These results are important for assessing digital elevation model (DEM) quality and identifying detection limits of significant change among surfaces generated from remote-sensing technologies.

  1. ASHP national survey of hospital-based pharmaceutical services--1992.

    PubMed

    Crawford, S Y; Myers, C E

    1993-07-01

    The results of a national mail survey of pharmaceutical services in community hospitals conducted by ASHP during summer 1992 are reported and compared with the results of earlier ASHP surveys. A simple random sample of community hospitals (short-term, nonfederal) was selected from community hospitals registered by the American Hospital Association. Questionnaires were mailed to each director of pharmacy. The adjusted gross sample size was 889. The net response rate was 58% (518 usable replies). The average number of hours of pharmacy operation per week was 105. Complete unit dose drug distribution was offered by 90% of the respondents, and 67% offered complete, comprehensive i.v. admixture programs. A total of 73% of the hospitals had centralized pharmaceutical services. Some 83% provided services to ambulatory-care patients, including clinic patients, emergency room patients, patients being discharged, employees, home care patients, and the general public. A computerized pharmacy system was present in 75% of the departments, and 86% had at least one microcomputer. More than 90% participated in adverse drug reaction, drug-use evaluation, drug therapy monitoring, and medication error management programs. Two thirds of the respondents regularly provided written documentation of pharmacist interventions in patients' medical records, and the same proportion provided patient education or counseling. One third provided drug management of medical emergencies. One fifth provided drug therapy management planning, and 17% provided written histories. Pharmacokinetic consultations were provided by 57% and nutritional support consultations by 37%; three fourths of pharmacist recommendations were adopted by prescribers. A well-controlled formulary system was in place in 51% of the hospitals; therapeutic interchange was practiced by 69%. A total of 99% participated in group purchasing, and 95% used a prime vendor. The 1992 ASHP survey revealed a continuation of the changes in many hospital-based pharmaceutical services documented in earlier surveys (e.g., growth in clinical services, ambulatory-care services, computerization) and identified static areas that merit the attention of pharmacy leaders (e.g., provision of complete, comprehensive i.v. services).

  2. The comparison of cervical repositioning errors according to smartphone addiction grades.

    PubMed

    Lee, Jeonhyeong; Seo, Kyochul

    2014-04-01

    [Purpose] The purpose of this study was to compare cervical repositioning errors according to smartphone addiction grades of adults in their 20s. [Subjects and Methods] A survey of smartphone addiction was conducted of 200 adults. Based on the survey results, 30 subjects were chosen to participate in this study, and they were divided into three groups of 10; a Normal Group, a Moderate Addiction Group, and a Severe Addiction Group. After attaching a C-ROM, we measured the cervical repositioning errors of flexion, extension, right lateral flexion and left lateral flexion. [Results] Significant differences in the cervical repositioning errors of flexion, extension, and right and left lateral flexion were found among the Normal Group, Moderate Addiction Group, and Severe Addiction Group. In particular, the Severe Addiction Group showed the largest errors. [Conclusion] The result indicates that as smartphone addiction becomes more severe, a person is more likely to show impaired proprioception, as well as impaired ability to recognize the right posture. Thus, musculoskeletal problems due to smartphone addiction should be resolved through social cognition and intervention, and physical therapeutic education and intervention to educate people about correct postures.

  3. Debiasing affective forecasting errors with targeted, but not representative, experience narratives.

    PubMed

    Shaffer, Victoria A; Focella, Elizabeth S; Scherer, Laura D; Zikmund-Fisher, Brian J

    2016-10-01

    To determine whether representative experience narratives (describing a range of possible experiences) or targeted experience narratives (targeting the direction of forecasting bias) can reduce affective forecasting errors, or errors in predictions of experiences. In Study 1, participants (N=366) were surveyed about their experiences with 10 common medical events. Those who had never experienced the event provided ratings of predicted discomfort and those who had experienced the event provided ratings of actual discomfort. Participants making predictions were randomly assigned to either the representative experience narrative condition or the control condition in which they made predictions without reading narratives. In Study 2, participants (N=196) were again surveyed about their experiences with these 10 medical events, but participants making predictions were randomly assigned to either the targeted experience narrative condition or the control condition. Affective forecasting errors were observed in both studies. These forecasting errors were reduced with the use of targeted experience narratives (Study 2) but not representative experience narratives (Study 1). Targeted, but not representative, narratives improved the accuracy of predicted discomfort. Public collections of patient experiences should favor stories that target affective forecasting biases over stories representing the range of possible experiences. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  4. The associations of insomnia with costly workplace accidents and errors: results from the America Insomnia Survey.

    PubMed

    Shahly, Victoria; Berglund, Patricia A; Coulouvrat, Catherine; Fitzgerald, Timothy; Hajak, Goeran; Roth, Thomas; Shillington, Alicia C; Stephenson, Judith J; Walsh, James K; Kessler, Ronald C

    2012-10-01

    Insomnia is a common and seriously impairing condition that often goes unrecognized. To examine associations of broadly defined insomnia (ie, meeting inclusion criteria for a diagnosis from International Statistical Classification of Diseases, 10th Revision, DSM-IV, or Research Diagnostic Criteria/International Classification of Sleep Disorders, Second Edition) with costly workplace accidents and errors after excluding other chronic conditions among workers in the America Insomnia Survey (AIS). A national cross-sectional telephone survey (65.0% cooperation rate) of commercially insured health plan members selected from the more than 34 million in the HealthCore Integrated Research Database. Four thousand nine hundred ninety-one employed AIS respondents. Costly workplace accidents or errors in the 12 months before the AIS interview were assessed with one question about workplace accidents "that either caused damage or work disruption with a value of $500 or more" and another about other mistakes "that cost your company $500 or more." Current insomnia with duration of at least 12 months was assessed with the Brief Insomnia Questionnaire, a validated (area under the receiver operating characteristic curve, 0.86 compared with diagnoses based on blinded clinical reappraisal interviews), fully structured diagnostic interview. Eighteen other chronic conditions were assessed with medical/pharmacy claims records and validated self-report scales. Insomnia had a significant odds ratio (1.4) with workplace accidents and/or errors after controlling for other chronic conditions. The odds ratio did not vary significantly with respondent age, sex, educational level, or comorbidity. The average costs of insomnia-related accidents and errors ($32 062) were significantly higher than those of other accidents and errors ($21 914). Simulations estimated that insomnia was associated with 7.2% of all costly workplace accidents and errors and 23.7% of all the costs of these incidents. These proportions are higher than for any other chronic condition, with annualized US population projections of 274 000 costly insomnia-related workplace accidents and errors having a combined value of US $31.1 billion. Effectiveness trials are needed to determine whether expanded screening, outreach, and treatment of workers with insomnia would yield a positive return on investment for employers.

  5. Extragalactic counterparts to Einstein slew survey sources

    NASA Technical Reports Server (NTRS)

    Schachter, Jonathan F.; Elvis, Martin; Plummer, David; Remillard, Ron

    1992-01-01

    The Einstein slew survey consists of 819 bright X-ray sources, of which 636 (or 78 percent) are identified with counterparts in standard catalogs. The importance of bright X-ray surveys is stressed, and the slew survey is compared to the Rosat all sky survey. Statistical techniques for minimizing confusion in arcminute error circles in digitized data are discussed. The 238 slew survey active galactic nuclei, clusters, and BL Lacertae objects identified to date and their implications for logN-logS and source evolution studies are described.

  6. Uncertainty in the visibility mask of a survey and its effects on the clustering of biased tracers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colavincenzo, M.; Monaco, P.; Borgani, S.

    The forecasted accuracy of upcoming surveys of large-scale structure cannot be achieved without a proper quantification of the error induced by foreground removal (or other systematics such as a zero-point photometry offset). Because these errors are highly correlated on the sky, their influence is expected to be especially important at very large scales, at and beyond the first Baryonic Acoustic Oscillation (BAO). In this work we quantify how the uncertainty in the visibility mask of a survey, which gives the survey depth in a specific sky area, influences the measured power spectrum of a sample of tracers of the density field and its covariance matrix. We start from a very large set of 10,000 catalogs of dark matter (DM) halos in periodic cosmological boxes, produced with the PINOCCHIO approximate method. To make an analytic approach feasible, we assume luminosity-independent halo bias and an idealized geometry for the visibility mask, which is constant in square tiles of physical length l; this should be interpreted as the projection, at the observation redshift, of the angular correlation scale of the foreground residuals. We find that the power spectrum of these biased tracers can be expressed as the sum of a cosmological term, a mask term, and a term involving their convolution. The mask and convolution terms scale as P ∝ l²σ_A², where σ_A² is the variance of the uncertainty on the visibility mask. With l = 30-100 Mpc/h and σ_A = 5-20%, the mask term can be significant at k ∼ 0.01-0.1 h/Mpc, and the convolution term can amount to ∼1-10% of the total. The influence of mask uncertainty on the power spectrum covariance is more complicated: the coupling of the convolution term with the other two gives rise to several mixed terms, which we quantify by difference using the mock catalogs. These are found to be of the same order as the mask covariance and to introduce non-diagonal terms at large scales. As a consequence, the power spectrum covariance matrix cannot be expressed as the sum of a cosmological term and a mask term. More realistic settings (realistic foregrounds, luminosity-dependent bias) make the analytical approach unfeasible, so the problem requires, on the one hand, the use of extended sets of mock catalogs and, on the other hand, detailed knowledge of the correlations among errors in the visibility masks. Our results lay down the theoretical basis for quantifying the impact that uncertainties in the mask calibration have on the derivation of cosmological constraints from large spectroscopic surveys.
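
    As a rough numerical illustration of the quoted scaling (a sketch only, in Python; the proportionality constant and the reference power value below are placeholders, not taken from the paper), the relative size of the mask term P_mask ∝ l²σ_A² can be tabulated for the quoted ranges of l and σ_A:

        import numpy as np

        # Toy illustration: the mask (and convolution) power scales as l^2 * sigma_A^2.
        # The constant and the reference "cosmological" power are arbitrary placeholders.
        def mask_term_amplitude(l_mpc_h, sigma_a, const=1.0):
            """Mask-induced power term, up to an unknown normalization constant."""
            return const * (l_mpc_h ** 2) * (sigma_a ** 2)

        p_cosmo = 2.0e4  # hypothetical cosmological power near k ~ 0.05 h/Mpc, in (Mpc/h)^3

        for tile in (30.0, 60.0, 100.0):         # mask tile size l in Mpc/h
            for sigma_a in (0.05, 0.10, 0.20):   # 5-20% mask uncertainty
                ratio = mask_term_amplitude(tile, sigma_a) / p_cosmo
                print(f"l={tile:5.0f} Mpc/h  sigma_A={sigma_a:.2f}  "
                      f"P_mask/P_cosmo ~ {ratio:.2e} (arbitrary normalization)")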

  7. From Here to There: Lessons from an Integrative Patient Safety Project in Rural Health Care Settings

    DTIC Science & Technology

    2005-05-01

    errors and patient falls. The medication errors generally involved one of three issues: incorrect dose, time, or port. Although most of the health...statistics about trends; and the summary of events related to patient safety and medical errors.12 The interplay among factors These three domains...the medical staff. We explored these issues further when administering a staff-wide Patient Safety Survey. Responses mirrored the findings that

  8. An error covariance model for sea surface topography and velocity derived from TOPEX/POSEIDON altimetry

    NASA Technical Reports Server (NTRS)

    Tsaoussi, Lucia S.; Koblinsky, Chester J.

    1994-01-01

    In order to facilitate the use of satellite-derived sea surface topography and velocity in oceanographic models, methodology is presented for deriving the total error covariance and its geographic distribution from TOPEX/POSEIDON measurements. The model is formulated using a parametric model fit to the altimeter range observations. The topography and velocity are modeled with spherical harmonic expansions whose coefficients are found through optimal adjustment to the altimeter range residuals using Bayesian statistics. All other parameters, including the orbit, geoid, surface models, and range corrections, are provided as unadjusted parameters. The maximum likelihood estimates and errors are derived from the probability density function of the altimeter range residuals conditioned with a priori information. Estimates of model errors for the unadjusted parameters are obtained from the TOPEX/POSEIDON postlaunch verification results and the error covariances for the orbit and the geoid, except for the ocean tides. The error in the ocean tides is modeled, first, as the difference between two global tide models and, second, as the correction to the present tide model, the correction being derived from the TOPEX/POSEIDON data. A formal error covariance propagation scheme is used to derive the total error. Our global total error estimate for the TOPEX/POSEIDON topography relative to the geoid for one 10-day period is found to be 11 cm RMS. When the error in the geoid is removed, thereby providing an estimate of the time-dependent error, the uncertainty in the topography is 3.5 cm root mean square (RMS). This level of accuracy is consistent with direct comparisons of TOPEX/POSEIDON altimeter heights with tide gauge measurements at 28 stations. In addition, the error correlation length scales are derived globally in both east-west and north-south directions, which should prove useful for data assimilation. The largest error correlation length scales are found in the tropics. Errors in the velocity field are smallest in midlatitude regions. For both variables, the largest errors are caused by uncertainty in the geoid. More accurate representations of the geoid await a dedicated geopotential satellite mission. Substantial improvements in the accuracy of ocean tide models are expected in the very near future from research with TOPEX/POSEIDON data.
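
    The formal error covariance propagation scheme mentioned above reduces, in its generic linear form, to the familiar rule Σ_y = J Σ_x Jᵀ. A minimal Python sketch with a hypothetical Jacobian and hypothetical input variances (not TOPEX/POSEIDON values):

        import numpy as np

        def propagate_covariance(jacobian, sigma_x):
            """Linear error propagation: returns J * Sigma_x * J^T."""
            return jacobian @ sigma_x @ jacobian.T

        # Hypothetical example: 3 topography quantities depending on 4 unadjusted
        # parameters (e.g., orbit, geoid, tide, and range-correction contributions).
        J = np.array([[1.0, 0.2, 0.0, 0.1],
                      [0.0, 1.0, 0.3, 0.0],
                      [0.1, 0.0, 1.0, 0.2]])
        Sigma_x = np.diag([0.02, 0.05, 0.03, 0.01]) ** 2   # input variances (m^2)

        Sigma_y = propagate_covariance(J, Sigma_x)
        print("propagated standard errors (m):", np.sqrt(np.diag(Sigma_y)))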

  9. Fully Convolutional Networks for Ground Classification from LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Rizaldy, A.; Persello, C.; Gevaert, C. M.; Oude Elberink, S. J.

    2018-05-01

    Deep Learning has been massively used for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been studied recently. However, point clouds need to be converted into an image in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques. On the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in a total error of 5.22%, a type I error of 4.10%, and a type II error of 15.07%. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error (while the type II error is slightly higher). The method was also tested on a very high point density LIDAR point cloud, resulting in a total error of 4.02%, a type I error of 2.15%, and a type II error of 6.14%.
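
    The total, type I, and type II errors reported above can be computed directly from per-point labels. A minimal Python sketch, assuming the usual ISPRS filter-test convention (type I: ground points misclassified as non-ground; type II: non-ground points misclassified as ground); the label arrays are synthetic:

        import numpy as np

        def filter_test_errors(y_true, y_pred):
            """Total, type I, and type II error rates; 1 = ground, 0 = non-ground."""
            y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
            ground, nonground = y_true == 1, y_true == 0
            type1 = np.mean(y_pred[ground] == 0)      # ground points rejected
            type2 = np.mean(y_pred[nonground] == 1)   # non-ground accepted as ground
            total = np.mean(y_pred != y_true)
            return total, type1, type2

        y_true = np.array([1, 1, 1, 1, 0, 0, 0, 1, 0, 1])   # synthetic reference labels
        y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 1, 0, 1])   # synthetic classifier output
        total, t1, t2 = filter_test_errors(y_true, y_pred)
        print(f"total error {total:.2%}, type I {t1:.2%}, type II {t2:.2%}")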

  10. Determinants of Wealth Fluctuation: Changes in Hard-To-Measure Economic Variables in a Panel Study

    PubMed Central

    Pfeffer, Fabian T.; Griffin, Jamie

    2017-01-01

    Measuring fluctuation in families’ economic conditions is the raison d’être of household panel studies. Accordingly, a particularly challenging critique is that extreme fluctuation in measured economic characteristics might indicate compounding measurement error rather than actual changes in families’ economic wellbeing. In this article, we address this claim by moving beyond the assumption that particularly large fluctuation in economic conditions might be too large to be realistic. Instead, we examine predictors of large fluctuation, capturing sources related to actual socio-economic changes as well as potential sources of measurement error. Using the Panel Study of Income Dynamics, we study between-wave changes in a dimension of economic wellbeing that is especially hard to measure, namely, net worth as an indicator of total family wealth. Our results demonstrate that even very large between-wave changes in net worth can be attributed to actual socio-economic and demographic processes. We do, however, also identify a potential source of measurement error that contributes to large wealth fluctuation, namely, the treatment of incomplete information, presenting a pervasive challenge for any longitudinal survey that includes questions on economic assets. Our results point to ways for improving wealth variables both in the data collection process (e.g., by measuring active savings) and in data processing (e.g., by improving imputation algorithms). PMID:28316752

  11. Seamless geoids across coastal zones - a comparison of satellite-derived gravity to airborne gravity across the seven continents

    NASA Astrophysics Data System (ADS)

    Forsberg, R.; Olesen, A. V.; Barnes, D.; Ingalls, S. E.; Minter, C. F.; Presicci, M. R.

    2017-12-01

    An accurate coastal geoid model is important for determination of near-shore ocean dynamic topography and currents, as well as for land GPS surveys and global geopotential models. Since many coastal regions across the globe are regions of intense development and coastal protection projects, precise geoid models at cm-level accuracy are essential. The only way to secure cm-geoid accuracies across coastal regions is to acquire more marine gravity data; here airborne gravity is the obvious method of choice due to its uniform accuracy and its ability to provide a seamless geoid accuracy across the coastline. Current practice for gravity and geoid models, such as EGM2008 and many national projects, is to complement land gravity data with satellite radar altimetry at sea, a procedure which can give large errors in regions close to the coast. To quantify the coastal errors in satellite gravity, we compare results from a large set of recent airborne gravity surveys, acquired across a range of coastal zones globally from polar to equatorial regions, and quantify the errors as a function of distance from the coastline for a number of different global altimetry gravity solutions. We find that the accuracy of satellite altimetry solutions depends very much on the availability of gravity data in the near-coastal land regions of the underlying reference fields (e.g., EGM2008), with satellite gravity accuracy in the near-shore zone ranging anywhere from 5 to 20 mGal r.m.s., with occasional large outliers; we also show how these errors may typically propagate into coastal geoid errors of 5-10 cm r.m.s. or more. This highlights the need for airborne (land) gravity surveys to be extended at least 20-30 km offshore, especially for regions of insufficient marine gravity coverage; we give examples of a few such recent surveys and the associated marine geoid impacts.
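
    The comparison described above (errors as a function of distance from the coastline) amounts to binning the differences between altimetric and airborne gravity by distance to shore and reporting the r.m.s. per bin. A minimal Python sketch with synthetic placeholder values:

        import numpy as np

        rng = np.random.default_rng(2)
        dist_km = rng.uniform(0, 50, 2000)        # synthetic distances from the coastline
        # toy error model: larger altimetry-minus-airborne differences close to shore
        diff_mgal = rng.normal(0.0, 2.0 + 15.0 * np.exp(-dist_km / 10.0))

        edges = np.arange(0, 55, 5)
        for lo, hi in zip(edges[:-1], edges[1:]):
            sel = (dist_km >= lo) & (dist_km < hi)
            rms = np.sqrt(np.mean(diff_mgal[sel] ** 2))
            print(f"{lo:2.0f}-{hi:2.0f} km offshore: RMS difference {rms:5.1f} mGal")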

  12. Team safety and innovation by learning from errors in long-term care settings.

    PubMed

    Buljac-Samardžić, Martina; van Woerkom, Marianne; Paauwe, Jaap

    2012-01-01

    Team safety and team innovation are underexplored in the context of long-term care. Understanding the issues requires attention to how teams cope with error. Team managers could have an important role in developing a team's error orientation and managing team membership instabilities. The aim of this study was to examine the impact of team member stability, team coaching, and a team's error orientation on team safety and innovation. A cross-sectional survey method was employed within 2 long-term care organizations. Team members and team managers received a survey that measured safety and innovation. Team members assessed member stability, team coaching, and team error orientation (i.e., problem-solving and blaming approach). The final sample included 933 respondents from 152 teams. Stable teams and teams with managers who take on the role of coach are more likely to adopt a problem-solving approach and less likely to adopt a blaming approach toward errors. Both error orientations are related to team member ratings of safety and innovation, but only the blaming approach is (negatively) related to manager ratings of innovation. Differences between members' and managers' ratings of safety are greater in teams with relatively high scores for the blaming approach and relatively low scores for the problem-solving approach. Team coaching was found to be positively related to innovation, especially in unstable teams. Long-term care organizations that wish to enhance team safety and innovation should encourage a problem-solving approach and discourage a blaming approach. Team managers can play a crucial role in this by coaching team members to see errors as sources of learning and improvement and ensuring that individuals will not be blamed for errors.

  13. Psychological safety and error reporting within Veterans Health Administration hospitals.

    PubMed

    Derickson, Ryan; Fishman, Jonathan; Osatuke, Katerine; Teclaw, Robert; Ramsel, Dee

    2015-03-01

    In psychologically safe workplaces, employees feel comfortable taking interpersonal risks, such as pointing out errors. Previous research suggested that psychologically safe climate optimizes organizational outcomes. We evaluated psychological safety levels in Veterans Health Administration (VHA) hospitals and assessed their relationship to employee willingness of reporting medical errors. We conducted an ANOVA on psychological safety scores from a VHA employees census survey (n = 185,879), assessing variability of means across racial and supervisory levels. We examined organizational climate assessment interviews (n = 374) evaluating how many employees asserted willingness to report errors (or not) and their stated reasons. Finally, based on survey data, we identified 2 (psychologically safe versus unsafe) hospitals and compared their number of employees who would be willing/unwilling to report an error. Psychological safety increased with supervisory level (P < 0.001, η = 0.03) and was not meaningfully related to race (P < 0.001, η = 0.003). Twelve percent of employees would not report an error; retaliation fear was the most commonly mentioned deterrent. Furthermore, employees at the psychologically unsafe hospital (71% would report, 13% would not) were less willing to report an error than at the psychologically safe hospital (91% would, 0% would not). A substantial minority would not report an error and were willing to admit so in a private interview setting. Their stated reasons as well as higher psychological safety means for supervisory employees both suggest power as an important determinant. Intentions to report were associated with psychological safety, strongly suggesting this climate aspect as instrumental to improving patient safety and reducing costs.

  14. The theory precision analyse of RFM localization of satellite remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Jianqing; Xv, Biao

    2009-11-01

    The traditional method of assessing the precision of the Rational Function Model (RFM) makes use of a large number of check points, computing the mean square error by comparing calculated coordinates with known coordinates. This approach is rooted in probability theory: the mean square error is estimated statistically from a large number of samples, and the estimate can be regarded as approaching the true value when the sample is large enough. This paper instead approaches the problem from the perspective of survey adjustment, taking the law of propagation of error as its theoretical basis, and calculates the theoretical precision of RFM localization. SPOT5 three-array imagery is then used as experimental data, and the results of the traditional method and the method described in this paper are compared; the comparison confirms the feasibility of the traditional method and answers the question of its theoretical precision from the standpoint of survey adjustment.

  15. Integrating Safety in the Aviation System: Interdepartmental Training for Pilots and Maintenance Technicians

    NASA Technical Reports Server (NTRS)

    Mattson, Marifran; Petrin, Donald A.; Young, John P.

    2001-01-01

    The study of human factors has had a decisive impact on the aviation industry. However, the entire aviation system often is not considered in researching, training, and evaluating human factors issues especially with regard to safety. In both conceptual and practical terms, we argue for the proactive management of human error from both an individual and organizational systems perspective. The results of a multidisciplinary research project incorporating survey data from professional pilots and maintenance technicians and an exploratory study integrating students from relevant disciplines are reported. Survey findings suggest that latent safety errors may occur during the maintenance discrepancy reporting process because pilots and maintenance technicians do not effectively interact with one another. The importance of interdepartmental or cross-disciplinary training for decreasing these errors and increasing safety is discussed as a primary implication.

  16. LAMOST DR1: Stellar Parameters and Chemical Abundances with SP_Ace

    NASA Astrophysics Data System (ADS)

    Boeche, C.; Smith, M. C.; Grebel, E. K.; Zhong, J.; Hou, J. L.; Chen, L.; Stello, D.

    2018-04-01

    We present a new analysis of the LAMOST DR1 survey spectral database performed with the code SP_Ace, which provides the derived stellar parameters Teff, log g, [Fe/H], and [α/H] for 1,097,231 stellar objects. We tested the reliability of our results by comparing them to reference results from high spectral resolution surveys. The expected errors can be summarized as ∼120 K in Teff, ∼0.2 in log g, ∼0.15 dex in [Fe/H], and ∼0.1 dex in [α/Fe] for spectra with S/N > 40, with some differences between dwarf and giant stars. SP_Ace provides error estimations consistent with the discrepancies observed between derived and reference parameters. Some systematic errors are identified and discussed. The resulting catalog is publicly available at the LAMOST and CDS websites.

  17. Mapping the gravity field in coastal areas: feasibility and interest of a new airborne planar gradiometer concept

    NASA Astrophysics Data System (ADS)

    Douch, Karim; Panet, Isabelle; Foulon, Bernard; Christophe, Bruno; Pajot-Métivier, Gwendoline; Diament, Michel

    2014-05-01

    Satellite missions such as CHAMP, GRACE, and GOCE have led to an unprecedented improvement of global gravity field models during the past decade. However, for many applications these global models are not sufficiently accurate when dealing with wavelengths shorter than 100 km. This is all the more true in areas where gravity data are scarce and uneven, as for instance in the poorly covered land-sea transition area. We suggest here, in line with space gravity gradiometry, airborne gravity gradiometry as a convenient way to amplify the sensitivity to short wavelengths and to cover coastal regions homogeneously. Moreover, the directionality of the gravity gradients gives new information on the geometry of the gravity field and therefore of the causative bodies. In this respect, we analyze the performance of a new airborne electrostatic acceleration gradiometer, GREMLIT, which, together with ancillary measurements, permits determination of the horizontal gradients of the horizontal components of the gravitational field in the instrumental frame. GREMLIT is composed of a compact assembly of 4 planar electrostatic accelerometers inheriting from technologies developed by ONERA for space accelerometers. After an overview of the functionals of the gravity field that are of interest for coastal oceanography, passive navigation, and hydrocarbon exploration, we present the corresponding required precision and resolution. We then investigate the influence of the different parameters of the survey, such as altitude or cross-track distance, on the resolution and precision of the final measurements. To do so, we design numerical simulations of an airborne survey performed with GREMLIT and compute the total error budget on the gravity gradients. Based on this error analysis, we infer by error propagation the uncertainty on the different functionals of the gravity potential used for each application. This finally enables us to conclude on the requirements for a high resolution mapping of the gravity field in coastal areas.

  18. Evaluating the accuracy of low cost UAV generated topography and its effectiveness for geomorphic change detection

    NASA Astrophysics Data System (ADS)

    Cook, Kristen

    2015-04-01

    With the recent explosion in the use and availability of unmanned aerial vehicle platforms and the development of easy-to-use structure from motion (SfM) software, UAV-based photogrammetry is increasingly being adopted to produce high resolution topography for the study of surface processes. UAV systems can vary substantially in price and complexity, but the tradeoffs between these and the quality of the resulting data are not well constrained. We look at one end of this spectrum and evaluate the effectiveness of a simple low cost UAV setup for obtaining high resolution topography in a challenging field setting. Our study site is the Daan River gorge in western Taiwan, a rapidly eroding bedrock gorge that we have monitored with terrestrial Lidar since 2009. The site presents challenges for the generation and analysis of high resolution topography, including vertical gorge walls, vegetation, wide variation in surface roughness, and a complicated 3D morphology. In order to evaluate the accuracy of the UAV-derived topography, we compare it with terrestrial Lidar data collected during the same survey period. Our UAV setup combines a DJI Phantom 2 quadcopter with a 16 megapixel Canon Powershot camera for a total platform cost of less than 850. The quadcopter is flown manually, and the camera is programmed to take a photograph every 4 seconds, yielding 200-250 pictures per flight. We measured ground control points and targets for both the Lidar scans and the aerial surveys using a Leica RTK GPS with 1-2 cm accuracy. UAV-derived point clouds were obtained using Agisoft Photoscan software. We conducted both Lidar and UAV surveys before and after the 2014 typhoon season, allowing us to evaluate the reliability of the UAV survey for detecting geomorphic changes in the range of one to several meters. The accuracy of the SfM point clouds depends strongly on the characteristics of the surface being considered, with vegetation and small scale texture causing inaccuracies. However, we find that this simple UAV setup can yield point clouds with 78% of points within 20 cm and 60% within 10 cm of the Lidar point clouds, with the higher errors dominated by vegetation effects. Well-distributed and accurately located ground control points are critical, but we achieve good accuracy even with relatively few ground control points (25) over a 150,000 sq m area. The large number of photographs taken during each flight also allows us to explore the reproducibility of the UAV-derived topography by generating point clouds from different subsets of photographs taken of the same area during a single survey. These results show the same pattern of higher errors due to vegetation, but bedrock surfaces generally have errors of less than 4 cm. These results suggest that even very basic UAV surveys can yield data suitable for measuring geomorphic change at the scale of a channel reach.
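
    The point-cloud accuracy figures quoted above (the fraction of SfM points within 10 cm and 20 cm of the Lidar cloud) can in principle be reproduced with a nearest-neighbour search. A minimal Python sketch using synthetic point clouds and a k-d tree; the specific tools and noise levels are assumptions, not the authors' workflow:

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(1)
        lidar = rng.uniform(0, 10, size=(5000, 3))   # synthetic reference (Lidar) cloud, metres
        # synthetic SfM cloud: Lidar points perturbed by roughly 8 cm of noise
        sfm = lidar[rng.choice(len(lidar), 2000)] + rng.normal(0, 0.08, size=(2000, 3))

        dist, _ = cKDTree(lidar).query(sfm, k=1)     # distance to the nearest Lidar point
        print(f"within 10 cm: {(dist <= 0.10).mean():.0%}")
        print(f"within 20 cm: {(dist <= 0.20).mean():.0%}")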

  19. The Limit of Inundation of the September 29, 2009, Tsunami on Tutuila, American Samoa

    USGS Publications Warehouse

    Jaffe, Bruce E.; Gelfenbaum, Guy; Buckley, Mark L.; Watt, Steve; Apotsos, Alex; Stevens, Andrew W.; Richmond, Bruce M.

    2010-01-01

    U.S. Geological Survey scientists investigated the coastal impacts of the September 29, 2009, South Pacific tsunami in Tutuila, American Samoa in October and November 2009, including mapping the alongshore variation in the limit of inundation. Knowing the inundation limit is useful for planning safer coastal development and evacuation routes for future tsunamis and for improving models of tsunami hazards. This report presents field data documenting the limit of inundation at 18 sites around Tutuila collected in the weeks following the tsunami using Differential GPS (DGPS). In total, 15,703 points along inundation lines were mapped. Estimates of DGPS error and uncertainty in interpretation of the inundation line are provided as electronic files that accompany this report.

  20. An algorithm to assess methodological quality of nutrition and mortality cross-sectional surveys: development and application to surveys conducted in Darfur, Sudan.

    PubMed

    Prudhon, Claudine; de Radiguès, Xavier; Dale, Nancy; Checchi, Francesco

    2011-11-09

    Nutrition and mortality surveys are the main tools whereby evidence on the health status of populations affected by disasters and armed conflict is quantified and monitored over time. Several reviews have consistently revealed a lack of rigor in many surveys. We describe an algorithm for analyzing nutritional and mortality survey reports to identify a comprehensive range of errors that may result in sampling, response, or measurement biases and score quality. We apply the algorithm to surveys conducted in Darfur, Sudan. We developed an algorithm based on internationally agreed upon methods and best practices. Penalties are attributed for a list of errors, and an overall score is built from the summation of penalties accrued by the survey as a whole. To test the algorithm reproducibility, it was independently applied by three raters on 30 randomly selected survey reports. The algorithm was further applied to more than 100 surveys conducted in Darfur, Sudan. The Intra Class Correlation coefficient was 0.79 for mortality surveys and 0.78 for nutrition surveys. The overall median quality score and range of about 100 surveys conducted in Darfur were 0.60 (0.12-0.93) and 0.675 (0.23-0.86) for mortality and nutrition surveys, respectively. They varied between the organizations conducting the surveys, with no major trend over time. Our study suggests that it is possible to systematically assess quality of surveys and reveals considerable problems with the quality of nutritional and particularly mortality surveys conducted in the Darfur crisis.
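
    The scoring idea (penalties attributed for a list of errors and summed into an overall score) can be sketched in a few lines of Python. The error items and weights below are illustrative assumptions, not the published algorithm:

        # Minimal sketch of a penalty-summation quality score on a 0-1 scale.
        PENALTIES = {
            "sampling_frame_not_described": 2,
            "cluster_selection_not_random": 3,
            "recall_period_undefined": 2,
            "measurement_not_standardized": 3,
            "analysis_ignores_design_effect": 2,
        }
        MAX_PENALTY = sum(PENALTIES.values())

        def quality_score(detected_errors):
            """1.0 = no penalties accrued, 0.0 = every listed penalty accrued."""
            accrued = sum(PENALTIES[e] for e in detected_errors)
            return 1.0 - accrued / MAX_PENALTY

        print(quality_score(["cluster_selection_not_random", "recall_period_undefined"]))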

  1. An algorithm to assess methodological quality of nutrition and mortality cross-sectional surveys: development and application to surveys conducted in Darfur, Sudan

    PubMed Central

    2011-01-01

    Background Nutrition and mortality surveys are the main tools whereby evidence on the health status of populations affected by disasters and armed conflict is quantified and monitored over time. Several reviews have consistently revealed a lack of rigor in many surveys. We describe an algorithm for analyzing nutritional and mortality survey reports to identify a comprehensive range of errors that may result in sampling, response, or measurement biases and score quality. We apply the algorithm to surveys conducted in Darfur, Sudan. Methods We developed an algorithm based on internationally agreed upon methods and best practices. Penalties are attributed for a list of errors, and an overall score is built from the summation of penalties accrued by the survey as a whole. To test the algorithm reproducibility, it was independently applied by three raters on 30 randomly selected survey reports. The algorithm was further applied to more than 100 surveys conducted in Darfur, Sudan. Results The Intra Class Correlation coefficient was 0.79 for mortality surveys and 0.78 for nutrition surveys. The overall median quality score and range of about 100 surveys conducted in Darfur were 0.60 (0.12-0.93) and 0.675 (0.23-0.86) for mortality and nutrition surveys, respectively. They varied between the organizations conducting the surveys, with no major trend over time. Conclusion Our study suggests that it is possible to systematically assess quality of surveys and reveals considerable problems with the quality of nutritional and particularly mortality surveys conducted in the Darfur crisis. PMID:22071133

  2. The Grism Lens-Amplified Survey from Space (GLASS). VI. Comparing the Mass and Light in MACS J0416.1-2403 Using Frontier Field Imaging and GLASS Spectroscopy

    NASA Astrophysics Data System (ADS)

    Hoag, A.; Huang, K.-H.; Treu, T.; Bradač, M.; Schmidt, K. B.; Wang, X.; Brammer, G. B.; Broussard, A.; Amorin, R.; Castellano, M.; Fontana, A.; Merlin, E.; Schrabback, T.; Trenti, M.; Vulcani, B.

    2016-11-01

    We present a model using both strong and weak gravitational lensing of the galaxy cluster MACS J0416.1-2403, constrained using spectroscopy from the Grism Lens-Amplified Survey from Space (GLASS) and Hubble Frontier Fields (HFF) imaging data. We search for emission lines in known multiply imaged sources in the GLASS spectra, obtaining secure spectroscopic redshifts of 30 multiple images belonging to 15 distinct source galaxies. The GLASS spectra provide the first spectroscopic measurements for five of the source galaxies. The weak lensing signal is acquired from 884 galaxies in the F606W HFF image. By combining the weak lensing constraints with 15 multiple image systems with spectroscopic redshifts and nine multiple image systems with photometric redshifts, we reconstruct the gravitational potential of the cluster on an adaptive grid. The resulting map of total mass density is compared with a map of stellar mass density obtained from the deep Spitzer Frontier Fields imaging data to study the relative distribution of stellar and total mass in the cluster. We find that the projected stellar mass to total mass ratio, f⋆, varies considerably with the stellar surface mass density. The mean projected stellar mass to total mass ratio is ⟨f⋆⟩ = 0.009 ± 0.003 (stat.), but with a systematic error as large as 0.004-0.005, dominated by the choice of the initial mass function. We find agreement with several recent measurements of f⋆ in massive cluster environments. The lensing maps of convergence, shear, and magnification are made available to the broader community in the standard HFF format.

  3. Error identification in a high-volume clinical chemistry laboratory: Five-year experience.

    PubMed

    Jafri, Lena; Khan, Aysha Habib; Ghani, Farooq; Shakeel, Shahid; Raheem, Ahmed; Siddiqui, Imran

    2015-07-01

    Quality indicators for assessing the performance of a laboratory require a systematic and continuous approach to collecting and analyzing data. The aim of this study was to determine the frequency of errors utilizing the quality indicators in a clinical chemistry laboratory and to convert errors to the Sigma scale. Five-year quality indicator data of a clinical chemistry laboratory were evaluated to describe the frequency of errors. An 'error' was defined as a defect during the entire testing process, from the time the requisition was raised and phlebotomy was performed until result dispatch. An indicator with a Sigma value of 4 was considered good, but a process for which the Sigma value was 5 (i.e., 99.977% error-free) was considered well controlled. In the five-year period, a total of 6,792,020 specimens were received in the laboratory. Among a total of 17,631,834 analyses, 15.5% were from within the hospital. The total error rate was 0.45%, and across all the quality indicators used in this study the average Sigma level was 5.2. Three indicators - visible hemolysis, failure of proficiency testing, and delay in stat tests - were below 5 on the Sigma scale and highlight the need to rigorously monitor these processes. Using Six Sigma metrics, quality in a clinical laboratory can be monitored more effectively, and benchmarks can be set for improving efficiency.
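
    Converting an error rate to a Sigma level is commonly done with the standard normal quantile plus the conventional 1.5σ shift; the abstract does not state which convention was used, so the sketch below is an assumption, and the defect count is hypothetical:

        from scipy.stats import norm

        def sigma_level(defect_rate, shift=1.5):
            """Short-term Sigma level for a given defect proportion (e.g. 0.0045 for 0.45%)."""
            return norm.ppf(1.0 - defect_rate) + shift

        # Hypothetical indicator: 120 hemolysed samples out of 100,000 specimens.
        rate = 120 / 100_000
        print(f"defect rate {rate:.3%} -> Sigma ~ {sigma_level(rate):.2f}")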

  4. Assessment of bedside transfusion practices at a tertiary care center: A step closer to controlling the chaos

    PubMed Central

    Khetan, Dheeraj; Katharia, Rahul; Pandey, Hem Chandra; Chaudhary, Rajendra; Harsvardhan, Rajesh; Pandey, Hemchandra; Sonkar, Atul

    2018-01-01

    BACKGROUND: The blood transfusion chain can be divided into three phases: preanalytical (patient bedside), analytical (steps done at transfusion services), and postanalytical (bedside). The majority (~70%) of adverse events due to blood transfusion have been attributed to errors in bedside blood administration practices. A survey of bedside transfusion practices (preanalytical and postanalytical phases) was done to assess awareness of, and compliance with, guidelines regarding requisition and administration of blood components. MATERIALS AND METHODS: An interview-based questionnaire of ward staff and an observational survey of actual transfusion of blood components in a total of 26 wards of the institute were carried out during November-December 2013. All the collected data were coded (to maintain confidentiality) and analyzed using SPSS (v 20). For analysis, wards were divided into three categories: medical, surgical, and others (including all intensive care units). RESULTS: A total of 104 (33 resident doctors and 71 nursing) staff members were interviewed, and the observational survey could be conducted in 25 wards during the study period. In the preanalytical phase, the major issues were lack of awareness of institute guidelines (80.6% not aware), improper sampling practices (67.3%), and prescription-related problems (56.7%). In the postanalytical phase, the major issues were found to be lack of consent for blood transfusion (72%), improper warming of blood components (~80%), and problems in storage and discarding of blood units. CONCLUSION: There is a need to create awareness about policies and guidelines of bedside transfusion among ward staff. Regular audits are necessary to ensure compliance with guidelines among clinical staff. PMID:29563672

  5. Correlating subjective and objective descriptors of ultra high molecular weight wear particles from total joint prostheses.

    PubMed

    McMullin, Brian T; Leung, Ming-Ying; Shanbhag, Arun S; McNulty, Donald; Mabrey, Jay D; Agrawal, C Mauli

    2006-02-01

    A total of 750 images of individual ultra-high molecular weight polyethylene (UHMWPE) particles isolated from periprosthetic failed hip, knee, and shoulder arthroplasties were extracted from archival scanning electron micrographs. Particle size and morphology were subsequently analyzed using computerized image analysis software utilizing five descriptors found in ASTM F1877-98, a standard for quantitative description of wear debris. An online survey application was developed to display particle images, and allowed ten respondents to classify particle morphologies according to commonly used terminology as fibers, flakes, or granules. Particles were categorized based on a simple majority of responses. All descriptors were evaluated using a one-way ANOVA and Tukey-Kramer test for all-pairs comparison among each class of particles. A logistic regression model using half of the particles included in the survey was then used to develop a mathematical scheme to predict whether a given particle should be classified as a fiber, flake, or granule based on its quantitative measurements. The validity of the model was then assessed using the other half of the survey particles and compared with human responses. Comparison of the quantitative measurements of isolated particles showed that the morphologies of each particle type classified by respondents were statistically different from one another (p<0.05). The average agreement between mathematical prediction and human respondents was 83.5% (standard error 0.16%). These data suggest that computerized descriptors can be feasibly correlated with subjective terminology, thus providing a basis for a common vocabulary for particle description which can be translated into quantitative dimensions.
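
    A minimal sketch of the classification step follows, with scikit-learn's multinomial logistic regression standing in for the model; the shape descriptors and labels are synthetic stand-ins for the ASTM F1877 measurements and majority-vote categories used in the study:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 300
        # synthetic descriptors: fibers elongated, granules round, flakes in between
        aspect_ratio = np.concatenate([rng.normal(5.0, 1.0, n),
                                       rng.normal(2.0, 0.5, n),
                                       rng.normal(1.2, 0.2, n)])
        roundness = np.concatenate([rng.normal(0.2, 0.05, n),
                                    rng.normal(0.5, 0.10, n),
                                    rng.normal(0.8, 0.10, n)])
        X = np.column_stack([aspect_ratio, roundness])
        y = np.array(["fiber"] * n + ["flake"] * n + ["granule"] * n)

        # fit on a random half, then check agreement with the labels on the other half
        idx = rng.permutation(len(y))
        train, test = idx[:len(y) // 2], idx[len(y) // 2:]
        model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        agreement = (model.predict(X[test]) == y[test]).mean()
        print(f"agreement on held-out half: {agreement:.1%}")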

  6. Correlating subjective and objective descriptors of ultra high molecular weight wear particles from total joint prostheses

    PubMed Central

    McMullin, Brian T.; Leung, Ming-Ying; Shanbhag, Arun S.; McNulty, Donald; Mabrey, Jay D.; Agrawal, C. Mauli

    2014-01-01

    A total of 750 images of individual ultra-high molecular weight polyethylene (UHMWPE) particles isolated from periprosthetic failed hip, knee, and shoulder arthroplasties were extracted from archival scanning electron micrographs. Particle size and morphology were subsequently analyzed using computerized image analysis software utilizing five descriptors found in ASTM F1877-98, a standard for quantitative description of wear debris. An online survey application was developed to display particle images, and allowed ten respondents to classify particle morphologies according to commonly used terminology as fibers, flakes, or granules. Particles were categorized based on a simple majority of responses. All descriptors were evaluated using a one-way ANOVA and Tukey–Kramer test for all-pairs comparison among each class of particles. A logistic regression model using half of the particles included in the survey was then used to develop a mathematical scheme to predict whether a given particle should be classified as a fiber, flake, or granule based on its quantitative measurements. The validity of the model was then assessed using the other half of the survey particles and compared with human responses. Comparison of the quantitative measurements of isolated particles showed that the morphologies of each particle type classified by respondents were statistically different from one another (p<0.05). The average agreement between mathematical prediction and human respondents was 83.5% (standard error 0.16%). These data suggest that computerized descriptors can be feasibly correlated with subjective terminology, thus providing a basis for a common vocabulary for particle description which can be translated into quantitative dimensions. PMID:16112725

  7. Awareness of surgical costs: a multicenter cross-sectional survey.

    PubMed

    Bade, Kim; Hoogerbrug, Jonathan

    2015-01-01

    Resource scarcity continues to be an important problem in modern surgical practice. Studies in North America and Europe have found that medical professionals have limited understanding of the costs of medical care. No cost awareness studies have been undertaken in Australasia or specifically focusing on the surgical team. This study determined the cost of a range of commonly used diagnostic tests, procedures, and hospital resources associated with care of the surgical patient. The surgical teams' awareness of these costs was then assessed in a multicenter cross-sectional survey. In total, 14 general surgical consultants, 14 registrars, and 25 house officers working in three New Zealand hospitals were asked to estimate the costs of 14 items commonly associated with patient care. Cost estimations were considered correct if within plus or minus 25% of the actual cost. Accuracy was assessed by calculating the median, mean, and absolute percentage discrepancy. A total of 57 surveys were completed, of which four were incomplete and were not included in the analysis. Cost awareness was generally poor, and members of the surgical team were rarely able to estimate the costs to within 25%. The mean absolute percentage error was 0.87 (95% CI: 0.58-1.18), and underestimates were most common. There was no significant difference in estimate accuracy between consultants, registrars, or house officers, or between consultants working in both public and private practice compared with those working in public practice alone. There is poor awareness of surgical costs among consultant surgeons, registrars, and junior physicians working in Australasia. Copyright © 2014 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
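
    The two accuracy measures used (an estimate counts as correct if within plus or minus 25% of the actual cost, summarized alongside the mean absolute percentage error) are simple to compute. A short Python sketch with hypothetical cost values:

        import numpy as np

        actual = np.array([120.0, 45.0, 800.0, 15.0])       # actual item costs
        estimates = np.array([60.0, 50.0, 1500.0, 14.0])    # one respondent's estimates

        ape = np.abs(estimates - actual) / actual            # absolute percentage errors
        print(f"mean absolute percentage error: {ape.mean():.2f}")
        print(f"proportion within +/-25%: {np.mean(ape <= 0.25):.0%}")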

  8. RNAV (GPS) total system error models for use in wake encounter risk analysis of candidate CSPR pairs for inclusion in FAA Order 7110.308

    DOT National Transportation Integrated Search

    2013-08-01

    The purpose of this memorandum is to provide recommended Total System Error (TSE) models for : aircraft using RNAV (GPS) guidance when analyzing the wake encounter risk of proposed simultaneous : dependent (paired) approaches, with 1.5 Nautical...

  9. RNAV (GPS) total system error models for use in wake encounter risk analysis of dependent paired approaches to closely-spaced parallel runways : Project memorandum - February 2014

    DOT National Transportation Integrated Search

    2014-02-01

    The purpose of this memorandum is to provide recommended Total System Error (TSE) models : for aircraft using RNAV (GPS) guidance when analyzing the wake encounter risk of proposed : simultaneous dependent (paired) approach operations to Closel...

  10. A patient-initiated voluntary online survey of adverse medical events: the perspective of 696 injured patients and families

    PubMed Central

    Southwick, Frederick S; Cranley, Nicole M; Hallisy, Julia A

    2015-01-01

    Background Preventable medical errors continue to be a major cause of death in the USA and throughout the world. Many patients have written about their experiences on websites and in published books. Methods As patients and family members who have experienced medical harm, we have created a nationwide voluntary survey in order to more broadly and systematically capture the perspective of patients and patient families experiencing adverse medical events and have used quantitative and qualitative analysis to summarise the responses of 696 patients and their families. Results Harm was most commonly associated with diagnostic and therapeutic errors, followed by surgical or procedural complications, hospital-associated infections and medication errors, and our quantitative results match those of previous provider-initiated patient surveys. Qualitative analysis of 450 narratives revealed a lack of perceived provider and system accountability, deficient and disrespectful communication and a failure of providers to listen as major themes. The consequences of adverse events included death, post-traumatic stress, financial hardship and permanent disability. These conditions and consequences led to a loss of patients’ trust in both the health system and providers. Patients and family members offered suggestions for preventing future adverse events and emphasised the importance of shared decision-making. Conclusions This large voluntary survey of medical harm highlights the potential efficacy of patient-initiated surveys for providing meaningful feedback and for guiding improvements in patient care. PMID:26092166

  11. An Analysis of Misconceptions in Science Textbooks: Earth science in England and Wales

    NASA Astrophysics Data System (ADS)

    King, Chris John Henry

    2010-03-01

    Surveys of the earth science content of all secondary (high school) science textbooks and related publications used in England and Wales have revealed high levels of error/misconception. The 29 science textbooks or textbook series surveyed (51 texts in all) showed poor coverage of National Curriculum earth science and contained a mean level of one earth science error/misconception per page. Science syllabuses and examinations surveyed also showed errors/misconceptions. More than 500 instances of misconception were identified through the surveys. These were analysed for frequency, indicating that those areas of the earth science curriculum most prone to misconception are sedimentary processes/rocks, earthquakes/Earth's structure, and plate tectonics. For the 15 most frequent misconceptions, examples of quotes from the textbooks are given, together with the scientific consensus view, a discussion, and an example of a misconception of similar significance in another area of science. The misconceptions identified in the surveys are compared with those described in the literature. This indicates that the misconceptions found in college students and pre-service/practising science teachers are often also found in published materials, and therefore are likely to reinforce the misconceptions in teachers and their students. The analysis may also reflect the prevalence of earth science misconceptions in the UK secondary (high school) science-teaching population. The analysis and discussion provide the opportunity for writers of secondary science materials to improve their work on earth science and to provide a platform for improved teaching and learning of earth science in the future.

  12. SIRTF Focal Plane Survey: A Pre-flight Error Analysis

    NASA Technical Reports Server (NTRS)

    Bayard, David S.; Brugarolas, Paul B.; Boussalis, Dhemetrios; Kang, Bryan H.

    2003-01-01

    This report contains a pre-flight error analysis of the calibration accuracies expected from implementing the currently planned SIRTF focal plane survey strategy. The main purpose of this study is to verify that the planned strategy will meet focal plane survey calibration requirements (as put forth in the SIRTF IOC-SV Mission Plan [4]) and to quantify the actual accuracies expected. The error analysis was performed by running the Instrument Pointing Frame (IPF) Kalman filter on a complete set of simulated IOC-SV survey data and studying the resulting propagated covariances. The main conclusion of this study is that all focal plane calibration requirements can be met with the currently planned survey strategy. The associated margins range from 3 to 95 percent and tend to be smallest for frames having a 0.14" requirement and largest for frames having a more generous 0.28" (or larger) requirement. The smallest margin of 3 percent is associated with the IRAC 3.6 and 5.8 micron array centers (frames 068 and 069), and the largest margin of 95 percent is associated with the MIPS 160 micron array center (frame 087). For pointing purposes, the most critical calibrations are for the IRS Peakup sweet spots and short wavelength slit centers (frames 019, 023, 052, 028, 034). Results show that these frames are meeting their 0.14" requirements with an expected accuracy of approximately 0.1", which corresponds to a 28 percent margin.
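
    The quoted margins appear consistent with margin = (requirement - expected accuracy) / requirement; this reading is an assumption, illustrated below in Python with the 0.14-arcsecond requirement and the roughly 0.1-arcsecond expected accuracy cited for the IRS frames:

        def calibration_margin(requirement_arcsec, expected_accuracy_arcsec):
            """Fractional margin by which a calibration requirement is met."""
            return (requirement_arcsec - expected_accuracy_arcsec) / requirement_arcsec

        # ~0.29, close to the quoted 28 percent margin for the IRS peak-up frames
        print(f"{calibration_margin(0.14, 0.10):.0%}")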

  13. Accuracy estimates for some global analytical models of the Earth's main magnetic field on the basis of data on gradient magnetic surveys at stratospheric balloons

    NASA Astrophysics Data System (ADS)

    Tsvetkov, Yu. P.; Brekhov, O. M.; Bondar, T. N.; Filippov, S. V.; Petrov, V. G.; Tsvetkova, N. M.; Frunze, A. Kh.

    2014-03-01

    Two global analytical models of the main magnetic field of the Earth (MFE) have been used to determine their potential in deriving an anomalous MFE from balloon magnetic surveys conducted at altitudes of ˜30 km. The daily mean spherical harmonic model (DMSHM) constructed from satellite data on the day of the balloon magnetic survey was analyzed. This model for the day of the magnetic survey was shown to be almost free of errors associated with secular variations and can be recommended for deriving an anomalous MFE. The error of the enhanced magnetic model (EMM) was estimated as a function of the number of harmonics used in the model. A model limited to the first 13 harmonics was shown to lead to errors in the main MFE of around 15 nT. The EMM developed to n = m = 720 and constructed on the basis of satellite and ground-based magnetic data fails to adequately simulate the anomalous MFE at altitudes of 30 km. To construct a representative model developed to m = n = 720, ground-based magnetic data should be replaced by data from balloon magnetic surveys at altitudes of ˜30 km. The results of these investigations were confirmed by a balloon experiment conducted by the Pushkov Institute of Terrestrial Magnetism, Ionosphere, and Radio Wave Propagation of the Russian Academy of Sciences and the Moscow Aviation Institute.

  14. Characterizing Satellite Rainfall Errors based on Land Use and Land Cover and Tracing Error Source in Hydrologic Model Simulation

    NASA Astrophysics Data System (ADS)

    Gebregiorgis, A. S.; Peters-Lidard, C. D.; Tian, Y.; Hossain, F.

    2011-12-01

    Hydrologic modeling has benefited from operational production of high resolution satellite rainfall products. The global coverage, near-real-time availability, and spatial and temporal sampling resolutions have advanced the application of physically based semi-distributed and distributed hydrologic models for a wide range of environmental decision-making processes. Despite these successes, uncertainties arising from the indirect nature of satellite rainfall estimates and from the hydrologic models themselves remain a challenge to making meaningful and more evocative predictions. This study comprises breaking down the total satellite rainfall error into three independent components (hit bias, missed precipitation, and false alarm), characterizing them as a function of land use and land cover (LULC), and tracing back the source of simulated soil moisture and runoff error in a physically based distributed hydrologic model. Here, we ask: in what way do the three independent total bias components, hit bias, missed precipitation, and false precipitation, affect the estimation of soil moisture and runoff in physically based hydrologic models? To address this question clearly, we implemented a systematic approach by characterizing and decomposing the total satellite rainfall error as a function of land use and land cover in the Mississippi basin. This helps identify the major sources of soil moisture and runoff error in hydrologic model simulation and traces the information back to algorithm development and sensor type, which ultimately helps improve the algorithms and will improve applications and data assimilation for GPM in the future. For forest, woodland, and human land use systems, the soil moisture error was mainly dictated by the total bias for the 3B42-RT, CMORPH, and PERSIANN products. In contrast, runoff error was largely dominated by the hit bias rather than the total bias. This difference arises from the presence of missed precipitation, which is a major contributor to the total bias during both the summer and winter seasons. Missed precipitation, most likely light rain and rain over snow cover, has a significant effect on soil moisture but is less capable of producing runoff, which makes runoff depend mainly on the hit bias.
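
    The error decomposition can be sketched per grid cell by comparing satellite and reference rainfall. The scheme below (total bias = hit bias - missed precipitation + false precipitation) follows the commonly used decomposition of Tian et al. (2009), which is assumed here rather than stated in the abstract; the rainfall values are hypothetical:

        import numpy as np

        def decompose_error(sat, ref, rain_thresh=0.0):
            """Split total bias into hit bias, missed precipitation, and false precipitation."""
            sat, ref = np.asarray(sat, float), np.asarray(ref, float)
            hit = (sat > rain_thresh) & (ref > rain_thresh)
            miss = (sat <= rain_thresh) & (ref > rain_thresh)
            false = (sat > rain_thresh) & (ref <= rain_thresh)
            hit_bias = np.sum(sat[hit] - ref[hit])
            missed = np.sum(ref[miss])
            false_precip = np.sum(sat[false])
            total = hit_bias - missed + false_precip
            return total, hit_bias, missed, false_precip

        sat = [0.0, 2.5, 0.0, 4.0, 1.0]    # hypothetical satellite rainfall (mm)
        ref = [1.0, 2.0, 0.0, 5.0, 0.0]    # hypothetical reference rainfall (mm)
        print(decompose_error(sat, ref))   # total equals sum(sat) - sum(ref)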

  15. Using SEM to Analyze Complex Survey Data: A Comparison between Design-Based Single-Level and Model-Based Multilevel Approaches

    ERIC Educational Resources Information Center

    Wu, Jiun-Yu; Kwok, Oi-man

    2012-01-01

    Both ad-hoc robust sandwich standard error estimators (design-based approach) and multilevel analysis (model-based approach) are commonly used for analyzing complex survey data with nonindependent observations. Although these 2 approaches perform equally well on analyzing complex survey data with equal between- and within-level model structures…

  16. Heuristic errors in clinical reasoning.

    PubMed

    Rylander, Melanie; Guerrasio, Jeannette

    2016-08-01

    Errors in clinical reasoning contribute to patient morbidity and mortality. The purpose of this study was to determine the types of heuristic errors made by third-year medical students and first-year residents. This study surveyed approximately 150 clinical educators, inquiring about the types of heuristic errors they observed in third-year medical students and first-year residents. Anchoring and premature closure were the two most common errors observed among third-year medical students and first-year residents. There was no difference in the types of errors observed in the two groups. Errors in clinical reasoning contribute to patient morbidity and mortality. Clinical educators perceived that both third-year medical students and first-year residents committed similar heuristic errors, implying that additional medical knowledge and clinical experience do not affect the types of heuristic errors made. Further work is needed to help identify methods that can be used to reduce heuristic errors early in a clinician's education. © 2015 John Wiley & Sons Ltd.

  17. After the Medication Error: Recent Nursing Graduates' Reflections on Adequacy of Education.

    PubMed

    Treiber, Linda A; Jones, Jackie H

    2018-05-01

    The purpose of this study was to better understand individual- and system-level factors surrounding the making of a medication error from the perspective of recent Bachelor of Science in Nursing graduates. The online survey's mixed-methods items included perceptions of the adequacy of preparatory nursing education, contributory variables, emotional responses, and treatment by the employer following the error. Of the 168 respondents, 55% had made a medication error. Errors resulted from inexperience, rushing, technology, staffing, and patient acuity. Twenty-four percent did not report their errors. Key themes for improving education included more practice in varied clinical areas, intensive pharmacological preparation, practical instruction in functioning within the health care environment, and coping after making medication errors. Errors generally caused emotional distress in the error maker. Overall, perceived treatment after the error reflected supportive environments, where nurses were generally treated with respect, fairness, and understanding. Opportunities for nursing education include second victim awareness and reinforcing professional practice standards. [J Nurs Educ. 2018;57(5):275-280.]. Copyright 2018, SLACK Incorporated.

  18. Issues with data and analyses: Errors, underlying themes, and potential solutions

    PubMed Central

    Allison, David B.

    2018-01-01

    Some aspects of science, taken at the broadest level, are universal in empirical research. These include collecting, analyzing, and reporting data. In each of these aspects, errors can and do occur. In this work, we first discuss the importance of focusing on statistical and data errors to continually improve the practice of science. We then describe underlying themes of the types of errors and postulate contributing factors. To do so, we describe a case series of relatively severe data and statistical errors coupled with surveys of some types of errors to better characterize the magnitude, frequency, and trends. Having examined these errors, we then discuss the consequences of specific errors or classes of errors. Finally, given the extracted themes, we discuss methodological, cultural, and system-level approaches to reducing the frequency of commonly observed errors. These approaches will plausibly contribute to the self-critical, self-correcting, ever-evolving practice of science, and ultimately to furthering knowledge. PMID:29531079

  19. Development and content validation of performance assessments for endoscopic third ventriculostomy.

    PubMed

    Breimer, Gerben E; Haji, Faizal A; Hoving, Eelco W; Drake, James M

    2015-08-01

    This study aims to develop and establish the content validity of multiple expert rating instruments to assess performance in endoscopic third ventriculostomy (ETV), collectively called the Neuro-Endoscopic Ventriculostomy Assessment Tool (NEVAT). The important aspects of ETV were identified through a review of current literature, ETV videos, and discussion with neurosurgeons, fellows, and residents. Three assessment measures were subsequently developed: a procedure-specific checklist (CL), a CL of surgical errors, and a global rating scale (GRS). Neurosurgeons from various countries, all identified as experts in ETV, were then invited to participate in a modified Delphi survey to establish the content validity of these instruments. In each Delphi round, experts rated their agreement with including each procedural step, error, and GRS item in the respective instruments on a 5-point Likert scale. Seventeen experts agreed to participate in the study and completed all Delphi rounds. After item generation, a total of 27 procedural CL items, 26 error CL items, and 9 GRS items were posed to Delphi panelists for rating. An additional 17 procedural CL items, 12 error CL items, and 1 GRS item were added by panelists. After three rounds, strong consensus (>80% agreement) was achieved on 35 procedural CL items, 29 error CL items, and 10 GRS items. Moderate consensus (50-80% agreement) was achieved on an additional 7 procedural CL items and 1 error CL item. The final procedural and error checklists contained 42 and 30 items, respectively (divided into setup, exposure, navigation, ventriculostomy, and closure). The final GRS contained 10 items. We have established the content validity of three ETV assessment measures by iterative consensus of an international expert panel. Each measure provides unique assessment information and thus can be used individually or in combination, depending on the characteristics of the learner and the purpose of the assessment. These instruments must now be evaluated in both simulated and operative settings to determine their construct validity and reliability. Ultimately, the measures contained in the NEVAT may prove suitable for formative assessment during ETV training and potentially as summative assessment measures during certification.

  20. Key Findings from a National Internet Survey of 400 Teachers and 95 Principals Conducted November 12-21, 2008

    ERIC Educational Resources Information Center

    McCleskey, Nicole

    2010-01-01

    This paper presents the key findings from a national Internet survey of 400 teachers and 95 principals. This survey was conducted November 12-21, 2008. The sample was based on a list provided by EMI Surveys, a custom online research sample provider with an extensive portfolio of projects. The margin of error for a sample of 495 interviews is [plus…
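
    The truncated margin-of-error figure above follows from the standard simple-random-sampling formula; a quick sketch assuming the worst case p = 0.5 and a 95% confidence level (and setting aside the fact that an opt-in online panel is not a probability sample):

        import math

        def margin_of_error(n, p=0.5, z=1.96):
            # Worst-case 95% sampling margin of error for a simple random sample of size n.
            return z * math.sqrt(p * (1 - p) / n)

        print(f"+/- {100 * margin_of_error(495):.1f} percentage points")  # ~4.4 for n = 495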

  1. A parenteral nutrition use survey with gap analysis.

    PubMed

    Boullata, Joseph I; Guenter, Peggi; Mirtallo, Jay M

    2013-03-01

    Parenteral nutrition (PN) is a high-alert medication for which safe practice guidelines are available. Recent adverse events associated with PN have been widely reported. A survey of current practices was indicated as new guidelines are being considered. A web-based survey consisting of 70 items was made available for the month of August 2011. Respondents provided answers to questions that addressed all aspects of the PN use process. There were a total of 895 respondents to the survey, including dietitians, nurses, pharmacists, and physicians. They predominantly represented hospital settings (89%), with 44% from academic institutions. Most organizations use a once-daily PN admixture with 21% outsourcing preparation. Electronic PN order entry is available in one-third of organizations, and the use of standardized order sets prevails. Unfortunately, electronic interfaces between computer systems remain infrequent, meaning that at least one transcription step is required by most in the PN use process. There are a wide variety of methods for ordering PN components, many of which are inconsistent with safe practices. Most organizations dedicate a pharmacist to review the PN orders, many of which require clarifications. Documentation at each step of the PN use process with oversight to identify deviations from best practice recommendations is infrequent. A significant proportion (44%) does not track PN-related medication errors. The survey data are a valuable snapshot of current practices with PN. Poor compliance with some of the safe practice guidelines continues. This will help guide new safety initiatives for the PN use process.

  2. Simulating water and nitrogen loss from an irrigated paddy field under continuously flooded condition with Hydrus-1D model.

    PubMed

    Yang, Rui; Tong, Juxiu; Hu, Bill X; Li, Jiayun; Wei, Wenshuo

    2017-06-01

    Agricultural non-point source pollution is a major factor in surface water and groundwater pollution, especially for nitrogen (N) pollution. In this paper, an experiment was conducted in a direct-seeded paddy field under traditional continuously flooded irrigation (CFI). The water movement and N transport and transformation were simulated via the Hydrus-1D model, and the model was calibrated using field measurements. The model had a total water balance error of 0.236 cm and a relative error (error/input total water) of 0.23%. For the solute transport model, the N balance error and relative error (error/input total N) were 0.36 kg ha^-1 and 0.40%, respectively. The study results indicate that the plow pan plays a crucial role in vertical water movement in paddy fields. Water flow was mainly lost through surface runoff and underground drainage, with proportions to total input water of 32.33 and 42.58%, respectively. The water productivity in the study was 0.36 kg m^-3. The simulated N concentration results revealed that ammonia was the main form in rice uptake (95% of total N uptake), and its concentration was much larger than that of nitrate under CFI. Denitrification and volatilization were the main losses, with proportions to total consumption of 23.18 and 14.49%, respectively. Leaching (10.28%) and surface runoff loss (2.05%) were the main losses of N pushed out of the system by water. Hydrus-1D simulation was an effective method to predict water flow and N concentrations in the three different forms. The study provides results that could guide water and fertilization management, as well as field data for future numerical studies of water flow and N transport and transformation.
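
    The balance errors quoted above are straightforward closure checks; a minimal sketch of the calculation (the numbers below are placeholders, not the study's actual budget terms):

        def balance_error(total_in, total_out, storage_change):
            # Mass-balance closure error and relative error (error divided by total input).
            error = total_in - total_out - storage_change
            return error, error / total_in

        err, rel = balance_error(total_in=102.5, total_out=101.7, storage_change=0.56)
        print(f"error = {err:.3f} cm, relative error = {100 * rel:.2f}%")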

  3. Cost effectiveness of the US Geological Survey stream-gaging program in Alabama

    USGS Publications Warehouse

    Jeffcoat, H.H.

    1987-01-01

    A study of the cost effectiveness of the stream gaging program in Alabama identified data uses and funding sources for 72 surface water stations (including dam stations, slope stations, and continuous-velocity stations) operated by the U.S. Geological Survey in Alabama with a budget of $393,600. Of these, 58 gaging stations were used in all phases of the analysis at a funding level of $328,380. For the current policy of operation of the 58-station program, the average standard error of estimation of instantaneous discharge is 29.3%. This overall level of accuracy can be maintained with a budget of $319,800 by optimizing routes and implementing some policy changes. The maximum budget considered in the analysis was $361,200, which gave an average standard error of estimation of 20.6%. The minimum budget considered was $299,360, with an average standard error of estimation of 36.5%. The study indicates that a major source of error in the stream gaging records is lost or missing data that are the result of streamside equipment failure. If perfect equipment were available, the standard error in estimating instantaneous discharge under the current program and budget could be reduced to 18.6%. This can also be interpreted to mean that the streamflow data records have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)

  4. Determining relative error bounds for the CVBEM

    USGS Publications Warehouse

    Hromadka, T.V.

    1985-01-01

    The Complex Variable Boundary Element Method (CVBEM) provides a measure of relative error which can be utilized to subsequently reduce the error or provide information for further modeling analysis. By maximizing the relative error norm on each boundary element, a bound on the total relative error for each boundary element can be evaluated. This bound can be utilized to test CVBEM convergence, to analyze the effects of additional boundary nodal points in reducing the modeling error, and to evaluate the sensitivity of the resulting modeling error within a boundary element to the error produced in another boundary element as a function of geometric distance. © 1985.

  5. The Accuracy of GBM GRB Localizations

    NASA Astrophysics Data System (ADS)

    Briggs, Michael Stephen; Connaughton, V.; Meegan, C.; Hurley, K.

    2010-03-01

    We report a study of the accuracy of GBM GRB localizations, analyzing three types of localizations: those produced automatically by the GBM Flight Software on board GBM, those produced automatically with ground software in near real time, and localizations produced with human guidance. The two types of automatic locations are distributed in near real time via GCN Notices; the human-guided locations are distributed on timescales of minutes to hours via GCN Circulars. This work uses a Bayesian analysis that models the distribution of the GBM total location error by comparing GBM locations to more accurate locations obtained with other instruments. Reference locations are obtained from Swift, Super-AGILE, the LAT, and the IPN. We model the GBM total location errors as having systematic errors in addition to the statistical errors and use the Bayesian analysis to constrain the systematic errors.
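
    A greatly simplified sketch of that idea: treat each offset between a GBM location and a reference location as Gaussian with width sqrt(stat_i^2 + sys^2) and compute a grid posterior for the shared systematic term. The offsets and statistical errors below are made-up numbers, and the published analysis uses a more detailed model of angular errors.

        import numpy as np

        offsets = np.array([3.1, 5.4, 2.2, 7.8, 4.0])  # GBM-reference offsets (deg), hypothetical
        stat = np.array([2.0, 3.5, 1.5, 4.0, 2.5])     # per-burst statistical errors (deg), hypothetical

        sys_grid = np.linspace(0.0, 15.0, 301)          # candidate systematic error values
        log_post = np.empty_like(sys_grid)
        for i, s in enumerate(sys_grid):                # flat prior; Gaussian likelihood per burst
            var = stat**2 + s**2
            log_post[i] = np.sum(-0.5 * offsets**2 / var - 0.5 * np.log(var))
        post = np.exp(log_post - log_post.max())
        post /= np.trapz(post, sys_grid)
        print("posterior mean systematic error:", np.trapz(sys_grid * post, sys_grid))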

  6. Medical errors in primary care clinics – a cross sectional study

    PubMed Central

    2012-01-01

    Background: Patient safety is vital in patient care. There is a lack of studies on medical errors in primary care settings. The aim of the study is to determine the extent of diagnostic inaccuracies and management errors in public funded primary care clinics. Methods: This was a cross-sectional study conducted in twelve public funded primary care clinics in Malaysia. A total of 1753 medical records were randomly selected in 12 primary care clinics in 2007 and were reviewed by trained family physicians for diagnostic, management and documentation errors, potential errors causing serious harm and likelihood of preventability of such errors. Results: The majority of patient encounters (81%) were with medical assistants. Diagnostic errors were present in 3.6% (95% CI: 2.2, 5.0) of medical records and management errors in 53.2% (95% CI: 46.3, 60.2). For management errors, medication errors were present in 41.1% (95% CI: 35.8, 46.4) of records, investigation errors in 21.7% (95% CI: 16.5, 26.8) and decision making errors in 14.5% (95% CI: 10.8, 18.2). A total of 39.9% (95% CI: 33.1, 46.7) of these errors had the potential to cause serious harm. Problems of documentation including illegible handwriting were found in 98.0% (95% CI: 97.0, 99.1) of records. Nearly all errors (93.5%) detected were considered preventable. Conclusions: The occurrence of medical errors was high in primary care clinics particularly with documentation and medication errors. Nearly all were preventable. Remedial intervention addressing completeness of documentation and prescriptions are likely to yield reduction of errors. PMID:23267547
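
    For reference, a simple-random-sampling (Wald) confidence interval for an error prevalence can be computed as below; the study's reported intervals are wider because they reflect the multi-clinic sampling design, and the 63/1753 count is inferred from the rounded 3.6% figure.

        import math

        def wald_ci(k, n, z=1.96):
            # Simple (Wald) 95% confidence interval for a prevalence k/n.
            p = k / n
            half = z * math.sqrt(p * (1 - p) / n)
            return p, max(0.0, p - half), min(1.0, p + half)

        print(wald_ci(63, 1753))  # diagnostic errors: ~3.6% (about 2.7%-4.5% ignoring the design effect)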

  7. The infrared luminosity function of AKARI 90 μm galaxies in the local Universe

    NASA Astrophysics Data System (ADS)

    Kilerci Eser, Ece; Goto, Tomotsugu

    2018-03-01

    Local infrared (IR) luminosity functions (LFs) are necessary benchmarks for high-redshift IR galaxy evolution studies. Any accurate IR LF evolution study requires accordingly accurate local IR LFs. We present IR galaxy LFs at redshifts of z ≤ 0.3 from the AKARI space telescope, which performed an all-sky survey in six IR bands (9, 18, 65, 90, 140, and 160 μm) with 10 times better sensitivity than its precursor, the Infrared Astronomical Satellite. Availability of the 160 μm filter is critically important for accurately measuring the total IR luminosity of galaxies, covering the peak of the dust emission. By combining data from the Wide-field Infrared Survey Explorer (WISE), Sloan Digital Sky Survey (SDSS) Data Release 13 (DR13), the Six-degree Field Galaxy Survey and the 2MASS Redshift Survey, we created a sample of 15 638 local IR galaxies with spectroscopic redshifts, a factor of 7 larger than the previously studied AKARI-SDSS sample. After carefully correcting for volume effects in both IR and optical, the obtained IR LFs agree well with previous studies, but come with much smaller errors. The measured local IR luminosity density is ΩIR = 1.19 ± 0.05 × 10^8 L⊙ Mpc^-3. The contributions from luminous IR galaxies and ultraluminous IR galaxies to ΩIR are very small, 9.3 per cent and 0.9 per cent, respectively. No all-sky far-IR survey is planned for the foreseeable future. The IR LFs obtained in this work will therefore remain an important benchmark for high-redshift studies for decades.

  8. Two Enhancements of the Logarithmic Least-Squares Method for Analyzing Subjective Comparisons

    DTIC Science & Technology

    1989-03-25

    error term. For this model, the total sum of squares (SSTO), defined as SSTO = Σ_{i=1}^{n} (y_i - ȳ)², can be partitioned into error and regression sums ... of the regression line around the mean value. Mathematically, for the model given by equation A.4, SSTO = SSE + SSR (A.6), where SSTO is the total sum of squares (i.e., the variance of the y_i's), SSE is the error sum of squares, and SSR is the regression sum of squares. SSTO, SSE, and SSR are given
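
    The partition quoted in the excerpt is easy to verify numerically for any least-squares fit that includes an intercept; a small sketch (data values are arbitrary):

        import numpy as np

        def sum_of_squares_partition(y, y_hat):
            # SSTO = SSE + SSR holds exactly for a least-squares fit with an intercept.
            y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
            ssto = np.sum((y - y.mean()) ** 2)      # total sum of squares
            sse = np.sum((y - y_hat) ** 2)          # error (residual) sum of squares
            ssr = np.sum((y_hat - y.mean()) ** 2)   # regression sum of squares
            return ssto, sse, ssr

        x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
        y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
        slope, intercept = np.polyfit(x, y, 1)
        print(sum_of_squares_partition(y, slope * x + intercept))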

  9. Relationship between Attributional Errors and At-Risk Behaviors among Juvenile Delinquents.

    ERIC Educational Resources Information Center

    Daley, Christine E.; Onwuegbuzie, Anthony J.

    The purpose of this study was to determine whether at-risk behaviors (e.g., substance abuse, gun ownership, sexual activity, and gang membership) are associated with violence attribution errors, as measured by Daley and Onwuegbuzie's (1995) Violence Attribution Survey, among 82 incarcerated male juvenile delinquents. Analysis revealed that the…

  10. Reliability Estimation for Aggregated Data: Applications for Organizational Research.

    ERIC Educational Resources Information Center

    Hart, Roland J.; Bradshaw, Stephen C.

    This report provides the statistical tools necessary to measure the extent of error that exists in organizational record data and group survey data. It is felt that traditional methods of measuring error are inappropriate or incomplete when applied to organizational groups, especially in studies of organizational change when the same variables are…

  11. Survey Costs and Errors: User’s Manual for the Lotus 1-2-3 Spreadsheet

    DTIC Science & Technology

    1991-04-01

    select appropriate options such as the use of a business reply envelope or a self-addressed, stamped envelope for returning mailed surveys. Recruit. T... self-explanatory and need not be discussed here. Mode/Systematic Automatically enter ALL time and cost estimates for a survey project. "Time and cost...user can choose between a business reply envelope (BRE) or a self-addressed, stamped envelope (SASE) for returning the surveys. For mail surveys, the

  12. The introduction of an acute physiological support service for surgical patients is an effective error reduction strategy.

    PubMed

    Clarke, D L; Kong, V Y; Naidoo, L C; Furlong, H; Aldous, C

    2013-01-01

    Acute surgical patients are particularly vulnerable to human error. The Acute Physiological Support Team (APST) was created with the twin objectives of identifying high-risk acute surgical patients in the general wards and reducing both the incidence of error and the impact of error on these patients. A number of error taxonomies were used to understand the causes of human error and a simple risk stratification system was adopted to identify patients who are particularly at risk of error. During the period November 2012-January 2013 a total of 101 surgical patients were cared for by the APST at Edendale Hospital. The average age was forty years. There were 36 females and 65 males. There were 66 general surgical patients and 35 trauma patients. Fifty-six patients were referred on the day of their admission. The average length of stay in the APST was four days. Eleven patients were haemodynamically unstable on presentation and twelve were clinically septic. The reasons for referral were sepsis (4), respiratory distress (3), acute kidney injury (AKI) (38), post-operative monitoring (39), pancreatitis (3), ICU down-referral (7), hypoxia (5), low GCS (1), and coagulopathy (1). The mortality rate was 13%. A total of thirty-six patients experienced 56 errors. A total of 143 interventions were initiated by the APST. These included institution or adjustment of intravenous fluids (101), blood transfusion (12), antibiotics (9), management of neutropenic sepsis (1), central line insertion (3), optimization of oxygen therapy (7), correction of electrolyte abnormality (8), and correction of coagulopathy (2). CONCLUSION: Our intervention combined current taxonomies of error with a simple risk stratification system and is a variant of the defence in depth strategy of error reduction. We effectively identified and corrected a significant number of human errors in high-risk acute surgical patients. This audit has helped understand the common sources of error in the general surgical wards and will inform on-going error reduction initiatives. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.

  13. Gender differences in the pathway from adverse life events to adolescent emotional and behavioural problems via negative cognitive errors.

    PubMed

    Flouri, Eirini; Panourgia, Constantina

    2011-06-01

    The aim of this study was to test for gender differences in how negative cognitive errors (overgeneralizing, catastrophizing, selective abstraction, and personalizing) mediate the association between adverse life events and adolescents' emotional and behavioural problems (measured with the Strengths and Difficulties Questionnaire). The sample consisted of 202 boys and 227 girls (aged 11-15 years) from three state secondary schools in disadvantaged areas in one county in the South East of England. Control variables were age, ethnicity, special educational needs, exclusion history, family structure, family socio-economic disadvantage, and verbal cognitive ability. Adverse life events were measured with Tiet et al.'s (1998) Adverse Life Events Scale. For both genders, we assumed a pathway from adverse life events to emotional and behavioural problems via cognitive errors. We found no gender differences in life adversity, cognitive errors, total difficulties, peer problems, or hyperactivity. In both boys and girls, even after adjustment for controls, cognitive errors were related to total difficulties and emotional symptoms, and life adversity was related to total difficulties and conduct problems. The life adversity/conduct problems association was not explained by negative cognitive errors in either gender. However, we found gender differences in how adversity and cognitive errors produced hyperactivity and internalizing problems. In particular, life adversity was not related, after adjustment for controls, to hyperactivity in girls and to peer problems and emotional symptoms in boys. Cognitive errors fully mediated the effect of life adversity on hyperactivity in boys and on peer and emotional problems in girls.

  14. Using Airborne Lidar Data from IcePod to Measure Annual and Seasonal Ice Changes Over Greenland

    NASA Astrophysics Data System (ADS)

    Frearson, N.; Bertinato, C.; Das, I.

    2014-12-01

    The IcePod is a multi-sensor airborne science platform that supports a wide suite of instruments, including a Riegl VQ-580 infrared scanning laser, a GPS-inertial positioning system, shallow and deep-ice radars, visible-wave and infrared cameras, and an upward-looking pyrometer. These instruments allow us to image the ice from top to bottom, including the surface of melt-water plumes that originate at the ice-ocean boundary. In collaboration with the New York Air National Guard 109th Airlift Wing, the IcePod is flown on LC-130 aircraft, which presents the unique opportunity to routinely image the Greenland ice sheet several times within a season. This is particularly important for mass balance studies, as we can measure elevation changes during the melt season. During the 2014 summer, laser data were collected via IcePod over the Greenland ice sheet, including Russell Glacier, Jakobshavn Glacier, Eqip Glacier, and Summit Camp. The IcePod will also be routinely operated in Antarctica. We present the initial testing, calibration, and error estimates from the first set of laser data that were collected on IcePod. At a survey altitude of 1000 m, the laser swath covers ~1000 m. A Northrop-Grumman LN-200 tactical grade IMU is rigidly attached to the laser scanner to provide attitude data at a rate of 200 Hz. Several methods were used to determine the lever arm between the IMU center of navigation and the GPS antenna phase center: terrestrial laser scanning, a total station survey, and optimal estimation. Additionally, initial boresight calibration flights yielded misalignment angles within an accuracy of ±4 cm. We also performed routine passes over the airport ramp in Kangerlussuaq, Greenland, comparing the airborne GPS and lidar data to a reference GPS-based ground survey across the ramp, spot GPS points on the ramp, and a nearby GPS base station. Positioning errors can severely impact the accuracy of a laser altimeter when flying over remote regions such as the ice sheets. Setting up GPS base stations along the flight track can prove to be logistically challenging. We have processed the GPS-inertial data using both DGPS and PPP and present the comparison of those results here. Finally, we discuss our processing, calibration and error estimation methods and compare our results to previously flown IceBridge lines.

  15. Use of Geodetic Surveys of Leveling Lines and Dry Tilt Arrays to Study Faults and Volcanoes in Undergraduate Field Geophysics Classes

    NASA Astrophysics Data System (ADS)

    Polet, J.; Alvarez, K.; Elizondo, K.

    2017-12-01

    In the early 1980's and 1990's numerous leveling lines and dry tilt arrays were installed throughout Central and Southern California by United States Geological Survey scientists and other researchers (e.g. Sylvester, 1985). These lines or triangular arrays of geodetic monuments commonly straddle faults or have been installed close to volcanic areas, where significant motion is expected over relatively short time periods. Over the past year, we have incorporated geodetic surveys of these arrays as part of our field exercises in undergraduate and graduate level classes on topics such as shallow subsurface geophysics and field geophysics. In some cases, the monuments themselves first had to be located based on only limited information, testing students' Brunton use and map reading skills. Monuments were then surveyed using total stations and global navigation satellite system (GNSS) receivers, using a variety of experimental procedures. The surveys were documented with tables, photos, maps and graphs in field reports, as well as in wiki pages created by student groups for a geophysics field class this June. The measurements were processed by the students and compared with similar data from surveys conducted soon after installation of the arrays, to analyze the deformation that occurred over the last few decades. The different geodetic techniques were also compared and an error analysis was conducted. The analysis and processing of these data challenged and enhanced students' quantitative literacy and technology skills. The final geodetic measurements are being incorporated into several senior and MSc thesis projects. Further surveys are planned for additional classes, in topics that could include seismology, geodesy, volcanology and global geophysics. We are also considering additional technologies, such as structure from motion (SfM) photogrammetry.

  16. Anatomic, Clinical, and Neuropsychological Correlates of Spelling Errors in Primary Progressive Aphasia

    ERIC Educational Resources Information Center

    Shim, HyungSub; Hurley, Robert S.; Rogalski, Emily; Mesulam, M.-Marsel

    2012-01-01

    This study evaluates spelling errors in the three subtypes of primary progressive aphasia (PPA): agrammatic (PPA-G), logopenic (PPA-L), and semantic (PPA-S). Forty-one PPA patients and 36 age-matched healthy controls were administered a test of spelling. The total number of errors and types of errors in spelling to dictation of regular words,…

  17. Analyzing human errors in flight mission operations

    NASA Technical Reports Server (NTRS)

    Bruno, Kristin J.; Welz, Linda L.; Barnes, G. Michael; Sherif, Josef

    1993-01-01

    A long-term program is in progress at JPL to reduce cost and risk of flight mission operations through a defect prevention/error management program. The main thrust of this program is to create an environment in which the performance of the total system, both the human operator and the computer system, is optimized. To this end, 1580 Incident Surprise Anomaly reports (ISA's) from 1977-1991 were analyzed from the Voyager and Magellan projects. A Pareto analysis revealed that 38 percent of the errors were classified as human errors. A preliminary cluster analysis based on the Magellan human errors (204 ISA's) is presented here. The resulting clusters described the underlying relationships among the ISA's. Initial models of human error in flight mission operations are presented. Next, the Voyager ISA's will be scored and included in the analysis. Eventually, these relationships will be used to derive a theoretically motivated and empirically validated model of human error in flight mission operations. Ultimately, this analysis will be used to make continuous process improvements to end-user applications and training requirements. This Total Quality Management approach will enable the management and prevention of errors in the future.

  18. A computer procedure to analyze seismic data to estimate outcome probabilities in oil exploration, with an initial application in the tabasco region of southeastern Mexico

    NASA Astrophysics Data System (ADS)

    Berlanga, Juan M.; Harbaugh, John W.

    The Tabasco region contains a number of major oilfields, including some of the emerging "giant" oil fields which have received extensive publicity. Fields in the Tabasco region are associated with large geologic structures which are detected readily by seismic surveys. The structures seem to be associated with deep-seated movement of salt, and they are complexly faulted. Some structures have as much as 1000 milliseconds of relief on seismic lines. That part of the Tabasco region that has been studied was surveyed with a close-spaced rectilinear network of seismic lines. A study interpreting the structure of the area initially used only a fraction of the total seismic data available. The purpose was to compare "predictions" of reflection time based on widely spaced seismic lines with "results" obtained along more closely spaced lines. This process of comparison simulates the sequence of events in which a reconnaissance network of seismic lines is used to guide a succession of progressively more closely spaced lines. A square gridwork was established with lines spaced at 10 km intervals and, using machine contour maps, the results were compared with those obtained with seismic grids employing spacings of 5 and 2.5 km, respectively. The comparisons of predictions based on widely spaced lines with observations along closely spaced lines provide information by which an error function can be established. The error at any point can be defined as the difference between the predicted value for that point and the subsequently observed value at that point. Residuals obtained by fitting third-degree polynomial trend surfaces were used for comparison. The root mean square of the error (expressed in seconds or milliseconds of reflection time) was found to increase more or less linearly with distance from the nearest seismic point. Oil-occurrence probabilities were established on the basis of frequency distributions of trend-surface residuals obtained by fitting and subtracting polynomial trend surfaces from the machine-contoured reflection time maps. We found that there is a strong preferential relationship between the occurrence of petroleum (i.e. its presence versus absence) and particular ranges of trend-surface residual values. An estimate of the probability of oil occurring at any particular geographic point can be calculated on the basis of the estimated trend-surface residual value. This estimate, however, must be tempered by the probable error in the estimate of the residual value provided by the error function. The result, we believe, is a simple but effective procedure for estimating exploration outcome probabilities where seismic data provide the principal form of information in advance of drilling. Implicit in this approach is the comparison between a maturely explored area, for which both seismic and production data are available, and which serves as a statistical "training area", with the "target" area which is undergoing exploration and for which probability forecasts are to be calculated.
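
    A compact sketch of the trend-surface residual step described above: fit a third-degree polynomial surface to reflection-time picks by least squares and keep the residuals. All coordinates, times, and noise levels below are synthetic, for illustration only.

        import numpy as np

        def cubic_trend_surface_residuals(x, y, t):
            # Fit t(x, y) with a full third-degree polynomial surface and return residuals.
            terms = [np.ones_like(x), x, y, x * y, x**2, y**2,
                     x**2 * y, x * y**2, x**3, y**3]
            A = np.column_stack(terms)
            coef, *_ = np.linalg.lstsq(A, t, rcond=None)
            return t - A @ coef

        rng = np.random.default_rng(0)
        x, y = rng.uniform(0, 10, 200), rng.uniform(0, 10, 200)    # synthetic line locations (km)
        t = 1.5 + 0.02 * x - 0.01 * y + 0.05 * np.sin(x) + rng.normal(0, 0.01, 200)  # reflection time (s)
        resid = cubic_trend_surface_residuals(x, y, t)
        print("RMS residual (s):", np.sqrt(np.mean(resid**2)))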

  19. Nurse perceptions of organizational culture and its association with the culture of error reporting: a case of public sector hospitals in Pakistan.

    PubMed

    Jafree, Sara Rizvi; Zakar, Rubeena; Zakar, Muhammad Zakria; Fischer, Florian

    2016-01-05

    There is an absence of formal error tracking systems in public sector hospitals of Pakistan and also a lack of literature concerning error reporting culture in the health care sector. Nurse practitioners have front-line knowledge and rich exposure about both the organizational culture and error sharing in hospital settings. The aim of this paper was to investigate the association between organizational culture and the culture of error reporting, as perceived by nurses. The authors used the "Practice Environment Scale-Nurse Work Index Revised" to measure the six dimensions of organizational culture. Seven questions were used from the "Survey to Solicit Information about the Culture of Reporting" to measure error reporting culture in the region. Overall, 309 nurses participated in the survey, including female nurses from all designations such as supervisors, instructors, ward-heads, staff nurses and student nurses. We used SPSS 17.0 to perform a factor analysis. Furthermore, descriptive statistics, mean scores and multivariable logistic regression were used for the analysis. Three areas were ranked unfavorably by nurse respondents, including: (i) the error reporting culture, (ii) staffing and resource adequacy, and (iii) nurse foundations for quality of care. Multivariable regression results revealed that all six categories of organizational culture, including: (1) nurse manager ability, leadership and support, (2) nurse participation in hospital affairs, (3) nurse participation in governance, (4) nurse foundations of quality care, (5) nurse-coworkers relations, and (6) nurse staffing and resource adequacy, were positively associated with higher odds of error reporting culture. In addition, it was found that married nurses and nurses on permanent contract were more likely to report errors at the workplace. Public healthcare services of Pakistan can be improved through the promotion of an error reporting culture, reducing staffing and resource shortages and the development of nursing care plans.

  20. Refractive error and presbyopia among adults in Fiji.

    PubMed

    Brian, Garry; Pearce, Matthew G; Ramke, Jacqueline

    2011-04-01

    To characterize refractive error, presbyopia and their correction among adults aged ≥ 40 years in Fiji, and contribute to a regional overview of these conditions. A population-based cross-sectional survey using multistage cluster random sampling. Presenting distance and near vision were measured and dilated slitlamp examination performed. The survey achieved 73.0% participation (n=1381). Presenting binocular distance vision ≥ 6/18 was achieved by 1223 participants. Another 79 had vision impaired by refractive error. Three of these were blind. At threshold 6/18, 204 participants had refractive error. Among these, 125 had spectacle-corrected presenting vision ≥ 6/18 ("met refractive error need"); 79 presented wearing no (n=74) or under-correcting (n=5) distance spectacles ("unmet refractive error need"). Presenting binocular near vision ≥ N8 was achieved by 833 participants. At threshold N8, 811 participants had presbyopia. Among these, 336 attained N8 with presenting near spectacles ("met presbyopia need"); 475 presented with no (n=402) or under-correcting (n=73) near spectacles ("unmet presbyopia need"). Rural residence was predictive of unmet refractive error (p=0.040) and presbyopia (p=0.016) need. Gender and household income source were not. Ethnicity-gender-age-domicile-adjusted to the Fiji population aged ≥ 40 years, "met refractive error need" was 10.3% (95% confidence interval [CI] 8.7-11.9%), "unmet refractive error need" was 4.8% (95%CI 3.6-5.9%), "refractive error correction coverage" was 68.3% (95%CI 54.4-82.2%),"met presbyopia need" was 24.6% (95%CI 22.4-26.9%), "unmet presbyopia need" was 33.8% (95%CI 31.3-36.3%), and "presbyopia correction coverage" was 42.2% (95%CI 37.6-46.8%). Fiji refraction and dispensing services should encourage uptake by rural dwellers and promote presbyopia correction. Lack of comparable data from neighbouring countries prevents a regional overview.
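
    The coverage figures above follow the usual definition, met need divided by total (met plus unmet) need; a one-line sketch using the adjusted percentages reported in the abstract:

        def correction_coverage(met_pct, unmet_pct):
            # Correction coverage = met need / (met need + unmet need), as a percentage.
            return 100.0 * met_pct / (met_pct + unmet_pct)

        print(correction_coverage(10.3, 4.8))    # refractive error correction coverage, ~68%
        print(correction_coverage(24.6, 33.8))   # presbyopia correction coverage, ~42%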

  1. Patient safety education at Japanese medical schools: results of a nationwide survey.

    PubMed

    Maeda, Shoichi; Kamishiraki, Etsuko; Starkey, Jay

    2012-05-10

    Patient safety education, including error prevention strategies and management of adverse events, has become a topic of worldwide concern. The importance of patient safety has also been recognized in Japan following two serious medical accidents in 1999. Furthermore, curriculum guideline revisions in 2008 by the Ministry of Education include patient safety as part of the core medical curriculum. However, little is known about patient safety education in Japanese medical schools, partly because a comprehensive study has not yet been conducted in this field. Therefore, we conducted a nationwide survey in order to clarify the current status of patient safety education at medical schools in Japan. The response rate was 60.0% (n = 48/80). Ninety-eight percent of respondents (n = 47/48) reported integration of patient safety education into their curricula. Thirty-nine percent reported devoting less than five hours to the topic. All schools that teach patient safety reported use of lecture-based teaching methods while few used alternative methods, such as role-playing or in-hospital training. Topics related to medical error theory and legal ramifications of error are widely taught while practical topics related to error analysis such as root cause analysis are less often covered. Based on responses to our survey, most Japanese medical schools have incorporated the topic of patient safety into their curricula. However, the number of hours devoted to patient safety education falls short of a sufficient level, with forty percent of medical schools devoting five hours or less to it. In addition, most medical schools employ only lecture-based learning, lacking diversity in teaching methods. Although most medical schools cover basic error theory, error analysis is taught at fewer schools. We still need to make improvements to our medical safety curricula. We believe that this study has implications for the rest of the world as a model of what is possible and a sounding board for what topics might be important.

  2. Non-linear matter power spectrum covariance matrix errors and cosmological parameter uncertainties

    NASA Astrophysics Data System (ADS)

    Blot, L.; Corasaniti, P. S.; Amendola, L.; Kitching, T. D.

    2016-06-01

    The covariance of the matter power spectrum is a key element of the analysis of galaxy clustering data. Independent realizations of observational measurements can be used to sample the covariance; nevertheless, statistical sampling errors will propagate into the cosmological parameter inference, potentially limiting the capabilities of the upcoming generation of galaxy surveys. The impact of these errors as a function of the number of realizations has been previously evaluated for Gaussian distributed data. However, non-linearities in the late-time clustering of matter cause departures from Gaussian statistics. Here, we address the impact of non-Gaussian errors on the sample covariance and precision matrix errors using a large ensemble of N-body simulations. In the range of modes where finite volume effects are negligible (0.1 ≲ k [h Mpc^-1] ≲ 1.2), we find deviations of the variance of the sample covariance with respect to Gaussian predictions above ˜10 per cent at k > 0.3 h Mpc^-1. Over the entire range these reduce to about ˜5 per cent for the precision matrix. Finally, we perform a Fisher analysis to estimate the effect of covariance errors on the cosmological parameter constraints. In particular, assuming Euclid-like survey characteristics we find that a number of independent realizations larger than 5000 is necessary to reduce the contribution of sampling errors to the cosmological parameter uncertainties at the subpercent level. We also show that restricting the analysis to large scales k ≲ 0.2 h Mpc^-1 results in a considerable loss in constraining power, while using the linear covariance to include smaller scales leads to an underestimation of the errors on the cosmological parameters.

  3. Impact of Resident Duty Hour Limits on Safety in the ICU: A National Survey of Pediatric and Neonatal Intensivists

    PubMed Central

    Typpo, Katri V.; Tcharmtchi, M. Hossein; Thomas, Eric J.; Kelly, P. Adam; Castillo, Leticia D.; Singh, Hardeep

    2011-01-01

    Objective: Resident duty-hour regulations potentially shift workload from resident to attending physicians. We sought to understand how current or future regulatory changes might impact safety in academic pediatric and neonatal intensive care units (ICUs). Design: Web-based survey. Setting: US academic pediatric and neonatal ICUs. Subjects: Attending pediatric and neonatal intensivists. Interventions: We evaluated perceptions on four ICU safety-related risk measures potentially affected by current duty-hour regulations: 1) attending physician and resident fatigue, 2) attending physician workload, 3) errors (self-reported rates by attending physicians or perceived resident error rates), and 4) safety culture. We also evaluated perceptions of how these risks would change with further duty hour restrictions. Measurements and Main Results: We administered our survey between February and April 2010 to 688 eligible physicians, of which 360 (52.3%) responded. Most believed that resident error rates were unchanged or worse (91.9%) and safety culture was unchanged or worse (84.4%) with current duty-hour regulations. Of respondents, 61.9% believed their own work-hours providing direct patient care increased and 55.8% believed they were more fatigued while providing direct patient care. Most (85.3%) perceived no increase in their own error rates currently, but in the scenario of further reduction in resident duty-hours, over half (53.3%) believed that safety culture would worsen and a significant proportion (40.3%) believed that their own error rates would increase. Conclusions: Pediatric intensivists do not perceive improved patient safety from current resident duty hour restrictions. Policies to further restrict resident duty hours should consider unintended consequences of worsening certain aspects of ICU safety. PMID:22614570

  4. Impact of resident duty hour limits on safety in the intensive care unit: a national survey of pediatric and neonatal intensivists.

    PubMed

    Typpo, Katri V; Tcharmtchi, M Hossein; Thomas, Eric J; Kelly, P Adam; Castillo, Leticia D; Singh, Hardeep

    2012-09-01

    Resident duty-hour regulations potentially shift the workload from resident to attending physicians. We sought to understand how current or future regulatory changes might impact safety in academic pediatric and neonatal intensive care units. Web-based survey. U.S. academic pediatric and neonatal intensive care units. Attending pediatric and neonatal intensivists. We evaluated perceptions on four intensive care unit safety-related risk measures potentially affected by current duty-hour regulations: 1) attending physician and resident fatigue; 2) attending physician workload; 3) errors (self-reported rates by attending physicians or perceived resident error rates); and 4) safety culture. We also evaluated perceptions of how these risks would change with further duty-hour restrictions. We administered our survey between February and April 2010 to 688 eligible physicians, of whom 360 (52.3%) responded. Most believed that resident error rates were unchanged or worse (91.9%) and safety culture was unchanged or worse (84.4%) with current duty-hour regulations. Of respondents, 61.9% believed their own work-hours providing direct patient care increased and 55.8% believed they were more fatigued while providing direct patient care. Most (85.3%) perceived no increase in their own error rates currently, but in the scenario of further reduction in resident duty-hours, over half (53.3%) believed that safety culture would worsen and a significant proportion (40.3%) believed that their own error rates would increase. Pediatric intensivists do not perceive improved patient safety from current resident duty-hour restrictions. Policies to further restrict resident duty-hours should consider unintended consequences of worsening certain aspects of intensive care unit safety.

  5. Improved compliance with the World Health Organization Surgical Safety Checklist is associated with reduced surgical specimen labelling errors.

    PubMed

    Martis, Walston R; Hannam, Jacqueline A; Lee, Tracey; Merry, Alan F; Mitchell, Simon J

    2016-09-09

    A new approach to administering the surgical safety checklist (SSC) at our institution using wall-mounted charts for each SSC domain coupled with migrated leadership among operating room (OR) sub-teams, led to improved compliance with the Sign Out domain. Since surgical specimens are reviewed at Sign Out, we aimed to quantify any related change in surgical specimen labelling errors. Prospectively maintained error logs for surgical specimens sent to pathology were examined for the six months before and after introduction of the new SSC administration paradigm. We recorded errors made in the labelling or completion of the specimen pot and on the specimen laboratory request form. Total error rates were calculated as the number of errors divided by the total number of specimens. Rates from the two periods were compared using a chi-square test. There were 19 errors in 4,760 specimens (rate 3.99/1,000) and eight errors in 5,065 specimens (rate 1.58/1,000) before and after the change in SSC administration paradigm (P=0.0225). Improved compliance with administering the Sign Out domain of the SSC can reduce surgical specimen errors. This finding provides further evidence that OR teams should optimise compliance with the SSC.
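
    The before/after comparison above is a standard two-proportion chi-square test and can be reproduced from the reported counts; a sketch assuming SciPy is available (without the Yates continuity correction, which matches the reported P value more closely):

        from scipy.stats import chi2_contingency

        # Specimens with and without labelling errors, before vs after the new paradigm.
        table = [[19, 4760 - 19],
                 [8, 5065 - 8]]
        chi2, p, dof, expected = chi2_contingency(table, correction=False)
        print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # p should come out near the reported 0.0225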

  6. A survey of mindset theories of intelligence and medical error self-reporting among pediatric housestaff and faculty.

    PubMed

    Jegathesan, Mithila; Vitberg, Yaffa M; Pusic, Martin V

    2016-02-11

    Intelligence theory research has illustrated that people hold either "fixed" (intelligence is immutable) or "growth" (intelligence can be improved) mindsets and that these views may affect how people learn throughout their lifetime. Little is known about the mindsets of physicians, and how mindset may affect their lifetime learning and integration of feedback. Our objective was to determine if pediatric physicians are of the "fixed" or "growth" mindset and whether individual mindset affects perception of medical error reporting.  We sent an anonymous electronic survey to pediatric residents and attending pediatricians at a tertiary care pediatric hospital. Respondents completed the "Theories of Intelligence Inventory" which classifies individuals on a 6-point scale ranging from 1 (Fixed Mindset) to 6 (Growth Mindset). Subsequent questions collected data on respondents' recall of medical errors by self or others. We received 176/349 responses (50 %). Participants were equally distributed between mindsets with 84 (49 %) classified as "fixed" and 86 (51 %) as "growth". Residents, fellows and attendings did not differ in terms of mindset. Mindset did not correlate with the small number of reported medical errors. There is no dominant theory of intelligence (mindset) amongst pediatric physicians. The distribution is similar to that seen in the general population. Mindset did not correlate with error reports.

  7. Total absorption cross sections of several gases of aeronomic interest at 584 A.

    NASA Technical Reports Server (NTRS)

    Starr, W. L.; Loewenstein, M.

    1972-01-01

    Total photoabsorption cross sections have been measured at 584.3 A for N2, O2, Ar, CO2, CO, NO, N2O, NH3, CH4, H2, and H2S. A monochromator was used to isolate the He I 584 line produced in a helium resonance lamp, and thin aluminum filters were used as absorption cell windows, thereby eliminating possible errors associated with the use of undispersed radiation or windowless cells. Sources of error are examined, and limits of uncertainty are given. Previous relevant cross-sectional measurements and possible error sources are reviewed. Wall adsorption as a source of error in cross-sectional measurements has not previously been considered and is discussed briefly.

  8. Creel survey sampling designs for estimating effort in short-duration Chinook salmon fisheries

    USGS Publications Warehouse

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2013-01-01

    Chinook Salmon Oncorhynchus tshawytscha sport fisheries in the Columbia River basin are commonly monitored using roving creel survey designs and require precise, unbiased catch estimates. The objective of this study was to examine the relative bias and precision of total catch estimates using various sampling designs to estimate angling effort under the assumption that mean catch rate was known. We obtained information on angling populations based on direct visual observations of portions of Chinook Salmon fisheries in three Idaho river systems over a 23-d period. Based on the angling population, Monte Carlo simulations were used to evaluate the properties of effort and catch estimates for each sampling design. All sampling designs evaluated were relatively unbiased. Systematic random sampling (SYS) resulted in the most precise estimates. The SYS and simple random sampling designs had mean square error (MSE) estimates that were generally half of those observed with cluster sampling designs. The SYS design was more efficient (i.e., higher accuracy per unit cost) than a two-cluster design. Increasing the number of clusters available for sampling within a day decreased the MSE of estimates of daily angling effort, but the MSE of total catch estimates was variable depending on the fishery. The results of our simulations provide guidelines on the relative influence of sample sizes and sampling designs on parameters of interest in short-duration Chinook Salmon fisheries.
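
    A toy version of the Monte Carlo comparison described above, assuming the "population" of hourly angler effort is known and comparing systematic versus simple random sampling of count times; the hourly counts, sample sizes, and expansion estimator are illustrative only, not those of the study.

        import numpy as np

        rng = np.random.default_rng(1)

        def simulate(design, effort, n_samples=6, n_sims=5000):
            # Bias and MSE of expanded daily-effort estimates under a given sampling design.
            n = len(effort)
            true_total = effort.sum()
            estimates = np.empty(n_sims)
            for s in range(n_sims):
                if design == "systematic":
                    start = rng.integers(0, n // n_samples)
                    idx = np.arange(start, n, n // n_samples)[:n_samples]
                else:  # simple random sampling
                    idx = rng.choice(n, n_samples, replace=False)
                estimates[s] = effort[idx].mean() * n   # expand sample mean to a daily total
            return estimates.mean() - true_total, np.mean((estimates - true_total) ** 2)

        hourly_effort = np.array([0, 1, 3, 8, 14, 18, 20, 17, 12, 9, 4, 1], float)  # angler-hours
        for design in ("systematic", "simple"):
            print(design, simulate(design, hourly_effort))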

  9. Nurses' satisfaction with use of a personal digital assistants with a mobile nursing information system in China.

    PubMed

    Shen, Li-Qiong; Zang, Xiao-Ying; Cong, Ji-Yan

    2018-04-01

    Personal digital assistants, technology with various functions, have been applied in international clinical practice. Great benefits in reducing medical errors and enhancing the efficiency of clinical work have been achieved, but little research has investigated nurses' satisfaction with the use of personal digital assistants. To investigate nurses' satisfaction with use of personal digital assistants, and to explore the predictors of this. This is a cross-sectional descriptive study. We conducted a cross-sectional survey targeting nurses who used personal digital assistants in a comprehensive tertiary hospital in Beijing. A total of 383 nurses were recruited in this survey in 2015. The total score of nurses' satisfaction with use of personal digital assistants was 238.91 (SD 39.25). Nurses were less satisfied with the function of documentation, compared with the function of administering medical orders. The time length of using personal digital assistants, academic degree, and different departments predicted nurses' satisfaction towards personal digital assistant use (all P < 0.05). Nurses were satisfied with the accuracy of administering medical orders and the safety of recording data. The stability of the wireless network and efficiency related to nursing work were less promising. To some extent, nurses with higher education and longer working time with personal digital assistants were more satisfied with them. © 2018 John Wiley & Sons Australia, Ltd.

  10. Human- Versus System-Level Factors and Their Effect on Electronic Work List Variation: Challenging Radiology's Fundamental Attribution Error.

    PubMed

    Davenport, Matthew S; Khalatbari, Shokoufeh; Platt, Joel F

    2015-09-01

    The aim of this study was to analyze sources of variation influencing the unread volume on an electronic abdominopelvic CT work list and to compare those results with blinded radiologist perception. The requirement for institutional review board approval was waived for this HIPAA-compliant quality improvement effort. Data pertaining to an electronic abdominopelvic CT work list were analyzed retrospectively from July 1, 2013, to June 30, 2014, and modeled with respect to the unread case total at 6 pm (Monday through Friday, excluding holidays). Eighteen system-level factors outside individual control (eg, number of workers, workload) and 7 human-level factors within individual control (eg, individual productivity) were studied. Attending radiologist perception was assessed with a blinded anonymous survey (n = 12 of 15 surveys completed). The mean daily unread total was 24 (range, 3-72). The upper control limit (48 CT studies [3 SDs above the mean]) was exceeded 10 times. Multivariate analysis revealed that the rate of unread CT studies was affected principally by system-level factors, including the number of experienced trainees on service (postgraduate year 5 residents [odds ratio, 0.83; 95% confidence interval, 0.74-0.92; P = .0008] and fellows [odds ratio, 0.84; 95% confidence interval, 0.74-0.95; P = .005]) and the daily workload (P = .02 to P < .0001). Individual faculty productivity had a weak effect (Spearman ρ = 0.13, P = .03; adequacy: 3% of variance explained). The majority (67%) of radiologists (8 of 12) completing the survey believed that variation in faculty effort was the most important influence on the daily unread total. System-level factors best predict the variation in unread CT examinations, but blinded faculty radiologists believe that it relates most strongly to variable individual effort. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  11. LoCuSS: THE MASS DENSITY PROFILE OF MASSIVE GALAXY CLUSTERS AT z = 0.2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okabe, Nobuhiro; Umetsu, Keiichi; Smith, Graham P.

    We present a stacked weak-lensing analysis of an approximately mass-selected sample of 50 galaxy clusters at 0.15 < z < 0.3, based on observations with Suprime-Cam on the Subaru Telescope. We develop a new method for selecting lensed background galaxies from which we estimate that our sample of red background galaxies suffers just 1% contamination. We detect the stacked tangential shear signal from the full sample of 50 clusters, based on this red sample of background galaxies, at a total signal-to-noise ratio of 32.7. The Navarro-Frenk-White model is an excellent fit to the data, yielding sub-10% statistical precision on mass and concentration: M_vir = 7.19^{+0.53}_{-0.50} × 10^14 h^-1 M_sun, c_vir = 5.41^{+0.49}_{-0.45} (c_200 = 4.22^{+0.40}_{-0.36}). Tests of a range of possible systematic errors, including shear calibration and stacking-related issues, indicate that they are subdominant to the statistical errors. The concentration parameter obtained from stacking our approximately mass-selected cluster sample is broadly in line with theoretical predictions. Moreover, the uncertainty on our measurement is comparable with the differences between the different predictions in the literature. Overall, our results highlight the potential for stacked weak-lensing methods to probe the mean mass density profile of cluster-scale dark matter halos with upcoming surveys, including Hyper Suprime-Cam, the Dark Energy Survey, and KiDS.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Yong-Seon; Institute of Cosmology and Gravitation, University of Portsmouth, Dennis Sciama Building, Portsmouth, PO1 3FX; Zhao Gongbo

    We explore the complementarity of weak lensing and galaxy peculiar velocity measurements to better constrain modifications to General Relativity. We find no evidence for deviations from General Relativity on cosmological scales from a combination of peculiar velocity measurements (for Luminous Red Galaxies in the Sloan Digital Sky Survey) with weak lensing measurements (from the Canada-France-Hawaii Telescope Legacy Survey). We provide a Fisher error forecast for a Euclid-like space-based survey including both lensing and peculiar velocity measurements and show that the expected constraints on modified gravity will be at least an order of magnitude better than with present data, i.e. we will obtain ≈5% errors on the modified gravity parametrization described here. We also present a model-independent method for constraining modified gravity parameters using tomographic peculiar velocity information, and apply this methodology to the present data set.

  13. An analysis of input errors in precipitation-runoff models using regression with errors in the independent variables

    USGS Publications Warehouse

    Troutman, Brent M.

    1982-01-01

    Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas illustrates the problems of model input errors.
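    The inflation and bias described above can be made concrete with the classical attenuation identity for simple linear regression under additive, independent input error. This is a textbook result given here as an illustration of the mechanism, not an equation taken from the cited report.

```latex
% Model: y = a + b x^* + e, with the observed input x = x^* + u,
% where u is independent of x^* and e (classical measurement error).
% The least-squares slope fitted to the error-prone input is attenuated:
\operatorname{plim}\,\hat{b}_{\mathrm{OLS}}
  = b\,\frac{\sigma_{x^*}^2}{\sigma_{x^*}^2 + \sigma_u^2} \;<\; b
% and the prediction-error variance is correspondingly inflated.
```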

  14. Prevalence and pattern of prescription errors in a Nigerian kidney hospital.

    PubMed

    Babatunde, Kehinde M; Akinbodewa, Akinwumi A; Akinboye, Ayodele O; Adejumo, Ademola O

    2016-12-01

    To determine (i) the prevalence and pattern of prescription errors in our Centre and (ii) appraise pharmacists' intervention and correction of identified prescription errors. A descriptive, single-blinded cross-sectional study. Kidney Care Centre is a public specialist hospital; the monthly patient load averages 60 general outpatient cases and 17.4 in-patients. A total of 31 medical doctors (comprising 2 consultant nephrologists, 15 medical officers, and 14 house officers), 40 nurses and 24 ward assistants participated in the study. One pharmacist runs the daily call schedule. Prescribers were blinded to the study. Prescriptions containing only galenicals were excluded. An error detection mechanism was set up to identify and correct prescription errors. Life-threatening prescriptions were discussed with the Quality Assurance Team of the Centre, which conveyed such errors to the prescriber without revealing the ongoing study. The main outcome measures were the prevalence of prescription errors, the pattern of prescription errors, and pharmacists' intervention. A total of 2,660 prescriptions (75.0%) were found to have one form of error or the other: illegitimacy 1,388 (52.18%), omission 1,221 (45.90%), and wrong dose 51 (1.92%); no error of style was detected. Life-threatening errors were rare (1.1-2.2%). Errors were found more commonly among junior doctors and non-medical doctors. Only 56 (1.6%) of the errors were detected and corrected during the process of dispensing. Prescription errors related to illegitimacy and omissions were highly prevalent. There is a need to improve the patient-to-healthcare-giver ratio, and a medication quality assurance unit is needed in our hospitals. No financial support was received by any of the authors for this study.

  15. Trajectory Design to Mitigate Risk on the Transiting Exoplanet Survey Satellite (TESS) Mission

    NASA Technical Reports Server (NTRS)

    Dichmann, Donald

    2016-01-01

    The Transiting Exoplanet Survey Satellite (TESS) will employ a highly eccentric Earth orbit, in 2:1 lunar resonance, reached with a lunar flyby preceded by 3.5 phasing loops. The TESS mission has limited propellant and several orbit constraints. Based on analysis and simulation, we have designed the phasing loops to reduce delta-V and to mitigate risk due to maneuver execution errors. We have automated the trajectory design process and use distributed processing to generate and to optimize nominal trajectories, check constraint satisfaction, and finally model the effects of maneuver errors to identify trajectories that best meet the mission requirements.

  16. Combining inferences from models of capture efficiency, detectability, and suitable habitat to classify landscapes for conservation of threatened bull trout

    USGS Publications Warehouse

    Peterson, J.; Dunham, J.B.

    2003-01-01

    Effective conservation efforts for at-risk species require knowledge of the locations of existing populations. Species presence can be estimated directly by conducting field-sampling surveys or alternatively by developing predictive models. Direct surveys can be expensive and inefficient, particularly for rare and difficult-to-sample species, and models of species presence may produce biased predictions. We present a Bayesian approach that combines sampling and model-based inferences for estimating species presence. The accuracy and cost-effectiveness of this approach were compared to those of sampling surveys and predictive models for estimating the presence of the threatened bull trout ( Salvelinus confluentus ) via simulation with existing models and empirical sampling data. Simulations indicated that a sampling-only approach would be the most effective and would result in the lowest presence and absence misclassification error rates for three thresholds of detection probability. When sampling effort was considered, however, the combined approach resulted in the lowest error rates per unit of sampling effort. Hence, lower probability-of-detection thresholds can be specified with the combined approach, resulting in lower misclassification error rates and improved cost-effectiveness.
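    A minimal sketch of the kind of combination described above: a model-based presence probability acts as the prior, and repeated field sampling with imperfect detection updates it. The update rule is the standard occupancy/detection identity rather than the authors' exact formulation, and the prior, detection probability, and number of passes below are hypothetical.

```python
def presence_posterior(prior: float, p_detect: float, n_samples: int, detected: bool) -> float:
    """Posterior probability that the species is present at a site.

    prior     -- model-based probability of presence (e.g., from a habitat model)
    p_detect  -- per-sample probability of detecting the species when it is present
    n_samples -- number of independent sampling passes at the site
    detected  -- True if the species was detected at least once
    """
    if detected:
        return 1.0  # a detection (assumed error-free) confirms presence
    # No detections: likelihood of that outcome is (1 - p)^n if present, 1 if absent
    p_missed = (1.0 - p_detect) ** n_samples
    return prior * p_missed / (prior * p_missed + (1.0 - prior))

# Hypothetical example: habitat model gives 60% chance of presence, each pass
# detects bull trout with probability 0.4, three passes, no detections.
print(round(presence_posterior(0.60, 0.40, 3, detected=False), 3))  # ~0.245
```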

  17. Near field communications technology and the potential to reduce medication errors through multidisciplinary application

    PubMed Central

    Pegler, Joe; Lehane, Elaine; Livingstone, Vicki; McCarthy, Nora; Sahm, Laura J.; Tabirca, Sabin; O’Driscoll, Aoife; Corrigan, Mark

    2016-01-01

    Background: Patient safety requires optimal management of medications. Electronic systems are encouraged to reduce medication errors. Near field communications (NFC) is an emerging technology that may be used to develop novel medication management systems. Methods: An NFC-based system was designed to facilitate prescribing, administration and review of medications commonly used on surgical wards. Final year medical, nursing, and pharmacy students were recruited to test the electronic system in a cross-over observational setting on a simulated ward. Medication errors were compared against errors recorded using a paper-based system. Results: A significant difference in the commission of medication errors was seen when NFC and paper-based medication systems were compared. Paper use resulted in a mean of 4.09 errors per prescribing round, while NFC prescribing resulted in a mean of 0.22 errors per simulated prescribing round (P=0.000). Likewise, medication administration errors were reduced from a mean of 2.30 per drug round with the paper system to a mean of 0.80 errors per round using NFC (P<0.015). A mean satisfaction score of 2.30 was reported by users (rated on a seven-point scale with 1 denoting total satisfaction with system use and 7 denoting total dissatisfaction). Conclusions: An NFC-based medication system may be used to effectively reduce medication errors in a simulated ward environment. PMID:28293602

  18. Near field communications technology and the potential to reduce medication errors through multidisciplinary application.

    PubMed

    O'Connell, Emer; Pegler, Joe; Lehane, Elaine; Livingstone, Vicki; McCarthy, Nora; Sahm, Laura J; Tabirca, Sabin; O'Driscoll, Aoife; Corrigan, Mark

    2016-01-01

    Patient safety requires optimal management of medications. Electronic systems are encouraged to reduce medication errors. Near field communications (NFC) is an emerging technology that may be used to develop novel medication management systems. An NFC-based system was designed to facilitate prescribing, administration and review of medications commonly used on surgical wards. Final year medical, nursing, and pharmacy students were recruited to test the electronic system in a cross-over observational setting on a simulated ward. Medication errors were compared against errors recorded using a paper-based system. A significant difference in the commission of medication errors was seen when NFC and paper-based medication systems were compared. Paper use resulted in a mean of 4.09 errors per prescribing round, while NFC prescribing resulted in a mean of 0.22 errors per simulated prescribing round (P=0.000). Likewise, medication administration errors were reduced from a mean of 2.30 per drug round with the paper system to a mean of 0.80 errors per round using NFC (P<0.015). A mean satisfaction score of 2.30 was reported by users (rated on a seven-point scale with 1 denoting total satisfaction with system use and 7 denoting total dissatisfaction). An NFC-based medication system may be used to effectively reduce medication errors in a simulated ward environment.

  19. Sensitivity of planetary cruise navigation to earth orientation calibration errors

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Folkner, W. M.

    1995-01-01

    A detailed analysis was conducted to determine the sensitivity of spacecraft navigation errors to the accuracy and timeliness of Earth orientation calibrations. Analyses based on simulated X-band (8.4-GHz) Doppler and ranging measurements acquired during the interplanetary cruise segment of the Mars Pathfinder heliocentric trajectory were completed for the nominal trajectory design and for an alternative trajectory with a longer transit time. Several error models were developed to characterize the effect of Earth orientation on navigational accuracy based on current and anticipated Deep Space Network calibration strategies. The navigational sensitivity of Mars Pathfinder to calibration errors in Earth orientation was computed for each candidate calibration strategy with the Earth orientation parameters included as estimated parameters in the navigation solution. In these cases, the calibration errors contributed 23 to 58% of the total navigation error budget, depending on the calibration strategy being assessed. Navigation sensitivity calculations were also performed for cases in which Earth orientation calibration errors were not adjusted in the navigation solution. In these cases, Earth orientation calibration errors contributed from 26 to as much as 227% of the total navigation error budget. The final analysis suggests that, not only is the method used to calibrate Earth orientation vitally important for precision navigation of Mars Pathfinder, but perhaps equally important is the method for inclusion of the calibration errors in the navigation solutions.

  20. A simulation study to quantify the impacts of exposure ...

    EPA Pesticide Factsheets

    Background: Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. Methods: ZIP-code level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Results: Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3–85% for population error, and 31–85% for total error. When CO, NOx or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copoll
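    The attenuation mechanism summarized above is easy to reproduce in miniature. The sketch below is a single-pollutant toy version (no copollutant) of such a simulation; the error variance, baseline visit rate, series length, and random seed are all assumptions made for illustration, not values from the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_days = 1460                      # roughly four years of daily data, as in the abstract
true_rr_per_iqr = 1.05             # main-pollutant risk ratio assumed in the simulation

# Simulate a "true" exposure series and an error-prone observed version
x_true = rng.normal(0.0, 1.0, n_days)
x_obs = x_true + rng.normal(0.0, 0.75, n_days)   # additive classical error (hypothetical size)

# Daily counts generated from the true exposure (Poisson time series, no copollutant here)
iqr = np.subtract(*np.percentile(x_true, [75, 25]))
beta = np.log(true_rr_per_iqr) / iqr
counts = rng.poisson(np.exp(np.log(30.0) + beta * x_true))   # baseline ~30 visits/day (hypothetical)

def fitted_rr(exposure):
    """Fit a Poisson regression and return the risk ratio per interquartile range."""
    X = sm.add_constant(exposure)
    fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
    return float(np.exp(fit.params[1] * iqr))

print("RR per IQR, true exposure:    ", round(fitted_rr(x_true), 3))
print("RR per IQR, error-prone input:", round(fitted_rr(x_obs), 3))   # attenuated toward 1
```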

  1. Moving beyond the total sea ice extent in gauging model biases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ivanova, Detelina P.; Gleckler, Peter J.; Taylor, Karl E.

    Here, reproducing characteristics of observed sea ice extent remains an important climate modeling challenge. This study describes several approaches to improve how model biases in total sea ice distribution are quantified, and applies them to historically forced simulations contributed to phase 5 of the Coupled Model Intercomparison Project (CMIP5). The quantity of hemispheric total sea ice area, or some measure of its equatorward extent, is often used to evaluate model performance. A new approach is introduced that investigates additional details about the structure of model errors, with an aim to reduce the potential impact of compensating errors when gauging differences between simulated and observed sea ice. Using multiple observational datasets, several new methods are applied to evaluate the climatological spatial distribution and the annual cycle of sea ice cover in 41 CMIP5 models. It is shown that in some models, error compensation can be substantial, for example resulting from too much sea ice in one region and too little in another. Error compensation tends to be larger in models that agree more closely with the observed total sea ice area, which may result from model tuning. The results herein suggest that consideration of only the total hemispheric sea ice area or extent can be misleading when quantitatively comparing how well models agree with observations. Further work is needed to fully develop robust methods to holistically evaluate the ability of models to capture the finescale structure of sea ice characteristics; however, the “sector scale” metric used here aids in reducing the impact of compensating errors in hemispheric integrals.

  2. Moving beyond the total sea ice extent in gauging model biases

    DOE PAGES

    Ivanova, Detelina P.; Gleckler, Peter J.; Taylor, Karl E.; ...

    2016-11-29

    Here, reproducing characteristics of observed sea ice extent remains an important climate modeling challenge. This study describes several approaches to improve how model biases in total sea ice distribution are quantified, and applies them to historically forced simulations contributed to phase 5 of the Coupled Model Intercomparison Project (CMIP5). The quantity of hemispheric total sea ice area, or some measure of its equatorward extent, is often used to evaluate model performance. A new approach is introduced that investigates additional details about the structure of model errors, with an aim to reduce the potential impact of compensating errors when gauging differences between simulated and observed sea ice. Using multiple observational datasets, several new methods are applied to evaluate the climatological spatial distribution and the annual cycle of sea ice cover in 41 CMIP5 models. It is shown that in some models, error compensation can be substantial, for example resulting from too much sea ice in one region and too little in another. Error compensation tends to be larger in models that agree more closely with the observed total sea ice area, which may result from model tuning. The results herein suggest that consideration of only the total hemispheric sea ice area or extent can be misleading when quantitatively comparing how well models agree with observations. Further work is needed to fully develop robust methods to holistically evaluate the ability of models to capture the finescale structure of sea ice characteristics; however, the “sector scale” metric used here aids in reducing the impact of compensating errors in hemispheric integrals.

  3. Assessing the Impact of Analytical Error on Perceived Disease Severity.

    PubMed

    Kroll, Martin H; Garber, Carl C; Bi, Caixia; Suffin, Stephen C

    2015-10-01

    The perception of the severity of disease from laboratory results assumes that the results are free of analytical error; however, analytical error creates a spread of results into a band and thus a range of perceived disease severity. To assess the impact of analytical errors by calculating the change in perceived disease severity, represented by the hazard ratio, using non-high-density lipoprotein (nonHDL) cholesterol as an example. We transformed nonHDL values into ranges using the assumed total allowable errors for total cholesterol (9%) and high-density lipoprotein cholesterol (13%). Using a previously determined relationship between the hazard ratio and nonHDL, we calculated a range of hazard ratios for specified nonHDL concentrations affected by analytical error. Analytical error, within allowable limits, created a band of values of nonHDL, with a width spanning 30 to 70 mg/dL (0.78-1.81 mmol/L), depending on the cholesterol and high-density lipoprotein cholesterol concentrations. Hazard ratios ranged from 1.0 to 2.9, a 16% to 50% error. Increased bias widens this range and decreased bias narrows it. Error-transformed results produce a spread of values that straddle the various cutoffs for nonHDL. The range of the hazard ratio obscures the meaning of results, because the spread of ratios at different cutoffs overlap. The magnitude of the perceived hazard ratio error exceeds that for the allowable analytical error, and significantly impacts the perceived cardiovascular disease risk. Evaluating the error in the perceived severity (eg, hazard ratio) provides a new way to assess the impact of analytical error.
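    A small sketch of the error-band construction implied above, using the 9% and 13% total allowable errors quoted for total and HDL cholesterol. Simple worst-case propagation is used for illustration; the patient values are hypothetical, and the mapping from the nonHDL band to a hazard-ratio range is omitted because the underlying regression coefficients are not given in the record.

```python
def nonhdl_band(total_chol: float, hdl: float,
                tea_tc: float = 0.09, tea_hdl: float = 0.13):
    """Range of non-HDL cholesterol (TC - HDL) consistent with allowable analytical error.

    total_chol, hdl -- measured concentrations in mg/dL
    tea_tc, tea_hdl -- total allowable error for total and HDL cholesterol
                       (9% and 13%, the limits cited in the abstract)
    """
    low = total_chol * (1 - tea_tc) - hdl * (1 + tea_hdl)    # TC biased low, HDL biased high
    high = total_chol * (1 + tea_tc) - hdl * (1 - tea_hdl)   # TC biased high, HDL biased low
    return low, high

# Hypothetical patient: TC = 200 mg/dL, HDL = 50 mg/dL -> nominal nonHDL = 150 mg/dL
low, high = nonhdl_band(200.0, 50.0)
print(f"nonHDL band: [{low:.1f}, {high:.1f}] mg/dL, width {high - low:.1f} mg/dL")
# -> [125.5, 174.5] mg/dL, width 49.0 mg/dL, within the 30-70 mg/dL spread reported above
```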

  4. Impact of survey workflow on precision and accuracy of terrestrial LiDAR datasets

    NASA Astrophysics Data System (ADS)

    Gold, P. O.; Cowgill, E.; Kreylos, O.

    2009-12-01

    Ground-based LiDAR (Light Detection and Ranging) survey techniques are enabling remote visualization and quantitative analysis of geologic features at unprecedented levels of detail. For example, digital terrain models computed from LiDAR data have been used to measure displaced landforms along active faults and to quantify fault-surface roughness. But how accurately do terrestrial LiDAR data represent the true ground surface, and in particular, how internally consistent and precise are the mosaiced LiDAR datasets from which surface models are constructed? Addressing this question is essential for designing survey workflows that capture the necessary level of accuracy for a given project while minimizing survey time and equipment, which is essential for effective surveying of remote sites. To address this problem, we seek to define a metric that quantifies how scan registration error changes as a function of survey workflow. Specifically, we are using a Trimble GX3D laser scanner to conduct a series of experimental surveys to quantify how common variables in field workflows impact the precision of scan registration. Primary variables we are testing include 1) use of an independently measured network of control points to locate scanner and target positions, 2) the number of known-point locations used to place the scanner and point clouds in 3-D space, 3) the type of target used to measure distances between the scanner and the known points, and 4) setting up the scanner over a known point as opposed to resectioning of known points. Precision of the registered point cloud is quantified using Trimble Realworks software by automatic calculation of registration errors (errors between locations of the same known points in different scans). Accuracy of the registered cloud (i.e., its ground-truth) will be measured in subsequent experiments. To obtain an independent measure of scan-registration errors and to better visualize the effects of these errors on a registered point cloud, we scan from multiple locations an object of known geometry (a cylinder mounted above a square box). Preliminary results show that even in a controlled experimental scan of an object of known dimensions, there is significant variability in the precision of the registered point cloud. For example, when 3 scans of the central object are registered using 4 known points (maximum time, maximum equipment), the point clouds align to within ~1 cm (normal to the object surface). However, when the same point clouds are registered with only 1 known point (minimum time, minimum equipment), misalignment of the point clouds can range from 2.5 to 5 cm, depending on target type. The greater misalignment of the 3 point clouds when registered with fewer known points stems from the field method employed in acquiring the dataset and demonstrates the impact of field workflow on LiDAR dataset precision. By quantifying the degree of scan mismatch in results such as this, we can provide users with the information needed to maximize efficiency in remote field surveys.

  5. Developmental Eye Movement (DEM) Test Norms for Mandarin Chinese-Speaking Chinese Children.

    PubMed

    Xie, Yachun; Shi, Chunmei; Tong, Meiling; Zhang, Min; Li, Tingting; Xu, Yaqin; Guo, Xirong; Hong, Qin; Chi, Xia

    2016-01-01

    The Developmental Eye Movement (DEM) test is commonly used as a clinical visual-verbal ocular motor assessment tool to screen and diagnose reading problems at the onset. No established norm exists for using the DEM test with Mandarin Chinese-speaking Chinese children. This study aims to establish the normative values of the DEM test for the Mandarin Chinese-speaking population in China; it also aims to compare the values with three other published norms for English-, Spanish-, and Cantonese-speaking Chinese children. A random stratified sampling method was used to recruit children from eight kindergartens and eight primary schools in the main urban and suburban areas of Nanjing. A total of 1,425 Mandarin Chinese-speaking children aged 5 to 12 years took the DEM test in Mandarin Chinese. A digital recorder was used to record the process. All of the subjects completed a symptomatology survey, and their DEM scores were determined by a trained tester. The scores were computed using the formula in the DEM manual, except that the "vertical scores" were adjusted by taking the vertical errors into consideration. The results were compared with the three other published norms. In our subjects, a general decrease with age was observed for the four eye movement indexes: vertical score, adjusted horizontal score, ratio, and total error. For both the vertical and adjusted horizontal scores, the Mandarin Chinese-speaking children completed the tests much more quickly than the norms for English- and Spanish-speaking children. However, the same group completed the test slightly more slowly than the norms for Cantonese-speaking children. The differences in the means were significant (P<0.001) in all age groups. For several ages, the scores obtained in this study were significantly different from the reported scores of Cantonese-speaking Chinese children (P<0.005). Compared with English-speaking children, only the vertical score of the 6-year-old group, the vertical-horizontal time ratio of the 8-year-old group and the errors of 9-year-old group had no significant difference (P>0.05); compared with Spanish-speaking children, the scores were statistically significant (P<0.001) for the total error scores of the age groups, except the 6-, 9-, 10-, and 11-year-old age groups (P>0.05). DEM norms may be affected by differences in language, cultural, and educational systems among various ethnicities. The norms of the DEM test are proposed for use with Mandarin Chinese-speaking children in Nanjing and will be proposed for children throughout China.

  6. Developmental Eye Movement (DEM) Test Norms for Mandarin Chinese-Speaking Chinese Children

    PubMed Central

    Tong, Meiling; Zhang, Min; Li, Tingting; Xu, Yaqin; Guo, Xirong; Hong, Qin; Chi, Xia

    2016-01-01

    The Developmental Eye Movement (DEM) test is commonly used as a clinical visual-verbal ocular motor assessment tool to screen and diagnose reading problems at the onset. No established norm exists for using the DEM test with Mandarin Chinese-speaking Chinese children. This study aims to establish the normative values of the DEM test for the Mandarin Chinese-speaking population in China; it also aims to compare the values with three other published norms for English-, Spanish-, and Cantonese-speaking Chinese children. A random stratified sampling method was used to recruit children from eight kindergartens and eight primary schools in the main urban and suburban areas of Nanjing. A total of 1,425 Mandarin Chinese-speaking children aged 5 to 12 years took the DEM test in Mandarin Chinese. A digital recorder was used to record the process. All of the subjects completed a symptomatology survey, and their DEM scores were determined by a trained tester. The scores were computed using the formula in the DEM manual, except that the “vertical scores” were adjusted by taking the vertical errors into consideration. The results were compared with the three other published norms. In our subjects, a general decrease with age was observed for the four eye movement indexes: vertical score, adjusted horizontal score, ratio, and total error. For both the vertical and adjusted horizontal scores, the Mandarin Chinese-speaking children completed the tests much more quickly than the norms for English- and Spanish-speaking children. However, the same group completed the test slightly more slowly than the norms for Cantonese-speaking children. The differences in the means were significant (P<0.001) in all age groups. For several ages, the scores obtained in this study were significantly different from the reported scores of Cantonese-speaking Chinese children (P<0.005). Compared with English-speaking children, only the vertical score of the 6-year-old group, the vertical-horizontal time ratio of the 8-year-old group and the errors of 9-year-old group had no significant difference (P>0.05); compared with Spanish-speaking children, the scores were statistically significant (P<0.001) for the total error scores of the age groups, except the 6-, 9-, 10-, and 11-year-old age groups (P>0.05). DEM norms may be affected by differences in language, cultural, and educational systems among various ethnicities. The norms of the DEM test are proposed for use with Mandarin Chinese-speaking children in Nanjing and will be proposed for children throughout China. PMID:26881754

  7. Special electronic distance meter calibration for precise engineering surveying industrial applications

    NASA Astrophysics Data System (ADS)

    Braun, Jaroslav; Štroner, Martin; Urban, Rudolf

    2015-05-01

    All surveying instruments and their measurements suffer from some errors. To refine the measurement results, it is necessary either to use procedures that restrict the influence of instrument errors on the measured values or to apply numerical corrections. In precise engineering surveying for industrial applications, the accuracy of distances, usually realized over relatively short ranges, is a key parameter limiting the accuracy of the derived values (coordinates, etc.). To determine the size of the systematic and random errors of the measured distances, tests were carried out with the aim of suppressing random error by averaging repeated measurements and reducing the influence of systematic errors by identifying their absolute size on an absolute baseline realized in the geodetic laboratory at the Faculty of Civil Engineering, CTU in Prague. Sixteen concrete pillars with forced centering were set up, and the absolute distances between the points were determined with a standard deviation of 0.02 mm using a Leica Absolute Tracker AT401. For any distance measured by the calibrated instruments (up to the length of the testing baseline, i.e. 38.6 m), the error correction of the distance meter can now be determined in two ways: first, by interpolation on the raw data, or second, using a correction function derived by a prior FFT transformation. The quality of this calibration and correction procedure was tested experimentally on three instruments (Trimble S6 HP, Topcon GPT-7501, Trimble M3) against the Leica Absolute Tracker AT401. The correction procedure reduced the standard deviation of the measured distances significantly, to less than 0.6 mm. For the Topcon GPT-7501 the nominal standard deviation is 2 mm, the achieved value (without corrections) was 2.8 mm and after corrections 0.55 mm; for the Trimble M3 the nominal standard deviation is 3 mm, achieved (without corrections) 1.1 mm and after corrections 0.58 mm; and for the Trimble S6 the nominal standard deviation is 1 mm, achieved (without corrections) 1.2 mm and after corrections 0.51 mm. The proposed calibration and correction procedure is, in our opinion, well suited to increasing the accuracy of electronic distance measurement and allows a common surveying instrument to achieve uncommonly high precision.
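    A brief sketch of the first correction option mentioned above (interpolation on the raw baseline data). The calibration table below is invented for illustration, and the sign convention (subtracting the interpolated instrument error from the reading) is an assumption; the FFT-derived correction function is not shown.

```python
import numpy as np

# Hypothetical calibration table: distances realized on the laboratory baseline (m)
# and the corresponding distance-meter errors (measured minus reference, in mm),
# as would be obtained against an absolute reference such as a laser tracker.
baseline_dist_m = np.array([2.4, 7.1, 12.8, 19.5, 26.3, 33.0, 38.6])
meter_error_mm = np.array([0.9, -0.4, 1.1, 0.2, -0.7, 0.8, -0.3])

def corrected_distance(measured_m: float) -> float:
    """Apply an interpolated calibration correction to a measured distance."""
    error_mm = np.interp(measured_m, baseline_dist_m, meter_error_mm)
    return measured_m - error_mm / 1000.0   # remove the instrument's estimated error

print(f"{corrected_distance(15.000):.4f} m")
```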

  8. Refractive Error Study in Children: results from Mechi Zone, Nepal.

    PubMed

    Pokharel, G P; Negrel, A D; Munoz, S R; Ellwein, L B

    2000-04-01

    To assess the prevalence of refractive error and vision impairment in school age children in the terai area of the Mechi zone in Eastern Nepal. Random selection of village-based clusters was used to identify a sample of children 5 to 15 years of age. Children in the 25 selected clusters were enumerated through a door-to-door household survey and invited to village sites for examination. Visual acuity measurements, cycloplegic retinoscopy, cycloplegic autorefraction, ocular motility evaluation, and anterior segment, media, and fundus examinations were done from May 1998 through July 1998. Independent replicate examinations for quality assurance monitoring took place in all children with reduced vision and in a sample of those with normal vision in seven villages. A total of 5,526 children from 3,724 households were enumerated, and 5,067 children (91.7%) were examined. The prevalence of uncorrected, presenting, and best visual acuity 0.5 (20/40) or worse in at least one eye was 2.9%, 2.8%, and 1.4%, respectively; 0.4% had best visual acuity 0.5 or worse in both eyes. Refractive error was the cause in 56% of the 200 eyes with reduced uncorrected vision, amblyopia in 9%, other causes in 19%, with unexplained causes in the remaining 16%. Myopia -0.5 diopter or less in either eye or hyperopia 2 diopters or greater was observed in less than 3% of children. Hyperopia risk was associated with female gender and myopia risk with older age. The prevalence of reduced vision is very low in school-age children in Nepal, most of it because of correctable refractive error. Further studies are needed to determine whether the prevalence of myopia will be higher for more recent birth cohorts.

  9. Prevalence of refractive error, presbyopia, and unmet need of spectacle coverage in a northern district of Bangladesh: Rapid Assessment of Refractive Error study.

    PubMed

    Muhit, Mohammad; Minto, Hasan; Parvin, Afroza; Jadoon, Mohammad Z; Islam, Johurul; Yasmin, Sumrana; Khandaker, Gulam

    2018-04-01

    To determine the prevalence of refractive error (RE), presbyopia, spectacle coverage, and barriers to uptake optical services in Bangladesh. Rapid assessment of refractive error (RARE) study following the RARE protocol was conducted in a northern district (i.e., Sirajganj) of Bangladesh (January 2010-December 2012). People aged 15-49 years were selected, and eligible participants had habitual distance and near visual acuity (VA) measured and ocular examinations were performed in those with VA<6/18. Those with phakic eyes with VA <6/18, but improving to ≥6/18 with pinhole or optical correction, were considered as RE and people aged ≥35 years with binocular unaided near vision of

  10. Improving the quality of child anthropometry: Manual anthropometry in the Body Imaging for Nutritional Assessment Study (BINA).

    PubMed

    Conkle, Joel; Ramakrishnan, Usha; Flores-Ayala, Rafael; Suchdev, Parminder S; Martorell, Reynaldo

    2017-01-01

    Anthropometric data collected in clinics and surveys are often inaccurate and unreliable due to measurement error. The Body Imaging for Nutritional Assessment Study (BINA) evaluated the ability of 3D imaging to correctly measure stature, head circumference (HC) and arm circumference (MUAC) for children under five years of age. This paper describes the protocol for and the quality of manual anthropometric measurements in BINA, a study conducted in 2016-17 in Atlanta, USA. Quality was evaluated by examining digit preference, biological plausibility of z-scores, z-score standard deviations, and reliability. We calculated z-scores and analyzed plausibility based on the 2006 WHO Child Growth Standards (CGS). For reliability, we calculated intra- and inter-observer Technical Error of Measurement (TEM) and Intraclass Correlation Coefficient (ICC). We found low digit preference; 99.6% of z-scores were biologically plausible, with z-score standard deviations ranging from 0.92 to 1.07. Total TEM was 0.40 for stature, 0.28 for HC, and 0.25 for MUAC in centimeters. ICC ranged from 0.99 to 1.00. The quality of manual measurements in BINA was high and similar to that of the anthropometric data used to develop the WHO CGS. We attributed high quality to vigorous training, motivated and competent field staff, reduction of non-measurement error through the use of technology, and reduction of measurement error through adequate monitoring and supervision. Our anthropometry measurement protocol, which builds on and improves upon the protocol used for the WHO CGS, can be used to improve anthropometric data quality. The discussion illustrates the need to standardize anthropometric data quality assessment, and we conclude that BINA can provide a valuable evaluation of 3D imaging for child anthropometry because there is comparison to gold-standard, manual measurements.
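    For readers unfamiliar with the reliability statistic cited above, the sketch below computes the intra-observer technical error of measurement (TEM) for duplicate measurements using the usual sqrt(sum d^2 / 2n) formula; the duplicate stature values are hypothetical, and inter-observer TEM with more than two observers uses a more general expression.

```python
import numpy as np

def technical_error_of_measurement(first, second) -> float:
    """Intra-observer TEM for duplicate measurements: sqrt(sum(d^2) / (2n))."""
    d = np.asarray(first, dtype=float) - np.asarray(second, dtype=float)
    return float(np.sqrt(np.sum(d ** 2) / (2 * d.size)))

# Hypothetical duplicate stature measurements (cm) on five children
first_pass  = np.array([92.1, 101.4, 87.3, 110.2, 95.8])
second_pass = np.array([92.4, 101.1, 87.9, 110.0, 95.5])
print(f"TEM = {technical_error_of_measurement(first_pass, second_pass):.2f} cm")  # ~0.26 cm
```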

  11. A patient-initiated voluntary online survey of adverse medical events: the perspective of 696 injured patients and families.

    PubMed

    Southwick, Frederick S; Cranley, Nicole M; Hallisy, Julia A

    2015-10-01

    Preventable medical errors continue to be a major cause of death in the USA and throughout the world. Many patients have written about their experiences on websites and in published books. As patients and family members who have experienced medical harm, we have created a nationwide voluntary survey in order to more broadly and systematically capture the perspective of patients and patient families experiencing adverse medical events and have used quantitative and qualitative analysis to summarise the responses of 696 patients and their families. Harm was most commonly associated with diagnostic and therapeutic errors, followed by surgical or procedural complications, hospital-associated infections and medication errors, and our quantitative results match those of previous provider-initiated patient surveys. Qualitative analysis of 450 narratives revealed a lack of perceived provider and system accountability, deficient and disrespectful communication and a failure of providers to listen as major themes. The consequences of adverse events included death, post-traumatic stress, financial hardship and permanent disability. These conditions and consequences led to a loss of patients' trust in both the health system and providers. Patients and family members offered suggestions for preventing future adverse events and emphasised the importance of shared decision-making. This large voluntary survey of medical harm highlights the potential efficacy of patient-initiated surveys for providing meaningful feedback and for guiding improvements in patient care. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  12. Surf zone characterization from Unmanned Aerial Vehicle imagery

    NASA Astrophysics Data System (ADS)

    Holman, Rob A.; Holland, K. Todd; Lalejini, Dave M.; Spansel, Steven D.

    2011-11-01

    We investigate the issues and methods for estimating nearshore bathymetry based on wave celerity measurements obtained using time series imagery from small unmanned aircraft systems (SUAS). In contrast to time series imagery from fixed cameras or from larger aircraft, SUAS data are usually short, gappy in time, and unsteady in aim in high frequency ways that are not reflected by the filtered navigation metadata. These issues were first investigated using fixed camera proxy data that have been intentionally degraded to mimic these problems. It has been found that records as short as 50 s or less can yield good bathymetry results. Gaps in records associated with inadvertent look-away during unsteady flight would normally prevent use of the required standard Fast Fourier Transform methods. However, we found that a full Fourier Transform could be implemented on the remaining valid record segments and was effective if at least 50% of total record length remained intact. Errors in image geo-navigation were stabilized based on fixed ground fiducials within a required land portion of the image. The elements of a future method that could remove this requirement were then outlined. Two test SUAS data runs were analyzed and compared to survey ground truth data. A 54-s data run at Eglin Air Force Base on the Gulf of Mexico yielded a good bathymetry product that compared well with survey data (standard deviation of 0.51 m in depths ranging from 0 to 4 m). A shorter (30.5 s) record from Silver Strand Beach (near Coronado) on the US west coast provided a good approximation of the surveyed bathymetry but was excessively deep offshore and had larger errors (1.19 m for true depths ranging from 0 to 6 m), consistent with the short record length. Seventy-three percent of the bathymetry estimates lay within 1 m of the truth for most of the nearshore.
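    Depth inversion from wave celerity of the kind described above ultimately rests on the linear dispersion relation. The sketch below shows that single-wave inversion under stated assumptions (linear wave theory, one dominant frequency), not the authors' full estimation algorithm, and the wave period and celerity are hypothetical.

```python
import numpy as np

GRAVITY = 9.81  # m/s^2

def depth_from_celerity(celerity: float, frequency_hz: float) -> float:
    """Invert the linear dispersion relation c = sqrt((g/k) * tanh(k h)) for depth h.

    With omega = 2*pi*f and k = omega / c, tanh(k h) = omega * c / g, so
    h = artanh(omega * c / g) / k.  Valid only while omega * c / g < 1
    (i.e., outside the deep-water limit where celerity no longer depends on depth).
    """
    omega = 2.0 * np.pi * frequency_hz
    k = omega / celerity
    ratio = omega * celerity / GRAVITY
    if ratio >= 1.0:
        raise ValueError("wave is effectively in deep water; depth is not recoverable")
    return float(np.arctanh(ratio) / k)

# Hypothetical surf-zone wave: 10 s period (0.1 Hz) travelling at 6 m/s
print(f"estimated depth ~ {depth_from_celerity(6.0, 0.1):.2f} m")   # ~3.9 m
```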

  13. Reducing Check-in Errors at Brigham Young University through Statistical Process Control

    ERIC Educational Resources Information Center

    Spackman, N. Andrew

    2005-01-01

    The relationship between the library and its patrons is damaged and the library's reputation suffers when returned items are not checked in. An informal survey reveals librarians' concern for this problem and their efforts to combat it, although few libraries collect objective measurements of errors or the effects of improvement efforts. Brigham…

  14. Land use surveys by means of automatic interpretation of LANDSAT system data

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Lombardo, M. A.; Novo, E. M. L. D.; Niero, M.; Foresti, C.

    1981-01-01

    Analyses for seven land-use classes are presented. The classes are: urban area, industrial area, bare soil, cultivated area, pastureland, reforestation, and natural vegetation. The automatic classification of LANDSAT MSS data using a maximum likelihood algorithm shows a 39% average error of omission and a 3.45% error of commission for the seven classes.

  15. In-Service Teachers' Perceptions and Interpretations of Students' Errors in Mathematics

    ERIC Educational Resources Information Center

    Chauraya, Million; Mashingaidze, Samuel

    2017-01-01

    This paper reports on findings of a research study that investigated in-service secondary school teachers' perceptions and interpretations of students' errors in mathematics. The study used a survey research design in which a questionnaire with two sections was used to collect data. The first section sought to find out the teachers' perceptions of…

  16. Errata: Response Analysis and Error Diagnosis Tools.

    ERIC Educational Resources Information Center

    Hart, Robert S.

    This guide to ERRATA, a set of HyperCard-based tools for response analysis and error diagnosis in language testing, is intended as a user manual and general reference and designed to be used with the software (not included here). It has three parts. The first is a brief survey of computational techniques available for dealing with student test…

  17. The Effect of Data Quality on Short-term Growth Model Projections

    Treesearch

    David Gartner

    2005-01-01

    This study was designed to determine the effect of FIA's data quality on short-term growth model projections. The data from Georgia's 1996 statewide survey were used for the Southern variant of the Forest Vegetation Simulator to predict Georgia's first annual panel. The effect of several data error sources on growth modeling prediction errors...

  18. An intervention to decrease patient identification band errors in a children's hospital.

    PubMed

    Hain, Paul D; Joers, B; Rush, M; Slayton, J; Throop, P; Hoagg, S; Allen, L; Grantham, J; Deshpande, J K

    2010-06-01

    Patient misidentification continues to be a quality and safety issue. There is a paucity of US data describing interventions to reduce identification band error rates. Monroe Carell Jr Children's Hospital at Vanderbilt. Percentage of patients with defective identification bands. Web-based surveys were sent, asking hospital personnel to anonymously identify perceived barriers to reaching zero defects with identification bands. Corrective action plans were created and implemented with ideas from leadership, front-line staff and the online survey. Data from unannounced audits of patient identification bands were plotted on statistical process control charts and shared monthly with staff. All hospital personnel were expected to "stop the line" if there were any patient identification questions. The first audit showed a defect rate of 20.4%. The original mean defect rate was 6.5%. After interventions and education, the new mean defect rate was 2.6%. (a) The initial rate of patient identification band errors in the hospital was higher than expected. (b) The action resulting in most significant improvement was staff awareness of the problem, with clear expectations to immediately stop the line if a patient identification error was present. (c) Staff surveys are an excellent source of suggestions for combating patient identification issues. (d) Continued audit and data collection is necessary for sustainable staff focus and continued improvement. (e) Statistical process control charts are both an effective method to track results and an easily understood tool for sharing data with staff.
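    A minimal sketch of the statistical process control computation referred to above, written as a p-chart for the proportion of defective identification bands per audit. The audit counts are invented for illustration, and the hospital's actual chart construction may differ.

```python
import numpy as np

def p_chart_limits(defect_counts, sample_sizes):
    """Centre line and 3-sigma control limits for a p-chart of defect proportions."""
    defect_counts = np.asarray(defect_counts, dtype=float)
    sample_sizes = np.asarray(sample_sizes, dtype=float)
    p_bar = defect_counts.sum() / sample_sizes.sum()          # overall defect rate
    sigma = np.sqrt(p_bar * (1.0 - p_bar) / sample_sizes)     # per-audit standard error
    lower = np.clip(p_bar - 3.0 * sigma, 0.0, None)
    upper = p_bar + 3.0 * sigma
    return p_bar, lower, upper

# Hypothetical monthly audits: defective ID bands out of bands checked
defects = [12, 9, 7, 5, 4, 3]
checked = [60, 58, 63, 61, 59, 62]
p_bar, lcl, ucl = p_chart_limits(defects, checked)
print(f"centre line = {p_bar:.3f}")
for month, (lo, hi) in enumerate(zip(lcl, ucl), start=1):
    print(f"month {month}: control limits [{lo:.3f}, {hi:.3f}]")
```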

  19. THE EVOLUTION OF THE STELLAR MASS FUNCTION OF GALAXIES FROM z = 4.0 AND THE FIRST COMPREHENSIVE ANALYSIS OF ITS UNCERTAINTIES: EVIDENCE FOR MASS-DEPENDENT EVOLUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marchesini, Danilo; Van Dokkum, Pieter G.; Foerster Schreiber, Natascha M.

    2009-08-20

    We present the evolution of the stellar mass function (SMF) of galaxies from z = 4.0 to z = 1.3 measured from a sample constructed from the deep near-infrared Multi-wavelength Survey by Yale-Chile, the Faint Infrared Extragalactic Survey, and the Great Observatories Origins Deep Survey-Chandra Deep Field South surveys, all having very high-quality optical to mid-infrared data. This sample, unique in that it combines data from surveys with a large range of depths and areas in a self-consistent way, allowed us to (1) minimize the uncertainty due to cosmic variance and empirically quantify its contribution to the total error budget; (2) simultaneously probe the high-mass end and the low-mass end (down to ≈0.05 times the characteristic stellar mass) of the SMF with good statistics; and (3) empirically derive the redshift-dependent completeness limits in stellar mass. We provide, for the first time, a comprehensive analysis of random and systematic uncertainties affecting the derived SMFs, including the effect of metallicity, extinction law, stellar population synthesis model, and initial mass function. We find that the mass density evolves by a factor of ≈17^{+7}_{-10} since z = 4.0, mostly driven by a change in the normalization φ*. If only random errors are taken into account, we find evidence for mass-dependent evolution, with the low-mass end evolving more rapidly than the high-mass end. However, we show that this result is no longer robust when systematic uncertainties due to the SED-modeling assumptions are taken into account. Another significant uncertainty is the contribution to the overall stellar mass density of galaxies below our mass limit; future studies with WFC3 will provide better constraints on the SMF at masses below 10^10 M_sun at z > 2. Taking our results at face value, we find that they are in conflict with semianalytic models of galaxy formation. The models predict SMFs that are in general too steep, with too many low-mass galaxies and too few high-mass galaxies. The discrepancy at the high-mass end is susceptible to uncertainties in the models and the data, but the discrepancy at the low-mass end may be more difficult to explain.

  20. Increased User Satisfaction Through an Improved Message System

    NASA Technical Reports Server (NTRS)

    Weissert, C. L.

    1997-01-01

    With all of the enhancements in software methodology and testing, there is no guarantee that software can be delivered such that no user errors occur. How to handle these errors when they occur has become a major research topic within human-computer interaction (HCI). Users of the Multimission Spacecraft Analysis Subsystem (MSAS) at the Jet Propulsion Laboratory (JPL), a system of X and Motif graphical user interfaces for analyzing spacecraft data, complained about the lack of information about error causes and suggested that recovery actions be included in the system error messages... The system was evaluated through usability surveys and was shown to be successful.

  1. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  2. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ....102 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  3. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  4. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  5. Galaxy-galaxy lensing in the Dark Energy Survey Science Verification data

    NASA Astrophysics Data System (ADS)

    Clampitt, J.; Sánchez, C.; Kwan, J.; Krause, E.; MacCrann, N.; Park, Y.; Troxel, M. A.; Jain, B.; Rozo, E.; Rykoff, E. S.; Wechsler, R. H.; Blazek, J.; Bonnett, C.; Crocce, M.; Fang, Y.; Gaztanaga, E.; Gruen, D.; Jarvis, M.; Miquel, R.; Prat, J.; Ross, A. J.; Sheldon, E.; Zuntz, J.; Abbott, T. M. C.; Abdalla, F. B.; Armstrong, R.; Becker, M. R.; Benoit-Lévy, A.; Bernstein, G. M.; Bertin, E.; Brooks, D.; Burke, D. L.; Carnero Rosell, A.; Carrasco Kind, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Dietrich, J. P.; Doel, P.; Estrada, J.; Evrard, A. E.; Fausti Neto, A.; Flaugher, B.; Fosalba, P.; Frieman, J.; Gruendl, R. A.; Honscheid, K.; James, D. J.; Kuehn, K.; Kuropatkin, N.; Lahav, O.; Lima, M.; March, M.; Marshall, J. L.; Martini, P.; Melchior, P.; Mohr, J. J.; Nichol, R. C.; Nord, B.; Plazas, A. A.; Romer, A. K.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Vikram, V.; Walker, A. R.

    2017-03-01

    We present galaxy-galaxy lensing results from 139 deg2 of Dark Energy Survey (DES) Science Verification (SV) data. Our lens sample consists of red galaxies, known as redMaGiC, which are specifically selected to have a low photometric redshift error and outlier rate. The lensing measurement has a total signal-to-noise ratio of 29 over scales 0.09 < R < 15 Mpc h-1, including all lenses over a wide redshift range 0.2 < z < 0.8. Dividing the lenses into three redshift bins for this constant moving number density sample, we find no evidence for evolution in the halo mass with redshift. We obtain consistent results for the lensing measurement with two independent shear pipelines, NGMIX and IM3SHAPE. We perform a number of null tests on the shear and photometric redshift catalogues and quantify resulting systematic uncertainties. Covariances from jackknife subsamples of the data are validated with a suite of 50 mock surveys. The result and systematic checks in this work provide a critical input for future cosmological and galaxy evolution studies with the DES data and redMaGiC galaxy samples. We fit a halo occupation distribution (HOD) model, and demonstrate that our data constrain the mean halo mass of the lens galaxies, despite strong degeneracies between individual HOD parameters.
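    The jackknife covariance mentioned above follows the standard delete-one prescription; the sketch below shows that calculation for a vector statistic. The toy resamplings are synthetic and stand in for the per-region re-measurements of the lensing signal.

```python
import numpy as np

def jackknife_covariance(estimates) -> np.ndarray:
    """Delete-one jackknife covariance of a vector statistic.

    estimates -- array of shape (n_subsamples, n_bins): the statistic (e.g., the
                 tangential-shear signal in radial bins) re-measured with each
                 jackknife region removed in turn.
    """
    estimates = np.asarray(estimates, dtype=float)
    n = estimates.shape[0]
    diff = estimates - estimates.mean(axis=0)
    return (n - 1.0) / n * diff.T @ diff

# Hypothetical toy input: 50 jackknife resamplings of a 4-bin measurement
rng = np.random.default_rng(1)
toy = rng.normal(loc=[1.0, 0.7, 0.4, 0.2], scale=0.05, size=(50, 4))
cov = jackknife_covariance(toy)
print(np.sqrt(np.diag(cov)))   # jackknife error bars per bin
```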

  6. The patient safety culture: a systematic review by characteristics of Hospital Survey on Patient Safety Culture dimensions.

    PubMed

    Reis, Cláudia Tartaglia; Paiva, Sofia Guerra; Sousa, Paulo

    2018-05-08

    To learn the weaknesses and strengths of safety culture as expressed by the dimensions measured by the Hospital Survey on Patient Safety Culture (HSOPSC) at hospitals in the various cultural contexts. The aim of this study was to identify studies that have used the HSOPSC to collect data on safety culture at hospitals; to survey their findings in the safety culture dimensions and possible contributions to improving the quality and safety of hospital care. Medline (via PubMed), Web of Science and Scopus were searched from 2005 to July 2016 in English, Portuguese and Spanish. Studies were identified using specific search terms and inclusion criteria. A total of 33 articles, reporting on 21 countries, was included. Scores were extracted by patient safety culture dimensions assessed by the HSOPSC. The quality of the studies was evaluated by the STROBE Statement. The dimensions that proved strongest were 'Teamwork within units' and 'Organisational learning-continuous improvement'. Particularly weak dimensions were 'Non-punitive response to error', 'Staffing', 'Handoffs and transitions' and 'Teamwork across units'. The studies revealed a predominance of hospital organisational cultures that were underdeveloped or weak as regards patient safety. For them to be effective, safety culture evaluation should be tied to strategies designed to develop safety culture hospital-wide.

  7. The Effects of Data Collection Method and Monitoring of Workers’ Behavior on the Generation of Demolition Waste

    PubMed Central

    Cha, Gi-Wook; Kim, Young-Chan; Moon, Hyeun Jun; Hong, Won-Hwa

    2017-01-01

    The roles of both the data collection method (including proper classification) and the behavior of workers on the generation of demolition waste (DW) are important. By analyzing the effect of the data collection method used to estimate DW, and by investigating how workers’ behavior can affect the total amount of DW generated during an actual demolition process, it was possible to identify strategies that could improve the prediction of DW. Therefore, this study surveyed demolition waste generation rates (DWGRs) for different types of building by conducting on-site surveys immediately before demolition in order to collect adequate and reliable data. In addition, the effects of DW management strategies and of monitoring the behavior of workers on the actual generation of DW were analyzed. The results showed that when monitoring was implemented, the estimates of DW obtained from the DWGRs that were surveyed immediately before demolition and the actual quantities of DW reported by the demolition contractors had an error rate of 0.63% when the results were compared. Therefore, this study has shown that the proper data collection method (i.e., data were collected immediately before demolition) applied in this paper and monitoring on the demolition site have a significant impact on waste generation. PMID:29023378

  8. Star formation in Herschel's Monsters versus semi-analytic models

    NASA Astrophysics Data System (ADS)

    Gruppioni, C.; Calura, F.; Pozzi, F.; Delvecchio, I.; Berta, S.; De Lucia, G.; Fontanot, F.; Franceschini, A.; Marchetti, L.; Menci, N.; Monaco, P.; Vaccari, M.

    2015-08-01

    We present a direct comparison between the observed star formation rate functions (SFRFs) and the state-of-the-art predictions of semi-analytic models (SAMs) of galaxy formation and evolution. We use the PACS Evolutionary Probe Survey and Herschel Multi-tiered Extragalactic Survey data sets in the COSMOS and GOODS-South fields, combined with broad-band photometry from UV to sub-mm, to obtain total (IR+UV) instantaneous star formation rates (SFRs) for individual Herschel galaxies up to z ˜ 4, subtracted of possible active galactic nucleus (AGN) contamination. The comparison with model predictions shows that SAMs broadly reproduce the observed SFRFs up to z ˜ 2, when the observational errors on the SFR are taken into account. However, all the models seem to underpredict the bright end of the SFRF at z ≳ 2. The cause of this underprediction could lie in an improper modelling of several model ingredients, like too strong (AGN or stellar) feedback in the brighter objects or too low fallback of gas, caused by weak feedback and outflows at earlier epochs.

  9. Extracting cosmological information from the angular power spectrum of the 2MASS Photometric Redshift catalogue

    NASA Astrophysics Data System (ADS)

    Balaguera-Antolínez, A.; Bilicki, M.; Branchini, E.; Postiglione, A.

    2018-05-01

    Using the almost all-sky 2MASS Photometric Redshift catalogue (2MPZ) we perform for the first time a tomographic analysis of galaxy angular clustering in the local Universe (z < 0.24). We estimate the angular auto- and cross-power spectra of 2MPZ galaxies in three photometric redshift bins, and use dedicated mock catalogues to assess their errors. We measure a subset of cosmological parameters, having fixed the others at their Planck values, namely the baryon fraction fb=0.14^{+0.09}_{-0.06}, the total matter density parameter Ωm = 0.30 ± 0.06, and the effective linear bias of 2MPZ galaxies beff, which grows from 1.1^{+0.3}_{-0.4} at ⟨z⟩ = 0.05 up to 2.1^{+0.3}_{-0.5} at ⟨z⟩ = 0.2, largely because of the flux-limited nature of the data set. The results obtained here for the local Universe agree with those derived with the same methodology at higher redshifts, and confirm the importance of the tomographic technique for next-generation photometric surveys such as Euclid or the Large Synoptic Survey Telescope.

  10. 78 FR 75353 - Agency Information Collection Activities: Proposed Collection: Public Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-11

    ... cognitive interviews, focus groups, usability tests, field tests/pilot interviews, and experimental research... as more basic research on response errors in surveys. HRSA staff use various techniques to evaluate... interview structure consists of respondents first answering a draft survey question and then providing...

  11. Calendar Instruments in Retrospective Web Surveys

    ERIC Educational Resources Information Center

    Glasner, Tina; van der Vaart, Wander; Dijkstra, Wil

    2015-01-01

    Calendar instruments incorporate aided recall techniques such as temporal landmarks and visual time lines that aim to reduce response error in retrospective surveys. Those calendar instruments have been used extensively in off-line research (e.g., computer-aided telephone interviews, computer assisted personal interviewing, and paper and pen…

  12. A Mixed-Methods Investigation of Factors and Scenarios Influencing College Students' Decision to Complete Surveys at Five Mid-Western Universities

    ERIC Educational Resources Information Center

    Koskey, Kristin L. K.; Cain, Bryce; Sondergeld, Toni A.; Alvim, Henrique G.; Slager, Emily M.

    2015-01-01

    Achieving respectable response rates to surveys on university campuses has become increasingly more difficult, which can increase non-response error and jeopardize the integrity of data. Prior research has focused on investigating the effect of a single or small set of factors on college students' decision to complete surveys. We used a concurrent…

  13. Verbal Paradata and Survey Error: Respondent Speech, Voice, and Question-Answering Behavior Can Predict Income Item Nonresponse

    ERIC Educational Resources Information Center

    Jans, Matthew E.

    2010-01-01

    Income nonresponse is a significant problem in survey data, with rates as high as 50%, yet we know little about why it occurs. It is plausible that the way respondents answer survey questions (e.g., their voice and speech characteristics, and their question-answering behavior) can predict whether they will provide income data, and will reflect…

  14. A Study of Selected Nonsampling Errors in the 1991 Survey of Recent College Graduates. Technical Report.

    ERIC Educational Resources Information Center

    Brick, J. Michael; And Others

    The 1991 Survey of Recent College Graduates (RCG:91) is the sixth study in a series begun in 1976. The series provides data on the occupational and educational outcomes of recent bachelor's and master's graduates one year after graduation. The survey was conducted by Westat, Inc. in a two-stage sample involving 400 institutions of higher education…

  15. Claims, errors, and compensation payments in medical malpractice litigation.

    PubMed

    Studdert, David M; Mello, Michelle M; Gawande, Atul A; Gandhi, Tejal K; Kachalia, Allen; Yoon, Catherine; Puopolo, Ann Louise; Brennan, Troyen A

    2006-05-11

    In the current debate over tort reform, critics of the medical malpractice system charge that frivolous litigation--claims that lack evidence of injury, substandard care, or both--is common and costly. Trained physicians reviewed a random sample of 1452 closed malpractice claims from five liability insurers to determine whether a medical injury had occurred and, if so, whether it was due to medical error. We analyzed the prevalence, characteristics, litigation outcomes, and costs of claims that lacked evidence of error. For 3 percent of the claims, there were no verifiable medical injuries, and 37 percent did not involve errors. Most of the claims that were not associated with errors (370 of 515 [72 percent]) or injuries (31 of 37 [84 percent]) did not result in compensation; most that involved injuries due to error did (653 of 889 [73 percent]). Payment of claims not involving errors occurred less frequently than did the converse form of inaccuracy--nonpayment of claims associated with errors. When claims not involving errors were compensated, payments were significantly lower on average than were payments for claims involving errors (313,205 dollars vs. 521,560 dollars, P=0.004). Overall, claims not involving errors accounted for 13 to 16 percent of the system's total monetary costs. For every dollar spent on compensation, 54 cents went to administrative expenses (including those involving lawyers, experts, and courts). Claims involving errors accounted for 78 percent of total administrative costs. Claims that lack evidence of error are not uncommon, but most are denied compensation. The vast majority of expenditures go toward litigation over errors and payment of them. The overhead costs of malpractice litigation are exorbitant. Copyright 2006 Massachusetts Medical Society.

  16. A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models.

    PubMed

    Dionisio, Kathie L; Chang, Howard H; Baxter, Lisa K

    2016-11-25

    Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. ZIP-code level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3-85% for population error, and 31-85% for total error. When CO, NOx or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copollutants based on the estimated type I error rate. The impact of exposure error must be considered when interpreting results of copollutant epidemiologic models, due to the possibility of attenuation of main pollutant RRs and the increased probability of false positives when measurement error is present.
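
    As a rough illustration of the attenuation mechanism this study quantifies, the sketch below simulates a single-pollutant Poisson time series and refits it after adding classical (uncorrelated) exposure error; it is not the authors' simulation, and the sample size, error variance, and baseline count are arbitrary assumptions:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n_days = 1500

      # Simulate "true" daily exposure and Poisson counts with a modest positive association.
      x_true = rng.normal(0.0, 1.0, n_days)
      beta0, beta1 = np.log(50.0), 0.05            # baseline ~50 visits/day, log-RR per unit exposure
      y = rng.poisson(np.exp(beta0 + beta1 * x_true))

      # Observed exposure = truth + classical (uncorrelated) measurement error.
      x_obs = x_true + rng.normal(0.0, 1.0, n_days)

      def fitted_rr_per_iqr(x, y):
          """Fit a Poisson regression and express the slope as a relative risk per interquartile range."""
          X = sm.add_constant(x)
          fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
          iqr = np.subtract(*np.percentile(x, [75, 25]))
          return np.exp(fit.params[1] * iqr)

      print("RR/IQR, true exposure:    ", round(fitted_rr_per_iqr(x_true, y), 3))
      print("RR/IQR, observed exposure:", round(fitted_rr_per_iqr(x_obs, y), 3))  # attenuated toward 1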

  17. An exploration of Australian hospital pharmacists' attitudes to patient safety.

    PubMed

    Lalor, Daniel J; Chen, Timothy F; Walpola, Ramesh; George, Rachel A; Ashcroft, Darren M; Fois, Romano A

    2015-02-01

    To explore the attitudes of Australian hospital pharmacists towards patient safety in their work settings. A safety climate questionnaire was administered to all 2347 active members of the Society of Hospital Pharmacists of Australia in 2010. Part of the survey elicited free-text comments about patient safety, error and incident reporting. The comments were subjected to thematic analysis to determine the attitudes held by respondents in relation to patient safety and its quality management in their work settings. Two hundred and ten (210) of 643 survey respondents provided comments on safety and quality issues related to their work settings. The responses contained a number of dominant themes including issues of workforce and working conditions, incident reporting systems, the response when errors occur, the presence or absence of a blame culture, hospital management support for safety initiatives, openness about errors and the value of teamwork. A number of pharmacists described the development of a mature patient-safety culture - one that is open about reporting errors and active in reducing their occurrence. Others described work settings in which a culture of blame persists, stifling error reporting and ultimately compromising patient safety. Australian hospital pharmacists hold a variety of attitudes that reflect diverse workplace cultures towards patient safety, error and incident reporting. This study has provided an insight into these attitudes and the actions that are needed to improve the patient-safety culture within Australian hospital pharmacy work settings. © 2014 Royal Pharmaceutical Society.

  18. Simulation of Hydrodynamics and Water Quality in Pueblo Reservoir, Southeastern Colorado, for 1985 through 1987 and 1999 through 2002

    USGS Publications Warehouse

    Galloway, Joel M.; Ortiz, Roderick F.; Bales, Jerad D.; Mau, David P.

    2008-01-01

    Pueblo Reservoir is west of Pueblo, Colorado, and is an important water resource for southeastern Colorado. The reservoir provides irrigation, municipal, and industrial water to various entities throughout the region. In anticipation of increased population growth, the cities of Colorado Springs, Fountain, Security, and Pueblo West have proposed building a pipeline that would be capable of conveying 78 million gallons of raw water per day (240 acre-feet) from Pueblo Reservoir. The U.S. Geological Survey, in cooperation with Colorado Springs Utilities and the Bureau of Reclamation, developed, calibrated, and verified a hydrodynamic and water-quality model of Pueblo Reservoir to describe the hydrologic, chemical, and biological processes in Pueblo Reservoir that can be used to assess environmental effects in the reservoir. Hydrodynamics and water-quality characteristics in Pueblo Reservoir were simulated using a laterally averaged, two-dimensional model that was calibrated using data collected from October 1985 through September 1987. The Pueblo Reservoir model was calibrated based on vertical profiles of water temperature and dissolved-oxygen concentration, and water-quality constituent concentrations collected in the epilimnion and hypolimnion at four sites in the reservoir. The calibrated model was verified with data from October 1999 through September 2002, which included a relatively wet year (water year 2000), an average year (water year 2001), and a dry year (water year 2002). Simulated water temperatures compared well to measured water temperatures in Pueblo Reservoir from October 1985 through September 1987. Spatially, simulated water temperatures compared better to measured water temperatures in the downstream part of the reservoir than in the upstream part of the reservoir. Differences between simulated and measured water temperatures also varied through time. Simulated water temperatures were slightly less than measured water temperatures from March to May 1986 and 1987, and slightly greater than measured data in August and September 1987. Relative to the calibration period, simulated water temperatures during the verification period did not compare as well to measured water temperatures. In general, simulated dissolved-oxygen concentrations for the calibration period compared well to measured concentrations in Pueblo Reservoir. Spatially, simulated concentrations deviated more from the measured values at the downstream part of the reservoir than at other locations in the reservoir. Overall, the absolute mean error ranged from 1.05 (site 1B) to 1.42 milligrams per liter (site 7B), and the root mean square error ranged from 1.12 (site 1B) to 1.67 milligrams per liter (site 7B). Simulated dissolved oxygen in the verification period compared better to the measured concentrations than in the calibration period. The absolute mean error ranged from 0.91 (site 5C) to 1.28 milligrams per liter (site 7B), and the root mean square error ranged from 1.03 (site 5C) to 1.46 milligrams per liter (site 7B). Simulated total dissolved solids generally were less than measured total dissolved-solids concentrations in Pueblo Reservoir from October 1985 through September 1987. The largest differences between simulated and measured total dissolved solids were observed at the most downstream sites in Pueblo Reservoir during the second year of the calibration period. 
Total dissolved-solids data were not available from reservoir sites during the verification period, so in-reservoir specific-conductance data were compared to simulated total dissolved solids. Simulated total dissolved solids followed the same patterns through time as the measured specific conductance data during the verification period. Simulated total nitrogen concentrations compared relatively well to measured concentrations in the Pueblo Reservoir model. The absolute mean error ranged from 0.21 (site 1B) to 0.27 milligram per liter as nitrogen (sites 3B and 7
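
    The two goodness-of-fit metrics quoted for dissolved oxygen and total nitrogen are standard; a minimal sketch of how absolute mean error and root mean square error are computed, on hypothetical profile values rather than the Pueblo Reservoir data, is:

      import numpy as np

      def absolute_mean_error(simulated, measured):
          """Mean of |simulated - measured| (reported in the study in milligrams per liter)."""
          return np.mean(np.abs(np.asarray(simulated) - np.asarray(measured)))

      def rmse(simulated, measured):
          """Root mean square error of simulated versus measured values."""
          d = np.asarray(simulated) - np.asarray(measured)
          return np.sqrt(np.mean(d ** 2))

      # Hypothetical dissolved-oxygen profile pairs (mg/L), not the Pueblo Reservoir data.
      sim = [8.1, 7.4, 6.0, 4.2, 3.5]
      obs = [7.0, 6.8, 6.5, 5.5, 4.1]
      print(absolute_mean_error(sim, obs), rmse(sim, obs))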

  19. Addressing the unit of analysis in medical care studies: a systematic review.

    PubMed

    Calhoun, Aaron W; Guyatt, Gordon H; Cabana, Michael D; Lu, Downing; Turner, David A; Valentine, Stacey; Randolph, Adrienne G

    2008-06-01

    We assessed the frequency that patients are incorrectly used as the unit of analysis among studies of physicians' patient care behavior in articles published in high impact journals. We surveyed 30 high-impact journals across 6 medical fields for articles susceptible to unit of analysis errors published from 1994 to 2005. Three reviewers independently abstracted articles using previously published criteria to determine the presence of analytic errors. One hundred fourteen susceptible articles were found published in 15 journals, 4 journals published the majority (71 of 114 or 62.3%) of studies, 40 were intervention studies, and 74 were noninterventional studies. The unit of analysis error was present in 19 (48%) of the intervention studies and 31 (42%) of the noninterventional studies (overall error rate 44%). The frequency of the error decreased between 1994-1999 (N = 38; 65% error) and 2000-2005 (N = 76; 33% error) (P = 0.001). Although the frequency of the error in published studies is decreasing, further improvement remains desirable.

  20. Grizzly Bear Noninvasive Genetic Tagging Surveys: Estimating the Magnitude of Missed Detections

    PubMed Central

    Fisher, Jason T.; Heim, Nicole; Code, Sandra; Paczkowski, John

    2016-01-01

    Sound wildlife conservation decisions require sound information, and scientists increasingly rely on remotely collected data over large spatial scales, such as noninvasive genetic tagging (NGT). Grizzly bears (Ursus arctos), for example, are difficult to study at population scales except with noninvasive data, and NGT via hair trapping informs management over much of grizzly bears’ range. Considerable statistical effort has gone into estimating sources of heterogeneity, but detection error–arising when a visiting bear fails to leave a hair sample–has not been independently estimated. We used camera traps to survey grizzly bear occurrence at fixed hair traps and multi-method hierarchical occupancy models to estimate the probability that a visiting bear actually leaves a hair sample with viable DNA. We surveyed grizzly bears via hair trapping and camera trapping for 8 monthly surveys at 50 (2012) and 76 (2013) sites in the Rocky Mountains of Alberta, Canada. We used multi-method occupancy models to estimate site occupancy, probability of detection, and conditional occupancy at a hair trap. We tested the prediction that detection error in NGT studies could be induced by temporal variability within season, leading to underestimation of occupancy. NGT via hair trapping consistently underestimated grizzly bear occupancy at a site when compared to camera trapping. At best occupancy was underestimated by 50%; at worst, by 95%. Probability of false absence was reduced through successive surveys, but this mainly accounts for error imparted by movement among repeated surveys, not necessarily missed detections by extant bears. The implications of missed detections and biased occupancy estimates for density estimation–which form the crux of management plans–require consideration. We suggest hair-trap NGT studies should estimate and correct detection error using independent survey methods such as cameras, to ensure the reliability of the data upon which species management and conservation actions are based. PMID:27603134
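
    Assuming a constant per-survey detection probability, the probability of a false absence at an occupied site shrinks geometrically with the number of repeat surveys, which is the effect the abstract describes; a minimal sketch (illustrative probabilities only, not the fitted occupancy-model estimates) is:

      def prob_false_absence(p_detect, n_surveys):
          """Probability an occupied site is never detected across n_surveys independent surveys,
          assuming a constant per-survey detection probability p_detect."""
          return (1.0 - p_detect) ** n_surveys

      # Hypothetical per-survey detection probabilities (e.g., hair trap vs. camera trap).
      for p in (0.2, 0.5):
          print([round(prob_false_absence(p, k), 3) for k in range(1, 9)])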

  1. [Risk management in anesthesia and critical care medicine].

    PubMed

    Eisold, C; Heller, A R

    2017-03-01

    Throughout its history, anesthesia and critical care medicine has experienced vast improvements to increase patient safety. Consequently, anesthesia has never been performed on such a high level as it is being performed today. As a result, we do not always fully perceive the risks involved in our daily activity. A survey performed in Swiss hospitals identified a total of 169 hot spots which endanger patient safety. It turned out that there is a complex variety of possible errors that can only be tackled through consistent implementation of a safety culture. The key elements to reduce complications are continuing staff education, algorithms and standard operating procedures (SOP), working according to the principles of crisis resource management (CRM) and last but not least the continuous work-up of mistakes identified by critical incident reporting systems.

  2. An FMEA evaluation of intensity modulated radiation therapy dose delivery failures at tolerance criteria levels.

    PubMed

    Faught, Jacqueline Tonigan; Balter, Peter A; Johnson, Jennifer L; Kry, Stephen F; Court, Laurence E; Stingo, Francesco C; Followill, David S

    2017-11-01

    The objective of this work was to assess both the perception of failure modes in Intensity Modulated Radiation Therapy (IMRT) when the linac is operated at the edge of tolerances given in AAPM TG-40 (Kutcher et al.) and TG-142 (Klein et al.) as well as the application of FMEA to this specific section of the IMRT process. An online survey was distributed to approximately 2000 physicists worldwide that participate in quality services provided by the Imaging and Radiation Oncology Core - Houston (IROC-H). The survey briefly described eleven different failure modes covered by basic quality assurance in step-and-shoot IMRT at or near TG-40 (Kutcher et al.) and TG-142 (Klein et al.) tolerance criteria levels. Respondents were asked to estimate the worst case scenario percent dose error that could be caused by each of these failure modes in a head and neck patient as well as the FMEA scores: Occurrence, Detectability, and Severity. Risk probability number (RPN) scores were calculated as the product of these scores. Demographic data were also collected. A total of 181 individual and three group responses were submitted. 84% were from North America. Most (76%) individual respondents performed at least 80% clinical work and 92% were nationally certified. Respondent medical physics experience ranged from 2.5 to 45 yr (average 18 yr). A total of 52% of individual respondents were at least somewhat familiar with FMEA, while 17% were not familiar. Several IMRT techniques, treatment planning systems, and linear accelerator manufacturers were represented. All failure modes received widely varying scores ranging from 1 to 10 for occurrence, at least 1-9 for detectability, and at least 1-7 for severity. Ranking failure modes by RPN scores also resulted in large variability, with each failure mode being ranked both most risky (1st) and least risky (11th) by different respondents. On average MLC modeling had the highest RPN scores. Individual estimated percent dose errors and severity scores positively correlated (P < 0.01) for each FM as expected. No universal correlations were found between the demographic information collected and scoring, percent dose errors or ranking. Failure modes investigated overall were evaluated as low to medium risk, with average RPNs less than 110. The ranking of 11 failure modes was not agreed upon by the community. Large variability in FMEA scoring may be caused by individual interpretation and/or experience, reflecting the subjective nature of the FMEA tool. © 2017 American Association of Physicists in Medicine.
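
    The RPN used for ranking is simply the product of the occurrence, detectability, and severity scores; a minimal sketch with invented failure modes and scores (not the survey responses) is:

      # Hypothetical failure modes and 1-10 FMEA scores (illustrative only).
      failure_modes = {
          "MLC modelling":         {"occurrence": 4, "detectability": 7, "severity": 4},
          "Output drift at 2%":    {"occurrence": 5, "detectability": 3, "severity": 3},
          "Gantry angle off 1 deg":{"occurrence": 2, "detectability": 5, "severity": 2},
      }

      def rpn(scores):
          """Risk priority number: product of occurrence, detectability and severity (each scored 1-10)."""
          return scores["occurrence"] * scores["detectability"] * scores["severity"]

      for name, scores in sorted(failure_modes.items(), key=lambda kv: rpn(kv[1]), reverse=True):
          print(f"{name:24s} RPN = {rpn(scores)}")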

  3. Correlation of Head Impacts to Change in Balance Error Scoring System Scores in Division I Men's Lacrosse Players.

    PubMed

    Miyashita, Theresa L; Diakogeorgiou, Eleni; Marrie, Kaitlyn

    Investigation into the effect of cumulative subconcussive head impacts has yielded various results in the literature, with many supporting a link to neurological deficits. Little research has been conducted on men's lacrosse and the balance deficits associated with head impacts. (1) Athletes will commit more errors on the postseason Balance Error Scoring System (BESS) test. (2) There will be a positive correlation between the change in BESS scores and head impact exposure data. Prospective longitudinal study. Level 3. Thirty-four Division I men's lacrosse players (age, 19.59 ± 1.42 years) wore helmets instrumented with a sensor to collect head impact exposure data over the course of a competitive season. Players completed a BESS test at the start and end of the competitive season. The number of errors from pre- to postseason increased for the double-leg stance on foam (P < 0.001), the tandem stance on foam (P = 0.009), the total number of errors on a firm surface (P = 0.042), and the total number of errors on a foam surface (P = 0.007). There were significant correlations only between the total errors on a foam surface and linear acceleration (P = 0.038, r = 0.36), head injury criteria (P = 0.024, r = 0.39), and Gadd Severity Index scores (P = 0.031, r = 0.37). Changes in the total number of errors on a foam surface may be considered a sensitive measure for detecting balance deficits associated with cumulative subconcussive head impacts sustained over the course of 1 lacrosse season, as measured by average linear acceleration, head injury criteria, and Gadd Severity Index scores. If there is microtrauma to the vestibular system due to repetitive subconcussive impacts, only an assessment that highly stresses the vestibular system may be able to detect these changes. Cumulative subconcussive impacts may result in neurocognitive dysfunction, including balance deficits, which are associated with an increased risk for injury. The development of a strategy to reduce the total number of head impacts may curb the associated sequelae. Incorporation of a modified BESS test (firm surface only) may not be recommended, as it may not detect changes due to repetitive impacts over the course of a competitive season.
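
    The reported associations are ordinary correlation coefficients between the pre-to-postseason change in foam-surface BESS errors and the head impact metrics; a minimal sketch of that calculation, on fabricated per-player values rather than the study data, is:

      import numpy as np

      # Hypothetical per-player values (not the study's data): change in foam-surface BESS errors
      # from pre- to postseason, and season-average linear head acceleration (g).
      delta_bess_foam = np.array([2, 5, 1, 4, 0, 3, 6, 2, 4, 1])
      mean_linear_acc = np.array([18.0, 27.5, 15.2, 24.1, 14.0, 22.3, 30.8, 19.5, 25.0, 16.7])

      r = np.corrcoef(delta_bess_foam, mean_linear_acc)[0, 1]
      print(f"Pearson r = {r:.2f}")  # the abstract reports r of about 0.36 for this pairing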

  4. Accuracy of emotion labeling in children of parents diagnosed with bipolar disorder.

    PubMed

    Hanford, Lindsay C; Sassi, Roberto B; Hall, Geoffrey B

    2016-04-01

    Emotion labeling deficits have been posited as an endophenotype for bipolar disorder (BD) as they have been observed in both patients and their first-degree relatives. It remains unclear whether these deficits exist secondary to the development of psychiatric symptoms or whether they can be attributed to risk for psychopathology. To explore this, we investigated emotion processing in symptomatic and asymptomatic high-risk bipolar offspring (HRO) and healthy children of healthy parents (HCO). Symptomatic (n:18, age: 13.8 ± 2.6 years, 44% female) and asymptomatic (n:12, age: 12.8 ± 3.0 years, 42% female) HRO and age- and sex-matched HCO (n:20, age: 13.3 ± 2.5 years, 45% female) performed an emotion-labeling task. Total number of errors, emotion category and intensity of emotion error scores were compared. Correlations between total error scores and symptom severity were also investigated. Compared to HCO, both HRO groups made more errors on the adult face task (pcor=0.014). The HRO group were 2.3 times [90%CI:0.9-6.3] more likely and 4.3 times [90%CI:1.3-14.3] more likely to make errors on sad and angry faces, respectively. With the exception of sad face type errors, we observed no significant differences in error patterns between symptomatic and asymptomatic HRO, and no correlations between symptom severity and total number of errors. This study was cross-sectional in design, limiting our ability to infer trajectories or heritability of these deficits. This study provides further support for emotion labeling deficits as a candidate endophenotype for BD. Our study also suggests these deficits are not attributable to the presence of psychiatric symptoms. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Anticipatory synergy adjustments reflect individual performance of feedforward force control.

    PubMed

    Togo, Shunta; Imamizu, Hiroshi

    2016-10-06

    We grasp and dexterously manipulate an object through multi-digit synergy. In the framework of the uncontrolled manifold (UCM) hypothesis, multi-digit synergy is defined as the coordinated control mechanism by which the fingers stabilize variables important for task success, e.g., total force. Previous studies reported anticipatory synergy adjustments (ASAs) that correspond to a drop of the synergy index before a quick change of the total force. The present study compared the properties of ASAs with individual performance of feedforward force control to investigate the relationship between them. Subjects performed a total finger force production task that consisted of a phase in which they tracked a target line with visual information and a phase in which they produced a total force pulse without visual information. We quantified their multi-digit synergy through UCM analysis and observed significant ASAs before the production of the total force pulse. The time of ASA initiation and the magnitude of the drop of the synergy index were significantly correlated with the error of the force pulse, but not with the tracking error. Almost all subjects showed a significant increase of the variance that affected the total force. Our study directly showed that the ASA reflects individual performance of feedforward force control independently of target-tracking performance, and suggests that the multi-digit synergy was weakened to adjust the multi-digit movements based on a prediction error so as to reduce the future error. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  6. Constraining the mass–richness relationship of redMaPPer clusters with angular clustering

    DOE PAGES

    Baxter, Eric J.; Rozo, Eduardo; Jain, Bhuvnesh; ...

    2016-08-04

    The potential of using cluster clustering for calibrating the mass–richness relation of galaxy clusters has been recognized theoretically for over a decade. In this paper, we demonstrate the feasibility of this technique to achieve high-precision mass calibration using redMaPPer clusters in the Sloan Digital Sky Survey North Galactic Cap. By including cross-correlations between several richness bins in our analysis, we significantly improve the statistical precision of our mass constraints. The amplitude of the mass–richness relation is constrained to 7 per cent statistical precision by our analysis. However, the error budget is systematics dominated, reaching a 19 per cent total error that is dominated by theoretical uncertainty in the bias–mass relation for dark matter haloes. We confirm the result from Miyatake et al. that the clustering amplitude of redMaPPer clusters depends on galaxy concentration as defined therein, and we provide additional evidence that this dependence cannot be sourced by mass dependences: some other effect must account for the observed variation in clustering amplitude with galaxy concentration. Assuming that the observed dependence of redMaPPer clustering on galaxy concentration is a form of assembly bias, we find that such effects introduce a systematic error on the amplitude of the mass–richness relation that is comparable to the error bar from statistical noise. Finally, the results presented here demonstrate the power of cluster clustering for mass calibration and cosmology provided the current theoretical systematics can be ameliorated.

  7. Deriving the species richness distribution of Geotrupinae (Coleoptera: Scarabaeoidea) in Mexico from the overlap of individual model predictions.

    PubMed

    Trotta-Moreu, Nuria; Lobo, Jorge M

    2010-02-01

    Predictions from individual distribution models for Mexican Geotrupinae species were overlaid to obtain a total species richness map for this group. A database (GEOMEX) that compiles available information from the literature and from several entomological collections was used. A Maximum Entropy method (MaxEnt) was applied to estimate the distribution of each species, taking into account 19 climatic variables as predictors. For each species, suitability values ranging from 0 to 100 were calculated for each grid cell on the map, and 21 different thresholds were used to convert these continuous suitability values into binary ones (presence-absence). By summing all of the individual binary maps, we generated a species richness prediction for each of the considered thresholds. The number of species and faunal composition thus predicted for each Mexican state were subsequently compared with those observed in a preselected set of well-surveyed states. Our results indicate that the sum of individual predictions tends to overestimate species richness but that the selection of an appropriate threshold can reduce this bias. Even under the most optimistic prediction threshold, the mean species richness error is 61% of the observed species richness, with commission errors being significantly more common than omission errors (71 +/- 29 versus 18 +/- 10%). The estimated distribution of Geotrupinae species richness in Mexico is discussed, although our conclusions are preliminary and contingent on the scarce and probably biased available data.
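
    The richness map is obtained by thresholding each species' continuous suitability surface into presence/absence and summing the binary layers; a minimal sketch of that stacking step, with random suitability values standing in for MaxEnt output, is:

      import numpy as np

      rng = np.random.default_rng(1)
      n_species, n_cells = 5, 1000

      # Hypothetical MaxEnt-style suitability values (0-100) for each species in each grid cell.
      suitability = rng.uniform(0, 100, size=(n_species, n_cells))

      def richness_map(suitability, threshold):
          """Convert continuous suitability to presence/absence per species, then sum across species."""
          presence = suitability >= threshold
          return presence.sum(axis=0)

      for thr in (30, 50, 70):
          # Predicted mean richness falls as the presence cutoff rises, mirroring the threshold sensitivity above.
          print(thr, richness_map(suitability, thr).mean())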

  8. Uncontrolled Web-based administration of surveys on factual health-related knowledge: a randomized study of untimed versus timed quizzing.

    PubMed

    Domnich, Alexander; Panatto, Donatella; Signori, Alessio; Bragazzi, Nicola Luigi; Cristina, Maria Luisa; Amicizia, Daniela; Gasparini, Roberto

    2015-04-13

    Health knowledge and literacy are among the main determinants of health. Assessment of these issues via Web-based surveys is growing continuously. Research has suggested that approximately one-fifth of respondents submit cribbed answers, or cheat, on factual knowledge items, which may lead to measurement error. However, little is known about methods of discouraging cheating in Web-based surveys on health knowledge. This study aimed at exploring the usefulness of imposing a survey time limit to prevent help-seeking and cheating. On the basis of sample size estimation, 94 undergraduate students were randomly assigned in a 1:1 ratio to complete a Web-based survey on nutrition knowledge, with or without a time limit of 15 minutes (30 seconds per item); the topic of nutrition was chosen because of its particular relevance to public health. The questionnaire consisted of two parts. The first was the validated consumer-oriented nutrition knowledge scale (CoNKS) consisting of 20 true/false items; the second was an ad hoc questionnaire (AHQ) containing 10 questions that would be very difficult for people without health care qualifications to answer correctly. It therefore aimed at measuring cribbing and not nutrition knowledge. AHQ items were somewhat encyclopedic and amenable to Web searching, while CoNKS items had more complex wording, so that simple copying/pasting of a question in a search string would not produce an immediate correct answer. A total of 72 of the 94 subjects started the survey. Dropout rates were similar in both groups (11%, 4/35 and 14%, 5/37 in the untimed and timed groups, respectively). Most participants completed the survey from portable devices, such as mobile phones and tablets. To complete the survey, participants in the untimed group took a median 2.3 minutes longer than those in the timed group; the effect size was small (Cohen's r=.29). Subjects in the untimed group scored significantly higher on CoNKS (mean difference of 1.2 points, P=.008) and the effect size was medium (Cohen's d=0.67). By contrast, no significant between-group difference in AHQ scores was documented. Unexpectedly high AHQ scores were recorded in 23% (7/31) and 19% (6/32) untimed and timed respondents, respectively, very probably owing to "e-cheating". Cribbing answers to health knowledge items in researcher-uncontrolled conditions is likely to lead to overestimation of people's knowledge; this should be considered during the design and implementation of Web-based surveys. Setting a time limit alone may not completely prevent cheating, as some cheats may be very fast in Web searching. More complex and contextualized wording of items and checking for the "findability" properties of items before implementing a Web-based health knowledge survey may discourage help-seeking, thus reducing measurement error. Studies with larger sample sizes and diverse populations are needed to confirm our results.
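
    The reported Cohen's d is the usual standardized mean difference with a pooled standard deviation; a minimal sketch, with hypothetical group summary statistics chosen only to reproduce a difference of about 1.2 points, is:

      import math

      def cohens_d(m1, s1, n1, m2, s2, n2):
          """Cohen's d for two independent groups, using the pooled standard deviation."""
          s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
          return (m1 - m2) / s_pooled

      # Hypothetical CoNKS summary statistics; the abstract reports only the 1.2-point difference and d = 0.67.
      print(round(cohens_d(15.2, 1.9, 31, 14.0, 1.7, 32), 2))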

  9. Uncontrolled Web-Based Administration of Surveys on Factual Health-Related Knowledge: A Randomized Study of Untimed Versus Timed Quizzing

    PubMed Central

    2015-01-01

    Background Health knowledge and literacy are among the main determinants of health. Assessment of these issues via Web-based surveys is growing continuously. Research has suggested that approximately one-fifth of respondents submit cribbed answers, or cheat, on factual knowledge items, which may lead to measurement error. However, little is known about methods of discouraging cheating in Web-based surveys on health knowledge. Objective This study aimed at exploring the usefulness of imposing a survey time limit to prevent help-seeking and cheating. Methods On the basis of sample size estimation, 94 undergraduate students were randomly assigned in a 1:1 ratio to complete a Web-based survey on nutrition knowledge, with or without a time limit of 15 minutes (30 seconds per item); the topic of nutrition was chosen because of its particular relevance to public health. The questionnaire consisted of two parts. The first was the validated consumer-oriented nutrition knowledge scale (CoNKS) consisting of 20 true/false items; the second was an ad hoc questionnaire (AHQ) containing 10 questions that would be very difficult for people without health care qualifications to answer correctly. It therefore aimed at measuring cribbing and not nutrition knowledge. AHQ items were somewhat encyclopedic and amenable to Web searching, while CoNKS items had more complex wording, so that simple copying/pasting of a question in a search string would not produce an immediate correct answer. Results A total of 72 of the 94 subjects started the survey. Dropout rates were similar in both groups (11%, 4/35 and 14%, 5/37 in the untimed and timed groups, respectively). Most participants completed the survey from portable devices, such as mobile phones and tablets. To complete the survey, participants in the untimed group took a median 2.3 minutes longer than those in the timed group; the effect size was small (Cohen’s r=.29). Subjects in the untimed group scored significantly higher on CoNKS (mean difference of 1.2 points, P=.008) and the effect size was medium (Cohen’s d=0.67). By contrast, no significant between-group difference in AHQ scores was documented. Unexpectedly high AHQ scores were recorded in 23% (7/31) and 19% (6/32) untimed and timed respondents, respectively, very probably owing to “e-cheating”. Conclusions Cribbing answers to health knowledge items in researcher-uncontrolled conditions is likely to lead to overestimation of people’s knowledge; this should be considered during the design and implementation of Web-based surveys. Setting a time limit alone may not completely prevent cheating, as some cheats may be very fast in Web searching. More complex and contextualized wording of items and checking for the “findability” properties of items before implementing a Web-based health knowledge survey may discourage help-seeking, thus reducing measurement error. Studies with larger sample sizes and diverse populations are needed to confirm our results. PMID:25872617

  10. Estimating the designated use attainment decision error rates of US Environmental Protection Agency's proposed numeric total phosphorus criteria for Florida, USA, colored lakes.

    PubMed

    McLaughlin, Douglas B

    2012-01-01

    The utility of numeric nutrient criteria established for certain surface waters is likely to be affected by the uncertainty that exists in the presence of a causal link between nutrient stressor variables and designated use-related biological responses in those waters. This uncertainty can be difficult to characterize, interpret, and communicate to a broad audience of environmental stakeholders. The US Environmental Protection Agency (USEPA) has developed a systematic planning process to support a variety of environmental decisions, but this process is not generally applied to the development of national or state-level numeric nutrient criteria. This article describes a method for implementing such an approach and uses it to evaluate the numeric total P criteria recently proposed by USEPA for colored lakes in Florida, USA. An empirical, log-linear relationship between geometric mean concentrations of total P (a potential stressor variable) and chlorophyll a (a nutrient-related response variable) in these lakes, which is assumed to be causal in nature, forms the basis for the analysis. The use of the geometric mean total P concentration of a lake to correctly indicate designated use status, defined in terms of a 20 µg/L geometric mean chlorophyll a threshold, is evaluated. Rates of decision errors analogous to the Type I and Type II error rates familiar in hypothesis testing, and a 3rd error rate, E(ni), referred to as the nutrient criterion-based impairment error rate, are estimated. The results show that USEPA's proposed "baseline" and "modified" nutrient criteria approach, in which data on both total P and chlorophyll a may be considered in establishing numeric nutrient criteria for a given lake within a specified range, provides a means for balancing and minimizing designated use attainment decision errors. Copyright © 2011 SETAC.
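
    One way to tabulate decision errors of the kind described, assuming impairment status is judged against the 20 µg/L chlorophyll a threshold while the decision is made from a total P criterion, is sketched below with invented lake values and an illustrative criterion (not the proposed USEPA numbers):

      # Hypothetical lake data: (geometric-mean total P in ug/L, geometric-mean chlorophyll a in ug/L).
      lakes = [(35, 12), (60, 25), (45, 22), (28, 9), (70, 31), (50, 15), (40, 24)]

      TP_CRITERION = 49       # illustrative numeric TP criterion (ug/L)
      CHLA_THRESHOLD = 20     # designated-use threshold used in the article (ug/L chlorophyll a)

      type_i = type_ii = 0    # false impairment / missed impairment counts
      for tp, chla in lakes:
          impaired_by_tp = tp > TP_CRITERION
          impaired_by_chla = chla > CHLA_THRESHOLD
          if impaired_by_tp and not impaired_by_chla:
              type_i += 1
          elif impaired_by_chla and not impaired_by_tp:
              type_ii += 1

      print(f"Type I-like rate:  {type_i / len(lakes):.2f}")
      print(f"Type II-like rate: {type_ii / len(lakes):.2f}")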

  11. Analytical Assessment of Simultaneous Parallel Approach Feasibility from Total System Error

    NASA Technical Reports Server (NTRS)

    Madden, Michael M.

    2014-01-01

    In a simultaneous paired approach to closely-spaced parallel runways, a pair of aircraft flies in close proximity on parallel approach paths. The aircraft pair must maintain a longitudinal separation within a range that avoids wake encounters and, if one of the aircraft blunders, avoids collision. Wake avoidance defines the rear gate of the longitudinal separation. The lead aircraft generates a wake vortex that, with the aid of crosswinds, can travel laterally onto the path of the trail aircraft. As runway separation decreases, the wake has less distance to traverse to reach the path of the trail aircraft. The total system error of each aircraft further reduces this distance. The total system error is often modeled as a probability distribution function. Therefore, Monte-Carlo simulations are a favored tool for assessing a "safe" rear-gate. However, safety for paired approaches typically requires that a catastrophic wake encounter be a rare one-in-a-billion event during normal operation. Using a Monte-Carlo simulation to assert this event rarity with confidence requires a massive number of runs. Such large runs do not lend themselves to rapid turn-around during the early stages of investigation when the goal is to eliminate the infeasible regions of the solution space and to perform trades among the independent variables in the operational concept. One can employ statistical analysis using simplified models more efficiently to narrow the solution space and identify promising trades for more in-depth investigation using Monte-Carlo simulations. These simple, analytical models not only have to address the uncertainty of the total system error but also the uncertainty in navigation sources used to alert an abort of the procedure. This paper presents a method for integrating total system error, procedure abort rates, avionics failures, and surveillance errors into a statistical analysis that identifies the likely feasible runway separations for simultaneous paired approaches.
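
    The run-count problem raised here can be made concrete: with zero failures observed, the number of Monte-Carlo runs needed to bound the per-run failure probability below a target at a given confidence follows from (1 - p)^n; the sketch below is standard rule-of-thumb arithmetic, not the paper's method:

      import math

      def runs_for_zero_failures(p_max, confidence=0.95):
          """Number of Monte-Carlo runs with zero observed failures needed to claim, at the given
          confidence, that the per-run failure probability is below p_max: (1 - p_max)**n <= 1 - confidence."""
          return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_max))

      # Demonstrating a one-in-a-billion event rate needs on the order of 3 billion clean runs.
      print(runs_for_zero_failures(1e-9))  # ~3.0e9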

  12. Re-assessing accumulated oxygen deficit in middle-distance runners.

    PubMed

    Bickham, D; Le Rossignol, P; Gibbons, C; Russell, A P

    2002-12-01

    The purpose of this study was to re-assess the accumulated oxygen deficit (AOD), incorporating recent methodological improvements, i.e., 4-min submaximal tests spread above and below the lactate threshold (LT). We investigated the influence of the VO2-speed regression on the precision of the estimated total energy demand and AOD, utilising different numbers of regression points and including measurement errors. Seven trained middle-distance runners (mean +/- SD age: 25.3 +/- 5.4 y; mass: 73.7 +/- 4.3 kg; VO2max: 64.4 +/- 6.1 mL x kg(-1) x min(-1)) completed VO2max and LT tests, 10 x 4-min exercise tests (above and below the LT), and high-intensity exhaustive tests. The VO2-speed regression was developed using 10 submaximal points and a forced y-intercept value. The average precision (measured as the width of the 95% confidence interval) for the estimated total energy demand using this regression was 7.8 mL O2 Eq x kg(-1) x min(-1). There was a two-fold decrease in the precision of the estimated total energy demand with the inclusion of measurement errors from the metabolic system. The mean AOD value was 43.3 mL O2 Eq x kg(-1) (95% CI: 32.1 to 54.5 mL O2 Eq x kg(-1)). Converting the 95% CI for the estimated total energy demand to AOD, or including maximum possible measurement errors, amplified the error associated with the estimated total energy demand. No significant difference in AOD variables was found using 10, 4, or 2 regression points with a forced y-intercept. For practical purposes we recommend the use of 4 submaximal values with a forced y-intercept. Using 95% CIs and calculating error highlighted possible error in estimating AOD. Without accurate data collection, increased variability could decrease the accuracy of the AOD, as shown by the 95% CI of the AOD.
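
    A minimal sketch of the two steps involved, a VO2-speed regression with a forced y-intercept and the deficit arithmetic that follows from it, is given below; all speeds, VO2 values, the resting intercept, and the supramaximal test values are hypothetical and are not taken from the study:

      import numpy as np

      # Hypothetical submaximal data: running speeds (km/h) and steady-state VO2 (mL O2 Eq/kg/min).
      speeds = np.array([12.0, 13.0, 14.0, 15.0, 16.0])
      vo2    = np.array([40.1, 43.8, 47.2, 51.0, 54.6])

      Y_INTERCEPT = 5.0  # forced y-intercept (assumed resting VO2), as in the forced-intercept regressions above

      # Least-squares slope with the intercept fixed: minimise sum((vo2 - Y_INTERCEPT - b*speed)^2).
      slope = np.sum(speeds * (vo2 - Y_INTERCEPT)) / np.sum(speeds ** 2)

      supra_speed = 21.0                                  # supramaximal test speed (km/h)
      demand = Y_INTERCEPT + slope * supra_speed          # estimated total energy demand (mL O2 Eq/kg/min)
      mean_vo2_measured = 62.0                            # hypothetical mean uptake during the exhaustive test
      time_to_exhaustion_min = 2.0

      aod = (demand - mean_vo2_measured) * time_to_exhaustion_min   # accumulated deficit, mL O2 Eq/kg
      print(round(demand, 1), round(aod, 1))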

  13. An improved procedure for the validation of satellite-based precipitation estimates

    NASA Astrophysics Data System (ADS)

    Tang, Ling; Tian, Yudong; Yan, Fang; Habib, Emad

    2015-09-01

    The objective of this study is to propose and test a new procedure to improve the validation of remote-sensing, high-resolution precipitation estimates. Our recent studies show that many conventional validation measures do not accurately capture the unique error characteristics in precipitation estimates to better inform both data producers and users. The proposed new validation procedure has two steps: 1) an error decomposition approach to separate the total retrieval error into three independent components: hit error, false precipitation and missed precipitation; and 2) the hit error is further analyzed based on a multiplicative error model. In the multiplicative error model, the error features are captured by three model parameters. In this way, the multiplicative error model separates systematic and random errors, leading to more accurate quantification of the uncertainties. The proposed procedure is used to quantitatively evaluate the two recent versions (Versions 6 and 7) of TRMM's Multi-sensor Precipitation Analysis (TMPA) real-time and research product suite (3B42 and 3B42RT) for seven years (2005-2011) over the continental United States (CONUS). The gauge-based National Centers for Environmental Prediction (NCEP) Climate Prediction Center (CPC) near-real-time daily precipitation analysis is used as the reference. In addition, the radar-based NCEP Stage IV precipitation data are also model-fitted to verify the effectiveness of the multiplicative error model. The results show that winter total bias is dominated by the missed precipitation over the west coastal areas and the Rocky Mountains, and the false precipitation over large areas in the Midwest. The summer total bias largely comes from the hit bias in the central US. Meanwhile, the new version (V7) tends to produce more rainfall at the higher rain rates, which moderates the significant underestimation exhibited in the previous V6 products. Moreover, the error analysis from the multiplicative error model provides a clear and concise picture of the systematic and random errors, with both versions of 3B42RT having higher errors, in varying degrees, than their research (post-real-time) counterparts. The new V7 algorithm shows obvious improvements in reducing random errors in both winter and summer seasons, compared to its predecessor V6. Stage IV, as expected, surpasses the satellite-based datasets in all the metrics over CONUS. Based on the results, we recommend the new procedure be adopted for routine validation of satellite-based precipitation datasets, and we expect the procedure will work effectively for higher resolution data to be produced in the Global Precipitation Measurement (GPM) era.
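
    The first step of the proposed procedure, splitting total bias into hit bias, false precipitation, and missed precipitation, can be sketched as below; the daily values are invented and the rain threshold is an assumption, not the authors' setting:

      import numpy as np

      def decompose_total_bias(sat, ref, rain_thresh=0.0):
          """Split total bias (sat - ref, summed over days) into hit bias, false precipitation
          and missed precipitation, following the hit/false/missed decomposition described above."""
          sat, ref = np.asarray(sat, float), np.asarray(ref, float)
          hits    = (sat > rain_thresh) & (ref > rain_thresh)
          false_  = (sat > rain_thresh) & (ref <= rain_thresh)
          missed  = (sat <= rain_thresh) & (ref > rain_thresh)
          hit_bias    = np.sum(sat[hits] - ref[hits])
          false_prec  = np.sum(sat[false_])
          missed_prec = np.sum(ref[missed])
          total_bias = hit_bias + false_prec - missed_prec
          return total_bias, hit_bias, false_prec, missed_prec

      # Hypothetical daily rainfall (mm): satellite estimate vs. gauge reference.
      sat = [0.0, 5.2, 0.0, 12.0, 3.1, 0.0]
      ref = [0.0, 4.0, 2.5, 10.5, 0.0, 1.0]
      print(decompose_total_bias(sat, ref))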

  14. What are incident reports telling us? A comparative study at two Australian hospitals of medication errors identified at audit, detected by staff and reported to an incident system

    PubMed Central

    Westbrook, Johanna I.; Li, Ling; Lehnbom, Elin C.; Baysari, Melissa T.; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O.

    2015-01-01

    Objectives To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Design Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as ‘clinically important’. Setting Two major academic teaching hospitals in Sydney, Australia. Main Outcome Measures Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. Results A total of 12 567 prescribing errors were identified at audit. Of these, 1.2/1000 errors (95% CI: 0.6–1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0–253.8), but only 13.0/1000 (95% CI: 3.4–22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4–28.4%) contained ≥1 error; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Conclusions Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. PMID:25583702

  15. Human error identification for laparoscopic surgery: Development of a motion economy perspective.

    PubMed

    Al-Hakim, Latif; Sevdalis, Nick; Maiping, Tanaphon; Watanachote, Damrongpan; Sengupta, Shomik; Dissaranan, Charuspong

    2015-09-01

    This study postulates that traditional human error identification techniques fail to consider motion economy principles and, accordingly, their applicability in operating theatres may be limited. This study addresses this gap in the literature with a dual aim. First, it identifies the principles of motion economy that suit the operative environment and second, it develops a new error mode taxonomy for human error identification techniques which recognises motion economy deficiencies affecting the performance of surgeons and predisposing them to errors. A total of 30 principles of motion economy were developed and categorised into five areas. A hierarchical task analysis was used to break down main tasks of a urological laparoscopic surgery (hand-assisted laparoscopic nephrectomy) to their elements and the new taxonomy was used to identify errors and their root causes resulting from violation of motion economy principles. The approach was prospectively tested in 12 observed laparoscopic surgeries performed by 5 experienced surgeons. A total of 86 errors were identified and linked to the motion economy deficiencies. Results indicate the developed methodology is promising. Our methodology allows error prevention in surgery and the developed set of motion economy principles could be useful for training surgeons on motion economy principles. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  16. SURBAL: computerized metes and bounds surveying

    Treesearch

    Roger N. Baughman; James H. Patric

    1970-01-01

    A computer program has been developed at West Virginia University for use in metes and bounds surveying. Stations, slope distances, slope angles, and bearings are primary information needed for this program. Other information needed may include magnetic deviation, acceptable closure error, desired map scale, and title designation. SURBAL prints out latitudes and...
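
    The closure computation such a program performs reduces each course to a latitude and a departure and compares their sums; a minimal sketch, assuming distances already reduced to horizontal and bearings expressed as azimuths (SURBAL's actual input handling differs), is:

      import math

      # Hypothetical traverse: (azimuth in degrees from north, horizontal distance in feet).
      # SURBAL works from stations, slope distances, slope angles and bearings; this sketch assumes
      # distances have already been reduced to horizontal and bearings converted to azimuths.
      courses = [(10.0, 300.0), (95.0, 250.0), (190.0, 310.0), (278.0, 240.0)]

      sum_lat = sum(d * math.cos(math.radians(az)) for az, d in courses)   # north-south components
      sum_dep = sum(d * math.sin(math.radians(az)) for az, d in courses)   # east-west components

      closure_error = math.hypot(sum_lat, sum_dep)   # how far the traverse misses its starting point
      perimeter = sum(d for _, d in courses)
      print(f"closure error = {closure_error:.2f} ft, precision 1:{perimeter / closure_error:.0f}")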

  17. Selection Practices of Group Leaders: A National Survey.

    ERIC Educational Resources Information Center

    Riva, Maria T.; Lippert, Laurel; Tackett, M. Jan

    2000-01-01

    Study surveys the selection practices of group leaders. Explores methods of selection, variables used to make selection decisions, and the types of selection errors that leaders have experienced. Results suggest that group leaders use clinical judgment to make selection decisions and endorse using some specific variables in selection. (Contains 22…

  18. Assessment of error rates in acoustic monitoring with the R package monitoR

    USGS Publications Warehouse

    Katz, Jonathan; Hafner, Sasha D.; Donovan, Therese

    2016-01-01

    Detecting population-scale reactions to climate change and land-use change may require monitoring many sites for many years, a process that is suited for an automated system. We developed and tested monitoR, an R package for long-term, multi-taxa acoustic monitoring programs. We tested monitoR with two northeastern songbird species: black-throated green warbler (Setophaga virens) and ovenbird (Seiurus aurocapilla). We compared detection results from monitoR in 52 10-minute surveys recorded at 10 sites in Vermont and New York, USA to a subset of songs identified by a human that were of a single song type and had visually identifiable spectrograms (e.g. a signal:noise ratio of at least 10 dB: 166 out of 439 total songs for black-throated green warbler, 502 out of 990 total songs for ovenbird). monitoR’s automated detection process uses a ‘score cutoff’, which is the minimum match needed for an unknown event to be considered a detection and results in a true positive, true negative, false positive or false negative detection. At the chosen score cut-offs, monitoR correctly identified presence for black-throated green warbler and ovenbird in 64% and 72% of the 52 surveys using binary point matching, respectively, and 73% and 72% of the 52 surveys using spectrogram cross-correlation, respectively. Of individual songs, 72% of black-throated green warbler songs and 62% of ovenbird songs were identified by binary point matching. Spectrogram cross-correlation identified 83% of black-throated green warbler songs and 66% of ovenbird songs. False positive rates were  for song event detection.
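
    The score-cutoff logic described here amounts to classifying each candidate event as a true/false positive or negative once a human-verified label is available; a minimal sketch with invented scores and labels (not monitoR's implementation) is:

      # Hypothetical template-matching scores for candidate events, with human-verified labels.
      # (True means the event really is the target song; scores and labels are illustrative only.)
      events = [(0.92, True), (0.41, False), (0.77, True), (0.55, True), (0.60, False), (0.30, False)]

      def confusion_at_cutoff(events, score_cutoff):
          """Count true/false positives and negatives for a given minimum-match score cutoff."""
          tp = sum(1 for s, lab in events if s >= score_cutoff and lab)
          fp = sum(1 for s, lab in events if s >= score_cutoff and not lab)
          fn = sum(1 for s, lab in events if s < score_cutoff and lab)
          tn = sum(1 for s, lab in events if s < score_cutoff and not lab)
          return tp, fp, fn, tn

      for cutoff in (0.5, 0.7):
          print(cutoff, confusion_at_cutoff(events, cutoff))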

  19. From Hype to an Operational Tool: Efforts to Establish a Long-Term Monitoring Protocol of Alluvial Sandbars using `Structure-from-Motion' Photogrammetry

    NASA Astrophysics Data System (ADS)

    Rossi, R.; Buscombe, D.; Grams, P. E.; Schmidt, J. C.; Wheaton, J. M.

    2016-12-01

    Despite recent advances in the use of `Structure-from-Motion' (SfM) photogrammetry to accurately map landforms, its utility for reliably detecting and monitoring geomorphic change from repeat surveys remains underexplored in fluvial environments. It is unclear how the combination of various image acquisition platforms and techniques, survey scales, vegetation cover, and terrain complexities translate into accuracy and precision metrics for SfM-based construction of digital elevation models (DEMs) of fluvial landforms. Although unmanned aerial vehicles offer the potential to rapidly image large areas, they can be relatively costly, require skilled operators, are vulnerable in adverse weather conditions, and often rely on GPS-positioning to improve their stability. This research details image acquisition techniques for an underrepresented SfM platform: the pole-mounted camera. We highlight image acquisition and post-processing limitations of the SfM method for alluvial sandbars (10s to 100s m2) located in Marble and Grand Canyons in a remote, fluvial landscape with limited field access, strong light gradients, highly variable surface texture and limited ground control. We recommend a pole-based SfM protocol and evaluate it by comparing SfM-derived DEMs against concurrent, total station surveys. Error models of the sandbar surfaces are developed for a variety of surface characteristics (e.g., bare sand, steep slopes, and areas of shadow). The Geomorphic Change Detection (GCD) Software is used to compare SfM DEMs from before and after the 2014 high flow release from Glen Canyon Dam. Complementing existing total-station based sandbar surveys with potentially more efficient and cost-effective SfM methods will contribute to the understanding of morphodynamic responses of sandbars to high flow releases from Glen Canyon Dam. In addition, the development and implementation of a SfM-based operational method for monitoring geomorphic change will provide a methodological foundation for extending the approach to other fluvial environments.
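
    A common way to separate real morphodynamic change from survey noise when differencing repeat DEMs is to threshold the DEM of difference by the propagated elevation uncertainty of the two surveys; the sketch below illustrates that idea on hypothetical profiles and error values, and is not the authors' GCD workflow:

      import numpy as np

      def significant_change(dem_new, dem_old, sigma_new, sigma_old, t=1.96):
          """DEM of difference, masked where |dz| is below the propagated elevation uncertainty.
          Uses the common threshold t * sqrt(sigma_new^2 + sigma_old^2) from change-detection practice."""
          dod = np.asarray(dem_new) - np.asarray(dem_old)
          min_detectable = t * np.sqrt(sigma_new ** 2 + sigma_old ** 2)
          return np.where(np.abs(dod) >= min_detectable, dod, np.nan), min_detectable

      # Hypothetical 1-D elevation profiles (m) before/after a high-flow release, with per-survey errors.
      before = np.array([101.2, 101.5, 101.9, 102.3, 102.0])
      after  = np.array([101.3, 101.9, 102.5, 102.2, 101.6])
      dod, lod = significant_change(after, before, sigma_new=0.10, sigma_old=0.12)
      print(lod, dod)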

  20. From Hype to an Operational Tool: Efforts to Establish a Long-Term Monitoring Protocol of Alluvial Sandbars using 'Structure-from-Motion' Photogrammetry

    NASA Astrophysics Data System (ADS)

    Rossi, R.; Buscombe, D.; Grams, P. E.; Wheaton, J. M.

    2015-12-01

    Despite recent advances in the use of 'Structure-from-Motion' (SfM) photogrammetry to accurately map landforms, its utility for reliably detecting and monitoring geomorphic change from repeat surveys remains underexplored in fluvial environments. It is unclear how the combination of various image acquisition platforms and techniques, survey scales, vegetation cover, and terrain complexities translate into accuracy and precision metrics for SfM-based construction of digital elevation models (DEMs) of fluvial landforms. Although unmanned aerial vehicles offer the potential to rapidly image large areas, they can be relatively costly, require skilled operators, are vulnerable in adverse weather conditions, and often rely on GPS-positioning to improve their stability. This research details image acquisition techniques for an underrepresented SfM platform: the pole-mounted camera. We highlight image acquisition and post-processing limitations of the SfM method for alluvial sandbars (10s to 100s m2) located in Marble and Grand Canyons in a remote, fluvial landscape with limited field access, strong light gradients, highly variable surface texture and limited ground control. We recommend a pole-based SfM protocol and evaluate it by comparing SfM-derived DEMs against concurrent, total station surveys and TLS derived DEMs. Error models of the sandbar surfaces are developed for a variety of surface characteristics (e.g., bare sand, steep slopes, and areas of shadow). The Geomorphic Change Detection (GCD) Software is used to compare SfM DEMs from before and after the 2014 high flow release from Glen Canyon Dam. Complementing existing total-station based sandbar surveys with potentially more efficient and cost-effective SfM methods will contribute to the understanding of morphodynamic responses of sandbars to high flow releases from Glen Canyon Dam. In addition, the development and implementation of a SfM-based operational protocol for monitoring geomorphic change will provide a methodological foundation for extending the approach to other fluvial environments.
