NASA Technical Reports Server (NTRS)
Clinton, N. J. (Principal Investigator)
1980-01-01
Labeling errors made in the large area crop inventory experiment transition year estimates by Earth Observation Division image analysts are identified and quantified. The analysis was made from a subset of blind sites in six U.S. Great Plains states (Oklahoma, Kansas, Montana, Minnesota, and North and South Dakota). The image interpretation was generally well done, resulting in a total omission error rate of 24 percent and a commission error rate of 4 percent. The largest amount of error was caused by factors beyond the control of the analysts, who were following the interpretation procedures. Odd signatures, the largest error cause group, occurred mostly in areas of moisture abnormality. Multicrop labeling was tabulated, showing the distribution of labeling for all crops.
NASA Technical Reports Server (NTRS)
Auger, Ludovic; Tangborn, Andrew; Atlas, Robert (Technical Monitor)
2002-01-01
A suboptimal Kalman filter system which evolves error covariances in terms of a truncated set of wavelet coefficients has been developed for the assimilation of chemical tracer observations of CH4. The truncation is carried out in such a way that the resolution of the error covariance is reduced only in the zonal direction, where gradients are smaller. Assimilation experiments lasting 24 days and using different degrees of truncation were carried out. These reduced the covariance size by 90, 97 and 99 % and the computational cost of covariance propagation by 80, 93 and 96 %, respectively. The difference in both the error covariance and the tracer field between the truncated and full systems over this period was found to be non-growing in the first case, and growing relatively slowly in the latter two cases. The largest errors in the tracer fields were found to occur in regions of largest zonal gradients in the tracer field.
Rußig, Lorenz L; Schulze, Ralf K W
2013-12-01
The goal of the present study was to develop a theoretical analysis of errors in implant position, which can occur owing to minute registration errors of a reference marker in a cone beam computed tomography volume when inserting an implant with a surgical stent. A virtual dental-arch model was created using anatomic data derived from the literature. Basic trigonometry was used to compute the effects of defined minute registration errors of only one voxel. The errors occurring at the implant's neck and apex, in both the horizontal and vertical directions, were computed for mean ±95% confidence intervals of jaw width and length and typical implant lengths (8, 10 and 12 mm). The largest errors occur in the vertical direction for larger voxel sizes and for greater arch dimensions. For a 10 mm implant in the frontal region, these can amount to a mean of 0.716 mm (range: 0.201-1.533 mm). Horizontal errors at the neck are negligible, with a mean overall deviation of 0.009 mm (range: 0.001-0.034 mm). Errors increase with distance to the registration marker and with voxel size, and are affected by implant length. Our study shows that minute and realistic errors occurring in the automated registration of a reference object have an impact on the implant's position and angulation. These errors occur in the fundamental initial step of the long planning chain; thus, they are critical and users of these systems should be made aware of them. © 2012 John Wiley & Sons A/S.
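The abstract above reduces stent misregistration to basic trigonometry. The following is a generic, hypothetical sketch of that kind of calculation, not the authors' model: it treats a one-voxel registration error as a rigid tilt of the surgical guide about the reference marker, so the displacement grows with distance from the marker; the marker extent, arch distances, and voxel sizes are invented.

```python
# Generic trigonometric sketch (not the authors' model): treat a one-voxel registration
# error at the reference marker as a rigid tilt of the surgical guide about the marker.
# A point at distance d from the marker is then displaced by roughly d * alpha, so the
# implant apex (farther from the marker than the neck) sees the larger error.
import math

def tilt_angle(voxel_mm, marker_extent_mm=10.0):
    """Tilt produced when one edge of the marker is off by one voxel (assumed geometry)."""
    return math.atan2(voxel_mm, marker_extent_mm)

def displacement(distance_mm, alpha):
    """Chord displacement of a point at the given distance from the rotation center."""
    return 2.0 * distance_mm * math.sin(alpha / 2.0)

for voxel in (0.2, 0.4):                     # two hypothetical voxel sizes (mm)
    a = tilt_angle(voxel)
    neck, apex = 40.0, 50.0                  # hypothetical distances from the marker (mm)
    print(f"voxel {voxel} mm: neck error {displacement(neck, a):.2f} mm, "
          f"apex error {displacement(apex, a):.2f} mm")
```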
NASA Technical Reports Server (NTRS)
Tangborn, Andrew; Auger, Ludovic
2003-01-01
A suboptimal Kalman filter system which evolves error covariances in terms of a truncated set of wavelet coefficients has been developed for the assimilation of chemical tracer observations of CH4. This scheme projects the discretized covariance propagation equations and covariance matrix onto an orthogonal set of compactly supported wavelets. The wavelet representation is localized in both location and scale, which allows for efficient representation of the inherently anisotropic structure of the error covariances. The truncation is carried out in such a way that the resolution of the error covariance is reduced only in the zonal direction, where gradients are smaller. Assimilation experiments lasting 24 days and using different degrees of truncation were carried out. These reduced the covariance size by 90, 97 and 99 % and the computational cost of covariance propagation by 80, 93 and 96 %, respectively. The difference in both the error covariance and the tracer field between the truncated and full systems over this period was found to be non-growing in the first case, and growing relatively slowly in the latter two cases. The largest errors in the tracer fields were found to occur in regions of largest zonal gradients in the constituent field. These results indicate that propagation of error covariances for a global two-dimensional data assimilation system is currently feasible. Recommendations for further reduction in computational cost are made with the goal of extending this technique to three-dimensional global assimilation systems.
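As a rough illustration of the covariance-truncation idea described above (not the authors' code or wavelet basis), the sketch below projects a toy one-dimensional error covariance onto an orthonormal Haar basis, discards the fine-scale coefficients, and reports how well the matrix is retained; the grid size, correlation length, and truncation level are assumed.

```python
# Illustrative sketch: project a covariance matrix onto an orthonormal Haar wavelet basis,
# truncate the fine-scale (zonal) coefficients, and measure how much of the matrix survives.
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar transform matrix for n = 2^k, built recursively."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                    # scaling (coarse) functions
    bottom = np.kron(np.eye(n // 2), [1.0, -1.0])   # wavelet (detail) functions
    return np.vstack([top, bottom]) / np.sqrt(2.0)

n = 64                                              # zonal grid points (assumed)
x = np.arange(n)
# Toy homogeneous error covariance with Gaussian-like correlations (assumed length scale)
dist = np.minimum(np.abs(x[:, None] - x[None, :]), n - np.abs(x[:, None] - x[None, :]))
B = np.exp(-(dist / 6.0) ** 2)

W = haar_matrix(n)
Bw = W @ B @ W.T                                    # covariance in wavelet space
keep = n // 8                                       # keep only the coarsest 1/8 of coefficients
Bw_trunc = np.zeros_like(Bw)
Bw_trunc[:keep, :keep] = Bw[:keep, :keep]
B_approx = W.T @ Bw_trunc @ W                       # back to physical space

rel_err = np.linalg.norm(B - B_approx) / np.linalg.norm(B)
print(f"kept {keep*keep}/{n*n} coefficients, relative covariance error {rel_err:.3f}")
```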
NASA Astrophysics Data System (ADS)
Szunyogh, Istvan; Kostelich, Eric J.; Gyarmati, G.; Patil, D. J.; Hunt, Brian R.; Kalnay, Eugenia; Ott, Edward; Yorke, James A.
2005-08-01
The accuracy and computational efficiency of the recently proposed local ensemble Kalman filter (LEKF) data assimilation scheme is investigated on a state-of-the-art operational numerical weather prediction model using simulated observations. The model selected for this purpose is the T62 horizontal-resolution, 28-level version of the Global Forecast System (GFS) of the National Centers for Environmental Prediction. The performance of the data assimilation system is assessed for different configurations of the LEKF scheme. It is shown that a modest-size (40-member) ensemble is sufficient to track the evolution of the atmospheric state with high accuracy. For this ensemble size, the computational time per analysis is less than 9 min on a cluster of PCs. The analyses are extremely accurate in the mid-latitude storm track regions. The largest analysis errors, which are typically much smaller than the observational errors, occur where parametrized physical processes play important roles. Because these are also the regions where model errors are expected to be the largest, limitations of a real-data implementation of the ensemble-based Kalman filter may be easily mistaken for model errors. In light of these results, the importance of testing ensemble-based Kalman filter data assimilation systems on simulated observations is stressed.
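For readers unfamiliar with ensemble-based assimilation, the sketch below shows a minimal stochastic ensemble Kalman filter analysis step with a 40-member ensemble; it is a generic EnKF update, not the localized LEKF scheme of the paper, and the state size, observation operator, and error levels are invented.

```python
# Minimal stochastic ensemble Kalman filter analysis step (perturbed observations).
import numpy as np

rng = np.random.default_rng(0)
n, N, p = 100, 40, 20                      # state dim, ensemble members, observations

X = rng.normal(size=(n, N))                # background ensemble (columns = members)
H = np.zeros((p, n))
H[np.arange(p), np.arange(0, n, n // p)] = 1.0   # observe every 5th state variable
R = 0.5 * np.eye(p)                        # observation-error covariance
y = rng.normal(size=p)                     # observation vector

xb = X.mean(axis=1, keepdims=True)
A = (X - xb) / np.sqrt(N - 1)              # normalized ensemble anomalies
S = H @ A                                  # observation-space anomalies
K = A @ S.T @ np.linalg.inv(S @ S.T + R)   # Kalman gain from the ensemble covariance

# perturbed-observation update: each member assimilates y plus random obs noise
Y = y[:, None] + rng.multivariate_normal(np.zeros(p), R, size=N).T
Xa = X + K @ (Y - H @ X)

print("analysis spread:", Xa.std(axis=1).mean())
```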
Hunter, Chad R R N; Klein, Ran; Beanlands, Rob S; deKemp, Robert A
2016-04-01
Patient motion is a common problem during dynamic positron emission tomography (PET) scans for quantification of myocardial blood flow (MBF). The purpose of this study was to quantify the prevalence of body motion in a clinical setting and evaluate with realistic phantoms the effects of motion on blood flow quantification, including CT attenuation correction (CTAC) artifacts that result from PET-CT misalignment. A cohort of 236 sequential patients was analyzed for patient motion under resting and peak stress conditions by two independent observers. The presence of motion, affected time-frames, and direction of motion was recorded; discrepancy between observers was resolved by consensus review. Based on these results, patient body motion effects on MBF quantification were characterized using the digital NURBS-based cardiac-torso phantom, with characteristic time activity curves (TACs) assigned to the heart wall (myocardium) and blood regions. Simulated projection data were corrected for attenuation and reconstructed using filtered back-projection. All simulations were performed without noise added, and a single CT image was used for attenuation correction and aligned to the early- or late-frame PET images. In the patient cohort, mild motion of 0.5 ± 0.1 cm occurred in 24% and moderate motion of 1.0 ± 0.3 cm occurred in 38% of patients. Motion in the superior/inferior direction accounted for 45% of all detected motion, with 30% in the superior direction. Anterior/posterior motion was predominant (29%) in the posterior direction. Left/right motion occurred in 24% of cases, with similar proportions in the left and right directions. Computer simulation studies indicated that errors in MBF can approach 500% for scans with severe patient motion (up to 2 cm). The largest errors occurred when the heart wall was shifted left toward the adjacent lung region, resulting in a severe undercorrection for attenuation of the heart wall. Simulations also indicated that the magnitude of MBF errors resulting from motion in the superior/inferior and anterior/posterior directions was similar (up to 250%). Body motion effects were more detrimental for higher resolution PET imaging (2 vs 10 mm full-width at half-maximum), and for motion occurring during the mid-to-late time-frames. Motion correction of the reconstructed dynamic image series resulted in significant reduction in MBF errors, but did not account for the residual PET-CTAC misalignment artifacts. MBF bias was reduced further using global partial-volume correction, and using dynamic alignment of the PET projection data to the CT scan for accurate attenuation correction during image reconstruction. Patient body motion can produce MBF estimation errors up to 500%. To reduce these errors, new motion correction algorithms must be effective in identifying motion in the left/right direction, and in the mid-to-late time-frames, since these conditions produce the largest errors in MBF, particularly for high resolution PET imaging. Ideally, motion correction should be done before or during image reconstruction to eliminate PET-CTAC misalignment artifacts.
Expertise effects in the Moses illusion: detecting contradictions with stored knowledge.
Cantor, Allison D; Marsh, Elizabeth J
2017-02-01
People frequently miss contradictions with stored knowledge; for example, readers often fail to notice any problem with a reference to the Atlantic as the largest ocean. Critically, such effects occur even though participants later demonstrate knowing the Pacific is the largest ocean (the Moses Illusion) [Erickson, T. D., & Mattson, M. E. (1981). From words to meaning: A semantic illusion. Journal of Verbal Learning & Verbal Behavior, 20, 540-551]. We investigated whether such oversights disappear when erroneous references contradict information in one's expert domain, material which likely has been encountered many times and is particularly well-known. Biology and history graduate students monitored for errors while answering biology and history questions containing erroneous presuppositions ("In what US state were the forty-niners searching for oil?"). Expertise helped: participants were less susceptible to the illusion and less likely to later reproduce errors in their expert domain. However, expertise did not eliminate the illusion, even when errors were bolded and underlined, meaning that it was unlikely that people simply skipped over errors. The results support claims that people often use heuristics to judge truth, as opposed to directly retrieving information from memory, likely because such heuristics are adaptive and often lead to the correct answer. Even experts sometimes use such shortcuts, suggesting that overlearned and accessible knowledge does not guarantee retrieval of that information.
The influence of LED lighting on task accuracy: time of day, gender and myopia effects
NASA Astrophysics Data System (ADS)
Rao, Feng; Chan, A. H. S.; Zhu, Xi-Fang
2017-07-01
In this research, task errors were obtained during performance of a marker location task in which the markers were shown on a computer screen under nine LED lighting conditions: three illuminances (100, 300 and 500 lx) crossed with three color temperatures (3000, 4500 and 6500 K). A total of 47 students participated voluntarily in these tasks. The results showed that task errors in the morning were small and nearly constant across the nine lighting conditions. However, in the afternoon, the task errors were significantly larger and varied across lighting conditions. The largest errors for the afternoon session occurred when the color temperature was 4500 K and the illuminance 500 lx. There were significant differences between task errors in the morning and afternoon sessions. No significant difference between females and males was found. Task errors for high myopia students were significantly larger than for low myopia students under the same lighting conditions. In summary, the influence of LED lighting on task accuracy during office hours was not gender dependent, but was time-of-day and myopia dependent.
Trommer, J.T.; Loper, J.E.; Hammett, K.M.; Bowman, Georgia
1996-01-01
Hydrologists use several traditional techniques for estimating peak discharges and runoff volumes from ungaged watersheds. However, applying these techniques to watersheds in west-central Florida requires that empirical relationships be extrapolated beyond tested ranges. As a result, there is some uncertainty as to their accuracy. Sixty-six storms in 15 west-central Florida watersheds were modeled using (1) the rational method, (2) the U.S. Geological Survey regional regression equations, (3) the Natural Resources Conservation Service (formerly the Soil Conservation Service) TR-20 model, (4) the Army Corps of Engineers HEC-1 model, and (5) the Environmental Protection Agency SWMM model. The watersheds ranged between fully developed urban and undeveloped natural watersheds. Peak discharges and runoff volumes were estimated using standard or recommended methods for determining input parameters. All model runs were uncalibrated, and the selection of input parameters was not influenced by observed data. The rational method, only used to calculate peak discharges, overestimated 45 storms, underestimated 20 storms and estimated the same discharge for 1 storm. The mean estimation error for all storms indicates the method overestimates the peak discharges. Estimation errors were generally smaller in the urban watersheds and larger in the natural watersheds. The U.S. Geological Survey regression equations provide peak discharges for storms of specific recurrence intervals. Therefore, direct comparison with observed data was limited to sixteen observed storms that had precipitation equivalent to specific recurrence intervals. The mean estimation error for all storms indicates the method overestimates both peak discharges and runoff volumes. Estimation errors were smallest for the larger natural watersheds in Sarasota County, and largest for the small watersheds located in the eastern part of the study area. The Natural Resources Conservation Service TR-20 model overestimated peak discharges for 45 storms and underestimated 21 storms, and overestimated runoff volumes for 44 storms and underestimated 22 storms. The mean estimation error for all storms modeled indicates that the model overestimates peak discharges and runoff volumes. The smaller estimation errors in both peak discharges and runoff volumes were for storms occurring in the urban watersheds, and the larger errors were for storms occurring in the natural watersheds. The HEC-1 model overestimated peak discharge rates for 55 storms and underestimated 11 storms. Runoff volumes were overestimated for 44 storms and underestimated for 22 storms using the Army Corps of Engineers HEC-1 model. The mean estimation error for all the storms modeled indicates that the model overestimates peak discharge rates and runoff volumes. Generally, the smaller estimation errors in peak discharges were for storms occurring in the urban watersheds, and the larger errors were for storms occurring in the natural watersheds. Estimation errors in runoff volumes, however, were smallest for the 3 natural watersheds located in the southernmost part of Sarasota County. The Environmental Protection Agency Storm Water Management model produced similar peak discharges and runoff volumes when using both the Green-Ampt and Horton infiltration methods. Estimated peak discharge and runoff volume data calculated with the Horton method were only slightly higher than those calculated with the Green-Ampt method.
The mean estimation error for all the storms modeled indicates the model using the Green-Ampt infiltration method overestimates peak discharges and slightly underestimates runoff volumes. Using the Horton infiltration method, the model overestimates both peak discharges and runoff volumes. The smaller estimation errors in both peak discharges and runoff volumes were for storms occurring in the five natural watersheds in Sarasota County with the least amount of impervious cover and the lowest slopes. The largest er
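The rational method referred to in the entry above is the simple relation Q = C·i·A. The following is a minimal, hedged sketch with the usual SI unit conversion; the runoff coefficient, intensity, and area are invented example values, not data from the report.

```python
# Rational method: Q [m^3/s] = C * i [mm/h] * A [km^2] / 3.6 (SI unit conversion).
def rational_peak_discharge(c, intensity_mm_per_h, area_km2):
    """Peak discharge in m^3/s from the rational method."""
    return c * intensity_mm_per_h * area_km2 / 3.6

# Example: a 2 km^2 urban watershed (C ~ 0.7) under a 50 mm/h design storm
print(f"Qp = {rational_peak_discharge(0.7, 50.0, 2.0):.1f} m^3/s")
```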
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunter, Chad R. R. N.; Kemp, Robert A. de, E-mail: RAdeKemp@ottawaheart.ca; Klein, Ran
Purpose: Patient motion is a common problem during dynamic positron emission tomography (PET) scans for quantification of myocardial blood flow (MBF). The purpose of this study was to quantify the prevalence of body motion in a clinical setting and evaluate with realistic phantoms the effects of motion on blood flow quantification, including CT attenuation correction (CTAC) artifacts that result from PET–CT misalignment. Methods: A cohort of 236 sequential patients was analyzed for patient motion under resting and peak stress conditions by two independent observers. The presence of motion, affected time-frames, and direction of motion was recorded; discrepancy between observers was resolved by consensus review. Based on these results, patient body motion effects on MBF quantification were characterized using the digital NURBS-based cardiac-torso phantom, with characteristic time activity curves (TACs) assigned to the heart wall (myocardium) and blood regions. Simulated projection data were corrected for attenuation and reconstructed using filtered back-projection. All simulations were performed without noise added, and a single CT image was used for attenuation correction and aligned to the early- or late-frame PET images. Results: In the patient cohort, mild motion of 0.5 ± 0.1 cm occurred in 24% and moderate motion of 1.0 ± 0.3 cm occurred in 38% of patients. Motion in the superior/inferior direction accounted for 45% of all detected motion, with 30% in the superior direction. Anterior/posterior motion was predominant (29%) in the posterior direction. Left/right motion occurred in 24% of cases, with similar proportions in the left and right directions. Computer simulation studies indicated that errors in MBF can approach 500% for scans with severe patient motion (up to 2 cm). The largest errors occurred when the heart wall was shifted left toward the adjacent lung region, resulting in a severe undercorrection for attenuation of the heart wall. Simulations also indicated that the magnitude of MBF errors resulting from motion in the superior/inferior and anterior/posterior directions was similar (up to 250%). Body motion effects were more detrimental for higher resolution PET imaging (2 vs 10 mm full-width at half-maximum), and for motion occurring during the mid-to-late time-frames. Motion correction of the reconstructed dynamic image series resulted in significant reduction in MBF errors, but did not account for the residual PET–CTAC misalignment artifacts. MBF bias was reduced further using global partial-volume correction, and using dynamic alignment of the PET projection data to the CT scan for accurate attenuation correction during image reconstruction. Conclusions: Patient body motion can produce MBF estimation errors up to 500%. To reduce these errors, new motion correction algorithms must be effective in identifying motion in the left/right direction, and in the mid-to-late time-frames, since these conditions produce the largest errors in MBF, particularly for high resolution PET imaging. Ideally, motion correction should be done before or during image reconstruction to eliminate PET-CTAC misalignment artifacts.
Dudley, Robert W.
2015-12-03
The largest average errors of prediction are associated with regression equations for the lowest streamflows derived for months during which the lowest streamflows of the year occur (such as the 5 and 1 monthly percentiles for August and September). The regression equations have been derived on the basis of streamflow and basin characteristics data for unregulated, rural drainage basins without substantial streamflow or drainage modifications (for example, diversions and (or) regulation by dams or reservoirs, tile drainage, irrigation, channelization, and impervious paved surfaces); therefore, using the equations for regulated or urbanized basins with substantial streamflow or drainage modifications will yield results of unknown error. Input basin characteristics derived using techniques or datasets other than those documented in this report, or using values outside the ranges used to develop these regression equations, also will yield results of unknown error.
NASA Astrophysics Data System (ADS)
Dai, Liyun; Che, Tao; Ding, Yongjian; Hao, Xiaohua
2017-08-01
Snow cover on the Qinghai-Tibetan Plateau (QTP) plays a significant role in the global climate system and is an important water resource for rivers in the high-elevation region of Asia. At present, passive microwave (PMW) remote sensing data are the only efficient way to monitor temporal and spatial variations in snow depth at large scale. However, existing snow depth products show the largest uncertainties across the QTP. In this study, MODIS fractional snow cover product, point, line and intense sampling data are synthesized to evaluate the accuracy of snow cover and snow depth derived from PMW remote sensing data and to analyze the possible causes of uncertainties. The results show that the accuracy of snow cover extents varies spatially and depends on the fraction of snow cover. Based on the assumption that grids with MODIS snow cover fraction > 10 % are regarded as snow cover, the overall accuracy in snow cover is 66.7 %, overestimation error is 56.1 %, underestimation error is 21.1 %, commission error is 27.6 % and omission error is 47.4 %. The commission and overestimation errors of snow cover primarily occur in the northwest and southeast areas with low ground temperature. Omission error primarily occurs in cold desert areas with shallow snow, and underestimation error mainly occurs in glacier and lake areas. With the increase of snow cover fraction, the overestimation error decreases and the omission error increases. A comparison between snow depths measured in field experiments, measured at meteorological stations and estimated across the QTP shows that agreement between observation and retrieval improves with an increasing number of observation points in a PMW grid. The misclassification and errors between observed and retrieved snow depth are associated with the relatively coarse resolution of PMW remote sensing, ground temperature, snow characteristics and topography. To accurately understand the variation in snow depth across the QTP, new algorithms should be developed to retrieve snow depth with higher spatial resolution and should consider the variation in brightness temperatures at different frequencies emitted from ground with changing ground features.
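The accuracy, commission and omission figures quoted above come from a grid-cell comparison of retrieved versus reference snow presence. A minimal sketch of that bookkeeping on synthetic masks is given below; the exact over- and underestimation definitions used by the authors may differ from the standard commission/omission forms shown here.

```python
# Sketch of snow-cover accuracy bookkeeping from a 2x2 comparison between a reference
# snow mask (e.g. MODIS-derived) and a retrieved snow mask (e.g. PMW-derived).
import numpy as np

def snow_accuracy(retrieved_snow, reference_snow):
    """Both inputs: boolean arrays over grid cells."""
    tp = np.sum(retrieved_snow & reference_snow)      # both report snow
    fp = np.sum(retrieved_snow & ~reference_snow)     # retrieval says snow, reference does not
    fn = np.sum(~retrieved_snow & reference_snow)     # retrieval misses reference snow
    tn = np.sum(~retrieved_snow & ~reference_snow)
    return {
        "overall_accuracy": (tp + tn) / (tp + fp + fn + tn),
        "commission_error": fp / (tp + fp),   # fraction of retrieved snow cells that are wrong
        "omission_error": fn / (tp + fn),     # fraction of reference snow cells that are missed
    }

rng = np.random.default_rng(1)
reference = rng.random(10000) < 0.3                   # synthetic reference snow mask
retrieved = reference ^ (rng.random(10000) < 0.15)    # synthetic retrieval, 15% disagreement
print(snow_accuracy(retrieved, reference))
```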
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pogson, E; Liverpool and Macarthur Cancer Therapy Centres, Liverpool, NSW; Ingham Institute for Applied Medical Research, Sydney, NSW
Purpose: To quantify the impact of differing magnitudes of simulated linear accelerator errors on the dose to the target volume and organs at risk for nasopharynx VMAT. Methods: Ten nasopharynx cancer patients were retrospectively replanned twice with one full-arc VMAT by two institutions. Treatment uncertainties (gantry angle and collimator in degrees, MLC field size and MLC shifts in mm) were introduced into these plans at increments of 5, 2, 1, −1, −2 and −5. This was completed using an in-house Python script within Pinnacle3 and analysed using 3DVH and MatLab. The mean and maximum dose were calculated for the Planning Target Volume (PTV1), parotids, brainstem, and spinal cord and then compared to the original baseline plan. The D1cc was also calculated for the spinal cord and brainstem. Patient-average results were compared across institutions. Results: Introduced gantry angle errors had the smallest effect on dose; no tolerances were exceeded for one institution, and the second institution's VMAT plans exceeded tolerances only for gantry angles of ±5°, affecting different-sided parotids by 14–18%. PTV1, brainstem and spinal cord tolerances were exceeded for collimator angles of ±5 degrees, and for MLC shifts and MLC field sizes of ±1 and beyond, at the first institution. At the second institution, sensitivity to errors was marginally higher for some errors, including the collimator error producing doses exceeding tolerances above ±2 degrees, and marginally lower for others, with tolerances exceeded above MLC shifts of ±2. The largest differences occur with MLC field sizes, with both institutions reporting exceeded tolerances for all introduced errors (±1 and beyond). Conclusion: The plan robustness of VMAT nasopharynx plans has been demonstrated. Gantry errors have the least impact on patient doses; however, MLC field sizes exceed tolerances even with relatively low introduced errors and also produce the largest errors. This was consistent across both departments. The authors acknowledge funding support from the NSW Cancer Council.
Pierson, T.C.
2007-01-01
Dating of dynamic, young (<500 years) geomorphic landforms, particularly volcanofluvial features, requires higher precision than is possible with radiocarbon dating. Minimum ages of recently created landforms have long been obtained from tree-ring ages of the oldest trees growing on new surfaces. But to estimate the year of landform creation requires that two time corrections be added to tree ages obtained from increment cores: (1) the time interval between stabilization of the new landform surface and germination of the sampled trees (germination lag time or GLT); and (2) the interval between seedling germination and growth to sampling height, if the trees are not cored at ground level. The sum of these two time intervals is the colonization time gap (CTG). Such time corrections have been needed for more precise dating of terraces and floodplains in lowland river valleys in the Cascade Range, where significant eruption-induced lateral shifting and vertical aggradation of channels can occur over years to decades, and where timing of such geomorphic changes can be critical to emergency planning. Earliest colonizing Douglas fir (Pseudotsuga menziesii) were sampled for tree-ring dating at eight sites on lowland (<750 m a.s.l.), recently formed surfaces of known age near three Cascade volcanoes - Mount Rainier, Mount St. Helens and Mount Hood - in southwestern Washington and northwestern Oregon. Increment cores or stem sections were taken at breast height and, where possible, at ground level from the largest, oldest-looking trees at each study site. At least ten trees were sampled at each site unless the total of early colonizers was less. Results indicate that a correction of four years should be used for GLT and 10 years for CTG if the single largest (and presumed oldest) Douglas fir growing on a surface of unknown age is sampled. This approach would have a potential error of up to 20 years. Error can be reduced by sampling the five largest Douglas fir instead of the single largest. A GLT correction of 5 years should be added to the mean ring-count age of the five largest trees growing on the surface being dated, if the trees are cored at ground level. This correction would have an approximate error of ±5 years. If the trees are cored at about 1.4 m above the ground surface (breast height), a CTG correction of 11 years should be added to the mean age of the five sampled trees (with an error of about ±7 years).
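The correction described above is simple arithmetic; a minimal sketch follows, using the study's five-tree corrections (GLT of 5 years at ground level, CTG of 11 years at breast height) and invented ring counts.

```python
# Arithmetic sketch of the tree-ring age correction: add GLT or CTG to the mean
# ring-count age of the five largest Douglas fir, depending on coring height.
def landform_age(ring_count_ages, cored_at_breast_height=True):
    mean_age = sum(ring_count_ages) / len(ring_count_ages)
    correction = 11 if cored_at_breast_height else 5   # CTG vs GLT, from the study
    return mean_age + correction

five_largest = [212, 208, 205, 199, 196]               # hypothetical ring counts
print(f"estimated surface age: {landform_age(five_largest):.0f} yr (error ~ +/-7 yr)")
```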
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borot de Battisti, M; Maenhout, M; Lagendijk, J J W
Purpose: To develop a new method which adaptively determines the optimal needle insertion sequence for HDR prostate brachytherapy involving divergent needle-by-needle dose delivery by e.g. a robotic device. A needle insertion sequence is calculated at the beginning of the intervention and updated after each needle insertion with feedback on needle positioning errors. Methods: Needle positioning errors and anatomy changes may occur during HDR brachytherapy which can lead to errors in the delivered dose. A novel strategy was developed to calculate and update the needle sequence and the dose plan after each needle insertion with feedback on needle positioning errors. The dose plan optimization was performed by numerical simulations. The proposed needle sequence determination optimizes the final dose distribution based on the dose coverage impact of each needle. This impact is predicted stochastically by needle insertion simulations. HDR procedures were simulated with varying number of needle insertions (4 to 12) using 11 patient MR data-sets with PTV, prostate, urethra, bladder and rectum delineated. Needle positioning errors were modeled by random normally distributed angulation errors (standard deviation of 3 mm at the needle's tip). The final dose parameters were compared in the situations where the needle with the largest vs. the smallest dose coverage impact was selected at each insertion. Results: Over all scenarios, the percentage of clinically acceptable final dose distribution improved when the needle selected had the largest dose coverage impact (91%) compared to the smallest (88%). The differences were larger for few (4 to 6) needle insertions (maximum difference scenario: 79% vs. 60%). The computation time of the needle sequence optimization was below 60s. Conclusion: A new adaptive needle sequence determination for HDR prostate brachytherapy was developed. Coupled to adaptive planning, the selection of the needle with the largest dose coverage impact increases chances of reaching the clinical constraints. M. Borot de Battisti is funded by Philips Medical Systems Nederland B.V.; M. Moerland is principal investigator on a contract funded by Philips Medical Systems Nederland B.V.; G. Hautvast and D. Binnekamp are fulltime employees of Philips Medical Systems Nederland B.V.
Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Egri, Ádám; Horváth, Gábor
2016-07-01
The theory of sky-polarimetric Viking navigation has been widely accepted for decades without any information about the accuracy of this method. Previously, we have measured the accuracy of the first and second steps of this navigation method in psychophysical laboratory and planetarium experiments. Now, we have tested the accuracy of the third step in a planetarium experiment, assuming that the first and second steps are errorless. Using the fists of their outstretched arms, 10 test persons had to estimate the elevation angles (measured in numbers of fists and fingers) of black dots (representing the position of the occluded Sun) projected onto the planetarium dome. The test persons performed 2400 elevation estimations, 48% of which were more accurate than ±1°. We selected three test persons with the (i) largest and (ii) smallest elevation errors and (iii) highest standard deviation of the elevation error. From the errors of these three persons, we calculated their error function, from which the North errors (the angles with which they deviated from the geographical North) were determined for summer solstice and spring equinox, two specific dates of the Viking sailing period. The range of possible North errors ΔωN was the lowest and highest at low and high solar elevations, respectively. At high elevations, the maximal ΔωN was 35.6° and 73.7° at summer solstice and 23.8° and 43.9° at spring equinox for the best and worst test person (navigator), respectively. Thus, the best navigator was twice as good as the worst one. At solstice and equinox, high elevations occur the most frequently during the day, thus high North errors could occur more frequently than expected before. According to our findings, the ideal periods for sky-polarimetric Viking navigation are immediately after sunrise and before sunset, because the North errors are the lowest at low solar elevations.
Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Egri, Ádám
2016-01-01
The theory of sky-polarimetric Viking navigation has been widely accepted for decades without any information about the accuracy of this method. Previously, we have measured the accuracy of the first and second steps of this navigation method in psychophysical laboratory and planetarium experiments. Now, we have tested the accuracy of the third step in a planetarium experiment, assuming that the first and second steps are errorless. Using the fists of their outstretched arms, 10 test persons had to estimate the elevation angles (measured in numbers of fists and fingers) of black dots (representing the position of the occluded Sun) projected onto the planetarium dome. The test persons performed 2400 elevation estimations, 48% of which were more accurate than ±1°. We selected three test persons with the (i) largest and (ii) smallest elevation errors and (iii) highest standard deviation of the elevation error. From the errors of these three persons, we calculated their error function, from which the North errors (the angles with which they deviated from the geographical North) were determined for summer solstice and spring equinox, two specific dates of the Viking sailing period. The range of possible North errors ΔωN was the lowest and highest at low and high solar elevations, respectively. At high elevations, the maximal ΔωN was 35.6° and 73.7° at summer solstice and 23.8° and 43.9° at spring equinox for the best and worst test person (navigator), respectively. Thus, the best navigator was twice as good as the worst one. At solstice and equinox, high elevations occur the most frequently during the day, thus high North errors could occur more frequently than expected before. According to our findings, the ideal periods for sky-polarimetric Viking navigation are immediately after sunrise and before sunset, because the North errors are the lowest at low solar elevations. PMID:27493566
Analysis/forecast experiments with a flow-dependent correlation function using FGGE data
NASA Technical Reports Server (NTRS)
Baker, W. E.; Bloom, S. C.; Carus, H.; Nestler, M. S.
1986-01-01
The use of a flow-dependent correlation function to improve the accuracy of an optimum interpolation (OI) scheme is examined. The development of the correlation function for the OI analysis scheme used for numerical weather prediction is described. The scheme uses a multivariate surface analysis over the oceans to model the pressure-wind error cross-correlation, and it has the ability to use an error correlation function that is flow- and geographically-dependent. A series of four-day data assimilation experiments, conducted from January 5-9, 1979, was used to investigate the effect of the different features of the OI scheme (error correlation) on forecast skill for the barotropic lows and highs. The skill of the OI was compared with that of a successive correction method (SCM) of analysis. It is observed that the largest difference in the correlation statistics occurred in barotropic and baroclinic lows and highs. The comparison reveals that the OI forecasts were more accurate than the SCM forecasts.
The resolution of identity and chain of spheres approximations for the LPNO-CCSD singles Fock term
NASA Astrophysics Data System (ADS)
Izsák, Róbert; Hansen, Andreas; Neese, Frank
2012-10-01
In the present work, the RIJCOSX approximation, developed earlier for accelerating the SCF procedure, is applied to one of the limiting factors of LPNO-CCSD calculations: the evaluation of the singles Fock term. It turns out that the introduction of RIJCOSX in the evaluation of the closed-shell LPNO-CCSD singles Fock term causes errors below the microhartree limit. If the proposed procedure is also combined with RIJCOSX in the SCF, a somewhat larger error occurs, but reaction energy errors still remain negligible. The speedup for the singles Fock term only is about 9-10 fold for the largest basis set applied. For the case of Penicillin using the def2-QZVPP basis set, a single-point energy evaluation takes 2 days and 16 h on a single processor, leading to a total speedup of 2.6 as compared to a fully analytic calculation. Using eight processors, the same calculation takes only 14 h.
Verification of concentration time formulae accuracy in Southern Brazil
NASA Astrophysics Data System (ADS)
Freitas Ferreira, Pedro; Allasia, Daniel; Herbstrith Froemming, Gabriel; Ribeiro Fontoura, Jessica; Tassi, Rutineia
2016-04-01
The time of concentration (TC) of an urban catchment is a fundamental watershed parameter used to compute the peak discharge and/or in the hydrological simulation of sewer systems. In the absence of hydrological data for its estimation, several empirical formulae are used; however, almost none of them have been verified in Brazil, leading to large uncertainties in the correct value. In this light, several formulae were tested as they are used in Brazil, such as those proposed by Kirpich (and a modification of this equation proposed by the National Transport Bureau of Brazil (DNIT)), the U.S. Corps of Engineers, Pasini, Dooge, Johnstone, Ventura and Ven T. Chow. The verification was accomplished against measured data in 5 sub-basins situated in the Dilúvio basin, a semi-urbanized watershed that contains the most developed area of the city of Porto Alegre. All the rainfall stations were active in the period from the late 1970s until the early 1980s due to the existence of Projeto Dilúvio; today, however, only two of them are still in operation. Porto Alegre is the capital and largest city of the Brazilian southernmost state of Rio Grande do Sul, with a population of approximately 1.6 million inhabitants, the tenth most populous city in the country and the centre of Brazil's fourth largest metropolitan area, with almost 4.5 million inhabitants (IBGE, 2010). The city has a humid subtropical climate with high and regular precipitation throughout the year. Most summer rainfall occurs during thunderstorms and an occasional tropical storm, hurricane or cyclone. The results showed an error of around 70% for half of the formulae, with a tendency to underestimate TC values. Among the tested methods, Johnstone had the best overall result, with an average error of 25%, well ahead of the second, Dooge, with 43% average error. The best results were obtained in only one basin, Dilúvio, the largest one, with an area of 25 km², with an error of just 3% for Modified Kirpich and 5% for Dooge. The results show the necessity of more studies in order to help in the selection of the TC parameter for ungauged basins in Brazil.
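As an illustration of one of the formulae named above, the sketch below evaluates the Kirpich time of concentration in metric form and computes a percent error against a hypothetical observed value; the basin length, slope, and observed TC are invented, and the metric coefficient 0.0195 is the standard conversion of Kirpich's original 0.0078 formula (length in feet).

```python
# Kirpich (1940) time of concentration: tc [min] = 0.0195 * L^0.77 * S^-0.385,
# with channel length L in metres and slope S in m/m, plus a simple percent-error check.
def kirpich_tc_minutes(length_m, slope_m_per_m):
    return 0.0195 * length_m ** 0.77 * slope_m_per_m ** -0.385

def percent_error(estimated, observed):
    return 100.0 * (estimated - observed) / observed

tc_est = kirpich_tc_minutes(length_m=8000.0, slope_m_per_m=0.01)   # hypothetical basin
tc_obs = 180.0                                                     # hypothetical observed tc (min)
print(f"Kirpich tc = {tc_est:.0f} min, error = {percent_error(tc_est, tc_obs):+.0f}%")
```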
Time-dependent gravity in Southern California, May 1974 to April 1979
NASA Technical Reports Server (NTRS)
Whitcomb, J. H.; Franzen, W. O.; Given, J. W.; Pechmann, J. C.; Ruff, L. J.
1980-01-01
The Southern California gravity survey, begun in May 1974 to obtain high spatial and temporal density gravity measurements to be coordinated with long-baseline three-dimensional geodetic measurements of the Astronomical Radio Interferometric Earth Surveying project, is presented. Gravity data were obtained from 28 stations located in and near the seismically active San Gabriel section of the Southern California Transverse Ranges and the adjoining San Andreas Fault at intervals of one to two months, using gravity meters read relative to a base-station standard meter. A single-reading standard deviation of 11 microGal is obtained, which leads to a relative deviation of 16 microGal between stations, with data averaging reducing the standard error to 2 to 3 microGal. The largest gravity variations observed are found to correlate with nearby well-water variations and smoothed rainfall levels, indicating the importance of groundwater variations to gravity measurements. The largest earthquake to occur during the survey, which extended to April 1979, is found to be accompanied, at the station closest to the earthquake, by the largest measured gravity changes that cannot be related to factors other than tectonic distortion.
[Improving blood safety: errors management in transfusion medicine].
Bujandrić, Nevenka; Grujić, Jasmina; Krga-Milanović, Mirjana
2014-01-01
The concept of blood safety includes the entire transfusion chain, starting with the collection of blood from the blood donor and ending with blood transfusion to the patient. The concept involves a quality management system with systematic monitoring of adverse reactions and incidents involving the blood donor or patient. Monitoring of near-miss errors shows the critical points in the working process and increases transfusion safety. The aim of the study was to present the analysis results of adverse and unexpected events in transfusion practice with a potential risk to the health of blood donors and patients. A one-year retrospective study was based on the collection, analysis and interpretation of written reports on medical errors in the Blood Transfusion Institute of Vojvodina. Errors were classified according to their type, frequency and the part of the working process where they occurred. Possible causes and corrective actions were described for each error. The study showed that there were no errors with potential health consequences for the blood donor/patient. Errors with potentially damaging consequences for patients were detected throughout the entire transfusion chain. Most of the errors were identified in the preanalytical phase. The human factor was responsible for the largest number of errors. An error reporting system has an important role in error management and in the reduction of transfusion-related risk of adverse events and incidents. The ongoing analysis reveals the strengths and weaknesses of the entire process and indicates the necessary changes. Errors in transfusion medicine can be avoided in a large percentage of cases, and prevention is cost-effective, systematic and applicable.
Error-Transparent Quantum Gates for Small Logical Qubit Architectures
NASA Astrophysics Data System (ADS)
Kapit, Eliot
2018-02-01
One of the largest obstacles to building a quantum computer is gate error, where the physical evolution of the state of a qubit or group of qubits during a gate operation does not match the intended unitary transformation. Gate error stems from a combination of control errors and random single qubit errors from interaction with the environment. While great strides have been made in mitigating control errors, intrinsic qubit error remains a serious problem that limits gate fidelity in modern qubit architectures. Simultaneously, recent developments of small error-corrected logical qubit devices promise significant increases in logical state lifetime, but translating those improvements into increases in gate fidelity is a complex challenge. In this Letter, we construct protocols for gates on and between small logical qubit devices which inherit the parent device's tolerance to single qubit errors which occur at any time before or during the gate. We consider two such devices, a passive implementation of the three-qubit bit flip code, and the author's own [E. Kapit, Phys. Rev. Lett. 116, 150501 (2016), 10.1103/PhysRevLett.116.150501] very small logical qubit (VSLQ) design, and propose error-tolerant gate sets for both. The effective logical gate error rate in these models displays superlinear error reduction with linear increases in single qubit lifetime, proving that passive error correction is capable of increasing gate fidelity. Using a standard phenomenological noise model for superconducting qubits, we demonstrate a realistic, universal one- and two-qubit gate set for the VSLQ, with error rates an order of magnitude lower than those for same-duration operations on single qubits or pairs of qubits. These developments further suggest that incorporating small logical qubits into a measurement based code could substantially improve code performance.
The continuous UV flux of Alpha Lyrae - Non-LTE results
NASA Technical Reports Server (NTRS)
Snijders, M. A. J.
1977-01-01
Non-LTE calculations for the ultraviolet C I and Si I continuous opacity show that LTE results overestimate the importance of these sources of opacity and underestimate the emergent flux in Alpha Lyr. The largest errors occur between 1100 and 1160 A, where the predicted flux in non-LTE is as much as 50 times larger than in LTE, in reasonable accord with Copernicus observations. The discrepancy between LTE models and observations has been interpreted to result from the existence of a chromosphere. Until a self-consistent non-LTE model atmosphere becomes available, such an interpretation is premature.
Medical Errors and Barriers to Reporting in Ten Hospitals in Southern Iran
Khammarnia, Mohammad; Ravangard, Ramin; Barfar, Eshagh; Setoodehzadeh, Fatemeh
2015-01-01
Background: International research shows that medical errors (MEs) are a major threat to patient safety. The present study aimed to describe MEs and barriers to reporting them in Shiraz public hospitals, Iran. Methods: A cross-sectional, retrospective study was conducted in 10 Shiraz public hospitals in the south of Iran in 2013. Data were gathered in the hospitals using the standardised checklist of Shiraz University of Medical Sciences (errors referred to the Clinical Governance Department and recorded documentation) and the Uribe questionnaire. Results: A total of 4379 MEs were recorded in the 10 hospitals. The highest frequency (27.1%) was related to systematic errors. Most of the errors had occurred in the largest hospital (54.9%), in internal wards (36.3%), and in morning shifts (55.0%). The results revealed a significant association between the MEs and wards and hospitals (p < 0.001). Moreover, individual and organisational factors were the barriers to reporting MEs in the studied hospitals. A significant correlation was also observed between the ME reporting barriers and the participants' job experience (p < 0.001). Conclusion: Medical errors were highly frequent in the studied hospitals, especially in the larger hospitals, in the morning shift and in nursing practice. Moreover, individual and organisational factors were considered barriers to reporting MEs. PMID:28729811
Estimating the densities of benzene-derived explosives using atomic volumes.
Ghule, Vikas D; Nirwan, Ayushi; Devi, Alka
2018-02-09
The application of average atomic volumes to predict the crystal densities of benzene-derived energetic compounds of general formula C_aH_bN_cO_d is presented, along with the reliability of this method. The densities of 119 neutral nitrobenzenes, energetic salts, and cocrystals with diverse compositions were estimated and compared with experimental data. Of the 74 nitrobenzenes for which direct comparisons could be made, the % error in the estimated density was within 0-3% for 54 compounds, 3-5% for 12 compounds, and 5-8% for the remaining 8 compounds. Among 45 energetic salts and cocrystals, the % error in the estimated density was within 0-3% for 25 compounds, 3-5% for 13 compounds, and 5-7.4% for 7 compounds. The absolute error surpassed 0.05 g/cm³ for 27 of the 119 compounds (22%). The largest errors occurred for compounds containing fused rings and for compounds with three -NH2 or -OH groups. Overall, the present approach for estimating the densities of benzene-derived explosives with different functional groups was found to be reliable. Graphical abstract: Application and reliability of average atom volume in the crystal density prediction of energetic compounds containing benzene ring.
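The volume-additivity estimate described above amounts to dividing the molecular mass by the sum of average atomic volumes. A hedged sketch follows; the per-atom volumes are rough illustrative numbers, not the fitted values from the paper.

```python
# Density from volume additivity: rho [g/cm^3] = M [g/mol] / (0.6022 * V [A^3]),
# where 0.6022 = Avogadro's number * 1e-24 converts A^3/molecule to cm^3/mol.
AVOGADRO_PACKING = 0.6022
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
ATOMIC_VOLUME_A3 = {"C": 14.0, "H": 5.5, "N": 12.0, "O": 12.0}   # assumed average volumes

def estimated_density(formula):
    """formula: dict of element -> atom count, e.g. TNT = C7H5N3O6."""
    mass = sum(ATOMIC_MASS[el] * n for el, n in formula.items())         # g/mol
    volume = sum(ATOMIC_VOLUME_A3[el] * n for el, n in formula.items())  # A^3 per molecule
    return mass / (AVOGADRO_PACKING * volume)                            # g/cm^3

tnt = {"C": 7, "H": 5, "N": 3, "O": 6}
print(f"estimated TNT density ~ {estimated_density(tnt):.2f} g/cm^3 "
      f"(reported experimental value is about 1.65)")
```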
How to test validity in orthodontic research: a mixed dentition analysis example.
Donatelli, Richard E; Lee, Shin-Jae
2015-02-01
The data used to test the validity of a prediction method should be different from the data used to generate the prediction model. In this study, we explored whether an independent data set is mandatory for testing the validity of a new prediction method and how validity can be tested without independent new data. Several validation methods were compared in an example using the data from a mixed dentition analysis with a regression model. The validation errors of real mixed dentition analysis data and simulation data were analyzed for increasingly large data sets. The validation results of both the real and the simulation studies demonstrated that the leave-1-out cross-validation method had the smallest errors. The largest errors occurred in the traditional simple validation method. The differences between the validation methods diminished as the sample size increased. The leave-1-out cross-validation method seems to be an optimal validation method for improving the prediction accuracy in a data set with limited sample sizes. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
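To make the comparison above concrete, the sketch below contrasts leave-one-out cross-validation with a single train/test split on a small synthetic regression problem; the variable names and data are illustrative only and do not reproduce the study's mixed dentition measurements.

```python
# Leave-one-out cross-validation versus a single holdout split for a small regression model.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, train_test_split, cross_val_score

rng = np.random.default_rng(0)
n = 40                                            # small sample, as in many clinical data sets
X = rng.normal(size=(n, 2))                       # e.g. predictor tooth-width sums (synthetic)
y = 3.0 + X @ np.array([1.5, -0.7]) + rng.normal(scale=0.5, size=n)

# leave-one-out: every case is predicted by a model trained on all the others
loo_mae = -cross_val_score(LinearRegression(), X, y, cv=LeaveOneOut(),
                           scoring="neg_mean_absolute_error").mean()

# simple (traditional) validation: one random holdout split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
split_mae = np.mean(np.abs(LinearRegression().fit(X_tr, y_tr).predict(X_te) - y_te))

print(f"LOOCV MAE = {loo_mae:.3f}, single-split MAE = {split_mae:.3f}")
```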
Readers of Largest U.S. History Textbooks Discover a Storehouse of Misinformation.
ERIC Educational Resources Information Center
Putka, Gary
1992-01-01
Reports that a Texas advocacy group discovered thousands of errors in U.S. history textbooks. Notes that the books underwent the review after drawing favorable reactions from Texas education officials. Identifies possible explanations for the errors and steps being taken to reduce errors in the future. (SG)
A predictability study of Lorenz's 28-variable model as a dynamical system
NASA Technical Reports Server (NTRS)
Krishnamurthy, V.
1993-01-01
The dynamics of error growth in a two-layer nonlinear quasi-geostrophic model has been studied to gain an understanding of the mathematical theory of atmospheric predictability. The growth of random errors of varying initial magnitudes has been studied, and the relation between this classical approach and the concepts of the nonlinear dynamical systems theory has been explored. The local and global growths of random errors have been expressed partly in terms of the properties of an error ellipsoid and the Liapunov exponents determined by linear error dynamics. The local growth of small errors is initially governed by several modes of the evolving error ellipsoid but soon becomes dominated by the longest axis. The average global growth of small errors is exponential with a growth rate consistent with the largest Liapunov exponent. The duration of the exponential growth phase depends on the initial magnitude of the errors. The subsequent large errors undergo a nonlinear growth with a steadily decreasing growth rate and attain saturation that defines the limit of predictability. The degree of chaos and the largest Liapunov exponent show considerable variation with change in the forcing, which implies that the time variation in the external forcing can introduce variable character to the predictability.
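The error-growth and Liapunov-exponent diagnostics described above can be illustrated on a much smaller system. The sketch below uses the 3-variable Lorenz-63 model as a stand-in for Lorenz's 28-variable model and estimates the largest Liapunov exponent from renormalized error growth (Benettin's method); the parameters and integration settings are conventional choices, not the paper's.

```python
# Estimate the largest Liapunov exponent from the growth of a small perturbation,
# using Lorenz-63 and periodic renormalization of the error vector.
import numpy as np

def lorenz63(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def rk4_step(x, dt):
    k1 = lorenz63(x); k2 = lorenz63(x + 0.5 * dt * k1)
    k3 = lorenz63(x + 0.5 * dt * k2); k4 = lorenz63(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

dt, n_steps, d0 = 0.01, 50000, 1e-8
x = np.array([1.0, 1.0, 1.0])
for _ in range(1000):                       # spin up onto the attractor
    x = rk4_step(x, dt)

y = x + np.array([d0, 0.0, 0.0])            # perturbed trajectory
log_growth = 0.0
for _ in range(n_steps):
    x, y = rk4_step(x, dt), rk4_step(y, dt)
    d = np.linalg.norm(y - x)
    log_growth += np.log(d / d0)
    y = x + (y - x) * (d0 / d)              # renormalize the error back to size d0

print(f"largest Liapunov exponent ~ {log_growth / (n_steps * dt):.2f} (Lorenz-63: ~0.9)")
```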
New empirically-derived solar radiation pressure model for GPS satellites
NASA Technical Reports Server (NTRS)
Bar-Sever, Y.; Kuang, D.
2003-01-01
Solar radiation pressure force is the second largest perturbation acting on GPS satellites, after the gravitational attraction from the Earth, Sun, and Moon. It is the largest error source in the modeling of GPS orbital dynamics.
Estimation of sampling error uncertainties in observed surface air temperature change in China
NASA Astrophysics Data System (ADS)
Hua, Wei; Shen, Samuel S. P.; Weithmann, Alexander; Wang, Huijun
2017-08-01
This study examines the sampling error uncertainties in the monthly surface air temperature (SAT) change in China over recent decades, focusing on the uncertainties of gridded data, national averages, and linear trends. Results indicate that large sampling error variances appear in the station-sparse areas of northern and western China, with maximum values exceeding 2.0 K², while small sampling error variances are found in the station-dense areas of southern and eastern China, with most grid values being less than 0.05 K². In general, temperature anomalies were negative in each month prior to the 1980s, and a warming began thereafter, which accelerated in the early and mid-1990s. An increasing trend in the SAT series was observed for each month of the year, with the largest temperature increase and highest uncertainty of 0.51 ± 0.29 K (10 year)⁻¹ occurring in February and the weakest trend and smallest uncertainty of 0.13 ± 0.07 K (10 year)⁻¹ in August. The sampling error uncertainties in the national average annual mean SAT series are not sufficiently large to alter the conclusion of persistent warming in China. In addition, the sampling error uncertainties in the SAT series show a clear variation compared with other uncertainty estimation methods, which is a plausible reason for the inconsistent variations between our estimate and other studies during this period.
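As a worked illustration of a trend quoted with its uncertainty, the sketch below fits an ordinary least-squares trend to a synthetic annual temperature series and reports it in K per decade with its standard error; it does not reproduce the paper's sampling-error estimation method or data.

```python
# Linear trend of an annual-mean temperature series with its standard error, in K per decade.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1960, 2016)
sat = 0.02 * (years - years[0]) + rng.normal(scale=0.3, size=years.size)  # ~0.2 K/decade + noise

res = stats.linregress(years, sat)
trend_per_decade = 10.0 * res.slope
stderr_per_decade = 10.0 * res.stderr
print(f"trend = {trend_per_decade:.2f} +/- {stderr_per_decade:.2f} K per decade")
```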
NASA Astrophysics Data System (ADS)
González, Pablo J.; Fernández, José
2011-10-01
Interferometric Synthetic Aperture Radar (InSAR) is a reliable technique for measuring crustal deformation. However, despite its long application to geophysical problems, its error estimation has been largely overlooked. Currently, the largest problem with InSAR is still atmospheric propagation errors, which is why multitemporal interferometric techniques, which use a series of interferograms, have been successfully developed. However, none of the standard multitemporal interferometric techniques, namely PS or SB (Persistent Scatterers and Small Baselines, respectively), provides an estimate of its precision. Here, we present a method to compute reliable estimates of the precision of the deformation time series. We implement it for the SB multitemporal interferometric technique (a favorable technique for natural terrains, the most usual target of geophysical applications). We describe the method, which uses a properly weighted scheme that allows us to compute estimates for all interferogram pixels, enhanced by a Monte Carlo resampling technique that properly propagates the interferogram errors (variance-covariances) into the unknown parameters (estimated errors for the displacements). We apply the multitemporal error estimation method to Lanzarote Island (Canary Islands), where no active magmatic activity has been reported in the last decades. We detect deformation around Timanfaya volcano (lengthening of the line of sight, i.e., subsidence), where the last eruption occurred in 1730-1736. The deformation closely follows the surface temperature anomalies, indicating that magma crystallization (cooling and contraction) of the 300-year-old shallow magmatic body under Timanfaya volcano is still ongoing.
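A minimal sketch of the weighted-inversion-plus-Monte-Carlo idea described above follows: small-baseline interferograms are inverted for incremental displacements by weighted least squares, then resampled with noise drawn from assumed interferogram variances to estimate the displacement errors; the network geometry and noise levels are invented.

```python
# Weighted least-squares inversion of a small-baseline network plus Monte Carlo resampling
# of the interferogram noise to propagate errors into the displacement estimates.
import numpy as np

rng = np.random.default_rng(0)
n_dates, n_runs = 6, 500

# design matrix: each interferogram is the sum of the incremental displacements it spans
pairs = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 2), (1, 3), (2, 4), (3, 5)]
A = np.zeros((len(pairs), n_dates - 1))
for k, (i, j) in enumerate(pairs):
    A[k, i:j] = 1.0

truth = np.array([2.0, 1.0, 0.5, 0.5, 1.0])            # mm of motion per interval (synthetic)
sigma = rng.uniform(0.5, 2.0, size=len(pairs))         # per-interferogram noise level (mm)
d_obs = A @ truth + rng.normal(scale=sigma)

W = np.diag(1.0 / sigma**2)                            # weights from interferogram variances
solve = lambda d: np.linalg.solve(A.T @ W @ A, A.T @ W @ d)

samples = np.array([solve(d_obs + rng.normal(scale=sigma)) for _ in range(n_runs)])
m_hat, m_std = solve(d_obs), samples.std(axis=0)
print("incremental displacements:", np.round(m_hat, 2))
print("Monte Carlo 1-sigma errors:", np.round(m_std, 2))
```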
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lobb, Eric, E-mail: eclobb2@gmail.com
2014-04-01
The dosimetric effect of errors in patient position is studied on-phantom as a function of simulated bolus thickness to assess the need for bolus utilization in scalp radiotherapy with tomotherapy. A treatment plan is generated on a cylindrical phantom, mimicking a radiotherapy technique for the scalp utilizing primarily tangential beamlets. A planning target volume with embedded scalplike clinical target volumes (CTVs) is planned to a uniform dose of 200 cGy. Translational errors in phantom position are introduced in 1-mm increments and dose is recomputed from the original sinogram. For each error the maximum dose, minimum dose, clinical target dose homogeneity index (HI), and dose-volume histogram (DVH) are presented for simulated bolus thicknesses from 0 to 10 mm. Baseline HI values for all bolus thicknesses were in the 5.5 to 7.0 range, increasing to a maximum of 18.0 to 30.5 for the largest positioning errors when 0 to 2 mm of bolus is used. Utilizing 5 mm of bolus resulted in a maximum HI value of 9.5 for the largest positioning errors. Using 0 to 2 mm of bolus resulted in minimum and maximum dose values of 85% to 94% and 118% to 125% of the prescription dose, respectively. When using 5 mm of bolus these values were 98.5% and 109.5%. DVHs showed minimal changes in CTV dose coverage when using 5 mm of bolus, even for the largest positioning errors. CTV dose homogeneity becomes increasingly sensitive to errors in patient position as bolus thickness decreases when treating the scalp with primarily tangential beamlets. Performing a radial expansion of the scalp CTV into 5 mm of bolus material minimizes dosimetric sensitivity to errors in patient position as large as 5 mm and is therefore recommended.
NASA Technical Reports Server (NTRS)
Zhang, Liwei Dennis; Milman, Mark; Korechoff, Robert
2004-01-01
The current design of the Space Interferometry Mission (SIM) employs a 19 laser-metrology-beam system (also called the L19 external metrology truss) to monitor changes of distances between the fiducials of the flight system's multiple baselines. The function of the external metrology truss is to aid in the determination of the time-variations of the interferometer baseline. The largest contributor to truss error occurs in SIM wide-angle observations when the articulation of the siderostat mirrors (in order to gather starlight from different sky coordinates) brings to light systematic errors due to offsets at the level of instrument components (which include corner cube retro-reflectors, etc.). This error is labeled external metrology wide-angle field-dependent error. A physics-based model of the field-dependent error at the single-metrology-gauge level is developed and linearly propagated to errors in interferometer delay. In this manner, delay error sensitivity to various error parameters or their combinations can be studied using eigenvalue/eigenvector analysis. Validation of the physics-based field-dependent model on the SIM testbed also lends support to the present approach. As a first example, a dihedral error model is developed for the corner cubes (CCs) attached to the siderostat mirrors. The delay errors due to this effect can then be characterized using the eigenvectors of the composite CC dihedral error. The essence of the linear error model is contained in an error-mapping matrix. A corresponding Zernike component matrix approach is developed in parallel, first for convenience in describing the RMS of errors across the field-of-regard (FOR), and second for convenience in combining with additional models. Average and worst-case residual errors are computed when various orders of field-dependent terms are removed from the delay error. Results for the residual errors are important in arriving at external metrology system component requirements. Double CCs with ideally coincident vertices reside with the siderostat. The non-common vertex error (NCVE) is treated as a second example. Finally, combinations of models and various other errors are discussed.
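The sensitivity study sketched above can be illustrated with a schematic eigenvalue/eigenvector analysis of a linear error-mapping matrix; the matrix below is a random stand-in rather than the SIM model, and the sizes are arbitrary.

```python
import numpy as np

# Hypothetical linear error model: delay_error = M @ p, where p holds
# component-level error parameters (e.g., corner-cube dihedral errors) and
# each row of M corresponds to one line of sight across the field of regard.
rng = np.random.default_rng(0)
n_fields, n_params = 200, 6                      # illustrative sizes only
M = rng.normal(size=(n_fields, n_params))        # stand-in error-mapping matrix

# Eigen-decomposition of the normal matrix M^T M: eigenvectors give the
# parameter combinations the delay error is most (and least) sensitive to,
# eigenvalues give their relative leverage.
evals, evecs = np.linalg.eigh(M.T @ M)
worst_direction = evecs[:, -1]                   # largest-eigenvalue combination

def rms_delay_error(M, p):
    """RMS delay error across the field of regard for parameter vector p."""
    return np.sqrt(np.mean((M @ p) ** 2))

# For a unit parameter vector along the worst direction, the RMS equals
# sqrt(largest eigenvalue / n_fields), which the two printed values confirm.
print(rms_delay_error(M, worst_direction), np.sqrt(evals[-1] / n_fields))
```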
Evaluation of Acoustic Doppler Current Profiler measurements of river discharge
Morlock, S.E.
1996-01-01
The standard deviations of the ADCP measurements ranged from approximately 1 to 6 percent and were generally higher than the measurement errors predicted by error-propagation analysis of ADCP instrument performance. These error-prediction methods assume that the largest component of ADCP discharge measurement error is instrument related. The larger standard deviations indicate that substantial portions of measurement error may be attributable to sources unrelated to ADCP electronics or signal processing and are functions of the field environment.
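A minimal sketch of the comparison implied above, assuming hypothetical instrument error terms combined in quadrature and an illustrative set of repeated transect discharges:

```python
import numpy as np

# Predicted measurement error from instrument-related terms combined in
# quadrature (hypothetical example values, in percent of discharge).
instrument_terms = {"velocity_noise": 0.8, "bottom_track": 0.6, "depth_cell": 0.5}
predicted_pct = np.sqrt(sum(v ** 2 for v in instrument_terms.values()))

# Empirical standard deviation of repeated ADCP transect discharges,
# expressed in percent of the mean (illustrative numbers, m^3/s).
discharges = np.array([412.0, 405.0, 430.0, 396.0, 421.0])
observed_pct = 100.0 * discharges.std(ddof=1) / discharges.mean()

# If observed_pct substantially exceeds predicted_pct, much of the scatter is
# attributable to the field environment rather than the instrument.
print(f"predicted {predicted_pct:.1f}%  observed {observed_pct:.1f}%")
```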
Role-modeling and medical error disclosure: a national survey of trainees.
Martinez, William; Hickson, Gerald B; Miller, Bonnie M; Doukas, David J; Buckley, John D; Song, John; Sehgal, Niraj L; Deitz, Jennifer; Braddock, Clarence H; Lehmann, Lisa Soleymani
2014-03-01
To measure trainees' exposure to negative and positive role-modeling for responding to medical errors and to examine the association between that exposure and trainees' attitudes and behaviors regarding error disclosure. Between May 2011 and June 2012, 435 residents at two large academic medical centers and 1,187 medical students from seven U.S. medical schools received anonymous, electronic questionnaires. The questionnaire asked respondents about (1) experiences with errors, (2) training for responding to errors, (3) behaviors related to error disclosure, (4) exposure to role-modeling for responding to errors, and (5) attitudes regarding disclosure. Using multivariate regression, the authors analyzed whether frequency of exposure to negative and positive role-modeling independently predicted two primary outcomes: (1) attitudes regarding disclosure and (2) nontransparent behavior in response to a harmful error. The response rate was 55% (884/1,622). Training on how to respond to errors had the largest independent, positive effect on attitudes (standardized effect estimate, 0.32, P < .001); negative role-modeling had the largest independent, negative effect (standardized effect estimate, -0.26, P < .001). Positive role-modeling had a positive effect on attitudes (standardized effect estimate, 0.26, P < .001). Exposure to negative role-modeling was independently associated with an increased likelihood of trainees' nontransparent behavior in response to an error (OR 1.37, 95% CI 1.15-1.64; P < .001). Exposure to role-modeling predicts trainees' attitudes and behavior regarding the disclosure of harmful errors. Negative role models may be a significant impediment to disclosure among trainees.
Warrick, J.A.; Mertes, L.A.K.; Siegel, D.A.; Mackenzie, C.
2004-01-01
A technique is presented for estimating suspended sediment concentrations of turbid coastal waters with remotely sensed multi-spectral data. The method improves upon many standard techniques, since it incorporates analyses of multiple wavelength bands (four for the Sea-viewing Wide Field of view Sensor (SeaWiFS)) and a nonlinear calibration, which produce highly accurate results (expected errors are approximately ±10%). Further, potential errors produced by erroneous atmospheric calibration in excessively turbid waters and influences of dissolved organic materials, chlorophyll pigments and atmospheric aerosols are limited by a dark pixel subtraction and removal of the violet to blue wavelength bands. Results are presented for the Santa Barbara Channel, California, where suspended sediment concentrations ranged from 0–200+ mg l⁻¹ (±20 mg l⁻¹) immediately after large river runoff events. The largest plumes were observed 10–30 km off the coast and occurred immediately following large El Niño winter floods.
Interdisciplinary Coordination Reviews: A Process to Reduce Construction Costs.
ERIC Educational Resources Information Center
Fewell, Dennis A.
1998-01-01
Interdisciplinary Coordination design review is instrumental in detecting coordination errors and omissions in construction documents. Cleansing construction documents of interdisciplinary coordination errors reduces time extensions, the largest source of change orders, and limits exposure to liability claims. Improving the quality of design…
On the Limitations of Variational Bias Correction
NASA Technical Reports Server (NTRS)
Moradi, Isaac; Mccarty, Will; Gelaro, Ronald
2018-01-01
Satellite radiances are the largest dataset assimilated into Numerical Weather Prediction (NWP) models; however, the data are subject to errors and uncertainties that need to be accounted for before assimilation into the NWP models. Variational bias correction uses the time series of observation minus background to estimate the observation bias. This technique does not distinguish between the background error, forward operator error, and observation error, so all of these errors are summed together and counted as observation error. We identify some sources of observation error (e.g., antenna emissivity, non-linearity in the calibration, and antenna pattern) and show the limitations of variational bias correction in estimating these errors.
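As a rough illustration of the limitation described above, the sketch below fits observation-minus-background (O-B) innovations to a set of bias predictors; the predictor choices and array shapes are assumptions, and the fitted "bias" necessarily absorbs background and forward-operator errors along with true observation bias.

```python
import numpy as np

def estimate_radiance_bias(obs, background_equiv, predictors):
    """Least-squares fit of O-B innovations to bias predictors.

    obs              : (n_times,) observed brightness temperatures
    background_equiv : (n_times,) H(x_b), background mapped to observation space
    predictors       : (n_times, n_pred) bias predictors (e.g., a constant,
                       scan angle, layer thickness) -- illustrative choices
    """
    innovations = obs - background_equiv                 # O-B time series
    coeffs, *_ = np.linalg.lstsq(predictors, innovations, rcond=None)
    bias_estimate = predictors @ coeffs
    # Note: this estimate lumps background and forward-operator errors in with
    # true observation bias -- the limitation highlighted in the abstract.
    return coeffs, bias_estimate
```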
A statistical approach to determining energetic outer radiation belt electron precipitation fluxes
NASA Astrophysics Data System (ADS)
Simon Wedlund, Mea; Clilverd, Mark A.; Rodger, Craig J.; Cresswell-Moorcock, Kathy; Cobbett, Neil; Breen, Paul; Danskin, Donald; Spanswick, Emma; Rodriguez, Juan V.
2014-05-01
Subionospheric radio wave data from an Antarctic-Arctic Radiation-Belt (Dynamic) Deposition VLF Atmospheric Research Konsortia (AARDDVARK) receiver located in Churchill, Canada, are analyzed to determine the characteristics of electron precipitation into the atmosphere over the range 3 < L < 7. The study advances previous work by combining signals from two U.S. transmitters from 20 July to 20 August 2010, allowing error estimates of the derived electron precipitation fluxes to be calculated, including the application of time-varying electron energy spectral gradients. Electron precipitation observations from the NOAA POES satellites and a ground-based riometer provide intercomparison and context for the AARDDVARK measurements. AARDDVARK radio wave propagation data showed responses suggesting energetic electron precipitation from the outer radiation belt starting 27 July 2010 and lasting ~20 days. The uncertainty in the >30 keV precipitation flux determined by the AARDDVARK technique was found to be ±10%. Peak >30 keV AARDDVARK-derived precipitation fluxes during the main and recovery phases of the largest geomagnetic storm, which started on 4 August 2010, were >10⁵ el cm⁻² s⁻¹ sr⁻¹. The largest fluxes observed by AARDDVARK occurred on the dayside and were delayed by several days from the start of the geomagnetic disturbance. During the main phase of the disturbances, nightside fluxes were dominant. Significant differences in flux estimates between POES, AARDDVARK, and the riometer were found after the main phase of the largest disturbance, with evidence provided to suggest that >700 keV electron precipitation was occurring. Currently the presence of such relativistic electron precipitation introduces some uncertainty into the analysis of AARDDVARK data, given the assumption of a power law electron precipitation spectrum.
Khammarnia, Mohammad; Sharifian, Roxana; Zand, Farid; Barati, Omid; Keshtkaran, Ali; Sabetian, Golnar; Shahrokh, Nasim; Setoodezadeh, Fatemeh
2017-01-01
Background: One way to reduce medical errors associated with physician orders is computerized physician order entry (CPOE) software. This study was conducted to compare prescription orders between 2 groups before and after CPOE implementation in a hospital. Methods: We conducted a before-after prospective study in 2 intensive care unit (ICU) wards (as intervention and control wards) in the largest tertiary public hospital in the south of Iran during 2014 and 2016. All prescription orders were validated by a clinical pharmacist and an ICU physician. The rates of ordering errors in medical orders were compared before (manual ordering) and after implementation of the CPOE. A standard checklist was used for data collection. For the data analysis, SPSS Version 21, descriptive statistics, and analytical tests such as McNemar, chi-square, and logistic regression were used. Results: The CPOE significantly decreased 2 types of errors, illegible orders and lack of writing the drug form, in the intervention ward compared to the control ward (p < 0.05); however, the 2 errors increased due to the defect in the CPOE (p < 0.001). The use of CPOE decreased the prescription errors from 19% to 3% (p = 0.001); however, no differences were observed in the control ward (p < 0.05). In addition, more errors occurred in the morning shift (p < 0.001). Conclusion: In general, the use of CPOE significantly reduced prescription errors. Nonetheless, more caution should be exercised in the use of this system, and its deficiencies should be resolved. Furthermore, it is recommended that CPOE be used to improve the quality of delivered services in hospitals. PMID:29445698
Revised techniques for estimating peak discharges from channel width in Montana
Parrett, Charles; Hull, J.A.; Omang, R.J.
1987-01-01
This study was conducted to develop new estimating equations based on channel width and the updated flood frequency curves of previous investigations. Simple regression equations for estimating peak discharges with recurrence intervals of 2, 5, 10, 25, 50, and 100 years were developed for seven regions in Montana. The standard errors of estimate for the equations that use active channel width as the independent variable ranged from 30% to 87%. The standard errors of estimate for the equations that use bankfull width as the independent variable ranged from 34% to 92%. The smallest standard errors generally occurred in the prediction equations for the 2-yr, 5-yr, and 10-yr floods, and the largest standard errors occurred in the prediction equations for the 100-yr flood. The equations that use active channel width and the equations that use bankfull width were determined to be about equally reliable in five regions. In the West Region, the equations that use bankfull width were slightly more reliable than those based on active channel width, whereas in the East-Central Region the equations that use active channel width were slightly more reliable than those based on bankfull width. Compared with similar equations previously developed, the standard errors of estimate for the new equations are substantially smaller in three regions and substantially larger in two regions. Limitations on the use of the estimating equations include: (1) the equations are based on stable conditions of channel geometry and prevailing water and sediment discharge; (2) the measurement of channel width requires a site visit, preferably by a person with experience in the method, and involves appreciable measurement errors; (3) the reliability of results from the equations for channel widths beyond the range of definition is unknown. In spite of the limitations, the estimating equations derived in this study are considered to be as reliable as estimating equations based on basin and climatic variables. Because the two types of estimating equations are independent, results from each can be weighted inversely proportional to their variances and averaged; the weighted average estimate has a variance less than either individual estimate. (Author's abstract)
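The inverse-variance weighting rule mentioned in the final sentences can be sketched directly; the numbers in the usage example are purely illustrative.

```python
def combine_estimates(q1, var1, q2, var2):
    """Inverse-variance weighted average of two independent discharge estimates
    (e.g., a channel-width equation and a basin-characteristics equation).
    The combined variance is smaller than either input variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    q = (w1 * q1 + w2 * q2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return q, var

# Illustrative numbers only: an 850 ft^3/s peak with 40% standard error combined
# with an independent 920 ft^3/s estimate carrying a 55% standard error.
q, var = combine_estimates(850.0, (0.40 * 850.0) ** 2, 920.0, (0.55 * 920.0) ** 2)
```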
DOE Office of Scientific and Technical Information (OSTI.GOV)
McVicker, A; Oldham, M; Yin, F
2014-06-15
Purpose: To test the ability of the TG-119 commissioning process and RPC credentialing to detect errors in the commissioning process for a commercial Treatment Planning System (TPS). Methods: We introduced commissioning errors into the commissioning process for the Anisotropic Analytical Algorithm (AAA) within the Eclipse TPS. We included errors in Dosimetric Leaf Gap (DLG), electron contamination, flattening filter material, and beam profile measurement with an inappropriately large farmer chamber (simulated using sliding window smoothing of profiles). We then evaluated the clinical impact of these errors on clinical intensity modulated radiation therapy (IMRT) plans (head and neck, low and intermediate risk prostate, mesothelioma, and scalp) by looking at PTV D99, and mean and max OAR dose. Finally, for errors with substantial clinical impact we determined sensitivity of the RPC IMRT film analysis at the midpoint between PTV and OAR using a 4mm distance to agreement metric, and of a 7% TLD dose comparison. We also determined sensitivity of the 3 dose planes of the TG-119 C-shape IMRT phantom using gamma criteria of 3% 3mm. Results: The largest clinical impact came from large changes in the DLG with a change of 1mm resulting in up to a 5% change in the primary PTV D99. This resulted in a discrepancy in the RPC TLDs in the PTVs and OARs of 7.1% and 13.6% respectively, which would have resulted in detection. While use of incorrect flattening filter caused only subtle errors (<1%) in clinical plans, the effect was most pronounced for the RPC TLDs in the OARs (>6%). Conclusion: The AAA commissioning process within the Eclipse TPS is surprisingly robust to user error. When errors do occur, the RPC and TG-119 commissioning credentialing criteria are effective at detecting them; however OAR TLDs are the most sensitive despite the RPC currently excluding them from analysis.
Performance of the Gemini Planet Imager’s adaptive optics system
Poyneer, Lisa A.; Palmer, David W.; Macintosh, Bruce; ...
2016-01-07
The Gemini Planet Imager’s adaptive optics (AO) subsystem was designed specifically to facilitate high-contrast imaging. We give a definitive description of the system’s algorithms and technologies as built. Ultimately, the error budget indicates that for all targets and atmospheric conditions AO bandwidth error is the largest term.
NASA Astrophysics Data System (ADS)
Wu, Kang-Hung; Su, Ching-Lun; Chu, Yen-Hsyang
2015-03-01
In this article, we use the International Reference Ionosphere (IRI) model to simulate temporal and spatial distributions of global E region electron densities retrieved by the FORMOSAT-3/COSMIC satellites by means of the GPS radio occultation (RO) technique. Despite regional discrepancies in the magnitudes of the E region electron density, the IRI model simulations can, on the whole, describe the COSMIC measurements in quality and quantity. On the basis of the global ionosonde network and the IRI model, the retrieval errors of the global COSMIC-measured E region peak electron density (NmE) from July 2006 to July 2011 are examined and simulated. The COSMIC measurement and the IRI model simulation both reveal that the magnitudes of the percentage error (PE) and root-mean-square error (RMSE) of the relative RO retrieval errors of the NmE values are dependent on local time (LT) and geomagnetic latitude, with minima in the early morning and at high latitudes and maxima in the afternoon and at middle latitudes. In addition, the seasonal variation of the PE and RMSE values seems to be latitude dependent. After removing the IRI model-simulated GPS RO retrieval errors from the original COSMIC measurements, the average values of the annual and monthly mean percentage errors of the RO retrieval errors of the COSMIC-measured E region electron density are substantially reduced, by factors of about 2.95 and 3.35, respectively, and the corresponding root-mean-square errors show average decreases of 15.6% and 15.4%, respectively. It is found that, with this process, the largest reductions in the PE and RMSE of the COSMIC-measured NmE occur at the equatorial anomaly latitudes 10°N-30°N in the afternoon from 14 to 18 LT, by factors of 25 and 2, respectively. Statistics show that the residual errors remaining in the corrected COSMIC-measured NmE vary in a range of -20% to 38%, which is comparable to or larger than the percentage errors of the IRI-predicted NmE, which fluctuate in a range of -6.5% to 20%.
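A minimal sketch of the error bookkeeping and correction step described above (percentage error, RMSE, and subtraction of the model-simulated retrieval error), with the array inputs assumed to be matched COSMIC retrievals and ionosonde references:

```python
import numpy as np

def pe_and_rmse(retrieved, reference):
    """Mean percentage error and RMSE of retrieved NmE against a reference
    (e.g., collocated ionosonde-observed NmE)."""
    rel = (retrieved - reference) / reference
    pe = 100.0 * rel.mean()
    rmse = np.sqrt(np.mean((retrieved - reference) ** 2))
    return pe, rmse

def correct_nme(cosmic_nme, simulated_retrieval_error):
    """Correction step as described in the abstract: remove the model-simulated
    RO retrieval error from the original COSMIC measurement before re-scoring."""
    return cosmic_nme - simulated_retrieval_error
```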
Evaluation of three lidar scanning strategies for turbulence measurements
NASA Astrophysics Data System (ADS)
Newman, J. F.; Klein, P. M.; Wharton, S.; Sathe, A.; Bonin, T. A.; Chilson, P. B.; Muschinski, A.
2015-11-01
Several errors occur when a traditional Doppler-beam swinging (DBS) or velocity-azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60 % under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20 % at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.
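The variance-contamination idea can be illustrated schematically as below; this is not the correction developed in the study itself, and the contamination scaling k_contam is a placeholder (0.5 corresponds to assuming vertical-velocity fluctuations at the two opposing beams are uncorrelated).

```python
import numpy as np

def dbs_u_variance(vr_east, vr_west, vr_vert, zenith_angle_deg, k_contam=0.5):
    """Schematic DBS u-variance estimate with a vertical-beam correction.

    vr_east, vr_west : radial-velocity time series of the opposing off-vertical
                       beams (positive away from the lidar)
    vr_vert          : vertical-beam radial velocity time series (~ w)
    k_contam         : placeholder scaling for the contamination term
    """
    theta = np.radians(zenith_angle_deg)
    u = (vr_east - vr_west) / (2.0 * np.sin(theta))   # reconstructed u series
    var_u_raw = u.var(ddof=1)
    # Because the opposing beams sample different volumes, vertical-velocity
    # fluctuations do not cancel exactly and leak into var(u). Estimate that
    # leakage from the vertical beam and subtract it.
    contamination = k_contam * vr_vert.var(ddof=1) * (np.cos(theta) /
                                                      np.sin(theta)) ** 2
    return var_u_raw, max(var_u_raw - contamination, 0.0)
```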
Evaluation of three lidar scanning strategies for turbulence measurements
NASA Astrophysics Data System (ADS)
Newman, Jennifer F.; Klein, Petra M.; Wharton, Sonia; Sathe, Ameya; Bonin, Timothy A.; Chilson, Phillip B.; Muschinski, Andreas
2016-05-01
Several errors occur when a traditional Doppler beam swinging (DBS) or velocity-azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar, and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60 % under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20 % at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.
NASA Astrophysics Data System (ADS)
Duan, Wansuo; Zhao, Peng
2017-04-01
Within the Zebiak-Cane model, the nonlinear forcing singular vector (NFSV) approach is used to investigate the role of model errors in the "Spring Predictability Barrier" (SPB) phenomenon within ENSO predictions. NFSV-related errors have the largest negative effect on the uncertainties of El Niño predictions. NFSV errors can be classified into two types: the first is characterized by a zonal dipolar pattern of SST anomalies (SSTA), with the western poles centered in the equatorial central-western Pacific exhibiting positive anomalies and the eastern poles in the equatorial eastern Pacific exhibiting negative anomalies; and the second is characterized by a pattern almost opposite to the first type. The first type of error tends to have the worst effects on El Niño growth-phase predictions, whereas the latter often yields the largest negative effects on decaying-phase predictions. The evolution of prediction errors caused by NFSV-related errors exhibits prominent seasonality, with the fastest error growth in the spring and/or summer seasons; hence, these errors result in a significant SPB related to El Niño events. The linear counterpart of NFSVs, the (linear) forcing singular vector (FSV), induces a less significant SPB because it contains smaller prediction errors. Random errors cannot generate an SPB for El Niño events. These results show that the occurrence of an SPB is related to the spatial patterns of tendency errors. The NFSV tendency errors cause the most significant SPB for El Niño events. In addition, NFSVs often concentrate these large-value errors in a few areas within the equatorial eastern and central-western Pacific, which likely represent the areas sensitive to El Niño predictions associated with model errors. Meanwhile, these areas are also exactly consistent with the sensitive areas related to initial errors determined by previous studies. This implies that additional observations in the sensitive areas would not only improve the accuracy of the initial field but also promote the reduction of model errors to greatly improve ENSO forecasts.
Henneman, Elizabeth A; Roche, Joan P; Fisher, Donald L; Cunningham, Helene; Reilly, Cheryl A; Nathanson, Brian H; Henneman, Philip L
2010-02-01
This study examined types of errors that occurred or were recovered in a simulated environment by student nurses. Errors occurred in all four rule-based error categories, and all students committed at least one error. The most frequent errors occurred in the verification category. Another common error was related to physician interactions. The least common errors were related to coordinating information with the patient and family. Our finding that 100% of student subjects committed rule-based errors is cause for concern. To decrease errors and improve safe clinical practice, nurse educators must identify effective strategies that students can use to improve patient surveillance. Copyright 2010 Elsevier Inc. All rights reserved.
Concomitant prescribing and dispensing errors at a Brazilian hospital: a descriptive study
Silva, Maria das Dores Graciano; Rosa, Mário Borges; Franklin, Bryony Dean; Reis, Adriano Max Moreira; Anchieta, Lêni Márcia; Mota, Joaquim Antônio César
2011-01-01
OBJECTIVE: To analyze the prevalence and types of prescribing and dispensing errors occurring with high-alert medications and to propose preventive measures to avoid errors with these medications. INTRODUCTION: The prevalence of adverse events in health care has increased, and medication errors are probably the most common cause of these events. Pediatric patients are known to be a high-risk group and are an important target in medication error prevention. METHODS: Observers collected data on prescribing and dispensing errors occurring with high-alert medications for pediatric inpatients in a university hospital. In addition to classifying the types of error that occurred, we identified cases of concomitant prescribing and dispensing errors. RESULTS: One or more prescribing errors, totaling 1,632 errors, were found in 632 (89.6%) of the 705 high-alert medications that were prescribed and dispensed. We also identified at least one dispensing error in each high-alert medication dispensed, totaling 1,707 errors. Among these dispensing errors, 723 (42.4%) content errors occurred concomitantly with the prescribing errors. A subset of dispensing errors may have occurred because of poor prescription quality. The observed concomitancy should be examined carefully because improvements in the prescribing process could potentially prevent these problems. CONCLUSION: The system of drug prescribing and dispensing at the hospital investigated in this study should be improved by incorporating the best practices of medication safety and preventing medication errors. High-alert medications may be used as triggers for improving the safety of the drug-utilization system. PMID:22012039
NASA Technical Reports Server (NTRS)
Tsaoussi, Lucia S.; Koblinsky, Chester J.
1994-01-01
In order to facilitate the use of satellite-derived sea surface topography and velocity in oceanographic models, methodology is presented for deriving the total error covariance and its geographic distribution from TOPEX/POSEIDON measurements. The model is formulated using a parametric model fit to the altimeter range observations. The topography and velocity are modeled with spherical harmonic expansions whose coefficients are found through optimal adjustment to the altimeter range residuals using Bayesian statistics. All other parameters, including the orbit, geoid, surface models, and range corrections, are provided as unadjusted parameters. The maximum likelihood estimates and errors are derived from the probability density function of the altimeter range residuals conditioned with a priori information. Estimates of model errors for the unadjusted parameters are obtained from the TOPEX/POSEIDON postlaunch verification results and the error covariances for the orbit and the geoid, except for the ocean tides. The error in the ocean tides is modeled, first, as the difference between two global tide models and, second, as the correction to the present tide model, the correction being derived from the TOPEX/POSEIDON data. A formal error covariance propagation scheme is used to derive the total error. Our global total error estimate for the TOPEX/POSEIDON topography relative to the geoid for one 10-day period is found to be 11 cm RMS. When the error in the geoid is removed, thereby providing an estimate of the time-dependent error, the uncertainty in the topography is 3.5 cm root mean square (RMS). This level of accuracy is consistent with direct comparisons of TOPEX/POSEIDON altimeter heights with tide gauge measurements at 28 stations. In addition, the error correlation length scales are derived globally in both east-west and north-south directions, which should prove useful for data assimilation. The largest error correlation length scales are found in the tropics. Errors in the velocity field are smallest in midlatitude regions. For both variables the largest errors are caused by uncertainty in the geoid. More accurate representations of the geoid await a dedicated geopotential satellite mission. Substantial improvements in the accuracy of ocean tide models are expected in the very near future from research with TOPEX/POSEIDON data.
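The "formal error covariance propagation scheme" amounts to mapping each unadjusted-parameter covariance through the linearized model and summing the contributions; a generic sketch, with the Jacobians and covariances left as inputs:

```python
import numpy as np

def propagate_covariance(J, C_param):
    """First-order propagation of an unadjusted-parameter error covariance
    C_param into topography space through the Jacobian J: C_topo = J C J^T."""
    return J @ C_param @ J.T

def total_error_covariance(jacobians_and_covs, C_adjusted):
    """Sum the propagated covariances of all unadjusted parameter groups
    (orbit, geoid, tides, range corrections, ...) and the formal covariance of
    the adjusted parameters, assuming the error sources are independent."""
    C = np.array(C_adjusted, dtype=float, copy=True)
    for J, C_param in jacobians_and_covs:
        C += propagate_covariance(J, C_param)
    return C
```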
Creating Illusions of Knowledge: Learning Errors that Contradict Prior Knowledge
ERIC Educational Resources Information Center
Fazio, Lisa K.; Barber, Sarah J.; Rajaram, Suparna; Ornstein, Peter A.; Marsh, Elizabeth J.
2013-01-01
Most people know that the Pacific is the largest ocean on Earth and that Edison invented the light bulb. Our question is whether this knowledge is stable, or if people will incorporate errors into their knowledge bases, even if they have the correct knowledge stored in memory. To test this, we asked participants general-knowledge questions 2 weeks…
Taylor, Diane M; Chow, Fotini K; Delkash, Madjid; Imhoff, Paul T
2016-10-01
Landfills are a significant contributor to anthropogenic methane emissions, but measuring these emissions can be challenging. This work uses numerical simulations to assess the accuracy of the tracer dilution method, which is used to estimate landfill emissions. Atmospheric dispersion simulations with the Weather Research and Forecast model (WRF) are run over Sandtown Landfill in Delaware, USA, using observation data to validate the meteorological model output. A steady landfill methane emissions rate is used in the model, and methane and tracer gas concentrations are collected along various transects downwind from the landfill for use in the tracer dilution method. The calculated methane emissions are compared to the methane emissions rate used in the model to find the percent error of the tracer dilution method for each simulation. The roles of different factors are examined: measurement distance from the landfill, transect angle relative to the wind direction, speed of the transect vehicle, tracer placement relative to the hot spot of methane emissions, complexity of topography, and wind direction. Results show that percent error generally decreases with distance from the landfill, where the tracer and methane plumes become well mixed. Tracer placement has the largest effect on percent error, and topography and wind direction both have significant effects, with measurement errors ranging from -12% to 42% over all simulations. Transect angle and transect speed have small to negligible effects on the accuracy of the tracer dilution method. These tracer dilution method simulations provide insight into measurement errors that might occur in the field, enhance understanding of the method's limitations, and aid interpretation of field data. Copyright © 2016 Elsevier Ltd. All rights reserved.
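For context, here is a minimal sketch of the tracer dilution calculation itself (not the WRF simulation), assuming background-corrected mole fractions sampled at equally spaced points along a downwind transect and an SF6-like tracer; the molar masses are standard textbook values.

```python
import numpy as np

def tracer_dilution_emission(ch4_ppb, tracer_ppb, release_rate_kg_h,
                             molar_mass_ch4=16.04, molar_mass_tracer=146.05):
    """Tracer dilution estimate of a landfill methane emission rate.

    ch4_ppb, tracer_ppb : background-corrected mole fractions measured at the
                          same equally spaced points along a downwind transect
    release_rate_kg_h   : known tracer release rate (SF6 assumed here)
    Assumes the methane and tracer plumes are well mixed at the transect,
    the condition approached far from the landfill in the simulations above.
    """
    plume_ratio = np.trapz(ch4_ppb) / np.trapz(tracer_ppb)  # areas under plumes
    return release_rate_kg_h * plume_ratio * (molar_mass_ch4 / molar_mass_tracer)
```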
Tax revenue and inflation rate predictions in Banda Aceh using Vector Error Correction Model (VECM)
NASA Astrophysics Data System (ADS)
Maulia, Eva; Miftahuddin; Sofyan, Hizir
2018-05-01
Tax revenue and inflation are two key indicators of a country's economic welfare. One of the largest sources of state budget revenue in Indonesia is the tax sector, and the rate of inflation in a country can be used as one measure of the economic problems it faces. Given the importance of tax revenue and inflation-rate control in achieving economic prosperity, it is necessary to analyze their relationship and to forecast both quantities. The Vector Error Correction Model (VECM) was chosen as the method for this research because the data form a multivariate time series. This study aims to produce a VECM model with an optimal lag and to use it to predict tax revenue and the inflation rate. The results show that the best model for the tax revenue and inflation rate data in Banda Aceh City is a VECM with an optimal lag of 3, i.e., VECM(3). Of the seven models formed, one is significant: the income tax revenue model. The predicted tax revenue and inflation rate in Banda Aceh City for the next 6, 12 and 24 periods (months) obtained using VECM(3) are considered valid, since they have the smallest error values compared with the other models.
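A minimal sketch of this workflow using the VECM class in statsmodels, assuming a monthly pandas DataFrame df with hypothetical columns tax_revenue and inflation; the lag of 3 mirrors the VECM(3) selected above.

```python
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_order

def fit_and_forecast(df: pd.DataFrame, steps: int = 6):
    """Fit a VECM to a two-column monthly DataFrame and forecast `steps` months."""
    # Information-criteria comparison of candidate lag orders.
    lag_selection = select_order(df, maxlags=8, deterministic="ci")
    # VECM with 3 lagged differences and one cointegrating relation (assumed).
    model = VECM(df, k_ar_diff=3, coint_rank=1, deterministic="ci")
    result = model.fit()
    forecast = result.predict(steps=steps)
    return lag_selection, result, forecast
```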
32 CFR 1653.3 - Review by the National Appeal Board.
Code of Federal Regulations, 2011 CFR
2011-07-01
... review the file to insure that no procedural errors have occurred during the history of the current claim. Files containing procedural errors will be returned to the board where the errors occurred for any additional processing necessary to correct such errors. (c) Files containing procedural errors that were not...
Application of Intra-Oral Dental Scanners in the Digital Workflow of Implantology
van der Meer, Wicher J.; Andriessen, Frank S.; Wismeijer, Daniel; Ren, Yijin
2012-01-01
Intra-oral scanners will play a central role in digital dentistry in the near future. In this study the accuracy of three intra-oral scanners was compared. Materials and methods: A master model made of stone was fitted with three high precision manufactured PEEK cylinders and scanned with three intra-oral scanners: the CEREC (Sirona), the iTero (Cadent) and the Lava COS (3M). In software the digital files were imported and the distance between the centres of the cylinders and the angulation between the cylinders were assessed. These values were compared to the measurements made on a high accuracy 3D scan of the master model. Results: The distance errors were the smallest and most consistent for the Lava COS. The distance errors for the CEREC were the largest and least consistent. All the angulation errors were small. Conclusions: The Lava COS in combination with a high accuracy scanning protocol resulted in the smallest and most consistent errors of all three scanners tested when considering mean distance errors in full arch impressions, both in absolute values and in consistency, for both measured distances. For the mean angulation errors, the Lava COS had the smallest errors between cylinders 1-2 and the largest errors between cylinders 1-3, although the absolute difference from the smallest mean value (iTero) was very small (0.0529°). An expected increase in distance and/or angular errors over the length of the arch, due to an accumulation of registration errors of the patched 3D surfaces, could be observed in this study design, but the effects were not statistically significant. Clinical relevance: For making impressions of implant cases for digital workflows, the most accurate scanner with the scanning protocol that will ensure the most accurate digital impression should be used. In our study model, that was the Lava COS with the high accuracy scanning protocol. PMID:22937030
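The distance and angulation comparisons can be sketched as below, assuming cylinder centres and axis unit vectors have already been extracted from the scans; the pairing of cylinders 1-2, 1-3 and 2-3 follows the description above, and everything else is illustrative.

```python
import numpy as np

def distance_and_angulation_errors(centres, axes, ref_centres, ref_axes):
    """Compare cylinder geometry from an intra-oral scan against the reference
    (high-accuracy) scan of the master model.

    centres, ref_centres : (3, 3) arrays, one row per cylinder centre (mm)
    axes, ref_axes       : (3, 3) arrays of unit vectors along each cylinder
    Returns per-pair distance errors (mm) and per-cylinder angulation errors
    (degrees).
    """
    pairs = [(0, 1), (0, 2), (1, 2)]           # cylinders 1-2, 1-3, 2-3
    dist_err = []
    for i, j in pairs:
        d_scan = np.linalg.norm(centres[i] - centres[j])
        d_ref = np.linalg.norm(ref_centres[i] - ref_centres[j])
        dist_err.append(d_scan - d_ref)
    cosines = np.clip(np.sum(axes * ref_axes, axis=1), -1.0, 1.0)
    ang_err = np.degrees(np.arccos(cosines))
    return np.array(dist_err), ang_err
```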
Evaluation of three lidar scanning strategies for turbulence measurements
Newman, Jennifer F.; Klein, Petra M.; Wharton, Sonia; ...
2016-05-03
Several errors occur when a traditional Doppler beam swinging (DBS) or velocity-azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar, and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60 % under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20 % at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.
Evaluation of three lidar scanning strategies for turbulence measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, Jennifer F.; Klein, Petra M.; Wharton, Sonia
Several errors occur when a traditional Doppler beam swinging (DBS) or velocity-azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar, and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60 % under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20 % at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.
Impact of numerical choices on water conservation in the E3SM Atmosphere Model Version 1 (EAM V1)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.
The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations for sea level rise projection. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model is negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in the new model results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction. We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for this model.
Impact of numerical choices on water conservation in the E3SM Atmosphere Model version 1 (EAMv1)
NASA Astrophysics Data System (ADS)
Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.; Wan, Hui; Leung, Ruby; Ma, Po-Lun; Golaz, Jean-Christophe; Wolfe, Jon; Lin, Wuyin; Singh, Balwinder; Burrows, Susannah; Yoon, Jin-Ho; Wang, Hailong; Qian, Yun; Tang, Qi; Caldwell, Peter; Xie, Shaocheng
2018-06-01
The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model becomes negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors in early V1 versions decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in V1 results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction. We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for V1.
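As an illustration of the "mass-conserving fixer" idea for clipped negative water, here is a schematic column fixer (not the E3SM code): negative concentrations are set to zero and the remaining values are rescaled so the mass-weighted column total is unchanged.

```python
import numpy as np

def clip_negative_water_conservatively(q, dp):
    """Remove negative water concentrations in a column while conserving the
    mass-weighted column total (a simple in-column fixer, schematic only).

    q  : (nlev,) specific humidity or condensate mixing ratio
    dp : (nlev,) layer pressure thicknesses used as mass weights
    """
    total_before = np.sum(q * dp)
    q_clipped = np.maximum(q, 0.0)
    total_after = np.sum(q_clipped * dp)
    # Rescale only when both totals are positive; otherwise conservation of a
    # non-positive column total is not meaningful and the clipped field is kept.
    if total_after > 0.0 and total_before > 0.0:
        q_clipped *= total_before / total_after
    return q_clipped
```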
NASA Astrophysics Data System (ADS)
Chang, Wen-Li
2010-01-01
We investigate how different ways of blurring patterns affect pattern recognition in a Barabási-Albert scale-free Hopfield neural network (SFHN) with a small number of errors. Pattern recognition is an important function of information processing in the brain. Because of the heterogeneous degree distribution of a scale-free network, different blurring schemes have different influences on pattern recognition even with the same number of errors. Simulations show that, for partial recognition, the larger the loading ratio (the number of patterns to the average degree, P/⟨k⟩), the smaller the overlap of the SFHN. The influence of the directed (large-degree) blurring is largest, that of the directed (small-degree) blurring is smallest, and random blurring lies between them. When the ratio of the number of stored patterns to the size of the network, P/N, is less than 0.1, there are three families of overlap curves corresponding to the directed (small), random and directed (large) blurring schemes, and these curves do not depend on the size of the network or the number of patterns. This phenomenon occurs only in the SFHN. These conclusions are helpful for understanding the relation between neural network structure and brain function.
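A minimal sketch of the kind of experiment described above: a Hopfield network with Hebbian weights restricted to the edges of a Barabási-Albert graph, a randomly blurred pattern, and the recall overlap. The sizes, blurring fraction, and synchronous update rule are illustrative choices; the "directed" blurring schemes would flip bits at the highest- or lowest-degree nodes instead of random ones.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
N, m, P = 500, 5, 10                          # nodes, BA attachment, stored patterns
G = nx.barabasi_albert_graph(N, m, seed=1)
A = nx.to_numpy_array(G)                      # adjacency of the scale-free topology

patterns = rng.choice([-1, 1], size=(P, N))   # random binary patterns
W = A * (patterns.T @ patterns) / N           # Hebbian weights on existing edges only
np.fill_diagonal(W, 0.0)

def recall(state, steps=20):
    """Synchronous Hopfield updates (schematic)."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Blur a stored pattern by flipping a fraction of its entries (the "random way"),
# then measure the overlap m = (1/N) * sum_i xi_i * s_i with the stored pattern.
target = patterns[0].copy()
flip = rng.choice(N, size=int(0.1 * N), replace=False)
blurred = target.copy()
blurred[flip] *= -1
overlap = np.mean(target * recall(blurred))
print(f"overlap after recall: {overlap:.3f}")
```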
Maxwell, S.K.; Wood, E.C.; Janus, A.
2008-01-01
The U.S. Geological Survey (USGS) 2001 National Land Cover Database (NLCD) was compared to the U.S. Department of Agriculture (USDA) 2002 Census of Agriculture. We compared areal estimates for cropland at the state and county level for 14 states in the Upper Midwest region of the United States. Absolute differences between the NLCD and Census cropland areal estimates at the state level ranged from 1.3% (Minnesota) to 37.0% (Wisconsin). The majority of counties (74.5%) had differences of less than 100 km², and 7.2% of the counties had differences of more than 200 km². Regions where the largest areal differences occurred were in southern Illinois, North Dakota, South Dakota, and Wisconsin, and generally occurred in areas with the lowest proportions of cropland (i.e., dominated by forest or grassland). Before using the 2001 NLCD for agricultural applications, such as mapping of specific crop types, users should be aware of the potential for misclassification errors, especially where the proportion of cropland to other land cover types is fairly low.
Maskens, Carolyn; Downie, Helen; Wendt, Alison; Lima, Ana; Merkley, Lisa; Lin, Yulia; Callum, Jeannie
2014-01-01
This report provides a comprehensive analysis of transfusion errors occurring at a large teaching hospital and aims to determine key errors that are threatening transfusion safety, despite implementation of safety measures. Errors were prospectively identified from 2005 to 2010. Error data were coded on a secure online database called the Transfusion Error Surveillance System. Errors were defined as any deviation from established standard operating procedures. Errors were identified by clinical and laboratory staff. Denominator data for volume of activity were used to calculate rates. A total of 15,134 errors were reported with a median number of 215 errors per month (range, 85-334). Overall, 9083 (60%) errors occurred on the transfusion service and 6051 (40%) on the clinical services. In total, 23 errors resulted in patient harm: 21 of these errors occurred on the clinical services and two in the transfusion service. Of the 23 harm events, 21 involved inappropriate use of blood. Errors with no harm were 657 times more common than events that caused harm. The most common high-severity clinical errors were sample labeling (37.5%) and inappropriate ordering of blood (28.8%). The most common high-severity error in the transfusion service was sample accepted despite not meeting acceptance criteria (18.3%). The cost of product and component loss due to errors was $593,337. Errors occurred at every point in the transfusion process, with the greatest potential risk of patient harm resulting from inappropriate ordering of blood products and errors in sample labeling. © 2013 American Association of Blood Banks (CME).
Morcrette, C. J.; Van Weverberg, K.; Ma, H. -Y.; ...
2018-02-16
We introduce the Clouds Above the United States and Errors at the Surface (CAUSES) project with its aim of better understanding the physical processes leading to warm screen temperature biases over the American Midwest in many numerical models. In this first of four companion papers, 11 different models, from nine institutes, perform a series of 5 day hindcasts, each initialized from reanalyses. After describing the common experimental protocol and detailing each model configuration, a gridded temperature data set is derived from observations and used to show that all the models have a warm bias over parts of the Midwest. Additionally, a strong diurnal cycle in the screen temperature bias is found in most models. In some models the bias is largest around midday, while in others it is largest during the night. At the Department of Energy Atmospheric Radiation Measurement Southern Great Plains (SGP) site, the model biases are shown to extend several kilometers into the atmosphere. Finally, to provide context for the companion papers, in which observations from the SGP site are used to evaluate the different processes contributing to errors there, it is shown that there are numerous locations across the Midwest where the diurnal cycle of the error is highly correlated with the diurnal cycle of the error at SGP. This suggests that conclusions drawn from detailed evaluation of models using instruments located at SGP will be representative of errors that are prevalent over a larger spatial scale.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morcrette, C. J.; Van Weverberg, K.; Ma, H. -Y.
We introduce the Clouds Above the United States and Errors at the Surface (CAUSES) project with its aim of better understanding the physical processes leading to warm screen temperature biases over the American Midwest in many numerical models. In this first of four companion papers, 11 different models, from nine institutes, perform a series of 5 day hindcasts, each initialized from reanalyses. After describing the common experimental protocol and detailing each model configuration, a gridded temperature data set is derived from observations and used to show that all the models have a warm bias over parts of the Midwest. Additionally, a strong diurnal cycle in the screen temperature bias is found in most models. In some models the bias is largest around midday, while in others it is largest during the night. At the Department of Energy Atmospheric Radiation Measurement Southern Great Plains (SGP) site, the model biases are shown to extend several kilometers into the atmosphere. Finally, to provide context for the companion papers, in which observations from the SGP site are used to evaluate the different processes contributing to errors there, it is shown that there are numerous locations across the Midwest where the diurnal cycle of the error is highly correlated with the diurnal cycle of the error at SGP. This suggests that conclusions drawn from detailed evaluation of models using instruments located at SGP will be representative of errors that are prevalent over a larger spatial scale.
NASA Astrophysics Data System (ADS)
Morcrette, C. J.; Van Weverberg, K.; Ma, H.-Y.; Ahlgrimm, M.; Bazile, E.; Berg, L. K.; Cheng, A.; Cheruy, F.; Cole, J.; Forbes, R.; Gustafson, W. I.; Huang, M.; Lee, W.-S.; Liu, Y.; Mellul, L.; Merryfield, W. J.; Qian, Y.; Roehrig, R.; Wang, Y.-C.; Xie, S.; Xu, K.-M.; Zhang, C.; Klein, S.; Petch, J.
2018-03-01
We introduce the Clouds Above the United States and Errors at the Surface (CAUSES) project with its aim of better understanding the physical processes leading to warm screen temperature biases over the American Midwest in many numerical models. In this first of four companion papers, 11 different models, from nine institutes, perform a series of 5 day hindcasts, each initialized from reanalyses. After describing the common experimental protocol and detailing each model configuration, a gridded temperature data set is derived from observations and used to show that all the models have a warm bias over parts of the Midwest. Additionally, a strong diurnal cycle in the screen temperature bias is found in most models. In some models the bias is largest around midday, while in others it is largest during the night. At the Department of Energy Atmospheric Radiation Measurement Southern Great Plains (SGP) site, the model biases are shown to extend several kilometers into the atmosphere. Finally, to provide context for the companion papers, in which observations from the SGP site are used to evaluate the different processes contributing to errors there, it is shown that there are numerous locations across the Midwest where the diurnal cycle of the error is highly correlated with the diurnal cycle of the error at SGP. This suggests that conclusions drawn from detailed evaluation of models using instruments located at SGP will be representative of errors that are prevalent over a larger spatial scale.
Creating illusions of knowledge: learning errors that contradict prior knowledge.
Fazio, Lisa K; Barber, Sarah J; Rajaram, Suparna; Ornstein, Peter A; Marsh, Elizabeth J
2013-02-01
Most people know that the Pacific is the largest ocean on Earth and that Edison invented the light bulb. Our question is whether this knowledge is stable, or if people will incorporate errors into their knowledge bases, even if they have the correct knowledge stored in memory. To test this, we asked participants general-knowledge questions 2 weeks before they read stories that contained errors (e.g., "Franklin invented the light bulb"). On a later general-knowledge test, participants reproduced story errors despite previously answering the questions correctly. This misinformation effect was found even for questions that were answered correctly on the initial test with the highest level of confidence. Furthermore, prior knowledge offered no protection against errors entering the knowledge base; the misinformation effect was equivalent for previously known and unknown facts. Errors can enter the knowledge base even when learners have the knowledge necessary to catch the errors. 2013 APA, all rights reserved
Spiegel, Paul B; Le, Phuoc; Ververs, Mija-Tesse; Salama, Peter
2007-01-01
Background: The fields of expertise of natural disasters and complex emergencies (CEs) are quite distinct, with different tools for mitigation and response as well as different types of competent organizations and qualified professionals who respond. However, natural disasters and CEs can occur concurrently in the same geographic location, and epidemics can occur during or following either event. The occurrence and overlap of these three types of events have not been well studied. Methods: All natural disasters, CEs and epidemics occurring within the past decade (1995–2004) that met the inclusion criteria were included. The largest 30 events in each category were based on the total number of deaths recorded. The main databases used were the Emergency Events Database for natural disasters, the Uppsala Conflict Database Program for CEs and the World Health Organization outbreaks archive for epidemics. Analysis: During the past decade, 63% of the largest CEs had ≥1 epidemic compared with 23% of the largest natural disasters. Twenty-seven percent of the largest natural disasters occurred in areas with ≥1 ongoing CE while 87% of the largest CEs had ≥1 natural disaster. Conclusion: Epidemics commonly occur during CEs. The data presented in this article do not support the often-repeated assertion that epidemics, especially large-scale epidemics, commonly occur following large-scale natural disasters. This observation has important policy and programmatic implications when preparing for and responding to epidemics. There is an important and previously unrecognized overlap between natural disasters and CEs. Training and tools are needed to help bridge the gap between the different types of organizations and professionals who respond to natural disasters and CEs to ensure an integrated and coordinated response. PMID:17411460
NASA Astrophysics Data System (ADS)
Dahlqvist, Per
1999-10-01
We estimate the error in the semiclassical trace formula for the Sinai billiard under the assumption that the largest source of error is due to penumbra diffraction: namely, diffraction effects for trajectories passing within a distance R·O((kR)^{-2/3}) to the disc and trajectories being scattered in very forward directions. Here k is the momentum and R the radius of the scatterer. The semiclassical error is estimated by perturbing the Berry-Keating formula. The analysis necessitates an asymptotic analysis of very long periodic orbits. This is obtained within an approximation originally due to Baladi, Eckmann and Ruelle. We find that the average error, for sufficiently large values of kR, will exceed the mean level spacing.
Worldwide Survey of Alcohol and Nonmedical Drug Use among Military Personnel: 1982,
1983-01-01
cell. The first number is an estimate of the percentage of the population with the characteristics that define the cell. The second number, in ... multiplying 1.96 times the standard error for that cell. (Obviously, for very small or very large estimates, the respective smallest or largest value in ... that the cell proportions estimate the true population value more precisely, and larger standard errors indicate that the true population value is
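As a worked example of the interval construction described in the fragment above, the 95% confidence limits for a cell's estimated percentage are obtained by adding and subtracting 1.96 times that cell's standard error; the numbers below are illustrative, not survey values.

```python
estimate_pct = 23.4        # estimated percentage for the cell (illustrative)
standard_error_pct = 1.7   # standard error of that estimate (illustrative)

half_width = 1.96 * standard_error_pct
print(f"95% CI: {estimate_pct - half_width:.1f}% to {estimate_pct + half_width:.1f}%")
```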
NASA Astrophysics Data System (ADS)
Pan, X.; Yang, Y.; Liu, Y.; Fan, X.; Shan, L.; Zhang, X.
2018-04-01
Error source analyses are critical for satellite-retrieved surface net radiation (Rn) products. In this study, we evaluate the Rn error sources in the Clouds and the Earth's Radiant Energy System (CERES) project at 43 sites in China from July 2007 to December 2007. The results show that cloud fraction (CF), land surface temperature (LST), atmospheric temperature (AT) and algorithm error dominate the Rn error, with error contributions of -20, 15, 10 and 10 W/m2 (net shortwave (NSW)/longwave (NLW) radiation), respectively. For NSW, the dominant error source is algorithm error (more than 10 W/m2), particularly in spring and summer when cloud is abundant. For NLW, owing to the high sensitivity of the algorithm and the large LST/CF errors, LST and CF are the largest error sources, especially in northern China. The AT strongly influences the NLW error in southern China because of the large AT error there. Total precipitable water has a weak influence on the Rn error despite the algorithm's high sensitivity to it. To improve Rn quality, the CF and LST (AT) errors in northern (southern) China should be reduced.
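A first-order error attribution of the kind summarised above can be sketched as sensitivity times input error for each source. The sensitivities and input errors below are placeholders chosen for illustration, not CERES values.

```python
# Contribution of each input to the Rn error ~ (dRn/dx) * error(x); placeholder values.
sensitivities = {"cloud_fraction": 50.0,   # W/m2 per unit cloud fraction (assumed)
                 "LST": 5.0,               # W/m2 per K (assumed)
                 "air_temperature": 3.5}   # W/m2 per K (assumed)
input_errors = {"cloud_fraction": -0.4, "LST": 3.0, "air_temperature": 2.9}

for name, sens in sensitivities.items():
    print(f"{name}: {sens * input_errors[name]:+.1f} W/m2")
```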
A Robust False Matching Points Detection Method for Remote Sensing Image Registration
NASA Astrophysics Data System (ADS)
Shan, X. J.; Tang, P.
2015-04-01
Given the influences of illumination, imaging angle, and geometric distortion, among others, false matching points still occur in all image registration algorithms. Detection of false matching points is therefore an important step in remote sensing image registration. Random Sample Consensus (RANSAC) is typically used to detect false matching points; however, RANSAC cannot detect all false matching points in some remote sensing images. A robust false matching point detection method based on the K-nearest-neighbour (K-NN) graph (KGD) is therefore proposed in this paper to obtain robust, high-accuracy results. The KGD method starts with the construction of a K-NN graph in one image, generated by linking each matching point to its K nearest matching points. A local transformation model for each matching point is then estimated from its K nearest matching points, and the error of each matching point is computed under its own transformation model. Finally, the L matching points with the largest errors are identified as false matching points and removed. This process is iterated until all errors are smaller than a given threshold. In addition, the KGD method can be combined with other methods, such as RANSAC. Several remote sensing images with different resolutions and terrains are used in the experiments. We evaluate the performance of the KGD method, the RANSAC + KGD method, RANSAC, and Graph Transformation Matching (GTM). The experimental results demonstrate the superior performance of the KGD and RANSAC + KGD methods.
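A minimal sketch of the KGD loop described above is given below, assuming a local affine model as the per-point transformation and illustrative parameter names (k, drop, tol); none of these values come from the paper.

```python
import numpy as np

def local_transfer_errors(src, dst, k=8):
    """Transfer error of each match under an affine model fitted to its K nearest matches."""
    errors = np.zeros(len(src))
    for i in range(len(src)):
        d = np.linalg.norm(src - src[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]                      # K nearest matching points
        # Least-squares affine fit: [x y 1] @ A ~= [x' y']
        A, *_ = np.linalg.lstsq(np.hstack([src[nbrs], np.ones((k, 1))]),
                                dst[nbrs], rcond=None)
        errors[i] = np.linalg.norm(np.hstack([src[i], 1.0]) @ A - dst[i])
    return errors

def kgd_filter(src, dst, k=8, drop=2, tol=1.5):
    """Iteratively remove the `drop` matches with the largest error (needs > k+2 matches)."""
    keep = np.arange(len(src))
    while True:
        err = local_transfer_errors(src[keep], dst[keep], k)
        if err.max() <= tol or len(keep) <= k + 2:
            return keep
        keep = np.delete(keep, np.argsort(err)[-drop:])

# Toy usage: 60 matches related by one affine map, plus a few injected false matches.
rng = np.random.default_rng(0)
src = rng.uniform(0, 100, size=(60, 2))
dst = src @ np.array([[1.01, 0.02], [-0.02, 0.99]]) + np.array([5.0, -3.0])
dst[:5] += rng.uniform(20, 40, size=(5, 2))
print(sorted(kgd_filter(src, dst)))   # indices of the matches retained as correct
```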
The Estimation of Gestational Age at Birth in Database Studies.
Eberg, Maria; Platt, Robert W; Filion, Kristian B
2017-11-01
Studies on the safety of prenatal medication use require valid estimation of the pregnancy duration. However, gestational age is often incompletely recorded in administrative and clinical databases. Our objective was to compare different approaches to estimating the pregnancy duration. Using data from the Clinical Practice Research Datalink and Hospital Episode Statistics, we examined the following four approaches to estimating missing gestational age: (1) generalized estimating equations for longitudinal data; (2) multiple imputation; (3) estimation based on fetal birth weight and sex; and (4) conventional approaches that assigned a fixed value (39 weeks for all or 39 weeks for full term and 35 weeks for preterm). The gestational age recorded in Hospital Episode Statistics was considered the gold standard. We conducted a simulation study comparing the described approaches in terms of estimated bias and mean square error. A total of 25,929 infants from 22,774 mothers were included in our "gold standard" cohort. The smallest average absolute bias was observed for the generalized estimating equation that included birth weight, while the largest absolute bias occurred when assigning 39-week gestation to all those with missing values. The smallest mean square errors were detected with generalized estimating equations while multiple imputation had the highest mean square errors. The use of generalized estimating equations resulted in the most accurate estimation of missing gestational age when birth weight information was available. In the absence of birth weight, assignment of fixed gestational age based on term/preterm status may be the optimal approach.
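The contrast between the fixed-value approach and a birth-weight-based prediction can be sketched as follows. This uses a plain least-squares fit on synthetic data as a stand-in for the generalized estimating equation; the coefficients and data are illustrative, not from the study cohort.

```python
import numpy as np

rng = np.random.default_rng(1)
gest_weeks = np.clip(rng.normal(39, 2, 500), 28, 42)                      # complete records
birth_weight = 3.4 + 0.17 * (gest_weeks - 39) + rng.normal(0, 0.35, 500)  # kg

# Fit gestational age on birth weight using records where both are observed.
slope, intercept = np.polyfit(birth_weight, gest_weeks, 1)

def impute_fixed(preterm):
    """Conventional approach: 39 weeks for term, 35 weeks for preterm."""
    return 35.0 if preterm else 39.0

def impute_from_weight(weight_kg):
    """Regression-based imputation from birth weight."""
    return intercept + slope * weight_kg

print(impute_fixed(False), round(impute_from_weight(2.4), 1))
```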
Higher-order ionospheric error at Arecibo, Millstone, and Jicamarca
NASA Astrophysics Data System (ADS)
Matteo, N. A.; Morton, Y. T.
2010-12-01
The ionosphere is a dominant source of Global Positioning System receiver range measurement error. Although dual-frequency receivers can eliminate the first-order ionospheric error, most second- and third-order errors remain in the range measurements. Higher-order ionospheric error is a function of both electron density distribution and the magnetic field vector along the GPS signal propagation path. This paper expands previous efforts by combining incoherent scatter radar (ISR) electron density measurements, the International Reference Ionosphere model, exponential decay extensions of electron densities, the International Geomagnetic Reference Field, and total electron content maps to compute higher-order error at ISRs in Arecibo, Puerto Rico; Jicamarca, Peru; and Millstone Hill, Massachusetts. Diurnal patterns, dependency on signal direction, seasonal variation, and geomagnetic activity dependency are analyzed. Higher-order error is largest at Arecibo with code phase maxima circa 7 cm for low-elevation southern signals. The maximum variation of the error over all angles of arrival is circa 8 cm.
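For orientation, the second-order term evaluated in such studies scales as (1/f^3) times the path integral of electron density weighted by the magnetic field component along the propagation direction. The discretised sketch below uses a Bassiri-and-Hajj-style leading constant and a synthetic density profile; both are assumptions on my part, not values from this paper.

```python
import numpy as np

C = 299792458.0          # speed of light (m/s)
K2 = 7527.0 * C / 2.0    # assumed second-order coefficient in SI units

def second_order_error(ne, b_par, ds, freq_hz):
    """Approximate second-order range error (m) from discretised path samples.

    ne      -- electron density along the path (electrons/m^3)
    b_par   -- magnetic field component along the propagation direction (T)
    ds      -- path-element length (m)
    freq_hz -- carrier frequency (Hz)
    """
    return K2 * np.sum(ne * b_par) * ds / freq_hz**3

# Crude example: a 1000 km path with a Gaussian density bump (~27 TECU) and constant B.
s = np.linspace(0.0, 1.0e6, 2000)
ne = 1.0e12 * np.exp(-((s - 3.0e5) / 1.5e5) ** 2)
err = second_order_error(ne, b_par=4.0e-5, ds=s[1] - s[0], freq_hz=1.57542e9)
print(f"second-order range error ~ {err * 100:.2f} cm")
```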
Optimal configurations of spatial scale for grid cell firing under noise and uncertainty
Towse, Benjamin W.; Barry, Caswell; Bush, Daniel; Burgess, Neil
2014-01-01
We examined the accuracy with which the location of an agent moving within an environment could be decoded from the simulated firing of systems of grid cells. Grid cells were modelled with Poisson spiking dynamics and organized into multiple ‘modules’ of cells, with firing patterns of similar spatial scale within modules and a wide range of spatial scales across modules. The number of grid cells per module, the spatial scaling factor between modules and the size of the environment were varied. Errors in decoded location can take two forms: small errors of precision and larger errors resulting from ambiguity in decoding periodic firing patterns. With enough cells per module (e.g. eight modules of 100 cells each) grid systems are highly robust to ambiguity errors, even over ranges much larger than the largest grid scale (e.g. over a 500 m range when the maximum grid scale is 264 cm). Results did not depend strongly on the precise organization of scales across modules (geometric, co-prime or random). However, independent spatial noise across modules, which would occur if modules receive independent spatial inputs and might increase with spatial uncertainty, dramatically degrades the performance of the grid system. This effect of spatial uncertainty can be mitigated by uniform expansion of grid scales. Thus, in the realistic regimes simulated here, the optimal overall scale for a grid system represents a trade-off between minimizing spatial uncertainty (requiring large scales) and maximizing precision (requiring small scales). Within this view, the temporary expansion of grid scales observed in novel environments may be an optimal response to increased spatial uncertainty induced by the unfamiliarity of the available spatial cues. PMID:24366144
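The decoding step described above can be sketched as maximum-likelihood estimation over candidate positions given Poisson spike counts. The 1-D, two-module toy below is an illustration under assumed scales, rates and window length; the study's model is two-dimensional and much larger.

```python
import numpy as np

rng = np.random.default_rng(2)
positions = np.linspace(0.0, 10.0, 1001)          # candidate locations (m)
scales = [0.5, 1.5]                               # grid scales of two modules (m)
cells_per_module = 20
phases = [rng.uniform(0, s, cells_per_module) for s in scales]

def rates(x):
    """Firing rate (Hz) of every cell at location x: periodic, cosine-shaped tuning."""
    return np.concatenate(
        [5.0 * (1 + np.cos(2 * np.pi * (x - ph) / s)) for s, ph in zip(scales, phases)])

true_x = 6.283
window = 0.5                                      # decoding window (s)
counts = rng.poisson(rates(true_x) * window)      # observed spike counts

# Poisson log-likelihood over candidate positions (dropping the constant log k! term).
loglik = [np.sum(counts * np.log(rates(x) * window + 1e-12) - rates(x) * window)
          for x in positions]
print(f"decoded x = {positions[int(np.argmax(loglik))]:.3f} m (true x = {true_x} m)")
```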
Quinn, Gene R; Ranum, Darrell; Song, Ellen; Linets, Margarita; Keohane, Carol; Riah, Heather; Greenberg, Penny
2017-10-01
Diagnostic errors are an underrecognized source of patient harm, and cardiovascular disease can be challenging to diagnose in the ambulatory setting. Although malpractice data can inform diagnostic error reduction efforts, no studies have examined outpatient cardiovascular malpractice cases in depth. A study was conducted to examine the characteristics of outpatient cardiovascular malpractice cases brought against general medicine practitioners. Some 3,407 closed malpractice claims were analyzed in outpatient general medicine from CRICO Strategies' Comparative Benchmarking System database-the largest detailed database of paid and unpaid malpractice in the world-and multivariate models were created to determine the factors that predicted case outcomes. Among the 153 patients in cardiovascular malpractice cases for whom patient comorbidities were coded, the majority (63%) had at least one traditional cardiac risk factor, such as diabetes, tobacco use, or previous cardiovascular disease. Cardiovascular malpractice cases were more likely to involve an allegation of error in diagnosis (75% vs. 47%, p <0.0001), have high clinical severity (86% vs. 49%, p <0.0001) and result in death (75% vs. 27%, p <0.0001), as compared to noncardiovascular cases. Initial diagnoses of nonspecific chest pain and mimics of cardiovascular pain (for example, esophageal disease) were common and independently increased the likelihood of a claim resulting in a payment (p <0.01). Cardiovascular malpractice cases against outpatient general medicine physicians mostly occur in patients with conventional risk factors for coronary artery disease and are often diagnosed with common mimics of cardiovascular pain. These findings suggest that these patients may be high-yield targets for preventing diagnostic errors in the ambulatory setting. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Dörnberger, V; Dörnberger, G
1987-01-01
Comparative volumetry was performed on 99 cadaver testes (age at death between 26 and 86 years). With the surrounding capsules left in place (but without scrotal skin and tunica dartos), the testes were first measured by real-time sonography in a water bath (7.5 MHz linear scan); length, breadth and height were then measured with a sliding caliper, the largest diameter (the length) of the testis was determined with Schirren's circle, and finally the size of the testis was measured with Prader's orchidometer. After the testes were surgically exposed, their volume was determined by fluid displacement according to Archimedes' principle. Whereas a random mean error of 7% must be accepted for the Archimedes' principle, sonographic determination of the volume showed a random mean error of 15%. Since the accuracy of measurement increases with increasing volume, both methods should be used with caution for volumes below 4 ml, where the potential for error is considerable. Volumes measured with Prader's orchidometer were on average higher (+27%), with a random mean error of 19.5%. With Schirren's circle the obtained mean value was even higher (+52%) in comparison with the "real" volume from Archimedes' principle, with a random mean error of 19%. Caliper measurements of the testes in their capsules can be optimized by applying a correction factor f(caliper) = 0.39 when calculating the testis volume as an ellipsoid; this yields the same mean value as Archimedes' principle with a standard mean error of only 9%. If the correction factor of real-time testis sonography, f(sono) = 0.65, is applied instead, the mean value of the caliper measurements would be 68.8% too high, with a standard mean error of 20.3%. For caliper measurements, the testis volume corresponding to an ellipsoid should therefore be calculated with the smaller factor f(caliper) = 0.39, because in this way the capsules left on the testis and the epididymis are taken into account.
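The corrected caliper calculation described above amounts to an ellipsoid-like product of the three dimensions scaled by the study's factor f = 0.39; the example measurements below are illustrative.

```python
def testis_volume_ml(length_cm, breadth_cm, height_cm, f=0.39):
    """Testis volume (ml) from caliper measurements taken in the intact capsule."""
    return f * length_cm * breadth_cm * height_cm

print(round(testis_volume_ml(4.5, 3.0, 2.5), 1))   # ~13.2 ml for these example dimensions
```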
Simonsen, Bjoerg O; Daehlin, Gro K; Johansson, Inger; Farup, Per G
2014-11-21
Nurses have insufficient medication knowledge, particularly in drug dose calculations, but also in drug management and pharmacology. This weak knowledge could be a result of deficiencies in basic nursing education or a lack of continuing maintenance training during working years. The aim of this study was to compare the medication knowledge, certainty and risk of error between graduating bachelor students in nursing and experienced registered nurses. Bachelor students in their final term and registered nurses with at least one year of job experience underwent a multiple-choice test in pharmacology, drug management and drug dose calculations: 3x14 questions with 3-4 alternative answers (score 0-42). Certainty of each answer was recorded with a score of 0-3, with 0-1 indicating a need for assistance. Risk of error was scored 1-3, where 3 expressed high risk: being certain that a wrong answer was correct. The results are presented as mean (SD). Participants were 243 graduating students (including 29 men), aged 28.2 (7.6) years, and 203 registered nurses (including 16 men), aged 42.0 (9.3) years and with a working experience of 12.4 (9.2) years. The knowledge among the nurses was found to be superior to that of the students: 68.9% (8.0) and 61.5% (7.8) correct answers, respectively (p < 0.001). The difference was largest in drug management and dose calculations. The improvement occurred during the first working year. The nurses expressed a higher degree of certainty and their risk of error was lower, both overall and for each topic (p < 0.01). Low risk of error was associated with high knowledge and a high sense of coping (p < 0.001). The medication knowledge of experienced nurses was superior to that of bachelor students in nursing, but nevertheless insufficient. As many as 25% of the answers to the drug management questions would lead to a high risk of error. More emphasis should be placed on basic nursing education and on the introduction to medication procedures in clinical practice to improve nurses' medication knowledge and reduce the risk of error.
NASA Technical Reports Server (NTRS)
Roberts, J. Brent; Clayson, C. A.
2012-01-01
The residual forcing necessary to close the mixed layer temperature budget (MLTB) on seasonal time scales is largest in regions of strongest surface heat flux forcing. Identifying the dominant source of error - surface heat flux error, mixed layer depth estimation, ocean dynamical forcing - remains a challenge in the eastern tropical oceans, where ocean processes are very active. Improved sub-surface observations are necessary to better constrain errors. 1. Mixed layer depth evolution is critical to the seasonal evolution of mixed layer temperatures. It determines the inertia of the mixed layer and scales the sensitivity of the MLTB to errors in surface heat flux and ocean dynamical forcing, which affects the timing of errors in SST prediction. 2. Errors in the MLTB are larger than the historical 10 W m-2 target accuracy. In some regions, larger errors can be tolerated if the goal is to resolve the seasonal SST cycle.
When is an error not a prediction error? An electrophysiological investigation.
Holroyd, Clay B; Krigolson, Olave E; Baker, Robert; Lee, Seung; Gibson, Jessica
2009-03-01
A recent theory holds that the anterior cingulate cortex (ACC) uses reinforcement learning signals conveyed by the midbrain dopamine system to facilitate flexible action selection. According to this position, the impact of reward prediction error signals on ACC modulates the amplitude of a component of the event-related brain potential called the error-related negativity (ERN). The theory predicts that ERN amplitude is monotonically related to the expectedness of the event: It is larger for unexpected outcomes than for expected outcomes. However, a recent failure to confirm this prediction has called the theory into question. In the present article, we investigated this discrepancy in three trial-and-error learning experiments. All three experiments provided support for the theory, but the effect sizes were largest when an optimal response strategy could actually be learned. This observation suggests that ACC utilizes dopamine reward prediction error signals for adaptive decision making when the optimal behavior is, in fact, learnable.
The importance of intra-hospital pharmacovigilance in the detection of medication errors
Villegas, Francisco; Figueroa-Montero, David; Barbero-Becerra, Varenka; Juárez-Hernández, Eva; Uribe, Misael; Chávez-Tapia, Norberto; González-Chon, Octavio
2018-01-01
Hospitalized patients are susceptible to medication errors, which rank between the fourth and sixth leading causes of death. The department of intra-hospital pharmacovigilance intervenes in the entire medication process with the purpose of preventing, repairing and assessing damage. To analyze medication errors reported by the pharmacovigilance system of Mexico's Fundación Clínica Médica Sur and their impact on patients. A prospective study was carried out from 2012 to 2015, in which the medication prescriptions given to patients were recorded. Owing to heterogeneity, data were described as absolute numbers on a logarithmic scale. A total of 292 932 prescriptions for 56 368 patients were analyzed, and medication errors were identified in 8.9% of them. The treating physician was responsible for 83.32% of medication errors, residents for 6.71% and interns for 0.09%. No error caused permanent damage or death. This is the pharmacovigilance study with the largest sample size reported. Copyright: © 2018 Secretaría de Salud.
Conjunction Assessment Late-Notice High-Interest Event Investigation: Space Weather Aspects
NASA Technical Reports Server (NTRS)
Pachura, D.; Hejduk, M. D.
2016-01-01
Late-notice events usually driven by large changes in primary (protected) object or secondary object state. Main parameter to represent size of state change is component position difference divided by associated standard deviation (epsilon/sigma) from covariance. Investigation determined actual frequency of large state changes, in both individual and combined states. Compared them to theoretically expected frequencies. Found that large changes (epsilon/sigma greater than 3) in individual object states occur much more frequently than theory dictates. Effect is less pronounced in radial components and in events with probability of collision (Pc) greater than 1e-5. Found combined state matched much closer to theoretical expectation, especially for radial and cross-track. In-track is expected to be the most vulnerable to modeling errors, so not surprising that non-compliance largest in this component.
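The state-change metric referred to above can be sketched as each position-difference component normalised by its covariance-derived standard deviation, flagged as large when epsilon/sigma exceeds 3. The covariance and difference values below are illustrative.

```python
import numpy as np

def normalized_state_change(delta_pos, covariance):
    """epsilon/sigma per component (e.g. radial, in-track, cross-track)."""
    return delta_pos / np.sqrt(np.diag(covariance))

cov = np.diag([25.0, 900.0, 49.0])       # position covariance diagonal, m^2 (illustrative)
delta = np.array([6.0, 120.0, 10.0])     # position difference between solutions, m
ratios = normalized_state_change(delta, cov)
print(ratios, bool((np.abs(ratios) > 3).any()))
```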
Cabilan, C J; Hughes, James A; Shannon, Carl
2017-12-01
To describe the contextual, modal and psychological classification of medication errors in the emergency department and to identify the factors associated with the reported medication errors. The causes of medication errors are unique in every clinical setting; hence, error minimisation strategies are not always effective. For this reason, it is fundamental to understand the causes specific to the emergency department so that targeted strategies can be implemented. Retrospective analysis of reported medication errors in the emergency department. All voluntarily staff-reported medication-related incidents from 2010-2015 from the hospital's electronic incident management system were retrieved for analysis. Contextual classification involved the time, place and the type of medications involved. Modal classification pertained to the stage and issue (e.g. wrong medication, wrong patient). Psychological classification categorised the errors in planning (knowledge-based and rule-based errors) and skill (slips and lapses). There were 405 errors reported. Most errors occurred in the acute care area, short-stay unit and resuscitation area, during the busiest shifts (0800-1559, 1600-2259). Half of the errors involved high-alert medications. Most errors occurred during administration (62·7%) or prescribing (28·6%), and 18·5% occurred during both stages. Wrong dose, wrong medication and omission were the issues that dominated. Knowledge-based errors characterised the errors that occurred in prescribing and administration. The highest proportion of slips (79·5%) and lapses (76·1%) occurred during medication administration. It is likely that some of the errors occurred due to the lack of adherence to safety protocols. Technology such as computerised prescribing, barcode medication administration and reminder systems could potentially decrease the medication errors in the emergency department. There was a possibility that some of the errors could be prevented if safety protocols were adhered to, which highlights the need to also address clinicians' attitudes towards safety. Technology can be implemented to help minimise errors in the ED, but this must be coupled with efforts to enhance the culture of safety. © 2017 John Wiley & Sons Ltd.
Action errors, error management, and learning in organizations.
Frese, Michael; Keith, Nina
2015-01-03
Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.
Medication errors in anesthesia: unacceptable or unavoidable?
Dhawan, Ira; Tewari, Anurag; Sehgal, Sankalp; Sinha, Ashish Chandra
Medication errors are common causes of patient morbidity and mortality, and they also add a financial burden to the institution. Though the impact varies from no harm to serious adverse effects including death, they need attention on a priority basis since medication errors are preventable. In today's world, where people are aware and medical claims are on the rise, it is of utmost priority that we curb this issue. Individual efforts to decrease medication errors alone might not be successful unless changes are incorporated into the existing protocols and systems. Often, drug errors that occur cannot be reversed. The best way to 'treat' drug errors is to prevent them. Wrong medication (due to syringe swap), overdose (due to misunderstanding or preconception of the dose, pump misuse and dilution error), incorrect administration route, underdosing and omission are common causes of medication error that occur perioperatively. Drug omission and calculation mistakes occur commonly in the ICU. Medication errors can occur perioperatively during preparation, administration or record keeping. Numerous human and system errors can be blamed for the occurrence of medication errors. The need of the hour is to stop the blame game, accept mistakes and develop a safe and 'just' culture in order to prevent medication errors. Newly devised systems such as VEINROM, a fluid delivery system, are a novel approach to preventing drug errors involving the most commonly used medications in anesthesia. Such developments, along with vigilant doctors, a safe workplace culture and organizational support, can together help prevent these errors. Copyright © 2016. Published by Elsevier Editora Ltda.
Echeta, Genevieve; Moffett, Brady S; Checchia, Paul; Benton, Mary Kay; Klouda, Leda; Rodriguez, Fred H; Franklin, Wayne
2014-01-01
Adults with congenital heart disease (CHD) are often cared for at pediatric hospitals. There are no data describing the incidence or type of medication prescribing errors in adult patients admitted to a pediatric cardiovascular intensive care unit (CVICU). A review was conducted of patients >18 years of age admitted to the pediatric CVICU at our institution from 2009 to 2011. A comparator group <18 years of age but >70 kg (a typical adult weight) was identified. Medication prescribing errors were determined according to a commonly used adult drug reference. An independent panel consisting of a physician specializing in the care of adult CHD patients, a nurse, and a pharmacist evaluated all errors. Medication prescribing orders were classified as appropriate, underdose, overdose, or nonstandard (dosing per weight instead of standard adult dosing), and severity of error was classified. Eighty-five adult (74 patients) and 33 pediatric admissions (32 patients) met study criteria (mean age 27.5 ± 9.4 years, 53% male vs. 14.9 ± 1.8 years, 63% male). A cardiothoracic surgical procedure occurred in 81.4% of admissions. Adult admissions weighed less than pediatric admissions (72.8 ± 22.4 kg vs. 85.6 ± 14.9 kg, P < .01) but hospital length of stay was similar (adult 6 days [range 1-216 days]; pediatric 5 days [range 2-123 days], P = .52). A total of 112 prescribing errors were identified and they occurred less often in adults (42.4% of admissions vs. 66.7% of admissions, P = .02). Adults had a lower mean number of errors (0.7 errors per adult admission vs. 1.7 errors per pediatric admission, P < .01). Prescribing errors occurred most commonly with antimicrobials (n = 27). Underdosing was the most common category of prescribing error. Most prescribing errors were determined to have not caused harm to the patient. Prescribing errors occur frequently in adult patients admitted to a pediatric CVICU but occur more often in pediatric patients of adult weight. © 2013 Wiley Periodicals, Inc.
Cassidy, Nicola; Duggan, Edel; Williams, David J P; Tracey, Joseph A
2011-07-01
Medication errors are widely reported for hospitalised patients, but limited data are available for medication errors that occur in community-based and clinical settings. Epidemiological data from poisons information centres enable characterisation of trends in medication errors occurring across the healthcare spectrum. The objective of this study was to characterise the epidemiology and type of medication errors reported to the National Poisons Information Centre (NPIC) of Ireland. A 3-year prospective study on medication errors reported to the NPIC was conducted from 1 January 2007 to 31 December 2009 inclusive. Data on patient demographics, enquiry source, location, pharmaceutical agent(s), type of medication error, and treatment advice were collated from standardised call report forms. Medication errors were categorised as (i) prescribing error (i.e. physician error), (ii) dispensing error (i.e. pharmacy error), and (iii) administration error involving the wrong medication, the wrong dose, wrong route, or the wrong time. Medication errors were reported for 2348 individuals, representing 9.56% of total enquiries to the NPIC over 3 years. In total, 1220 children and adolescents under 18 years of age and 1128 adults (≥ 18 years old) experienced a medication error. The majority of enquiries were received from healthcare professionals, but members of the public accounted for 31.3% (n = 736) of enquiries. Most medication errors occurred in a domestic setting (n = 2135), but a small number occurred in healthcare facilities: nursing homes (n = 110, 4.68%), hospitals (n = 53, 2.26%), and general practitioner surgeries (n = 32, 1.36%). In children, medication errors with non-prescription pharmaceuticals predominated (n = 722) and anti-pyretics and non-opioid analgesics, anti-bacterials, and cough and cold preparations were the main pharmaceutical classes involved. Medication errors with prescription medication predominated for adults (n = 866) and the major medication classes included anti-pyretics and non-opioid analgesics, psychoanaleptics, and psycholeptic agents. Approximately 97% (n = 2279) of medication errors were the result of drug administration errors (comprising a double dose [n = 1040], wrong dose [n = 395], wrong medication [n = 597], wrong route [n = 133], and wrong time [n = 110]). Prescribing and dispensing errors accounted for 0.68% (n = 16) and 2.26% (n = 53) of errors, respectively. Empirical data from poisons information centres facilitate the characterisation of medication errors occurring in the community and across the healthcare spectrum. Poison centre data facilitate the detection of subtle trends in medication errors and can contribute to pharmacovigilance. Collaboration between pharmaceutical manufacturers, consumers, medical, and regulatory communities is needed to advance patient safety and reduce medication errors.
Schultze, A E; Irizarry, A R
2017-02-01
Veterinary clinical pathologists are well positioned via education and training to assist in investigations of unexpected results or increased variation in clinical pathology data. Errors in testing and unexpected variability in clinical pathology data are sometimes referred to as "laboratory errors." These alterations may occur in the preanalytical, analytical, or postanalytical phases of studies. Most of the errors or variability in clinical pathology data occur in the preanalytical or postanalytical phases. True analytical errors occur within the laboratory and are usually the result of operator or instrument error. Analytical errors are often ≤10% of all errors in diagnostic testing, and the frequency of these types of errors has decreased in the last decade. Analytical errors and increased data variability may result from instrument malfunctions, inability to follow proper procedures, undetected failures in quality control, sample misidentification, and/or test interference. This article (1) illustrates several different types of analytical errors and situations within laboratories that may result in increased variability in data, (2) provides recommendations regarding prevention of testing errors and techniques to control variation, and (3) provides a list of references that describe and advise how to deal with increased data variability.
Stultz, Jeremy S; Nahata, Milap C
2015-07-01
Information technology (IT) has the potential to prevent medication errors. While many studies have analyzed specific IT technologies and preventable adverse drug events, no studies have identified risk factors for errors still occurring that are not preventable by IT. The objective of this study was to categorize reported or trigger tool-identified errors and adverse events (AEs) at a pediatric tertiary care institution. Also, we sought to identify medication errors preventable by IT, determine why IT-preventable errors occurred, and to identify risk factors for errors that were not preventable by IT. This was a retrospective analysis of voluntarily reported or trigger tool-identified errors and AEs occurring from 1 July 2011 to 30 June 2012. Medication errors reaching the patients were categorized based on the origin, severity, and location of the error, the month in which they occurred, and the age of the patient involved. Error characteristics were included in a multivariable logistic regression model to determine independent risk factors for errors occurring that were not preventable by IT. A medication error was defined as a medication-related failure of a planned action to be completed as intended or the use of a wrong plan to achieve an aim. An IT-preventable error was defined as having an IT system in place to aid in prevention of the error at the phase and location of its origin. There were 936 medication errors (identified by voluntary reporting or a trigger tool system) included and analyzed. Drug administration errors were identified most frequently (53.4%), but prescribing errors most frequently caused harm (47.2% of harmful errors). There were 470 (50.2%) errors that were IT preventable at their origin, including 155 due to IT system bypasses, 103 due to insensitivity of IT alerting systems, and 47 with IT alert overrides. Dispensing, administration, and documentation errors had higher odds than prescribing errors for being not preventable by IT [odds ratio (OR) 8.0, 95% CI 4.4-14.6; OR 2.4, 95% CI 1.7-3.7; and OR 6.7, 95% CI 3.3-14.5, respectively; all p < 0.001]. Errors occurring in the operating room and in the outpatient setting had higher odds than intensive care units for being not preventable by IT (OR 10.4, 95% CI 4.0-27.2, and OR 2.6, 95% CI 1.3-5.0, respectively; all p ≤ 0.004). Despite extensive IT implementation at the studied institution, approximately one-half of the medication errors identified by voluntary reporting or a trigger tool system were not preventable by the utilized IT systems. Inappropriate use of IT systems was a common cause of errors. The identified risk factors represent areas where IT safety features were lacking.
Recognizing and managing errors of cognitive underspecification.
Duthie, Elizabeth A
2014-03-01
James Reason describes cognitive underspecification as incomplete communication that creates a knowledge gap. Errors occur when an information mismatch occurs in bridging that gap with a resulting lack of shared mental models during the communication process. There is a paucity of studies in health care examining this cognitive error and the role it plays in patient harm. The goal of the following case analyses is to facilitate accurate recognition, identify how it contributes to patient harm, and suggest appropriate management strategies. Reason's human error theory is applied in case analyses of errors of cognitive underspecification. Sidney Dekker's theory of human incident investigation is applied to event investigation to facilitate identification of this little recognized error. Contributory factors leading to errors of cognitive underspecification include workload demands, interruptions, inexperienced practitioners, and lack of a shared mental model. Detecting errors of cognitive underspecification relies on blame-free listening and timely incident investigation. Strategies for interception include two-way interactive communication, standardization of communication processes, and technological support to ensure timely access to documented clinical information. Although errors of cognitive underspecification arise at the sharp end with the care provider, effective management is dependent upon system redesign that mitigates the latent contributory factors. Cognitive underspecification is ubiquitous whenever communication occurs. Accurate identification is essential if effective system redesign is to occur.
Mistake proofing: changing designs to reduce error
Grout, J R
2006-01-01
Mistake proofing uses changes in the physical design of processes to reduce human error. It can be used to change designs in ways that prevent errors from occurring, to detect errors after they occur but before harm occurs, to allow processes to fail safely, or to alter the work environment to reduce the chance of errors. Effective mistake proofing design changes should initially be effective in reducing harm, be inexpensive, and easily implemented. Over time these design changes should make life easier and speed up the process. Ideally, the design changes should increase patients' and visitors' understanding of the process. These designs should themselves be mistake proofed and follow the good design practices of other disciplines. PMID:17142609
The Frame Constraint on Experimentally Elicited Speech Errors in Japanese.
Saito, Akie; Inoue, Tomoyoshi
2017-06-01
The so-called syllable position effect in speech errors has been interpreted as reflecting constraints posed by the frame structure of a given language, which operates separately from linguistic content during speech production. The effect refers to the phenomenon that when a speech error occurs, replaced and replacing sounds tend to be in the same position within a syllable or word. Most of the evidence for the effect comes from analyses of naturally occurring speech errors in Indo-European languages, and there are few studies examining the effect in experimentally elicited speech errors and in other languages. This study examined whether experimentally elicited sound errors in Japanese exhibit the syllable position effect. In Japanese, the sub-syllabic unit known as "mora" is considered to be a basic sound unit in production. Results showed that the syllable position effect occurred in mora errors, suggesting that the frame constrains the ordering of sounds during speech production.
Bishop, Joseph E.; Emery, John M.; Battaile, Corbett C.; ...
2016-03-16
Two fundamental approximations in macroscale solid-mechanics modeling are (1) the assumption of scale separation in homogenization theory and (2) the use of a macroscopic plasticity material model that represents, in a mean sense, the multitude of inelastic processes occurring at the microscale. With the goal of quantifying the errors induced by these approximations on engineering quantities of interest, we perform a set of direct numerical simulations (DNS) in which polycrystalline microstructures are embedded throughout a macroscale structure. The largest simulations model over 50,000 grains. The microstructure is idealized using a randomly close-packed Voronoi tessellation in which each polyhedral Voronoi cell represents a grain. A face-centered cubic crystal-plasticity model is used to model the mechanical response of each grain. The overall grain structure is equiaxed, and each grain is randomly oriented with no overall texture. The detailed results from the DNS simulations are compared to results obtained from conventional macroscale simulations that use homogeneous isotropic plasticity models. The macroscale plasticity models are calibrated using a representative volume element of the idealized microstructure. Furthermore, we envision that DNS modeling will be used to gain new insights into the mechanics of material deformation and failure.
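The microstructure idealisation mentioned above can be sketched by taking the Voronoi cells of a set of seed points as grains, each carrying its own random orientation. The sketch below uses uniformly random seeds rather than a randomly close-packed arrangement, and it stops well short of the crystal-plasticity response; the seed count and domain size are illustrative.

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(3)
seeds = rng.uniform(0.0, 1.0, size=(200, 3))                 # one seed point per grain
orientations = rng.uniform(0.0, 2 * np.pi, size=(200, 3))    # random Euler angles per grain

vor = Voronoi(seeds)                                          # polyhedral cells = grains
print(f"{len(seeds)} grains with random orientations, {len(vor.vertices)} Voronoi vertices")
```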
ERIC Educational Resources Information Center
El-khateeb, Mahmoud M. A.
2016-01-01
This study aims to investigate the classes of errors made by preparatory year students at King Saud University, through analysis of student responses to the items of the study test, and to identify the varieties of common errors and the ratios of common errors that occurred in solving inequalities. In the collection of the data,…
Use of Standalone GPS for Approach with Vertical Guidance.
DOT National Transportation Integrated Search
2001-01-22
The accuracy of GPS has improved dramatically over the past year with the removal of Selective Availability. The largest error source now is the ionosphere which can be removed in the future when the additional civil frequencies become available. Pre...
Enhanced orbit determination filter sensitivity analysis: Error budget development
NASA Technical Reports Server (NTRS)
Estefan, J. A.; Burkhart, P. D.
1994-01-01
An error budget analysis is presented which quantifies the effects of different error sources in the orbit determination process when the enhanced orbit determination filter, recently developed, is used to reduce radio metric data. The enhanced filter strategy differs from more traditional filtering methods in that nearly all of the principal ground system calibration errors affecting the data are represented as filter parameters. Error budget computations were performed for a Mars Observer interplanetary cruise scenario for cases in which only X-band (8.4-GHz) Doppler data were used to determine the spacecraft's orbit, X-band ranging data were used exclusively, and a combined set in which the ranging data were used in addition to the Doppler data. In all three cases, the filter model was assumed to be a correct representation of the physical world. Random nongravitational accelerations were found to be the largest source of error contributing to the individual error budgets. Other significant contributors, depending on the data strategy used, were solar-radiation pressure coefficient uncertainty, random earth-orientation calibration errors, and Deep Space Network (DSN) station location uncertainty.
Van de Vreede, Melita; McGrath, Anne; de Clifford, Jan
2018-05-14
Objective. The aim of the present study was to identify and quantify medication errors reportedly related to electronic medication management systems (eMMS) and those considered likely to occur more frequently with eMMS. This included developing a new classification system relevant to eMMS errors. Methods. Eight Victorian hospitals with eMMS participated in a retrospective audit of reported medication incidents from their incident reporting databases between May and July 2014. Site-appointed project officers submitted deidentified incidents they deemed new or likely to occur more frequently due to eMMS, together with the Incident Severity Rating (ISR). The authors reviewed and classified incidents. Results. There were 5826 medication-related incidents reported. In total, 93 (47 prescribing errors, 46 administration errors) were identified as new or potentially related to eMMS. Only one ISR2 (moderate) and no ISR1 (severe or death) errors were reported, so harm to patients in this 3-month period was minimal. The most commonly reported error types were 'human factors' and 'unfamiliarity or training' (70%) and 'cross-encounter or hybrid system errors' (22%). Conclusions. Although the results suggest that the errors reported were of low severity, organisations must remain vigilant to the risk of new errors and avoid the assumption that eMMS is the panacea to all medication error issues. What is known about the topic? eMMS have been shown to reduce some types of medication errors, but it has been reported that some new medication errors have been identified and some are likely to occur more frequently with eMMS. There are few published Australian studies that have reported on medication error types that are likely to occur more frequently with eMMS in more than one organisation and that include administration and prescribing errors. What does this paper add? This paper includes a new simple classification system for eMMS that is useful and outlines the most commonly reported incident types and can inform organisations and vendors on possible eMMS improvements. The paper suggests a new classification system for eMMS medication errors. What are the implications for practitioners? The results of the present study will highlight to organisations the need for ongoing review of system design, refinement of workflow issues, staff education and training and reporting and monitoring of errors.
Hydrometric Data Rescue in the Paraná River Basin
NASA Astrophysics Data System (ADS)
Antico, Andrés.; Aguiar, Ricardo O.; Amsler, Mario L.
2018-02-01
The Paraná River streamflow is the third largest in South America and the sixth largest in the world. Thus, preserving historical Paraná hydrometric data is relevant for understanding South American and global hydroclimate changes. In this work, we rescued paper format data of daily Paraná water level observations taken uninterruptedly at Rosario City, Argentina, from January 1875 to present. The rescue consisted of the following activities: (i) imaging and digitization of paper format data, (ii) application of quality checks and homogeneity tests to the digitized water levels, and (iii) consideration of errors caused by gauge sinkings that may have occurred from 1875 to 1908. In addition, a rating curve was obtained for Rosario and it was used to convert water levels into discharges. The rescued water level observations and their associated discharge data provide the longest (last 143 years) continuous hydrometric records of the Paraná basin. The usefulness of these records was demonstrated by showing that the Paraná-Pacific Ocean links observed after 1900 in previous studies are also evidenced in our nineteenth-century discharge data. That is, high Paraná discharges coincided with El Niño events and with El Niño-like states of the Interdecadal Pacific Oscillation (IPO), whereas low discharges coincided with La Niña events and with La Niña-like IPO states.
Siewert, Bettina; Brook, Olga R; Hochman, Mary; Eisenberg, Ronald L
2016-03-01
The purpose of this study is to analyze the impact of communication errors on patient care, customer satisfaction, and work-flow efficiency and to identify opportunities for quality improvement. We performed a search of our quality assurance database for communication errors submitted from August 1, 2004, through December 31, 2014. Cases were analyzed regarding the step in the imaging process at which the error occurred (i.e., ordering, scheduling, performance of examination, study interpretation, or result communication). The impact on patient care was graded on a 5-point scale from none (0) to catastrophic (4). The severity of impact between errors in result communication and those that occurred at all other steps was compared. Error evaluation was performed independently by two board-certified radiologists. Statistical analysis was performed using the chi-square test and kappa statistics. Three hundred eighty of 422 cases were included in the study. One hundred ninety-nine of the 380 communication errors (52.4%) occurred at steps other than result communication, including ordering (13.9%; n = 53), scheduling (4.7%; n = 18), performance of examination (30.0%; n = 114), and study interpretation (3.7%; n = 14). Result communication was the single most common step, accounting for 47.6% (181/380) of errors. There was no statistically significant difference in impact severity between errors that occurred during result communication and those that occurred at other times (p = 0.29). In 37.9% of cases (144/380), there was an impact on patient care, including 21 minor impacts (5.5%; result communication, n = 13; all other steps, n = 8), 34 moderate impacts (8.9%; result communication, n = 12; all other steps, n = 22), and 89 major impacts (23.4%; result communication, n = 45; all other steps, n = 44). In 62.1% (236/380) of cases, no impact was noted, but 52.6% (200/380) of cases had the potential for an impact. Among 380 communication errors in a radiology department, 37.9% had a direct impact on patient care, with an additional 52.6% having a potential impact. Most communication errors (52.4%) occurred at steps other than result communication, with similar severity of impact.
Structured methods for identifying and correcting potential human errors in aviation operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, W.R.
1997-10-01
Human errors have been identified as the source of approximately 60% of the incidents and accidents that occur in commercial aviation. It can be assumed that a very large number of human errors occur in aviation operations, even though in most cases the redundancies and diversities built into the design of aircraft systems prevent the errors from leading to serious consequences. In addition, when it is acknowledged that many system failures have their roots in human errors that occur in the design phase, it becomes apparent that the identification and elimination of potential human errors could significantly decrease the risks of aviation operations. This will become even more critical during the design of advanced automation-based aircraft systems as well as next-generation systems for air traffic management. Structured methods to identify and correct potential human errors in aviation operations have been developed and are currently undergoing testing at the Idaho National Engineering and Environmental Laboratory (INEEL).
Bertholet, Jenny; Worm, Esben; Høyer, Morten; Poulsen, Per
2017-06-01
Accurate patient positioning is crucial in stereotactic body radiation therapy (SBRT) due to a high dose regimen. Cone-beam computed tomography (CBCT) is often used for patient positioning based on radio-opaque markers. We compared six CBCT-based set-up strategies with or without rotational correction. Twenty-nine patients with three implanted markers received 3-6 fraction liver SBRT. The markers were delineated on the mid-ventilation phase of a 4D-planning-CT. One pretreatment CBCT was acquired per fraction. Set-up strategy 1 used only translational correction based on manual marker match between the CBCT and planning CT. Set-up strategy 2 used automatic 6 degrees-of-freedom registration of the vertebrae closest to the target. The 3D marker trajectories were also extracted from the projections and the mean position of each marker was calculated and used for set-up strategies 3-6. Translational correction only was used for strategy 3. Translational and rotational corrections were used for strategies 4-6 with the rotation being either vertebrae based (strategy 4), or marker based and constrained to ±3° (strategy 5) or unconstrained (strategy 6). The resulting set-up error was calculated as the 3D root-mean-square set-up error of the three markers. The set-up error of the spinal cord was calculated for all strategies. The bony anatomy set-up (2) had the largest set-up error (5.8 mm). The marker-based set-up with unconstrained rotations (6) had the smallest set-up error (0.8 mm) but the largest spinal cord set-up error (12.1 mm). The marker-based set-up with translational correction only (3) or with bony anatomy rotational correction (4) had equivalent set-up error (1.3 mm) but rotational correction reduced the spinal cord set-up error from 4.1 mm to 3.5 mm. Marker-based set-up was substantially better than bony-anatomy set-up. Rotational correction may improve the set-up, but further investigations are required to determine the optimal correction strategy.
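The marker-based set-up error used to compare the strategies above can be sketched as the root-mean-square of the residual 3-D displacements of the three markers after the couch correction; the coordinates below are illustrative.

```python
import numpy as np

def rms_setup_error(planned, observed):
    """3-D root-mean-square set-up error over markers (both arrays N x 3, in mm)."""
    d = np.linalg.norm(observed - planned, axis=1)
    return np.sqrt(np.mean(d ** 2))

planned = np.array([[0.0, 0.0, 0.0], [12.0, 5.0, -8.0], [-7.0, 9.0, 4.0]])
observed = planned + np.array([[0.6, -0.4, 0.9], [1.1, 0.2, -0.5], [0.3, 0.8, 0.7]])
print(f"set-up error = {rms_setup_error(planned, observed):.2f} mm")
```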
Implementing smart infusion pumps with dose-error reduction software: real-world experiences.
Heron, Claire
2017-04-27
Intravenous (IV) drug administration, especially with 'smart pumps', is complex and susceptible to errors. Although errors can occur at any stage of the IV medication process, most errors occur during reconstitution and administration. Dose-error reduction software (DERS) loaded onto infusion pumps incorporates a drug library with predefined upper and lower drug dose limits and infusion rates, which can reduce IV infusion errors. Although this is an important advance for patient safety at the point of care, uptake is still relatively low. This article discusses the challenges and benefits of implementing DERS in clinical practice as experienced by three UK trusts.
Time Lapse of World’s Largest 3-D Printed Object
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2016-08-29
Researchers at the MDF have 3D-printed a large-scale trim tool for a Boeing 777X, the world’s largest twin-engine jet airliner. The additively manufactured tool was printed on the Big Area Additive Manufacturing, or BAAM, machine over a 30-hour period. The team used thermoplastic pellets composed of 80% ABS plastic and 20% carbon fiber from a local material supplier. The tool has proven to decrease time, labor, cost and errors associated with traditional manufacturing techniques and to increase energy savings in preliminary testing, and it will undergo further long-term testing.
Infrared Time Lapse of World’s Largest 3D-Printed Object
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Researchers at Oak Ridge National Laboratory have 3D-printed a large-scale trim tool for a Boeing 777X, the world’s largest twin-engine jet airliner. The additively manufactured tool was printed on the Big Area Additive Manufacturing, or BAAM, machine over a 30-hour period. The team used thermoplastic pellets composed of 80% ABS plastic and 20% carbon fiber from a local material supplier. The tool has proven to decrease time, labor, cost and errors associated with traditional manufacturing techniques and to increase energy savings in preliminary testing, and it will undergo further long-term testing.
Effect of stratospheric aerosol layers on the TOMS/SBUV ozone retrieval
NASA Technical Reports Server (NTRS)
Torres, O.; Ahmad, Zia; Pan, L.; Herman, J. R.; Bhartia, P. K.; Mcpeters, R.
1994-01-01
An evaluation of the optical effects of stratospheric aerosol layers on total ozone retrieval from space by the TOMS/SBUV type instruments is presented here. Using the Dave radiative transfer model we estimate the magnitude of the errors in the retrieved ozone when polar stratospheric clouds (PSC's) or volcanic aerosol layers interfere with the measurements. The largest errors are produced by optically thick water ice PSC's. Results of simulation experiments on the effect of the Pinatubo aerosol cloud on the Nimbus-7 and Meteor-3 TOMS products are presented.
Characteristics of Single-Event Upsets in a Fabric Switch (AD8151)
NASA Technical Reports Server (NTRS)
Buchner, Stephen; Carts, Martin A.; McMorrow, Dale; Kim, Hak; Marshall, Paul W.; LaBel, Kenneth A.
2003-01-01
Two types of single event effects - bit errors and single event functional interrupts - were observed during heavy-ion testing of the AD8151 crosspoint switch. Bit errors occurred in bursts, with the average number of bits in a burst depending on both the ion LET and the data rate. A pulsed laser was used to identify the locations on the chip where the bit errors and single event functional interrupts occurred. Bit errors originated in the switches, drivers, and output buffers. Single event functional interrupts occurred when the laser was focused on the second-rank latch containing the data specifying the state of each switch in the 33x17 matrix.
Yelland, Lisa N; Kahan, Brennan C; Dent, Elsa; Lee, Katherine J; Voysey, Merryn; Forbes, Andrew B; Cook, Jonathan A
2018-06-01
Background/aims In clinical trials, it is not unusual for errors to occur during the process of recruiting, randomising and providing treatment to participants. For example, an ineligible participant may inadvertently be randomised, a participant may be randomised in the incorrect stratum, a participant may be randomised multiple times when only a single randomisation is permitted or the incorrect treatment may inadvertently be issued to a participant at randomisation. Such errors have the potential to introduce bias into treatment effect estimates and affect the validity of the trial, yet there is little motivation for researchers to report these errors and it is unclear how often they occur. The aim of this study is to assess the prevalence of recruitment, randomisation and treatment errors and review current approaches for reporting these errors in trials published in leading medical journals. Methods We conducted a systematic review of individually randomised, phase III, randomised controlled trials published in New England Journal of Medicine, Lancet, Journal of the American Medical Association, Annals of Internal Medicine and British Medical Journal from January to March 2015. The number and type of recruitment, randomisation and treatment errors that were reported and how they were handled were recorded. The corresponding authors were contacted for a random sample of trials included in the review and asked to provide details on unreported errors that occurred during their trial. Results We identified 241 potentially eligible articles, of which 82 met the inclusion criteria and were included in the review. These trials involved a median of 24 centres and 650 participants, and 87% involved two treatment arms. Recruitment, randomisation or treatment errors were reported in 32 of 82 trials (39%), with a median of eight errors per trial. The most commonly reported error was ineligible participants inadvertently being randomised. No mention of recruitment, randomisation or treatment errors was found in the remaining 50 of 82 trials (61%). Based on responses from 9 of the 15 corresponding authors who were contacted regarding recruitment, randomisation and treatment errors, between 1% and 100% of the errors that occurred in their trials were reported in the trial publications. Conclusion Recruitment, randomisation and treatment errors are common in individually randomised, phase III trials published in leading medical journals, but reporting practices are inadequate and reporting standards are needed. We recommend researchers report all such errors that occurred during the trial and describe how they were handled in trial publications to improve transparency in reporting of clinical trials.
A description of medication errors reported by pharmacists in a neonatal intensive care unit.
Pawluk, Shane; Jaam, Myriam; Hazi, Fatima; Al Hail, Moza Sulaiman; El Kassem, Wessam; Khalifa, Hanan; Thomas, Binny; Abdul Rouf, Pallivalappila
2017-02-01
Background Patients in the Neonatal Intensive Care Unit (NICU) are at an increased risk for medication errors. Objective The objective of this study is to describe the nature and setting of medication errors occurring in patients admitted to an NICU in Qatar, as reported by pharmacists through a standard electronic system. Setting Neonatal intensive care unit, Doha, Qatar. Method This was a retrospective cross-sectional study on medication errors reported electronically by pharmacists in the NICU between January 1, 2014 and April 30, 2015. Main outcome measure Data collected included patient information and incident details, including error category, medications involved, and follow-up completed. Results A total of 201 pharmacist-reported NICU medication errors were submitted during the study period. None of the reported errors reached the patient or caused harm. Of the errors reported, 98.5% occurred in the prescribing phase of the medication process with 58.7% being due to calculation errors. Overall, 53 different medications were documented in error reports with the anti-infective agents being the most frequently cited. The majority of incidents indicated that the primary prescriber was contacted and the error was resolved before reaching the next phase of the medication process. Conclusion Medication errors reported by pharmacists occur most frequently in the prescribing phase of the medication process. Our data suggest that error reporting systems need to be specific to the population involved. Special attention should be paid to frequently used medications in the NICU as these were responsible for the greatest numbers of medication errors.
Altitude deviations: Breakdowns of an error-tolerant system
NASA Technical Reports Server (NTRS)
Palmer, Everett A.; Hutchins, Edwin L.; Ritter, Richard D.; Vancleemput, Inge
1993-01-01
Pilot reports of aviation incidents to the Aviation Safety Reporting System (ASRS) provide a window on the problems occurring in today's airline cockpits. The narratives of 10 pilot reports of errors made in the automation-assisted altitude-change task are used to illustrate some of the issues of pilots interacting with automatic systems. These narratives are then used to construct a description of the cockpit as an information processing system. The analysis concentrates on the error-tolerant properties of the system and on how breakdowns can occasionally occur. An error-tolerant system can detect and correct its internal processing errors. The cockpit system consists of two or three pilots supported by autoflight, flight-management, and alerting systems. These humans and machines have distributed access to clearance information and perform redundant processing of information. Errors can be detected as deviations from either expected behavior or expected information. Breakdowns in this system can occur when the checking and cross-checking tasks that give the system its error-tolerant properties are not performed because of distractions or other task demands. Recommendations based on the analysis for improving the error tolerance of the cockpit system are given.
Assessment study of lichenometric methods for dating surfaces
NASA Astrophysics Data System (ADS)
Jomelli, Vincent; Grancher, Delphine; Naveau, Philippe; Cooley, Daniel; Brunstein, Daniel
2007-04-01
In this paper, we discuss the advantages and drawbacks of the most classical approaches used in lichenometry. In particular, we perform a detailed comparison among methods based on the statistical analysis of either the largest lichen diameters recorded on geomorphic features or the frequency of all lichens. To assess the performance of each method, a careful comparison design with well-defined criteria is proposed and applied to two distinct data sets. First, we study 350 tombstones. This represents an ideal test bed because tombstone dates are known and, therefore, the quality of the estimated lichen growth curve can be easily tested for the different techniques. Secondly, 37 moraines from two tropical glaciers are investigated. This analysis corresponds to our real case study. For both data sets, we apply our list of criteria that reflects precision, error measurements and their theoretical foundations when proposing estimated ages and their associated confidence intervals. From this comparison, it clearly appears that two methods, the mean of the n largest lichen diameters and the recent Bayesian method based on extreme value theory, offer the most reliable estimates of moraine and tombstone dates. Concerning the spread of the error, the latter approach provides the smallest uncertainty and it is the only one that takes advantage of the statistical nature of the observations by fitting an extreme value distribution to the largest diameters.
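For readers unfamiliar with the extreme-value approach, the following is a minimal sketch of fitting a generalized extreme value distribution to the largest lichen diameters on a surface; it is a simplified frequentist stand-in for the Bayesian procedure referred to above, and the diameter values are invented for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical largest lichen diameters (mm), one value per sampled block
# on a single moraine; real data would come from field measurements.
largest_diameters = np.array([42.0, 47.5, 39.8, 51.2, 44.6, 48.9, 41.3, 46.7])

# Fit a generalized extreme value (GEV) distribution to the block maxima.
shape, loc, scale = stats.genextreme.fit(largest_diameters)

# A point summary and a 95% interval for the diameter distribution, which
# would then be converted to an age via a lichen growth curve.
median_diam = stats.genextreme.median(shape, loc=loc, scale=scale)
ci_low, ci_high = stats.genextreme.interval(0.95, shape, loc=loc, scale=scale)
print(f"median diameter: {median_diam:.1f} mm, "
      f"95% interval: ({ci_low:.1f}, {ci_high:.1f}) mm")
```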
Evaluation of the accuracy of GPS as a method of locating traffic collisions.
DOT National Transportation Integrated Search
2004-06-01
The objectives of this study were to determine the accuracy of GPS units as a traffic crash location tool, evaluate the accuracy of the location data obtained using the GPS units, and determine the largest sources of any errors found. : The analysis s...
40 CFR 112.12 - Spill Prevention, Control, and Countermeasure Plan requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... equipment failure or human error at the facility. (c) Bulk storage containers. (1) Not use a container for... means of containment for the entire capacity of the largest single container and sufficient freeboard to... soil conditions. (6) Bulk storage container inspections. (i) Except for containers that meet the...
Gap filling strategies and error in estimating annual soil respiration
USDA-ARS?s Scientific Manuscript database
Soil respiration (Rsoil) is one of the largest CO2 fluxes in the global carbon (C) cycle. Estimation of annual Rsoil requires extrapolation of survey measurements or gap-filling of automated records to produce a complete time series. While many gap-filling methodologies have been employed, there is ...
Morrison, Maeve; Cope, Vicki; Murray, Melanie
2018-05-15
Medication errors remain a commonly reported clinical incident in health care as highlighted by the World Health Organization's focus to reduce medication-related harm. This retrospective quantitative analysis examined medication errors reported by staff using an electronic Clinical Incident Management System (CIMS) during a 3-year period from April 2014 to April 2017 at a metropolitan mental health ward in Western Australia. The aim of the project was to identify types of medication errors and the context in which they occur and to consider recourse so that medication errors can be reduced. Data were retrieved from the Clinical Incident Management System database and concerned medication incidents from categorized tiers within the system. Areas requiring improvement were identified, and the quality of the documented data captured in the database was reviewed for themes pertaining to medication errors. Content analysis provided insight into the following issues: (i) frequency of problem, (ii) when the problem was detected, and (iii) characteristics of the error (classification of drug/s, where the error occurred, what time the error occurred, what day of the week it occurred, and patient outcome). Data were compared to the state-wide results published in the Your Safety in Our Hands (2016) report. Results indicated several areas upon which quality improvement activities could be focused. These include the following: structural changes; changes to policy and practice; changes to individual responsibilities; improving workplace culture to counteract underreporting of medication errors; and improvement in safety and quality administration of medications within a mental health setting. © 2018 Australian College of Mental Health Nurses Inc.
SAGE (version 5.96) Ozone Trends in the Lower Stratosphere
NASA Technical Reports Server (NTRS)
Cunnold, D. M.; Wang, H. J.; Thomason, L. W.; Zawodny, J. M.; Logan, J. A.; Megretkaia, I. A.
2002-01-01
Ozone retrievals from Stratospheric Aerosol and Gas Experiment (SAGE) II version 5.96 (v5.96) below approx. 25 km altitude are discussed. This version of the algorithm includes improved constraints on the wavelength dependence of aerosol extinctions based on the ensemble of aerosol size distribution measurements. This results in a reduction of SAGE ozone errors in the 2 years after the Mount Pinatubo eruption. However, SAGE ozone concentrations are still approx. 10% larger than ozonesonde and Halogen Occultation Experiment (HALOE) measurements below 20 km altitude under nonvolcanic conditions (and by more than this in the tropics). The analysis by Steele and Turco suggests that the SAGE ozone overpredictions are in the wrong direction to be explained by aerosol extinction extrapolation errors. Moreover, preliminary SAGE II v6.0a retrievals suggest that they are partially accounted for by geometric difficulties at low altitudes in v5.96 and prior retrievals. SAGE ozone trends for the 1979-1996 and 1984-1996 periods are calculated and compared, and the sources of trend errors are discussed. These calculations are made after filtering out ozone data during periods of high, local aerosol extinctions. In the lower stratosphere, below approx. 28 km altitude, there is shown to be excellent agreement in the altitudinal structure of ozone decreases at 45 deg N between SAGE and ozonesondes with the largest decrease in both between 1979 and 1996 having occurred below 20 km altitude, amounting to 0.9 +/- 0.7%/yr (2sigma) at 16 km altitude. However, in contrast to the fairly steady decreases at 45 deg N, both SAGE measurements and Lauder ozonesondes show ozone increases at 45 deg S over the period from the mid-1980s to 1996 of 0.2 +/- 0.5%/yr (2sigma) from 15 to 20 km altitude. The SAGE data suggest that this increase is a wintertime phenomenon which occurs in the 15-20 km height range. Changes in dynamics are suggested as the most likely cause of this increase. These hemispheric differences in ozone trends are supported by ozone column measurements by the Total Ozone Mapping Spectrometer (TOMS).
Smith, Brian T; Coiro, Daniel J; Finson, Richard; Betz, Randal R; McCarthy, James
2002-03-01
Force-sensing resistors (FSRs) were used to detect the transitions between five main phases of gait for the control of electrical stimulation (ES) during walking in seven children with spastic diplegia (cerebral palsy). The FSR positions within each child's insoles were customized based on plantar pressure profiles determined using a pressure-sensitive membrane array (Tekscan Inc., Boston, MA). The FSRs were placed in the insoles so that pressure transitions coincided with an ipsilateral or contralateral gait event. The transitions between the following gait phases were determined: loading response, mid- and terminal stance, and pre- and initial swing. Following several months of walking on a regular basis with FSR-triggered intramuscular ES to the hip and knee extensors, hip abductors, and ankle dorsi and plantar flexors, the accuracy and reliability of the FSRs to detect gait phase transitions were evaluated. Accuracy was evaluated with four of the subjects by synchronizing the output of the FSR detection scheme with a VICON (Oxford Metrics, U.K.) motion analysis system, which was used as the gait event reference. While mean differences between each FSR-detected gait event and that of the standard (VICON) ranged from +35 ms (indicating that the FSR detection scheme recognized the event before it actually happened) to -55 ms (indicating that the FSR scheme recognized the event after it occurred), the difference data were widely distributed, which appeared to be due in part to both intrasubject (step-to-step) and intersubject variability. Terminal stance exhibited the largest mean difference and standard deviation, while initial swing exhibited the smallest deviation and preswing the smallest mean difference. To determine step-to-step reliability, all seven children walked on a level walkway for at least 50 steps. Of 642 steps, there were no detection errors in 94.5% of the steps. Of the steps that contained a detection error, 80% were due to the failure of the FSR signal to reach the programmed threshold level during the transition to loading response. Recovery from an error always occurred one to three steps later.
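As an illustration of the threshold-based gait-event detection described above, the sketch below flags transitions when a force-sensing resistor signal crosses a programmed threshold; the signal, sampling rate and threshold are hypothetical, not values from the study.

```python
import numpy as np

def detect_transitions(fsr_signal, threshold):
    """Return sample indices where the FSR signal crosses the threshold.

    A rising crossing (off -> on) marks loading of the sensor site,
    a falling crossing (on -> off) marks unloading.
    """
    loaded = fsr_signal >= threshold
    rising = np.where(~loaded[:-1] & loaded[1:])[0] + 1
    falling = np.where(loaded[:-1] & ~loaded[1:])[0] + 1
    return rising, falling

# Hypothetical heel-FSR trace sampled at 100 Hz (arbitrary units).
fs = 100.0
t = np.arange(0, 2.0, 1.0 / fs)
fsr = 0.5 + 0.5 * np.sin(2 * np.pi * 1.0 * t)   # two synthetic "steps"
rising, falling = detect_transitions(fsr, threshold=0.8)

print("loading-response onsets (s):", rising / fs)
print("swing onsets (s):", falling / fs)
```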
Estimation of the barrier layer thickness in the Indian Ocean using Aquarius Salinity
NASA Astrophysics Data System (ADS)
Felton, Clifford S.; Subrahmanyam, Bulusu; Murty, V. S. N.; Shriver, Jay F.
2014-07-01
Monthly barrier layer thickness (BLT) estimates are derived from satellite measurements using a multilinear regression model (MRM) within the Indian Ocean. Sea surface salinity (SSS) from the recently launched Soil Moisture and Ocean Salinity (SMOS) and Aquarius SAC-D salinity missions are utilized to estimate the BLT. The MRM relates BLT to sea surface salinity (SSS), sea surface temperature (SST), and sea surface height anomalies (SSHA). Three regions where the BLT variability is strongest are selected to evaluate the performance of the MRM for 2012: the Southeast Arabian Sea (SEAS), Bay of Bengal (BoB), and Eastern Equatorial Indian Ocean (EEIO). The MRM derived BLT estimates are compared to gridded Argo and Hybrid Coordinate Ocean Model (HYCOM) BLTs. It is shown that different mechanisms are important for sustaining the BLT variability in each of the selected regions. Sensitivity tests show that SSS is the primary driver of the BLT within the MRM. Results suggest that salinity measurements obtained from Aquarius and SMOS can be useful for tracking and predicting the BLT in the Indian Ocean. Largest MRM errors occur along coastlines and near islands where land contamination skews the satellite SSS retrievals. The BLT evolution during 2012, as well as the advantages and disadvantages of the current model are discussed. BLT estimations using HYCOM simulations display large errors that are related to model layer structure and the selected BLT methodology.
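The regression at the core of an MRM of this kind can be sketched as an ordinary least-squares fit of BLT against SSS, SST and SSHA; the coefficient values and the synthetic data below are placeholders, not those of the study.

```python
import numpy as np

# Hypothetical collocated monthly samples (e.g., Argo-derived BLT with
# satellite SSS, SST and SSHA) for one region.
rng = np.random.default_rng(0)
n = 200
sss = 33.0 + rng.normal(0, 0.5, n)      # psu
sst = 28.0 + rng.normal(0, 1.0, n)      # deg C
ssha = rng.normal(0, 0.08, n)           # m
blt = (20.0 - 8.0 * (sss - 33.0) + 1.5 * (sst - 28.0) + 30.0 * ssha
       + rng.normal(0, 2.0, n))         # m, synthetic "truth"

# Multilinear regression: BLT = b0 + b1*SSS + b2*SST + b3*SSHA
X = np.column_stack([np.ones(n), sss, sst, ssha])
coeffs, *_ = np.linalg.lstsq(X, blt, rcond=None)
blt_hat = X @ coeffs

rmse = np.sqrt(np.mean((blt_hat - blt) ** 2))
print("coefficients [b0, b1, b2, b3]:", np.round(coeffs, 2))
print(f"fit RMSE: {rmse:.2f} m")
```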
Michaelson, M; Walsh, E; Bradley, C P; McCague, P; Owens, R; Sahm, L J
2017-08-01
Prescribing error may result in adverse clinical outcomes leading to increased patient morbidity, mortality and increased economic burden. Many errors occur during transitional care as patients move between different stages and settings of care. To conduct a review of medication information and identify prescribing error among an adult population in an urban hospital. Retrospective review of medication information was conducted. Part 1: an audit of discharge prescriptions which assessed: legibility, compliance with legal requirements, therapeutic errors (strength, dose and frequency) and drug interactions. Part 2: A review of all sources of medication information (namely pre-admission medication list, drug Kardex, discharge prescription, discharge letter) for 15 inpatients to identify unintentional prescription discrepancies, defined as: "undocumented and/or unjustified medication alteration" throughout the hospital stay. Part 1: of the 5910 prescribed items, 53 (0.9%) were deemed illegible. Of the controlled drug prescriptions, 11.1% (n = 167) met all the legal requirements. Therapeutic errors occurred in 41% of prescriptions (n = 479). More than 1 in 5 patients (21.9%) received a prescription containing a drug interaction. Part 2: 175 discrepancies were identified across all sources of medication information, of which 78 were deemed unintentional. Of these, 10.2% (n = 8) occurred at the point of admission, while 76.9% (n = 60) occurred at the point of discharge. The study identified the time of discharge as a point at which prescribing errors are likely to occur. This has implications for patient safety and provider work load in both primary and secondary care.
Earthquake Catalogue of the Caucasus
NASA Astrophysics Data System (ADS)
Godoladze, T.; Gok, R.; Tvaradze, N.; Tumanova, N.; Gunia, I.; Onur, T.
2016-12-01
The Caucasus has a documented historical catalog stretching back to the beginning of the Christian era. Most of the largest historical earthquakes prior to the 19th century are assumed to have occurred on active faults of the Greater Caucasus. Important earthquakes include the Samtskhe earthquake of 1283 (Ms~7.0, Io=9); the Lechkhumi-Svaneti earthquake of 1350 (Ms~7.0, Io=9); and the Alaverdi earthquake of 1742 (Ms~6.8, Io=9). Two significant historical earthquakes that may have occurred within the Javakheti plateau in the Lesser Caucasus are the Tmogvi earthquake of 1088 (Ms~6.5, Io=9) and the Akhalkalaki earthquake of 1899 (Ms~6.3, Io=8-9). Large earthquakes that occurred in the Caucasus within the period of instrumental observation include: Gori 1920; Tabatskuri 1940; Chkhalta 1963; the Racha earthquake of 1991 (Ms=7.0), the largest event ever recorded in the region; the Barisakho earthquake of 1992 (M=6.5); and the Spitak earthquake of 1988 (Ms=6.9, 100 km south of Tbilisi), which killed over 50,000 people in Armenia. Recently, permanent broadband stations have been deployed across the region as part of the various national networks (Georgia (~25 stations), Azerbaijan (~35 stations), Armenia (~14 stations)). The data from the last 10 years of observation provides an opportunity to perform modern, fundamental scientific investigations. In order to improve seismic data quality, a catalog of all instrumentally recorded earthquakes has been compiled by the IES (Institute of Earth Sciences/NSMC, Ilia State University) in the framework of the regional joint project (Armenia, Azerbaijan, Georgia, Turkey, USA) "Probabilistic Seismic Hazard Assessment (PSHA) in the Caucasus". The catalogue consists of more than 80,000 events. First arrivals of each earthquake of Mw>=4.0 have been carefully examined. To reduce calculation errors, we corrected arrivals from the seismic records. We improved the locations of the events and recalculated moment magnitudes in order to obtain a unified magnitude catalogue for the region. The results will serve as the input for the seismic hazard assessment of the region.
NASA Technical Reports Server (NTRS)
Chamberlain, D. M.; Elliot, J. L.
1997-01-01
We present a method for speeding up numerical calculations of a light curve for a stellar occultation by a planetary atmosphere with an arbitrary atmospheric model that has spherical symmetry. This improved speed makes least-squares fitting for model parameters practical. Our method takes as input several sets of values for the first two radial derivatives of the refractivity at different values of model parameters, and interpolates to obtain the light curve at intermediate values of one or more model parameters. It was developed for small occulting bodies such as Pluto and Triton, but is applicable to planets of all sizes. We also present the results of a series of tests showing that our method calculates light curves that are correct to an accuracy of 10(exp -4) of the unocculted stellar flux. The test benchmarks are (i) an atmosphere with a 1/r dependence of temperature, which yields an analytic solution for the light curve, (ii) an atmosphere that produces an exponential refraction angle, and (iii) a small-planet isothermal model. With our method, least-squares fits to noiseless data also converge to values of parameters with fractional errors of no more than 10(exp -4), with the largest errors occurring in small planets. These errors are well below the precision of the best stellar occultation data available. Fits to noisy data had formal errors consistent with the level of synthetic noise added to the light curve. We conclude: (i) one should interpolate refractivity derivatives and then form light curves from the interpolated values, rather than interpolating the light curves themselves; (ii) for the most accuracy, one must specify the atmospheric model for radii many scale heights above half light; and (iii) for atmospheres with smoothly varying refractivity with altitude, light curves can be sampled as coarsely as two points per scale height.
NASA Astrophysics Data System (ADS)
Hamlyn, J.; Keir, D.; Hammond, J. O.; Wright, T. J.; Neuberg, J.; Kibreab, A.; Ogubazghi, G.; Goitom, B.
2012-12-01
Nabro volcano sits on the Danakil block next to the Afar triangle, nested between the Somalian, Arabian and Nubian plates. It is the largest and most central volcano within the ~110-km-long, SSW-NNE trending Nabro Volcanic Range (NVR), which extends from the Afar depression to the Red Sea. On 12 June 2011, Nabro volcano suddenly erupted after being inactive for 10,000 years. The resulting ash cloud rose 15 km, reaching the stratosphere and forcing aircraft to re-route. The eruption also produced a 17 km long lava flow and ranks as one of the largest SO2 eruptions since the Mt. Pinatubo (1991) event. In response, a network of eight seismometers was deployed around the active vent and was recording by 31 August. Satellites with InSAR acquisition capabilities, including TerraSAR-X, Cosmo-SkyMed and Envisat, were also tasked to the region. We processed the seismic signals detected by the array and those arriving at a regional seismic station (located in the north west) to provide accurate earthquake locations for the period September-October 2011. We used Hypoinverse-2000 to provide preliminary locations for events, which were then relocated using HypoDD. Absolute error after Hypoinverse-2000 processing was, on average, approximately ±2 and ±4 km in the horizontal and the vertical directions, respectively. These errors were reduced to a relative error of ±20 and ±30 m in the horizontal and vertical directions, respectively, using HypoDD. The parameters controlling the relocation were investigated in order to monitor the bias they introduced in the final positioning of the hypocentres. The hypocentres produced have a very small relative depth error (~±30 m), and show columns and clusters of activity as well as areas devoid of events. The majority of the seismic events are located at the active vent and within Nabro caldera, with fewer events located on the flanks. There also appears to be a smaller cluster of events to the south-west of Nabro beneath the neighbouring Mallahle volcanic caldera, despite no eruption occurring there. This may imply some form of co-dependent relationship within the magma system below both calderas. We also investigated temporal patterns, but none were apparent at this late stage of the eruption. In addition to this seismic data, InSAR acquisitions from the TerraSAR-X catalogue have also been processed. We will show a time series analysis of stripmap acquisitions over Nabro, taken immediately after the eruption in order to show areas of ground deformation. These will be compared to the spatial and temporal distribution of seismicity.
Determination of Barometric Altimeter Errors for the Orion Exploration Flight Test-1 Entry
NASA Technical Reports Server (NTRS)
Brown, Denise L.; Bunoz, Jean-Philippe; Gay, Robert
2012-01-01
The Exploration Flight Test 1 (EFT-1) mission is the unmanned flight test for the upcoming Multi-Purpose Crew Vehicle (MPCV). During entry, the EFT-1 vehicle will trigger several Landing and Recovery System (LRS) events, such as parachute deployment, based on on-board altitude information. The primary altitude source is the filtered navigation solution updated with GPS measurement data. The vehicle also has three barometric altimeters that will be used to measure atmospheric pressure during entry. In the event that GPS data is not available during entry, the altitude derived from the barometric altimeter pressure will be used to trigger chute deployment for the drogues and main parachutes. Therefore it is important to understand the impact of error sources on the pressure measured by the barometric altimeters and on the altitude derived from that pressure. The error sources for the barometric altimeters are not independent, and many error sources result in bias in a specific direction. Therefore conventional error budget methods could not be applied. Instead, high fidelity Monte-Carlo simulation was performed and error bounds were determined based on the results of this analysis. Aerodynamic errors were the largest single contributor to the error budget for the barometric altimeters. The large errors drove a change to the altitude trigger setpoint for FBC jettison deploy.
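Because the individual error sources are correlated and often one-sided, a Monte Carlo treatment is used rather than a root-sum-square budget. The sketch below illustrates the idea with a standard-atmosphere pressure-to-altitude conversion and invented error terms; it is not the EFT-1 error model.

```python
import numpy as np

P0, T0, L, g, R = 101325.0, 288.15, 0.0065, 9.80665, 287.053  # ISA constants

def pressure_to_altitude(p):
    """Invert the ISA troposphere pressure law to get geopotential altitude (m)."""
    return (T0 / L) * (1.0 - (p / P0) ** (R * L / g))

rng = np.random.default_rng(1)
n = 100_000
true_alt = 3000.0                      # hypothetical parachute-trigger altitude (m)
true_p = P0 * (1.0 - L * true_alt / T0) ** (g / (R * L))

# Invented, partially one-sided error terms (Pa): a sensor bias, measurement
# noise, and an aerodynamic (local flow field) error skewed in one direction.
sensor_bias = rng.normal(50.0, 100.0, n)
noise = rng.normal(0.0, 30.0, n)
aero_error = np.abs(rng.normal(0.0, 300.0, n))   # one-sided by construction

measured_p = true_p + sensor_bias + noise + aero_error
alt_error = pressure_to_altitude(measured_p) - true_alt

lo, hi = np.percentile(alt_error, [0.135, 99.865])   # ~3-sigma bounds
print(f"altitude error bounds: [{lo:.1f}, {hi:.1f}] m")
```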
Wei, Wenjuan; Xiong, Jianyin; Zhang, Yinping
2013-01-01
Mass transfer models are useful in predicting the emissions of volatile organic compounds (VOCs) and formaldehyde from building materials in indoor environments. They are also useful for human exposure evaluation and in sustainable building design. The measurement errors in the emission characteristic parameters in these mass transfer models, i.e., the initial emittable concentration (C0), the diffusion coefficient (D), and the partition coefficient (K), can result in errors in predicting indoor VOC and formaldehyde concentrations. These errors have not yet been quantitatively well analyzed in the literature. This paper addresses this by using modelling to assess these errors for some typical building conditions. The error in C0, as measured in environmental chambers and applied to a reference living room in Beijing, has the largest influence on the model prediction error in indoor VOC and formaldehyde concentration, while the error in K has the least effect. A correlation between the errors in D, K, and C0 and the error in the indoor VOC and formaldehyde concentration prediction is then derived for engineering applications. In addition, the influence of temperature on the model prediction of emissions is investigated. It shows the impact of temperature fluctuations on the prediction errors in indoor VOC and formaldehyde concentrations to be less than 7% at 23±0.5°C and less than 30% at 23±2°C.
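To illustrate how an error in an emission parameter propagates to the predicted indoor concentration, the sketch below uses a deliberately simplified single-zone mass balance with an exponentially decaying emission rate; it is not the diffusion-based mass transfer model analyzed in the paper, and all parameter values are hypothetical.

```python
import numpy as np

def indoor_concentration(E0, k, Q=30.0, V=45.0, t_end=48.0, dt=0.01):
    """Euler-integrate dC/dt = E(t)/V - (Q/V)*C with E(t) = E0*exp(-k*t).

    E0: initial emission rate (mg/h), k: decay constant (1/h),
    Q: ventilation rate (m^3/h), V: room volume (m^3).
    Returns the concentration (mg/m^3) at t_end hours.
    """
    c = 0.0
    for t in np.arange(0.0, t_end, dt):
        c += dt * (E0 * np.exp(-k * t) / V - (Q / V) * c)
    return c

base = indoor_concentration(E0=10.0, k=0.05)
for name, kwargs in [("E0 +10%", dict(E0=11.0, k=0.05)),
                     ("k  +10%", dict(E0=10.0, k=0.055))]:
    pred = indoor_concentration(**kwargs)
    print(f"{name}: predicted concentration changes by "
          f"{100 * (pred - base) / base:+.1f}%")
```

In this toy model a +10% error in the emission rate maps directly to a +10% concentration error, while the same relative error in the decay constant produces a larger, nonlinear change, which is the kind of asymmetric sensitivity the paper quantifies for C0, D and K.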
Results from a Sting Whip Correction Verification Test at the Langley 16-Foot Transonic Tunnel
NASA Technical Reports Server (NTRS)
Crawford, B. L.; Finley, T. D.
2002-01-01
In recent years, great strides have been made toward correcting the largest error in inertial Angle of Attack (AoA) measurements in wind tunnel models. This error source is commonly referred to as 'sting whip' and is caused by aerodynamically induced forces imparting dynamics on sting-mounted models. These aerodynamic forces cause the model to whip through an arc section in the pitch and/or yaw planes, thus generating a centrifugal acceleration and creating a bias error in the AoA measurement. It has been shown that, under certain conditions, this induced AoA error can be greater than one third of a degree. An error of this magnitude far exceeds the target AoA accuracy of 0.01 deg established at NASA Langley Research Center (LaRC) and elsewhere. New sting whip correction techniques being developed at LaRC are able to measure and reduce this sting whip error by an order of magnitude. With this increase in accuracy, the 0.01 deg AoA target is achievable under all but the most severe conditions.
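As a rough, back-of-the-envelope illustration of why model whip biases an accelerometer-based AoA measurement, the numbers below show the apparent tilt produced by the centripetal acceleration of a model oscillating through a small arc; the amplitude, frequency and arc radius are invented and do not come from the LaRC work.

```python
import numpy as np

g = 9.80665          # m/s^2
r = 1.5              # hypothetical arc radius from pivot to AoA sensor (m)
A = np.deg2rad(0.25) # hypothetical whip half-amplitude (rad)
f = 10.0             # hypothetical whip frequency (Hz)
omega = 2 * np.pi * f

# For theta(t) = A*sin(omega*t), the angular rate is A*omega*cos(omega*t); the
# centripetal acceleration r*thetadot**2 has a nonzero time average even though
# the motion itself averages to zero -- hence a bias, not just added noise.
a_mean = r * (A * omega) ** 2 / 2.0
bias_deg = np.degrees(np.arctan2(a_mean, g))
print(f"mean centripetal acceleration: {a_mean:.3f} m/s^2 -> "
      f"apparent AoA bias ~ {bias_deg:.2f} deg")
```

With these made-up values the bias comes out near a third of a degree, the same order as the worst-case error quoted in the abstract.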
Parameter Estimation for GRACE-FO Geometric Ranging Errors
NASA Astrophysics Data System (ADS)
Wegener, H.; Mueller, V.; Darbeheshti, N.; Naeimi, M.; Heinzel, G.
2017-12-01
Onboard GRACE-FO, the novel Laser Ranging Instrument (LRI) serves as a technology demonstrator, but it is a fully functional instrument to provide an additional high-precision measurement of the primary mission observable: the biased range between the two spacecraft. Its two largest expected error sources are laser frequency noise and tilt-to-length (TTL) coupling. While not much can be done about laser frequency noise, the mechanics of the TTL error are widely understood. They depend, however, on unknown parameters. In order to improve the quality of the ranging data, it is hence essential to accurately estimate these parameters and remove the resulting TTL error from the data. Means to do so will be discussed. In particular, the possibility of using calibration maneuvers, the utility of the attitude information provided by the LRI via Differential Wavefront Sensing (DWS), and the benefit from combining ranging data from LRI with ranging data from the established microwave ranging, will be mentioned.
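A minimal sketch of the kind of parameter estimation described: if the TTL error is modeled as a linear coupling of pitch and yaw pointing angles (as would be available from DWS) into the measured range, the coupling coefficients can be estimated by least squares and the resulting error subtracted. The coupling model, coefficient values and noise levels below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
# Hypothetical DWS pitch/yaw pointing angles (rad) and a smooth "true" range (m).
pitch = 1e-4 * rng.standard_normal(n)
yaw = 1e-4 * rng.standard_normal(n)
t = np.linspace(0, 1, n)
true_range = 1e-6 * np.sin(2 * np.pi * 3 * t)

# Synthetic measurement: true range + linear TTL coupling + readout noise.
c_pitch, c_yaw = 200e-6, -150e-6          # assumed coupling factors (m/rad)
measured = (true_range + c_pitch * pitch + c_yaw * yaw
            + 1e-9 * rng.standard_normal(n))

# In practice the orbital/gravitational range signal must be co-estimated or
# removed first; here it is assumed known so the sketch stays short.
A = np.column_stack([pitch, yaw])
coeffs, *_ = np.linalg.lstsq(A, measured - true_range, rcond=None)
corrected = measured - A @ coeffs

print("estimated couplings (m/rad):", coeffs)
print("rms residual after TTL removal (m):", np.std(corrected - true_range))
```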
Seasonal to interannual Arctic sea ice predictability in current global climate models
NASA Astrophysics Data System (ADS)
Tietsche, S.; Day, J. J.; Guemas, V.; Hurlin, W. J.; Keeley, S. P. E.; Matei, D.; Msadek, R.; Collins, M.; Hawkins, E.
2014-02-01
We establish the first intermodel comparison of seasonal to interannual predictability of present-day Arctic climate by performing coordinated sets of idealized ensemble predictions with four state-of-the-art global climate models. For Arctic sea ice extent and volume, there is potential predictive skill for lead times of up to 3 years, and potential prediction errors have similar growth rates and magnitudes across the models. Spatial patterns of potential prediction errors differ substantially between the models, but some features are robust. Sea ice concentration errors are largest in the marginal ice zone, and in winter they are almost zero away from the ice edge. Sea ice thickness errors are amplified along the coasts of the Arctic Ocean, an effect that is dominated by sea ice advection. These results give an upper bound on the ability of current global climate models to predict important aspects of Arctic climate.
Fischer, Melissa A; Mazor, Kathleen M; Baril, Joann; Alper, Eric; DeMarco, Deborah; Pugnaire, Michele
2006-01-01
CONTEXT Trainees are exposed to medical errors throughout medical school and residency. Little is known about what facilitates and limits learning from these experiences. OBJECTIVE To identify major factors and areas of tension in trainees' learning from medical errors. DESIGN, SETTING, AND PARTICIPANTS Structured telephone interviews with 59 trainees (medical students and residents) from 1 academic medical center. Five authors reviewed transcripts of audiotaped interviews using content analysis. RESULTS Trainees were aware that medical errors occur from early in medical school. Many had an intense emotional response to the idea of committing errors in patient care. Students and residents noted variation and conflict in institutional recommendations and individual actions. Many expressed role confusion regarding whether and how to initiate discussion after errors occurred. Some noted the conflict inherent in reporting errors to seniors who were responsible for their evaluation. Learners requested more open discussion of actual errors and faculty disclosure. No students or residents felt that they learned better from near misses than from actual errors, and many believed that they learned the most when harm was caused. CONCLUSIONS Trainees are aware of medical errors, but remaining tensions may limit learning. Institutions can immediately address variability in faculty response and local culture by disseminating clear, accessible algorithms to guide behavior when errors occur. Educators should develop longitudinal curricula that integrate actual cases and faculty disclosure. Future multi-institutional work should focus on identified themes such as teaching and learning in emotionally charged situations, learning from errors and near misses, and the balance between individual and systems responsibility. PMID:16704381
Performance Data Errors in Air Carrier Operations: Causes and Countermeasures
NASA Technical Reports Server (NTRS)
Berman, Benjamin A.; Dismukes, R Key; Jobe, Kimberly K.
2012-01-01
Several airline accidents have occurred in recent years as the result of erroneous weight or performance data used to calculate V-speeds, flap/trim settings, required runway lengths, and/or required climb gradients. In this report we consider 4 recent studies of performance data error, report our own study of ASRS-reported incidents, and provide countermeasures that can reduce vulnerability to accidents caused by performance data errors. Performance data are generated through a lengthy process involving several employee groups and computer and/or paper-based systems. Although much of the airline industry's concern has focused on errors pilots make in entering FMS data, we determined that errors occur at every stage of the process and that errors by ground personnel are probably at least as frequent and certainly as consequential as errors by pilots. Most of the errors we examined could in principle have been trapped by effective use of existing procedures or technology; however, the fact that they were not trapped anywhere indicates the need for better countermeasures. Existing procedures are often inadequately designed to mesh with the ways humans process information. Because procedures often do not take into account the ways in which information flows in actual flight ops and time pressures and interruptions experienced by pilots and ground personnel, vulnerability to error is greater. Some aspects of NextGen operations may exacerbate this vulnerability. We identify measures to reduce the number of errors and to help catch the errors that occur.
ATC operational error analysis.
DOT National Transportation Integrated Search
1972-01-01
The primary causes of operational errors are discussed and the effects of these errors on an ATC system's performance are described. No attempt is made to specify possible error models for the spectrum of blunders that can occur although previous res...
Global Marine Gravity and Bathymetry at 1-Minute Resolution
NASA Astrophysics Data System (ADS)
Sandwell, D. T.; Smith, W. H.
2008-12-01
We have developed global gravity and bathymetry grids at 1-minute resolution. Three approaches are used to reduce the error in the satellite-derived marine gravity anomalies. First, we have retracked the raw waveforms from the ERS-1 and Geosat/GM missions resulting in improvements in range precision of 40% and 27%, respectively. Second, we have used the recently published EGM2008 global gravity model as a reference field to provide a seamless gravity transition from land to ocean. Third we have used a biharmonic spline interpolation method to construct residual vertical deflection grids. Comparisons between shipboard gravity and the global gravity grid show errors ranging from 2.0 mGal in the Gulf of Mexico to 4.0 mGal in areas with rugged seafloor topography. The largest errors occur on the crests of narrow large seamounts. The bathymetry grid is based on prediction from satellite gravity and available ship soundings. Global soundings were assembled from a wide variety of sources including NGDC/GEODAS, NOAA Coastal Relief, CCOM, IFREMER, JAMSTEC, NSF Polar Programs, UKHO, LDEO, HIG, SIO and numerous miscellaneous contributions. The National Geospatial-Intelligence Agency and other volunteering hydrographic offices within the International Hydrographic Organization provided globally significant shallow water (< 300 m) soundings derived from their nautical charts. All soundings were converted to a common format and were hand-edited in relation to a smooth bathymetric model. Land elevations and shoreline location are based on a combination of SRTM30, GTOPO30, and ICESat data. A new feature of the bathymetry grid is a matching grid of source identification numbers that enables one to establish the origin of the depth estimate in each grid cell. Both the gravity and bathymetry grids are freely available.
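The residual-gridding step can be sketched with SciPy's radial basis function interpolator, using the thin-plate-spline kernel as a stand-in for the biharmonic spline; the scattered "residual vertical deflection" values below are synthetic and the domain is a toy one-degree box.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
# Synthetic scattered residual vertical deflections (microradians) along tracks.
pts = rng.uniform(0.0, 1.0, size=(300, 2))   # (lon, lat) in degrees, toy domain
vals = (np.sin(6 * pts[:, 0]) * np.cos(6 * pts[:, 1])
        + 0.05 * rng.standard_normal(300))

# Thin-plate spline (a biharmonic-type kernel); smoothing damps observation noise.
interp = RBFInterpolator(pts, vals, kernel="thin_plate_spline", smoothing=1e-3)

# Evaluate on a regular grid (coarse here for brevity; 1-minute in the real product).
lon, lat = np.meshgrid(np.linspace(0, 1, 61), np.linspace(0, 1, 61))
grid = interp(np.column_stack([lon.ravel(), lat.ravel()])).reshape(lon.shape)
print("gridded residual deflection field:", grid.shape)
```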
Evaluation of micro-GPS receivers for tracking small-bodied mammals
Shipley, Lisa A.; Forbey, Jennifer S.; Olsoy, Peter J.
2017-01-01
GPS telemetry markedly enhances the temporal and spatial resolution of animal location data, and recent advances in micro-GPS receivers permit their deployment on small mammals. One such technological advance, snapshot technology, allows for improved battery life by reducing the time to first fix via postponing recovery of satellite ephemeris (satellite location) data and processing of locations. However, no previous work has employed snapshot technology for small, terrestrial mammals. We evaluated performance of two types of micro-GPS (< 20 g) receivers (traditional and snapshot) on a small, semi-fossorial lagomorph, the pygmy rabbit (Brachylagus idahoensis), to understand how GPS errors might influence fine-scale assessments of space use and habitat selection. During stationary tests, microtopography (i.e., burrows) and satellite geometry had the largest influence on GPS fix success rate (FSR) and location error (LE). There was no difference between FSR while animals wore the GPS collars above ground (determined via light sensors) and FSR generated during stationary, above-ground trials, suggesting that animal behavior other than burrowing did not markedly influence micro-GPS errors. In our study, traditional micro-GPS receivers demonstrated similar FSR and LE to snapshot receivers, however, snapshot receivers operated inconsistently due to battery and software failures. In contrast, the initial traditional receivers deployed on animals experienced some breakages, but a modified collar design consistently functioned as expected. If such problems were resolved, snapshot technology could reduce the tradeoff between fix interval and battery life that occurs with traditional micro-GPS receivers. Our results suggest that micro-GPS receivers are capable of addressing questions about space use and resource selection by small mammals, but that additional techniques might be needed to identify use of habitat structures (e.g., burrows, tree cavities, rock crevices) that could affect micro-GPS performance and bias study results. PMID:28301495
NASA Astrophysics Data System (ADS)
Torres, A. D.; Keppel-Aleks, G.; Doney, S. C.; Feng, S.; Lauvaux, T.; Fendrock, M. A.; Rheuben, J.
2017-12-01
Remote sensing instruments provide an unprecedented density of observations of the atmospheric CO2 column average mole fraction (denoted as XCO2), which can be used to constrain regional scale carbon fluxes. Inferring fluxes from XCO2 observations is challenging, as measurements and inversion methods are sensitive not only to the imprint of local and large-scale fluxes, but also to mesoscale and synoptic-scale atmospheric transport. Quantifying the fine-scale variability in XCO2 from mesoscale and synoptic-scale atmospheric transport will likely improve overall error estimates from flux inversions by improving estimates of representation errors that occur when XCO2 observations are compared to modeled XCO2 in relatively coarse transport models. Here, we utilize various statistical methods to quantify the imprint of atmospheric transport on XCO2 observations. We compare spatial variations along Orbiting Carbon Observatory (OCO-2) satellite tracks to temporal variations observed by the Total Carbon Column Observing Network (TCCON). We observe a coherent seasonal cycle of both within-day temporal and fine-scale spatial variability (of order 10 km) of XCO2 from these two datasets, suggestive of the imprint of mesoscale systems. To account for other potential sources of error in the XCO2 retrieval, we compare observed temporal and spatial variations of XCO2 to high-resolution output from the Weather Research and Forecasting (WRF) model run at 9 km resolution. In both simulations and observations, the Northern hemisphere mid-latitude XCO2 showed peak variability during the growing season when atmospheric gradients are largest. These results are qualitatively consistent with our expectations of seasonal variations of the imprint of synoptic and mesoscale atmospheric transport on XCO2 observations, suggesting that these statistical methods could be sensitive to the imprint of atmospheric transport on XCO2 observations.
NASA Astrophysics Data System (ADS)
Zhang, Shengjun; Li, Jiancheng; Jin, Taoyong; Che, Defu
2018-04-01
Marine gravity anomaly derived from satellite altimetry can be computed using either sea surface height or sea surface slope measurements. Here we consider the slope method and evaluate the errors in the slope of the corrections supplied with the Jason-1 geodetic mission data. The slope corrections are divided into three groups based on whether they are small, comparable, or large with respect to the 1 microradian error in the current sea surface slope models. (1) The small and thus negligible corrections include the dry tropospheric correction, inverted barometer correction, solid earth tide and geocentric pole tide. (2) The moderately important corrections include the wet tropospheric correction, dual-frequency ionospheric correction and sea state bias. The radiometer measurements are preferred over model values in the geophysical data records for constraining the wet tropospheric effect owing to the highly variable water-vapor structure in the atmosphere. The dual-frequency ionospheric correction and sea state bias should not be added directly to the range observations when deriving sea surface slopes, since their inherent errors may cause abnormal sea surface slopes; along-track smoothing with uniform weights over an appropriate window is an effective strategy to avoid introducing extra noise. The slopes calculated from the radiometer wet tropospheric correction and from the along-track smoothed dual-frequency ionospheric correction and sea state bias are generally within ±0.5 microradians and no larger than 1 microradian. (3) The ocean tide has the largest influence on the derived sea surface slopes, although most ocean tide slopes fall within ±3 microradians. Larger ocean tide slopes mostly occur over marginal and island-surrounding seas, and additional tidal models with better precision or extended coverage (e.g. Got-e) are strongly recommended for updating the corrections in the geophysical data records.
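A minimal sketch of the along-track strategy described above: smooth a noisy correction (e.g., the dual-frequency ionospheric correction) with a uniform window before differencing into slopes; the along-track spacing, window width and signals are placeholders, not Jason-1 values.

```python
import numpy as np

def boxcar_smooth(x, width):
    """Uniform-weight along-track smoothing over `width` samples."""
    return np.convolve(x, np.ones(width) / width, mode="same")

rng = np.random.default_rng(4)
n = 2000
ds = 350.0   # along-track sample spacing (m); placeholder value

# Synthetic dual-frequency ionospheric correction: a smooth large-scale signal
# plus measurement noise (values in metres; purely illustrative).
signal = 0.03 * np.sin(np.linspace(0, 4 * np.pi, n))
correction = signal + rng.normal(0, 0.01, n)

# Differencing the raw correction produces spurious slopes; smoothing it with
# uniform weights over a wide window first keeps its slope contribution small.
slope_raw = np.gradient(correction, ds) * 1e6               # microradians
slope_smooth = np.gradient(boxcar_smooth(correction, 57), ds) * 1e6

print("max |slope| from raw correction:     ", np.max(np.abs(slope_raw)), "microrad")
print("max |slope| from smoothed correction:", np.max(np.abs(slope_smooth)), "microrad")
```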
Error and bias in size estimates of whale sharks: implications for understanding demography.
Sequeira, Ana M M; Thums, Michele; Brooks, Kim; Meekan, Mark G
2016-03-01
Body size and age at maturity are indicative of the vulnerability of a species to extinction. However, they are both difficult to estimate for large animals that cannot be restrained for measurement. For very large species such as whale sharks, body size is commonly estimated visually, potentially resulting in the addition of errors and bias. Here, we investigate the errors and bias associated with total lengths of whale sharks estimated visually by comparing them with measurements collected using a stereo-video camera system at Ningaloo Reef, Western Australia. Using linear mixed-effects models, we found that visual lengths were biased towards underestimation with increasing size of the shark. When using the stereo-video camera, the number of larger individuals that were possibly mature (or close to maturity) that were detected increased by approximately 10%. Mean lengths calculated by each method were, however, comparable (5.002 ± 1.194 and 6.128 ± 1.609 m, s.d.), confirming that the population at Ningaloo is mostly composed of immature sharks based on published lengths at maturity. We then collated data sets of total lengths sampled from aggregations of whale sharks worldwide between 1995 and 2013. Except for locations in the East Pacific where large females have been reported, these aggregations also largely consisted of juveniles (mean lengths less than 7 m). Sightings of the largest individuals were limited and occurred mostly prior to 2006. This result highlights the urgent need to locate and quantify the numbers of mature male and female whale sharks in order to ascertain the conservation status and ensure persistence of the species.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Velec, Michael, E-mail: michael.velec@rmp.uhn.on.ca; Institute of Medical Science, University of Toronto, Toronto, ON; Moseley, Joanne L.
2012-07-15
Purpose: To investigate the accumulated dose deviations to tumors and normal tissues in liver stereotactic body radiotherapy (SBRT) and investigate their geometric causes. Methods and Materials: Thirty previously treated liver cancer patients were retrospectively evaluated. Stereotactic body radiotherapy was planned on the static exhale CT for 27-60 Gy in 6 fractions, and patients were treated in free-breathing with daily cone-beam CT guidance. Biomechanical model-based deformable image registration accumulated dose over both the planning four-dimensional (4D) CT (predicted breathing dose) and also over each fraction's respiratory-correlated cone-beam CT (accumulated treatment dose). The contribution of different geometric errors to changes between the accumulated and predicted breathing dose was quantified. Results: Twenty-one patients (70%) had accumulated dose deviations relative to the planned static prescription dose >5%, ranging from -15% to 5% in tumors and -42% to 8% in normal tissues. Sixteen patients (53%) still had deviations relative to the 4D CT-predicted dose, which were similar in magnitude. Thirty-two tissues in these 16 patients had deviations >5% relative to the 4D CT-predicted dose, and residual setup errors (n = 17) were most often the largest cause of the deviations, followed by deformations (n = 8) and breathing variations (n = 7). Conclusion: The majority of patients had accumulated dose deviations >5% relative to the static plan. Significant deviations relative to the predicted breathing dose still occurred in more than half the patients, commonly owing to residual setup errors. Accumulated SBRT dose may be warranted to pursue further dose escalation, adaptive SBRT, and aid in correlation with clinical outcomes.
Multiplate Radiation Shields: Investigating Radiational Heating Errors
NASA Astrophysics Data System (ADS)
Richardson, Scott James
1995-01-01
Multiplate radiation shield errors are examined using the following techniques: (1) analytic heat transfer analysis, (2) optical ray tracing, (3) numerical fluid flow modeling, (4) laboratory testing, (5) wind tunnel testing, and (6) field testing. Guidelines for reducing radiational heating errors are given that are based on knowledge of the temperature sensor to be used, with the shield being chosen to match the sensor design. Small, reflective sensors that are exposed directly to the air stream (not inside a filter as is the case for many temperature and relative humidity probes) should be housed in a shield that provides ample mechanical and rain protection while impeding the air flow as little as possible; protection from radiation sources is of secondary importance. If a sensor does not meet the above criteria (i.e., is large or absorbing), then a standard Gill shield performs reasonably well. A new class of shields, called part-time aspirated multiplate radiation shields, is introduced. This type of shield consists of a multiplate design usually operated in a passive manner but equipped with a fan-forced aspiration capability to be used when necessary (e.g., low wind speed). The fans used here are 12 V DC that can be operated with a small dedicated solar panel. This feature allows the fan to operate when global solar radiation is high, which is when the largest radiational heating errors usually occur. A prototype shield was constructed and field tested and an example is given in which radiational heating errors were reduced from 2 °C to 1.2 °C. The fan was run continuously to investigate night-time low wind speed errors and the prototype shield reduced errors from 1.6 °C to 0.3 °C. Part-time aspirated shields are an inexpensive alternative to fully aspirated shields and represent a good compromise between cost, power consumption, reliability (because they should be no worse than a standard multiplate shield if the fan fails), and accuracy. In addition, it is possible to modify existing passive shields to incorporate part-time aspiration, thus making them even more cost-effective. Finally, a new shield is described that incorporates a large diameter top plate that is designed to shade the lower portion of the shield. This shield increases flow through it by 60%, compared to the Gill design and it is likely to reduce radiational heating errors, although it has not been tested.
NASA Astrophysics Data System (ADS)
Staubwasser, M.; Sirocko, F.; Erlenkeuser, H.; Grootes, P. M.; Segl, M.
2003-04-01
Planktonic oxygen isotope ratios from the well-dated laminated sediment core 63KA off the river Indus delta are presented. The record reveals significant climate changes in the south Asian monsoon system throughout the Holocene. The most prominent event of the early-mid Holocene occurred after 8.4 ka BP and is within dating error of the GISP/GRIP event centered at 8.2 ka BP. The late Holocene is generally more variable and the largest change of the entire Holocene occurred at 4.2 ka BP. This event is concordant with the end of urban Harappan civilization in the Indus valley. Opposing isotopic trends across the northern Arabian Sea surface indicate a reduction in Indus river discharge at that time. Consequently, sustained drought may have initiated the archaeologically recorded interval of southeastward habitat tracking within the Harappan cultural domain. The hemispheric significance of the 4.2 ka BP event is evident from concordant climate change in the eastern Mediterranean and the Middle East. The remainder of the late Holocene shows drought cycles of approximately 700 years that are coherent with the evolution of cosmogenic radiocarbon production rates in the atmosphere. This suggests that solar variability is one fundamental cause behind late Holocene rainfall changes over south Asia.
Altimeter error sources at the 10-cm performance level
NASA Technical Reports Server (NTRS)
Martin, C. F.
1977-01-01
Error sources affecting the calibration and operational use of a 10 cm altimeter are examined to determine the magnitudes of current errors and the investigations necessary to reduce them to acceptable bounds. Errors considered include those affecting operational data pre-processing, and those affecting altitude bias determination, with error budgets developed for both. The most significant error sources affecting pre-processing are bias calibration, propagation corrections for the ionosphere, and measurement noise. No ionospheric models are currently validated at the required 10-25% accuracy level. The optimum smoothing to reduce the effects of measurement noise is investigated and found to be on the order of one second, based on the TASC model of geoid undulations. The 10 cm calibration is found to be feasible only through the use of altimeter passes at very high elevation for a tracking station that tracks very close to the time of the altimeter track, such as a high-elevation pass across the island of Bermuda. By far the largest error source, based on the current state-of-the-art, is the location of the island tracking station relative to mean sea level in the surrounding ocean areas.
Relationship auditing of the FMA ontology
Gu, Huanying (Helen); Wei, Duo; Mejino, Jose L.V.; Elhanan, Gai
2010-01-01
The Foundational Model of Anatomy (FMA) ontology is a domain reference ontology based on a disciplined modeling approach. Due to its large size, semantic complexity and manual data entry process, errors and inconsistencies are unavoidable and might remain within the FMA structure without detection. In this paper, we present computable methods to highlight candidate concepts for various relationship assignment errors. The process starts with locating structures formed by transitive structural relationships (part_of, tributary_of, branch_of) and examining their assignments in the context of the IS-A hierarchy. The algorithms were designed to detect five major categories of possible incorrect relationship assignments: circular, mutually exclusive, redundant, inconsistent, and missed entries. A domain expert reviewed samples of these presumptive errors to confirm the findings. Seven thousand and fifty-two presumptive errors were detected, the largest proportion related to part_of relationship assignments. The results highlight the fact that errors are unavoidable in complex ontologies and that well designed algorithms can help domain experts to focus on concepts with high likelihood of errors and maximize their effort to ensure consistency and reliability. In the future, similar methods might be integrated with data entry processes to offer real-time error detection. PMID:19475727
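One of the five error categories, circular relationship assignments, can be checked with a straightforward depth-first search over a transitive relationship (e.g., part_of) viewed as a directed graph; the tiny concept graph below is hypothetical, not FMA content.

```python
def find_cycles(edges):
    """Return concepts involved in circular assignments of a transitive relation.

    `edges` maps each concept to the concepts it is (e.g.) part_of.
    """
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in edges}
    in_cycle = set()

    def dfs(node, path):
        color[node] = GRAY
        path.append(node)
        for parent in edges.get(node, ()):
            if color.get(parent, WHITE) == GRAY:          # back edge -> cycle
                in_cycle.update(path[path.index(parent):])
            elif color.get(parent, WHITE) == WHITE:
                dfs(parent, path)
        path.pop()
        color[node] = BLACK

    for node in list(edges):
        if color[node] == WHITE:
            dfs(node, [])
    return in_cycle

# Hypothetical part_of assignments with one deliberate error (C -> A closes a loop).
part_of = {"A": ["B"], "B": ["C"], "C": ["A"], "D": ["B"], "E": []}
print("concepts in circular part_of assignments:", sorted(find_cycles(part_of)))
```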
Investigation of Primary Mirror Segment's Residual Errors for the Thirty Meter Telescope
NASA Technical Reports Server (NTRS)
Seo, Byoung-Joon; Nissly, Carl; Angeli, George; MacMynowski, Doug; Sigrist, Norbert; Troy, Mitchell; Williams, Eric
2009-01-01
The primary mirror segment aberrations after shape corrections with warping harness have been identified as the single largest error term in the Thirty Meter Telescope (TMT) image quality error budget. In order to better understand the likely errors and how they will impact the telescope performance we have performed detailed simulations. We first generated unwarped primary mirror segment surface shapes that met TMT specifications. Then we used the predicted warping harness influence functions and a Shack-Hartmann wavefront sensor model to determine estimates for the 492 corrected segment surfaces that make up the TMT primary mirror. Surface and control parameters, as well as the number of subapertures were varied to explore the parameter space. The corrected segment shapes were then passed to an optical TMT model built using the Jet Propulsion Laboratory (JPL) developed Modeling and Analysis for Controlled Optical Systems (MACOS) ray-trace simulator. The generated exit pupil wavefront error maps provided RMS wavefront error and image-plane characteristics like the Normalized Point Source Sensitivity (PSSN). The results have been used to optimize the segment shape correction and wavefront sensor designs as well as provide input to the TMT systems engineering error budgets.
Anatomizing one of the largest saltwater inflows into the Baltic Sea in December 2014
NASA Astrophysics Data System (ADS)
Gräwe, Ulf; Naumann, Michael; Mohrholz, Volker; Burchard, Hans
2015-11-01
In December 2014, an exceptional inflow event into the Baltic Sea was observed, a so-called Major Baltic Inflow (MBI). Such inflow events are important for the deep water ventilation in the Baltic Sea and typically occur every 3-10 years. Based on first observational data sets, this inflow had been ranked as the third largest in 100 years. With the help of a multinested modeling system, reaching from the North Atlantic (8 km resolution) to the Western Baltic Sea (600 m resolution, which is baroclinic eddy resolving), this event is reproduced in detail. The model gave a slightly lower salt transport of 3.8 Gt, compared to the observational estimate of 4 Gt. Moreover, by using passive tracers, including an age tracer, to mark the different inflowing water masses, their paths and timing through the different basins could be reproduced and investigated. The analysis is supported by the recently developed Total Exchange Flow (TEF) framework to quantify the volume transport in different salinity classes. To account for uncertainties in the modeled velocity and tracer fields, a Monte Carlo Analysis (MCA) is applied to correct possible biases and errors. With the help of the MCA, 95% confidence intervals are computed for the transport estimates. Based on the MCA, the "best guess" is 291.0 ± 13.65 km3 for the volume transport and 3.89 ± 0.18 Gt for the total salt transport.
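The TEF bookkeeping behind the transport estimates can be sketched as binning the volume flux through a cross-section into salinity classes and then splitting inflow from outflow; the cross-section data below are synthetic and the class width is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic cross-section cells: velocity (m/s, positive = into the Baltic),
# salinity (g/kg) and cell area (m^2).
ncell = 500
velocity = rng.normal(0.05, 0.2, ncell)
salinity = rng.uniform(7.0, 25.0, ncell)
area = np.full(ncell, 2.0e4)

# Bin the volume flux q = u * A into salinity classes (the Total Exchange Flow idea).
edges = np.arange(7.0, 25.5, 0.5)
q_per_class, _ = np.histogram(salinity, bins=edges, weights=velocity * area)

# Inflow/outflow split in salinity space: saline inflow vs brackish outflow.
inflow = q_per_class[q_per_class > 0].sum()
outflow = q_per_class[q_per_class < 0].sum()
print(f"inflow:  {inflow / 1e3:.1f} x 10^3 m^3/s")
print(f"outflow: {outflow / 1e3:.1f} x 10^3 m^3/s")
```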
Numerical relativity waveform surrogate model for generically precessing binary black hole mergers
NASA Astrophysics Data System (ADS)
Blackman, Jonathan; Field, Scott E.; Scheel, Mark A.; Galley, Chad R.; Ott, Christian D.; Boyle, Michael; Kidder, Lawrence E.; Pfeiffer, Harald P.; Szilágyi, Béla
2017-07-01
A generic, noneccentric binary black hole (BBH) system emits gravitational waves (GWs) that are completely described by seven intrinsic parameters: the black hole spin vectors and the ratio of their masses. Simulating a BBH coalescence by solving Einstein's equations numerically is computationally expensive, requiring days to months of computing resources for a single set of parameter values. Since theoretical predictions of the GWs are often needed for many different source parameters, a fast and accurate model is essential. We present the first surrogate model for GWs from the coalescence of BBHs including all seven dimensions of the intrinsic noneccentric parameter space. The surrogate model, which we call NRSur7dq2, is built from the results of 744 numerical relativity simulations. NRSur7dq2 covers spin magnitudes up to 0.8 and mass ratios up to 2, includes all ℓ≤4 modes, begins about 20 orbits before merger, and can be evaluated in ~50 ms. We find the largest NRSur7dq2 errors to be comparable to the largest errors in the numerical relativity simulations, and more than an order of magnitude smaller than the errors of other waveform models. Our model, and more broadly the methods developed here, will enable studies that were not previously possible when using highly accurate waveforms, such as parameter inference and tests of general relativity with GW observations.
Wang, Zhipeng; Wang, Shujing; Zhu, Yanbo; Xin, Pumin
2017-01-01
Ionospheric delay is one of the largest and most variable sources of error for Ground-Based Augmentation System (GBAS) users because ionospheric activity is unpredictable. Under normal conditions, GBAS eliminates ionospheric delays, but during extreme ionospheric storms, GBAS users and GBAS ground facilities may experience different ionospheric delays, leading to considerable differential errors and threatening the safety of users. Therefore, ionospheric monitoring and assessment are important parts of GBAS integrity monitoring. To study the effects of the ionosphere on the GBAS of Guangdong Province, China, GPS data collected from 65 reference stations were processed using the improved “Simple Truth” algorithm. In addition, the ionospheric characteristics of Guangdong Province were calculated and an ionospheric threat model was established. Finally, we evaluated the influence of the standard deviation and maximum ionospheric gradient on GBAS. The results show that, under normal ionospheric conditions, the vertical protection level of GBAS was increased by 0.8 m for the largest over bound σvig (sigma of vertical ionospheric gradient), and in the case of the maximum ionospheric gradient conditions, the differential correction error may reach 5 m. From an airworthiness perspective, when the satellite is at a low elevation, this interference does not cause airworthiness risks, but when the satellite is at a high elevation, this interference can cause airworthiness risks. PMID:29019953
Wang, Zhipeng; Wang, Shujing; Zhu, Yanbo; Xin, Pumin
2017-10-11
Ionospheric delay is one of the largest and most variable sources of error for Ground-Based Augmentation System (GBAS) users because ionospheric activity is unpredictable. Under normal conditions, GBAS eliminates ionospheric delays, but during extreme ionospheric storms, GBAS users and GBAS ground facilities may experience different ionospheric delays, leading to considerable differential errors and threatening the safety of users. Therefore, ionospheric monitoring and assessment are important parts of GBAS integrity monitoring. To study the effects of the ionosphere on the GBAS of Guangdong Province, China, GPS data collected from 65 reference stations were processed using the improved "Simple Truth" algorithm. In addition, the ionospheric characteristics of Guangdong Province were calculated and an ionospheric threat model was established. Finally, we evaluated the influence of the standard deviation and maximum ionospheric gradient on GBAS. The results show that, under normal ionospheric conditions, the vertical protection level of GBAS was increased by 0.8 m for the largest over bound σvig (sigma of vertical ionospheric gradient), and in the case of the maximum ionospheric gradient conditions, the differential correction error may reach 5 m. From an airworthiness perspective, when the satellite is at a low elevation, this interference does not cause airworthiness risks, but when the satellite is at a high elevation, this interference can cause airworthiness risks.
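For orientation, the sketch below computes a simplified station-pair estimate of the vertical ionospheric gradient sigma: difference the vertical delays at pairs of stations, divide by the baseline, and take the standard deviation of the resulting gradients. This is only an illustrative simplification, not the improved "Simple Truth" algorithm used in the study; the delays and positions are made up.

```python
# Simplified station-pair estimate of the vertical ionospheric gradient sigma (illustrative only).
import numpy as np

def sigma_vig(vert_delays_m, positions_km):
    """Std. dev. of vertical-delay gradients (mm/km) over all station pairs."""
    grads = []
    n = len(vert_delays_m)
    for i in range(n):
        for j in range(i + 1, n):
            baseline = np.linalg.norm(positions_km[i] - positions_km[j])
            if baseline > 0:
                grads.append(1000.0 * abs(vert_delays_m[i] - vert_delays_m[j]) / baseline)
    return np.std(grads)

# Hypothetical vertical ionospheric delays [m] and station positions [km].
delays = np.array([2.10, 2.16, 2.05, 2.22])
pos = np.array([[0.0, 0.0], [35.0, 5.0], [10.0, 60.0], [80.0, 20.0]])
print(f"sigma_vig ~ {sigma_vig(delays, pos):.2f} mm/km")
```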
How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?
Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C
2016-10-01
The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious, and reported caution for medication errors remained higher than baseline 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morcrette, C. J.; Van Weverberg, K.; Ma, H. -Y.
The Clouds Above the United States and Errors at the Surface (CAUSES) project is aimed at gaining a better understanding of the physical processes that are leading to the creation of warm screen-temperature biases over the American Midwest, which are seen in many numerical models. Here in Part 1, a series of 5-day hindcasts, each initialised from re-analyses and performed by 11 different models, are evaluated against screen-temperature observations. All the models have a warm bias over parts of the Midwest. Several ways of quantifying the impact of the initial conditions on the evolution of the simulations are presented, showing that within a day or so all models have produced a warm bias that is representative of their bias after 5 days, and not closely tied to the conditions at the initial time. Although the surface temperature biases sometimes coincide with locations where the re-analyses themselves have a bias, there are many regions in each of the models where biases grow over the course of 5 days or are larger than the biases present in the reanalyses. At the Southern Great Plains (SGP) site, the model biases are shown to not be confined to the surface, but extend several kilometres into the atmosphere. In most of the models there is a strong diurnal cycle in the screen-temperature bias: in some models the biases are largest around midday, while in others they are largest during the night. While the different physical processes that contribute to a given model's screen-temperature error will be discussed in more detail in the companion papers (Parts 2 and 3), the spatial coherence in the phase of the diurnal cycle of the error across wide regions, and the numerous locations across the Midwest where the diurnal cycle of the error is highly correlated with that at SGP, suggest that the detailed evaluations of the role of different processes in contributing to errors at SGP will be representative of errors that are prevalent over a much larger spatial scale.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morcrette, Cyril J.; Van Weverberg, Kwinten; Ma, H
2018-02-16
The Clouds Above the United States and Errors at the Surface (CAUSES) project is aimed at gaining a better understanding of the physical processes that are leading to the creation of warm screen-temperature biases over the American Midwest, which are seen in many numerical models. Here in Part 1, a series of 5-day hindcasts, each initialised from re-analyses and performed by 11 different models, are evaluated against screen-temperature observations. All the models have a warm bias over parts of the Midwest. Several ways of quantifying the impact of the initial conditions on the evolution of the simulations are presented, showing that within a day or so all models have produced a warm bias that is representative of their bias after 5 days, and not closely tied to the conditions at the initial time. Although the surface temperature biases sometimes coincide with locations where the re-analyses themselves have a bias, there are many regions in each of the models where biases grow over the course of 5 days or are larger than the biases present in the reanalyses. At the Southern Great Plains (SGP) site, the model biases are shown to not be confined to the surface, but extend several kilometres into the atmosphere. In most of the models there is a strong diurnal cycle in the screen-temperature bias: in some models the biases are largest around midday, while in others they are largest during the night. While the different physical processes that contribute to a given model's screen-temperature error will be discussed in more detail in the companion papers (Parts 2 and 3), the spatial coherence in the phase of the diurnal cycle of the error across wide regions, and the numerous locations across the Midwest where the diurnal cycle of the error is highly correlated with that at SGP, suggest that the detailed evaluations of the role of different processes in contributing to errors at SGP will be representative of errors that are prevalent over a much larger spatial scale.
NASA Technical Reports Server (NTRS)
Tai, Chang-Kou
1988-01-01
Direct estimation of the absolute dynamic topography from satellite altimetry has been confined to the largest scales (basically the basin scale) owing to the fact that the signal-to-noise ratio is more unfavorable everywhere else. But even for the largest scales, the results are contaminated by the orbit error and geoid uncertainties. Recently a more accurate Earth gravity model (GEM-T1) became available, providing the opportunity to examine the whole question of direct estimation in a more critical light. It is found that our knowledge of the Earth's gravity field has indeed improved a great deal. However, it is not yet possible to claim definitively that our knowledge of the ocean circulation has improved through direct estimation. Yet, the improvement in the gravity model has come to the point that it is no longer possible to attribute the discrepancy at the basin scales between altimetric and hydrographic results mostly to geoid uncertainties. A substantial part of the difference must be due to other factors; i.e., the orbit error, or the uncertainty of the hydrographically derived dynamic topography.
NASA Astrophysics Data System (ADS)
Feng, Xiao-Li; Li, Yu-Xiao; Gu, Jian-Zhong; Zhuo, Yi-Zhong
2009-10-01
The relaxation property of both the Eigen model and the Crow-Kimura model with a single-peak fitness landscape is studied from a phase-transition point of view. We first analyze the eigenvalue spectra of the replication-mutation matrices. For sufficiently long sequences, the near-crossing point between the largest and second-largest eigenvalues locates the error threshold at which critical slowing-down behavior appears. We calculate the critical exponent in the limit of infinite sequence lengths and compare it with the result from numerical curve fittings at sufficiently long sequences. We find that for both models the relaxation time diverges with exponent 1 at the error (mutation) threshold point. Results obtained from both methods agree quite well. From the unbounded correlation length, the first-order nature of the phase transition is further confirmed. Finally, using linear stability theory, we show that the two model systems are stable for all ranges of mutation rate. The Eigen model is asymptotically stable in terms of mutant classes, and the Crow-Kimura model is completely stable.
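The eigenvalue analysis sketched above can be reproduced for a tiny single-peak Eigen landscape: build the replication-mutation matrix over all binary sequences of a short length, scan the per-site mutation rate, and locate where the two largest eigenvalues nearly cross. The sequence length, fitness values and rate grid below are illustrative choices only, not the paper's parameters.

```python
# Toy single-peak Eigen model: locate the near-crossing of the two largest eigenvalues.
import numpy as np

L = 8                                   # illustrative sequence length
n = 2 ** L
seqs = np.arange(n)
# Hamming distances between all binary sequences of length L
ham = np.array([[bin(a ^ b).count("1") for b in seqs] for a in seqs])

fitness = np.ones(n)
fitness[0] = 10.0                       # single-peak landscape: master sequence is 10x fitter

def top_two_eigs(mu):
    q = (mu ** ham) * ((1.0 - mu) ** (L - ham))   # mutation matrix Q[i, j] = P(j -> i)
    w = q * fitness                               # replication-mutation matrix W = Q diag(f)
    ev = np.sort(np.linalg.eigvals(w).real)
    return ev[-1], ev[-2]

mus = np.linspace(0.01, 0.45, 80)
gaps = [abs(a - b) for a, b in (top_two_eigs(m) for m in mus)]
print(f"smallest eigenvalue gap near mu ~ {mus[int(np.argmin(gaps))]:.3f}")
```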
Influence of Tooth Spacing Error on Gears With and Without Profile Modifications
NASA Technical Reports Server (NTRS)
Padmasolala, Giri; Lin, Hsiang H.; Oswald, Fred B.
2000-01-01
A computer simulation was conducted to investigate the effectiveness of profile modification for reducing dynamic loads in gears with different tooth spacing errors. The simulation examined varying amplitudes of spacing error and differences in the span of teeth over which the error occurs. The modifications considered included both linear and parabolic tip relief. The analysis considered spacing error that varies around most of the gear circumference (similar to a typical sinusoidal error pattern) as well as a shorter span of spacing errors that occurs on only a few teeth. The dynamic analysis was performed using a revised version of a NASA gear dynamics code, modified to add tooth spacing errors to the analysis. Results obtained from the investigation show that linear tip relief is more effective in reducing dynamic loads on gears with small spacing errors, but parabolic tip relief becomes more effective as the amplitude of spacing error increases. In addition, the parabolic modification is more effective for the more severe error case where the error is spread over a longer span of teeth. The findings of this study can be used to design robust tooth profile modification for improving dynamic performance of gear sets with different tooth spacing errors.
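For reference, the sketch below evaluates the two tip-relief shapes compared in the study as simple functions of roll angle: linear relief grows proportionally from the start of relief to the tooth tip, while parabolic relief grows with the square of that fraction. Magnitudes and start angles are placeholders, not the study's gear data.

```python
# Linear vs. parabolic tip relief as functions of roll angle (placeholder magnitudes).
import numpy as np

def tip_relief(roll_deg, start_deg, tip_deg, max_relief_um, shape="linear"):
    """Relief (micrometres) applied between the start-of-relief and tip roll angles."""
    roll = np.asarray(roll_deg, dtype=float)
    frac = np.clip((roll - start_deg) / (tip_deg - start_deg), 0.0, 1.0)
    if shape == "parabolic":
        frac = frac ** 2
    return max_relief_um * frac

roll = np.linspace(10.0, 35.0, 6)          # hypothetical roll angles [deg]
print(tip_relief(roll, start_deg=25.0, tip_deg=35.0, max_relief_um=15.0))
print(tip_relief(roll, start_deg=25.0, tip_deg=35.0, max_relief_um=15.0, shape="parabolic"))
```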
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunter, J. L.; Sutton, T. M.
2013-07-01
In Monte Carlo iterated-fission-source calculations, relative uncertainties on local tallies tend to be larger in lower-power regions and smaller in higher-power regions. Reducing the largest uncertainties to an acceptable level simply by running a larger number of neutron histories is often prohibitively expensive. The uniform fission site method has been developed to yield a more spatially-uniform distribution of relative uncertainties. This is accomplished by biasing the density of fission neutron source sites while not biasing the solution. The method is integrated into the source iteration process, and does not require any auxiliary forward or adjoint calculations. For a given amount of computational effort, the use of the method results in a reduction of the largest uncertainties relative to the standard algorithm. Two variants of the method have been implemented and tested. Both have been shown to be effective. (authors)
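A toy illustration of the biasing idea: sample fission sites uniformly across regions instead of in proportion to their power, and attach a statistical weight that undoes the bias so tallies remain unbiased. The region powers and counts below are made up and do not represent the implemented variants.

```python
# Toy uniform-fission-site biasing: flatten the spatial site density, correct with weights.
import numpy as np

rng = np.random.default_rng(7)
region_power = np.array([0.50, 0.30, 0.15, 0.05])   # hypothetical normalized region powers
n_sites = 100_000

# Analog sampling: sites proportional to power.  Biased sampling: uniform over regions.
analog_pdf = region_power
biased_pdf = np.full_like(region_power, 1.0 / len(region_power))

regions = rng.choice(len(region_power), size=n_sites, p=biased_pdf)
weights = analog_pdf[regions] / biased_pdf[regions]  # weight = analog density / sampled density

# The weighted tally per region reproduces the analog power shape (unbiased),
# while the *number* of sites per region is uniform (more histories in low-power regions).
tally = np.bincount(regions, weights=weights, minlength=len(region_power)) / n_sites
print("sites per region:", np.bincount(regions, minlength=len(region_power)))
print("weighted tally  :", np.round(tally, 3))
```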
A new systematic calibration method of ring laser gyroscope inertial navigation system
NASA Astrophysics Data System (ADS)
Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Xiong, Zhenyu; Long, Xingwu
2016-10-01
The inertial navigation system (INS) has been the core component of both military and civil navigation systems. Before the INS is put into application, it must be calibrated in the laboratory in order to compensate for repeatability errors caused by manufacturing. The discrete calibration method cannot fulfill the requirements of high-accuracy calibration of a mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the theory of error excitation and separation in detail and presents a new systematic calibration method for the ring laser gyroscope inertial navigation system. Error models and equations of the calibrated Inertial Measurement Unit are given. Proper rotation arrangement orders are then described in order to establish the linear relationships between the change of velocity errors and the calibrated parameter errors. Experiments were set up to compare the systematic errors calculated from the filtering calibration result with those obtained from the discrete calibration result. The largest position error and velocity error of the filtering calibration result are only 0.18 miles and 0.26 m/s, compared with 2 miles and 1.46 m/s for the discrete calibration result. These results validate the new systematic calibration method and prove its importance for the optimal design and accuracy improvement of calibration of the mechanically dithered ring laser gyroscope inertial navigation system.
Crainiceanu, Ciprian M.; Caffo, Brian S.; Di, Chong-Zhi; Punjabi, Naresh M.
2009-01-01
We introduce methods for signal and associated variability estimation based on hierarchical nonparametric smoothing with application to the Sleep Heart Health Study (SHHS). SHHS is the largest electroencephalographic (EEG) collection of sleep-related data, which contains, at each visit, two quasi-continuous EEG signals for each subject. The signal features extracted from EEG data are then used in second level analyses to investigate the relation between health, behavioral, or biometric outcomes and sleep. Using subject specific signals estimated with known variability in a second level regression becomes a nonstandard measurement error problem. We propose and implement methods that take into account cross-sectional and longitudinal measurement error. The research presented here forms the basis for EEG signal processing for the SHHS. PMID:20057925
Dispensing error rate after implementation of an automated pharmacy carousel system.
Oswald, Scott; Caldwell, Richard
2007-07-01
A study was conducted to determine filling and dispensing error rates before and after the implementation of an automated pharmacy carousel system (APCS). The study was conducted in a 613-bed acute and tertiary care university hospital. Before the implementation of the APCS, filling and dispensing error rates were recorded during October through November 2004 and January 2005. Postimplementation data were collected during May through June 2006. Errors were recorded in three areas of pharmacy operations: first-dose or missing medication fill, automated dispensing cabinet fill, and interdepartmental request fill. A filling error was defined as an error caught by a pharmacist during the verification step. A dispensing error was defined as an error caught by a pharmacist observer after verification by the pharmacist. Before implementation of the APCS, 422 first-dose or missing medication orders were observed between October 2004 and January 2005. Independent data collected in December 2005, approximately six weeks after the introduction of the APCS, found that filling and dispensing error rates had increased. The filling rate for automated dispensing cabinets was associated with the largest decrease in errors. Filling and dispensing error rates had decreased by December 2005. In terms of interdepartmental request fill, no dispensing errors were noted in 123 clinic orders dispensed before the implementation of the APCS. One dispensing error out of 85 clinic orders was identified after implementation of the APCS. The implementation of an APCS at a university hospital decreased medication filling errors related to automated cabinets only and did not affect other filling and dispensing errors.
NASA Astrophysics Data System (ADS)
Cha, Min Kyoung; Ko, Hyun Soo; Jung, Woo Young; Ryu, Jae Kwang; Choe, Bo-Young
2015-08-01
The accuracy of registration between positron emission tomography (PET) and computed tomography (CT) images is one of the important factors for reliable diagnosis in PET/CT examinations. Although quality control (QC) for checking alignment of PET and CT images should be performed periodically, the procedures have not been fully established. The aim of this study is to determine optimal quality control (QC) procedures that can be performed at the user level to ensure the accuracy of PET/CT registration. Two phantoms were used to carry out this study: the American College of Radiology (ACR)-approved PET phantom and the National Electrical Manufacturers Association (NEMA) International Electrotechnical Commission (IEC) body phantom, containing fillable spheres. All PET/CT images were acquired on a Biograph TruePoint 40 PET/CT scanner using routine protocols. To measure registration error, the spatial coordinates of the estimated centers of the target slice (spheres) were calculated independently for the PET and the CT images in two ways. We compared the images from the ACR-approved PET phantom to those from the NEMA IEC body phantom. Also, we measured the total time required from phantom preparation to image analysis. The first analysis method showed a total difference of 0.636 ± 0.11 mm for the largest hot sphere and 0.198 ± 0.09 mm for the largest cold sphere in the case of the ACR-approved PET phantom. In the NEMA IEC body phantom, the total difference was 3.720 ± 0.97 mm for the largest hot sphere and 4.800 ± 0.85 mm for the largest cold sphere. The second analysis method showed that the differences in the x location at the line profile of the lesion on PET and CT were (1.33, 1.33) mm for a bone lesion, (-1.26, -1.33) mm for an air lesion and (-1.67, -1.60) mm for a hot sphere lesion for the ACR-approved PET phantom. For the NEMA IEC body phantom, the differences in the x location at the line profile of the lesion on PET and CT were (-1.33, 4.00) mm for the air lesion and (1.33, -1.29) mm for a hot sphere lesion. These registration errors from this study were reasonable compared to the errors reported in previous studies. Meanwhile, the total time required from phantom preparation to image analysis was 67.72 ± 4.50 min for the ACR-approved PET phantom and 96.78 ± 8.50 min for the NEMA IEC body phantom. When the registration errors and the lead times are considered, the method using the ACR-approved PET phantom was more practical and useful than the method using the NEMA IEC body phantom.
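One of the analysis methods described above reduces to comparing the estimated sphere centers on the PET and CT volumes. A minimal sketch of that comparison is below, using synthetic arrays and a simple center-of-mass estimate rather than the clinical phantom images.

```python
# Minimal registration-error check: distance between sphere centers estimated on PET and CT.
import numpy as np
from scipy.ndimage import center_of_mass

def make_sphere(shape, center_vox, radius_vox):
    z, y, x = np.indices(shape)
    return ((z - center_vox[0])**2 + (y - center_vox[1])**2 + (x - center_vox[2])**2
            <= radius_vox**2).astype(float)

voxel_mm = np.array([2.0, 2.0, 2.0])                      # hypothetical isotropic voxel size
ct = make_sphere((64, 64, 64), (32.0, 32.0, 32.0), 8)
pet = make_sphere((64, 64, 64), (32.0, 32.4, 31.7), 8)    # synthetic mis-registration

offset_vox = np.array(center_of_mass(pet)) - np.array(center_of_mass(ct))
print(f"registration error ~ {np.linalg.norm(offset_vox * voxel_mm):.2f} mm")
```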
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laaksomaa, Marko, E-mail: marko.laaksomaa@pshp.fi; Kapanen, Mika; Department of Medical Physics, Tampere University Hospital
We evaluated adequate setup margins for the radiotherapy (RT) of pelvic tumors based on overall position errors of bony landmarks. We also estimated the difference in setup accuracy between the male and female patients. Finally, we compared the patient rotation for 2 immobilization devices. The study cohort included 64 consecutive male and 64 consecutive female patients. Altogether, 1794 orthogonal setup images were analyzed. Observer-related deviation in image matching and the effect of patient rotation were explicitly determined. Overall systematic and random errors were calculated in 3 orthogonal directions. Anisotropic setup margins were evaluated based on residual errors after weekly image guidance. The van Herk formula was used to calculate the margins. Overall, 100 patients were immobilized with a house-made device. The patient rotation was compared against 28 patients immobilized with CIVCO's Kneefix and Feetfix. We found that the usually applied isotropic setup margin of 8 mm covered all the uncertainties related to patient setup for most RT treatments of the pelvis. However, margins of even 10.3 mm were needed for the female patients with very large pelvic target volumes centered either in the symphysis or in the sacrum containing both of these structures. This was because the effect of rotation (p ≤ 0.02) and the observer variation in image matching (p ≤ 0.04) were significantly larger for the female patients than for the male patients. Even with daily image guidance, the required margins remained larger for the women. Patient rotations were largest about the lateral axes. The difference between the required margins was only 1 mm for the 2 immobilization devices. The largest component of overall systematic position error came from patient rotation. This emphasizes the need for rotation correction. Overall, larger position errors and setup margins were observed for the female patients with pelvic cancer than for the male patients.
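The margin recipe referenced above is the widely used van Herk formula, margin = 2.5 Σ + 0.7 σ per axis, where Σ is the overall systematic error and σ the random error. A minimal sketch with placeholder error values (not the study's data) is shown below.

```python
# Van Herk setup margin: margin = 2.5 * Sigma (systematic SD) + 0.7 * sigma (random SD), per axis.
def van_herk_margin(systematic_sd_mm, random_sd_mm):
    return 2.5 * systematic_sd_mm + 0.7 * random_sd_mm

# Hypothetical per-axis residual errors after weekly imaging [mm]; placeholders only.
for axis, (Sigma, sigma) in {"LR": (1.8, 2.0), "SI": (2.2, 2.4), "AP": (3.0, 2.9)}.items():
    print(f"{axis}: {van_herk_margin(Sigma, sigma):.1f} mm")
```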
Wei, Wenjuan; Xiong, Jianyin; Zhang, Yinping
2013-01-01
Mass transfer models are useful in predicting the emissions of volatile organic compounds (VOCs) and formaldehyde from building materials in indoor environments. They are also useful for human exposure evaluation and in sustainable building design. The measurement errors in the emission characteristic parameters in these mass transfer models, i.e., the initial emittable concentration (C0), the diffusion coefficient (D), and the partition coefficient (K), can result in errors in predicting indoor VOC and formaldehyde concentrations. These errors have not yet been quantitatively well analyzed in the literature. This paper addresses this by using modelling to assess these errors for some typical building conditions. The error in C0, as measured in environmental chambers and applied to a reference living room in Beijing, has the largest influence on the model prediction error in indoor VOC and formaldehyde concentration, while the error in K has the least effect. A correlation between the errors in D, K, and C0 and the error in the indoor VOC and formaldehyde concentration prediction is then derived for engineering applications. In addition, the influence of temperature on the model prediction of emissions is investigated. It shows the impact of temperature fluctuations on the prediction errors in indoor VOC and formaldehyde concentrations to be less than 7% at 23±0.5°C and less than 30% at 23±2°C. PMID:24312497
No Time for Complacency: Teen Births in California.
ERIC Educational Resources Information Center
Constantine, Norman A.; Nevarez, Carmen R.
California's recent investment in teen pregnancy prevention has contributed to the largest decline in teen birth rates and the second largest percentage reduction of all 50 states. California's annual teen birth rate is now similar to the national rate. This occurred while the highest teen birth rate group, Latinas, increased as a proportion of…
Lessons From the Largest Historic Floods Documented by the U.S. Geological Survey
NASA Astrophysics Data System (ADS)
Costa, J. E.
2003-12-01
A recent controversy over the flood risk downstream from a USGS streamgaging station in southern California that recorded a large debris flow led to the decision to closely examine a sample of the largest floods documented in the US. Twenty-nine floods that define the envelope curve of the largest rainfall-runoff floods were examined in detail, including field visits. These floods have a profound impact on local, regional, and national interpretations of potential peak discharges and flood risk. These 29 floods occurred throughout the US from the northern Chesapeake Bay in Maryland to Kauai, Hawaii, and over the period 1935-1978. Methods used to compute peak discharges were slope-area (21/29), culvert computations (2/29), measurements lost or not available for study (2/29), bridge contraction, culvert flow, and flow over road (1/29), rating curve extension (1/29), current meter measurement (1/29), and rating curve and current meter measurement (1/29). While field methods and tools have improved significantly over the last 70 years (e.g. total stations, GPS, GIS, hydroacoustics, digital plotters and computer programs like SAC and CAP), the primary methods of hydraulic analysis for indirect measurements of outstanding floods have not changed: today flow is still assumed to be 1-D and gradually varied. Unsteady or multi-dimensional flow models are rarely if ever used to determine peak discharges. Problems identified in this sample of 29 floods include debris flows misidentified as water floods, small drainage areas determined from small-scale maps and mislocated sites, high-water marks set by transient hydraulic phenomena, possibility of disconnected flow surfaces, scour assumptions in sand channels, poor site selection, incorrect approach angle for road overflow, and missing or lost records. Each published flood magnitude was checked by applying modern computer models with original field data, or by re-calculating computations. Four of 29 floods in this sample were found to have errors resulting in a change of the peak discharge of more than 10%.
van de Plas, Afke; Slikkerveer, Mariëlle; Hoen, Saskia; Schrijnemakers, Rick; Driessen, Johanna; de Vries, Frank; van den Bemt, Patricia
2017-01-01
In this controlled before-after study the effect of improvements, derived from Lean Six Sigma strategy, on parenteral medication administration errors and the potential risk of harm was determined. During baseline measurement, on control versus intervention ward, at least one administration error occurred in 14 (74%) and 6 (46%) administrations with potential risk of harm in 6 (32%) and 1 (8%) administrations. Most administration errors with high potential risk of harm occurred in bolus injections: 8 (57%) versus 2 (67%) bolus injections were injected too fast with a potential risk of harm in 6 (43%) and 1 (33%) bolus injections on control and intervention ward. Implemented improvement strategies, based on major causes of too fast administration of bolus injections, were: Substitution of bolus injections by infusions, education, availability of administration information and drug round tabards. Post intervention, on the control ward in 76 (76%) administrations at least one error was made (RR 1.03; CI95:0.77-1.38), with a potential risk of harm in 14 (14%) administrations (RR 0.45; CI95:0.20-1.02). In 40 (68%) administrations on the intervention ward at least one error occurred (RR 1.47; CI95:0.80-2.71) but no administrations were associated with a potential risk of harm. A shift in wrong duration administration errors from bolus injections to infusions, with a reduction of potential risk of harm, seems to have occurred on the intervention ward. Although data are insufficient to prove an effect, Lean Six Sigma was experienced as a suitable strategy to select tailored improvements. Further studies are required to prove the effect of the strategy on parenteral medication administration errors.
van de Plas, Afke; Slikkerveer, Mariëlle; Hoen, Saskia; Schrijnemakers, Rick; Driessen, Johanna; de Vries, Frank; van den Bemt, Patricia
2017-01-01
In this controlled before-after study the effect of improvements, derived from Lean Six Sigma strategy, on parenteral medication administration errors and the potential risk of harm was determined. During baseline measurement, on control versus intervention ward, at least one administration error occurred in 14 (74%) and 6 (46%) administrations with potential risk of harm in 6 (32%) and 1 (8%) administrations. Most administration errors with high potential risk of harm occurred in bolus injections: 8 (57%) versus 2 (67%) bolus injections were injected too fast with a potential risk of harm in 6 (43%) and 1 (33%) bolus injections on control and intervention ward. Implemented improvement strategies, based on major causes of too fast administration of bolus injections, were: Substitution of bolus injections by infusions, education, availability of administration information and drug round tabards. Post intervention, on the control ward in 76 (76%) administrations at least one error was made (RR 1.03; CI95:0.77-1.38), with a potential risk of harm in 14 (14%) administrations (RR 0.45; CI95:0.20-1.02). In 40 (68%) administrations on the intervention ward at least one error occurred (RR 1.47; CI95:0.80-2.71) but no administrations were associated with a potential risk of harm. A shift in wrong duration administration errors from bolus injections to infusions, with a reduction of potential risk of harm, seems to have occurred on the intervention ward. Although data are insufficient to prove an effect, Lean Six Sigma was experienced as a suitable strategy to select tailored improvements. Further studies are required to prove the effect of the strategy on parenteral medication administration errors. PMID:28674608
Updating expected action outcome in the medial frontal cortex involves an evaluation of error type.
Maier, Martin E; Steinhauser, Marco
2013-10-02
Forming expectations about the outcome of an action is an important prerequisite for action control and reinforcement learning in the human brain. The medial frontal cortex (MFC) has been shown to play an important role in the representation of outcome expectations, particularly when an update of expected outcome becomes necessary because an error is detected. However, error detection alone is not always sufficient to compute expected outcome because errors can occur in various ways and different types of errors may be associated with different outcomes. In the present study, we therefore investigate whether updating expected outcome in the human MFC is based on an evaluation of error type. Our approach was to consider an electrophysiological correlate of MFC activity on errors, the error-related negativity (Ne/ERN), in a task in which two types of errors could occur. Because the two error types were associated with different amounts of monetary loss, updating expected outcomes on error trials required an evaluation of error type. Our data revealed a pattern of Ne/ERN amplitudes that closely mirrored the amount of monetary loss associated with each error type, suggesting that outcome expectations are updated based on an evaluation of error type. We propose that this is achieved by a proactive evaluation process that anticipates error types by continuously monitoring error sources or by dynamically representing possible response-outcome relations.
Shultz, R; Kedgley, A E; Jenkyn, T R
2011-05-01
The trajectories of skin-mounted markers tracked with optical motion capture are assumed to be an adequate representation of the underlying bone motions. However, it is well known that soft tissue artifact (STA) exists between marker and bone. This study quantifies the STA associated with the hindfoot and midfoot marker clusters of a multi-segment foot model. To quantify STA of the hindfoot and midfoot marker clusters with respect to the calcaneus and navicular respectively, fluoroscopic images were collected on 27 subjects during four quasi-static positions: (1) quiet standing (non-weight bearing), (2) at heel strike (weight-bearing), (3) at midstance (weight-bearing) and (4) at toe-off (weight-bearing). The translation and rotation components of STA were calculated in the sagittal plane. Translational STA at the calcaneus varied from 5.9±7.3 mm at heel-strike to 12.1±0.3 mm at toe-off. For the navicular the translational STA ranged from 7.6±7.6 mm at heel strike to 16.4±16.7 mm at toe-off. Rotational STA was relatively smaller for both bones at all foot positions. For the calcaneus it varied from 0.1±2.2° at heel-strike to 0.2±0.6° at toe-off. For the navicular, the rotational STA ranged from 0.6±0.9° at heel-strike to 0.7±0.7° at toe-off. The largest translational STA found in this study (16 mm for the navicular) was smaller than those reported in the literature for the thigh and the lower leg, but was larger than the STA of individual spherical markers affixed to the foot. The largest errors occurred at the toe-off position for all subjects for both the hindfoot and midfoot clusters. Future studies are recommended to quantify true three-dimensional STA of the entire foot during gait. Copyright © 2011. Published by Elsevier B.V.
Analyzing communication errors in an air medical transport service.
Dalto, Joseph D; Weir, Charlene; Thomas, Frank
2013-01-01
Poor communication can result in adverse events. Presently, no standards exist for classifying and analyzing air medical communication errors. This study sought to determine the frequency and types of communication errors reported within an air medical quality and safety assurance reporting system. Of 825 quality assurance reports submitted in 2009, 278 were randomly selected and analyzed for communication errors. Each communication error was classified and mapped to Clark's communication level hierarchy (ie, levels 1-4). Descriptive statistics were performed, and comparisons were evaluated using chi-square analysis. Sixty-four communication errors were identified in 58 reports (21% of 278). Of the 64 identified communication errors, only 18 (28%) were classified by the staff to be communication errors. Communication errors occurred most often at level 1 (n = 42/64, 66%) followed by level 4 (21/64, 33%). Level 2 and 3 communication failures were rare (<1%). Communication errors were found in a fifth of quality and safety assurance reports. The reporting staff identified less than a third of these errors. Nearly all communication errors (99%) occurred at either the lowest level of communication (level 1, 66%) or the highest level (level 4, 33%). An air medical communication ontology is necessary to improve the recognition and analysis of communication errors. Copyright © 2013 Air Medical Journal Associates. Published by Elsevier Inc. All rights reserved.
On the sensitivity of TG-119 and IROC credentialing to TPS commissioning errors.
McVicker, Drew; Yin, Fang-Fang; Adamson, Justus D
2016-01-08
We investigate the sensitivity of IMRT commissioning using the TG-119 C-shape phantom and credentialing with the IROC head and neck phantom to treatment planning system commissioning errors. We introduced errors into the various aspects of the commissioning process for a 6X photon energy modeled using the analytical anisotropic algorithm within a commercial treatment planning system. Errors were implemented into the various components of the dose calculation algorithm including primary photons, secondary photons, electron contamination, and MLC parameters. For each error we evaluated the probability that it could be committed unknowingly during the dose algorithm commissioning stage, and the probability of it being identified during the verification stage. The clinical impact of each commissioning error was evaluated using representative IMRT plans including low and intermediate risk prostate, head and neck, mesothelioma, and scalp; the sensitivity of the TG-119 and IROC phantoms was evaluated by comparing dosimetric changes to the dose planes where film measurements occur and change in point doses where dosimeter measurements occur. No commissioning errors were found to have both a low probability of detection and high clinical severity. When errors do occur, the IROC credentialing and TG 119 commissioning criteria are generally effective at detecting them; however, for the IROC phantom, OAR point-dose measurements are the most sensitive despite being currently excluded from IROC analysis. Point-dose measurements with an absolute dose constraint were the most effective at detecting errors, while film analysis using a gamma comparison and the IROC film distance to agreement criteria were less effective at detecting the specific commissioning errors implemented here.
Effect of inventory method on niche models: random versus systematic error
Heather E. Lintz; Andrew N. Gray; Bruce McCune
2013-01-01
Data from large-scale biological inventories are essential for understanding and managing Earth's ecosystems. The Forest Inventory and Analysis Program (FIA) of the U.S. Forest Service is the largest biological inventory in North America; however, the FIA inventory recently changed from an amalgam of different approaches to a nationally-standardized approach in...
[Diagnostic Errors in Medicine].
Buser, Claudia; Bankova, Andriyana
2015-12-09
The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors.
Reducing patient identification errors related to glucose point-of-care testing.
Alreja, Gaurav; Setia, Namrata; Nichols, James; Pantanowitz, Liron
2011-01-01
Patient identification (ID) errors in point-of-care testing (POCT) can cause test results to be transferred to the wrong patient's chart or prevent results from being transmitted and reported. Despite the implementation of patient barcoding and ongoing operator training at our institution, patient ID errors still occur with glucose POCT. The aim of this study was to develop a solution to reduce identification errors with POCT. Glucose POCT was performed by approximately 2,400 clinical operators throughout our health system. Patients are identified by scanning in wristband barcodes or by manual data entry using portable glucose meters. Meters are docked to upload data to a database server which then transmits data to any medical record matching the financial number of the test result. With a new model, meters connect to an interface manager where the patient ID (a nine-digit account number) is checked against patient registration data from admission, discharge, and transfer (ADT) feeds and only matched results are transferred to the patient's electronic medical record. With the new process, the patient ID is checked prior to testing, and testing is prevented until ID errors are resolved. When averaged over a period of a month, ID errors were reduced to 3 errors/month (0.015%) in comparison with 61.5 errors/month (0.319%) before implementing the new meters. Patient ID errors may occur with glucose POCT despite patient barcoding. The verification of patient identification should ideally take place at the bedside before testing occurs so that the errors can be addressed in real time. The introduction of an ADT feed directly to glucose meters reduced patient ID errors in POCT.
Reducing patient identification errors related to glucose point-of-care testing
Alreja, Gaurav; Setia, Namrata; Nichols, James; Pantanowitz, Liron
2011-01-01
Background: Patient identification (ID) errors in point-of-care testing (POCT) can cause test results to be transferred to the wrong patient's chart or prevent results from being transmitted and reported. Despite the implementation of patient barcoding and ongoing operator training at our institution, patient ID errors still occur with glucose POCT. The aim of this study was to develop a solution to reduce identification errors with POCT. Materials and Methods: Glucose POCT was performed by approximately 2,400 clinical operators throughout our health system. Patients are identified by scanning in wristband barcodes or by manual data entry using portable glucose meters. Meters are docked to upload data to a database server which then transmits data to any medical record matching the financial number of the test result. With a new model, meters connect to an interface manager where the patient ID (a nine-digit account number) is checked against patient registration data from admission, discharge, and transfer (ADT) feeds and only matched results are transferred to the patient's electronic medical record. With the new process, the patient ID is checked prior to testing, and testing is prevented until ID errors are resolved. Results: When averaged over a period of a month, ID errors were reduced to 3 errors/month (0.015%) in comparison with 61.5 errors/month (0.319%) before implementing the new meters. Conclusion: Patient ID errors may occur with glucose POCT despite patient barcoding. The verification of patient identification should ideally take place at the bedside before testing occurs so that the errors can be addressed in real time. The introduction of an ADT feed directly to glucose meters reduced patient ID errors in POCT. PMID:21633490
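A schematic version of the interface-manager check described in the two records above: a glucose result is posted only if its nine-digit account number matches an active entry from the ADT feed; otherwise the operator must resolve the mismatch before testing continues. The field names and the in-memory registry below are hypothetical, not the vendor's actual data model.

```python
# Schematic ADT-based patient ID check before posting a POCT glucose result (hypothetical fields).
adt_registry = {
    "123456789": {"name": "DOE, JANE", "status": "admitted"},
    "987654321": {"name": "ROE, RICHARD", "status": "admitted"},
}

def post_glucose_result(account_number: str, glucose_mg_dl: float) -> str:
    patient = adt_registry.get(account_number)
    if patient is None or patient["status"] != "admitted":
        # Block the result and force the operator to resolve the ID error at the bedside.
        return f"REJECTED: account {account_number} not found in ADT feed"
    return f"POSTED: {glucose_mg_dl} mg/dL to chart of {patient['name']}"

print(post_glucose_result("123456789", 104))
print(post_glucose_result("123456780", 98))   # transposed digit -> blocked, not misfiled
```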
Southard, Rodney E.; Veilleux, Andrea G.
2014-01-01
Regression analysis techniques were used to develop a set of equations for rural ungaged stream sites for estimating discharges with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities, which are equivalent to annual flood-frequency recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively. Basin and climatic characteristics were computed using geographic information software and digital geospatial data. A total of 35 characteristics were computed for use in preliminary statewide and regional regression analyses. Annual exceedance-probability discharge estimates were computed for 278 streamgages by using the expected moments algorithm to fit a log-Pearson Type III distribution to the logarithms of annual peak discharges for each streamgage using annual peak-discharge data from water year 1844 to 2012. Low-outlier and historic information were incorporated into the annual exceedance-probability analyses, and a generalized multiple Grubbs-Beck test was used to detect potentially influential low floods. Annual peak flows less than a minimum recordable discharge at a streamgage were incorporated into the at-site station analyses. An updated regional skew coefficient was determined for the State of Missouri using Bayesian weighted least-squares/generalized least squares regression analyses. At-site skew estimates for 108 long-term streamgages with 30 or more years of record and the 35 basin characteristics defined for this study were used to estimate the regional variability in skew. However, a constant generalized-skew value of -0.30 and a mean square error of 0.14 were determined in this study. Previous flood studies indicated that the distinct physical features of the three physiographic provinces have a pronounced effect on the magnitude of flood peaks. Trends in the magnitudes of the residuals from preliminary statewide regression analyses from previous studies confirmed that regional analyses in this study were similar and related to three primary physiographic provinces. The final regional regression analyses resulted in three sets of equations. For Regions 1 and 2, the basin characteristics of drainage area and basin shape factor were statistically significant. For Region 3, because of the small amount of data from streamgages, only drainage area was statistically significant. Average standard errors of prediction ranged from 28.7 to 38.4 percent for flood region 1, 24.1 to 43.5 percent for flood region 2, and 25.8 to 30.5 percent for region 3. The regional regression equations are only applicable to stream sites in Missouri with flows not significantly affected by regulation, channelization, backwater, diversion, or urbanization. Basins with about 5 percent or less impervious area were considered to be rural. Applicability of the equations is limited to the basin characteristic values that range from 0.11 to 8,212.38 square miles (mi2) and basin shape from 2.25 to 26.59 for Region 1, 0.17 to 4,008.92 mi2 and basin shape 2.04 to 26.89 for Region 2, and 2.12 to 2,177.58 mi2 for Region 3. Annual peak data from streamgages were used to qualitatively assess the largest floods recorded at streamgages in Missouri since the 1915 water year. Based on existing streamgage data, the 1983 flood event was the largest flood event on record since 1915. The next five largest flood events, in descending order, took place in 1993, 1973, 2008, 1994 and 1915. Since 1915, five of six of the largest floods on record occurred from 1973 to 2012.
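The at-site step described above, fitting a log-Pearson Type III distribution to the logarithms of the annual peaks, can be sketched with a simple method-of-moments fit. The peak-flow series below is synthetic, and the expected moments algorithm, low-outlier screening, and regional skew weighting used in the study are not reproduced.

```python
# Method-of-moments log-Pearson Type III fit to synthetic annual peaks (no EMA, no regional skew).
import numpy as np
from scipy.stats import pearson3, skew

rng = np.random.default_rng(1)
peaks_cfs = np.exp(rng.normal(9.0, 0.6, size=60))        # synthetic 60-year annual peak series

logq = np.log10(peaks_cfs)
mean, std, g = logq.mean(), logq.std(ddof=1), skew(logq, bias=False)

for aep in [0.5, 0.1, 0.04, 0.01, 0.002]:                # annual exceedance probabilities
    k = pearson3.ppf(1.0 - aep, g)                       # standardized frequency factor
    q = 10 ** (mean + k * std)
    print(f"{aep*100:>5.1f}%-AEP flood ~ {q:,.0f} cfs")
```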
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hashii, Haruko, E-mail: haruko@pmrc.tsukuba.ac.jp; Hashimoto, Takayuki; Okawa, Ayako
2013-03-01
Purpose: Radiation therapy for cancer may be required for patients with implantable cardiac devices. However, the influence of secondary neutrons or scattered irradiation from high-energy photons (≥10 MV) on implantable cardioverter-defibrillators (ICDs) is unclear. This study was performed to examine this issue in 2 ICD models. Methods and Materials: ICDs were positioned around a water phantom under conditions simulating clinical radiation therapy. The ICDs were not irradiated directly. A control ICD was positioned 140 cm from the irradiation isocenter. Fractional irradiation was performed with 18-MV and 10-MV photon beams to give cumulative in-field doses of 600 Gy and 1600 Gy, respectively. Errors were checked after each fraction. Soft errors were defined as severe (change to safety back-up mode), moderate (memory interference, no changes in device parameters), and minor (slight memory change, undetectable by computer). Results: Hard errors were not observed. For the older ICD model, the incidences of severe, moderate, and minor soft errors at 18 MV were 0.75, 0.5, and 0.83/50 Gy at the isocenter. The corresponding data for 10 MV were 0.094, 0.063, and 0/50 Gy. For the newer ICD model at 18 MV, these data were 0.083, 2.3, and 5.8/50 Gy. Moderate and minor errors occurred at 18 MV in control ICDs placed 140 cm from the isocenter. The error incidences were 0, 1, and 0/600 Gy at the isocenter for the newer model, and 0, 1, and 6/600 Gy for the older model. At 10 MV, no errors occurred in control ICDs. Conclusions: ICD errors occurred more frequently at 18 MV irradiation, which suggests that the errors were mainly caused by secondary neutrons. Soft errors of ICDs were observed with high-energy photon beams, but most were not critical in the newer model. These errors may occur even when the device is far from the irradiation field.
A Very Simple Method to Calculate the (Positive) Largest Lyapunov Exponent Using Interval Extensions
NASA Astrophysics Data System (ADS)
Mendes, Eduardo M. A. M.; Nepomuceno, Erivelton G.
2016-12-01
In this letter, a very simple method to calculate the positive Largest Lyapunov Exponent (LLE) based on the concept of interval extensions and using the original equations of motion is presented. The exponent is estimated from the slope of the line derived from the lower bound error when considering two interval extensions of the original system. It is shown that the algorithm is robust, fast and easy to implement and can be considered as an alternative to other algorithms available in the literature. The method has been successfully tested on five well-known systems: the Logistic, Hénon, Lorenz and Rössler equations and the Mackey-Glass system.
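The essence of the method can be shown on the logistic map: iterate two mathematically equivalent (interval-extension) forms of the same equation in floating point, take the log of their divergence as a proxy for the lower-bound error, and read the LLE from the slope of its initial linear growth. The parameter values below are illustrative choices, not the letter's test cases.

```python
# LLE estimate from the divergence of two equivalent forms of the logistic map.
import numpy as np

r, x0, n = 3.9, 0.3, 120
xa = xb = x0
div = []
for _ in range(n):
    xa = r * xa * (1.0 - xa)          # extension 1: r*x*(1-x)
    xb = r * xb - r * xb * xb         # extension 2: r*x - r*x^2 (same map, different rounding)
    div.append(abs(xa - xb))

div = np.array(div)
valid = (div > 0) & (div < 1e-2)      # keep the initial linear-growth region only
steps = np.arange(n)[valid]
slope = np.polyfit(steps, np.log(div[valid]), 1)[0]
print(f"estimated largest Lyapunov exponent ~ {slope:.3f} nats/iteration")
```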
Increased User Satisfaction Through an Improved Message System
NASA Technical Reports Server (NTRS)
Weissert, C. L.
1997-01-01
With all of the enhancements in software methodology and testing, there is no guarantee that software can be delivered such that no user errors occur. How to handle these errors when they occur has become a major research topic within human-computer interaction (HCI). Users of the Multimission Spacecraft Analysis Subsystem (MSAS) at the Jet Propulsion Laboratory (JPL), a system of X and Motif graphical user interfaces for analyzing spacecraft data, complained about the lack of information about the error cause and suggested that recovery actions be included in the system error messages... The system was evaluated through usability surveys and was shown to be successful.
Macrae, Toby; Tyler, Ann A
2014-10-01
The authors compared preschool children with co-occurring speech sound disorder (SSD) and language impairment (LI) to children with SSD only in their numbers and types of speech sound errors. In this post hoc quasi-experimental study, independent samples t tests were used to compare the groups in the standard score from different tests of articulation/phonology, percent consonants correct, and the number of omission, substitution, distortion, typical, and atypical error patterns used in the production of different wordlists that had similar levels of phonetic and structural complexity. In comparison with children with SSD only, children with SSD and LI used similar numbers but different types of errors, including more omission patterns (p < .001, d = 1.55) and fewer distortion patterns (p = .022, d = 1.03). There were no significant differences in substitution, typical, and atypical error pattern use. Frequent omission error pattern use may reflect a more compromised linguistic system characterized by absent phonological representations for target sounds (see Shriberg et al., 2005). Research is required to examine the diagnostic potential of early frequent omission error pattern use in predicting later diagnoses of co-occurring SSD and LI and/or reading problems.
Scaffolding--How Can Contingency Lead to Successful Learning When Dealing with Errors?
ERIC Educational Resources Information Center
Wischgoll, Anke; Pauli, Christine; Reusser, Kurt
2015-01-01
Errors indicate learners' misunderstanding and can provide learning opportunities. Providing learning support which is contingent on learners' needs when errors occur is considered effective for developing learners' understanding. The current investigation examines how tutors and tutees interact productively with errors when working on a…
Seong-Cheol, Park; Chong Sik, Lee; Seok Min, Kim; Eu Jene, Choi; Do Hee, Lee; Jung Kyo, Lee
2016-12-22
Recently, the use of magnetic dental implants has been re-popularized with the introduction of strong rare earth metal (for example, neodymium) magnets. Unrecognized magnetic dental implants can cause critical magnetic resonance image distortions. We report a case involving surgical failure caused by a magnetic dental implant. A 62-year-old man underwent deep brain stimulation for medically insufficiently controlled Parkinson's disease. Stereotactic magnetic resonance imaging performed for the first deep brain stimulation showed that the overdenture was removed. However, a dental implant remained and contained a neodymium magnet, which was unrecognized at the time of imaging; the magnet caused localized non-linear distortions that were largest around the dental magnets. In the magnetic field, the subthalamic area was distorted by a 4.6 mm right shift and counterclockwise rotation. However, distortions were visually subtle in the operation field and small for distant stereotactic markers, with approximately 1-2 mm distortions. The surgeon considered the distortion to be normal asymmetry or variation. Stereotactic marker distortion was calculated to be in the acceptable range in the surgical planning software. Targeting errors, approximately 5 mm on the right side and 2 mm on the left side, occurred postoperatively. Both leads were revised after the removal of the dental magnets. Dental magnets may cause surgical failures and should be checked and removed before stereotactic surgery. Our findings should be considered when reviewing surgical precautions and making distortion-detection algorithm improvements.
Ball bearing vibrations amplitude modeling and test comparisons
NASA Technical Reports Server (NTRS)
Hightower, Richard A., III; Bailey, Dave
1995-01-01
Bearings generate disturbances that, when combined with structural gains of a momentum wheel, contribute to induced vibration in the wheel. The frequencies generated by a ball bearing are defined by the bearing's geometry and defects. The amplitudes at these frequencies are dependent upon the actual geometry variations from perfection; therefore, a geometrically perfect bearing will produce no amplitudes at the kinematic frequencies that the design generates. Because perfect geometry can only be approached, emitted vibrations do occur. The most significant vibration is at the spin frequency and can be balanced out in the build process. Other frequencies' amplitudes, however, cannot be balanced out. Momentum wheels are usually the single largest source of vibrations in a spacecraft and can contribute to pointing inaccuracies if emitted vibrations ring the structure or are in the high-gain bandwidth of a sensitive pointing control loop. It is therefore important to be able to provide an a priori knowledge of possible amplitudes that are singular in source or are a result of interacting defects that do not reveal themselves in normal frequency prediction equations. This paper will describe the computer model that provides for the incorporation of bearing geometry errors and then develops an estimation of actual amplitudes and frequencies. Test results were correlated with the model. A momentum wheel was producing an unacceptable 74 Hz amplitude. The model was used to simulate geometry errors and proved successful in identifying a cause that was verified when the parts were inspected.
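The kinematic frequencies mentioned above follow from standard ball-bearing geometry relations; the sketch below evaluates them for a hypothetical bearing. These are the textbook defect-frequency formulas, not the paper's amplitude model, and the geometry values are placeholders.

```python
# Standard ball-bearing kinematic (defect) frequencies from geometry (textbook relations).
import math

def bearing_frequencies(shaft_hz, n_balls, ball_d, pitch_d, contact_deg=0.0):
    c = (ball_d / pitch_d) * math.cos(math.radians(contact_deg))
    return {
        "FTF (cage)":        0.5 * shaft_hz * (1.0 - c),
        "BPFO (outer race)": 0.5 * n_balls * shaft_hz * (1.0 - c),
        "BPFI (inner race)": 0.5 * n_balls * shaft_hz * (1.0 + c),
        "BSF (ball spin)":   0.5 * (pitch_d / ball_d) * shaft_hz * (1.0 - c * c),
    }

# Hypothetical momentum-wheel bearing: 100 Hz spin, 9 balls, 6 mm balls on a 30 mm pitch circle.
for name, f in bearing_frequencies(100.0, 9, 6.0, 30.0, contact_deg=15.0).items():
    print(f"{name}: {f:.1f} Hz")
```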
Menéndez, Lumila Paula
2017-05-01
Intraobserver error (INTRA-OE) is the difference between repeated measurements of the same variable made by the same observer. The objective of this work was to evaluate INTRA-OE from 3D landmarks registered with a Microscribe, in different datasets: (A) the 3D coordinates, (B) linear measurements calculated from A, and (C) the first six principal component axes. INTRA-OE was analyzed by digitizing 42 landmarks from 23 skulls in three sessions, each two weeks apart. Systematic error was tested through repeated-measures ANOVA (ANOVA-RM), while random error was tested through the intraclass correlation coefficient. Results showed that the largest differences between the three observations were found in the first dataset. Some anatomical points like nasion, ectoconchion, temporosphenoparietal, asterion, and temporomandibular presented the highest INTRA-OE. In the second dataset, local distances had higher INTRA-OE than global distances, while the third dataset showed the lowest INTRA-OE. © 2016 American Academy of Forensic Sciences.
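As a concrete illustration of the random-error statistic used above, the sketch below computes a one-way random-effects intraclass correlation coefficient, ICC(1,1), from a subjects-by-sessions matrix. The data are simulated (23 hypothetical specimens digitized in 3 sessions); this is only the standard formula, not the authors' analysis pipeline.

```python
import numpy as np

def icc_oneway(x):
    """One-way random-effects ICC(1,1) for an (n_subjects, k_sessions) array."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    ms_between = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_within = np.sum((x - row_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rng = np.random.default_rng(0)
true_vals = rng.normal(100.0, 5.0, size=23)                         # 23 hypothetical specimens
sessions = true_vals[:, None] + rng.normal(0.0, 0.5, size=(23, 3))  # 3 sessions, small random error
print(round(icc_oneway(sessions), 3))                               # close to 1 -> low random error
```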
Assessment of Computational Fluid Dynamics (CFD) Models for Shock Boundary-Layer Interaction
NASA Technical Reports Server (NTRS)
DeBonis, James R.; Oberkampf, William L.; Wolf, Richard T.; Orkwis, Paul D.; Turner, Mark G.; Babinsky, Holger
2011-01-01
A workshop on the computational fluid dynamics (CFD) prediction of shock boundary-layer interactions (SBLIs) was held at the 48th AIAA Aerospace Sciences Meeting. As part of the workshop, numerous CFD analysts submitted solutions to four experimentally measured SBLIs. This paper describes the assessment of the CFD predictions. The assessment includes an uncertainty analysis of the experimental data, the definition of an error metric, and the application of that metric to the CFD solutions. The CFD solutions provided very similar levels of error, and in general it was difficult to discern clear trends in the data. For the Reynolds-averaged Navier-Stokes (RANS) methods, the choice of turbulence model appeared to be the largest factor in solution accuracy. Large-eddy simulation methods produced error levels similar to RANS methods but provided superior predictions of normal stresses.
Cellulose nanocrystals the next big nano-thing?
Michael T. Postek; Andras Vladar; John Dagata; Natalia Farkas; Bin Ming; Ronald Sabo; Theodore H. Wegner; James Beecher
2008-01-01
Biomass surrounds us, from the smallest alga to the largest redwood tree. Even the largest trees owe their strength to a newly appreciated class of nanomaterials known as cellulose nanocrystals (CNC). Cellulose, the world's most abundant natural, renewable, biodegradable polymer, occurs as whisker-like microfibrils that are biosynthesized and deposited in plant material...
SIRTF Focal Plane Survey: A Pre-flight Error Analysis
NASA Technical Reports Server (NTRS)
Bayard, David S.; Brugarolas, Paul B.; Boussalis, Dhemetrios; Kang, Bryan H.
2003-01-01
This report contains a pre-flight error analysis of the calibration accuracies expected from implementing the currently planned SIRTF focal plane survey strategy. The main purpose of this study is to verify that the planned strategy will meet focal plane survey calibration requirements (as put forth in the SIRTF IOC-SV Mission Plan [4]), and to quantify the actual accuracies expected. The error analysis was performed by running the Instrument Pointing Frame (IPF) Kalman filter on a complete set of simulated IOC-SV survey data, and studying the resulting propagated covariances. The main conclusion of this study is that all focal plane calibration requirements can be met with the currently planned survey strategy. The associated margins range from 3 to 95 percent, and tend to be smallest for frames having a 0.14" requirement, and largest for frames having a more generous 0.28" (or larger) requirement. The smallest margin of 3 percent is associated with the IRAC 3.6 and 5.8 micron array centers (frames 068 and 069), and the largest margin of 95 percent is associated with the MIPS 160 micron array center (frame 087). For pointing purposes, the most critical calibrations are for the IRS Peakup sweet spots and short wavelength slit centers (frames 019, 023, 052, 028, 034). Results show that these frames are meeting their 0.14" requirements with an expected accuracy of approximately 0.1", which corresponds to a 28 percent margin.
Foot Structure in Japanese Speech Errors: Normal vs. Pathological
ERIC Educational Resources Information Center
Miyakoda, Haruko
2008-01-01
Although many studies of speech errors have been presented in the literature, most have focused on errors occurring at either the segmental or feature level. Few, if any, studies have dealt with the prosodic structure of errors. This paper aims to fill this gap by taking up the issue of prosodic structure in Japanese speech errors, with a focus on…
NASA Model of "Threat and Error" in Pediatric Cardiac Surgery: Patterns of Error Chains.
Hickey, Edward; Pham-Hung, Eric; Nosikova, Yaroslavna; Halvorsen, Fredrik; Gritti, Michael; Schwartz, Steven; Caldarone, Christopher A; Van Arsdell, Glen
2017-04-01
We introduced the National Aeronautics and Space Administration threat-and-error model to our surgical unit. All admissions are considered flights, which should pass through stepwise deescalations in risk during surgical recovery. We hypothesized that errors significantly influence risk deescalation and contribute to poor outcomes. Patient flights (524) were tracked in real time for threats, errors, and unintended states by full-time performance personnel. Expected risk deescalation steps were weaning from mechanical support, sternal closure, extubation, intensive care unit (ICU) discharge, and discharge home. Data were accrued from clinical charts, bedside data, reporting mechanisms, and staff interviews. Infographics of flights were openly discussed weekly for consensus. In 12% (64 of 524) of flights, the child failed to deescalate sequentially through expected risk levels; unintended increments instead occurred. Failed deescalations were highly associated with errors (426; 257 flights; p < 0.0001). Consequential errors (263; 173 flights) were associated with a 29% rate of failed deescalation versus 4% in flights with no consequential error (p < 0.0001). The most dangerous errors were apical errors typically (84%) occurring in the operating room, which caused chains of propagating unintended states (n = 110): these had a 43% (47 of 110) rate of failed deescalation (versus 4%; p < 0.0001). Chains of unintended state were often (46%) amplified by additional (up to 7) errors in the ICU that would worsen clinical deviation. Overall, failed deescalations in risk were extremely closely linked to brain injury (n = 13; p < 0.0001) or death (n = 7; p < 0.0001). Deaths and brain injury after pediatric cardiac surgery almost always occur from propagating error chains that originate in the operating room and are often amplified by additional ICU errors. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
Goulet, Eric D B; Baker, Lindsay B
2017-12-01
The B-722 Laqua Twin is a low-cost, portable, battery-operated sodium analyzer, which can be used for the assessment of sweat sodium concentration. The Laqua Twin is reliable and provides a degree of accuracy similar to more expensive analyzers; however, its interunit measurement error remains unknown. The purpose of this study was to compare the sodium concentration values of 70 sweat samples measured using three different Laqua Twin units. Mean absolute errors, random errors and constant errors among the different Laqua Twins ranged between 1.7 and 3.5 mmol/L, 2.5 and 3.7 mmol/L, and -0.6 and 3.9 mmol/L, respectively. Proportional errors among Laqua Twins were all < 2%. Based on a within-subject biological variability in sweat sodium concentration of ± 12%, the maximal allowable imprecision among instruments was considered to be ≤ 6%. In that respect, the within (2.9%), between (4.5%), and total (5.4%) measurement error coefficients of variation were all < 6%. For a given sweat sodium concentration value, the largest observed differences among instruments in mean, lower-bound, and upper-bound error of measurement were, respectively, 4.7 mmol/L, 2.3 mmol/L, and 7.0 mmol/L. In conclusion, our findings show that the interunit measurement error of the B-722 Laqua Twin is low and methodologically acceptable.
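The ≤ 6% allowable-imprecision threshold above is consistent with the common rule of thumb that desirable analytical imprecision is at most half of within-subject biological variation. A tiny arithmetic check under that assumption, using the coefficients of variation reported in the abstract:

```python
biological_cv = 12.0                    # within-subject biological variation, percent
allowable_cv = 0.5 * biological_cv      # desirable analytical imprecision rule of thumb
measured_cvs = {"within": 2.9, "between": 4.5, "total": 5.4}   # percent, from the abstract

print(allowable_cv)                                                       # 6.0
print({name: cv <= allowable_cv for name, cv in measured_cvs.items()})    # all True
```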
Latent error detection: A golden two hours for detection.
Saward, Justin R E; Stanton, Neville A
2017-03-01
Undetected error in safety critical contexts generates a latent condition that can contribute to a future safety failure. The detection of latent errors post-task completion is observed in naval air engineers using a diary to record work-related latent error detection (LED) events. A systems view is combined with multi-process theories to explore sociotechnical factors associated with LED. Perception of cues in different environments facilitates successful LED; the deliberate review of past tasks within two hours of the error occurring, whilst remaining in the same or a similar sociotechnical environment to that in which the error occurred, appears most effective. Identified ergonomic interventions offer potential mitigation for latent errors, particularly in simple everyday habitual tasks. It is thought that safety critical organisations should look to engineer further resilience through the application of LED techniques that engage with system cues across the entire sociotechnical environment, rather than relying on consistent human performance. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Risk Factors for Increased Severity of Paediatric Medication Administration Errors
Sears, Kim; Goodman, William M.
2012-01-01
Patients' risks from medication errors are widely acknowledged. Yet not all errors, if they occur, have the same risks for severe consequences. Facing resource constraints, policy makers could prioritize factors having the greatest severe–outcome risks. This study assists such prioritization by identifying work-related risk factors most clearly associated with more severe consequences. Data from three Canadian paediatric centres were collected, without identifiers, on actual or potential errors that occurred. Three hundred seventy-two errors were reported, with outcome severities ranging from time delays up to fatalities. Four factors correlated significantly with increased risk for more severe outcomes: insufficient training; overtime; precepting a student; and off-service patient. Factors' impacts on severity also vary with error class: for wrong-time errors, the factors precepting a student or working overtime significantly increase severe-outcomes risk. For other types, caring for an off-service patient has greatest severity risk. To expand such research, better standardization is needed for categorizing outcome severities. PMID:23968607
Clinical errors that can occur in the treatment decision-making process in psychotherapy.
Park, Jake; Goode, Jonathan; Tompkins, Kelley A; Swift, Joshua K
2016-09-01
Clinical errors occur in the psychotherapy decision-making process whenever a less-than-optimal treatment or approach is chosen when working with clients. A less-than-optimal approach may be one that a client is unwilling to try or fully invest in based on his/her expectations and preferences, or one that may have little chance of success based on contraindications and/or limited research support. The doctor knows best and the independent choice models are two decision-making models that are frequently used within psychology, but both are associated with an increased likelihood of errors in the treatment decision-making process. In particular, these models fail to integrate all three components of the definition of evidence-based practice in psychology (American Psychological Association, 2006). In this article we describe both models and provide examples of clinical errors that can occur in each. We then introduce the shared decision-making model as an alternative that is less prone to clinical errors. PsycINFO Database Record (c) 2016 APA, all rights reserved
Error Analysis in Mathematics. Technical Report #1012
ERIC Educational Resources Information Center
Lai, Cheng-Fei
2012-01-01
Error analysis is a method commonly used to identify the cause of student errors when they make consistent mistakes. It is a process of reviewing a student's work and then looking for patterns of misunderstanding. Errors in mathematics can be factual, procedural, or conceptual, and may occur for a number of reasons. Reasons why students make…
Error Tendencies in Processing Student Feedback for Instructional Decision Making.
ERIC Educational Resources Information Center
Schermerhorn, John R., Jr.; And Others
1985-01-01
Seeks to assist instructors in recognizing two basic errors that can occur in processing student evaluation data on instructional development efforts; offers a research framework for future investigations of the error tendencies and related issues; and suggests ways in which instructors can confront and manage error tendencies in practice. (MBR)
Understanding EFL Students' Errors in Writing
ERIC Educational Resources Information Center
Phuket, Pimpisa Rattanadilok Na; Othman, Normah Binti
2015-01-01
Writing is the most difficult skill in English, so most EFL students tend to make errors in writing. In assisting learners to successfully acquire writing skill, the analysis of errors and the understanding of their sources are necessary. This study attempts to explore the major sources of errors occurring in the writing of EFL students. It…
Dimensional accuracy of ceramic self-ligating brackets and estimates of theoretical torsional play.
Lee, Youngran; Lee, Dong-Yul; Kim, Yoon-Ji R
2016-09-01
To ascertain the dimensional accuracies of some commonly used ceramic self-ligation brackets and the amount of torsional play in various bracket-archwire combinations. Four types of 0.022-inch slot ceramic self-ligating brackets (upper right central incisor), three types of 0.018-inch ceramic self-ligating brackets (upper right central incisor), and three types of rectangular archwires (0.016 × 0.022-inch beta-titanium [TMA] (Ormco, Orange, Calif), 0.016 × 0.022-inch stainless steel [SS] (Ortho Technology, Tampa, Fla), and 0.019 × 0.025-inch SS (Ortho Technology)) were measured using a stereomicroscope to determine slot widths and wire cross-sectional dimensions. The mean acquired dimensions of the brackets and wires were applied to an equation devised by Meling to estimate torsional play angle (γ). In all bracket systems, the slot tops were significantly wider than the slot bases (P < .001), yielding a divergent slot profile. Clarity-SLs (3M Unitek, Monrovia, Calif) showed the greatest divergence among the 0.022-inch brackets, and Clippy-Cs (Tomy, Futaba, Fukushima, Japan) among the 0.018-inch brackets. The Damon Clear (Ormco) bracket had the smallest dimensional error (0.542%), whereas the 0.022-inch Empower Clear (American Orthodontics, Sheboygan, Wis) bracket had the largest (3.585%). The largest amount of theoretical play is observed using the Empower Clear (American Orthodontics) 0.022-inch bracket combined with the 0.016 × 0.022-inch TMA wire (Ormco), whereas the least amount occurs using the 0.018 Clippy-C (Tomy) combined with 0.016 × 0.022-inch SS wire (Ortho Technology).
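Meling's equation for torsional play accounts for wire edge bevel and the measured slot and wire dimensions; the exact expression is not reproduced in the abstract. As a rough orientation, the sketch below uses a simplified geometric model that ignores edge bevel: a sharp-cornered rectangular wire of thickness t and width w is rotated in a slot of height h until its projected height w*sin(theta) + t*cos(theta) fills the slot. The dimensions are nominal, not the measured values from the study.

```python
import math

def torsional_play_deg(slot_height, wire_thickness, wire_width):
    """Play angle (degrees) for a sharp-cornered rectangular wire in a rectangular slot.

    Simplified model (no edge bevel): solve w*sin(theta) + t*cos(theta) = h for theta.
    """
    h, t, w = slot_height, wire_thickness, wire_width
    diag = math.hypot(t, w)
    if h >= diag:
        return 90.0                       # slot taller than the wire diagonal: free rotation
    theta = math.asin(h / diag) - math.atan2(t, w)
    return math.degrees(max(theta, 0.0))

# Nominal 0.022-inch slot with a nominal 0.019 x 0.025-inch wire
print(round(torsional_play_deg(0.022, 0.019, 0.025), 1))   # about 7 degrees under this simplified model
```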
Remote sensing of channels and riparian zones with a narrow-beam aquatic-terrestrial LIDAR
Jim McKean; Dave Nagel; Daniele Tonina; Philip Bailey; Charles Wayne Wright; Carolyn Bohn; Amar Nayegandhi
2009-01-01
The high-resolution Experimental Advanced Airborne Research LIDAR (EAARL) is a new technology for cross-environment surveys of channels and floodplains. EAARL measurements of basic channel geometry, such as wetted cross-sectional area, are within a few percent of those from control field surveys. The largest channel mapping errors are along stream banks. The LIDAR data...
Positive sliding mode control for blood glucose regulation
NASA Astrophysics Data System (ADS)
Menani, Karima; Mohammadridha, Taghreed; Magdelaine, Nicolas; Abdelaziz, Mourad; Moog, Claude H.
2017-11-01
Biological systems involving positive variables as concentrations are some examples of so-called positive systems. This is the case of the glycemia-insulinemia system considered in this paper. To cope with these physical constraints, it is shown that a positive sliding mode control (SMC) can be designed for glycemia regulation. The largest positive invariant set (PIS) is obtained for the insulinemia subsystem in open and closed loop. The existence of a positive SMC for glycemia regulation is shown here for the first time. Necessary conditions to design the sliding surface and the discontinuity gain are derived to guarantee a positive SMC for the insulin dynamics. SMC is designed to be positive everywhere in the largest closed-loop PIS of plasma insulin system. Two-stage SMC is employed; the last stage SMC2 block uses the glycemia error to design the desired insulin trajectory. Then the plasma insulin state is forced to track the reference via SMC1. The resulting desired insulin trajectory is the required virtual control input of the glycemia system to eliminate blood glucose (BG) error. The positive control is tested in silico on type-1 diabetic patients model derived from real-life clinical data.
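The abstract does not give the control law itself, so the sketch below is only a generic illustration of the two ingredients it names: a sliding-mode term driven by a tracking error, and a positivity constraint on the control (insulin delivery cannot be negative). The first-order plant, gains, and reference are hypothetical and are not the patient model of the paper.

```python
import numpy as np

# Toy first-order plant: dx/dt = -a*x + b*u   (x stands in for the plasma insulin state)
a, b = 0.1, 0.05
k_smc = 2.0                        # discontinuity gain
dt, steps = 0.1, 600
x, x_ref = 0.0, 10.0               # track a constant reference level

for _ in range(steps):
    s = x - x_ref                  # sliding surface
    u_eq = (a * x) / b             # equivalent control (makes ds/dt = 0 for a constant reference)
    u = u_eq - (k_smc / b) * np.sign(s)
    u = max(u, 0.0)                # positivity constraint on the control input
    x += dt * (-a * x + b * u)

print(round(x, 2))                 # ends near the reference 10.0, within a small chattering band
```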
Refraction Correction in 3D Transcranial Ultrasound Imaging
Lindsey, Brooks D.; Smith, Stephen W.
2014-01-01
We present the first correction of refraction in three-dimensional (3D) ultrasound imaging using an iterative approach that traces propagation paths through a two-layer planar tissue model, applying Snell’s law in 3D. This approach is applied to real-time 3D transcranial ultrasound imaging by precomputing delays offline for several skull thicknesses, allowing the user to switch between three sets of delays for phased array imaging at the push of a button. Simulations indicate that refraction correction may be expected to increase sensitivity, reduce beam steering errors, and partially restore lost spatial resolution, with the greatest improvements occurring at the largest steering angles. Distorted images of cylindrical lesions were created by imaging through an acrylic plate in a tissue-mimicking phantom. As a result of correcting for refraction, lesions were restored to 93.6% of their original diameter in the lateral direction and 98.1% of their original shape along the long axis of the cylinders. In imaging two healthy volunteers, the mean brightness increased by 8.3% and showed no spatial dependency. PMID:24275538
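The geometric building block of the two-layer model above is vector (3D) Snell's law at a planar interface. A minimal sketch follows; for ultrasound the ratio of sound speeds plays the role that the ratio of refractive indices plays in optics, and the speeds used here are only illustrative.

```python
import numpy as np

def refract(d, n, c1, c2):
    """Refract unit direction d at a plane with unit normal n pointing back into medium 1.

    c1, c2 are propagation speeds in media 1 and 2. Returns the refracted unit
    direction, or None on total internal reflection.
    """
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    eta = c2 / c1                       # Snell's law for acoustics: sin(t) = (c2/c1) * sin(i)
    cos_i = -np.dot(n, d)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                     # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

# 30-degree incidence from soft tissue (~1540 m/s) into an acrylic plate (~2750 m/s, illustrative)
d = np.array([np.sin(np.radians(30.0)), 0.0, -np.cos(np.radians(30.0))])
t = refract(d, np.array([0.0, 0.0, 1.0]), 1540.0, 2750.0)
refracted_deg = np.degrees(np.arcsin(np.linalg.norm(np.cross(t, [0.0, 0.0, 1.0]))))
print(refracted_deg)                    # ~63 degrees, bent away from the normal
```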
Lessons from the Salk Polio Vaccine: Methods for and Risks of Rapid Translation
Juskewitch, B.A., Justin E.; Tapia, B.A., Carmen J.; Windebank, Anthony J.
2010-01-01
The Salk inactivated poliovirus vaccine is one of the most rapid examples of bench-to-bedside translation in medicine. In the span of 6 years, the key basic lab discoveries facilitating the development of the vaccine were made, optimization and safety testing were completed in both animals and human volunteers, the largest clinical trial in history, involving 1.8 million children, was conducted, and the results were released to an eagerly awaiting public. Such examples of rapid translation can not only offer clues to what factors can successfully drive and accelerate the translational process but also what mistakes can occur (and thus should be avoided) during such a swift process. In this commentary, we explore the translational path of the Salk polio vaccine from the key basic science discoveries to the 1954 Field Trials and delve into the scientific and sociopolitical factors that aided in its rapid development. Moreover, we look at the Cutter and Wyeth incidents after the vaccine's approval and the errors that led to them. Clin Trans Sci 2010; Volume 3: 182-185 PMID:20718820
Peak discharge of a Pleistocene lava-dam outburst flood in Grand Canyon, Arizona, USA
Fenton, C.R.; Webb, R.H.; Cerling, T.E.
2006-01-01
The failure of a lava dam 165,000 yr ago produced the largest known flood on the Colorado River in Grand Canyon. The Hyaloclastite Dam was up to 366 m high, and geochemical evidence linked this structure to outburst-flood deposits that occurred for 32 km downstream. Using the Hyaloclastite outburst-flood deposits as paleostage indicators, we used dam-failure and unsteady flow modeling to estimate a peak discharge and flow hydrograph. Failure of the Hyaloclastite Dam released a maximum 11 × 10^9 m^3 of water in 31 h. Peak discharges, estimated from uncertainty in channel geometry, dam height, and hydraulic characteristics, ranged from 2.3 to 5.3 × 10^5 m^3 s^-1 for the Hyaloclastite outburst flood. This discharge is an order of magnitude greater than the largest known discharge on the Colorado River (1.4 × 10^4 m^3 s^-1) and the largest peak discharge resulting from failure of a constructed dam in the USA (6.5 × 10^4 m^3 s^-1). Moreover, the Hyaloclastite outburst flood is the oldest documented Quaternary flood and one of the largest to have occurred in the continental USA. The peak discharge for this flood ranks in the top 30 floods (>10^5 m^3 s^-1) known worldwide and in the top ten largest floods in North America. © 2005 University of Washington. All rights reserved.
Peak discharge of a Pleistocene lava-dam outburst flood in Grand Canyon, Arizona, USA
NASA Astrophysics Data System (ADS)
Fenton, Cassandra R.; Webb, Robert H.; Cerling, Thure E.
2006-03-01
The failure of a lava dam 165,000 yr ago produced the largest known flood on the Colorado River in Grand Canyon. The Hyaloclastite Dam was up to 366 m high, and geochemical evidence linked this structure to outburst-flood deposits that occurred for 32 km downstream. Using the Hyaloclastite outburst-flood deposits as paleostage indicators, we used dam-failure and unsteady flow modeling to estimate a peak discharge and flow hydrograph. Failure of the Hyaloclastite Dam released a maximum 11 × 10^9 m^3 of water in 31 h. Peak discharges, estimated from uncertainty in channel geometry, dam height, and hydraulic characteristics, ranged from 2.3 to 5.3 × 10^5 m^3 s^-1 for the Hyaloclastite outburst flood. This discharge is an order of magnitude greater than the largest known discharge on the Colorado River (1.4 × 10^4 m^3 s^-1) and the largest peak discharge resulting from failure of a constructed dam in the USA (6.5 × 10^4 m^3 s^-1). Moreover, the Hyaloclastite outburst flood is the oldest documented Quaternary flood and one of the largest to have occurred in the continental USA. The peak discharge for this flood ranks in the top 30 floods (>10^5 m^3 s^-1) known worldwide and in the top ten largest floods in North America.
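A quick arithmetic cross-check of the figures above, under the simple assumption that the released volume drained over the stated 31 hours: the implied mean discharge sits below the modeled peak range, and the peak range is indeed roughly an order of magnitude above the largest known Colorado River discharge.

```python
volume_m3 = 11e9                        # maximum released volume, m^3
duration_s = 31 * 3600                  # 31 hours in seconds
mean_q = volume_m3 / duration_s
print(f"{mean_q:.2e} m^3/s")            # ~9.9e4 m^3/s, below the 2.3-5.3e5 peak range
print(2.3e5 / 1.4e4, 5.3e5 / 1.4e4)     # peak is ~16-38x the largest known Colorado River discharge
```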
Holocene South Asian Monsoon Climate Change - Potential Mechanisms and Effects on Past Civilizations
NASA Astrophysics Data System (ADS)
Staubwasser, M.; Sirocko, F.; Grootes, P. M.; Erlenkeuser, H.; Segl, M.
2002-12-01
Planktonic oxygen isotope ratios from the laminated sediment core 63KA off the river Indus delta dated with 80 AMS radiocarbon ages reveal significant climate changes in the south Asian monsoon system throughout the Holocene. The most prominent event of the early-mid Holocene occurred after 8.4 ka BP and is within dating error of the GISP/GRIP event centered at 8.2 ka BP. The late Holocene is generally more variable, and shows non-periodic cycles in the multi-centennial frequency band. The largest change of the entire Holocene occurred at 4.2 ka BP and is concordant with the end of urban Harappan civilization in the Indus valley. Opposing isotopic trends across the northern Arabian Sea surface indicate a reduction in Indus river discharge at that time. Consequently, sustained drought may have initiated the archaeologically recorded interval of southeastward habitat tracking within the Harappan cultural domain. The hemispheric significance of the 4.2 ka BP event is evident from concordant climate change in the eastern Mediterranean and the Middle East. The late Holocene cycles in South Asia, which most likely represent drought cycles, vary between 250 and 800 years and are coherent with the evolution of cosmogenic radiocarbon production rates in the atmosphere. This suggests that solar variability is the fundamental cause behind late Holocene rainfall changes at least over south Asia.
The Cataclysmic 1991 Eruption of Mount Pinatubo, Philippines
Newhall, Christopher G.; Hendley, James W.; Stauffer, Peter H.
1997-01-01
The second-largest volcanic eruption of this century, and by far the largest eruption to affect a densely populated area, occurred at Mount Pinatubo in the Philippines on June 15, 1991. The eruption produced high-speed avalanches of hot ash and gas, giant mudflows, and a cloud of volcanic ash hundreds of miles across. The impacts of the eruption continue to this day.
Environmental Assessment for the Replacement of Water Reservoirs
2005-03-25
Intensive extraction of groundwater does not occur at Travis because of poor water-bearing subsurface geology. ... the largest contiguous estuarine marsh and the largest wetland in the continental United States (CH2M HILL, 2001). Suisun Marsh drains into Grizzly and ... and other support facilities. Community (Commercial) uses include the exchange, commissary, banking, dining facilities, eating ...
Kalmár, Éva; Lasher, Jason Richard; Tarry, Thomas Dean; Myers, Andrea; Szakonyi, Gerda; Dombi, György; Baki, Gabriella; Alexander, Kenneth S.
2013-01-01
The availability of suppositories in Hungary, especially in clinical pharmacy practice, is usually provided by extemporaneous preparations. Due to the known advantages of rectal drug administration, its benefits are frequently utilized in pediatrics. However, errors during the extemporaneous manufacturing process can lead to non-homogenous drug distribution within the dosage units. To determine the root cause of these errors and provide corrective actions, we studied suppository samples prepared with exactly known errors using both cerimetric titration and HPLC technique. Our results show that the most frequent technological error occurs when the pharmacist fails to use the correct displacement factor in the calculations which could lead to a 4.6% increase/decrease in the assay in individual dosage units. The second most important source of error can occur when the molding excess is calculated solely for the suppository base. This can further dilute the final suppository drug concentration causing the assay to be as low as 80%. As a conclusion we emphasize that the application of predetermined displacement factors in calculations for the formulation of suppositories is highly important, which enables the pharmacist to produce a final product containing exactly the determined dose of an active substance despite the different densities of the components. PMID:25161378
Kalmár, Eva; Lasher, Jason Richard; Tarry, Thomas Dean; Myers, Andrea; Szakonyi, Gerda; Dombi, György; Baki, Gabriella; Alexander, Kenneth S
2014-09-01
The availability of suppositories in Hungary, especially in clinical pharmacy practice, is usually provided by extemporaneous preparations. Due to the known advantages of rectal drug administration, its benefits are frequently utilized in pediatrics. However, errors during the extemporaneous manufacturing process can lead to non-homogenous drug distribution within the dosage units. To determine the root cause of these errors and provide corrective actions, we studied suppository samples prepared with exactly known errors using both cerimetric titration and HPLC technique. Our results show that the most frequent technological error occurs when the pharmacist fails to use the correct displacement factor in the calculations which could lead to a 4.6% increase/decrease in the assay in individual dosage units. The second most important source of error can occur when the molding excess is calculated solely for the suppository base. This can further dilute the final suppository drug concentration causing the assay to be as low as 80%. As a conclusion we emphasize that the application of predetermined displacement factors in calculations for the formulation of suppositories is highly important, which enables the pharmacist to produce a final product containing exactly the determined dose of an active substance despite the different densities of the components.
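The displacement-factor calculation discussed above can be illustrated with the standard compounding formula: the base required is the calibrated blank mass per mold minus the base displaced by the drug (drug mass divided by the displacement factor), summed over the batch. The numbers below are hypothetical and are not taken from the study.

```python
def base_required(n_supp, mold_capacity_g, drug_dose_g, displacement_factor):
    """Grams of suppository base needed for a batch of n_supp suppositories.

    mold_capacity_g     -- mass of base that fills one mold cavity (blank calibration)
    drug_dose_g         -- drug mass per suppository
    displacement_factor -- grams of drug that displace 1 g of base
    """
    base_displaced = drug_dose_g / displacement_factor
    return n_supp * (mold_capacity_g - base_displaced)

# Hypothetical batch: 10 suppositories, 2.0 g mold capacity, 0.25 g drug dose, displacement factor 1.5
print(round(base_required(10, 2.0, 0.25, 1.5), 2))   # 18.33 g of base
```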
Medication errors: an overview for clinicians.
Wittich, Christopher M; Burkle, Christopher M; Lanier, William L
2014-08-01
Medication error is an important cause of patient morbidity and mortality, yet it can be a confusing and underappreciated concept. This article provides a review for practicing physicians that focuses on medication error (1) terminology and definitions, (2) incidence, (3) risk factors, (4) avoidance strategies, and (5) disclosure and legal consequences. A medication error is any error that occurs at any point in the medication use process. It has been estimated by the Institute of Medicine that medication errors cause 1 of 131 outpatient and 1 of 854 inpatient deaths. Medication factors (eg, similar sounding names, low therapeutic index), patient factors (eg, poor renal or hepatic function, impaired cognition, polypharmacy), and health care professional factors (eg, use of abbreviations in prescriptions and other communications, cognitive biases) can precipitate medication errors. Consequences faced by physicians after medication errors can include loss of patient trust, civil actions, criminal charges, and medical board discipline. Methods to prevent medication errors from occurring (eg, use of information technology, better drug labeling, and medication reconciliation) have been used with varying success. When an error is discovered, patients expect disclosure that is timely, given in person, and accompanied with an apology and communication of efforts to prevent future errors. Learning more about medication errors may enhance health care professionals' ability to provide safe care to their patients. Copyright © 2014 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Pouplier, Marianne; Marin, Stefania; Waltl, Susanne
2014-01-01
Purpose: Phonetic accommodation in speech errors has traditionally been used to identify the processing level at which an error has occurred. Recent studies have challenged the view that noncanonical productions may solely be due to phonetic, not phonological, processing irregularities, as previously assumed. The authors of the present study…
Error Analysis of Brailled Instructional Materials Produced by Public School Personnel in Texas
ERIC Educational Resources Information Center
Herzberg, Tina
2010-01-01
In this study, a detailed error analysis was performed to determine if patterns of errors existed in braille transcriptions. The most frequently occurring errors were the insertion of letters or words that were not contained in the original print material; the incorrect usage of the emphasis indicator; and the incorrect formatting of titles,…
Multiple-generator errors are unavoidable under model misspecification.
Jewett, D L; Zhang, Z
1995-08-01
Model misspecification poses a major problem for dipole source localization (DSL) because it causes insidious multiple-generator errors (MulGenErrs) to occur in the fitted dipole parameters. This paper describes how and why this occurs, based upon simple algebraic considerations. MulGenErrs must occur, to some degree, in any DSL analysis of real data because there is model misspecification and mathematically the equations used for the simultaneously active generators must be of a different form than the equations for each generator active alone.
Haller, U; Welti, S; Haenggi, D; Fink, D
2005-06-01
Not only the number of liability cases but also the size of individual claims due to alleged treatment errors is increasing steadily. Spectacular verdicts, especially in the USA, encourage this trend. Wherever human beings work, errors happen. The health care system is particularly susceptible and shows a high potential for errors. Therefore, risk management has to be given top priority in hospitals. Preparing the introduction of critical incident reporting (CIR) as the means of reporting errors is time-consuming and calls for a change in attitude, because in many places the necessary base of trust has to be created first. CIR is not meant to find the guilty and punish them but to uncover the origins of errors in order to eliminate them. The Department of Anesthesiology of the University Hospital of Basel has developed an electronic error notification system, which, in collaboration with the Swiss Medical Association, allows each specialist society to participate electronically in a CIR system (CIRS) in order to create the largest database possible and thereby to allow statements concerning the extent and type of error sources in medicine. After a pilot project in 2000-2004, the Swiss Society of Gynecology and Obstetrics is now progressively introducing the 'CIRS Medical' of the Swiss Medical Association. In our country, such programs are vulnerable to judicial intervention due to the lack of explicit legal guarantees of protection. High-quality data registration and skillful counseling are all the more important. Hospital directors and managers are called upon to examine those incidents which are based on errors inherent in the system.
Chu, David; Xiao, Jane; Shah, Payal; Todd, Brett
2018-06-20
Cognitive errors are a major contributor to medical error. Traditionally, medical errors at teaching hospitals are analyzed in morbidity and mortality (M&M) conferences. We aimed to describe the frequency of cognitive errors in relation to the occurrence of diagnostic and other error types, in cases presented at an emergency medicine (EM) resident M&M conference. We conducted a retrospective study of all cases presented at a suburban US EM residency monthly M&M conference from September 2011 to August 2016. Each case was reviewed using the electronic medical record (EMR) and notes from the M&M case by two EM physicians. Each case was categorized by type of primary medical error that occurred as described by Okafor et al. When a diagnostic error occurred, the case was reviewed for contributing cognitive and non-cognitive factors. Finally, when a cognitive error occurred, the case was classified into faulty knowledge, faulty data gathering or faulty synthesis, as described by Graber et al. Disagreements in error type were mediated by a third EM physician. A total of 87 M&M cases were reviewed; the two reviewers agreed on 73 cases, and 14 cases required mediation by a third reviewer. Forty-eight cases involved diagnostic errors, 47 of which were cognitive errors. Of these 47 cases, 38 involved faulty synthesis, 22 involved faulty data gathering and only 11 involved faulty knowledge. Twenty cases contained more than one type of cognitive error. Twenty-nine cases involved both a resident and an attending physician, while 17 cases involved only an attending physician. Twenty-one percent of the resident cases involved all three cognitive errors, while none of the attending cases involved all three. Forty-one percent of the resident cases and only 6% of the attending cases involved faulty knowledge. One hundred percent of the resident cases and 94% of the attending cases involved faulty synthesis. Our review of 87 EM M&M cases revealed that cognitive errors are commonly involved in cases presented, and that these errors are less likely due to deficient knowledge and more likely due to faulty synthesis. M&M conferences may therefore provide an excellent forum to discuss cognitive errors and how to reduce their occurrence.
Errors in radiation oncology: A study in pathways and dosimetric impact
Drzymala, Robert E.; Purdy, James A.; Michalski, Jeff
2005-01-01
As complexity for treating patients increases, so does the risk of error. Some publications have suggested that record and verify (R&V) systems may contribute to propagating errors. Direct data transfer has the potential to eliminate most, but not all, errors. And although the dosimetric consequences may be obvious in some cases, a detailed study does not exist. In this effort, we examined potential errors in terms of scenarios, pathways of occurrence, and dosimetry. Our goal was to prioritize error prevention according to likelihood of event and dosimetric impact. For conventional photon treatments, we investigated errors of incorrect source-to-surface distance (SSD), energy, omitted wedge (physical, dynamic, or universal) or compensating filter, incorrect wedge or compensating filter orientation, improper rotational rate for arc therapy, and geometrical misses due to incorrect gantry, collimator or table angle, reversed field settings, and setup errors. For electron beam therapy, errors investigated included incorrect energy and incorrect SSD, along with geometric misses. For special procedures we examined errors for total body irradiation (TBI, incorrect field size, dose rate, treatment distance) and LINAC radiosurgery (incorrect collimation setting, incorrect rotational parameters). Likelihood of error was determined and subsequently rated according to our history of detecting such errors. Dosimetric evaluation was conducted by using dosimetric data, treatment plans, or measurements. We found geometric misses to have the highest error probability. They most often occurred due to improper setup via coordinate shift errors or incorrect field shaping. The dosimetric impact is unique for each case and depends on the proportion of fields in error and volume mistreated. These errors were short-lived due to rapid detection via port films. The most significant dosimetric error was related to a reversed wedge direction. This may occur due to incorrect collimator angle or wedge orientation. For parallel-opposed 60° wedge fields, this error could be as high as 80% to a point off-axis. Other examples of dosimetric impact included the following: SSD, ~2%/cm for photons or electrons; photon energy (6 MV vs. 18 MV), on average 16% depending on depth; electron energy, ~0.5 cm of depth coverage per MeV (mega-electron volt). Of these examples, incorrect distances were most likely but rapidly detected by in vivo dosimetry. Errors were categorized by occurrence rate, methods and timing of detection, longevity, and dosimetric impact. Solutions were devised according to these criteria. To date, no one has studied the dosimetric impact of global errors in radiation oncology. Although there is heightened awareness that with increased use of ancillary devices and automation, there must be a parallel increase in quality check systems and processes, errors do and will continue to occur. This study has helped us identify and prioritize potential errors in our clinic according to frequency and dosimetric impact. For example, to reduce the use of an incorrect wedge direction, our clinic employs off-axis in vivo dosimetry. To avoid a treatment distance setup error, we use both vertical table settings and optical distance indicator (ODI) values to properly set up fields. As R&V systems become more automated, more accurate and efficient data transfer will occur. This will require further analysis. Finally, we have begun examining potential intensity-modulated radiation therapy (IMRT) errors according to the same criteria.
PACS numbers: 87.53.Xd, 87.53.St PMID:16143793
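The ~2%/cm figure quoted above for an SSD setup error is consistent with a simple inverse-square estimate at an assumed nominal 100 cm SSD; a quick check under that assumption:

```python
ssd_nominal = 100.0                                   # cm, assumed nominal source-to-surface distance
for delta_cm in (1.0, 2.0, 3.0):                      # setup error, cm
    ratio = (ssd_nominal / (ssd_nominal + delta_cm)) ** 2
    print(delta_cm, f"{(1.0 - ratio) * 100:.1f}% dose decrease")   # roughly 2% per cm
```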
Doñamayor, Nuria; Dinani, Jakob; Römisch, Manuel; Ye, Zheng; Münte, Thomas F
2014-10-01
Neural responses to performance errors and external feedback have been suggested to be altered in obsessive-compulsive disorder. In the current study, an associative learning task was used in healthy participants assessed for obsessive-compulsive symptoms by the OCI-R questionnaire. The task included a condition with equivocal feedback that did not inform about the participants' performance. Following incorrect responses, an error-related negativity and an error positivity were observed. In the feedback phase, the largest feedback-related negativity was observed following equivocal feedback. Theta and beta oscillatory components were found following incorrect and correct responses, respectively, and an increase in theta power was associated with negative and equivocal feedback. Changes over time were also explored as an indicator for possible learning effects. Finally, event-related potentials and oscillatory components were found to be uncorrelated with OCI-R scores in the current non-clinical sample. Copyright © 2014 Elsevier B.V. All rights reserved.
Integrated Modeling Activities for the James Webb Space Telescope: Optical Jitter Analysis
NASA Technical Reports Server (NTRS)
Hyde, T. Tupper; Ha, Kong Q.; Johnston, John D.; Howard, Joseph M.; Mosier, Gary E.
2004-01-01
This is a continuation of a series of papers on the integrated modeling activities for the James Webb Space Telescope (JWST). Starting with the linear optical model discussed in part one, and using the optical sensitivities developed in part two, we now assess the optical image motion and wavefront errors from the structural dynamics. This is often referred to as "jitter" analysis. The optical model is combined with the structural model and the control models to create a linear structural/optical/control model. The largest jitter is due to spacecraft reaction wheel assembly disturbances, which are harmonic in nature and will excite spacecraft and telescope structural modes. The structural/optical response causes image quality degradation due to image motion (centroid error) as well as dynamic wavefront error. Jitter analysis results are used to predict imaging performance, improve the structural design, and evaluate the operational impact of the disturbance sources.
A study of GPS measurement errors due to noise and multipath interference for CGADS
NASA Technical Reports Server (NTRS)
Axelrad, Penina; MacDoran, Peter F.; Comp, Christopher J.
1996-01-01
This report describes a study performed by the Colorado Center for Astrodynamics Research (CCAR) on GPS measurement errors in the Codeless GPS Attitude Determination System (CGADS) due to noise and multipath interference. Preliminary simulation models of the CGADS receiver and orbital multipath are described. The standard FFT algorithm for processing the codeless data is described, and two alternative algorithms - an auto-regressive/least-squares (AR-LS) method and a combined adaptive notch filter/least-squares (ANF-ALS) method - are also presented. Effects of system noise, quantization, baseband frequency selection, and Doppler rates on the accuracy of phase estimates with each of the processing methods are shown. Typical electrical phase errors for the AR-LS method are 0.2 degrees, compared to 0.3 and 0.5 degrees for the FFT and ANF-ALS algorithms, respectively. Doppler rate was found to have the largest effect on the performance.
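As a baseline illustration of the FFT phase-estimation approach compared above, the sketch below recovers the electrical phase of a noisy baseband tone from the DFT bin at the known frequency. The signal parameters are illustrative only and do not model the CGADS hardware.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, f0, n = 1000.0, 50.0, 1000            # sample rate (Hz), baseband tone (Hz), samples
true_phase_deg = 20.0

t = np.arange(n) / fs
x = np.cos(2 * np.pi * f0 * t + np.radians(true_phase_deg)) + 0.5 * rng.standard_normal(n)

k = int(round(f0 * n / fs))               # DFT bin of the (known) tone frequency
X = np.fft.rfft(x)
print(round(np.degrees(np.angle(X[k])), 2))   # close to the true 20-degree phase
```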
Correction of electrode modelling errors in multi-frequency EIT imaging.
Jehl, Markus; Holder, David
2016-06-01
The differentiation of haemorrhagic from ischaemic stroke using electrical impedance tomography (EIT) requires measurements at multiple frequencies, since the general lack of healthy measurements on the same patient excludes time-difference imaging methods. It has previously been shown that the inaccurate modelling of electrodes constitutes one of the largest sources of image artefacts in non-linear multi-frequency EIT applications. To address this issue, we augmented the conductivity Jacobian matrix with a Jacobian matrix with respect to electrode movement. Using this new algorithm, simulated ischaemic and haemorrhagic strokes in a realistic head model were reconstructed for varying degrees of electrode position errors. The simultaneous recovery of conductivity spectra and electrode positions removed most artefacts caused by inaccurately modelled electrodes. Reconstructions were stable for electrode position errors of up to 1.5 mm standard deviation along both surface dimensions. We conclude that this method can be used for electrode model correction in multi-frequency EIT.
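A schematic sketch of the augmentation described above: the conductivity Jacobian is stacked with an electrode-movement Jacobian, and a single regularized Gauss-Newton step then recovers a joint update for conductivity and electrode positions. The matrices here are random stand-ins, not an EIT forward model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_cond, n_elec = 200, 50, 16                # measurements, conductivity params, electrode coords

J_sigma = rng.standard_normal((n_meas, n_cond))     # d(voltages)/d(conductivity)
J_elec  = rng.standard_normal((n_meas, n_elec))     # d(voltages)/d(electrode position)
J = np.hstack([J_sigma, J_elec])                    # augmented Jacobian

residual = rng.standard_normal(n_meas)              # measured minus modelled voltages
lam = 1e-2                                          # Tikhonov regularization parameter
update = np.linalg.solve(J.T @ J + lam * np.eye(J.shape[1]), J.T @ residual)

d_sigma, d_elec = update[:n_cond], update[n_cond:]  # joint conductivity and electrode-position update
print(d_sigma.shape, d_elec.shape)
```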
NASA Astrophysics Data System (ADS)
Gómez, Breogán; Miguez-Macho, Gonzalo
2017-04-01
Nudging techniques are commonly used to constrain the evolution of numerical models to a reference dataset that is typically of a lower resolution. The nudged model retains some of the features of the reference field while incorporating its own dynamics into the solution. These characteristics have made nudging very popular in dynamic downscaling applications that range from short, single-case studies to multi-decadal regional climate simulations. Recently, a variation of this approach, called Spectral Nudging, has gained popularity for its ability to maintain the higher temporal and spatial variability of the model results, while forcing the large scales in the solution with a coarser resolution field. In this work, we focus on a little-explored aspect of this technique: the impact of selecting different cut-off wave numbers and spin-up times. We perform four-day-long simulations with the WRF model, daily for three different one-month periods, that include a free run and several Spectral Nudging experiments with cut-off wave numbers ranging from the smallest to the largest possible (full Grid Nudging). Results show that Spectral Nudging is very effective at imposing the selected scales onto the solution, while allowing the limited area model to incorporate finer scale features. The model error diminishes rapidly as the nudging expands over broader parts of the spectrum, but this decreasing trend ceases sharply at cut-off wave numbers equivalent to a length scale of about 1000 km, and the error magnitude changes minimally thereafter. This scale corresponds to the Rossby radius of deformation, separating synoptic from convective scales in the flow. When nudging above this value is applied, a shifting of the synoptic patterns can occur in the solution, yielding large model errors. However, when selecting smaller scales, the fine scale contribution of the model is damped, thus making 1000 km the appropriate scale threshold to nudge in order to balance both effects. Finally, we note that longer spin-up times are needed for model errors to stabilize when using Spectral Nudging than with Grid Nudging. Our results suggest that this time is between 36 and 48 hours.
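A one-dimensional illustration of the spectral-nudging idea discussed above: only wavenumbers corresponding to scales longer than a cutoff (here 1000 km) are relaxed toward the driving field, while the shorter scales evolve freely. The fields, domain, and relaxation time are synthetic and are not the WRF configuration of the study.

```python
import numpy as np

L, n = 8000.0, 512                                     # domain length (km) and grid points
x = np.linspace(0.0, L, n, endpoint=False)
model = np.sin(2 * np.pi * x / 4000) + 0.3 * np.sin(2 * np.pi * x / 250)   # large + fine scales
driver = 1.2 * np.sin(2 * np.pi * x / 4000)                                # coarse driving field

cutoff_km = 1000.0                                     # nudge only scales longer than ~1000 km
tau, dt = 6.0 * 3600.0, 600.0                          # relaxation time and model step (s)

k = np.fft.rfftfreq(n, d=L / n)                        # wavenumbers, cycles per km
mask = k <= 1.0 / cutoff_km                            # True for the large (nudged) scales

diff_hat = np.fft.rfft(driver - model)
tendency = np.fft.irfft(diff_hat * mask, n) / tau      # nudging tendency on large scales only
model_next = model + dt * tendency

spec_before, spec_after = np.abs(np.fft.rfft(model)), np.abs(np.fft.rfft(model_next))
print(np.allclose(spec_before[~mask], spec_after[~mask]))   # True: fine scales are untouched
```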
Pajunen, Tuuli; Saranto, Kaija; Lehtonen, Lasse
2016-01-01
Background The rapid expansion in the use of electronic health records (EHR) has increased the number of medical errors originating in health information systems (HIS). The sociotechnical approach helps in understanding risks in the development, implementation, and use of EHR and health information technology (HIT) while accounting for complex interactions of technology within the health care system. Objective This study addresses two important questions: (1) “which of the common EHR error types are associated with perceived high- and extreme-risk severity ratings among EHR users?”, and (2) “which variables are associated with high- and extreme-risk severity ratings?” Methods This study was a quantitative, non-experimental, descriptive study of EHR users. We conducted a cross-sectional web-based questionnaire study at the largest hospital district in Finland. Statistical tests included the reliability of the summative scales tested with Cronbach’s alpha. Logistic regression served to assess the association of the independent variables to each of the eight risk factors examined. Results A total of 2864 eligible respondents provided the final data. Almost half of the respondents reported a high level of risk related to the error type “extended EHR unavailability”. The lowest overall risk level was associated with “selecting incorrectly from a list of items”. In multivariate analyses, profession and clinical unit proved to be the strongest predictors for high perceived risk. Physicians perceived risk levels to be the highest (P<.001 in six of eight error types), while emergency departments, operating rooms, and procedure units were associated with higher perceived risk levels (P<.001 in four of eight error types). Previous participation in eLearning courses on EHR-use was associated with lower risk for some of the risk factors. Conclusions Based on a large number of Finnish EHR users in hospitals, this study indicates that HIT safety hazards should be taken very seriously, particularly in operating rooms, procedure units, emergency departments, and intensive care units/critical care units. Health care organizations should use proactive and systematic assessments of EHR risks before harmful events occur. An EHR training program should be compulsory for all EHR users in order to address EHR safety concerns resulting from the failure to use HIT appropriately. PMID:27154599
Soshi, Takahiro; Ando, Kumiko; Noda, Takamasa; Nakazawa, Kanako; Tsumura, Hideki; Okada, Takayuki
2014-01-01
Post-error slowing (PES) is an error recovery strategy that contributes to action control, and occurs after errors in order to prevent future behavioral flaws. Error recovery often malfunctions in clinical populations, but the relationship between behavioral traits and recovery from error is unclear in healthy populations. The present study investigated the relationship between impulsivity and error recovery by simulating a speeded response situation using a Go/No-go paradigm that forced the participants to constantly make accelerated responses prior to stimuli disappearance (stimulus duration: 250 ms). Neural correlates of post-error processing were examined using event-related potentials (ERPs). Impulsivity traits were measured with self-report questionnaires (BIS-11, BIS/BAS). Behavioral results demonstrated that the commission error for No-go trials was 15%, but PES did not take place immediately. Delayed PES was negatively correlated with error rates and impulsivity traits, showing that response slowing was associated with reduced error rates and changed with impulsivity. Response-locked error ERPs were clearly observed for the error trials. Contrary to previous studies, error ERPs were not significantly related to PES. Stimulus-locked N2 was negatively correlated with PES and positively correlated with impulsivity traits at the second post-error Go trial: larger N2 activity was associated with greater PES and less impulsivity. In summary, under constant speeded conditions, error monitoring was dissociated from post-error action control, and PES did not occur quickly. Furthermore, PES and its neural correlate (N2) were modulated by impulsivity traits. These findings suggest that there may be clinical and practical efficacy of maintaining cognitive control of actions during error recovery under common daily environments that frequently evoke impulsive behaviors.
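For readers unfamiliar with the PES measure used above, the sketch below computes the conventional quantity: mean reaction time on trials following an error minus mean reaction time on trials following a correct response. The trial sequence is simulated (with slowing built in), and the paper's delayed-PES analysis over later post-error trials is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 400
correct = rng.random(n_trials) > 0.15               # ~15% commission errors, as in the abstract
rt = rng.normal(350.0, 40.0, n_trials)              # reaction times, ms
rt[np.roll(~correct, 1)] += 30.0                    # build 30 ms of slowing into post-error trials

post_error = rt[1:][~correct[:-1]]                  # trials immediately following an error
post_correct = rt[1:][correct[:-1]]                 # trials immediately following a correct response
pes = post_error.mean() - post_correct.mean()
print(round(float(pes), 1), "ms")                   # recovers roughly the 30 ms that was built in
```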
Soshi, Takahiro; Ando, Kumiko; Noda, Takamasa; Nakazawa, Kanako; Tsumura, Hideki; Okada, Takayuki
2015-01-01
Post-error slowing (PES) is an error recovery strategy that contributes to action control, and occurs after errors in order to prevent future behavioral flaws. Error recovery often malfunctions in clinical populations, but the relationship between behavioral traits and recovery from error is unclear in healthy populations. The present study investigated the relationship between impulsivity and error recovery by simulating a speeded response situation using a Go/No-go paradigm that forced the participants to constantly make accelerated responses prior to stimuli disappearance (stimulus duration: 250 ms). Neural correlates of post-error processing were examined using event-related potentials (ERPs). Impulsivity traits were measured with self-report questionnaires (BIS-11, BIS/BAS). Behavioral results demonstrated that the commission error for No-go trials was 15%, but PES did not take place immediately. Delayed PES was negatively correlated with error rates and impulsivity traits, showing that response slowing was associated with reduced error rates and changed with impulsivity. Response-locked error ERPs were clearly observed for the error trials. Contrary to previous studies, error ERPs were not significantly related to PES. Stimulus-locked N2 was negatively correlated with PES and positively correlated with impulsivity traits at the second post-error Go trial: larger N2 activity was associated with greater PES and less impulsivity. In summary, under constant speeded conditions, error monitoring was dissociated from post-error action control, and PES did not occur quickly. Furthermore, PES and its neural correlate (N2) were modulated by impulsivity traits. These findings suggest that there may be clinical and practical efficacy of maintaining cognitive control of actions during error recovery under common daily environments that frequently evoke impulsive behaviors. PMID:25674058
Error reporting in transfusion medicine at a tertiary care centre: a patient safety initiative.
Elhence, Priti; Shenoy, Veena; Verma, Anupam; Sachan, Deepti
2012-11-01
Errors in the transfusion process can compromise patient safety. A study was undertaken at our center to identify the errors in the transfusion process and their causes in order to reduce their occurrence by corrective and preventive actions. All near miss, no harm events and adverse events reported in the 'transfusion process' during 1 year study period were recorded, classified and analyzed at a tertiary care teaching hospital in North India. In total, 285 transfusion related events were reported during the study period. Of these, there were four adverse (1.5%), 10 no harm (3.5%) and 271 (95%) near miss events. Incorrect blood component transfusion rate was 1 in 6031 component units. ABO incompatible transfusion rate was one in 15,077 component units issued or one in 26,200 PRBC units issued and acute hemolytic transfusion reaction due to ABO incompatible transfusion was 1 in 60,309 component units issued. Fifty-three percent of the antecedent near miss events were bedside events. Patient sample handling errors were the single largest category of errors (n=94, 33%) followed by errors in labeling and blood component handling and storage in user areas. The actual and near miss event data obtained through this initiative provided us with clear evidence about latent defects and critical points in the transfusion process so that corrective and preventive actions could be taken to reduce errors and improve transfusion safety.
The comparison of cervical repositioning errors according to smartphone addiction grades.
Lee, Jeonhyeong; Seo, Kyochul
2014-04-01
[Purpose] The purpose of this study was to compare cervical repositioning errors according to smartphone addiction grades of adults in their 20s. [Subjects and Methods] A survey of smartphone addiction was conducted of 200 adults. Based on the survey results, 30 subjects were chosen to participate in this study, and they were divided into three groups of 10; a Normal Group, a Moderate Addiction Group, and a Severe Addiction Group. After attaching a C-ROM, we measured the cervical repositioning errors of flexion, extension, right lateral flexion and left lateral flexion. [Results] Significant differences in the cervical repositioning errors of flexion, extension, and right and left lateral flexion were found among the Normal Group, Moderate Addiction Group, and Severe Addiction Group. In particular, the Severe Addiction Group showed the largest errors. [Conclusion] The result indicates that as smartphone addiction becomes more severe, a person is more likely to show impaired proprioception, as well as impaired ability to recognize the right posture. Thus, musculoskeletal problems due to smartphone addiction should be resolved through social cognition and intervention, and physical therapeutic education and intervention to educate people about correct postures.
NASA Technical Reports Server (NTRS)
Beck, S. M.
1975-01-01
A mobile self-contained Faraday cup system for beam current measurements of nominal 600 MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons which are ejected from the entrance window and enter the cup. A total systematic error of -0.83 percent, a decrease from the true current value, is calculated. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 ± 0.95 eV for nominal 600 MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV.
LANDSAT-4/5 image data quality analysis
NASA Technical Reports Server (NTRS)
Malaret, E.; Bartolucci, L. A.; Lozano, D. F.; Anuta, P. E.; Mcgillem, C. D.
1984-01-01
A LANDSAT Thematic Mapper (TM) quality evaluation study was conducted to identify geometric and radiometric sensor errors in the post-launch environment. The study began with the launch of LANDSAT-4. Several error conditions were found, including band-to-band misregistration and detector-to-detector radiometric calibration errors. A similar analysis was made for the LANDSAT-5 Thematic Mapper and compared with results for LANDSAT-4. Remaining band-to-band misregistration was found to be within tolerances, and detector-to-detector calibration errors were not severe. More coherent noise signals were observed in TM-5 than in TM-4, although the amplitude was generally less. The scan direction differences observed in TM-4 were still evident in TM-5. The largest effect was in Band 4, where a difference of nearly one digital count was observed. Resolution estimation was carried out using roads in TM-5 for the primary focal plane bands rather than field edges as in TM-4. Estimates using roads gave better resolution. Thermal IR band calibration studies were conducted and new nonlinear calibration procedures were defined for TM-5. The overall conclusion is that there are no first order errors in TM-5 and any remaining problems are second or third order.
[Error analysis of functional articulation disorders in children].
Zhou, Qiao-juan; Yin, Heng; Shi, Bing
2008-08-01
To explore the clinical characteristics of functional articulation disorders in children and provide more evidence for differential diagnosis and speech therapy. 172 children with functional articulation disorders were grouped by age: children aged 4-5 years were assigned to one group, and those aged 6-10 years to another. Their phonological samples were collected and analyzed. In both groups, substitution and omission (deletion) were the main articulation errors, dental consonants were the most frequently misarticulated sounds, and bilabial and labio-dental consonants were rarely wrong. In the 4-5 year group, the order of error frequency from highest to lowest was dental, velar, lingual, apical, bilabial, and labio-dental. In the 6-10 year group, the order was dental, lingual, apical, velar, bilabial, and labio-dental. Lateral misarticulation and palatalized misarticulation occurred more often in the 6-10 year group than in the 4-5 year group and were found only in lingual and dental consonants in both groups. Misarticulation in functional articulation disorders occurs mainly in dental consonants and rarely in bilabial and labio-dental consonants. Substitution and omission are the most frequently occurring errors. Lateral misarticulation and palatalized misarticulation occur mainly in lingual and dental consonants.
NASA Astrophysics Data System (ADS)
Allen, Douglas R.; Hoppel, Karl W.; Kuhl, David D.
2018-03-01
Extraction of wind and temperature information from stratospheric ozone assimilation is examined within the context of the Navy Global Environmental Model (NAVGEM) hybrid 4-D variational assimilation (4D-Var) data assimilation (DA) system. Ozone can improve the wind and temperature through two different DA mechanisms: (1) through the flow-of-the-day ensemble background error covariance that is blended together with the static background error covariance and (2) via the ozone continuity equation in the tangent linear model and adjoint used for minimizing the cost function. All experiments assimilate actual conventional data in order to maintain a similar realistic troposphere. In the stratosphere, the experiments assimilate simulated ozone and/or radiance observations in various combinations. The simulated observations are constructed for a case study based on a 16-day cycling truth experiment (TE), which is an analysis with no stratospheric observations. The impact of ozone on the analysis is evaluated by comparing the experiments to the TE for the last 6 days, allowing for a 10-day spin-up. Ozone assimilation benefits the wind and temperature when data are of sufficient quality and frequency. For example, assimilation of perfect (no applied error) global hourly ozone data constrains the stratospheric wind and temperature to within ~2 m s⁻¹ and ~1 K. This demonstrates that there is dynamical information in the ozone distribution that can potentially be used to improve the stratosphere. This is particularly important for the tropics, where radiance observations have difficulty constraining wind due to breakdown of geostrophic balance. Global ozone assimilation provides the largest benefit when the hybrid blending coefficient is an intermediate value (0.5 was used in this study), rather than 0.0 (no ensemble background error covariance) or 1.0 (no static background error covariance), which is consistent with other hybrid DA studies. When perfect global ozone is assimilated in addition to radiance observations, wind and temperature error decreases of up to ~3 m s⁻¹ and ~1 K occur in the tropical upper stratosphere. Assimilation of noisy global ozone (2% errors applied) results in error reductions of ~1 m s⁻¹ and ~0.5 K in the tropics and slightly increased temperature errors in the Northern Hemisphere polar region. Reduction of the ozone sampling frequency also reduces the benefit of ozone throughout the stratosphere, with noisy polar-orbiting data having only minor impacts on wind and temperature when assimilated with radiances. An examination of ensemble cross-correlations between ozone and other variables shows that a single ozone observation behaves like a potential vorticity (PV) charge, or a monopole of PV, with rotation about a vertical axis and a vertically oriented temperature dipole. Further understanding of this relationship may help in designing observation systems that would optimize the impact of ozone on the dynamics.
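The role of the hybrid blending coefficient discussed above can be illustrated with a minimal sketch, assuming a simple convention in which the coefficient weights the ensemble part (0.0 reproduces the purely static covariance, 1.0 the purely ensemble covariance); this is not the NAVGEM implementation, and the array sizes are synthetic.

```python
import numpy as np

def hybrid_covariance(B_static, ensemble, alpha=0.5):
    """Blend a static background error covariance with a flow-of-the-day
    ensemble sample covariance.

    B_static : (n, n) static covariance matrix (assumed given)
    ensemble : (m, n) array of m ensemble states of dimension n
    alpha    : weight on the ensemble part; 0.0 -> no ensemble covariance,
               1.0 -> no static covariance (illustrative convention)
    """
    perturbations = ensemble - ensemble.mean(axis=0)
    B_ens = perturbations.T @ perturbations / (ensemble.shape[0] - 1)
    return (1.0 - alpha) * B_static + alpha * B_ens

# Tiny synthetic example
rng = np.random.default_rng(0)
n, m = 4, 10
B_static = np.eye(n)
ensemble = rng.normal(size=(m, n))
B_hybrid = hybrid_covariance(B_static, ensemble, alpha=0.5)
print(B_hybrid.shape)  # (4, 4)
```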
Ironic Effects of Drawing Attention to Story Errors
Eslick, Andrea N.; Fazio, Lisa K.; Marsh, Elizabeth J.
2014-01-01
Readers learn errors embedded in fictional stories and use them to answer later general knowledge questions (Marsh, Meade, & Roediger, 2003). Suggestibility is robust and occurs even when story errors contradict well-known facts. The current study evaluated whether suggestibility is linked to participants’ inability to judge story content as correct versus incorrect. Specifically, participants read stories containing correct and misleading information about the world; some information was familiar (making error discovery possible), while some was more obscure. To improve participants’ monitoring ability, we highlighted (in red font) a subset of story phrases requiring evaluation; readers no longer needed to find factual information. Rather, they simply needed to evaluate its correctness. Readers were more likely to answer questions with story errors if they were highlighted in red font, even if they contradicted well-known facts. Though highlighting to-be-evaluated information freed cognitive resources for monitoring, an ironic effect occurred: Drawing attention to specific errors increased rather than decreased later suggestibility. Failure to monitor for errors, not failure to identify the information requiring evaluation, leads to suggestibility. PMID:21294039
Failure analysis and modeling of a VAXcluster system
NASA Technical Reports Server (NTRS)
Tang, Dong; Iyer, Ravishankar K.; Subramani, Sujatha S.
1990-01-01
This paper discusses the results of a measurement-based analysis of real error data collected from a DEC VAXcluster multicomputer system. In addition to evaluating basic system dependability characteristics such as error and failure distributions and hazard rates for both individual machines and for the VAXcluster, reward models were developed to analyze the impact of failures on the system as a whole. The results show that more than 46 percent of all failures were due to errors in shared resources. This is despite the fact that these errors have a recovery probability greater than 0.99. The hazard rate calculations show that not only errors, but also failures occur in bursts. Approximately 40 percent of all failures occurred in bursts and involved multiple machines. This result indicates that correlated failures are significant. Analysis of rewards shows that software errors have the lowest reward (0.05 vs. 0.74 for disk errors). The expected reward rate (reliability measure) of the VAXcluster drops to 0.5 in 18 hours for the 7-out-of-7 model and in 80 days for the 3-out-of-7 model.
[Monitoring medication errors in an internal medicine service].
Smith, Ann-Loren M; Ruiz, Inés A; Jirón, Marcela A
2014-01-01
Patients admitted to internal medicine services receive multiple drugs and thus are at risk of medication errors. To determine the frequency of medication errors (ME) among patients admitted to an internal medicine service of a high complexity hospital. A prospective observational study conducted in 225 patients admitted to an internal medicine service. Each stage of the drug utilization system (prescription, transcription, dispensing, preparation and administration) was directly observed for three months by trained pharmacists not related to hospital staff. ME were described and categorized according to the National Coordinating Council for Medication Error Reporting and Prevention. In each stage of medication use, the frequency of ME and their characteristics were determined. A total of 454 drugs were prescribed to the studied patients. In 138 (30.4%) indications, at least one ME occurred, involving 67 (29.8%) patients. Twenty-four percent of detected ME occurred during administration, mainly due to wrong time schedules. Anticoagulants were the therapeutic group with the highest occurrence of ME. At least one ME occurred in approximately one third of patients studied, especially during the administration stage. These errors could compromise medication safety and prevent therapeutic goals from being achieved. Strategies to improve the quality and safe use of medications can be implemented using this information.
An approach to develop an algorithm to detect the climbing height in radial-axial ring rolling
NASA Astrophysics Data System (ADS)
Husmann, Simon; Hohmann, Magnus; Kuhlenkötter, Bernd
2017-10-01
Radial-axial ring rolling is the most widely used forming process for producing seamless rings, which are applied in various industries such as the energy sector, aerospace technology, and the automotive industry. Because forming takes place simultaneously in two opposite rolling gaps and ring rolling is a bulk forming process, different errors can occur during the rolling process. Ring climbing is one of the most frequently occurring process errors, leading to a distortion of the ring's cross section and a deformation of the ring's geometry. The conventional sensors of a radial-axial rolling machine cannot detect this error. Therefore, a common strategy is to roll a slightly larger ring so that randomly occurring process errors can be corrected afterwards by removing the additional material. The LPS installed an image processing system at the radial rolling gap of its ring rolling machine to enable the recognition and measurement of climbing rings and thereby reduce the additional material. This paper presents the algorithm that enables the image processing system to detect a climbing ring and ensures comparably reliable results for the measurement of the climbing height of the rings.
Formulation of a strategy for monitoring control integrity in critical digital control systems
NASA Technical Reports Server (NTRS)
Belcastro, Celeste M.; Fischl, Robert; Kam, Moshe
1991-01-01
Advanced aircraft will require flight critical computer systems for stability augmentation as well as guidance and control that must perform reliably in adverse, as well as nominal, operating environments. Digital system upset is a functional error mode that can occur in electromagnetically harsh environments, involves no component damage, can occur simultaneously in all channels of a redundant control computer, and is software dependent. A strategy is presented for dynamic upset detection to be used in the evaluation of critical digital controllers during the design and/or validation phases of development. Critical controllers must be able to operate in adverse environments that result from disturbances caused by an electromagnetic source such as lightning, high intensity radiated fields (HIRF), and nuclear electromagnetic pulses (NEMP). The upset detection strategy presented provides dynamic monitoring of a given control computer for degraded functional integrity that can result from redundancy management errors and control command calculation errors that could occur in an electromagnetically harsh operating environment. The use of Kalman filtering, data fusion, and decision theory in monitoring a given digital controller for control calculation errors, redundancy management errors, and control effectiveness is discussed.
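The abstract names Kalman filtering, data fusion, and decision theory as the monitoring ingredients. The sketch below is not the paper's monitor; it only illustrates, under an assumed linear model of the nominal command dynamics, how a Kalman filter innovation test can flag command calculations that deviate from expected behavior.

```python
import numpy as np

def monitor_controller(commands, A, H, Q, R, threshold=3.0):
    """Minimal sketch: flag possible upsets when the normalized innovation
    between observed controller commands and a Kalman filter prediction
    exceeds a threshold. A, H, Q, R are an assumed linear model of the
    nominal command dynamics, not the paper's actual monitor design."""
    n = A.shape[0]
    x = np.zeros(n)
    P = np.eye(n)
    flags = []
    for z in commands:
        # Predict
        x = A @ x
        P = A @ P @ A.T + Q
        # Innovation and its covariance
        y = z - H @ x
        S = H @ P @ H.T + R
        d = float(y.T @ np.linalg.solve(S, y))   # squared Mahalanobis distance
        flags.append(d > threshold ** 2)
        # Update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(n) - K @ H) @ P
    return flags

# Tiny usage sketch on a scalar command signal with a sudden jump at the end
A = np.array([[1.0]]); H = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[0.1]])
cmds = [np.array([0.0])] * 50 + [np.array([5.0])]
print(monitor_controller(cmds, A, H, Q, R)[-1])   # True: the jump is flagged
```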
ERIC Educational Resources Information Center
Deutsch, Avital; Dank, Maya
2011-01-01
A common characteristic of subject-predicate agreement errors (usually termed attraction errors) in complex noun phrases is an asymmetrical pattern of error distribution, depending on the inflectional state of the nouns comprising the complex noun phrase. That is, attraction is most likely to occur when the head noun is the morphologically…
ERIC Educational Resources Information Center
van den Bemt, P. M. L. A.; Robertz, R.; de Jong, A. L.; van Roon, E. N.; Leufkens, H. G. M.
2007-01-01
Background: Medication errors can result in harm, unless barriers to prevent them are present. Drug administration errors are less likely to be prevented, because they occur in the last stage of the drug distribution process. This is especially the case in non-alert patients, as patients often form the final barrier to prevention of errors.…
Brain-based individual difference measures of reading skill in deaf and hearing adults.
Mehravari, Alison S; Emmorey, Karen; Prat, Chantel S; Klarman, Lindsay; Osterhout, Lee
2017-07-01
Most deaf children and adults struggle to read, but some deaf individuals do become highly proficient readers. There is disagreement about the specific causes of reading difficulty in the deaf population, and consequently, disagreement about the effectiveness of different strategies for teaching reading to deaf children. Much of the disagreement surrounds the question of whether deaf children read in similar or different ways as hearing children. In this study, we begin to answer this question by using real-time measures of neural language processing to assess if deaf and hearing adults read proficiently in similar or different ways. Hearing and deaf adults read English sentences with semantic, grammatical, and simultaneous semantic/grammatical errors while event-related potentials (ERPs) were recorded. The magnitude of individuals' ERP responses was compared to their standardized reading comprehension test scores, and potentially confounding variables like years of education, speechreading skill, and language background of deaf participants were controlled for. The best deaf readers had the largest N400 responses to semantic errors in sentences, while the best hearing readers had the largest P600 responses to grammatical errors in sentences. These results indicate that equally proficient hearing and deaf adults process written language in different ways, suggesting there is little reason to assume that literacy education should necessarily be the same for hearing and deaf children. The results also show that the most successful deaf readers focus on semantic information while reading, which suggests aspects of education that may promote improved literacy in the deaf population. Copyright © 2017 Elsevier Ltd. All rights reserved.
Jurgens, Anneke; Anderson, Angelika; Moore, Dennis W
2012-01-01
To investigate the integrity with which parents and carers implement PECS in naturalistic settings, utilizing a sample of videos obtained from YouTube. Twenty-one YouTube videos meeting selection criteria were identified. The videos were reviewed for instances of seven implementer errors and, where appropriate, presence of a physical prompter. Forty-three per cent of videos and 61% of PECS exchanges contained errors in parent implementation of specific teaching strategies of the PECS training protocol. Vocal prompts, incorrect error correction and the absence of timely reinforcement occurred most frequently, while gestural prompts, insistence on speech, incorrect use of the open hand prompt and not waiting for the learner to initiate occurred less frequently. Results suggest that parents engage in vocal prompting and incorrect use of the 4-step error correction strategy when using PECS with their children, errors likely to result in prompt dependence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koepferl, Christine M.; Robitaille, Thomas P.; Dale, James E., E-mail: koepferl@usm.lmu.de
We use a large data set of realistic synthetic observations (produced in Paper I of this series) to assess how observational techniques affect the measurement of physical properties of star-forming regions. In this part of the series (Paper II), we explore the reliability of the measured total gas mass, dust surface density and dust temperature maps derived from modified blackbody fitting of synthetic Herschel observations. We find from our pixel-by-pixel analysis of the measured dust surface density and dust temperature a worrisome error spread especially close to star formation sites and low-density regions, where for those “contaminated” pixels the surface densities can be under/overestimated by up to three orders of magnitude. In light of this, we recommend treating the pixel-based results from this technique with caution in regions with active star formation. In regions of high background typical in the inner Galactic plane, we are not able to recover reliable surface density maps of individual synthetic regions, since low-mass regions are lost in the far-infrared background. When measuring the total gas mass of regions in moderate background, we find that modified blackbody fitting works well (absolute error: +9%; −13%) up to 10 kpc distance (errors increase with distance). Commonly, the initial images are convolved to the largest common beam size, which smears contaminated pixels over large areas. The resulting information loss makes this commonly used technique less verifiable, as χ² values can then no longer be used as a quality indicator of a fitted pixel. Our control measurements of the total gas mass (without the step of convolution to the largest common beam size) produce similar results (absolute error: +20%; −7%) while having much lower median errors, especially for the high-mass stellar feedback phase. In upcoming papers (Paper III; Paper IV) of this series we test the reliability of the measured star formation rate with direct and indirect techniques.
Prevalence and cost of hospital medical errors in the general and elderly United States populations.
Mallow, Peter J; Pandya, Bhavik; Horblyuk, Ruslan; Kaplan, Harold S
2013-12-01
The primary objective of this study was to quantify the differences in the prevalence rate and costs of hospital medical errors between the general population and an elderly population aged ≥65 years. Methods from an actuarial study of medical errors were modified to identify medical errors in the Premier Hospital Database using data from 2009. Visits with more than four medical errors were removed from the population to avoid over-estimation of cost. Prevalence rates were calculated based on the total number of inpatient visits. There were 3,466,596 total inpatient visits in 2009. Of these, 1,230,836 (36%) occurred in people aged ≥65 years. The prevalence rate was 49 medical errors per 1000 inpatient visits in the general cohort and 79 medical errors per 1000 inpatient visits in the elderly cohort. The top 10 medical errors accounted for more than 80% of the total in both the general cohort and the 65+ cohort. The most costly medical error for the general population was postoperative infection ($569,287,000). Pressure ulcers were most costly ($347,166,257) in the elderly population. This study was conducted with a hospital administrative database, and assumptions were necessary to identify medical errors in the database. Further, there was no method to identify errors of omission or misdiagnoses within the database. This study indicates that the prevalence of hospital medical errors for the elderly is greater than for the general population and that the associated cost of medical errors in the elderly population is quite substantial. Hospitals that further focus their attention on medical errors in the elderly population may see a significant reduction in costs due to medical errors, as a disproportionate percentage of medical errors occurs in this age group.
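The prevalence figures quoted above follow directly from a rate-per-1000-visits calculation. The sketch below shows the arithmetic; the error counts used are hypothetical values chosen only so that they reproduce the reported rates of 49 and 79 per 1000 visits.

```python
def prevalence_per_1000(error_count, visit_count):
    """Prevalence rate expressed per 1000 inpatient visits."""
    return 1000.0 * error_count / visit_count

# Hypothetical error counts chosen only to reproduce the reported rates
print(round(prevalence_per_1000(169863, 3466596)))  # ~49 per 1000 (general cohort)
print(round(prevalence_per_1000(97236, 1230836)))   # ~79 per 1000 (65+ cohort)
```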
Regenbogen, Scott E; Greenberg, Caprice C; Studdert, David M; Lipsitz, Stuart R; Zinner, Michael J; Gawande, Atul A
2007-11-01
To identify the most prevalent patterns of technical errors in surgery, and evaluate commonly recommended interventions in light of these patterns. The majority of surgical adverse events involve technical errors, but little is known about the nature and causes of these events. We examined characteristics of technical errors and common contributing factors among closed surgical malpractice claims. Surgeon reviewers analyzed 444 randomly sampled surgical malpractice claims from four liability insurers. Among 258 claims in which injuries due to error were detected, 52% (n = 133) involved technical errors. These technical errors were further analyzed with a structured review instrument designed by qualitative content analysis. Forty-nine percent of the technical errors caused permanent disability; an additional 16% resulted in death. Two-thirds (65%) of the technical errors were linked to manual error, 9% to errors in judgment, and 26% to both manual and judgment error. A minority of technical errors involved advanced procedures requiring special training ("index operations"; 16%), surgeons inexperienced with the task (14%), or poorly supervised residents (9%). The majority involved experienced surgeons (73%), and occurred in routine, rather than index, operations (84%). Patient-related complexities, including emergencies, difficult or unexpected anatomy, and previous surgery, contributed to 61% of technical errors, and technology or systems failures contributed to 21%. Most technical errors occur in routine operations with experienced surgeons, under conditions of increased patient complexity or systems failure. Commonly recommended interventions, including restricting high-complexity operations to experienced surgeons, additional training for inexperienced surgeons, and stricter supervision of trainees, are likely to address only a minority of technical errors. Surgical safety research should instead focus on improving decision-making and performance in routine operations for complex patients and circumstances.
Lower extremity EMG-driven modeling of walking with automated adjustment of musculoskeletal geometry
Meyer, Andrew J.; Patten, Carolynn
2017-01-01
Neuromusculoskeletal disorders affecting walking ability are often difficult to manage, in part due to limited understanding of how a patient’s lower extremity muscle excitations contribute to the patient’s lower extremity joint moments. To assist in the study of these disorders, researchers have developed electromyography (EMG) driven neuromusculoskeletal models utilizing scaled generic musculoskeletal geometry. While these models can predict individual muscle contributions to lower extremity joint moments during walking, the accuracy of the predictions can be hindered by errors in the scaled geometry. This study presents a novel EMG-driven modeling method that automatically adjusts surrogate representations of the patient’s musculoskeletal geometry to improve prediction of lower extremity joint moments during walking. In addition to commonly adjusted neuromusculoskeletal model parameters, the proposed method adjusts model parameters defining muscle-tendon lengths, velocities, and moment arms. We evaluated our EMG-driven modeling method using data collected from a high-functioning hemiparetic subject walking on an instrumented treadmill at speeds ranging from 0.4 to 0.8 m/s. EMG-driven model parameter values were calibrated to match inverse dynamic moments for five degrees of freedom in each leg while keeping musculoskeletal geometry close to that of an initial scaled musculoskeletal model. We found that our EMG-driven modeling method incorporating automated adjustment of musculoskeletal geometry predicted net joint moments during walking more accurately than did the same method without geometric adjustments. Geometric adjustments improved moment prediction errors by 25% on average and up to 52%, with the largest improvements occurring at the hip. Predicted adjustments to musculoskeletal geometry were comparable to errors reported in the literature between scaled generic geometric models and measurements made from imaging data. Our results demonstrate that with appropriate experimental data, joint moment predictions for walking generated by an EMG-driven model can be improved significantly when automated adjustment of musculoskeletal geometry is included in the model calibration process. PMID:28700708
An Acoustic-Based Method to Detect and Quantify the Effect of Exhalation into a Dry Powder Inhaler.
Holmes, Martin S; Seheult, Jansen N; O'Connell, Peter; D'Arcy, Shona; Ehrhardt, Carsten; Healy, Anne Marie; Costello, Richard W; Reilly, Richard B
2015-08-01
Dry powder inhaler (DPI) users frequently exhale into their inhaler mouthpiece before the inhalation step. This error in technique compromises the integrity of the drug and results in poor bronchodilation. This study investigated the effect of four exhalation factors (exhalation flow rate, distance from mouth to inhaler, exhalation duration, and relative air humidity) on dry powder dose delivery. Given that acoustic energy can be related to the factors associated with exhalation sounds, we then aimed to develop a method of identifying and quantifying this critical inhaler technique error using acoustic based methods. An in vitro test rig was developed to simulate this critical error. The effect of the four factors on subsequent drug delivery were investigated using multivariate regression models. In a further study we then used an acoustic monitoring device to unobtrusively record the sounds 22 asthmatic patients made whilst using a Diskus(™) DPI. Acoustic energy was employed to automatically detect and analyze exhalation events in the audio files. All exhalation factors had a statistically significant effect on drug delivery (p<0.05); distance from the inhaler mouthpiece had the largest effect size. Humid air exhalations were found to reduce the fine particle fraction (FPF) compared to dry air. In a dataset of 110 audio files from 22 asthmatic patients, the acoustic method detected exhalations with an accuracy of 89.1%. We were able to classify exhalations occurring 5 cm or less in the direction of the inhaler mouthpiece or recording device with a sensitivity of 72.2% and specificity of 85.7%. Exhaling into a DPI has a significant detrimental effect. Acoustic based methods can be employed to objectively detect and analyze exhalations during inhaler use, thus providing a method of remotely monitoring inhaler technique and providing personalized inhaler technique feedback.
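The abstract describes using acoustic energy to detect exhalation events in audio recordings of inhaler use. The sketch below is not the published algorithm; it shows a generic frame-energy thresholding approach, with the frame length and threshold chosen arbitrarily for illustration.

```python
import numpy as np

def detect_high_energy_events(signal, sample_rate, frame_ms=50, threshold_db=-25.0):
    """Minimal sketch of acoustic-energy event detection: split an audio
    signal into short frames, compute each frame's energy in dB relative to
    the loudest frame, and mark frames above a threshold. Frame length and
    threshold are illustrative assumptions, not the published settings."""
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = np.reshape(signal[:n_frames * frame_len], (n_frames, frame_len))
    energy = np.sum(frames.astype(float) ** 2, axis=1)
    energy_db = 10.0 * np.log10(energy / (energy.max() + 1e-12) + 1e-12)
    return energy_db > threshold_db

# Example: a quiet 1 s recording with a short loud burst in the middle
sr = 8000
t = np.linspace(0, 1, sr, endpoint=False)
sig = 0.005 * np.random.default_rng(1).normal(size=sr)
sig[3000:4000] += 0.5 * np.sin(2 * np.pi * 440 * t[3000:4000])
print(np.where(detect_high_energy_events(sig, sr))[0])  # frames covering the burst
```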
Comparison of the Radiative Two-Flux and Diffusion Approximations
NASA Technical Reports Server (NTRS)
Spuckler, Charles M.
2006-01-01
Approximate solutions are sometimes used to determine the heat transfer and temperatures in a semitransparent material in which conduction and thermal radiation are acting. A comparison of the Milne-Eddington two-flux approximation and the diffusion approximation for combined conduction and radiation heat transfer in a ceramic material was performed to determine the accuracy of the diffusion solution. A plane gray semitransparent layer without a substrate and a non-gray semitransparent plane layer on an opaque substrate were considered. For the plane gray layer the material is semitransparent for all wavelengths and the scattering and absorption coefficients do not vary with wavelength. For the non-gray plane layer the material is semitransparent with constant absorption and scattering coefficients up to a specified wavelength. At higher wavelengths the non-gray plane layer is assumed to be opaque. The layers are heated on one side and cooled on the other by diffuse radiation and convection. The scattering and absorption coefficients were varied. The error in the diffusion approximation compared to the Milne-Eddington two-flux approximation was obtained as a function of scattering coefficient and absorption coefficient. The percent difference in interface temperatures and heat flux through the layer obtained using the Milne-Eddington two-flux and diffusion approximations is presented as a function of scattering coefficient and absorption coefficient. The largest errors occur for high scattering and low absorption, except for the back surface temperature of the plane gray layer, where the error is also larger at low scattering and low absorption. It is shown that the accuracy of the diffusion approximation can be improved for some scattering and absorption conditions if a reflectance obtained from a Kubelka-Munk type two-flux theory is used instead of a reflection obtained from the Fresnel equation. The Kubelka-Munk reflectance accounts for surface reflection and radiation scattered back by internal scattering sites, while the Fresnel reflection accounts only for surface reflections.
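The abstract contrasts a Fresnel reflection with a reflectance from a Kubelka-Munk type two-flux theory. The snippet below evaluates the textbook Kubelka-Munk reflectance of an optically thick layer; the paper does not state the exact expression it used, so this is only an illustrative form.

```python
import numpy as np

def kubelka_munk_reflectance(absorption, scattering):
    """Textbook Kubelka-Munk reflectance of an optically thick layer,
    R_inf = 1 + K/S - sqrt((K/S)**2 + 2*K/S).
    This is the standard two-flux relation; treat it as an illustrative
    form rather than the exact expression used in the paper."""
    ratio = absorption / scattering
    return 1.0 + ratio - np.sqrt(ratio ** 2 + 2.0 * ratio)

# High scattering, low absorption -> reflectance approaches 1
print(kubelka_munk_reflectance(0.1, 10.0))
# Low scattering, high absorption -> reflectance approaches 0
print(kubelka_munk_reflectance(10.0, 0.1))
```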
Local rollback for fault-tolerance in parallel computing systems
Blumrich, Matthias A [Yorktown Heights, NY; Chen, Dong [Yorktown Heights, NY; Gara, Alan [Yorktown Heights, NY; Giampapa, Mark E [Yorktown Heights, NY; Heidelberger, Philip [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Steinmacher-Burow, Burkhard [Boeblingen, DE; Sugavanam, Krishnan [Yorktown Heights, NY
2012-01-24
A control logic device performs a local rollback in a parallel supercomputing system. The supercomputing system includes at least one cache memory device. The control logic device determines a local rollback interval. The control logic device runs at least one instruction in the local rollback interval. The control logic device evaluates whether an unrecoverable condition occurs while running the at least one instruction during the local rollback interval. The control logic device checks whether an error occurs during the local rollback. The control logic device restarts the local rollback interval if the error occurs and the unrecoverable condition does not occur during the local rollback interval.
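A minimal sketch of the local-rollback control flow described above. The interval length, the state snapshot/restore callbacks, and the error signalling are illustrative assumptions, not the patented control-logic design.

```python
def run_with_local_rollback(instructions, execute, snapshot, restore, interval=64):
    """Execute instructions in fixed-size intervals; if a recoverable error is
    detected and no unrecoverable condition occurred, restore the saved state
    and re-run the interval locally instead of restarting the whole job."""
    i = 0
    while i < len(instructions):
        chunk = instructions[i:i + interval]
        state = snapshot()                      # save state at interval start
        error, unrecoverable = execute(chunk)   # run the interval
        if unrecoverable:
            raise RuntimeError("unrecoverable condition; local rollback not possible")
        if error:
            restore(state)                      # roll back and retry this interval
            continue
        i += interval
```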
Thierry-Chef, I; Pernicka, F; Marshall, M; Cardis, E; Andreo, P
2002-01-01
An international collaborative study of cancer risk among workers in the nuclear industry is under way to estimate directly the cancer risk following protracted low-dose exposure to ionising radiation. An essential aspect of this study is the characterisation and quantification of errors in available dose estimates. One major source of errors is dosemeter response in workplace exposure conditions. Little information is available on energy and geometry response for most of the 124 different dosemeters used historically in participating facilities. Experiments were therefore set up to assess this, using 10 dosemeter types representative of those used over time. Results show that the largest errors were associated with the response of early dosemeters to low-energy photon radiation. Good response was found with modern dosemeters, even at low energy. These results are being used to estimate errors in the response for each dosemeter type used in the participating facilities, so that these can be taken into account in the estimates of cancer risk.
Minimizing driver errors: examining factors leading to failed target tracking and detection.
DOT National Transportation Integrated Search
2013-06-01
Driving a motor vehicle is a common practice for many individuals. Although driving becomes repetitive and a very habitual task, errors can occur that lead to accidents. One factor that can be a cause for such errors is a lapse in attention or a ...
Nurses' Behaviors and Visual Scanning Patterns May Reduce Patient Identification Errors
ERIC Educational Resources Information Center
Marquard, Jenna L.; Henneman, Philip L.; He, Ze; Jo, Junghee; Fisher, Donald L.; Henneman, Elizabeth A.
2011-01-01
Patient identification (ID) errors occurring during the medication administration process can be fatal. The aim of this study is to determine whether differences in nurses' behaviors and visual scanning patterns during the medication administration process influence their capacities to identify patient ID errors. Nurse participants (n = 20)…
End-to-end test of spatial accuracy in Gamma Knife treatments for trigeminal neuralgia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brezovich, Ivan A., E-mail: ibrezovich@uabmc.edu; Wu, Xingen; Duan, Jun
2014-11-01
Purpose: Spatial accuracy is most crucial when small targets like the trigeminal nerve are treated. Although current quality assurance procedures typically verify that individual apparatus, like the MRI scanner, CT scanner, Gamma Knife, etc., are meeting specifications, the cumulative error of all equipment and procedures combined may exceed safe margins. This study uses an end-to-end approach to assess the overall targeting errors that may have occurred in individual patients previously treated for trigeminal neuralgia. Methods: The trigeminal nerve is simulated by a 3 mm long, 3.175 mm (1/8 in.) diameter MRI-contrast filled cavity embedded within a PMMA plastic capsule. The capsule is positioned within the head frame such that the location of the cavity matches the Gamma Knife coordinates of an arbitrarily chosen, previously treated patient. Gafchromic EBT2 film is placed at the center of the cavity in coronal and sagittal orientations. The films are marked with a pinprick to identify the cavity center. Treatments are planned for radiation delivery with 4 mm collimators according to MRI and CT scans using the clinical localizer boxes and acquisition protocols. Shots are planned so that the 50% isodose surface encompasses the cavity. Following irradiation, the films are scanned and analyzed. Targeting errors are defined as the distance between the pinprick, which represents the intended target, and the centroid of the 50% isodose line, which is the center of the radiation field that was actually delivered. Results: Averaged over ten patient simulations, targeting errors along the x, y, and z coordinates (patient’s left-to-right, posterior-to-anterior, and head-to-foot) were, respectively, −0.060 ± 0.363, −0.350 ± 0.253, and 0.348 ± 0.204 mm when MRI was used for treatment planning. Planning according to CT exhibited generally smaller errors, namely, 0.109 ± 0.167, −0.191 ± 0.144, and 0.211 ± 0.094 mm. The largest errors along individual axes in MRI- and CT-planned treatments were, respectively, −0.761 mm in the y-direction and 0.428 mm in the x-direction, well within safe limits. Conclusions: The highly accurate dose delivery was possible because the Gamma Knife, MRI scanner, and other equipment performed within tight limits and scans were acquired using the thinnest slices and smallest pixel sizes available. Had the individual devices performed only near the limits of their specifications, the cumulative error could have left parts of the trigeminal nerve undertreated. The presented end-to-end test gives assurance that patients had received the expected high quality treatment. End-to-end tests should become part of clinical practice.
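The targeting-error definition above (distance from the pinprick to the centroid of the 50% isodose region) can be written down compactly. The sketch below is an illustrative film-analysis step on a synthetic dose map, not the authors' software; the array layout, coordinate convention, and pixel size are assumptions.

```python
import numpy as np

def targeting_error_mm(dose_map, target_xy, pixel_mm):
    """Find the centroid of the region at or above 50% of the maximum dose
    and return its distance (in mm) from the intended target position."""
    mask = dose_map >= 0.5 * dose_map.max()
    rows, cols = np.nonzero(mask)
    centroid = np.array([cols.mean(), rows.mean()])     # (x, y) in pixels
    return float(np.linalg.norm((centroid - np.asarray(target_xy)) * pixel_mm))

# Synthetic Gaussian dose spot slightly offset from the intended target
y, x = np.mgrid[0:200, 0:200]
dose = np.exp(-(((x - 103) ** 2 + (y - 98) ** 2) / (2 * 15 ** 2)))
print(targeting_error_mm(dose, target_xy=(100, 100), pixel_mm=0.1))  # ~0.36 mm
```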
Hickey, Edward J; Nosikova, Yaroslavna; Pham-Hung, Eric; Gritti, Michael; Schwartz, Steven; Caldarone, Christopher A; Redington, Andrew; Van Arsdell, Glen S
2015-02-01
We hypothesized that the National Aeronautics and Space Administration "threat and error" model (which is derived from analyzing >30,000 commercial flights, and explains >90% of crashes) is directly applicable to pediatric cardiac surgery. We implemented a unit-wide performance initiative, whereby every surgical admission constitutes a "flight" and is tracked in real time, with the aim of identifying errors. The first 500 consecutive patients (524 flights) were analyzed, with an emphasis on the relationship between error cycles and permanent harmful outcomes. Among 524 patient flights (risk adjustment for congenital heart surgery category: 1-6; median: 2) 68 (13%) involved residual hemodynamic lesions, 13 (2.5%) permanent end-organ injuries, and 7 deaths (1.3%). Preoperatively, 763 threats were identified in 379 (72%) flights. Only 51% of patient flights (267) were error free. In the remaining 257 flights, 430 errors occurred, most commonly related to proficiency (280; 65%) or judgment (69, 16%). In most flights with errors (173 of 257; 67%), an unintended clinical state resulted, ie, the error was consequential. In 60% of consequential errors (n = 110; 21% of total), subsequent cycles of additional error/unintended states occurred. Cycles, particularly those containing multiple errors, were very significantly associated with permanent harmful end-states, including residual hemodynamic lesions (P < .0001), end-organ injury (P < .0001), and death (P < .0001). Deaths were almost always preceded by cycles (6 of 7; P < .0001). Human error, if not mitigated, often leads to cycles of error and unintended patient states, which are dangerous and precede the majority of harmful outcomes. Efforts to manage threats and error cycles (through crew resource management techniques) are likely to yield large increases in patient safety. Copyright © 2015. Published by Elsevier Inc.
A catalog of aftershock sequences in Greece (1971-1997): Their spatial and temporal characteristics
NASA Astrophysics Data System (ADS)
Drakatos, George; Latoussakis, John
A complete catalog of aftershock sequences is provided for main earthquakes with ML ≥ 5.0 which occurred in the area of Greece and surrounding regions during the last twenty-seven years. The Monthly Bulletins of the Institute of Geodynamics (National Observatory of Athens) have been used as the data source. In order to obtain a homogeneous catalog, several selection criteria have been applied, and hence a catalog of 44 aftershock sequences is compiled. The relations between the duration of the sequence, the number of aftershocks, the magnitude of the largest aftershock and its delay time from the main shock, as well as the subsurface rupture length versus the magnitude of the main shock, are calculated. The results show that linearity exists between the subsurface rupture length and the magnitude of the main shock independent of the slip type, as well as between the magnitude of the main shock (M) and its largest aftershock (Ma). The mean difference M-Ma is almost one unit. In 40% of the analyzed sequences, the largest aftershock occurred within one day after the main shock. The fact that the aftershock sequences show the same behavior for earthquakes that occur in the same region supports the theory that the spatial and temporal characteristics are strongly related to the stress distribution of the fault area.
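One of the tabulated relations, the mean difference between the main shock magnitude and its largest aftershock, is simple to compute from such a catalog. The sketch below uses a made-up input format and made-up magnitudes purely to illustrate the calculation.

```python
def mean_mainshock_aftershock_gap(sequences):
    """Mean difference between each main shock magnitude and the magnitude of
    its largest aftershock. The (M_main, M_largest_aftershock) pair format and
    the values used below are illustrative assumptions, not catalog data."""
    gaps = [m_main - m_after for m_main, m_after in sequences]
    return sum(gaps) / len(gaps)

# Hypothetical sequences: the study reports a mean gap of roughly one unit
print(mean_mainshock_aftershock_gap([(6.4, 5.3), (5.8, 4.9), (6.1, 5.1)]))  # 1.0
```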
NASA Astrophysics Data System (ADS)
Androsov, Alexey; Nerger, Lars; Schnur, Reiner; Schröter, Jens; Albertella, Alberta; Rummel, Reiner; Savcenko, Roman; Bosch, Wolfgang; Skachko, Sergey; Danilov, Sergey
2018-05-01
General ocean circulation models are not perfect. Forced with observed atmospheric fluxes they gradually drift away from measured distributions of temperature and salinity. We suggest data assimilation of absolute dynamical ocean topography (DOT) observed from space geodetic missions as an option to reduce these differences. Sea surface information of DOT is transferred into the deep ocean by defining the analysed ocean state as a weighted average of an ensemble of fully consistent model solutions using an error-subspace ensemble Kalman filter technique. Success of the technique is demonstrated by assimilation into a global configuration of the ocean circulation model FESOM over 1 year. The dynamic ocean topography data are obtained from a combination of multi-satellite altimetry and geoid measurements. The assimilation result is assessed using independent temperature and salinity analyses derived from profiling buoys of the Argo float data set. The largest impact of the assimilation occurs at the first few analysis steps, where both the model ocean topography and the steric height (i.e. temperature and salinity) are improved. The continued data assimilation over 1 year further improves the model state gradually. Deep ocean fields quickly adjust in a sustained manner: a model forecast initialized from the model state estimated by the data assimilation after only 1 month shows that improvements induced by the data assimilation remain in the model state for a long time. Even after 11 months, the modelled ocean topography and temperature fields show smaller errors than the model forecast without any data assimilation.
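As a rough illustration of the ensemble Kalman analysis idea invoked above, the sketch below implements a generic stochastic EnKF update in which the analysis increment is built from ensemble perturbations, so each analysed member is effectively a weighted combination of the forecast members. It is not the specific error-subspace filter used with FESOM, and all dimensions and values are synthetic.

```python
import numpy as np

def enkf_analysis(ensemble, obs, H, obs_err_var, rng):
    """Generic stochastic ensemble Kalman filter analysis step on an
    (m members x n state variables) ensemble with a linear observation
    operator H; illustrative only."""
    m, n = ensemble.shape
    X = ensemble - ensemble.mean(axis=0)       # state perturbations
    HX = ensemble @ H.T                        # ensemble in observation space
    HXp = HX - HX.mean(axis=0)
    P_hh = HXp.T @ HXp / (m - 1) + obs_err_var * np.eye(len(obs))
    P_xh = X.T @ HXp / (m - 1)
    K = P_xh @ np.linalg.inv(P_hh)             # Kalman gain from the ensemble
    perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_err_var), size=(m, len(obs)))
    return ensemble + (perturbed_obs - HX) @ K.T

rng = np.random.default_rng(2)
ens = rng.normal(size=(20, 5))                 # 20 members, 5 state variables
H = np.zeros((2, 5)); H[0, 0] = H[1, 1] = 1.0  # observe the first two variables
print(enkf_analysis(ens, np.array([1.0, -0.5]), H, 0.1, rng).mean(axis=0))
```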
Some practical problems in implementing randomization.
Downs, Matt; Tucker, Kathryn; Christ-Schmidt, Heidi; Wittes, Janet
2010-06-01
While often theoretically simple, implementing randomization to treatment in a masked, but confirmable, fashion can prove difficult in practice. At least three categories of problems occur in randomization: (1) bad judgment in the choice of method, (2) design and programming errors in implementing the method, and (3) human error during the conduct of the trial. This article focuses on these latter two types of errors, dealing operationally with what can go wrong after trial designers have selected the allocation method. We offer several case studies and corresponding recommendations for lessening the frequency of problems in allocating treatment or for mitigating the consequences of errors. Recommendations include: (1) reviewing the randomization schedule before starting a trial, (2) being especially cautious of systems that use on-demand random number generators, (3) drafting unambiguous randomization specifications, (4) performing thorough testing before entering a randomization system into production, (5) maintaining a dataset that captures the values investigators used to randomize participants, thereby allowing the process of treatment allocation to be reproduced and verified, (6) resisting the urge to correct errors that occur in individual treatment assignments, (7) preventing inadvertent unmasking to treatment assignments in kit allocations, and (8) checking a sample of study drug kits to allow detection of errors in drug packaging and labeling. Although we performed a literature search of documented randomization errors, the examples that we provide and the resultant recommendations are based largely on our own experience in industry-sponsored clinical trials. We do not know how representative our experience is or how common errors of the type we have seen occur. Our experience underscores the importance of verifying the integrity of the treatment allocation process before and during a trial. Clinical Trials 2010; 7: 235-245. http://ctj.sagepub.com.
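Several of the recommendations above (review the schedule before the trial, be cautious with on-demand generators, keep the inputs needed to reproduce assignments) are easier to follow when the schedule is pre-generated from a fixed seed. The sketch below shows one common way to do this with permuted blocks; the block size, seed, and treatment labels are illustrative assumptions, not the authors' procedure.

```python
import random

def permuted_block_schedule(n_participants, treatments=("A", "B"),
                            block_size=4, seed=20100601):
    """Pre-generated, reproducible randomization schedule using permuted
    blocks and a fixed seed, so the full list can be reviewed before the
    trial starts and regenerated later for verification."""
    assert block_size % len(treatments) == 0
    rng = random.Random(seed)
    schedule = []
    while len(schedule) < n_participants:
        block = list(treatments) * (block_size // len(treatments))
        rng.shuffle(block)
        schedule.extend(block)
    return schedule[:n_participants]

print(permuted_block_schedule(10))  # same seed -> same schedule every time
```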
NASA Technical Reports Server (NTRS)
Zhang, Y.-C.; Rossow, W. B.; Lacis, A. A.
1995-01-01
The largest uncertainty in upwelling shortwave (SW) fluxes (approximately 10-15 W/m², regional daily mean) is caused by uncertainties in land surface albedo, whereas the largest uncertainty in downwelling SW at the surface (approximately 5-10 W/m², regional daily mean) is related to cloud detection errors. The uncertainty of upwelling longwave (LW) fluxes (approximately 10-20 W/m², regional daily mean) depends on the accuracy of the surface temperature for the surface LW fluxes and the atmospheric temperature for the top of atmosphere LW fluxes. The dominant source of uncertainty in downwelling LW fluxes at the surface (approximately 10-15 W/m²) is uncertainty in atmospheric temperature and, secondarily, atmospheric humidity; clouds play little role except in the polar regions. The uncertainties of the individual flux components and the total net fluxes are largest over land (15-20 W/m²) because of uncertainties in surface albedo (especially its spectral dependence) and surface temperature and emissivity (including its spectral dependence). Clouds are the most important modulator of the SW fluxes, but over land areas, uncertainties in net SW at the surface depend almost as much on uncertainties in surface albedo. Although atmospheric and surface temperature variations cause larger LW flux variations, the most notable feature of the net LW fluxes is the changing relative importance of clouds and water vapor with latitude. Uncertainty in individual flux values is dominated by sampling effects because of large natural variations, but uncertainty in monthly mean fluxes is dominated by bias errors in the input quantities.
Young Adult Women and the Pilgrimage of Motherhood
ERIC Educational Resources Information Center
Lipperini, Patricia T.
2016-01-01
Motherhood is a complex experience that can be transformative, offering women opportunities for personal enrichment and spiritual development. Because the largest incidence of births occurs to women in the Millennial or late Generation X generations, this complex, potentially transformative experience occurs at a critical time in young adult…
A large silent earthquake and the future rupture of the Guerrero seismic gap
NASA Astrophysics Data System (ADS)
Kostoglodov, V.; Lowry, A.; Singh, S.; Larson, K.; Santiago, J.; Franco, S.; Bilham, R.
2003-04-01
The largest global earthquakes typically occur at subduction zones, at the seismogenic boundary between two colliding tectonic plates. These earthquakes release elastic strains accumulated over many decades of plate motion. Forecasts of these events have large errors resulting from poor knowledge of the seismic cycle. The discovery of slow slip events or "silent earthquakes" in Japan, Alaska, Cascadia and Mexico provides a new glimmer of hope. In these subduction zones, the seismogenic part of the plate interface is loading not steadily, as hitherto believed, but incrementally, partitioning the stress buildup with the slow slip events. If slow aseismic slip is limited to the region downdip of the future rupture zone, slip events may increase the stress at the base of the seismogenic region, incrementing it closer to failure. However, if some aseismic slip occurs on the future rupture zone, the partitioning may significantly reduce the stress buildup rate (SBR) and delay a future large earthquake. Here we report characteristics of the largest slow earthquake observed to date (Mw 7.5), and its implications for future failure of the Guerrero seismic gap, Mexico. The silent earthquake began in October 2001 and lasted for 6-7 months. Slow slip produced measurable displacements over an area of 550 x 250 km². Average slip on the interface was about 10 cm and the equivalent magnitude, Mw, was 7.5. A shallow subhorizontal configuration of the plate interface in Guerrero is a controlling factor for the physical conditions favorable for such extensive slow slip. The total coupled zone in Guerrero is 120-170 km wide while the seismogenic, shallowest portion is only 50 km. This future rupture zone may slip contemporaneously with the deeper aseismic slip, thereby reducing the SBR. The slip partitioning between the seismogenic and transition coupled zones may diminish the SBR by up to 50%. These two factors are probably responsible for the long quiescence (at least since 1911) of the Guerrero seismic gap in Mexico. The discovery of silent earthquakes in Guerrero in 1972, 1979, 1998, and 2001-2002 calls for a reassessment of the seismic potential and careful seismotectonic monitoring of the seismic gaps in Mexico.
Clinical Dental Faculty Members' Perceptions of Diagnostic Errors and How to Avoid Them.
Nikdel, Cathy; Nikdel, Kian; Ibarra-Noriega, Ana; Kalenderian, Elsbeth; Walji, Muhammad F
2018-04-01
Diagnostic errors are increasingly recognized as a source of preventable harm in medicine, yet little is known about their occurrence in dentistry. The aim of this study was to gain a deeper understanding of clinical dental faculty members' perceptions of diagnostic errors, types of errors that may occur, and possible contributing factors. The authors conducted semi-structured interviews with ten domain experts at one U.S. dental school in May-August 2016 about their perceptions of diagnostic errors and their causes. The interviews were analyzed using an inductive process to identify themes and key findings. The results showed that the participants varied in their definitions of diagnostic errors. While all identified missed diagnosis and wrong diagnosis, only four participants perceived that a delay in diagnosis was a diagnostic error. Some participants perceived that an error occurs only when the choice of treatment leads to harm. Contributing factors associated with diagnostic errors included the knowledge and skills of the dentist, not taking adequate time, lack of communication among colleagues, and cognitive biases such as premature closure based on previous experience. Strategies suggested by the participants to prevent these errors were taking adequate time when investigating a case, forming study groups, increasing communication, and putting more emphasis on differential diagnosis. These interviews revealed differing perceptions of dental diagnostic errors among clinical dental faculty members. To address the variations, the authors recommend adopting shared language developed by the medical profession to increase understanding.
Zhao, Shuzhen; He, Lujia; Feng, Chenchen; He, Xiaoli
2018-06-01
Laboratory errors in a blood collection center (BCC) are most common in the preanalytical phase. It is, therefore, of vital importance for administrators to take measures to improve healthcare quality and patient safety. In 2015, a case bundle management strategy was applied in a large outpatient BCC to improve its medical quality and patient safety. Unqualified blood sampling, complications, patient waiting time, the largest number of patients waiting during peak hours, patient complaints, and patient satisfaction were compared over the period from 2014 to 2016. The strategy reduced unqualified blood sampling, complications, patient waiting time, the largest number of patients waiting during peak hours, and patient complaints, while improving patient satisfaction. This strategy was effective in improving BCC healthcare quality and patient safety.
NASA Technical Reports Server (NTRS)
Shook, D. F.; Pierce, C. R.
1972-01-01
Proton recoil distributions were obtained by using organic liquid scintillators of different sizes. The measured distributions are converted to neutron spectra by differentiation analysis for comparison to the unfolded spectra of the largest scintillator. The approximations involved in the differentiation analysis are indicated to have small effects on the precision of neutron spectra measured with the smaller scintillators but introduce significant error for the largest scintillator. In the case of the smallest cylindrical scintillator, nominally 1.2 by 1.3 cm, the efficiency is shown to be insensitive to multiple scattering and to the angular distribution of the incident flux. These characteristics of the smaller scintillator make possible its use to measure scalar flux spectra within media where high efficiency is not required.
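The differentiation analysis mentioned above can be illustrated with a heavily simplified numerical sketch, assuming an ideal flat recoil response for monoenergetic neutrons and ignoring the cross-section, efficiency, and resolution corrections a real analysis must apply.

```python
import numpy as np

def neutron_spectrum_by_differentiation(energy, recoil_counts):
    """For an idealized detector whose response to monoenergetic neutrons is a
    flat recoil distribution from zero up to the neutron energy, the neutron
    spectrum is proportional to -E * d(recoil)/dE. Cross-section, efficiency
    and resolution corrections are omitted in this illustration."""
    derivative = np.gradient(recoil_counts, energy)
    return -energy * derivative

# Synthetic example: a single neutron energy near 2 MeV produces a roughly
# step-like recoil distribution; differentiation recovers a peak near 2 MeV.
E = np.linspace(0.1, 3.0, 300)
recoil = 1.0 / (1.0 + np.exp((E - 2.0) / 0.05))   # smoothed step
spectrum = neutron_spectrum_by_differentiation(E, recoil)
print(E[np.argmax(spectrum)])                      # ~2.0
```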
NASA Technical Reports Server (NTRS)
Nese, Jon M.; Dutton, John A.
1993-01-01
The predictability of the weather and climatic states of a low-order moist general circulation model is quantified using a dynamic systems approach, and the effect of incorporating a simple oceanic circulation on predictability is evaluated. The predictability and the structure of the model attractors are compared using Liapunov exponents, local divergence rates, and the correlation and Liapunov dimensions. It was found that the activation of oceanic circulation increases the average error doubling time of the atmosphere and the coupled ocean-atmosphere system by 10 percent and decreases the variance of the largest local divergence rate by 20 percent. When an oceanic circulation develops, the average predictability of annually averaged states is improved by 25 percent and the variance of the largest local divergence rate decreases by 25 percent.
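The error doubling times quoted above are conventionally related to the largest Lyapunov exponent by T2 = ln(2)/lambda, since small errors grow like exp(lambda*t). The numerical values in the sketch below are made up; only the relation itself is taken from standard practice.

```python
import math

def error_doubling_time(largest_lyapunov_exponent):
    """Average error doubling time implied by a positive largest Lyapunov
    exponent: T2 = ln(2) / lambda."""
    return math.log(2.0) / largest_lyapunov_exponent

lam = 0.05                               # hypothetical exponent, units 1/day
print(error_doubling_time(lam))          # ~13.9 days
print(error_doubling_time(lam / 1.10))   # a 10% smaller exponent -> ~10% longer doubling time
```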
Neutrino Nucleon Elastic Scattering in MiniBooNE
NASA Astrophysics Data System (ADS)
Cox, D. Christopher
2007-12-01
Neutrino-nucleon elastic scattering νN→νN is a fundamental process of the weak interaction and can be used to study the structure of the nucleon. This is the third largest scattering process in MiniBooNE, comprising ~15% of all neutrino interactions. Analysis of this sample has yielded a neutral current elastic differential cross section as a function of Q² that agrees within errors with model predictions.
Czarnecki, John B.; Gillip, Jonathan A.; Jones, Perry M.; Yeatts, Daniel S.
2009-01-01
To assess the effect that increased water use is having on the long-term availability of groundwater within the Ozark Plateaus aquifer system, a groundwater-flow model was developed using MODFLOW 2000 for a model area covering 7,340 square miles for parts of Arkansas, Kansas, Missouri, and Oklahoma. Vertically, the model is divided into five units. From top to bottom these units of variable thickness are: the Western Interior Plains confining unit, the Springfield Plateau aquifer, the Ozark confining unit, the Ozark aquifer, and the St. Francois confining unit. Large mined zones contained within the Springfield Plateau aquifer are represented in the model as extensive voids with orders-of-magnitude larger hydraulic conductivity than the adjacent nonmined zones. Water-use data were compiled for the period 1960 to 2006, with the most complete data sets available for the period 1985 to 2006. In 2006, total water use from the Ozark aquifer for Missouri was 87 percent (8,531,520 cubic feet per day) of the total pumped from the Ozark aquifer, with Kansas at 7 percent (727,452 cubic feet per day), and Oklahoma at 6 percent (551,408 cubic feet per day); water use for Arkansas within the model area was minor. Water use in the model from the Springfield Plateau aquifer in 2005 was specified from reported and estimated values as 569,047 cubic feet per day. Calibration of the model was made against average water-level altitudes in the Ozark aquifer for the period 1980 to 1989 and against water-level altitudes obtained in 2006 for the Springfield Plateau and Ozark aquifers. Error in simulating water-level altitudes was largest where water-level altitude gradients were largest, particularly near large cones of depression. Groundwater flow within the model area occurs generally from the highlands of the Springfield Plateau in southwestern Missouri toward the west, with localized flow occurring towards rivers and pumping centers, including the five largest pumping centers near Joplin, Missouri; Carthage, Missouri; Noel, Missouri; Pittsburg, Kansas; and Miami, Oklahoma. Hypothetical scenarios involving various increases in groundwater-pumping rates were analyzed with the calibrated groundwater-flow model to assess changes in the flow system from 2007 to the year 2057. Pumping rates were increased between 0 and 4 percent per year starting with the 2006 rates for all wells in the model. Sustained pumping at 2006 rates was feasible at the five pumping centers until 2057; however, increases in pumping resulted in dewatering the aquifer, and thus pumpage increases were not sustainable in Carthage and Noel for the 1 percent per year pumpage increase and greater hypothetical scenarios, and in Joplin and Miami for the 4 percent per year pumpage increase hypothetical scenarios. Zone-budget analyses were performed to assess the groundwater flow into and out of three zones specified within the Ozark-aquifer layer of the model. The three zones represented the model part of the Ozark aquifer in Kansas (zone 1), Oklahoma (zone 2), and Missouri and Arkansas (zone 3). Groundwater pumping causes substantial reductions in water in storage and induces flow through the Ozark confining unit for all hypothetical scenarios evaluated. Net simulated flow in 2057 from Kansas (zone 1) to Missouri (zone 3) ranges from 74,044 cubic feet per day for 2006 pumping rates (hypothetical scenario 1) to 625,319 cubic feet per day for a 4 percent increase in pumping per year (hypothetical scenario 5).
Pumping from wells completed in the Ozark aquifer is the largest component of flow out of zone 3 in Missouri and Arkansas, and varies between 88 and 91 percent of the total flow out of zone 3 for all of the hypothetical scenarios. The largest component of flow into Oklahoma (zone 2) comes from the overlying Ozark confining unit, which is consistently about 45 percent of the total. Flow from the release of water in storage, from general-head boundaries, and from zones 1 and 3 is considerably smaller, with values that range from 3 to 22 percent of the total flow into zone 2. The largest flow out of the Oklahoma part of the model occurs from pumping from wells and ranges from 52 to 69 percent of the total.
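As a rough illustration of how the hypothetical pumping scenarios scale over the 2007-2057 horizon, the sketch below compounds the 2006 Missouri pumping rate at the stated annual increases. The starting rate and the scenario percentages come from the abstract; reading "X percent per year" as compound growth is an assumption made only for this illustration.

```python
# Minimal sketch: project pumping rates under the hypothetical scenarios.
# Assumes "X percent per year" means compound growth from the 2006 rate;
# the 2006 Missouri Ozark-aquifer rate (8,531,520 ft^3/day) is taken from the abstract.

def projected_rate(rate_2006, annual_increase, years):
    """Pumping rate after `years` of compound growth at `annual_increase` (fraction per year)."""
    return rate_2006 * (1.0 + annual_increase) ** years

rate_2006 = 8_531_520  # ft^3/day, Missouri withdrawals from the Ozark aquifer in 2006

for pct in (0, 1, 2, 4):  # hypothetical scenarios: 0 to 4 percent per year
    rate_2057 = projected_rate(rate_2006, pct / 100.0, years=2057 - 2006)
    print(f"{pct}%/yr scenario: {rate_2057:,.0f} ft^3/day in 2057")
```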
Antidepressant and antipsychotic medication errors reported to United States poison control centers.
Kamboj, Alisha; Spiller, Henry A; Casavant, Marcel J; Chounthirath, Thitphalak; Hodges, Nichole L; Smith, Gary A
2018-05-08
To investigate unintentional therapeutic medication errors associated with antidepressant and antipsychotic medications in the United States and expand current knowledge on the types of errors commonly associated with these medications. A retrospective analysis of non-health care facility unintentional therapeutic errors associated with antidepressant and antipsychotic medications was conducted using data from the National Poison Data System. From 2000 to 2012, poison control centers received 207 670 calls reporting unintentional therapeutic errors associated with antidepressant or antipsychotic medications that occurred outside of a health care facility, averaging 15 975 errors annually. The rate of antidepressant-related errors increased by 50.6% from 2000 to 2004, decreased by 6.5% from 2004 to 2006, and then increased 13.0% from 2006 to 2012. The rate of errors related to antipsychotic medications increased by 99.7% from 2000 to 2004 and then increased by 8.8% from 2004 to 2012. Overall, 70.1% of reported errors occurred among adults, and 59.3% were among females. The medications most frequently associated with errors were selective serotonin reuptake inhibitors (30.3%), atypical antipsychotics (24.1%), and other types of antidepressants (21.5%). Most medication errors took place when an individual inadvertently took or was given a medication twice (41.0%), inadvertently took someone else's medication (15.6%), or took the wrong medication (15.6%). This study provides a comprehensive overview of non-health care facility unintentional therapeutic errors associated with antidepressant and antipsychotic medications. The frequency and rate of these errors increased significantly from 2000 to 2012. Given that use of these medications is increasing in the US, this study provides important information about the epidemiology of the associated medication errors. Copyright © 2018 John Wiley & Sons, Ltd.
Claims, errors, and compensation payments in medical malpractice litigation.
Studdert, David M; Mello, Michelle M; Gawande, Atul A; Gandhi, Tejal K; Kachalia, Allen; Yoon, Catherine; Puopolo, Ann Louise; Brennan, Troyen A
2006-05-11
In the current debate over tort reform, critics of the medical malpractice system charge that frivolous litigation--claims that lack evidence of injury, substandard care, or both--is common and costly. Trained physicians reviewed a random sample of 1452 closed malpractice claims from five liability insurers to determine whether a medical injury had occurred and, if so, whether it was due to medical error. We analyzed the prevalence, characteristics, litigation outcomes, and costs of claims that lacked evidence of error. For 3 percent of the claims, there were no verifiable medical injuries, and 37 percent did not involve errors. Most of the claims that were not associated with errors (370 of 515 [72 percent]) or injuries (31 of 37 [84 percent]) did not result in compensation; most that involved injuries due to error did (653 of 889 [73 percent]). Payment of claims not involving errors occurred less frequently than did the converse form of inaccuracy--nonpayment of claims associated with errors. When claims not involving errors were compensated, payments were significantly lower on average than were payments for claims involving errors (313,205 dollars vs. 521,560 dollars, P=0.004). Overall, claims not involving errors accounted for 13 to 16 percent of the system's total monetary costs. For every dollar spent on compensation, 54 cents went to administrative expenses (including those involving lawyers, experts, and courts). Claims involving errors accounted for 78 percent of total administrative costs. Claims that lack evidence of error are not uncommon, but most are denied compensation. The vast majority of expenditures go toward litigation over errors and payment of them. The overhead costs of malpractice litigation are exorbitant. Copyright 2006 Massachusetts Medical Society.
Sensitivity and specificity of dosing alerts for dosing errors among hospitalized pediatric patients
Stultz, Jeremy S; Porter, Kyle; Nahata, Milap C
2014-01-01
Objectives To determine the sensitivity and specificity of a dosing alert system for dosing errors and to compare the sensitivity of a proprietary system with and without institutional customization at a pediatric hospital. Methods A retrospective analysis of medication orders, orders causing dosing alerts, reported adverse drug events, and dosing errors during July 2011 was conducted. Dosing errors with and without alerts were identified and the sensitivity of the system with and without customization was compared. Results There were 47 181 inpatient pediatric orders during the studied period; 257 dosing errors were identified (0.54%). The sensitivity of the system for identifying dosing errors was 54.1% (95% CI 47.8% to 60.3%) if customization had not occurred and increased to 60.3% (CI 54.0% to 66.3%) with customization (p=0.02). The sensitivity of the system for underdoses was 49.6% without customization and 60.3% with customization (p=0.01). Specificity of the customized system for dosing errors was 96.2% (CI 96.0% to 96.3%) with a positive predictive value of 8.0% (CI 6.8% to 9.3%). All dosing errors had an alert over-ridden by the prescriber and 40.6% of dosing errors with alerts were administered to the patient. The lack of indication-specific dose ranges was the most common reason why an alert did not occur for a dosing error. Discussion Advances in dosing alert systems should aim to improve the sensitivity and positive predictive value of the system for dosing errors. Conclusions The dosing alert system had a low sensitivity and positive predictive value for dosing errors, but might have prevented dosing errors from reaching patients. Customization increased the sensitivity of the system for dosing errors. PMID:24496386
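For readers unfamiliar with how the reported metrics relate to the underlying counts, the following sketch computes sensitivity, specificity, and positive predictive value from a 2x2 confusion matrix. The counts are illustrative placeholders chosen only to be roughly consistent with the reported figures; they are not the study's actual data.

```python
# Minimal sketch: sensitivity, specificity, and PPV of a dosing alert system.
# tp = dosing errors that triggered an alert, fn = dosing errors with no alert,
# fp = alerts on orders without a dosing error, tn = orders with neither.
# The counts below are illustrative placeholders, not the study's data.

def alert_metrics(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)   # fraction of dosing errors caught by an alert
    specificity = tn / (tn + fp)   # fraction of error-free orders with no alert
    ppv = tp / (tp + fp)           # fraction of alerts that flag a real dosing error
    return sensitivity, specificity, ppv

sens, spec, ppv = alert_metrics(tp=155, fn=102, fp=1780, tn=45000)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}, PPV={ppv:.1%}")
```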
Hubbeling, Dieneke
2016-09-01
This paper addresses the concept of moral luck. Moral luck is discussed in the context of medical error, especially an error of omission that occurs frequently, but only rarely has adverse consequences. As an example, a failure to compare the label on a syringe with the drug chart results in the wrong medication being administered and the patient dies. However, this error may have previously occurred many times with no tragic consequences. Discussions on moral luck can highlight conflicting intuitions. Should perpetrators receive a harsher punishment because of an adverse outcome, or should they be dealt with in the same way as colleagues who have acted similarly, but with no adverse effects? An additional element to the discussion, specifically with medical errors, is that according to the evidence currently available, punishing individual practitioners does not seem to be effective in preventing future errors. The following discussion, using relevant philosophical and empirical evidence, posits a possible solution for the moral luck conundrum in the context of medical error: namely, making a distinction between the duty to make amends and assigning blame. Blame should be assigned on the basis of actual behavior, while the duty to make amends is dependent on the outcome.
Ribeiro, Igor Oliveira; Andreoli, Rita Valéria; Kayano, Mary Toshie; de Sousa, Thaiane Rodrigues; Medeiros, Adan Sady; Guimarães, Patrícia Costa; Barbosa, Cybelli G G; Godoi, Ricardo H M; Martin, Scot T; de Souza, Rodrigo Augusto Ferreira
2018-05-15
The present study examines the spatiotemporal variability and interrelations of the atmospheric methane (CH4), carbon monoxide (CO) and biomass burning (BB) outbreaks retrieved from satellite data over the Amazon region during the 2003-2012 period. In the climatological context, we found consistent seasonal cycles of BB outbreaks and CO in the Amazon, both variables showing a peak during the dry season. The dominant CO variability mode features the largest positive loadings in the southern Amazon, and describes the interannual CO variations related to BB outbreaks along the deforestation arc during the dry season. In line with CO variability and BB outbreaks, the results show strong correspondence with the spatiotemporal variability of CH4 in the southern Amazon during years of intense drought. Indeed, the areas with the largest positive CH4 anomalies in southern Amazon overlap the areas with high BB outbreaks and positive CO anomalies. The analyses also showed that high (low) BB outbreaks in the southern Amazon occur during dry (wet) years. In consequence, the interannual climate variability modulates the BB outbreaks in the southern Amazon, which in turn have considerable impacts on CO and CH4 interannual variability in the region. Therefore, the BB outbreaks might play a major role in modulating the CH4 and CO variations, at least in the southern Amazon. This study also provides a comparison between the estimate of satellite and aircraft measurements for the CH4 over the southern Amazon, which indicates relatively small differences from the aircraft measurements in the lower troposphere, with errors ranging from 0.18% to 1.76%. Copyright © 2017 Elsevier B.V. All rights reserved.
Martínez-Lavín, Manuel; Amezcua-Guerra, Luis
2017-10-01
This article critically reviews HPV vaccine serious adverse events described in pre-licensure randomized trials and in post-marketing case series. HPV vaccine randomized trials were identified in PubMed. Safety data were extracted. Post-marketing case series describing HPV immunization adverse events were reviewed. Most HPV vaccine randomized trials did not use inert placebo in the control group. Two of the largest randomized trials found significantly more severe adverse events in the tested HPV vaccine arm of the study. Compared to 2871 women receiving aluminum placebo, the group of 2881 women injected with the bivalent HPV vaccine had more deaths on follow-up (14 vs. 3, p = 0.012). Compared to 7078 girls injected with the 4-valent HPV vaccine, 7071 girls receiving the 9-valent dose had more serious systemic adverse events (3.3 vs. 2.6%, p = 0.01). For the 9-valent dose, our calculated number needed to seriously harm is 140 (95% CI, 79–653) [DOSAGE ERROR CORRECTED] . The number needed to vaccinate is 1757 (95% CI, 131 to infinity). Practically, none of the serious adverse events occurring in any arm of both studies were judged to be vaccine-related. Pre-clinical trials, post-marketing case series, and the global drug adverse reaction database (VigiBase) describe similar post-HPV immunization symptom clusters. Two of the largest randomized HPV vaccine trials unveiled more severe adverse events in the tested HPV vaccine arm of the study. Nine-valent HPV vaccine has a worrisome number needed to vaccinate/number needed to harm quotient. Pre-clinical trials and post-marketing case series describe similar post-HPV immunization symptoms.
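The number needed to harm quoted above for the 9-valent vaccine follows from the reported serious-adverse-event rates in the two arms. A minimal sketch of that arithmetic, using the rounded percentages from the abstract, is below; the published value of 140 was presumably computed from unrounded counts.

```python
# Minimal sketch: number needed to harm (NNH) from the two arms' event rates.
# Rates are the rounded percentages quoted in the abstract (3.3% vs 2.6%);
# the published figure of 140 presumably used unrounded counts.

rate_9valent = 0.033   # serious systemic adverse events, 9-valent arm
rate_4valent = 0.026   # serious systemic adverse events, 4-valent arm

risk_difference = rate_9valent - rate_4valent
nnh = 1.0 / risk_difference
print(f"NNH ~ {nnh:.0f}")   # ~143 with rounded rates, vs 140 reported
```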
Error detection and reduction in blood banking.
Motschman, T L; Moore, S B
1996-12-01
Error management plays a major role in facility process improvement efforts. By detecting and reducing errors, quality and, therefore, patient care improve. It begins with a strong organizational foundation of management attitude with clear, consistent employee direction and appropriate physical facilities. Clearly defined critical processes, critical activities, and SOPs act as the framework for operations as well as active quality monitoring. To assure that personnel can detect and report errors, they must be trained in both operational duties and error management practices. Use of simulated/intentional errors and incorporation of error detection into competency assessment keeps employees practiced, confident, and diminishes fear of the unknown. Personnel can clearly see that errors are indeed used as opportunities for process improvement and not for punishment. The facility must have a clearly defined and consistently used definition for reportable errors. Reportable errors should include those errors with potentially harmful outcomes as well as those errors that are "upstream," and thus further away from the outcome. A well-written error report consists of who, what, when, where, why/how, and follow-up to the error. Before correction can occur, an investigation to determine the underlying cause of the error should be undertaken. Obviously, the best corrective action is prevention. Correction can occur at five different levels; however, only three of these levels are directed at prevention. Prevention requires a method to collect and analyze data concerning errors. In the authors' facility a functional error classification method and a quality system-based classification have been useful. An active method to search for problems uncovers them further upstream, before they can have disastrous outcomes. In the continual quest for improving processes, an error management program is itself a process that needs improvement, and we must strive to always close the circle of quality assurance. Ultimately, the goal of better patient care will be the reward.
Effects of Programmed Teaching Errors on Acquisition and Durability of Self-Care Skills
ERIC Educational Resources Information Center
Donnelly, Maeve G.; Karsten, Amanda M.
2017-01-01
This investigation sheds light on necessary and sufficient conditions to establish self-care behavior chains among people with developmental disabilities. First, a descriptive assessment (DA) identified the types of teaching errors that occurred during self-care instruction. Second, the relative effects of three teaching errors observed during the…
An Evaluation of Programmed Treatment-integrity Errors during Discrete-trial Instruction
ERIC Educational Resources Information Center
Carroll, Regina A.; Kodak, Tiffany; Fisher, Wayne W.
2013-01-01
This study evaluated the effects of programmed treatment-integrity errors on skill acquisition for children with an autism spectrum disorder (ASD) during discrete-trial instruction (DTI). In Study 1, we identified common treatment-integrity errors that occur during academic instruction in schools. In Study 2, we simultaneously manipulated 3…
A variable acceleration calibration system
NASA Astrophysics Data System (ADS)
Johnson, Thomas H.
2011-12-01
A variable acceleration calibration system that applies loads using gravitational and centripetal acceleration serves as an alternative, efficient and cost effective method for calibrating internal wind tunnel force balances. Two proof-of-concept variable acceleration calibration systems are designed, fabricated and tested. The NASA UT-36 force balance served as the test balance for the calibration experiments. The variable acceleration calibration systems are shown to be capable of performing three component calibration experiments with an approximate applied load error on the order of 1% of the full scale calibration loads. Sources of error are identified using experimental design methods and a propagation of uncertainty analysis. Three types of uncertainty are identified for the systems and are attributed to prediction error, calibration error and pure error. Angular velocity uncertainty is shown to be the largest identified source of prediction error. The calibration uncertainties using a production variable acceleration based system are shown to be potentially equivalent to current methods. The production quality system can be realized using lighter materials and more precise instrumentation. Further research is needed to account for balance deflection, forcing effects due to vibration, and large tare loads. A gyroscope measurement technique is shown to be capable of resolving the balance deflection angle calculation. Long term research objectives include a demonstration of a six degree of freedom calibration, and a large capacity balance calibration.
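The finding that angular-velocity uncertainty dominates the prediction error can be illustrated with a generic first-order propagation-of-uncertainty calculation for a centripetally applied load. The load equation and the numerical values below are assumptions chosen for illustration; they are not the balance's actual calibration equations.

```python
import math

# Minimal sketch: first-order propagation of uncertainty for a centripetal load
# F = m * w**2 * r.  The equation and numbers are illustrative assumptions,
# not the actual calibration model of the variable acceleration system.

m, r, w = 2.0, 0.5, 30.0          # mass [kg], radius [m], angular velocity [rad/s]
dm, dr, dw = 0.001, 0.0005, 0.15  # assumed 1-sigma uncertainties

F = m * w**2 * r
# Partial derivatives of F with respect to each input
dF_dm = w**2 * r
dF_dr = m * w**2
dF_dw = 2.0 * m * w * r

contrib = {
    "mass": (dF_dm * dm) ** 2,
    "radius": (dF_dr * dr) ** 2,
    "angular velocity": (dF_dw * dw) ** 2,
}
dF = math.sqrt(sum(contrib.values()))
print(f"F = {F:.1f} N, uncertainty = {dF:.2f} N")
for name, c in contrib.items():
    print(f"  {name}: {100 * c / dF**2:.1f}% of variance")
```

With these assumed values the angular-velocity term accounts for nearly all of the load variance, which mirrors the abstract's conclusion about the dominant error source.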
The GEOS Ozone Data Assimilation System: Specification of Error Statistics
NASA Technical Reports Server (NTRS)
Stajner, Ivanka; Riishojgaard, Lars Peter; Rood, Richard B.
2000-01-01
A global three-dimensional ozone data assimilation system has been developed at the Data Assimilation Office of the NASA/Goddard Space Flight Center. The Total Ozone Mapping Spectrometer (TOMS) total ozone and the Solar Backscatter Ultraviolet (SBUV) or (SBUV/2) partial ozone profile observations are assimilated. The assimilation, into an off-line ozone transport model, is done using the global Physical-space Statistical Analysis Scheme (PSAS). This system became operational in December 1999. A detailed description of the statistical analysis scheme, and in particular, the forecast and observation error covariance models is given. A new global anisotropic horizontal forecast error correlation model accounts for a varying distribution of observations with latitude. Correlations are largest in the zonal direction in the tropics where data is sparse. The forecast error variance model is proportional to the ozone field. The forecast error covariance parameters were determined by maximum likelihood estimation. The error covariance models are validated using chi-squared statistics. The analyzed ozone fields in the winter of 1992 are validated against independent observations from ozone sondes and the Halogen Occultation Experiment (HALOE). There is better than 10% agreement between mean HALOE and analysis fields between 70 and 0.2 hPa. The global root-mean-square (RMS) difference between TOMS observed and forecast values is less than 4%. The global RMS difference between SBUV observed and analyzed ozone between 50 and 3 hPa is less than 15%.
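A hedged sketch of how a forecast error covariance of the kind described above can be assembled: the variance is taken proportional to the ozone field and the horizontal correlation is anisotropic, with a longer zonal than meridional length scale. The functional forms, grid, and length scales are illustrative assumptions only, not the operational PSAS models.

```python
import numpy as np

# Minimal sketch: build a forecast error covariance P = D^(1/2) C D^(1/2) on a
# small lat-lon grid, with variance proportional to the ozone field and an
# anisotropic Gaussian correlation (longer zonal than meridional length scale).
# Functional forms and length scales are illustrative assumptions only.

nlat, nlon = 8, 16
lats = np.linspace(-60, 60, nlat)
lons = np.linspace(0, 360, nlon, endpoint=False)
LAT, LON = np.meshgrid(lats, lons, indexing="ij")

ozone = 250 + 100 * np.cos(np.radians(LAT))   # synthetic ozone field [DU]
sigma = 0.05 * ozone                          # standard deviation proportional to ozone

pts = np.column_stack([LAT.ravel(), LON.ravel()])
L_zonal, L_merid = 40.0, 10.0                 # correlation length scales [degrees]

dlat = pts[:, None, 0] - pts[None, :, 0]
dlon = (pts[:, None, 1] - pts[None, :, 1] + 180) % 360 - 180   # periodic longitude difference
C = np.exp(-0.5 * ((dlon / L_zonal) ** 2 + (dlat / L_merid) ** 2))

D_half = np.diag(sigma.ravel())
P = D_half @ C @ D_half                       # forecast error covariance
print("covariance shape:", P.shape, " max variance:", P.diagonal().max().round(1))
```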
Ionospheric threats to the integrity of airborne GPS users
NASA Astrophysics Data System (ADS)
Datta-Barua, Seebany
The Global Positioning System (GPS) has both revolutionized and entwined the worlds of aviation and atmospheric science. As the largest and most unpredictable source of GPS positioning error, the ionospheric layer of the atmosphere, if left unchecked, can endanger the safety, or "integrity," of the single frequency airborne user. An augmentation system is a differential-GPS-based navigation system that provides integrity through independent ionospheric monitoring by reference stations. However, the monitor stations are not in general colocated with the user's GPS receiver. The augmentation system must protect users from possible ionosphere density variations occurring between its measurements and the user's. This study analyzes observations from ionospherically active periods to identify what types of ionospheric disturbances may cause threats to user safety if left unmitigated. This work identifies when such disturbances may occur using a geomagnetic measure of activity and then considers two disturbances as case studies. The first case study indicates the need for a non-trivial threat model for the Federal Aviation Administration's Local Area Augmentation System (LAAS) that was not known prior to the work. The second case study uses ground- and space-based data to model an ionospheric disturbance of interest to the Federal Aviation Administration's Wide Area Augmentation System (WAAS). This work is a step in the justification for, and possible future refinement of, one of the WAAS integrity algorithms. For both WAAS and LAAS, integrity threats are basically caused by events that may be occurring but are unobservable. Prior to the data available in this solar cycle, events of such magnitude were not known to be possible. This work serves as evidence that the ionospheric threat models developed for WAAS and LAAS are warranted and that they are sufficiently conservative to maintain user integrity even under extreme ionospheric behavior.
The U.S. Financial Crisis: The Global Dimension With Implications for U.S. Policy
2008-11-18
financial crisis. Some of the largest and most venerable banks, investment houses, and insurance companies have either declared bankruptcy or have had to be rescued ... and inadequate capital backing credit default swaps (insurance against defaults and bankruptcy) have occurred. The second level of the crisis is ...
Approximating natural connectivity of scale-free networks based on largest eigenvalue
NASA Astrophysics Data System (ADS)
Tan, S.-Y.; Wu, J.; Li, M.-J.; Lu, X.
2016-06-01
It has been recently proposed that natural connectivity can be used to efficiently characterize the robustness of complex networks. The natural connectivity has an intuitive physical meaning and a simple mathematical formulation, which corresponds to an average eigenvalue calculated from the graph spectrum. However, for the scale-free network, a model close to many widely occurring real-world systems, the spectrum is difficult to obtain analytically. In this article, we investigate the approximation of natural connectivity based on the largest eigenvalue in both random and correlated scale-free networks. It is demonstrated that the natural connectivity of scale-free networks can be dominated by the largest eigenvalue, which can be expressed asymptotically and analytically to approximate natural connectivity with small errors. Then we show that the natural connectivity of random scale-free networks increases linearly with the average degree given the scaling exponent and decreases monotonically with the scaling exponent given the average degree. Moreover, it is found that, given the degree distribution, the more assortative a scale-free network is, the more robust it is. Experiments in real networks validate our methods and results.
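A sketch of the quantity and the approximation discussed above, under the standard definition: natural connectivity is the logarithm of the average of e raised to each adjacency eigenvalue, and when the largest eigenvalue dominates it reduces to that eigenvalue minus ln N. The graph model and sizes below are illustrative choices, not the paper's experiments.

```python
import numpy as np
import networkx as nx

# Minimal sketch: natural connectivity and its largest-eigenvalue approximation
# on a random scale-free (Barabasi-Albert) graph.  Standard definition assumed:
#   natural_connectivity = ln( (1/N) * sum_i exp(lambda_i) )
# which tends to lambda_max - ln(N) when the largest eigenvalue dominates.

N, m = 500, 3
G = nx.barabasi_albert_graph(N, m, seed=1)
A = nx.to_numpy_array(G)
eigvals = np.linalg.eigvalsh(A)          # adjacency spectrum (symmetric matrix)

lam_max = eigvals.max()
exact = lam_max + np.log(np.mean(np.exp(eigvals - lam_max)))   # numerically stable log-sum-exp
approx = lam_max - np.log(N)

print(f"exact natural connectivity   = {exact:.4f}")
print(f"largest-eigenvalue estimate  = {approx:.4f}")
print(f"relative error               = {abs(exact - approx) / exact:.2%}")
```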
The Structure of Segmental Errors in the Speech of Deaf Children.
ERIC Educational Resources Information Center
Levitt, H.; And Others
1980-01-01
A quantitative description of the segmental errors occurring in the speech of deaf children is developed. Journal availability: Elsevier North Holland, Inc., 52 Vanderbilt Avenue, New York, NY 10017. (Author)
Method, apparatus and system to compensate for drift by physically unclonable function circuitry
Hamlet, Jason
2016-11-22
Techniques and mechanisms to detect and compensate for drift by a physically uncloneable function (PUF) circuit. In an embodiment, first state information is registered as reference information to be made available for subsequent evaluation of whether drift by PUF circuitry has occurred. The first state information is associated with a first error correction strength. The first state information is generated based on a first PUF value output by the PUF circuitry. In another embodiment, second state information is determined based on a second PUF value that is output by the PUF circuitry. An evaluation of whether drift has occurred is performed based on the first state information and the second state information, the evaluation including determining whether a threshold error correction strength is exceeded concurrent with a magnitude of error being less than the first error correction strength.
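A schematic of the kind of drift evaluation the patent abstract describes, phrased in terms of bit errors between an enrolled PUF response and a later re-derived response. The Hamming-distance framing, the thresholds, and all names are illustrative assumptions, not the claimed implementation.

```python
# Minimal sketch: flag PUF drift when the bit-error count between the enrolled
# (reference) response and a freshly derived response approaches the error
# correction strength registered at enrollment.  Thresholds and the
# Hamming-distance framing are illustrative assumptions, not the patent's claims.

def hamming(a: int, b: int, width: int = 128) -> int:
    return bin((a ^ b) & ((1 << width) - 1)).count("1")

def evaluate_drift(reference: int, fresh: int,
                   correction_strength: int, drift_threshold: int) -> str:
    errors = hamming(reference, fresh)
    if errors >= correction_strength:
        return "uncorrectable: re-enroll PUF"
    if errors >= drift_threshold:
        return "drift detected: update reference/helper data"
    return "ok"

reference_response = 0x5A5A_F0F0_3C3C_9999_5A5A_F0F0_3C3C_9999
fresh_response = reference_response ^ 0b1011_0001_0000_1000   # a few flipped bits
print(evaluate_drift(reference_response, fresh_response,
                     correction_strength=16, drift_threshold=5))
```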
First-year Analysis of the Operating Room Black Box Study.
Jung, James J; Jüni, Peter; Lebovic, Gerald; Grantcharov, Teodor
2018-06-18
To characterize intraoperative errors, events, and distractions, and measure technical skills of surgeons in minimally invasive surgery practice. Adverse events in the operating room (OR) are common contributors of morbidity and mortality in surgical patients. Adverse events often occur due to deviations in performance and environmental factors. Although comprehensive intraoperative data analysis and transparent disclosure have been advocated to better understand how to improve surgical safety, they have rarely been done. We conducted a prospective cohort study in 132 consecutive patients undergoing elective laparoscopic general surgery at an academic hospital during the first year after the definite implementation of a multiport data capture system called the OR Black Box to identify intraoperative errors, events, and distractions. Expert analysts characterized intraoperative distractions, errors, and events, and measured trainee involvement as main operator. Technical skills were compared, crude and risk-adjusted, among the attending surgeon and trainees. Auditory distractions occurred a median of 138 times per case [interquartile range (IQR) 96-190]. At least 1 cognitive distraction appeared in 84 cases (64%). Medians of 20 errors (IQR 14-36) and 8 events (IQR 4-12) were identified per case. Both errors and events occurred often in dissection and reconstruction phases of operation. Technical skills of residents were lower than those of the attending surgeon (P = 0.015). During elective laparoscopic operations, frequent intraoperative errors and events, variation in surgeons' technical skills, and a high amount of environmental distractions were identified using the OR Black Box.
Zimmerman, Dale L; Fang, Xiangming; Mazumdar, Soumya; Rushton, Gerard
2007-01-10
The assignment of a point-level geocode to subjects' residences is an important data assimilation component of many geographic public health studies. Often, these assignments are made by a method known as automated geocoding, which attempts to match each subject's address to an address-ranged street segment georeferenced within a streetline database and then interpolate the position of the address along that segment. Unfortunately, this process results in positional errors. Our study sought to model the probability distribution of positional errors associated with automated geocoding and E911 geocoding. Positional errors were determined for 1423 rural addresses in Carroll County, Iowa as the vector difference between each 100%-matched automated geocode and its true location as determined by orthophoto and parcel information. Errors were also determined for 1449 60%-matched geocodes and 2354 E911 geocodes. Huge (> 15 km) outliers occurred among the 60%-matched geocoding errors; outliers occurred for the other two types of geocoding errors also but were much smaller. E911 geocoding was more accurate (median error length = 44 m) than 100%-matched automated geocoding (median error length = 168 m). The empirical distributions of positional errors associated with 100%-matched automated geocoding and E911 geocoding exhibited a distinctive Greek-cross shape and had many other interesting features that were not capable of being fitted adequately by a single bivariate normal or t distribution. However, mixtures of t distributions with two or three components fit the errors very well. Mixtures of bivariate t distributions with few components appear to be flexible enough to fit many positional error datasets associated with geocoding, yet parsimonious enough to be feasible for nascent applications of measurement-error methodology to spatial epidemiology.
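A minimal sketch of the positional-error computation described above: each error is the vector difference between the geocoded point and the reference (true) location, and accuracy is summarized by the median error length. The coordinates below are made-up placeholders in a projected, metric coordinate system, not the study's addresses.

```python
import numpy as np

# Minimal sketch: positional error as the vector difference between a geocoded
# point and its reference location, summarized by median error length.
# Coordinates are made-up placeholders in a projected (metres) coordinate system.

geocoded = np.array([[1000.0, 2000.0], [1500.0, 2100.0], [900.0, 2500.0]])
reference = np.array([[1120.0, 2050.0], [1480.0, 2080.0], [700.0, 2440.0]])

error_vectors = geocoded - reference            # eastward/northward error components
error_lengths = np.linalg.norm(error_vectors, axis=1)

print("error vectors (m):\n", error_vectors)
print("median error length (m):", np.median(error_lengths).round(1))
```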
Error begat error: design error analysis and prevention in social infrastructure projects.
Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M
2012-09-01
Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is proposed and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in combination to prevent design errors from occurring and so ensure that safety and project performance are improved. Copyright © 2011. Published by Elsevier Ltd.
Computerized N-acetylcysteine physician order entry by template protocol for acetaminophen toxicity.
Thompson, Trevonne M; Lu, Jenny J; Blackwood, Louisa; Leikin, Jerrold B
2011-01-01
Some medication dosing protocols are logistically complex for traditional physician ordering. The use of computerized physician order entry (CPOE) with templates, or order sets, may be useful to reduce medication administration errors. This study evaluated the rate of medication administration errors using CPOE order sets for N-acetylcysteine (NAC) use in treating acetaminophen poisoning. An 18-month retrospective review of computerized inpatient pharmacy records for NAC use was performed. All patients who received NAC for the treatment of acetaminophen poisoning were included. Each record was analyzed to determine the form of NAC given and whether an administration error occurred. In the 82 cases of acetaminophen poisoning in which NAC was given, no medication administration errors were identified. Oral NAC was given in 31 (38%) cases; intravenous NAC was given in 51 (62%) cases. In this retrospective analysis of N-acetylcysteine administration using computerized physician order entry and order sets, no medication administration errors occurred. CPOE is an effective tool in safely executing complicated protocols in an inpatient setting.
Nowak, Michał S; Goś, Roman; Smigielski, Janusz
2008-01-01
To determine the prevalence of refractive errors in a population. A retrospective review of medical examinations for entry to military service from The Area Military Medical Commission in Lodz was conducted. Ophthalmic examinations were performed, and statistical analysis was used to review the results. Statistical analysis revealed that refractive errors occurred in 21.68% of the population. The most common refractive error was myopia. 1) The most common ocular diseases are refractive errors, especially myopia (21.68% in total). 2) Refractive surgery and contact lenses should be allowed as the possible correction of refractive errors for military service.
Panesar, Sukhmeet S; Netuveli, Gopalakrishnan; Carson-Stevens, Andrew; Javad, Sundas; Patel, Bhavesh; Parry, Gareth; Donaldson, Liam J; Sheikh, Aziz
2013-11-21
The Orthopaedic Error Index for hospitals aims to provide the first national assessment of the relative safety of provision of orthopaedic surgery. Cross-sectional study (retrospective analysis of records in a database). The National Reporting and Learning System is the largest national repository of patient-safety incidents in the world with over eight million error reports. It offers a unique opportunity to develop novel approaches to enhancing patient safety, including investigating the relative safety of different healthcare providers and specialties. We extracted all orthopaedic error reports from the system over 1 year (2009-2010). The Orthopaedic Error Index was calculated as a sum of the error propensity and severity. All relevant hospitals offering orthopaedic surgery in England were then ranked by this metric to identify possible outliers that warrant further attention. 155 hospitals reported 48 971 orthopaedic-related patient-safety incidents. The mean Orthopaedic Error Index was 7.09/year (SD 2.72); five hospitals were identified as outliers. Three of these units were specialist tertiary hospitals carrying out complex surgery; the remaining two outlier hospitals had unusually high Orthopaedic Error Indexes: mean 14.46 (SD 0.29) and 15.29 (SD 0.51), respectively. The Orthopaedic Error Index has enabled identification of hospitals that may be putting patients at disproportionate risk of orthopaedic-related iatrogenic harm and which therefore warrant further investigation. It provides the prototype of a summary index of harm to enable surveillance of unsafe care over time across institutions. Further validation and scrutiny of the method will be required to assess its potential to be extended to other hospital specialties in the UK and also internationally to other health systems that have comparable national databases of patient-safety incidents.
The potential public health issues related to exposure to natural asbestos deposits (commonly termed naturally occurring asbestos, NOA) have gained the regulatory and media spotlight in recent years. Arguably the most well known example is Libby, Montana, the site of the largest ...
Earthquakes, September-October 1978
Person, W.J.
1979-01-01
The months of September and October were somewhat quiet seismically speaking. One major earthquake, magnitude (M) 7.7, occurred in Iran on September 16. In Germany, a magnitude 5.0 earthquake caused damage and considerable alarm to many people in parts of that country. In the United States, the largest earthquake occurred along the California-Nevada border region.
Kessels-Habraken, Marieke; Van der Schaaf, Tjerk; De Jonge, Jan; Rutte, Christel
2010-05-01
Medical errors in health care still occur frequently. Unfortunately, errors cannot be completely prevented and 100% safety can never be achieved. Therefore, in addition to error reduction strategies, health care organisations could also implement strategies that promote timely error detection and correction. Reporting and analysis of so-called near misses - usually defined as incidents without adverse consequences for patients - are necessary to gather information about successful error recovery mechanisms. This study establishes the need for a clearer and more consistent definition of near misses to enable large-scale reporting and analysis in order to obtain such information. Qualitative incident reports and interviews were collected on four units of two Dutch general hospitals. Analysis of the 143 accompanying error handling processes demonstrated that different incident types each provide unique information about error handling. Specifically, error handling processes underlying incidents that did not reach the patient differed significantly from those of incidents that reached the patient, irrespective of harm, because of successful countermeasures that had been taken after error detection. We put forward two possible definitions of near misses and argue that, from a practical point of view, the optimal definition may be contingent on organisational context. Both proposed definitions could yield large-scale reporting of near misses. Subsequent analysis could enable health care organisations to improve the safety and quality of care proactively by (1) eliminating failure factors before real accidents occur, (2) enhancing their ability to intercept errors in time, and (3) improving their safety culture. Copyright 2010 Elsevier Ltd. All rights reserved.
Huynh, Chi; Wong, Ian C K; Correa-West, Jo; Terry, David; McCarthy, Suzanne
2017-04-01
Since the publication of To Err Is Human: Building a Safer Health System in 1999, there has been much research conducted into the epidemiology, nature and causes of medication errors in children, from prescribing and supply to administration. It is reassuring to see growing evidence of improving medication safety in children; however, based on media reports, it can be seen that serious and fatal medication errors still occur. This critical opinion article examines the problem of medication errors in children and provides recommendations for research, training of healthcare professionals and a culture shift towards dealing with medication errors. There are three factors that we need to consider to unravel what is missing and why fatal medication errors still occur. (1) Who is involved and affected by the medication error? (2) What factors hinder staff and organisations from learning from mistakes? Does the fear of litigation and criminal charges deter healthcare professionals from voluntarily reporting medication errors? (3) What are the educational needs required to prevent medication errors? It is important to educate future healthcare professionals about medication errors and human factors to prevent these from happening. Further research is required to apply aviation's 'black box' principles in healthcare to record and learn from near misses and errors to prevent future events. There is an urgent need for the black box investigations to be published and made public for the benefit of other organisations that may have similar potential risks for adverse events. International sharing of investigations and learning is also needed.
Role of Grammatical Gender and Semantics in German Word Production
ERIC Educational Resources Information Center
Vigliocco, Gabriella; Vinson, David P.; Indefrey, Peter; Levelt, Willem J. M.; Hellwig, Frauke
2004-01-01
Semantic substitution errors (e.g., saying "arm" when "leg" is intended) are among the most common types of errors occurring during spontaneous speech. It has been shown that grammatical gender of German target nouns is preserved in the errors (E. Mane, 1999). In 3 experiments, the authors explored different accounts of the grammatical gender…
ERIC Educational Resources Information Center
Harshman, Jordan; Yezierski, Ellen
2016-01-01
Determining the error of measurement is a necessity for researchers engaged in bench chemistry, chemistry education research (CER), and a multitude of other fields. Discussions regarding what constructs measurement error entails and how to best measure them have occurred, but the critiques about traditional measures have yielded few alternatives.…
The Frame Constraint on Experimentally Elicited Speech Errors in Japanese
ERIC Educational Resources Information Center
Saito, Akie; Inoue, Tomoyoshi
2017-01-01
The so-called syllable position effect in speech errors has been interpreted as reflecting constraints posed by the frame structure of a given language, which is separately operating from linguistic content during speech production. The effect refers to the phenomenon that when a speech error occurs, replaced and replacing sounds tend to be in the…
A Linguistic Analysis of Errors in the Compositions of Arba Minch University Students
ERIC Educational Resources Information Center
Tizazu, Yoseph
2014-01-01
This study reports the dominant linguistic errors that occur in the written productions of Arba Minch University (hereafter AMU) students. A sample of paragraphs was collected for two years from students ranging from freshmen to graduating level. The sampled compositions were then coded, described, and explained using error analysis method. Both…
Idea Evaluation: Error in Evaluating Highly Original Ideas
ERIC Educational Resources Information Center
Licuanan, Brian F.; Dailey, Lesley R.; Mumford, Michael D.
2007-01-01
Idea evaluation is a critical aspect of creative thought. However, a number of errors might occur in the evaluation of new ideas. One error commonly observed is the tendency to underestimate the originality of truly novel ideas. In the present study, an attempt was made to assess whether analysis of the process leading to the idea generation and…
ERIC Educational Resources Information Center
Ramos, Erica; Alfonso, Vincent C.; Schermerhorn, Susan M.
2009-01-01
The interpretation of cognitive test scores often leads to decisions concerning the diagnosis, educational placement, and types of interventions used for children. Therefore, it is important that practitioners administer and score cognitive tests without error. This study assesses the frequency and types of examiner errors that occur during the…
Code of Federal Regulations, 2010 CFR
2010-04-01
... this paragraph (b)(2) include the following— (i) A mathematical error; (ii) An entry on a document that... errors or omissions that occurred before the publication of these regulations. Any reasonable method used... February 24, 1994, will be considered proper, provided that the method is consistent with the rules of...
The Impact of Bar Code Medication Administration Technology on Reported Medication Errors
ERIC Educational Resources Information Center
Holecek, Andrea
2011-01-01
The use of bar-code medication administration technology is on the rise in acute care facilities in the United States. The technology is purported to decrease medication errors that occur at the point of administration. How significantly this technology affects actual rate and severity of error is unknown. This descriptive, longitudinal research…
A recent Cleanroom success story: The Redwing project
NASA Technical Reports Server (NTRS)
Hausler, Philip A.
1992-01-01
Redwing is the largest completed Cleanroom software engineering project in IBM, both in terms of lines of code and project staffing. The product provides a decision-support facility that utilizes artificial intelligence (AI) technology for predicting and preventing complex operating problems in an MVS environment. The project used the Cleanroom process for development and realized a defect rate of 2.6 errors/KLOC, measured from first execution. This represents the total amount of errors that were found in testing and installation at three field test sites. Development productivity was 486 LOC/PM, which included all development labor expended in design specification through completion of incremental testing. In short, the Redwing team produced a complex systems software product with an extraordinarily low error rate, while maintaining high productivity. All of this was accomplished by a project team using Cleanroom for the first time. An 'introductory implementation' of Cleanroom was defined and used on Redwing. This paper describes the quality and productivity results, the Redwing project, and how Cleanroom was implemented.
Prism adaptation in alternately exposed hands.
Redding, Gordon M; Wallace, Benjamin
2013-08-01
We assessed intermanual transfer of the proprioceptive realignment aftereffects of prism adaptation in right-handers by examining alternate target pointing with the two hands for 40 successive trials, 20 with each hand. Adaptation for the right hand was not different as a function of exposure sequence order or postexposure test order, in contrast with adaptation for the left hand. Adaptation was greater for the left hand when the right hand started the alternate pointing than when the sequence of target-pointing movements started with the left hand. Also, the largest left-hand adaptation appeared when that hand was tested first after exposure. Terminal error during exposure varied in cycles for the two hands, converging on zero when the right hand led, but no difference appeared between the two hands when the left hand led. These results suggest that transfer of proprioceptive realignment occurs from the right to the left hand during both exposure and postexposure testing. Such transfer reflects the process of maintaining spatial alignment between the two hands. Normally, the left hand appears to be calibrated with the right-hand spatial map, and when the two hands are misaligned, the left-hand spatial map is realigned with the right-hand spatial map.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bishop, Joseph E.; Emery, John M.; Battaile, Corbett C.
Two fundamental approximations in macroscale solid-mechanics modeling are (1) the assumption of scale separation in homogenization theory and (2) the use of a macroscopic plasticity material model that represents, in a mean sense, the multitude of inelastic processes occurring at the microscale. With the goal of quantifying the errors induced by these approximations on engineering quantities of interest, we perform a set of direct numerical simulations (DNS) in which polycrystalline microstructures are embedded throughout a macroscale structure. The largest simulations model over 50,000 grains. The microstructure is idealized using a randomly close-packed Voronoi tessellation in which each polyhedral Voronoi cell represents a grain. A face-centered cubic crystal-plasticity model is used to model the mechanical response of each grain. The overall grain structure is equiaxed, and each grain is randomly oriented with no overall texture. The detailed results from the DNS simulations are compared to results obtained from conventional macroscale simulations that use homogeneous isotropic plasticity models. The macroscale plasticity models are calibrated using a representative volume element of the idealized microstructure. Furthermore, we envision that DNS modeling will be used to gain new insights into the mechanics of material deformation and failure.
Splash-cup plants accelerate raindrops to disperse seeds.
Amador, Guillermo J; Yamada, Yasukuni; McCurley, Matthew; Hu, David L
2013-02-01
The conical flowers of splash-cup plants Chrysosplenium and Mazus catch raindrops opportunistically, exploiting the subsequent splash to disperse their seeds. In this combined experimental and theoretical study, we elucidate their mechanism for maximizing dispersal distance. We fabricate conical plant mimics using three-dimensional printing, and use high-speed video to visualize splash profiles and seed travel distance. Drop impacts that strike the cup off-centre achieve the largest dispersal distances of up to 1 m. Such distances are achieved because splash speeds are three to five times faster than incoming drop speeds, and so faster than the traditionally studied splashes occurring upon horizontal surfaces. This anomalous splash speed is because of the superposition of two components of momentum, one associated with a component of the drop's motion parallel to the splash-cup surface, and the other associated with film spreading induced by impact with the splash-cup. Our model incorporating these effects predicts the observed dispersal distance within 6-18% error. According to our experiments, the optimal cone angle for the splash-cup is 40°, a value consistent with the average of five species of splash-cup plants. This optimal angle arises from the competing effects of velocity amplification and projectile launching angle.
Changes of precipitation extremes indices in São Francisco River Basin, Brazil from 1947 to 2012
NASA Astrophysics Data System (ADS)
Bezerra, Bergson G.; Silva, Lindenberg L.; Santos e Silva, Claudio M.; de Carvalho, Gilvani Gomes
2018-02-01
The São Francisco River is strategically important due to its hydroelectric potential and because it contains the largest water body in the Brazilian semiarid region, supplying water for irrigation and for urban and industrial activities. To characterize changes in the precipitation patterns over the São Francisco River basin, 11 extreme precipitation indices as defined by the joint WMO/CCI/ETCCDMI/CLIVAR project were calculated using daily observations from 59 rain gauges during the 1947-2012 period. The extreme climatic indices were calculated with the RClimDex software, which performs an exhaustive data quality control intended to identify spurious errors and dataset inconsistencies. Weak and significant regional changes were observed in both the CDD and SDII indices. Most precipitation extremes indices decreased, but without statistical significance. The spatial analysis of the indices did not show clear regional changes due to the complexity of the hydrometeorology of the region. In some cases, two rainfall stations exhibited opposite trends with the same significance level although they are separated by a few kilometers. This occurred more frequently in the Lower-Middle São Francisco, probably in association with intense land cover change over the last decades in this region.
Precision controllability of the YF-17 airplane
NASA Technical Reports Server (NTRS)
Sisk, T. R.; Mataeny, N. W.
1980-01-01
A flying qualities evaluation conducted on the YF-17 airplane permitted assessment of its precision controllability in the transonic flight regime over the allowable angle of attack range. The precision controllability (tailchase tracking) study was conducted in constant-g and windup turn tracking maneuvers with the command augmentation system (CAS) on, automatic maneuver flaps, and the caged pipper gunsight depressed 70 mils. This study showed that the YF-17 airplane tracks essentially as well at 7 g's to 8 g's as earlier fighters did at 4 g's to 5 g's before they encountered wing rock. The pilots considered the YF-17 airplane one of the best tracking airplanes they had flown. Wing rock at the higher angles of attack degraded tracking precision, and lack of control harmony made precision controllability more difficult. The revised automatic maneuver flap schedule incorporated in the airplane at the time of the tests did not appear to be optimum. The largest tracking errors and greatest pilot workload occurred at high normal load factors at low angles of attack. The pilots reported that the high-g maneuvers caused some tunnel vision and that they found it difficult to think clearly after repeated maneuvers.
Residue-Specific α-Helix Propensities from Molecular Simulation
Best, Robert B.; de Sancho, David; Mittal, Jeetain
2012-01-01
Formation of α-helices is a fundamental process in protein folding and assembly. By studying helix formation in molecular simulations of a series of alanine-based peptides, we obtain the temperature-dependent α-helix propensities of all 20 naturally occurring residues with two recent additive force fields, Amber ff03w and Amber ff99SB∗. Encouragingly, we find that the overall helix propensity of many residues is captured well by both energy functions, with Amber ff99SB∗ being more accurate. Nonetheless, there are some residues that deviate considerably from experiment, which can be attributed to two aspects of the energy function: i), variations of the charge model used to determine the atomic partial charges, with residues whose backbone charges differ most from alanine tending to have the largest error; ii), side-chain torsion potentials, as illustrated by the effect of modifications to the torsion angles of I, L, D, N. We find that constrained refitting of residue charges for charged residues in Amber ff99SB∗ significantly improves their helix propensity. The resulting parameters should more faithfully reproduce helix propensities in simulations of protein folding and disordered proteins. PMID:22455930
Analysis of the PLL phase error in presence of simulated ionospheric scintillation events
NASA Astrophysics Data System (ADS)
Forte, B.
2012-01-01
The functioning of standard phase locked loops (PLL), including those used to track radio signals from Global Navigation Satellite Systems (GNSS), is based on a linear approximation which holds in presence of small phase errors. Such an approximation represents a reasonable assumption in most of the propagation channels. However, in presence of a fading channel the phase error may become large, making the linear approximation no longer valid. The PLL is then expected to operate in a non-linear regime. As PLLs are generally designed and expected to operate in their linear regime, whenever the non-linear regime comes into play, they will experience a serious limitation in their capability to track the corresponding signals. The phase error and the performance of a typical PLL embedded into a commercial multiconstellation GNSS receiver were analyzed in presence of simulated ionospheric scintillation. Large phase errors occurred during scintillation-induced signal fluctuations although cycle slips only occurred during the signal re-acquisition after a loss of lock. Losses of lock occurred whenever the signal faded below the minimum C/N0 threshold allowed for tracking. The simulations were performed for different signals (GPS L1C/A, GPS L2C, GPS L5 and Galileo L1). L5 and L2C proved to be weaker than L1. It appeared evident that the conditions driving the PLL phase error in the specific case of GPS receivers in presence of scintillation-induced signal perturbations need to be evaluated in terms of the combination of the minimum C/N0 tracking threshold, lock detector thresholds, possible cycle slips in the tracking PLL and accuracy of the observables (i.e. the error propagation onto the observables stage).
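The linear approximation referred to above is the usual small-error expansion of the phase detector characteristic. A sketch of that step, stated as generic textbook PLL theory rather than this paper's specific receiver model, is:

```latex
% Generic PLL phase-detector linearization (textbook form, not the paper's model).
% For a sinusoidal phase detector with gain K_d and phase error \varphi_e:
u_d = K_d \sin(\varphi_e) \;\approx\; K_d\,\varphi_e ,
\qquad |\varphi_e| \ll 1 .
% Deep fades drive \varphi_e large, \sin(\varphi_e) departs from \varphi_e,
% and the loop enters the non-linear regime, risking cycle slips and loss of
% lock once C/N_0 drops below the tracking threshold.
```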
Listening to the 2011 magnitude 9.0 Tohoku-Oki, Japan, earthquake
Peng, Zhigang; Aiken, Chastity; Kilb, Debi; Shelly, David R.; Enescu, Bogdan
2012-01-01
The magnitude 9.0 Tohoku-Oki, Japan, earthquake on 11 March 2011 is the largest earthquake to date in Japan’s modern history and is ranked as the fourth largest earthquake in the world since 1900. This earthquake occurred within the northeast Japan subduction zone (Figure 1), where the Pacific plate is subducting beneath the Okhotsk plate at a rate of ∼8–9 cm/yr (DeMets et al. 2010). This type of extremely large earthquake within a subduction zone is generally termed a “megathrust” earthquake. Strong shaking from this magnitude 9 earthquake engulfed the entire Japanese Islands, reaching a maximum acceleration ∼3 times that of gravity (3 g). Two days prior to the main event, a foreshock sequence occurred, including one earthquake of magnitude 7.2. Following the main event, numerous aftershocks occurred around the main slip region; the largest of these was magnitude 7.9. The entire foreshocks-mainshock-aftershocks sequence was well recorded by thousands of sensitive seismometers and geodetic instruments across Japan, resulting in the best-recorded megathrust earthquake in history. This devastating earthquake resulted in significant damage and high death tolls caused primarily by the associated large tsunami. This tsunami reached heights of more than 30 m, and inundation propagated inland more than 5 km from the Pacific coast, which also caused a nuclear crisis that is still affecting people’s lives in certain regions of Japan.
Study on the total amount control of atmospheric pollutant based on GIS.
Wang, Jian-Ping; Guo, Xi-Kun
2005-08-01
To provide effective environmental management for total amount control of atmospheric pollutants. An atmospheric diffusion model of sulfur dioxide at the earth's surface was established and tested in Shantou of Guangdong Province on the basis of an overall assessment of the regional natural environment, social and economic state of development, pollution sources and atmospheric environmental quality. Compared with actual monitoring results in the study region, simulated values fell within a factor-of-two error range and were evenly distributed on the two sides of the monitored values. Using the largest-emission model method, the predicted maximum emission of sulfur dioxide would be 54,279.792 tons per year in 2010. The mathematical model established and revised on the basis of GIS is more rational and better suited to the regional characteristics of total amount control of air pollutants.
Method and apparatus for detecting timing errors in a system oscillator
Gliebe, Ronald J.; Kramer, William R.
1993-01-01
A method of detecting timing errors in a system oscillator for an electronic device, such as a power supply, includes the step of comparing a system oscillator signal with a delayed generated signal and generating a signal representative of the timing error when the system oscillator signal is not identical to the delayed signal. An LED indicates to an operator that a timing error has occurred. A hardware circuit implements the above-identified method.
Spectral purity study for IPDA lidar measurement of CO2
NASA Astrophysics Data System (ADS)
Ma, Hui; Liu, Dong; Xie, Chen-Bo; Tan, Min; Deng, Qian; Xu, Ji-Wei; Tian, Xiao-Min; Wang, Zhen-Zhu; Wang, Bang-Xin; Wang, Ying-Jian
2018-02-01
High-sensitivity, globally covered observation of carbon dioxide (CO2) is expected from space-borne integrated path differential absorption (IPDA) lidar, which has been designed as the next-generation measurement. Stringent precision of space-borne CO2 data, for example 1 ppm or better, is required to address the largest number of carbon cycle science questions. Spectral purity, which is defined as the ratio of effective absorbed energy to the total energy transmitted, is one of the most important system parameters of IPDA lidar and directly influences the precision of CO2. Because the column-averaged dry-air mixing ratio of CO2 is inferred from a comparison of the two echo pulse signals, any unexpected spectrally broadband background radiation accompanying the laser output would introduce a significant systematic error. In this study, the spectral energy density line shape and the spectral impurity line shape are modeled as Lorentz line shapes for the simulation, and the latter is assumed to be unabsorbed by CO2. An error equation is deduced from IPDA detection theory for calculating the system error caused by spectral impurity. For a spectral purity of 99%, the induced error could reach up to 8.97 ppm.
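A generic illustration, not the paper's exact error equation, of how spectral impurity biases the retrieval: the unabsorbed fraction of the on-line pulse is modeled as passing through with the off-line transmission, and the resulting relative error in the differential absorption optical depth maps roughly one-to-one into the XCO2 error. The purity, optical depth, and column amount used below are assumptions.

```python
import math

# Generic sketch (not the paper's exact error equation): bias in the retrieved
# differential absorption optical depth (DAOD) when a fraction (1 - purity) of
# the on-line pulse energy is spectrally broadband and unabsorbed by CO2.
# Purity, on-line DAOD, and the 400 ppm column amount are illustrative assumptions.

purity = 0.99          # fraction of on-line energy at the intended wavelength
daod_true = 0.6        # assumed true one-way differential absorption optical depth
xco2_true = 400.0      # assumed column-averaged CO2 [ppm]

# Measured on/off ratio: pure part attenuated by exp(-daod), impure part not.
ratio_measured = purity * math.exp(-daod_true) + (1.0 - purity)
daod_retrieved = -math.log(ratio_measured)

relative_error = (daod_true - daod_retrieved) / daod_true
print(f"retrieved DAOD = {daod_retrieved:.4f} (true {daod_true})")
print(f"XCO2 bias ~ {relative_error * xco2_true:.2f} ppm")   # order-of-ppm for 1% impurity
```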
Method and apparatus for faulty memory utilization
Cher, Chen-Yong; Andrade Costa, Carlos H.; Park, Yoonho; Rosenburg, Bryan S.; Ryu, Kyung D.
2016-04-19
A method for faulty memory utilization in a memory system includes: obtaining information regarding memory health status of at least one memory page in the memory system; determining an error tolerance of the memory page when the information regarding memory health status indicates that a failure is predicted to occur in an area of the memory system affecting the memory page; initiating a migration of data stored in the memory page when it is determined that the data stored in the memory page is non-error-tolerant; notifying at least one application regarding a predicted operating system failure and/or a predicted application failure when it is determined that data stored in the memory page is non-error-tolerant and cannot be migrated; and notifying at least one application regarding the memory failure predicted to occur when it is determined that data stored in the memory page is error-tolerant.
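A minimal sketch of the decision flow described in the claim is given below. The Page fields and the returned action strings are hypothetical stand-ins chosen for illustration; they do not come from the patent.

```python
from dataclasses import dataclass

# Minimal sketch of the faulty-memory handling policy described above.
# Field names and action strings are illustrative assumptions.

@dataclass
class Page:
    page_id: int
    failure_predicted: bool   # health monitoring predicts a failure affecting this page
    error_tolerant: bool      # the data stored here can tolerate bit errors
    migratable: bool          # the data can be moved to a healthy memory area

def handle_page(page: Page) -> str:
    if not page.failure_predicted:
        return "no action"
    if page.error_tolerant:
        return "notify applications: memory failure predicted (data kept in place)"
    if page.migratable:
        return "migrate data away from the failing memory area"
    return "notify applications: predicted OS/application failure (cannot migrate)"

for p in [Page(1, True, False, True), Page(2, True, False, False), Page(3, True, True, False)]:
    print(p.page_id, "->", handle_page(p))
```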
On the interaction of deaffrication and consonant harmony*
Dinnsen, Daniel A.; Gierut, Judith A.; Morrisette, Michele L.; Green, Christopher R.; Farris-Trimble, Ashley W.
2010-01-01
Error patterns in children’s phonological development are often described as simplifying processes that can interact with one another with different consequences. Some interactions limit the applicability of an error pattern, and others extend it to more words. Theories predict that error patterns interact to their full potential. While specific interactions have been documented for certain pairs of processes, no developmental study has shown that the range of typologically predicted interactions occurs for those processes. To determine whether this anomaly is an accidental gap or a systematic peculiarity of particular error patterns, two commonly occurring processes were considered, namely Deaffrication and Consonant Harmony. Results are reported from a cross-sectional and longitudinal study of 12 children (age 3;0 – 5;0) with functional phonological delays. Three interaction types were attested to varying degrees. The longitudinal results further instantiated the typology and revealed a characteristic trajectory of change. Implications of these findings are explored. PMID:20513256
Blind Braille readers mislocate tactile stimuli.
Sterr, Annette; Green, Lisa; Elbert, Thomas
2003-05-01
In a previous experiment, we observed that blind Braille readers produce errors when asked to identify on which finger of one hand a light tactile stimulus had occurred. With the present study, we aimed to specify the characteristics of this perceptual error in blind and sighted participants. The experiment confirmed that blind Braille readers mislocalised tactile stimuli more often than sighted controls, and that the localisation errors occurred significantly more often at the right reading hand than at the non-reading hand. Most importantly, we discovered that the reading fingers showed the smallest error frequency, but the highest rate of stimulus attribution. The dissociation of perceiving and locating tactile stimuli in the blind suggests altered tactile information processing. Neuroplasticity, changes in tactile attention mechanisms as well as the idea that blind persons may employ different strategies for tactile exploration and object localisation are discussed as possible explanations for the results obtained.
Medication Errors in Patients with Enteral Feeding Tubes in the Intensive Care Unit.
Sohrevardi, Seyed Mojtaba; Jarahzadeh, Mohammad Hossein; Mirzaei, Ehsan; Mirjalili, Mahtabalsadat; Tafti, Arefeh Dehghani; Heydari, Behrooz
2017-01-01
Most patients admitted to Intensive Care Units (ICU) have problems in using oral medication or ingesting solid forms of drugs. Selecting the most suitable dosage form in such patients is a challenge. The current study was conducted to assess the frequency and types of errors of oral medication administration in patients with enteral feeding tubes or suffering swallowing problems. A cross-sectional study was performed in the ICU of Shahid Sadoughi Hospital, Yazd, Iran. Patients were assessed for the incidence and types of medication errors occurring in the process of preparation and administration of oral medicines. Ninety-four patients were involved in this study and 10,250 administrations were observed. In total, 4753 errors occurred among the studied patients. The most commonly used drugs were pantoprazole tablet, piracetam syrup, and losartan tablet. A total of 128 different types of drugs and nine different oral pharmaceutical preparations were prescribed for the patients. Forty-one (35.34%) out of 116 different solid drugs (except effervescent tablets and powders) could be substituted by liquid or injectable forms. The most common error was the wrong time of administration. Errors of wrong dose preparation and administration accounted for 24.04% and 25.31% of all errors, respectively. In this study, at least three-fourths of the patients experienced medication errors. The occurrence of these errors can greatly impair the quality of the patients' pharmacotherapy, and more attention should be paid to this issue.
Evaluation of causes and frequency of medication errors during information technology downtime.
Hanuscak, Tara L; Szeinbach, Sheryl L; Seoane-Vazquez, Enrique; Reichert, Brendan J; McCluskey, Charles F
2009-06-15
The causes and frequency of medication errors occurring during information technology downtime were evaluated. Individuals from a convenience sample of 78 hospitals who were directly responsible for supporting and maintaining clinical information systems (CISs) and automated dispensing systems (ADSs) were surveyed using an online tool between February 2007 and May 2007 to determine if medication errors were reported during periods of system downtime. The errors were classified using the National Coordinating Council for Medication Error Reporting and Prevention severity scoring index. The percentage of respondents reporting downtime was estimated. Of the 78 eligible hospitals, 32 respondents with CIS and ADS responsibilities completed the online survey for a response rate of 41%. For computerized prescriber order entry, patch installations and system upgrades caused an average downtime of 57% over a 12-month period. Lost interface and interface malfunction were reported for centralized and decentralized ADSs, with an average downtime response of 34% and 29%, respectively. The average downtime response was 31% for software malfunctions linked to clinical decision-support systems. Although patient harm did not result from 30 (54%) medication errors, the potential for harm was present for 9 (16%) of these errors. Medication errors occurred during CIS and ADS downtime despite the availability of backup systems and standard protocols to handle periods of system downtime. Efforts should be directed to reduce the frequency and length of downtime in order to minimize medication errors during such downtime.
Physician assistants and the disclosure of medical error.
Brock, Douglas M; Quella, Alicia; Lipira, Lauren; Lu, Dave W; Gallagher, Thomas H
2014-06-01
Evolving state law, professional societies, and national guidelines, including those of the American Medical Association and Joint Commission, recommend that patients receive transparent communication when a medical error occurs. Recommendations for error disclosure typically consist of an explanation that an error has occurred, delivery of an explicit apology, an explanation of the facts around the event, its medical ramifications and how care will be managed, and a description of how similar errors will be prevented in the future. Although error disclosure is widely endorsed in the medical and nursing literature, there is little discussion of the unique role that the physician assistant (PA) might play in these interactions. PAs are trained in the medical model and technically practice under the supervision of a physician. They are also commonly integrated into interprofessional health care teams in surgical and urgent care settings. PA practice is characterized by widely varying degrees of provider autonomy. How PAs should collaborate with physicians in sensitive error disclosure conversations with patients is unclear. With the number of practicing PAs growing rapidly in nearly all domains of medicine, their role in the error disclosure process warrants exploration. The authors call for educational societies and accrediting agencies to support policy to establish guidelines for PA disclosure of error. They encourage medical and PA researchers to explore and report best-practice disclosure roles for PAs. Finally, they recommend that PA educational programs implement trainings in disclosure skills, and hospitals and supervising physicians provide and support training for practicing PAs.
Safe drinking water and waterborne outbreaks.
Moreira, N A; Bondelind, M
2017-02-01
The present work compiles a review on drinking waterborne outbreaks, with the perspective of production and distribution of microbiologically safe water, during 2000-2014. The outbreaks are categorised in raw water contamination, treatment deficiencies and distribution network failure. The main causes for contamination were: for groundwater, intrusion of animal faeces or wastewater due to heavy rain; in surface water, discharge of wastewater into the water source and increased turbidity and colour; at treatment plants, malfunctioning of the disinfection equipment; and for distribution systems, cross-connections, pipe breaks and wastewater intrusion into the network. Pathogens causing the largest number of affected consumers were Cryptosporidium, norovirus, Giardia, Campylobacter, and rotavirus. The largest number of different pathogens was found for the treatment works and the distribution network. The largest number of affected consumers with gastrointestinal illness was for contamination events from a surface water source, while the largest number of individual events occurred for the distribution network.
NASA Astrophysics Data System (ADS)
Uchide, Takahiko; Song, Seok Goo
2018-03-01
The 2016 Gyeongju earthquake (ML 5.8) was the largest instrumentally recorded inland event in South Korea. It occurred in the southeast of the Korean Peninsula and was preceded by a large ML 5.1 foreshock. The aftershock seismicity data indicate that these earthquakes occurred on two closely collocated parallel faults that are oblique to the surface trace of the Yangsan fault. We investigate the rupture properties of these earthquakes using finite-fault slip inversion analyses. The obtained models indicate that the ruptures propagated NNE-ward and SSW-ward for the main shock and the large foreshock, respectively. This indicates that these earthquakes occurred on right-step faults and were initiated around a fault jog. The stress drops were up to 62 and 43 MPa for the main shock and the largest foreshock, respectively. These high stress drops imply high strength excess, which may be overcome by the stress concentration around the fault jog.
Medication errors in a rural hospital.
Madegowda, Bharathi; Hill, Pamela D; Anderson, Mary Ann
2007-06-01
The purpose of this investigation was to compare and contrast three nursing shifts in a small rural Midwestern hospital with regard to the number of reported medication errors, the units on which they occurred, and the types and severity of errors. Results can be beneficial in planning and implementing a quality improvement program in the area of medication administration with the nursing staff.
Pioneer-Venus radio occultation (ORO) data reduction: Profiles of 13 cm absorptivity
NASA Technical Reports Server (NTRS)
Steffes, Paul G.
1990-01-01
In order to characterize possible variations in the abundance and distribution of subcloud sulfuric acid vapor, 13 cm radio occultation signals from 23 orbits that occurred in late 1986 and 1987 (Season 10) and 7 orbits that occurred in 1979 (Season 1) were processed. The data were inverted via inverse Abel transform to produce 13 cm absorptivity profiles. Pressure and temperature profiles obtained with the Pioneer-Venus night probe and the northern probe were used along with the absorptivity profiles to infer upper limits for vertical profiles of the abundance of gaseous H2SO4. In addition to inverting the data, error bars were placed on the absorptivity profiles and H2SO4 abundance profiles using the standard propagation of errors. These error bars were developed by considering the effects of statistical errors only. The profiles show a distinct pattern with regard to latitude which is consistent with latitude variations observed in data obtained during the occultation seasons nos. 1 and 2. However, when compared with the earlier data, the recent occultation studies suggest that the amount of sulfuric acid vapor occurring at and below the main cloud layer may have decreased between early 1979 and late 1986.
Warker, Jill A.
2013-01-01
Adults can rapidly learn artificial phonotactic constraints such as /f/ only occurs at the beginning of syllables by producing syllables that contain those constraints. This implicit learning is then reflected in their speech errors. However, second-order constraints in which the placement of a phoneme depends on another characteristic of the syllable (e.g., if the vowel is /æ/, /f/ occurs at the beginning of syllables and /s/ occurs at the end of syllables but if the vowel is /I/, the reverse is true) require a longer learning period. Two experiments question the transience of second-order learning and whether consolidation plays a role in learning phonological dependencies. Using speech errors as a measure of learning, Experiment 1 investigated the durability of learning, and Experiment 2 investigated the time-course of learning. Experiment 1 found that learning is still present in speech errors a week later. Experiment 2 looked at whether more time in the form of a consolidation period or more experience in the form of more trials was necessary for learning to be revealed in speech errors. Both consolidation and more trials led to learning; however, consolidation provided a more substantial benefit. PMID:22686839
An evaluation of programmed treatment-integrity errors during discrete-trial instruction.
Carroll, Regina A; Kodak, Tiffany; Fisher, Wayne W
2013-01-01
This study evaluated the effects of programmed treatment-integrity errors on skill acquisition for children with an autism spectrum disorder (ASD) during discrete-trial instruction (DTI). In Study 1, we identified common treatment-integrity errors that occur during academic instruction in schools. In Study 2, we simultaneously manipulated 3 integrity errors during DTI. In Study 3, we evaluated the effects of each of the 3 integrity errors separately on skill acquisition during DTI. Results showed that participants either demonstrated slower skill acquisition or did not acquire the target skills when instruction included treatment-integrity errors. © Society for the Experimental Analysis of Behavior.
Tara L. Keyser
2012-01-01
Growth dominance provides a quantitative description of the relative contribution of individual trees to stand growth. Positive dominance occurs when the largest individuals account for a greater proportion of growth period increment than total biomass. Conversely, negative dominance occurs when the smallest trees account for a greater proportion of the growth period...
Designed to chronicle the pollution problem occurring at one of the world's largest Superfund sites and to address the potential of the application of the approach of the recovery of metal resources occurring in the acid mine drainage causing the pollution problem. Acid mine drai...
NASA Technical Reports Server (NTRS)
Almloef, Jan; Deleeuw, Bradley J.; Taylor, Peter R.; Bauschlicher, Charles W., Jr.; Siegbahn, Per
1989-01-01
The requirements for very accurate ab initio quantum chemical prediction of dissociation energies are examined using a detailed investigation of the nitrogen molecule. Although agreement with experiment to within 1 kcal/mol is not achieved even with the most elaborate multireference CI (configuration interaction) wave functions and largest basis sets currently feasible, it is possible to obtain agreement to within about 2 kcal/mol, or 1 percent of the dissociation energy. At this level it is necessary to account for core-valence correlation effects and to include up to h-type functions in the basis. The effect of i-type functions, the use of different reference configuration spaces, and basis set superposition error were also investigated. After discussing these results, the remaining sources of error in our best calculations are examined.
Basilakos, Alexandra; Rorden, Chris; Bonilha, Leonardo; Moser, Dana; Fridriksson, Julius
2015-01-01
Background and Purpose Acquired apraxia of speech (AOS) is a motor speech disorder caused by brain damage. AOS often co-occurs with aphasia, a language disorder in which patients may also demonstrate speech production errors. The overlap of speech production deficits in both disorders has raised questions regarding whether AOS emerges from a unique pattern of brain damage or as a sub-element of the aphasic syndrome. The purpose of this study was to determine whether speech production errors in AOS and aphasia are associated with distinctive patterns of brain injury. Methods Forty-three patients with history of a single left-hemisphere stroke underwent comprehensive speech and language testing. The Apraxia of Speech Rating Scale was used to rate speech errors specific to AOS versus speech errors that can also be associated with AOS and/or aphasia. Localized brain damage was identified using structural MRI, and voxel-based lesion-impairment mapping was used to evaluate the relationship between speech errors specific to AOS, those that can occur in AOS and/or aphasia, and brain damage. Results The pattern of brain damage associated with AOS was most strongly associated with damage to cortical motor regions, with additional involvement of somatosensory areas. Speech production deficits that could be attributed to AOS and/or aphasia were associated with damage to the temporal lobe and the inferior pre-central frontal regions. Conclusion AOS likely occurs in conjunction with aphasia due to the proximity of the brain areas supporting speech and language, but the neurobiological substrate for each disorder differs. PMID:25908457
Basilakos, Alexandra; Rorden, Chris; Bonilha, Leonardo; Moser, Dana; Fridriksson, Julius
2015-06-01
Acquired apraxia of speech (AOS) is a motor speech disorder caused by brain damage. AOS often co-occurs with aphasia, a language disorder in which patients may also demonstrate speech production errors. The overlap of speech production deficits in both disorders has raised questions on whether AOS emerges from a unique pattern of brain damage or as a subelement of the aphasic syndrome. The purpose of this study was to determine whether speech production errors in AOS and aphasia are associated with distinctive patterns of brain injury. Forty-three patients with history of a single left-hemisphere stroke underwent comprehensive speech and language testing. The AOS Rating Scale was used to rate speech errors specific to AOS versus speech errors that can also be associated with both AOS and aphasia. Localized brain damage was identified using structural magnetic resonance imaging, and voxel-based lesion-impairment mapping was used to evaluate the relationship between speech errors specific to AOS, those that can occur in AOS or aphasia, and brain damage. The pattern of brain damage associated with AOS was most strongly associated with damage to cortical motor regions, with additional involvement of somatosensory areas. Speech production deficits that could be attributed to AOS or aphasia were associated with damage to the temporal lobe and the inferior precentral frontal regions. AOS likely occurs in conjunction with aphasia because of the proximity of the brain areas supporting speech and language, but the neurobiological substrate for each disorder differs. © 2015 American Heart Association, Inc.
Effect of bar-code technology on the safety of medication administration.
Poon, Eric G; Keohane, Carol A; Yoon, Catherine S; Ditmore, Matthew; Bane, Anne; Levtzion-Korach, Osnat; Moniz, Thomas; Rothschild, Jeffrey M; Kachalia, Allen B; Hayes, Judy; Churchill, William W; Lipsitz, Stuart; Whittemore, Anthony D; Bates, David W; Gandhi, Tejal K
2010-05-06
Serious medication errors are common in hospitals and often occur during order transcription or administration of medication. To help prevent such errors, technology has been developed to verify medications by incorporating bar-code verification technology within an electronic medication-administration system (bar-code eMAR). We conducted a before-and-after, quasi-experimental study in an academic medical center that was implementing the bar-code eMAR. We assessed rates of errors in order transcription and medication administration on units before and after implementation of the bar-code eMAR. Errors that involved early or late administration of medications were classified as timing errors and all others as nontiming errors. Two clinicians reviewed the errors to determine their potential to harm patients and classified those that could be harmful as potential adverse drug events. We observed 14,041 medication administrations and reviewed 3082 order transcriptions. Observers noted 776 nontiming errors in medication administration on units that did not use the bar-code eMAR (an 11.5% error rate) versus 495 such errors on units that did use it (a 6.8% error rate)--a 41.4% relative reduction in errors (P<0.001). The rate of potential adverse drug events (other than those associated with timing errors) fell from 3.1% without the use of the bar-code eMAR to 1.6% with its use, representing a 50.8% relative reduction (P<0.001). The rate of timing errors in medication administration fell by 27.3% (P<0.001), but the rate of potential adverse drug events associated with timing errors did not change significantly. Transcription errors occurred at a rate of 6.1% on units that did not use the bar-code eMAR but were completely eliminated on units that did use it. Use of the bar-code eMAR substantially reduced the rate of errors in order transcription and in medication administration as well as potential adverse drug events, although it did not eliminate such errors. Our data show that the bar-code eMAR is an important intervention to improve medication safety. (ClinicalTrials.gov number, NCT00243373.) 2010 Massachusetts Medical Society
Berkvens, Rafael; Peremans, Herbert; Weyn, Maarten
2016-10-02
Localization systems are increasingly valuable, but their location estimates are only useful when the uncertainty of the estimate is known. This uncertainty is currently calculated as the location error given a ground truth, which is then used as a static measure in sometimes very different environments. In contrast, we propose the use of the conditional entropy of a posterior probability distribution as a complementary measure of uncertainty. This measure has the advantage of being dynamic, i.e., it can be calculated during localization based on individual sensor measurements, does not require a ground truth, and can be applied to discrete localization algorithms. Furthermore, for every consistent location estimation algorithm, both the location error and the conditional entropy measures must be related, i.e., a low entropy should always correspond with a small location error, while a high entropy can correspond with either a small or large location error. We validate this relationship experimentally by calculating both measures of uncertainty in three publicly available datasets using probabilistic Wi-Fi fingerprinting with eight different implementations of the sensor model. We show that the discrepancy between these measures, i.e., many location estimates having a high location error while simultaneously having a low conditional entropy, is largest for the least realistic implementations of the probabilistic sensor model. Based on the results presented in this paper, we conclude that conditional entropy, being dynamic, complementary to location error, and applicable to both continuous and discrete localization, provides an important extra means of characterizing a localization method.
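The proposed measure can be computed directly from a discrete posterior over candidate locations. The sketch below is a minimal illustration, assuming a small grid of candidate positions and a normalized posterior; the Wi-Fi fingerprinting sensor model itself is not reproduced, and all numbers are synthetic.

```python
import numpy as np

# Assumed setup: candidate locations on a grid and a posterior probability
# distribution over them produced by some localization algorithm.
rng = np.random.default_rng(0)
locations = np.array([(x, y) for x in range(10) for y in range(10)], dtype=float)
posterior = rng.random(len(locations))
posterior /= posterior.sum()                       # normalize to a probability distribution

truth = np.array([4.0, 5.0])                       # hypothetical ground-truth position

# Location error: distance between the truth and the maximum-a-posteriori estimate.
estimate = locations[np.argmax(posterior)]
location_error = np.linalg.norm(estimate - truth)

# Conditional entropy of the posterior (in bits): needs no ground truth and can
# be computed online for every individual measurement.
nonzero = posterior[posterior > 0]
entropy_bits = -np.sum(nonzero * np.log2(nonzero))

print(f"location error   = {location_error:.2f} grid units")
print(f"posterior entropy = {entropy_bits:.2f} bits (max {np.log2(len(locations)):.2f})")
```

A sharply peaked posterior gives low entropy and, for a consistent algorithm, a small location error; a flat posterior gives high entropy regardless of whether the MAP estimate happens to be close to the truth.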
Berkvens, Rafael; Peremans, Herbert; Weyn, Maarten
2016-01-01
Localization systems are increasingly valuable, but their location estimates are only useful when the uncertainty of the estimate is known. This uncertainty is currently calculated as the location error given a ground truth, which is then used as a static measure in sometimes very different environments. In contrast, we propose the use of the conditional entropy of a posterior probability distribution as a complementary measure of uncertainty. This measure has the advantage of being dynamic, i.e., it can be calculated during localization based on individual sensor measurements, does not require a ground truth, and can be applied to discrete localization algorithms. Furthermore, for every consistent location estimation algorithm, both the location error and the conditional entropy measures must be related, i.e., a low entropy should always correspond with a small location error, while a high entropy can correspond with either a small or large location error. We validate this relationship experimentally by calculating both measures of uncertainty in three publicly available datasets using probabilistic Wi-Fi fingerprinting with eight different implementations of the sensor model. We show that the discrepancy between these measures, i.e., many location estimates having a high location error while simultaneously having a low conditional entropy, is largest for the least realistic implementations of the probabilistic sensor model. Based on the results presented in this paper, we conclude that conditional entropy, being dynamic, complementary to location error, and applicable to both continuous and discrete localization, provides an important extra means of characterizing a localization method. PMID:27706099
De Rosario, Helios; Page, Álvaro; Besa, Antonio
2017-09-06
The accurate location of the main axes of rotation (AoR) is a crucial step in many applications of human movement analysis. There are different formal methods to determine the direction and position of the AoR, whose performance varies across studies, depending on the pose and the source of errors. Most methods are based on minimizing squared differences between observed and modelled marker positions or rigid motion parameters, implicitly assuming independent and uncorrelated errors, but the largest error usually results from soft tissue artefacts (STA), which do not have such statistical properties and are not effectively cancelled out by such methods. However, with adequate methods it is possible to assume that STA only account for a small fraction of the observed motion and to obtain explicit formulas through differential analysis that relate STA components to the resulting errors in AoR parameters. In this paper such formulas are derived for three different functional calibration techniques (Geometric Fitting, mean Finite Helical Axis, and SARA), to explain why each technique behaves differently from the others, and to propose strategies to compensate for those errors. These techniques were tested with published data from a sit-to-stand activity, where the true axis was defined using bi-planar fluoroscopy. All the methods were able to estimate the direction of the AoR with an error of less than 5°, whereas there were errors in the location of the axis of 30-40mm. Such location errors could be reduced to less than 17mm by the methods based on equations that use rigid motion parameters (mean Finite Helical Axis, SARA) when the translation component was calculated using the three markers nearest to the axis. Copyright © 2017 Elsevier Ltd. All rights reserved.
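As a simple illustration of the kind of quantities these functional calibration methods estimate, the sketch below recovers the direction and a point of a finite helical axis from two rigid-body poses. It is a textbook construction on ideal, noise-free data, not a reimplementation of the Geometric Fitting, mean Finite Helical Axis, or SARA techniques compared in the paper.

```python
import numpy as np

def rot(axis, angle_rad):
    """Rotation matrix about a unit axis (Rodrigues' formula)."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle_rad) * K + (1 - np.cos(angle_rad)) * (K @ K)

def finite_helical_axis(R1, t1, R2, t2):
    """Direction, a point, and the angle of the helical axis from pose 1 to pose 2."""
    R = R2 @ R1.T
    t = t2 - R @ t1
    angle = np.arccos(np.clip((np.trace(R) - 1) / 2, -1, 1))
    n = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]]) / (2 * np.sin(angle))
    # Point on the axis: minimum-norm solution of (I - R) p = t - (n . t) n
    rhs = t - np.dot(n, t) * n
    p = np.linalg.pinv(np.eye(3) - R) @ rhs
    return n, p, np.degrees(angle)

# Synthetic check: a 30-degree rotation about an axis through (0.1, 0.2, 0.0).
true_dir = np.array([0.0, 0.0, 1.0])
true_point = np.array([0.1, 0.2, 0.0])
R1, t1 = np.eye(3), np.zeros(3)
R2 = rot(true_dir, np.radians(30))
t2 = true_point - R2 @ true_point      # pure rotation about the offset axis
n, p, ang = finite_helical_axis(R1, t1, R2, t2)
print("direction:", np.round(n, 3), " point:", np.round(p, 3), " angle:", round(ang, 1))
```

With soft tissue artefacts present, the differences between the methods arise from how such pose-by-pose estimates are averaged or replaced by a global least squares fit, which is what the error analysis in the paper addresses.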
Atwood, E.L.
1958-01-01
Response bias errors are studied by comparing questionnaire responses from waterfowl hunters using four large public hunting areas with actual hunting data from these areas during two hunting seasons. To the extent that the data permit, the sources of the error in the responses were studied and the contribution of each type to the total error was measured. Response bias errors, including both prestige and memory bias, were found to be very large as compared to non-response and sampling errors. Good fits were obtained with the seasonal kill distribution of the actual hunting data and the negative binomial distribution and a good fit was obtained with the distribution of total season hunting activity and the semi-logarithmic curve. A comparison of the actual seasonal distributions with the questionnaire response distributions revealed that the prestige and memory bias errors are both positive. The comparisons also revealed the tendency for memory bias errors to occur at digit frequencies divisible by five and for prestige bias errors to occur at frequencies which are multiples of the legal daily bag limit. A graphical adjustment of the response distributions was carried out by developing a smooth curve from those frequency classes not included in the predictable biased frequency classes referred to above. Group averages were used in constructing the curve, as suggested by Ezekiel [1950]. The efficiency of the technique described for reducing response bias errors in hunter questionnaire responses on seasonal waterfowl kill is high in large samples. The graphical method is not as efficient in removing response bias errors in hunter questionnaire responses on seasonal hunting activity where an average of 60 percent was removed.
Describing Phonological Paraphasias in Three Variants of Primary Progressive Aphasia.
Dalton, Sarah Grace Hudspeth; Shultz, Christine; Henry, Maya L; Hillis, Argye E; Richardson, Jessica D
2018-03-01
The purpose of this study was to describe the linguistic environment of phonological paraphasias in 3 variants of primary progressive aphasia (semantic, logopenic, and nonfluent) and to describe the profiles of paraphasia production for each of these variants. Discourse samples of 26 individuals diagnosed with primary progressive aphasia were investigated for phonological paraphasias using the criteria established for the Philadelphia Naming Test (Moss Rehabilitation Research Institute, 2013). Phonological paraphasias were coded for paraphasia type, part of speech of the target word, target word frequency, type of segment in error, word position of consonant errors, type of error, and degree of change in consonant errors. Eighteen individuals across the 3 variants produced phonological paraphasias. Most paraphasias were nonword, followed by formal, and then mixed, with errors primarily occurring on nouns and verbs, with relatively few on function words. Most errors were substitutions, followed by addition and deletion errors, and few sequencing errors. Errors were evenly distributed across vowels, consonant singletons, and clusters, with more errors occurring in initial and medial positions of words than in the final position of words. Most consonant errors consisted of only a single-feature change, with few 2- or 3-feature changes. Importantly, paraphasia productions by variant differed from these aggregate results, with unique production patterns for each variant. These results suggest that a system where paraphasias are coded as present versus absent may be insufficient to adequately distinguish between the 3 subtypes of PPA. The 3 variants demonstrate patterns that may be used to improve phenotyping and diagnostic sensitivity. These results should be integrated with recent findings on phonological processing and speech rate. Future research should attempt to replicate these results in a larger sample of participants with longer speech samples and varied elicitation tasks. https://doi.org/10.23641/asha.5558107.
A Regional CO2 Observing System Simulation Experiment for the ASCENDS Satellite Mission
NASA Technical Reports Server (NTRS)
Wang, J. S.; Kawa, S. R.; Eluszkiewicz, J.; Baker, D. F.; Mountain, M.; Henderson, J.; Nehrkorn, T.; Zaccheo, T. S.
2014-01-01
Top-down estimates of the spatiotemporal variations in emissions and uptake of CO2 will benefit from the increasing measurement density brought by recent and future additions to the suite of in situ and remote CO2 measurement platforms. In particular, the planned NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) satellite mission will provide greater coverage in cloudy regions, at high latitudes, and at night than passive satellite systems, as well as high precision and accuracy. In a novel approach to quantifying the ability of satellite column measurements to constrain CO2 fluxes, we use a portable library of footprints (surface influence functions) generated by the WRF-STILT Lagrangian transport model in a regional Bayesian synthesis inversion. The regional Lagrangian framework is well suited to make use of ASCENDS observations to constrain fluxes at high resolution, in this case at 1 degree latitude x 1 degree longitude and weekly for North America. We consider random measurement errors only, modeled as a function of mission and instrument design specifications along with realistic atmospheric and surface conditions. We find that the ASCENDS observations could potentially reduce flux uncertainties substantially at biome and finer scales. At the 1 degree x 1 degree, weekly scale, the largest uncertainty reductions, on the order of 50 percent, occur where and when there is good coverage by observations with low measurement errors and the a priori uncertainties are large. Uncertainty reductions are smaller for a 1.57 micron candidate wavelength than for a 2.05 micron wavelength, and are smaller for the higher of the two measurement error levels that we consider (1.0 ppm vs. 0.5 ppm clear-sky error at Railroad Valley, Nevada). Uncertainty reductions at the annual, biome scale range from 40 percent to 75 percent across our four instrument design cases, and from 65 percent to 85 percent for the continent as a whole. Our uncertainty reductions at various scales are substantially smaller than those from a global ASCENDS inversion on a coarser grid, demonstrating how quantitative results can depend on inversion methodology. The a posteriori flux uncertainties we obtain, ranging from 0.01 to 0.06 Pg C yr-1 across the biomes, would meet requirements for improved understanding of long-term carbon sinks suggested by a previous study.
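The uncertainty-reduction metric used in such OSSEs can be illustrated with a toy Bayesian synthesis inversion. The sketch below assumes a small, made-up problem with a few flux elements, a random linear observation operator, and diagonal prior and measurement-error covariances; the actual WRF-STILT footprints and ASCENDS error model are far more elaborate, so the numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

n_flux, n_obs = 5, 200                  # hypothetical flux elements and column observations
H = rng.random((n_obs, n_flux))         # toy observation operator (surface influence functions)
prior_sd = np.full(n_flux, 1.0)         # a priori flux uncertainty (arbitrary units)
obs_sd = 0.5                            # measurement error (e.g. ppm clear-sky error)

B = np.diag(prior_sd**2)                # prior error covariance
R_inv = np.eye(n_obs) / obs_sd**2       # inverse measurement-error covariance

# Bayesian synthesis inversion: a posteriori covariance of the fluxes.
A = np.linalg.inv(H.T @ R_inv @ H + np.linalg.inv(B))
post_sd = np.sqrt(np.diag(A))

reduction = 1.0 - post_sd / prior_sd    # fractional uncertainty reduction per flux element
print("posterior sd:", np.round(post_sd, 3))
print("uncertainty reduction (%):", np.round(100 * reduction, 1))
```

As in the study, the reduction grows where observation coverage is dense and measurement errors are small relative to the prior uncertainty.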
2010-03-15
Based on Reason’s “Swiss cheese” model of human error causation (1990), Figure 1 describes how an accident is likely to occur when all of the errors, or “holes,” align. Results are presented for the classification of ... . A detailed description of HFACS can be found in Wiegmann and Shappell (2003). (Figure 1: The Swiss cheese model of human error.)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wojahn, Christopher K.
2015-10-20
This HDL code (hereafter referred to as "software") implements circuitry in Xilinx Virtex-5QV Field Programmable Gate Array (FPGA) hardware. This software allows the device to self-check the consistency of its own configuration memory for radiation-induced errors. The software then provides the capability to correct any single-bit errors detected in the memory using the device's inherent circuitry, or reload corrupted memory frames when larger errors occur that cannot be corrected with the device's built-in error correction and detection scheme.
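Although the actual implementation is HDL targeting the Virtex-5QV configuration logic, the scrubbing policy it describes can be sketched in a few lines of Python. The frame representation and the golden-copy comparison below are illustrative assumptions standing in for the device's built-in ECC; this is not the flight code.

```python
# Sketch of the scrubbing policy described above. In real hardware the frame
# ECC syndrome detects and corrects single-bit upsets; here a stored golden
# copy stands in for both error detection and frame reload.

def scrub_frame(frame: list[int], golden: list[int]) -> str:
    flipped = [i for i, (a, b) in enumerate(zip(frame, golden)) if a != b]
    if not flipped:
        return "frame consistent"
    if len(flipped) == 1:                 # single-bit upset: correct in place
        frame[flipped[0]] ^= 1
        return f"corrected single-bit error at bit {flipped[0]}"
    frame[:] = golden                     # multi-bit upset: reload the whole frame
    return f"reloaded frame ({len(flipped)} corrupted bits)"

golden = [0, 1, 1, 0, 1, 0, 0, 1]
print(scrub_frame([0, 1, 1, 0, 1, 0, 0, 1], golden))   # no error
print(scrub_frame([0, 1, 0, 0, 1, 0, 0, 1], golden))   # one bit flipped
print(scrub_frame([1, 1, 0, 0, 1, 0, 1, 1], golden))   # several bits flipped
```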
1984-06-01
An adaptive method of lines with error control (report AD-A142 253, Institute for Physical Science and Technology). Reaction-diffusion processes occur in many branches of biology and physical chemistry. Examples ... to model reaction-diffusion phenomena. The primary goal of this adaptive method is to keep a particular norm of the space discretization error less than a prescribed tolerance.
Automated Classification of Phonological Errors in Aphasic Language
Ahuja, Sanjeev B.; Reggia, James A.; Berndt, Rita S.
1984-01-01
Using heuristically-guided state space search, a prototype program has been developed to simulate and classify phonemic errors occurring in the speech of neurologically-impaired patients. Simulations are based on an interchangeable rule/operator set of elementary errors which represent a theory of phonemic processing faults. This work introduces and evaluates a novel approach to error simulation and classification, it provides a prototype simulation tool for neurolinguistic research, and it forms the initial phase of a larger research effort involving computer modelling of neurolinguistic processes.
Armstrong, Gail E; Dietrich, Mary; Norman, Linda; Barnsteiner, Jane; Mion, Lorraine
Approximately a quarter of medication errors in the hospital occur at the administration phase, which is solely under the purview of the bedside nurse. The purpose of this study was to assess bedside nurses' perceived skills and attitudes about updated safety concepts and examine their impact on medication administration errors and adherence to safe medication administration practices. Findings support the premise that medication administration errors result from an interplay among system-, unit-, and nurse-level factors.
Horizon sensor errors calculated by computer models compared with errors measured in orbit
NASA Technical Reports Server (NTRS)
Ward, K. A.; Hogan, R.; Andary, J.
1982-01-01
Using a computer program to model the earth's horizon and to duplicate the signal processing procedure employed by the ESA (Earth Sensor Assembly), errors due to radiance variation have been computed for a particular time of the year. Errors actually occurring in flight at the same time of year are inferred from integrated rate gyro data for a satellite of the TIROS series of NASA weather satellites (NOAA-A). The predicted performance is compared with actual flight history.
Error quantification of abnormal extreme high waves in Operational Oceanographic System in Korea
NASA Astrophysics Data System (ADS)
Jeong, Sang-Hun; Kim, Jinah; Heo, Ki-Young; Park, Kwang-Soon
2017-04-01
In the winter season, large-height swell-like waves have occurred on the East coast of Korea, causing property damage and loss of human life. It is known that these waves are generated by strong local winds produced by temperate cyclones moving eastward over the East Sea off the Korean Peninsula. Because the waves often occur in clear weather, the damage tends to be especially severe. It is therefore necessary to predict and forecast large-height swell-like waves in order to prevent and respond to coastal damage. In Korea, an operational oceanographic system (KOOS) has been developed by the Korea Institute of Ocean Science and Technology (KIOST). KOOS provides daily 72-hour ocean forecasts of wind, water elevation, sea currents, water temperature, salinity, and waves, computed from meteorological and hydrodynamic models (WRF, ROMS, MOM, and MOHID) as well as wave models (WW-III and SWAN). In order to evaluate model performance and guarantee a certain level of accuracy of the ocean forecasts, a Skill Assessment (SA) system was established as one of the modules in KOOS. Skill assessment is performed by comparing model results with in-situ observation data, and model errors are quantified with skill scores. The statistics used in the skill assessment include measures of both error and correlation, such as the root-mean-square error (RMSE), root-mean-square error percentage (RMSE%), mean bias (MB), correlation coefficient (R), scatter index (SI), circular correlation (CC), and central frequency (CF), which is the frequency with which errors lie within acceptable error criteria. These measures should be used not only to quantify errors but also to improve forecast accuracy by providing interactive feedback. However, for abnormal phenomena such as the high-height swell-like waves on the East coast of Korea, a more advanced and optimized error quantification method is required, one that allows the abnormal waves to be predicted well and the forecast accuracy to be improved by supporting modification of the physics and numerics of the numerical models through sensitivity tests. In this study, we propose an appropriate method of error quantification, especially for abnormally high waves caused by local weather conditions. Furthermore, we describe how the quantified errors contribute to improving wind-wave modeling by applying data assimilation and utilizing reanalysis data.
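The skill statistics listed above are straightforward to compute from paired forecast/observation series. The sketch below is a minimal illustration with synthetic significant-wave-height data; the acceptable-error criterion used for the central frequency (CF) is an assumed example value, not the KOOS threshold, and the RMSE% and SI normalizations follow one common convention.

```python
import numpy as np

rng = np.random.default_rng(2)
obs = 2.0 + rng.random(200) * 4.0            # synthetic observed wave heights (m)
fcst = obs + rng.normal(0.0, 0.4, obs.size)  # synthetic forecasts with random error

err = fcst - obs
rmse = np.sqrt(np.mean(err**2))
rmse_pct = 100.0 * rmse / np.mean(obs)       # RMSE as a percentage of the observed mean
mean_bias = np.mean(err)
corr = np.corrcoef(fcst, obs)[0, 1]
scatter_index = rmse / np.mean(obs)
acceptable = 0.5                              # assumed acceptable error criterion (m)
central_freq = 100.0 * np.mean(np.abs(err) <= acceptable)

print(f"RMSE = {rmse:.2f} m ({rmse_pct:.1f}%)  MB = {mean_bias:+.2f} m")
print(f"R = {corr:.2f}  SI = {scatter_index:.2f}  CF(|err| <= {acceptable} m) = {central_freq:.0f}%")
```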
2008-01-01
... strategies, increasing the prevalence of both hypoglycemia and anemia in the ICU [14–20]. The change in allogeneic blood transfusion practices occurred in ... measurements in samples with low HCT levels [4,5,7,8,12]. The error occurs because decreased red blood cells cause less displacement of plasma, resulting ... Nonlinear component regression was performed because HCT has a nonlinear effect on the accuracy of POC glucometers. A dual-parameter correction factor was ...
NASA Astrophysics Data System (ADS)
Barré, Jérôme; Edwards, David; Worden, Helen; Arellano, Avelino; Gaubert, Benjamin; Da Silva, Arlindo; Lahoz, William; Anderson, Jeffrey
2016-09-01
This paper describes the second phase of an Observing System Simulation Experiment (OSSE) that utilizes the synthetic measurements from a constellation of satellites measuring atmospheric composition from geostationary (GEO) Earth orbit presented in part I of the study. Our OSSE is focused on carbon monoxide observations over North America, East Asia and Europe where most of the anthropogenic sources are located. Here we assess the impact of a potential GEO constellation on constraining northern hemisphere (NH) carbon monoxide (CO) using data assimilation. We show how cloud cover affects the GEO constellation data density with the largest cloud cover (i.e., lowest data density) occurring during Asian summer. We compare the modeled state of the atmosphere (Control Run), before CO data assimilation, with the known "true" state of the atmosphere (Nature Run) and show that our setup provides realistic atmospheric CO fields and emission budgets. Overall, the Control Run underestimates CO concentrations in the northern hemisphere, especially in areas close to CO sources. Assimilation experiments show that constraining CO close to the main anthropogenic sources significantly reduces errors in NH CO compared to the Control Run. We assess the changes in error reduction when only single satellite instruments are available as compared to the full constellation. We find large differences in how measurements for each continental scale observation system affect the hemispherical improvement in long-range transport patterns, especially due to seasonal cloud cover. A GEO constellation will provide the most efficient constraint on NH CO during winter when CO lifetime is longer and increments from data assimilation associated with source regions are advected further around the globe.
Fiber Scrambling for High Precision Spectrographs
NASA Astrophysics Data System (ADS)
Kaplan, Zachary; Spronck, J. F. P.; Fischer, D.
2011-05-01
The detection of Earth-like exoplanets with the radial velocity method requires extreme Doppler precision and long-term stability in order to measure tiny reflex velocities in the host star. Recent planet searches have led to the detection of so-called “super-Earths” (up to a few Earth masses) that induce radial velocity changes of about 1 m/s. However, the detection of true Earth analogs requires a precision of 10 cm/s. One of the largest factors limiting Doppler precision is variation in the Point Spread Function (PSF) from observation to observation due to changes in the illumination of the slit and spectrograph optics. Thus, this stability has become a focus of current instrumentation work. Fiber optics have been used since the 1980s to couple telescopes to high-precision spectrographs, initially for simpler mechanical design and control. However, fiber optics are also naturally efficient scramblers. Scrambling refers to a fiber's ability to produce an output beam independent of its input. Our research is focused on characterizing the scrambling properties of several types of fibers, including circular, square and octagonal fibers. By measuring the intensity distribution after the fiber as a function of input beam position, we can simulate guiding errors that occur at an observatory. Through this, we can determine which fibers produce the most uniform outputs under the most severe guiding errors, improving the PSF and allowing sub-m/s precision. However, extensive testing of fibers of supposedly identical core diameter, length and shape from the same manufacturer has revealed the “personality” of individual fibers. Personality describes differing intensity patterns for supposedly duplicate fibers illuminated identically. Here, we present our results on scrambling characterization as a function of fiber type, while studying individual fiber personality.
Uncertainty estimates of a GRACE inversion modelling technique over Greenland using a simulation
NASA Astrophysics Data System (ADS)
Bonin, Jennifer; Chambers, Don
2013-07-01
The low spatial resolution of GRACE causes leakage, where signals in one location spread out into nearby regions. Because of this leakage, using simple techniques such as basin averages may result in an incorrect estimate of the true mass change in a region. A fairly simple least squares inversion technique can be used to more specifically localize mass changes into a pre-determined set of basins of uniform internal mass distribution. However, the accuracy of these higher resolution basin mass amplitudes has not been determined, nor is it known how the distribution of the chosen basins affects the results. We use a simple `truth' model over Greenland as an example case, to estimate the uncertainties of this inversion method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We determine that an appropriate level of smoothing (300-400 km) and process noise (0.30 cm2 of water) gets the best results. The trends of the Greenland internal basins and Iceland can be reasonably estimated with this method, with average systematic errors of 3.5 cm yr-1 per basin. The largest mass losses found from GRACE RL04 occur in the coastal northwest (-19.9 and -33.0 cm yr-1) and southeast (-24.2 and -27.9 cm yr-1), with small mass gains (+1.4 to +7.7 cm yr-1) found across the northern interior. Acceleration of mass change is measurable at the 95 per cent confidence level in four northwestern basins, but not elsewhere in Greenland. Due to an insufficiently detailed distribution of basins across internal Canada, the trend estimates of Baffin and Ellesmere Islands are expected to be incorrect due to systematic errors caused by the inversion technique.
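The least squares inversion described here can be illustrated with a toy forward model in which each basin's uniform mass change is smeared by a known leakage operator before being "observed". The sketch below uses made-up trends and a random smoothing matrix; it only demonstrates how localized basin amplitudes are recovered from smoothed observations, not the actual GRACE processing, smoothing radius, or process-noise handling.

```python
import numpy as np

rng = np.random.default_rng(3)

n_basins, n_obs = 6, 120
true_trend = np.array([-20.0, -30.0, -25.0, 3.0, 6.0, 1.0])   # toy basin trends (cm/yr)

# Toy leakage operator: each smoothed observation mixes the signals of all basins.
G = rng.random((n_obs, n_basins))
G /= G.sum(axis=1, keepdims=True)
obs = G @ true_trend + rng.normal(0.0, 1.0, n_obs)             # smoothed, noisy observations

# Least squares inversion for basin amplitudes of uniform internal mass change.
est, *_ = np.linalg.lstsq(G, obs, rcond=None)
print("true:     ", true_trend)
print("recovered:", np.round(est, 1))
```

The quality of the recovery depends on how distinguishable the basin footprints are, which is the point made in the abstract about the choice and distribution of basins.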
Version 1.3 AIM SOFIE measured methane (CH4): Validation and seasonal climatology
NASA Astrophysics Data System (ADS)
Rong, P. P.; Russell, J. M.; Marshall, B. T.; Siskind, D. E.; Hervig, M. E.; Gordley, L. L.; Bernath, P. F.; Walker, K. A.
2016-11-01
The V1.3 methane (CH4) measured by the Aeronomy of Ice in the Mesosphere (AIM) Solar Occultation for Ice Experiment (SOFIE) instrument is validated in the vertical range of 25-70 km. The random error for SOFIE CH4 is 0.1-1% up to 50 km and degrades to 9% at ˜ 70 km. The systematic error remains at 4% throughout the stratosphere and lower mesosphere. Comparisons with CH4 data taken by the SCISAT Atmospheric Chemistry Experiment-Fourier Transform Spectrometer (ACE-FTS) and the Envisat Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) show an agreement within 15% in the altitude range 30-60 km. Below 25 km SOFIE CH4 is systematically higher (≥20%), while above 65 km it is lower by a similar percentage. The sign change from the positive to negative bias occurs between 55 km and 60 km (or 40 km and 45 km) in the Northern (or Southern) Hemisphere. Methane, H2O, and 2CH4 + H2O yearly differences from their values in 2009 are examined using SOFIE and MIPAS CH4 and the Aura Microwave Limb Sounder (MLS) measured H2O. It is concluded that 2CH4 + H2O is conserved with altitude up to an upper limit between 35 km and 50 km depending on the season. In summer this altitude is higher. In the Northern Hemisphere the difference relative to 2009 is the largest in late spring and the established difference prevails throughout summer and fall, suggesting that summer and fall are dynamically quiet. In both hemispheres during winter there are disturbances (with a period of 1 month) that travel downward throughout the stratosphere with a speed similar to the winter descent.
NASA Astrophysics Data System (ADS)
Sofyan, Hizir; Maulia, Eva; Miftahuddin
2017-11-01
A country has several important parameters for achieving economic prosperity, such as tax revenue and the inflation rate. One of the largest revenues in the Indonesian State Budget comes from the tax sector. Meanwhile, the inflation rate occurring in a country can be used as an indicator to measure the economic problems faced by that country. Given the importance of tax revenue and inflation-rate control in achieving economic prosperity, it is necessary to analyze the structure of the relationship between tax revenue and the inflation rate. This study aims to produce the best VECM (Vector Error Correction Model) with optimal lag using various alpha levels and to perform structural analysis using the Impulse Response Function (IRF) of the VECM models to examine the relationship between tax revenue and inflation in Banda Aceh. The results show that the best model for the tax revenue and inflation rate data in Banda Aceh City using alpha 0.01 is a VECM with optimal lag 2, while the best model using alpha 0.05 and 0.1 is a VECM with optimal lag 3. The VECM model with alpha 0.01 yielded four significant models: the income tax model and the models for the overall, health, and education inflation rates in Banda Aceh, while the VECM model with alpha 0.05 and 0.1 yielded one significant model, the income tax model. Based on these VECM models, two IRF structural analyses were formed to examine the relationship between tax revenue and inflation in Banda Aceh: the IRF with VECM(2) and the IRF with VECM(3).
Study of earthquakes using a borehole seismic network at Koyna, India
NASA Astrophysics Data System (ADS)
Gupta, Harsh; Satyanarayana, Hari VS; Shashidhar, Dodla; Mallika, Kothamasu; Ranjan Mahato, Chitta; Shankar Maity, Bhavani
2017-04-01
Koyna, located near the west coast of India, is a classical site of artificial water reservoir triggered earthquakes. Triggered earthquakes started soon after the impoundment of the Koyna Dam in 1962. The activity has continued to the present, including the largest triggered earthquake of M 6.3 in 1967, 22 earthquakes of M ≥ 5, and several thousand smaller earthquakes. The latest significant earthquake of ML 3.7 occurred on 24th November 2016. In spite of having a network of 23 broadband 3-component seismic stations in the near vicinity of the Koyna earthquake zone, locations of earthquakes had errors of 1 km. The main reason was the presence of a 1 km thick, very heterogeneous Deccan Traps cover that introduced noise, so locations could not be improved. To improve the accuracy of earthquake locations, a unique network of eight borehole seismic stations surrounding the seismicity was designed. Six of these were installed at depths varying from 981 m to 1522 m during 2015 and 2016, well below the Deccan Traps cover. During 2016 a total of 2100 earthquakes were located. There has been a significant improvement in the location of earthquakes, and the absolute errors of location have come down to ± 300 m. All earthquakes of ML ≥ 0.5 are now located, compared to ML ≥ 1.0 earlier. Based on seismicity and logistics, a block of 2 km x 2 km area has been chosen for the 3 km deep pilot borehole. The installation of the borehole seismic network has further elucidated the correspondence between the rate of water loading/unloading of the reservoir and triggered seismicity.
Signature-forecasting and early outbreak detection system
Naumova, Elena N.; MacNeill, Ian B.
2008-01-01
Daily disease monitoring via a public health surveillance system provides valuable information on population risks. Efficient statistical tools for early detection of rapid changes in disease incidence are a must for modern surveillance. The need for statistical tools for early detection of outbreaks that are not based on historical information is apparent. A system is discussed for monitoring cases of infections with a view to early detection of outbreaks and to forecasting the extent of detected outbreaks. We propose a set of adaptive algorithms for early outbreak detection that does not rely on extensive historical recording. We also incorporate knowledge of infectious disease epidemiology into the forecasts. To demonstrate this system we use data from the largest water-borne outbreak of cryptosporidiosis, which occurred in Milwaukee in 1993. Historical data are smoothed using a loess-type smoother. Upon receipt of a new datum, the smoothing is updated and estimates are made of the first two derivatives of the smooth curve, and these are used for near-term forecasting. Recent data and the near-term forecasts are used to compute a color-coded warning index, which quantifies the level of concern. The algorithms for computing the warning index have been designed to balance Type I errors (false prediction of an epidemic) and Type II errors (failure to correctly predict an epidemic). If the warning index signals a sufficiently high probability of an epidemic, then a forecast of the possible size of the outbreak is made. This longer term forecast is made by fitting a ‘signature’ curve to the available data. The effectiveness of the forecast depends upon the extent to which the signature curve captures the shape of outbreaks of the infection under consideration. PMID:18716671
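A minimal version of the adaptive detection step can be sketched with a loess-type smoother, finite-difference derivative estimates, and a simple near-term extrapolation. The synthetic case counts and the thresholds for the color-coded warning index below are invented for illustration; the actual algorithms balance Type I and Type II errors more carefully.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(4)
days = np.arange(60)
cases = 5 + rng.poisson(5, days.size).astype(float)
cases[45:] += np.exp(0.25 * (days[45:] - 45))        # synthetic outbreak starting at day 45

# Loess-type smoothing of the counts observed so far.
smooth = lowess(cases, days, frac=0.25, return_sorted=False)

# First two derivatives of the smoothed curve (finite differences) and a
# simple quadratic near-term forecast for tomorrow.
d1 = np.gradient(smooth, days)
d2 = np.gradient(d1, days)
forecast_next = smooth[-1] + d1[-1] + 0.5 * d2[-1]

# Invented color-coded warning index based on relative growth of the smooth curve.
growth = d1[-1] / max(smooth[-1], 1.0)
warning = "red" if growth > 0.15 else "yellow" if growth > 0.05 else "green"
print(f"smoothed today = {smooth[-1]:.1f}, forecast tomorrow = {forecast_next:.1f}, warning = {warning}")
```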
Diagnosis of extratropical variability in seasonal integrations of the ECMWF model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferranti, L.; Molteni, F.; Brankovic, C.
1994-06-01
Properties of the general circulation simulated by the ECMWF model are discussed using a set of seasonal integrations at T63 resolution. For each season, over the period of 5 years, 1986-1990, three integrations initiated on consecutive days were run with prescribed observed sea surface temperature (SST). This paper presents a series of diagnostics of extratropical variability in the model, with particular emphasis on the northern winter. Time-filtered maps of variability indicate that in this season there is insufficient storm track activity penetrating into the Eurasian continent. Related to this, the maxima of lower-frequency variability for northern spring are more realistic. Blocking is defined objectively in terms of the geostrophic wind at 500 mb. Consistent with the low-frequency transience, in the Euro-Atlantic sector the position of maximum blocking in the model is displaced eastward. The composite structure of blocks over the Pacific is realistic, though their frequency is severely underestimated at all times of year. Shortcomings in the simulated wintertime general circulation were also revealed by studying the projection of 5-day mean fields onto empirical orthogonal functions (EOFs) of the observed flow. The largest differences were apparent for statistics of EOFs of the zonal mean flow. Analysis of weather regime activity, defined from the EOFs, suggested that regimes with positive PNA index were overpopulated, while the negative PNA regimes were underpopulated. A further comparison between observed and modeled low-frequency variance revealed that underestimation of low-frequency variability occurs along the same axes that explain most of the spatial structure of the error in the mean field, suggesting a common dynamical origin for these two aspects of the systematic error. 17 refs., 17 figs., 4 tabs.
Lessons learnt from Dental Patient Safety Case Reports
Obadan, Enihomo M.; Ramoni, Rachel B.; Kalenderian, Elsbeth
2015-01-01
Background: Errors are commonplace in dentistry; it is therefore imperative for dental professionals to intercept them before they lead to an adverse event, and/or to mitigate their effects when an adverse event occurs. This requires a systematic approach at both the profession level, encapsulated in the Agency for Healthcare Research and Quality's Patient Safety Initiative structure, and at the practice level, where Crew Resource Management is a tested paradigm. Supporting patient safety at both the dental practice and profession levels relies on understanding the types and causes of errors, an area in which little is known. Methods: A retrospective review of dental adverse events reported in the literature was performed. Electronic bibliographic databases were searched and data were extracted on background characteristics, incident description, case characteristics, clinic setting where the adverse event originated, phase of patient care in which the adverse event was detected, proximal cause, type of patient harm, degree of harm, and recovery actions. Results: 182 publications (containing 270 cases) were identified through our search. Delayed and unnecessary treatment/disease progression after misdiagnosis was the largest type of harm reported. 24.4% of reviewed cases were reported to have experienced permanent harm. One of every ten case reports reviewed (11.1%) reported that the adverse event resulted in the death of the affected patient. Conclusions: Published case reports provide a window into understanding the nature and extent of dental adverse events, but for as much as the findings revealed about adverse events, they also identified the need for more broad-based contributions to our collective body of knowledge about adverse events in the dental office and their causes. Practical Implications: Siloed and incomplete contributions to our understanding of adverse events in the dental office are threats to dental patients' safety. PMID:25925524
NASA Astrophysics Data System (ADS)
Masoumi, Salim; McClusky, Simon; Koulali, Achraf; Tregoning, Paul
2017-04-01
Improper modeling of horizontal tropospheric gradients in GPS analysis induces errors in estimated parameters, with the largest impact on heights and tropospheric zenith delays. The conventional two-axis tilted plane model of horizontal gradients fails to provide an accurate representation of tropospheric gradients under weather conditions with asymmetric horizontal changes of refractivity. A new parametrization of tropospheric gradients whereby an arbitrary number of gradients are estimated as discrete directional wedges is shown via simulations to significantly improve the accuracy of recovered tropospheric zenith delays in asymmetric gradient scenarios. In a case study of an extreme rain event that occurred in September 2002 in southern France, the new directional parametrization is able to isolate the strong gradients in particular azimuths around the GPS stations consistent with the "V" shape spatial pattern of the observed precipitation. In another study of a network of GPS stations in the Sierra Nevada region where highly asymmetric tropospheric gradients are known to exist, the new directional model significantly improves the repeatabilities of the stations in asymmetric gradient situations while causing slightly degraded repeatabilities for the stations in normal symmetric gradient conditions. The average improvement over the entire network is ˜31%, while the improvement for one of the worst affected sites P631 is ˜49% (from 8.5 mm to 4.3 mm) in terms of weighted root-mean-square (WRMS) error and ˜82% (from -1.1 to -0.2) in terms of skewness. At the same station, the use of the directional model changes the estimates of zenith wet delay by 15 mm (˜25%).
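A minimal sketch of the contrast between the two parametrizations mentioned above, assuming a simple gradient mapping function. The function names, the mapping-function constant, and the number of wedges are illustrative assumptions and are not taken from the paper.

import numpy as np

def gradient_mapping(elev_rad):
    # Simple elevation-dependent gradient mapping function (a common functional
    # form; the exact function used in the paper is not reproduced here).
    return 1.0 / (np.sin(elev_rad) * np.tan(elev_rad) + 0.0032)

def conventional_gradient_delay(elev_rad, az_rad, g_north, g_east):
    # Conventional two-axis "tilted plane" gradient model.
    return gradient_mapping(elev_rad) * (g_north * np.cos(az_rad) + g_east * np.sin(az_rad))

def wedge_gradient_delay(elev_rad, az_rad, wedge_gradients):
    # Directional-wedge parametrization: the azimuth circle is split into
    # len(wedge_gradients) sectors, each with its own gradient parameter
    # (illustrative only; the paper's estimation details are not reproduced).
    n = len(wedge_gradients)
    sector = int((az_rad % (2 * np.pi)) / (2 * np.pi / n))
    return gradient_mapping(elev_rad) * wedge_gradients[sector]

elev, az = np.radians(15.0), np.radians(220.0)
print(conventional_gradient_delay(elev, az, g_north=-0.5e-3, g_east=1.2e-3))
print(wedge_gradient_delay(elev, az, wedge_gradients=[0.0, 0.4e-3, 1.5e-3, -0.2e-3]))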
NASA Technical Reports Server (NTRS)
Barre, Jerome; Edwards, David; Worden, Helen; Arellano, Avelino; Gaubert, Benjamin; Da Silva, Arlindo; Lahoz, William; Anderson, Jeffrey
2016-01-01
This paper describes the second phase of an Observing System Simulation Experiment (OSSE) that utilizes the synthetic measurements from a constellation of satellites measuring atmospheric composition from geostationary (GEO) Earth orbit presented in part I of the study. Our OSSE is focused on carbon monoxide observations over North America, East Asia and Europe where most of the anthropogenic sources are located. Here we assess the impact of a potential GEO constellation on constraining northern hemisphere (NH) carbon monoxide (CO) using data assimilation. We show how cloud cover affects the GEO constellation data density with the largest cloud cover (i.e., lowest data density) occurring during Asian summer. We compare the modeled state of the atmosphere (Control Run), before CO data assimilation, with the known 'true' state of the atmosphere (Nature Run) and show that our setup provides realistic atmospheric CO fields and emission budgets. Overall, the Control Run underestimates CO concentrations in the northern hemisphere, especially in areas close to CO sources. Assimilation experiments show that constraining CO close to the main anthropogenic sources significantly reduces errors in NH CO compared to the Control Run. We assess the changes in error reduction when only single satellite instruments are available as compared to the full constellation. We find large differences in how measurements for each continental scale observation system affect the hemispherical improvement in long-range transport patterns, especially due to seasonal cloud cover. A GEO constellation will provide the most efficient constraint on NH CO during winter when CO lifetime is longer and increments from data assimilation associated with source regions are advected further around the globe.
Predicting Software Assurance Using Quality and Reliability Measures
2014-12-01
errors are not found in unit testing. The rework effort to correct requirement and design problems in later phases can be as high as 300 to 1,000... (Remaining extracted text is report front matter: appendix and bibliography listings and a list of figures, including "Removal Densities During Development," "Quality and Security-Focused Workflow," and "Testing Reliability Results for the Largest Project.")
ERIC Educational Resources Information Center
General Accounting Office, Washington, DC. Health, Education, and Human Services Div.
The District of Columbia Public Schools (DCPS) is one of the largest public school districts in the United States. Since 1989-90, there have been questions about several aspects of DCPS's enrollment-count process. A valid enrollment-count process and an accurate count are critical to DCPS's district- and school-level planning, staffing, funding,…
1969-01-01
job requirements in these skills, and (2) developing techniques for improving literacy skills through training. In addition, manpower pools for a given... visual and psychomotor skills for accurate and efficient operation, and performance variations among gunners are the largest source of error in system
Induced Earthquakes Are Not All Alike: Examples from Texas Since 2008 (Invited)
NASA Astrophysics Data System (ADS)
Frohlich, C.
2013-12-01
The EarthScope Transportable Array passed through Texas between 2008 and 2011, providing an opportunity to identify and accurately locate earthquakes near and/or within oil/gas fields and injection waste disposal operations. In five widely separated geographical locations, the results suggest seismic activity may be induced/triggered. However, the different regions exhibit different relationships between injection/production operations and seismic activity: In the Barnett Shale of northeast Texas, small earthquakes occurred only near higher-volume (volume rate > 150,000 BWPM) injection disposal wells. These included widely reported earthquakes occurring near Dallas-Fort Worth and Cleburne in 2008 and 2009. Near Alice in south Texas, M3.9 earthquakes occurred in 1997 and 2010 on the boundary of the Stratton Field, which had been highly productive for both oil and gas since the 1950s. Both earthquakes occurred during an era of net declining production, but their focal depths and location at the field boundary suggest an association with production activity. In the Eagle Ford of south central Texas, earthquakes occurred near wells following significant increases in extraction (water + produced oil) volumes as well as injection. The largest earthquake, the M4.8 Fashing earthquake of 20 October 2011, occurred after significant increases in extraction. In the Cogdell Field near Snyder (west Texas), a sequence of earthquakes beginning in 2006 followed significant increases in the injection of CO2 at nearby wells. The largest, with M4.4, occurred on 11 September 2011. This is the largest known earthquake possibly attributable to CO2 injection. Near Timpson in east Texas, a sequence of earthquakes beginning in 2008, including an M4.8 earthquake on 17 May 2012, occurred within three km of two high-volume injection disposal wells that had begun operation in 2007. These were the first known earthquakes at this location. In summary, the observations find possible induced/triggered earthquakes associated with recent increases in injection, recent increases in extraction, with CO2 injection, and with declining production. In all areas, during the 2008-2011 period there were no earthquakes occurring near the vast majority of extraction/production wells; thus, the principal puzzle is why these activities sometimes induce seismicity and sometimes do not.
Validity of Activity Monitor Step Detection Is Related to Movement Patterns.
Hickey, Amanda; John, Dinesh; Sasaki, Jeffer E; Mavilia, Marianna; Freedson, Patty
2016-02-01
There is a need to examine the step-counting accuracy of activity monitors during different types of movements. The purpose of this study was to compare activity monitor and manually counted steps during treadmill and simulated free-living activities, and to compare the activity monitor steps to the StepWatch (SW) in a natural setting. Fifteen participants performed laboratory-based treadmill (2.4, 4.8, 7.2 and 9.7 km/h) and simulated free-living activities (e.g., cleaning a room) while wearing an activPAL, Omron HJ720-ITC, Yamax Digi-Walker SW-200, 2 ActiGraph GT3Xs (1 in "low-frequency extension" [AGLFE] and 1 in "normal-frequency" mode), an ActiGraph 7164, and a SW. Participants also wore monitors for 1 day in their free-living environment. Linear mixed models identified differences between activity monitor steps and the criterion in the laboratory/free-living settings. Most monitors performed poorly during treadmill walking at 2.4 km/h. Cleaning a room had the largest errors of all simulated free-living activities. The accuracy was highest for forward/rhythmic movements for all monitors. In the free-living environment, the AGLFE had the largest discrepancy with the SW. This study highlights the need to verify the step-counting accuracy of activity monitors with activities that include different movement types/directions. This is important to understand the origin of errors in step-counting during free-living conditions.
NASA Astrophysics Data System (ADS)
He, Jianbin; Yu, Simin; Cai, Jianping
2016-12-01
Lyapunov exponents are an important index for describing the behavior of chaotic systems, and the largest Lyapunov exponent can be used to determine whether a system is chaotic or not. For discrete-time dynamical systems, the Lyapunov exponents are calculated by an eigenvalue method. In theory, according to the eigenvalue method, more accurate values of the Lyapunov exponents can be obtained as the number of iterations increases, and the limits exist. However, due to the finite precision of computers and other reasons, the results may overflow, become unrepresentable, or be inaccurate, which can be stated as follows: (1) The number of iterations cannot be too large; otherwise, the simulation result appears as an error message of NaN or Inf. (2) If the error message of NaN or Inf does not appear, then with increasing iterations all Lyapunov exponents approach the largest Lyapunov exponent, which leads to inaccurate results. (3) From the viewpoint of numerical calculation, if the number of iterations is too small, the results are also inaccurate. Based on this analysis of Lyapunov-exponent calculation in discrete-time systems, this paper develops two improved algorithms, based on QR orthogonal decomposition and SVD orthogonal decomposition, to solve the above-mentioned problems. Finally, some examples are given to illustrate the feasibility and effectiveness of the improved algorithms.
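The QR-based remedy described above can be illustrated on a simple two-dimensional map. The sketch below computes the Lyapunov exponents of the classical Hénon map by re-orthogonalizing the propagated tangent vectors at every step, which avoids both numerical overflow and the collapse of all directions onto the largest exponent. Parameter values and iteration counts are illustrative, and this is not the paper's implementation.

import numpy as np

def henon(state, a=1.4, b=0.3):
    x, y = state
    return np.array([1.0 - a * x**2 + y, b * x])

def henon_jacobian(state, a=1.4, b=0.3):
    x, _ = state
    return np.array([[-2.0 * a * x, 1.0],
                     [b,            0.0]])

def lyapunov_qr(n_iter=10000, n_discard=100):
    # Lyapunov exponents via repeated QR re-orthogonalization of the tangent basis.
    state = np.array([0.1, 0.1])
    Q = np.eye(2)
    sums = np.zeros(2)
    for _ in range(n_discard):                    # let transients die out
        state = henon(state)
    for _ in range(n_iter):
        Q, R = np.linalg.qr(henon_jacobian(state) @ Q)
        sums += np.log(np.abs(np.diag(R)))        # accumulate log stretching factors
        state = henon(state)
    return sums / n_iter

print(lyapunov_qr())  # largest exponent ~ +0.42 for the classical Hénon map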
A study for systematic errors of the GLA forecast model in tropical regions
NASA Technical Reports Server (NTRS)
Chen, Tsing-Chang; Baker, Wayman E.; Pfaendtner, James; Corrigan, Martin
1988-01-01
From the sensitivity studies performed with the Goddard Laboratory for Atmospheres (GLA) analysis/forecast system, it was revealed that the forecast errors in the tropics affect the ability to forecast midlatitude weather in some cases. Apparently, the forecast errors occurring in the tropics can propagate to midlatitudes. Therefore, the systematic error analysis of the GLA forecast system becomes a necessary step in improving the model's forecast performance. The major effort of this study is to examine the possible impact of the hydrological-cycle forecast error on dynamical fields in the GLA forecast system.
Initial Breakdown Pulse Parameters in Intracloud and Cloud-to-Ground Lightning Flashes
NASA Astrophysics Data System (ADS)
Smith, E. M.; Marshall, T. C.; Karunarathne, S.; Siedlecki, R.; Stolzenburg, M.
2018-02-01
This study analyzes the largest initial breakdown (IB) pulse in flashes from four storms in Florida; data from three sensor arrays are used. The range-normalized, zero-to-peak amplitude of the largest IB pulse was determined along with its altitude, duration, and timing within each flash. Appropriate data were available for 40 intracloud (IC) and 32 cloud-to-ground (CG) flashes. Histograms of amplitude of the largest IB pulse by flash type were similar, with mean (median) values of 1.49 (1.05) V/m for IC flashes and -1.35 (-0.87) V/m for CG flashes. The largest IB pulse in 30 IC flashes showed a weak inverse relation between pulse amplitude and altitude. Amplitude of the largest IB pulse for 25 CG flashes showed no altitude correlation. Duration of the largest IB pulse in ICs averaged twice as long as in CGs (96 μs versus 46 μs), and all of the CG durations were <100 μs. Among the ICs, there is a positive relation between largest IB pulse duration and amplitude; the linear correlation coefficient is 0.385 with outliers excluded. The largest IB pulse in IC flashes typically occurred at a longer time after the first IB pulse (average 4.1 ms) than was the case in CG flashes (average 0.6 ms). In both flash types, the largest IB pulse was the first IB pulse in about 30% of the cases. In one storm all 42 IC flashes with triggered data had IB pulses.
Initial Breakdown Pulse Amplitudes in Intracloud and Cloud-to-Ground Lightning Flashes
NASA Astrophysics Data System (ADS)
Marshall, T. C.; Smith, E. M.; Stolzenburg, M.; Karunarathne, S.; Siedlecki, R. D., II
2017-12-01
This study analyzes the largest initial breakdown (IB) pulse in flashes from three storms in Florida. The study was motivated in part by the possibility that IB pulses of IC flashes may be a cause of terrestrial gamma-ray flashes (TGFs). The range-normalized, zero-to-peak amplitude of the largest IB pulse within each flash was determined along with its altitude, duration, and occurrence time in the flash. Appropriate data were available for 40 intracloud (IC) and 32 cloud-to-ground (CG) flashes. Histograms of the magnitude of the largest IB pulse amplitude by flash type were similar, with mean (median) values of 1.49 (1.05) V/m for IC flashes and -1.35 (-0.87) V/m for CG flashes. The mean amplitude of the largest IC IB pulses is substantially smaller (roughly an order of magnitude smaller) than the few known pulse amplitudes of TGF events and TGF candidate events. The largest IB pulse in 30 IC flashes showed a weak inverse relation between pulse amplitude and altitude. Amplitude of the largest IB pulse for 25 CG flashes showed no altitude correlation. Duration of the largest IB pulse in ICs averaged twice as long as in CGs (96 μs versus 46 μs); all of the CG durations were <100 μs. Among the ICs, there is a positive relation between largest IB pulse duration and amplitude; the linear correlation coefficient is 0.385 with outliers excluded. The largest IB pulse in IC flashes typically occurred at a longer time after the first IB pulse (average 4.1 ms) than was the case in CG flashes (average 0.6 ms). In both flash types, the largest IB pulse was the first IB pulse in about 30% of the cases.
Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)
NASA Technical Reports Server (NTRS)
Adler, Robert; Gu, Guojun; Huffman, George
2012-01-01
A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a different number of input products. For the globe the calculated relative error estimate from this study is about 9%, which is also probably a slight overestimate. These tropical and global estimated bias errors provide one estimate of the current state of knowledge of the planet's mean precipitation.
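A minimal sketch of the multi-product spread idea described above: candidate products are screened against the reference on a zonal-mean basis using the ±50% criterion, and the standard deviation of the retained estimates is taken as the estimated bias error. The gridded data are synthetic, and the land/ocean separation and area-averaging steps of the actual procedure are omitted.

import numpy as np

def estimated_bias_error(base, products):
    # base: (nlat, nlon) reference mean precipitation; products: list of other estimates.
    # A product is kept only at latitudes where its zonal mean is within +/-50% of the
    # reference zonal mean; the std of the kept estimates is the bias error.
    zonal_base = base.mean(axis=1, keepdims=True)
    kept = []
    for p in products:
        zonal_p = p.mean(axis=1, keepdims=True)
        ok = np.abs(zonal_p - zonal_base) <= 0.5 * zonal_base   # (nlat, 1) mask
        kept.append(np.where(ok, p, np.nan))
    stack = np.array([base] + kept)
    s = np.nanstd(stack, axis=0)                 # estimated bias (systematic) error
    m = np.nanmean(stack, axis=0)                # mean precipitation
    return s, s / np.where(m > 0, m, np.nan)     # absolute and relative (s/m) errors

rng = np.random.default_rng(0)
base = rng.gamma(2.0, 1.5, size=(18, 36))                       # toy 10-degree grid
products = [base * rng.normal(1.0, 0.15, base.shape) for _ in range(3)]
s, rel = estimated_bias_error(base, products)
print(rel.mean())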
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xing, Y; Macq, B; Bondar, L
Purpose: To quantify the accuracy in predicting the Bragg peak position using simulated in-room measurements of prompt gamma (PG) emissions for realistic treatment error scenarios that combine several sources of errors. Methods: Prompt gamma measurements by a knife-edge slit camera were simulated using an experimentally validated analytical simulation tool. Simulations were performed, for 143 treatment error scenarios, on an anthropomorphic phantom and a pencil beam scanning plan for nasal cavity. Three types of errors were considered: translation along each axis, rotation around each axis, and CT-calibration errors, with magnitudes ranging, respectively, between −3 and 3 mm, −5 and 5 degrees, and −5 and +5%. We investigated the correlation between the Bragg peak (BP) shift and the horizontal shift of PG profiles. The shifts were calculated between the planned (reference) position and the position given by the error scenario. The prediction error for one spot was calculated as the absolute difference between the PG profile shift and the BP shift. Results: The PG shift was significantly and strongly correlated with the BP shift for 92% of the cases (p<0.0001, Pearson correlation coefficient R>0.8). Moderate but significant correlations were obtained for all cases that considered only CT-calibration errors and for 1 case that combined translation and CT-errors (p<0.0001, R ranged between 0.61 and 0.8). The average prediction errors for the simulated scenarios ranged between 0.08±0.07 and 1.67±1.3 mm (grand mean 0.66±0.76 mm). The prediction error was moderately correlated with the value of the BP shift (p=0, R=0.64). For the simulated scenarios the average BP shift ranged between −8±6.5 mm and 3±1.1 mm. Scenarios that considered combinations of the largest treatment errors were associated with large BP shifts. Conclusion: Simulations of in-room measurements demonstrate that prompt gamma profiles provide reliable estimation of the Bragg peak position for complex error scenarios. Yafei Xing and Luiza Bondar are funded by BEWARE grants from the Walloon Region. The work presents simulation results for a prompt gamma camera prototype developed by IBA.
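A toy illustration of the spot-level prediction error defined above: the horizontal shift between a reference and an error-scenario prompt-gamma depth profile is estimated by cross-correlation and compared with the known Bragg peak shift. The logistic "profiles" and the shift-search range are invented for illustration and do not reproduce the knife-edge slit camera simulation.

import numpy as np

def profile_shift(depth, ref_profile, test_profile):
    # Horizontal shift (mm) of the test profile relative to the reference,
    # estimated by maximizing cross-correlation over candidate shifts.
    step = depth[1] - depth[0]
    shifts = np.arange(-20.0, 20.0 + step, step)
    scores = [np.corrcoef(ref_profile, np.interp(depth + s, depth, test_profile))[0, 1]
              for s in shifts]
    return shifts[int(np.argmax(scores))]

# Toy profiles: a smooth fall-off whose distal edge moves with the Bragg peak.
depth = np.arange(0.0, 150.0, 1.0)
edge = lambda d0: 1.0 / (1.0 + np.exp((depth - d0) / 3.0))
true_bp_shift = -4.0                              # error scenario pulls the peak back 4 mm
pg_ref, pg_test = edge(100.0), edge(100.0 + true_bp_shift)
pg_shift = profile_shift(depth, pg_ref, pg_test)
prediction_error = abs(pg_shift - true_bp_shift)  # spot-level prediction error
print(pg_shift, prediction_error)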
The accuracy of the measurements in Ulugh Beg's star catalogue
NASA Astrophysics Data System (ADS)
Krisciunas, K.
1992-12-01
The star catalogue compiled by Ulugh Beg and his collaborators in Samarkand (ca. 1437) is the only catalogue primarily based on original observations between the times of Ptolemy and Tycho Brahe. Evans (1987) has given convincing evidence that Ulugh Beg's star catalogue was based on measurements made with a zodiacal armillary sphere graduated to 15′, with interpolation to 0.2 units. He and Shevchenko (1990) were primarily interested in the systematic errors in ecliptic longitude. Shevchenko's analysis of the random errors was limited to the twelve zodiacal constellations. We have analyzed all 843 ecliptic longitudes and latitudes attributed to Ulugh Beg by Knobel (1917). This required multiplying all the longitude errors by the respective values of the cosine of the celestial latitudes. We find a random error of ±17.7′ for ecliptic longitude and ±16.5′ for ecliptic latitude. On the whole, the random errors are largest near the ecliptic, decreasing towards the ecliptic poles. For all of Ulugh Beg's measurements (excluding outliers) the mean systematic error is −10.8′ ± 0.8′ for ecliptic longitude and 7.5′ ± 0.7′ for ecliptic latitude, with the errors in the sense "computed minus Ulugh Beg". For the brighter stars (those designated alpha, beta, and gamma in the respective constellations), the mean systematic errors are −11.3′ ± 1.9′ for ecliptic longitude and 9.4′ ± 1.5′ for ecliptic latitude. Within the errors this matches the systematic error in both coordinates for alpha Vir. With greater confidence we may conclude that alpha Vir was the principal reference star in the catalogues of Ulugh Beg and Ptolemy. Evans, J. 1987, J. Hist. Astr. 18, 155. Knobel, E. B. 1917, Ulugh Beg's Catalogue of Stars, Washington, D. C.: Carnegie Institution. Shevchenko, M. 1990, J. Hist. Astr. 21, 187.
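The cosine correction mentioned above is simple arithmetic; the values below are hypothetical and only illustrate the scaling.

import numpy as np

lon_error_arcmin = 20.0               # hypothetical raw difference in ecliptic longitude
ecliptic_latitude_deg = 60.0          # hypothetical star latitude
great_circle_error = lon_error_arcmin * np.cos(np.radians(ecliptic_latitude_deg))
print(great_circle_error)             # 10.0 arcmin of arc on the sky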
ERIC Educational Resources Information Center
Micceri, Theodore; Parasher, Pradnya; Waugh, Gordon W.; Herreid, Charlene
2009-01-01
An extensive review of the research literature and a study comparing over 36,000 survey responses with archival true scores indicated that one should expect a minimum of at least three percent random error for the least ambiguous of self-report measures. The Gulliver Effect occurs when a small proportion of error in a sizable subpopulation exerts…
Optical Oversampled Analog-to-Digital Conversion
1992-06-29
hologram weights and interconnects in the digital image halftoning configuration. First, no temporal error diffusion occurs in the digital image... halftoning error diffusion architecture as demonstrated by Equation (6.1). Equation (6.2) ensures that the hologram weights sum to one so that the exact... optimum halftone image should be faster. Similarly, decreased convergence time suggests that an error diffusion filter with larger spatial dimensions
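For readers unfamiliar with error diffusion, the generic (electronic, not optical) sketch below shows the principle the fragments above refer to: each pixel's quantization error is spread to its unprocessed neighbors with weights that sum to one. It uses the common Floyd-Steinberg weights purely for illustration and does not reproduce the report's hologram-weighted optical architecture.

import numpy as np

def error_diffusion_halftone(img):
    # 1-bit error-diffusion halftoning with Floyd-Steinberg weights (7,3,5,1)/16,
    # which sum to one so that the quantization error is fully redistributed.
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:               out[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     out[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               out[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: out[y + 1, x + 1] += err * 1 / 16
    return out

gradient = np.tile(np.linspace(0, 1, 16), (8, 1))   # toy gray ramp
print(error_diffusion_halftone(gradient).astype(int))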
Masked and unmasked error-related potentials during continuous control and feedback
NASA Astrophysics Data System (ADS)
Lopes Dias, Catarina; Sburlea, Andreea I.; Müller-Putz, Gernot R.
2018-06-01
The detection of error-related potentials (ErrPs) in tasks with discrete feedback is well established in the brain–computer interface (BCI) field. However, the decoding of ErrPs in tasks with continuous feedback is still in its early stages. Objective. We developed a task in which subjects have continuous control of a cursor's position by means of a joystick. The cursor's position was shown to the participants in two different modalities of continuous feedback: normal and jittered. The jittered feedback was created to mimic the instability that could exist if participants controlled the trajectory directly with brain signals. Approach. This paper studies the electroencephalographic (EEG)-measurable signatures caused by a loss of control over the cursor's trajectory, causing a target miss. Main results. In both feedback modalities, time-locked potentials revealed the typical frontal-central components of error-related potentials. Errors occurring during the jittered feedback (masked errors) were delayed in comparison to errors occurring during normal feedback (unmasked errors). Masked errors displayed lower peak amplitudes than unmasked errors. Time-locked classification analysis allowed a good distinction between correct and error classes (average Cohen's κ, average TPR = 81.8% and average TNR = 96.4%). Time-locked classification analysis between masked error and unmasked error classes revealed results at chance level (average Cohen's κ, average TPR = 60.9% and average TNR = 58.3%). Afterwards, we performed asynchronous detection of ErrPs, combining both masked and unmasked trials. The asynchronous detection of ErrPs in a simulated online scenario resulted in an average TNR of 84.0% and an average TPR of 64.9%. Significance. The time-locked classification results suggest that the masked and unmasked errors were indistinguishable in terms of classification. The asynchronous classification results suggest that the feedback modality did not hinder the asynchronous detection of ErrPs.
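The classification figures quoted above (TPR, TNR, Cohen's kappa) can be computed from a confusion matrix as in the sketch below; the labels are invented and the sketch is not the authors' decoding pipeline.

import numpy as np

def tpr_tnr_kappa(y_true, y_pred):
    # True positive rate, true negative rate and Cohen's kappa for a binary
    # error-vs-correct classification (1 = error trial, 0 = correct trial).
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tpr = tp / (tp + fn)
    tnr = tn / (tn + fp)
    po = (tp + tn) / len(y_true)                                        # observed agreement
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / len(y_true) ** 2  # chance agreement
    kappa = (po - pe) / (1 - pe)
    return tpr, tnr, kappa

y_true = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 0, 1, 0, 1, 0]
print(tpr_tnr_kappa(y_true, y_pred))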
Death Certification Errors and the Effect on Mortality Statistics.
McGivern, Lauri; Shulman, Leanne; Carney, Jan K; Shapiro, Steven; Bundock, Elizabeth
Errors in cause and manner of death on death certificates are common and affect families, mortality statistics, and public health research. The primary objective of this study was to characterize errors in the cause and manner of death on death certificates completed by non-Medical Examiners. A secondary objective was to determine the effects of errors on national mortality statistics. We retrospectively compared 601 death certificates completed between July 1, 2015, and January 31, 2016, from the Vermont Electronic Death Registration System with clinical summaries from medical records. Medical Examiners, blinded to original certificates, reviewed summaries, generated mock certificates, and compared mock certificates with original certificates. They then graded errors using a scale from 1 to 4 (higher numbers indicated increased impact on interpretation of the cause) to determine the prevalence of minor and major errors. They also compared International Classification of Diseases, 10th Revision (ICD-10) codes on original certificates with those on mock certificates. Of 601 original death certificates, 319 (53%) had errors; 305 (51%) had major errors; and 59 (10%) had minor errors. We found no significant differences by certifier type (physician vs nonphysician). We did find significant differences in major errors in place of death ( P < .001). Certificates for deaths occurring in hospitals were more likely to have major errors than certificates for deaths occurring at a private residence (59% vs 39%, P < .001). A total of 580 (93%) death certificates had a change in ICD-10 codes between the original and mock certificates, of which 348 (60%) had a change in the underlying cause-of-death code. Error rates on death certificates in Vermont are high and extend to ICD-10 coding, thereby affecting national mortality statistics. Surveillance and certifier education must expand beyond local and state efforts. Simplifying and standardizing underlying literal text for cause of death may improve accuracy, decrease coding errors, and improve national mortality statistics.
High-resolution wavefront control of high-power laser systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brase, J; Brown, C; Carrano, C
1999-07-08
Nearly every new large-scale laser system application at LLNL has requirements for beam control which exceed the current level of available technology. For applications such as inertial confinement fusion, laser isotope separation, and laser machining, the ability to transport significant power to a target while maintaining good beam quality is critical. There are many ways that laser wavefront quality can be degraded. Thermal effects due to the interaction of high-power laser or pump light with the internal optical components or with the ambient gas are common causes of wavefront degradation. For many years, adaptive optics based on thin deformable glass mirrors with piezoelectric or electrostrictive actuators have been used to remove the low-order wavefront errors from high-power laser systems. These adaptive optics systems have successfully improved laser beam quality, but have also generally revealed additional high-spatial-frequency errors, both because the low-order errors have been reduced and because deformable mirrors have often introduced some high-spatial-frequency components due to manufacturing errors. Many current and emerging laser applications fall into the high-resolution category, where there is an increased need for the correction of high-spatial-frequency aberrations, which requires correctors with thousands of degrees of freedom. The largest deformable mirrors currently available have less than one thousand degrees of freedom at a cost of approximately $1M. A deformable mirror capable of meeting these high spatial resolution requirements would be cost prohibitive. Therefore a new approach using a different wavefront control technology is needed. One new wavefront control approach is the use of liquid-crystal (LC) spatial light modulator (SLM) technology for controlling the phase of linearly polarized light. Current LC SLM technology provides high-spatial-resolution wavefront control, with hundreds of thousands of degrees of freedom, more than two orders of magnitude greater than the best deformable mirrors currently made. Even with the increased spatial resolution, the cost of these devices is nearly two orders of magnitude less than the cost of the largest deformable mirror.
Use of modeling to identify vulnerabilities to human error in laparoscopy.
Funk, Kenneth H; Bauer, James D; Doolen, Toni L; Telasha, David; Nicolalde, R Javier; Reeber, Miriam; Yodpijit, Nantakrit; Long, Myra
2010-01-01
This article describes an exercise to investigate the utility of modeling and human factors analysis in understanding surgical processes and their vulnerabilities to medical error. A formal method to identify error vulnerabilities was developed and applied to a test case of Veress needle insertion during closed laparoscopy. A team of 2 surgeons, a medical assistant, and 3 engineers used hierarchical task analysis and Integrated DEFinition language 0 (IDEF0) modeling to create rich models of the processes used in initial port creation. Using terminology from a standardized human performance database, detailed task descriptions were written for 4 tasks executed in the process of inserting the Veress needle. Key terms from the descriptions were used to extract from the database generic errors that could occur. Task descriptions with potential errors were translated back into surgical terminology. Referring to the process models and task descriptions, the team used a modified failure modes and effects analysis (FMEA) to consider each potential error for its probability of occurrence, its consequences if it should occur and be undetected, and its probability of detection. The resulting likely and consequential errors were prioritized for intervention. A literature-based validation study confirmed the significance of the top error vulnerabilities identified using the method. Ongoing work includes design and evaluation of procedures to correct the identified vulnerabilities and improvements to the modeling and vulnerability identification methods. Copyright 2010 AAGL. Published by Elsevier Inc. All rights reserved.
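A minimal sketch of the FMEA-style prioritization step described above: each candidate error is scored for occurrence, severity (consequence if undetected), and detectability, and a risk priority number is used to rank errors for intervention. The error descriptions and scores below are hypothetical, not the study's actual ratings.

# FMEA-style ranking with a risk priority number (RPN = occurrence x severity x detection).
potential_errors = [
    {"error": "needle inserted at wrong angle",   "occurrence": 4, "severity": 8, "detection": 3},
    {"error": "insufflation line not connected",  "occurrence": 2, "severity": 3, "detection": 2},
    {"error": "needle advanced past safe depth",  "occurrence": 3, "severity": 9, "detection": 5},
]
for e in potential_errors:
    e["rpn"] = e["occurrence"] * e["severity"] * e["detection"]

for e in sorted(potential_errors, key=lambda e: e["rpn"], reverse=True):
    print(f'{e["rpn"]:4d}  {e["error"]}')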
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klein, Fred W.
A significant seismic hazard exists in south Hawaii from large tectonic earthquakes that can reach magnitude 8 and intensity XII. This paper quantifies the hazard by estimating the horizontal peak ground acceleration (PGA) in south Hawaii which occurs with a 90% probability of not being exceeded during exposure times from 10 to 250 years. The largest earthquakes occur beneath active, unbuttressed and mobile flanks of volcanoes in their shield building stage.
Karnon, Jonathan; Campbell, Fiona; Czoski-Murray, Carolyn
2009-04-01
Medication errors can lead to preventable adverse drug events (pADEs) that have significant cost and health implications. Errors often occur at care interfaces, and various interventions have been devised to reduce medication errors at the point of admission to hospital. The aim of this study is to assess the incremental costs and effects [measured as quality-adjusted life years (QALYs)] of a range of such interventions for which evidence of effectiveness exists. A previously published medication errors model was adapted to describe the pathway of errors occurring at admission through to the occurrence of pADEs. The baseline model was populated using literature-based values, and then calibrated to observed outputs. Evidence of effects was derived from a systematic review of interventions aimed at preventing medication error at hospital admission. All five interventions, for which evidence of effectiveness was identified, are estimated to be extremely cost-effective when compared with the baseline scenario. The pharmacist-led reconciliation intervention has the highest expected net benefits, and a probability of being cost-effective of over 60% at a QALY value of £10 000. The medication errors model provides reasonably strong evidence that some form of intervention to improve medicines reconciliation is a cost-effective use of NHS resources. The variation in the reported effectiveness of the few identified studies of medication error interventions illustrates the need for extreme attention to detail in the development of interventions, but also in their evaluation, and may justify the primary evaluation of more than one specification of included interventions.
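A minimal sketch of the net-benefit reasoning behind statements such as "a probability of being cost-effective of over 60% at a QALY value of £10 000": incremental net monetary benefit is computed over probabilistic draws of incremental cost and QALY gain. The distributions below are invented for illustration and are not the study's model outputs.

import numpy as np

rng = np.random.default_rng(1)
# Hypothetical simulated incremental costs (GBP) and QALY gains for a
# reconciliation intervention versus baseline.
delta_cost = rng.normal(5_000, 4_000, 10_000)
delta_qaly = rng.normal(1.2, 0.9, 10_000)
lam = 10_000                                        # value placed on one QALY (GBP)
net_benefit = lam * delta_qaly - delta_cost         # incremental net monetary benefit
print(net_benefit.mean(), (net_benefit > 0).mean()) # expected NB, P(cost-effective at lam)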
Remote Triggering in the Koyna-Warna Reservoir-Induced Seismic Zone, Western India
NASA Astrophysics Data System (ADS)
Bansal, Abhey Ram; Rao, N. Purnachandra; Peng, Zhigang; Shashidhar, D.; Meng, Xiaofeng
2018-03-01
Dynamic triggering following large distant earthquakes has been observed in many regions globally. In this study, we present evidence for remote dynamic triggering in the Koyna-Warna region of Western India, which is known to be a premier site of reservoir-induced seismicity. Using data from a closely spaced broadband network of 11 stations operated in the region since 2005, we conduct a systematic search for dynamic triggering following 20 large distant earthquakes with dynamic stresses of at least 1 kPa in the region. We find that the only positive cases of dynamic triggering occurred during the 11 April 2012 Mw 8.6 Indian Ocean earthquake and its largest aftershock of Mw 8.2. In the first case, microearthquakes started to occur in the first few cycles of the Love waves, and the largest event, of magnitude 3.3, occurred during the first few cycles of the Rayleigh waves. The increase of microseismicity lasted for up to five days, including a magnitude 4.8 event that occurred approximately three days later. Our results suggest that the Koyna-Warna region is stress sensitive and susceptible to remote dynamic triggering, although the apparent triggering threshold appears to be slightly higher than in other regions.
Saichev, A; Sornette, D
2005-05-01
Using the epidemic-type aftershock sequence (ETAS) branching model of triggered seismicity, we apply the formalism of generating probability functions to calculate exactly the average difference between the magnitude of a mainshock and the magnitude of its largest aftershock over all generations. This average magnitude difference is found empirically to be independent of the mainshock magnitude and equal to 1.2, a universal behavior known as Båth's law. Our theory shows that Båth's law holds only sufficiently close to the critical regime of the ETAS branching process. Allowing for error bars of ±0.1 on Båth's constant value of around 1.2, our exact analytical treatment of Båth's law provides new constraints on the productivity exponent α and the branching ratio n: 0.9 ≲ α ≤ 1. We propose a method for measuring α based on the predicted renormalization of the Gutenberg-Richter distribution of the magnitudes of the largest aftershock. We also introduce the "second Båth law for foreshocks": the probability that a main earthquake turns out to be the foreshock does not depend on its magnitude ρ.
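A single-generation toy version of the magnitude-gap calculation can make the flavor of Båth's law concrete: if the aftershock count scales as 10^(α(M − m0)) and magnitudes follow a Gutenberg-Richter law with slope b, then for α = b the mean gap between a mainshock and its largest aftershock is roughly independent of the mainshock magnitude. This is a simplification of the paper's multi-generation ETAS treatment, and the parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(2)
b, alpha, m0, k = 1.0, 1.0, 2.0, 0.02    # illustrative G-R slope, productivity, cutoff, constant

def largest_aftershock_gap(mainshock_mag, n_trials=2_000):
    # Average (mainshock magnitude - largest aftershock magnitude) in a
    # single-generation toy model.
    gaps = []
    for _ in range(n_trials):
        n = rng.poisson(k * 10 ** (alpha * (mainshock_mag - m0)))
        if n == 0:
            continue
        mags = m0 + rng.exponential(1.0 / (b * np.log(10)), size=n)   # G-R magnitudes
        gaps.append(mainshock_mag - mags.max())
    return np.mean(gaps)

for M in (5.0, 6.0, 7.0):
    print(M, round(largest_aftershock_gap(M), 2))   # roughly constant gap when alpha = b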
Applying lessons learned to enhance human performance and reduce human error for ISS operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, W.R.
1999-01-01
A major component of reliability, safety, and mission success for space missions is ensuring that the humans involved (flight crew, ground crew, mission control, etc.) perform their tasks and functions as required. This includes compliance with training and procedures during normal conditions, and successful compensation when malfunctions or unexpected conditions occur. A very significant issue that affects human performance in space flight is human error. Human errors can invalidate carefully designed equipment and procedures. If certain errors combine with equipment failures or design flaws, mission failure or loss of life can occur. The control of human error during operation of the International Space Station (ISS) will be critical to the overall success of the program. As experience from Mir operations has shown, human performance plays a vital role in the success or failure of long duration space missions. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) is developing a systematic approach to enhance human performance and reduce human errors for ISS operations. This approach is based on the systematic identification and evaluation of lessons learned from past space missions such as Mir to enhance the design and operation of ISS. This paper will describe previous INEEL research on human error sponsored by NASA and how it can be applied to enhance human reliability for ISS. © 1999 American Institute of Physics.
Applying lessons learned to enhance human performance and reduce human error for ISS operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, W.R.
1998-09-01
A major component of reliability, safety, and mission success for space missions is ensuring that the humans involved (flight crew, ground crew, mission control, etc.) perform their tasks and functions as required. This includes compliance with training and procedures during normal conditions, and successful compensation when malfunctions or unexpected conditions occur. A very significant issue that affects human performance in space flight is human error. Human errors can invalidate carefully designed equipment and procedures. If certain errors combine with equipment failures or design flaws, mission failure or loss of life can occur. The control of human error during operation of the International Space Station (ISS) will be critical to the overall success of the program. As experience from Mir operations has shown, human performance plays a vital role in the success or failure of long duration space missions. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has developed a systematic approach to enhance human performance and reduce human errors for ISS operations. This approach is based on the systematic identification and evaluation of lessons learned from past space missions such as Mir to enhance the design and operation of ISS. This paper describes previous INEEL research on human error sponsored by NASA and how it can be applied to enhance human reliability for ISS.
Medication errors in home care: a qualitative focus group study.
Berland, Astrid; Bentsen, Signe Berit
2017-11-01
To explore registered nurses' experiences of medication errors and patient safety in home care. The focus of care for older patients has shifted from institutional care towards a model of home care. Medication errors are common in this situation and can result in patient morbidity and mortality. An exploratory qualitative design with focus group interviews was used. Four focus group interviews were conducted with 20 registered nurses in home care. The data were analysed using content analysis. Five categories were identified as follows: lack of information, lack of competence, reporting medication errors, trade name products vs. generic name products, and improving routines. Medication errors occur frequently in home care and can threaten the safety of patients. Insufficient exchange of information and poor communication between the specialist and home-care health services, and between general practitioners and healthcare workers can lead to medication errors. A lack of competence in healthcare workers can also lead to medication errors. To prevent these, it is important that there should be up-to-date information and communication between healthcare workers during the transfer of patients from specialist to home care. Ensuring competence among healthcare workers with regard to medication is also important. In addition, there should be openness and accurate reporting of medication errors, as well as in setting routines for the preparation, alteration and administration of medicines. To prevent medication errors in home care, up-to-date information and communication between healthcare workers is important when patients are transferred from specialist to home care. It is also important to ensure adequate competence with regard to medication, and that there should be openness when medication errors occur, as well as in setting routines for the preparation, alteration and administration of medications. © 2017 John Wiley & Sons Ltd.
Saha, Amartya K.; Moses, Christopher S.; Price, Rene M.; Engel, Victor; Smith, Thomas J.; Anderson, Gordon
2012-01-01
Water budget parameters are estimated for Shark River Slough (SRS), the main drainage within Everglades National Park (ENP), from 2002 to 2008. Inputs to the water budget include surface water inflows and precipitation, while outputs consist of evapotranspiration, discharge to the Gulf of Mexico, and seepage losses due to municipal wellfield extraction. The daily change in volume of SRS is equated to the difference between inputs and outputs, yielding a residual term consisting of component errors and net groundwater exchange. Results predict significant net groundwater discharge to the SRS, peaking in June and positively correlated with surface water salinity at the mangrove ecotone, with a lag of one month. Precipitation, the largest input to the SRS, is offset by ET (the largest output), thereby highlighting the importance of increasing freshwater inflows into ENP for maintaining conditions in terrestrial, estuarine, and marine ecosystems of South Florida.
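The daily bookkeeping described above amounts to a simple balance; the sketch below uses invented numbers (units arbitrary, e.g. m^3/day) to show how the residual, which lumps together net groundwater exchange and the component errors, is obtained.

# Minimal daily water-budget residual for a wetland slough (illustrative values only).
inflow, precip = 120.0, 300.0                    # surface inflow, rainfall over the slough
evapotransp, discharge, seepage = 260.0, 130.0, 15.0
delta_storage = 60.0                             # observed daily change in stored volume

residual = delta_storage - ((inflow + precip) - (evapotransp + discharge + seepage))
print(residual)   # residual > 0 suggests net groundwater discharge into the slough (plus errors)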
Mathemagical Computing: Order of Operations and New Software.
ERIC Educational Resources Information Center
Ecker, Michael W.
1989-01-01
Describes mathematical problems which occur when using the computer as a calculator. Considers errors in BASIC calculation and the order of mathematical operations. Identifies errors in spreadsheet and calculator programs. Comments on sorting programs and provides a source for Mathemagical Black Holes. (MVL)
Tedim, Fantina; Remelgado, Ruben; Martins, João; Carvalho, Salete
2015-01-01
Portugal is the European country with the highest forest fire density and burned area. Since the beginning of the official forest fire database in 1980, fire activity in Portugal has been characterized by an increase in the number of fires and burned area, as well as by the appearance of large and catastrophic fires. In the 1980s, the largest fires were just over 10,000 ha. However, in the beginning of the 21st century several fires occurred with a burned area over 20,000 ha. Some of these events can be classified as mega-fires due to their ecological and socioeconomic severity. The present study aimed to characterize the trend in large forest fires, in order to understand whether the largest fires that occurred in Portugal were exceptional events or evidence of a new trend, and to discuss the limitations of fire size for characterizing fire effects, because it is usually assumed that the larger the fire, the greater the damage. Using the Portuguese forest fire database and satellite imagery, the present study showed that the largest fires could be seen at the same time as exceptional events and as evidence of a new fire regime. It highlighted the importance of the size and patterns of unburned patches within the fire perimeter, as well as the heterogeneity of fire ecological severity, which are usually not included in fire regime descriptions and are critical to fire management and research. The findings of this research can be used in forest risk reduction and suppression planning.
Precision Orbit Derived Atmospheric Density: Development and Performance
NASA Astrophysics Data System (ADS)
McLaughlin, C.; Hiatt, A.; Lechtenberg, T.; Fattig, E.; Mehta, P.
2012-09-01
Precision orbit ephemerides (POE) are used to estimate atmospheric density along the orbits of CHAMP (Challenging Minisatellite Payload) and GRACE (Gravity Recovery and Climate Experiment). The densities are calibrated against accelerometer derived densities and considering ballistic coefficient estimation results. The 14-hour density solutions are stitched together using a linear weighted blending technique to obtain continuous solutions over the entire mission life of CHAMP and through 2011 for GRACE. POE derived densities outperform the High Accuracy Satellite Drag Model (HASDM), Jacchia 71 model, and NRLMSISE-2000 model densities when comparing cross correlation and RMS with accelerometer derived densities. Drag is the largest error source for estimating and predicting orbits for low Earth orbit satellites. This is one of the major areas that should be addressed to improve overall space surveillance capabilities; in particular, catalog maintenance. Generally, density is the largest error source in satellite drag calculations and current empirical density models such as Jacchia 71 and NRLMSISE-2000 have significant errors. Dynamic calibration of the atmosphere (DCA) has provided measurable improvements to the empirical density models and accelerometer derived densities of extremely high precision are available for a few satellites. However, DCA generally relies on observations of limited accuracy and accelerometer derived densities are extremely limited in terms of measurement coverage at any given time. The goal of this research is to provide an additional data source using satellites that have precision orbits available using Global Positioning System measurements and/or satellite laser ranging. These measurements strike a balance between the global coverage provided by DCA and the precise measurements of accelerometers. The temporal resolution of the POE derived density estimates is around 20-30 minutes, which is significantly worse than that of accelerometer derived density estimates. However, major variations in density are observed in the POE derived densities. These POE derived densities in combination with other data sources can be assimilated into physics based general circulation models of the thermosphere and ionosphere with the possibility of providing improved density forecasts for satellite drag analysis. POE derived density estimates were initially developed using CHAMP and GRACE data so comparisons could be made with accelerometer derived density estimates. This paper presents the results of the most extensive calibration of POE derived densities compared to accelerometer derived densities and provides the reasoning for selecting certain parameters in the estimation process. The factors taken into account for these selections are the cross correlation and RMS performance compared to the accelerometer derived densities and the output of the ballistic coefficient estimation that occurs simultaneously with the density estimation. This paper also presents the complete data set of CHAMP and GRACE results and shows that the POE derived densities match the accelerometer densities better than empirical models or DCA. This paves the way to expand the POE derived densities to include other satellites with quality GPS and/or satellite laser ranging observations.
Maximum magnitude in the Lower Rhine Graben
NASA Astrophysics Data System (ADS)
Vanneste, Kris; Merino, Miguel; Stein, Seth; Vleminckx, Bart; Brooks, Eddie; Camelbeeck, Thierry
2014-05-01
Estimating Mmax, the assumed magnitude of the largest future earthquakes expected on a fault or in an area, involves large uncertainties. No theoretical basis exists to infer Mmax because even where we know the long-term rate of motion across a plate boundary fault, or the deformation rate across an intraplate zone, neither predicts how strain will be released. As a result, quite different estimates can be made based on the assumptions used. All one can say with certainty is that Mmax is at least as large as the largest earthquake in the available record. However, because catalogs are often short relative to the average recurrence time of large earthquakes, larger earthquakes than anticipated often occur. Estimating Mmax is especially challenging within plates, where deformation rates are poorly constrained, large earthquakes are rarer and variable in space and time, and often occur on previously unrecognized faults. We explore this issue for the Lower Rhine Graben seismic zone, where the largest known earthquake, the 1756 Düren earthquake, has magnitude 5.7 and should occur on average about every 400 years. However, paleoseismic studies suggest that earthquakes with magnitudes up to 6.7 occurred during the Late Pleistocene and Holocene. What to assume for Mmax is crucial for critical facilities like nuclear power plants, which should be designed to withstand the maximum shaking expected in 10,000 years. Using the observed earthquake frequency-magnitude data, we generate synthetic earthquake histories and sample them over shorter intervals corresponding to the real catalog's completeness. The maximum magnitudes appearing most often in the simulations tend to be those of earthquakes with mean recurrence time equal to the catalog length. Because catalogs are often short relative to the average recurrence time of large earthquakes, we expect larger earthquakes than observed to date to occur. In a next step, we will compute hazard maps for different return periods based on the synthetic catalogs, in order to determine the influence of underestimating Mmax.
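The synthetic-catalog experiment described above can be sketched as follows: a long catalog is drawn from an assumed Gutenberg-Richter law, short windows comparable to the real catalog length are sampled, and the distribution of the largest magnitude observed per window is examined. The a and b values and window length below are illustrative assumptions, not the study's Lower Rhine Graben parameters.

import numpy as np

rng = np.random.default_rng(3)

def synthetic_max_magnitudes(a=2.6, b=0.9, m_min=3.0, years=100_000,
                             window_years=300, n_windows=2_000):
    # Draw a long synthetic catalog from a Gutenberg-Richter law
    # (annual rate of M >= m is 10**(a - b*m)), then record the largest
    # magnitude seen in short windows comparable to the real catalog length.
    rate = 10 ** (a - b * m_min)                     # events per year with M >= m_min
    n = rng.poisson(rate * years)
    mags = m_min + rng.exponential(1.0 / (b * np.log(10)), size=n)
    times = rng.uniform(0.0, years, size=n)
    maxima = []
    for _ in range(n_windows):
        t0 = rng.uniform(0.0, years - window_years)
        in_window = (times >= t0) & (times < t0 + window_years)
        if in_window.any():
            maxima.append(mags[in_window].max())
    return np.array(maxima)

maxima = synthetic_max_magnitudes()
print(np.percentile(maxima, [50, 90, 99]))   # typical vs rare observed maximum magnitudes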
Quantum error-correcting code for ternary logic
NASA Astrophysics Data System (ADS)
Majumdar, Ritajit; Basu, Saikat; Ghosh, Shibashis; Sur-Kolay, Susmita
2018-05-01
Ternary quantum systems are being studied because they provide more computational state space per unit of information, known as a qutrit. A qutrit has three basis states, so a qubit may be considered as a special case of a qutrit in which the coefficient of one of the basis states is zero. Hence both (2 × 2)-dimensional and (3 × 3)-dimensional Pauli errors can occur on qutrits. In this paper, we (i) explore the possible (2 × 2)-dimensional as well as (3 × 3)-dimensional Pauli errors in qutrits and show that any pairwise bit swap error can be expressed as a linear combination of shift errors and phase errors, (ii) propose a special type of error called a quantum superposition error and show its equivalence to arbitrary rotation, (iii) formulate a nine-qutrit code which can correct a single error in a qutrit, and (iv) provide its stabilizer and circuit realization.
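A short sketch of point (i) above: the qutrit shift operator X and phase operator Z generate nine products X^a Z^b that form a basis of all 3 × 3 matrices, so any single-qutrit error, including a pairwise bit-swap, can be written as a linear combination of shift and phase errors. The code only verifies this linear-algebra fact; it does not implement the nine-qutrit code itself.

import numpy as np

w = np.exp(2j * np.pi / 3)                                          # cube root of unity
X = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]], dtype=complex)      # shift: |k> -> |k+1 mod 3>
Z = np.diag([1, w, w**2])                                           # phase: |k> -> w**k |k>

# The nine operators X^a Z^b (a, b in {0, 1, 2}) span the 3x3 matrices.
basis = [np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
         for a in range(3) for b in range(3)]
swap01 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]], dtype=complex)  # |0> <-> |1> bit swap

A = np.array([B.reshape(-1) for B in basis]).T       # 9x9: columns are flattened basis ops
coeffs = np.linalg.solve(A, swap01.reshape(-1))      # expansion coefficients of the swap
reconstructed = sum(c * B for c, B in zip(coeffs, basis))
print(np.allclose(reconstructed, swap01))            # True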
Rate, causes and reporting of medication errors in Jordan: nurses' perspectives.
Mrayyan, Majd T; Shishani, Kawkab; Al-Faouri, Ibrahim
2007-09-01
The aim of the study was to describe Jordanian nurses' perceptions about various issues related to medication errors. This is the first nursing study about medication errors in Jordan. This was a descriptive study. A convenience sample of 799 nurses from 24 hospitals was obtained. Descriptive and inferential statistics were used for data analysis. Over the course of their nursing careers, the average number of recalled committed medication errors per nurse was 2.2. Using incident reports, the rate of medication errors reported to nurse managers was 42.1%. Medication errors occurred mainly when medication labels/packaging were of poor quality or damaged. Nurses failed to report medication errors because they were afraid that they might be subjected to disciplinary actions or even lose their jobs. In the stepwise regression model, gender was the only predictor of medication errors in Jordan. Strategies to reduce or eliminate medication errors are required.
NASA Technical Reports Server (NTRS)
Landon, Lauren Blackwell; Vessey, William B.; Barrett, Jamie D.
2015-01-01
A team is defined as: "two or more individuals who interact socially and adaptively, have shared or common goals, and hold meaningful task interdependences; it is hierarchically structured and has a limited life span; in it expertise and roles are distributed; and it is embedded within an organization/environmental context that influences and is influenced by ongoing processes and performance outcomes" (Salas, Stagl, Burke, & Goodwin, 2007, p. 189). From the NASA perspective, a team is commonly understood to be a collection of individuals that is assigned to support and achieve a particular mission. Thus, depending on context, this definition can encompass both the spaceflight crew and the individuals and teams in the larger multi-team system who are assigned to support that crew during a mission. The Team Risk outcomes of interest are predominantly performance related, with a secondary emphasis on long-term health; this is somewhat unique in the NASA HRP in that most Risk areas are medically related and primarily focused on long-term health consequences. In many operational environments (e.g., aviation), performance is assessed as the avoidance of errors. However, the research on performance errors is ambiguous. It implies that actions may be dichotomized into "correct" or "incorrect" responses, where incorrect responses or errors are always undesirable. Researchers have argued that this dichotomy is a harmful oversimplification, and it would be more productive to focus on the variability of human performance and how organizations can manage that variability (Hollnagel, Woods, & Leveson, 2006) (Category III). Two problems occur when focusing on performance errors: 1) the errors are infrequent and, therefore, difficult to observe and record; and 2) the errors do not directly correspond to failure. Research reveals that humans are fairly adept at correcting or compensating for performance errors before such errors result in recognizable or recordable failures. Astronauts are notably adept high performers. Most failures are recorded only when multiple, small errors occur and humans are unable to recognize and correct or compensate for these errors in time to prevent a failure (Dismukes, Berman, & Loukopoulos, 2007) (Category III). More commonly, observers record variability in levels of performance. Some teams commit no observable errors but fail to achieve performance objectives or perform only adequately, while other teams commit some errors but perform spectacularly. Successful performance, therefore, cannot be viewed as simply the absence of errors or the avoidance of failure (Johnson Space Center (JSC) Joint Leadership Team, 2008). While failure is commonly attributed to making a major error, focusing solely on the elimination of error(s) does not significantly reduce the risk of failure. Failure may also occur when performance is simply insufficient or an effort is incapable of adjusting sufficiently to a contextual change (e.g., changing levels of autonomy).
ERIC Educational Resources Information Center
Ludtke, Oliver; Marsh, Herbert W.; Robitzsch, Alexander; Trautwein, Ulrich
2011-01-01
In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data…
da Silva, Brianna A; Krishnamurthy, Mahesh
2016-01-01
A 71-year-old female accidentally received thiothixene (Navane), an antipsychotic, instead of her anti-hypertensive medication amlodipine (Norvasc) for 3 months. She sustained physical and psychological harm including ambulatory dysfunction, tremors, mood swings, and personality changes. Despite the many opportunities for intervention, multiple health care providers overlooked her symptoms. Errors occurred at multiple care levels, including prescribing, initial pharmacy dispensation, hospitalization, and subsequent outpatient follow-up. This exemplifies the Swiss Cheese Model of how errors can occur within a system. Adverse drug events (ADEs) account for more than 3.5 million physician office visits and 1 million emergency department visits each year. It is believed that preventable medication errors impact more than 7 million patients and cost almost $21 billion annually across all care settings. About 30% of hospitalized patients have at least one discrepancy on discharge medication reconciliation. Medication errors and ADEs are an underreported burden that adversely affects patients, providers, and the economy. Medication reconciliation including an 'indication review' for each prescription is an important aspect of patient safety. The decreasing frequency of pill bottle reviews, suboptimal patient education, and poor communication between healthcare providers are factors that threaten patient safety. Medication error and ADEs cost billions of health care dollars and are detrimental to the provider-patient relationship.
Utilizing measure-based feedback in control-mastery theory: A clinical error.
Snyder, John; Aafjes-van Doorn, Katie
2016-09-01
Clinical errors and ruptures are an inevitable part of clinical practice. Often times, therapists are unaware that a clinical error or rupture has occurred, leaving no space for repair, and potentially leading to patient dropout and/or less effective treatment. One way to overcome our blind spots is by frequently and systematically collecting measure-based feedback from the patient. Patient feedback measures that focus on the process of psychotherapy such as the Patient's Experience of Attunement and Responsiveness scale (PEAR) can be used in conjunction with treatment outcome measures such as the Outcome Questionnaire 45.2 (OQ-45.2) to monitor the patient's therapeutic experience and progress. The regular use of these types of measures can aid clinicians in the identification of clinical errors and the associated patient deterioration that might otherwise go unnoticed and unaddressed. The current case study describes an instance of clinical error that occurred during the 2-year treatment of a highly traumatized young woman. The clinical error was identified using measure-based feedback and subsequently understood and addressed from the theoretical standpoint of the control-mastery theory of psychotherapy. An alternative hypothetical response is also presented and explained using control-mastery theory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Lobaugh, Lauren M Y; Martin, Lizabeth D; Schleelein, Laura E; Tyler, Donald C; Litman, Ronald S
2017-09-01
Wake Up Safe is a quality improvement initiative of the Society for Pediatric Anesthesia that contains a deidentified registry of serious adverse events occurring in pediatric anesthesia. The aim of this study was to describe and characterize reported medication errors to find common patterns amenable to preventative strategies. In September 2016, we analyzed approximately 6 years' worth of medication error events reported to Wake Up Safe. Medication errors were classified by: (1) medication category; (2) error type by phase of administration: prescribing, preparation, or administration; (3) bolus or infusion error; (4) provider type and level of training; (5) harm as defined by the National Coordinating Council for Medication Error Reporting and Prevention; and (6) perceived preventability. From 2010 to the time of our data analysis in September 2016, 32 institutions had joined and submitted data on 2087 adverse events during 2,316,635 anesthetics. These reports contained details of 276 medication errors, which comprised the third highest category of events behind cardiac and respiratory related events. Medication errors most commonly involved opioids and sedative/hypnotics. When categorized by phase of handling, 30 events occurred during preparation, 67 during prescribing, and 179 during administration. The most common error type was accidental administration of the wrong dose (N = 84), followed by syringe swap (accidental administration of the wrong syringe, N = 49). Fifty-seven (21%) reported medication errors involved medications prepared as infusions as opposed to 1 time bolus administrations. Medication errors were committed by all types of anesthesia providers, most commonly by attendings. Over 80% of reported medication errors reached the patient and more than half of these events caused patient harm. Fifteen events (5%) required a life sustaining intervention. Nearly all cases (97%) were judged to be either likely or certainly preventable. Our findings characterize the most common types of medication errors in pediatric anesthesia practice and provide guidance on future preventative strategies. Many of these errors will be almost entirely preventable with the use of prefilled medication syringes to avoid accidental ampule swap, bar-coding at the point of medication administration to prevent syringe swap and to confirm the proper dose, and 2-person checking of medication infusions for accuracy.
The effect of using different regions of interest on local and mean skin temperature.
Maniar, Nirav; Bach, Aaron J E; Stewart, Ian B; Costello, Joseph T
2015-01-01
The dynamic nature of tissue temperature and the subcutaneous properties, such as blood flow, fatness, and metabolic rate, lead to variation in local skin temperature. Therefore, we investigated the effects of using multiple regions of interest when calculating weighted mean skin temperature from four local sites. Twenty-six healthy males completed a single trial in a thermoneutral laboratory (mean ± SD: 24.0 (1.2)°C; 56 (8)% relative humidity; <0.1 m/s air speed). Mean skin temperature was calculated from four local sites (neck, scapula, hand and shin) in accordance with International Standards using digital infrared thermography. A 50 mm × 50 mm area, defined by strips of aluminium tape, created six unique regions of interest (top left quadrant, top right quadrant, bottom left quadrant, bottom right quadrant, centre quadrant, and the entire region of interest) at each of the local sites. The largest potential error in weighted mean skin temperature was calculated using a combination of a) the coolest and b) the warmest regions of interest at each of the local sites. Significant differences between the six regions of interest were observed at the neck (P<0.01), scapula (P<0.001) and shin (P<0.05); but not at the hand (P = 0.482). The largest difference (± SEM) at each site was as follows: neck 0.2 (0.1)°C; scapula 0.2 (0.0)°C; shin 0.1 (0.0)°C and hand 0.1 (0.1)°C. The largest potential error (mean ± SD) in weighted mean skin temperature was 0.4 (0.1)°C (P<0.001) and the associated 95% limits of agreement for these differences were 0.2-0.5 °C. Although we observed differences in local and mean skin temperature based on the region of interest employed, these differences were minimal and are not considered physiologically meaningful. Copyright © 2015 Elsevier Ltd. All rights reserved.
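For readers wanting to see the arithmetic, the sketch below computes a four-site weighted mean skin temperature; the 0.28/0.28/0.16/0.28 weights are those commonly cited from ISO 9886 for the neck/scapula/hand/shin combination and are an assumption here, not taken from the abstract.

```python
# Minimal sketch: four-site weighted mean skin temperature.
# Weights are the commonly cited ISO 9886 values (assumed, not from the paper).
def mean_skin_temperature(neck, scapula, hand, shin):
    return 0.28 * neck + 0.28 * scapula + 0.16 * hand + 0.28 * shin

# Shifting only the neck reading by the 0.2 deg C region-of-interest difference
# reported above changes the weighted mean by 0.28 * 0.2 = 0.056 deg C.
print(mean_skin_temperature(35.8, 34.9, 33.2, 31.5))
print(mean_skin_temperature(36.0, 34.9, 33.2, 31.5))
```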
NASA Astrophysics Data System (ADS)
Richmond, Peter; Roehner, Bertrand M.
2016-05-01
The Farr-Bertillon law says that for all age-groups the death rate of married people is lower than the death rate of people who are not married (i.e. single, widowed or divorced). Although this law has been known for over 150 years, it has never been established with well-controlled accuracy (e.g. error bars). This even led some authors to argue that it was a statistical artifact. It is true that the data must be selected with great care, especially for age groups of small size (e.g. widowers under 25). The observations reported in this paper were selected in the way experiments are designed in physics, that is to say with the objective of minimizing error bars. Data appropriate for mid-age groups may be unsuitable for young age groups and vice versa. The investigation led to the following results. (1) The FB effect is very similar for men and women, except that (at least in western countries) its amplitude is 20% higher for men. (2) There is a marked difference between single/divorced persons on the one hand, for whom the effect is largest around the age of 40, and widowed persons on the other hand, for whom the effect is largest around the age of 25. (3) When different causes of death are distinguished, the effect is largest for suicide and smallest for cancer. For heart disease and cerebrovascular accidents, the fact of being married divides the death rate by 2.2 compared to non-married persons. (4) For young widowers the death rates are up to 10 times higher than for married persons of same age. This extreme form of the FB effect will be referred to as the "young widower effect". Chinese data are used to explore this effect more closely. A possible connection between the FB effect and Martin Raff's "Stay alive" effect for the cells in an organism is discussed in the last section.
Joint Inversion for 3-Dimensional S-Velocity Mantle Structure Along the Tethyan Margin
2007-09-01
Hindu Kush and encompasses northeastern Africa, the Arabian peninsula, the Middle East, and part of the Atlantic Ocean for reference. We have fitted...several microplates within an area of one quarter of the Earth’s circumference yields this region rich with tectonic complexity. The three...assigned the largest errors. For the oceans we use a constraint of 10 km for Moho depth, but only for points also covered by data from our other data sets
Yazıcı, Yüksel Aydın; Şen, Humman; Aliustaoğlu, Suheyla; Sezer, Yiğit; İnce, Cengiz Haluk
2015-05-01
Malpractice is an event that occurs as a result of defective treatment in the course of providing health services. Not all errors in medical practice constitute malpractice, nor do all instances of malpractice result in harm or judicial proceedings. Injuries occurring during treatment may result from a complication or from medical malpractice. This study aims to evaluate the reports of the controversial cases brought to trial with the claim of medical malpractice, compiled by The Council of Forensic Medicine. Our study includes all of the cases brought to the Ministry of Justice, Council of Forensic Medicine General Assembly with the claim of medical malpractice within a period of 11 years between 2000 and 2011 (n=330). We found that 33.3% of the 330 cases were classified as "medical malpractice" by the General Assembly. Within this group, 14.2% resulted from treatment errors such as wrong or incomplete treatment or surgery, use of the wrong medication, delay in reaching a correct diagnosis despite the necessary examinations, inappropriate medical procedures, and treatments causally linked to an emergent injury to the patient. Another 9.7% arose from diagnostic errors such as failure to diagnose, wrong diagnosis, failure to request a consultation, failure to transfer the patient to a higher-level centre, and failure to intervene because a postoperative complication was not recognized in time. A further 8.8% occurred because of careless practice, including lack of necessary care and attention, lack of postoperative follow-up, failure to provide essential information, failure to attend when called for a patient, and intervention under suboptimal conditions. In addition, 0.3% developed from errors due to inexperience, and 0.3% occurred because of administrative mistakes stemming from malfunctions of the healthcare system. It is very important to analyze the errors properly in order to get medical malpractice under control. Examining the errors, the stages of health service at which they occur, and who commits them, and keeping regular and proper records of all examinations and treatments in the course of health service, will be a cornerstone for standardizing both occupational and forensic medicine practice.
Wind Power Forecasting Error Distributions: An International Comparison; Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodge, B. M.; Lew, D.; Milligan, M.
2012-09-01
Wind power forecasting is expected to be an important enabler for greater penetration of wind power into electricity systems. Because no wind forecasting system is perfect, a thorough understanding of the errors that do occur can be critical to system operation functions, such as the setting of operating reserve levels. This paper provides an international comparison of the distribution of wind power forecasting errors from operational systems, based on real forecast data. The paper concludes with an assessment of similarities and differences between the errors observed in different locations.
Drifter observations of submesoscale flow kinematics in the coastal ocean
NASA Astrophysics Data System (ADS)
Ohlmann, J. C.; Molemaker, M. J.; Baschek, B.; Holt, B.; Marmorino, G.; Smith, G.
2017-01-01
Fronts and eddies identified with aerial guidance are seeded with drifters to quantify submesoscale flow kinematics. The Lagrangian observations show mean divergence and vorticity values that can exceed 5 times the Coriolis frequency. Values are the largest observed in the field to date and represent an extreme departure from geostrophic dynamics. The study also quantifies errors and biases associated with Lagrangian observations of the underlying velocity strain tensor. The greatest error results from undersampling, even with a large number of drifters. A significant bias comes from inhomogeneous sampling of convergent regions that accumulate drifters within a few hours of deployment. The study demonstrates a Lagrangian sampling paradigm for targeted submesoscale structures over a broad range of scales and presents flow kinematic values associated with vertical velocities O(10) m h-1 that can have profound implications on ocean biogeochemistry.
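As a rough illustration of how drifter clusters yield such kinematic quantities, the sketch below fits a locally linear velocity field to synthetic drifter positions and velocities by least squares and reads divergence and vorticity off the fitted velocity-gradient tensor; the drifter data and flow field are invented for illustration, not the campaign's observations.

```python
# Minimal sketch: estimate divergence and vorticity from a drifter cluster
# by least-squares fitting u = u0 + ux*x + uy*y and v = v0 + vx*x + vy*y.
import numpy as np

rng = np.random.default_rng(2)
x, y = rng.uniform(-500, 500, (2, 12))            # synthetic drifter positions (m)
div_true, vort_true = 3e-4, 5e-4                  # s^-1, order of Coriolis-scale values
u = 0.5 * div_true * x - 0.5 * vort_true * y + rng.normal(0, 0.01, 12)
v = 0.5 * vort_true * x + 0.5 * div_true * y + rng.normal(0, 0.01, 12)

A = np.column_stack([np.ones_like(x), x, y])
(u0, ux, uy), *_ = np.linalg.lstsq(A, u, rcond=None)
(v0, vx, vy), *_ = np.linalg.lstsq(A, v, rcond=None)
print("divergence:", ux + vy, "vorticity:", vx - uy)   # recover ~3e-4 and ~5e-4
```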
CCSDT calculations of molecular equilibrium geometries
NASA Astrophysics Data System (ADS)
Halkier, Asger; Jørgensen, Poul; Gauss, Jürgen; Helgaker, Trygve
1997-08-01
CCSDT equilibrium geometries of CO, CH2, F2, HF, H2O and N2 have been calculated using the correlation-consistent cc-pVXZ basis sets. Similar calculations have been performed for SCF, CCSD and CCSD(T). In general, bond lengths decrease when improving the basis set and increase when improving the N-electron treatment. CCSD(T) provides an excellent approximation to CCSDT for bond lengths as the largest difference between CCSDT and CCSD(T) is 0.06 pm. At the CCSDT/cc-pVQZ level, basis set deficiencies, neglect of higher-order excitations, and incomplete treatment of core-correlation all give rise to errors of a few tenths of a pm, but to a large extent, these errors cancel. The CCSDT/cc-pVQZ bond lengths deviate on average only by 0.11 pm from experiment.
The Mathematics of Computer Error.
ERIC Educational Resources Information Center
Wood, Eric
1988-01-01
Why a computer error occurred is considered by analyzing the binary system and decimal fractions. How the computer stores numbers is then described. Knowledge of the mathematics behind computer operation is important if one wishes to understand and have confidence in the results of computer calculations. (MNS)
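A minimal illustration of the underlying issue: decimal fractions such as 0.1 have no exact binary representation, so small representation errors surface in ordinary arithmetic.

```python
# Illustrative only: 0.1 and 0.2 are stored as the nearest binary doubles,
# so their sum is not exactly 0.3.
a = 0.1 + 0.2
print(a)              # 0.30000000000000004
print(a == 0.3)       # False
print(f"{0.1:.20f}")  # the nearest representable double to 0.1
```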
Maassen, Gerard H
2010-08-01
In this Journal, Lewis and colleagues introduced a new Reliable Change Index (RCI(WSD)), which incorporated the within-subject standard deviation (WSD) of a repeated measurement design as the standard error. In this note, two opposite errors in using WSD this way are demonstrated. First, because WSD is the standard error of measurement of only a single assessment, it is too small when practice effects are absent; too many individuals will then be designated reliably changed. Second, WSD can grow without bound to the extent that differential practice effects occur, which can even make RCI(WSD) unable to detect any reliable change.
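To make the second point concrete, the sketch below simulates a two-occasion retest with hypothetical data and a simplified WSD based on the spread of within-person change; it is not necessarily the exact formulation debated in the note, but it shows how differential practice effects inflate WSD.

```python
# Illustrative only (hypothetical data; simplified two-occasion WSD).
# With a uniform practice effect WSD stays near the single-assessment SEM;
# when the practice effect differs across persons, WSD absorbs that spread.
import numpy as np

rng = np.random.default_rng(0)
n, sem = 200, 3.0                                  # persons, measurement noise SD
true_score = rng.normal(100, 15, n)
x1 = true_score + rng.normal(0, sem, n)

def wsd(a, b):
    d = b - a
    return np.sqrt(np.var(d, ddof=1) / 2.0)       # within-subject SD for 2 occasions

x2_uniform = true_score + 5.0 + rng.normal(0, sem, n)                      # same gain for everyone
x2_differential = true_score + rng.normal(5.0, 8.0, n) + rng.normal(0, sem, n)

print(wsd(x1, x2_uniform))       # about sem (~3)
print(wsd(x1, x2_differential))  # much larger: differential practice inflates WSD
```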
Deetz, Carl O; Nolan, Debra K; Scott, Mitchell G
2012-01-01
A long-standing practice in clinical laboratories has been to automatically repeat laboratory tests when values trigger automated "repeat rules" in the laboratory information system such as a critical test result. We examined 25,553 repeated laboratory values for 30 common chemistry tests from December 1, 2010, to February 28, 2011, to determine whether this practice is necessary and whether it may be possible to reduce repeat testing to improve efficiency and turnaround time for reporting critical values. An "error" was defined to occur when the difference between the initial and verified values exceeded the College of American Pathologists/Clinical Laboratory Improvement Amendments allowable error limit. The initial values from 2.6% of all repeated tests (668) were errors. Of these 668 errors, only 102 occurred for values within the analytic measurement range. Median delays in reporting critical values owing to repeated testing ranged from 5 (blood gases) to 17 (glucose) minutes.
Buhay, W.M.; Simpson, S.; Thorleifson, H.; Lewis, M.; King, J.; Telka, A.; Wilkinson, Philip M.; Babb, J.; Timsic, S.; Bailey, D.
2009-01-01
A short sediment core (162 cm), covering the period AD 920-1999, was sampled from the south basin of Lake Winnipeg for a suite of multi-proxy analyses leading towards a detailed characterisation of the recent millennial lake environment and hydroclimate of southern Manitoba, Canada. Information on the frequency and duration of major dry periods in southern Manitoba, in light of the changes that are likely to occur as a result of an increasingly warming atmosphere, is of specific interest in this study. Intervals of relatively enriched lake sediment cellulose oxygen isotope values (δ18Ocellulose) were found to occur from AD 1180 to 1230 (error range: AD 1104-1231 to 1160-1280), 1610-1640 (error range: AD 1571-1634 to 1603-1662), 1670-1720 (error range: AD 1643-1697 to 1692-1738) and 1750-1780 (error range: AD 1724-1766 to 1756-1794). Regional water balance, inferred from calculated Lake Winnipeg water oxygen isotope values (δ18Oinf-lw), suggest that the ratio of lake evaporation to catchment input may have been 25-40% higher during these isotopically distinct periods. Associated with the enriched δ18Ocellulose intervals are some depleted carbon isotope values associated with more abundantly preserved sediment organic matter (δ13COM). These suggest reduced microbial oxidation of terrestrially derived organic matter and/or subdued lake productivity during periods of minimised input of nutrients from the catchment area. With reference to other corroborating evidence, it is suggested that the AD 1180-1230, 1610-1640, 1670-1720 and 1750-1780 intervals represent four distinctly drier periods (droughts) in southern Manitoba, Canada. Additionally, lower-magnitude and duration dry periods may have also occurred from 1320 to 1340 (error range: AD 1257-1363), 1530-1540 (error range: AD 1490-1565 to 1498-1572) and 1570-1580 (error range: AD 1531-1599 to 1539-1606). © 2009 John Wiley & Sons, Ltd.
Avoiding common pitfalls in qualitative data collection and transcription.
Easton, K L; McComish, J F; Greenberg, R
2000-09-01
The subjective nature of qualitative research necessitates scrupulous scientific methods to ensure valid results. Although qualitative methods such as grounded theory, phenomenology, and ethnography yield rich data, consumers of research need to be able to trust the findings reported in such studies. Researchers are responsible for establishing the trustworthiness of qualitative research through a variety of ways. Specific challenges faced in the field can seriously threaten the dependability of the data. However, by minimizing potential errors that can occur when doing fieldwork, researchers can increase the trustworthiness of the study. The purpose of this article is to present three of the pitfalls that can occur in qualitative research during data collection and transcription: equipment failure, environmental hazards, and transcription errors. Specific strategies to minimize the risk for avoidable errors will be discussed.
Uncertainty in predictions of forest carbon dynamics: separating driver error from model error.
Spadavecchia, L; Williams, M; Law, B E
2011-07-01
We present an analysis of the relative magnitude and contribution of parameter and driver uncertainty to the confidence intervals on estimates of net carbon fluxes. Model parameters may be difficult or impractical to measure, while driver fields are rarely complete, with data gaps due to sensor failure and sparse observational networks. Parameters are generally derived through some optimization method, while driver fields may be interpolated from available data sources. For this study, we used data from a young ponderosa pine stand at Metolius, Central Oregon, and a simple daily model of coupled carbon and water fluxes (DALEC). An ensemble of acceptable parameterizations was generated using an ensemble Kalman filter and eddy covariance measurements of net C exchange. Geostatistical simulations generated an ensemble of meteorological driving variables for the site, consistent with the spatiotemporal autocorrelations inherent in the observational data from 13 local weather stations. Simulated meteorological data were propagated through the model to derive the uncertainty on the CO2 flux resultant from driver uncertainty typical of spatially extensive modeling studies. Furthermore, the model uncertainty was partitioned between temperature and precipitation. With at least one meteorological station within 25 km of the study site, driver uncertainty was relatively small (~10% of the total net flux), while parameterization uncertainty was larger (50% of the total net flux). The largest source of driver uncertainty was due to temperature (8% of the total flux). The combined effect of parameter and driver uncertainty was 57% of the total net flux. However, when the nearest meteorological station was > 100 km from the study site, uncertainty in net ecosystem exchange (NEE) predictions introduced by meteorological drivers increased by 88%. Precipitation estimates were a larger source of bias in NEE estimates than were temperature estimates, although the biases partly compensated for each other. The time scales on which precipitation errors occurred in the simulations were shorter than the temporal scales over which drought developed in the model, so drought events were reasonably simulated. The approach outlined here provides a means to assess the uncertainty and bias introduced by meteorological drivers in regional-scale ecological forecasting.
Ter Braak, Cajo J F; Peres-Neto, Pedro; Dray, Stéphane
2017-01-01
Statistical testing of trait-environment association from data is a challenge as there is no common unit of observation: the trait is observed on species, the environment on sites and the mediating abundance on species-site combinations. A number of correlation-based methods, such as the community weighted trait means method (CWM), the fourth-corner correlation method and the multivariate method RLQ, have been proposed to estimate such trait-environment associations. In these methods, valid statistical testing proceeds by performing two separate resampling tests, one site-based and the other species-based, and by assessing significance by the larger of the two p-values (the p_max test). Recently, regression-based methods using generalized linear models (GLM) have been proposed as a promising alternative with statistical inference via site-based resampling. We investigated the performance of this new approach along with approaches that mimicked the p_max test using GLM instead of fourth-corner. By simulation using models with additional random variation in the species response to the environment, the site-based resampling tests using GLM are shown to have severely inflated type I error, of up to 90%, when the nominal level is set at 5%. In addition, predictive modelling of such data using site-based cross-validation very often identified trait-environment interactions that had no predictive value. The problem that we identify is not an "omitted variable bias" problem as it occurs even when the additional random variation is independent of the observed trait and environment data. Instead, it is a problem of ignoring a random effect. In the same simulations, the GLM-based p_max test controlled the type I error in all models proposed so far in this context, but still gave slightly inflated error in more complex models that included both missing (but important) traits and missing (but important) environmental variables. For screening the importance of single trait-environment combinations, the fourth-corner test is shown to give almost the same results as the GLM-based tests in far less computing time.
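A rough sketch of the p_max idea under stated assumptions: the statistic below is a simplified abundance-weighted trait-environment covariance (not the exact fourth-corner correlation or a GLM), and significance is assessed by the larger of a site-based and a species-based permutation p-value. The data are toy placeholders.

```python
# Minimal sketch of a p_max-style double permutation test.
import numpy as np

rng = np.random.default_rng(0)

def trait_env_stat(L, E, T):
    """Simplified fourth-corner-style statistic: abundance-weighted
    covariance between site variable E and species trait T.
    L is the sites x species abundance matrix."""
    W = L / L.sum()
    e = E - (W.sum(axis=1) @ E)          # centre E by site weights
    t = T - (W.sum(axis=0) @ T)          # centre T by species weights
    return e @ W @ t

def pmax_test(L, E, T, n_perm=999):
    """Permute sites and species separately; report the larger p-value."""
    obs = abs(trait_env_stat(L, E, T))
    p_site = np.mean([abs(trait_env_stat(L, rng.permutation(E), T)) >= obs
                      for _ in range(n_perm)])
    p_species = np.mean([abs(trait_env_stat(L, E, rng.permutation(T))) >= obs
                         for _ in range(n_perm)])
    return max(p_site, p_species)

# toy data: 20 sites x 15 species
L = rng.poisson(2.0, size=(20, 15)).astype(float)
E = rng.normal(size=20)
T = rng.normal(size=15)
print(pmax_test(L, E, T))
```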
Fleming, Kevin K; Bandy, Carole L; Kimble, Matthew O
2010-01-01
The decision to shoot a gun engages executive control processes that can be biased by cultural stereotypes and perceived threat. The neural locus of the decision to shoot is likely to be found in the anterior cingulate cortex (ACC), where cognition and affect converge. Male military cadets at Norwich University (N=37) performed a weapon identification task in which they made rapid decisions to shoot when images of guns appeared briefly on a computer screen. Reaction times, error rates, and electroencephalogram (EEG) activity were recorded. Cadets reacted more quickly and accurately when guns were primed by images of Middle-Eastern males wearing traditional clothing. However, cadets also made more false positive errors when tools were primed by these images. Error-related negativity (ERN) was measured for each response. Deeper ERNs were found in the medial-frontal cortex following false positive responses. Cadets who made fewer errors also produced deeper ERNs, indicating stronger executive control. Pupil size was used to measure autonomic arousal related to perceived threat. Images of Middle-Eastern males in traditional clothing produced larger pupil sizes. An image of Osama bin Laden induced the largest pupil size, as would be predicted for the exemplar of Middle East terrorism. Cadets who showed greater increases in pupil size also made more false positive errors. Regression analyses were performed to evaluate predictions based on current models of perceived threat, stereotype activation, and cognitive control. Measures of pupil size (perceived threat) and ERN (cognitive control) explained significant proportions of the variance in false positive errors to Middle-Eastern males in traditional clothing, while measures of reaction time, signal detection response bias, and stimulus discriminability explained most of the remaining variance.
Rapid production of optimal-quality reduced-resolution representations of very large databases
Sigeti, David E.; Duchaineau, Mark; Miller, Mark C.; Wolinsky, Murray; Aldrich, Charles; Mineev-Weinstein, Mark B.
2001-01-01
View space representation data is produced in real time from a world space database representing terrain features. The world space database is first preprocessed. A database is formed having one element for each spatial region corresponding to a finest selected level of detail. A multiresolution database is then formed by merging elements, and a strict error metric is computed for each element at each level of detail that is independent of parameters defining the view space. The multiresolution database and associated strict error metrics are then processed in real time for real time frame representations. View parameters for a view volume comprising a view location and field of view are selected. Using the view parameters, the strict error metric is converted to a view-dependent error metric. Elements with the coarsest resolution are chosen for an initial representation. First elements that are at least partially within the view volume are selected from the initial representation data set. The first elements are placed in a split queue ordered by the value of the view-dependent error metric. It is then determined whether the number of first elements in the queue meets or exceeds a predetermined number of elements or whether the largest error metric is less than or equal to a selected upper error metric bound. If not, the element at the head of the queue is force split and the resulting elements are inserted into the queue. Force splitting continues until the determination is positive, forming a first multiresolution set of elements. The first multiresolution set of elements is then outputted as reduced resolution view space data representing the terrain features.
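A minimal sketch of the split-queue refinement loop described above, under the assumption that elements and their view-dependent errors are supplied by the caller; this is an illustration of the control flow only, not the patented implementation. The priority queue keeps the element with the largest view-dependent error at the head, so each force split targets the worst offender first.

```python
# Minimal sketch of error-driven force splitting with a priority queue.
import heapq

def refine(initial_elements, split, max_elements, error_bound):
    """initial_elements: list of (view_error, element) at the coarsest level.
    split(element) -> list of (view_error, child) pairs one level finer."""
    # heapq is a min-heap, so store negated errors to pop the largest error first
    heap = [(-err, i, elem) for i, (err, elem) in enumerate(initial_elements)]
    heapq.heapify(heap)
    counter = len(heap)                       # tie-breaker so elements never compare
    while heap:
        largest_error = -heap[0][0]
        if len(heap) >= max_elements or largest_error <= error_bound:
            break                             # determination positive: stop splitting
        _, _, elem = heapq.heappop(heap)
        for err, child in split(elem):
            heapq.heappush(heap, (-err, counter, child))
            counter += 1
    return [(-neg_err, elem) for neg_err, _, elem in heap]
```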
Statistics of the epoch of reionization 21-cm signal - I. Power spectrum error-covariance
NASA Astrophysics Data System (ADS)
Mondal, Rajesh; Bharadwaj, Somnath; Majumdar, Suman
2016-02-01
The non-Gaussian nature of the epoch of reionization (EoR) 21-cm signal has a significant impact on the error variance of its power spectrum P(k). We have used a large ensemble of seminumerical simulations and an analytical model to estimate the effect of this non-Gaussianity on the entire error-covariance matrix C_ij. Our analytical model shows that C_ij has contributions from two sources. One is the usual variance for a Gaussian random field, which scales inversely with the number of modes that go into the estimation of P(k). The other is the trispectrum of the signal. Using the simulated 21-cm Signal Ensemble, an ensemble of the Randomized Signal and Ensembles of Gaussian Random Ensembles we have quantified the effect of the trispectrum on the error variance C_ii. We find that its relative contribution is comparable to or larger than that of the Gaussian term for the k range 0.3 ≤ k ≤ 1.0 Mpc^-1, and can be even ~200 times larger at k ~ 5 Mpc^-1. We also establish that the off-diagonal terms of C_ij have statistically significant non-zero values which arise purely from the trispectrum. This further signifies that the errors in different k modes are not independent. We find a strong correlation between the errors at large k values (≥0.5 Mpc^-1), and a weak correlation between the smallest and largest k values. There is also a small anticorrelation between the errors in the smallest and intermediate k values. These results are relevant for the k range that will be probed by the current and upcoming EoR 21-cm experiments.
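For orientation, one common schematic decomposition of the power spectrum error covariance is written below; normalization conventions vary, so this is an illustrative form with assumed symbols (N_{k_i} for the number of Fourier modes in bin i, V for the survey volume, \bar{T} for the bin-averaged trispectrum), not necessarily the paper's exact expression.

```latex
% Schematic decomposition into a Gaussian (mode-counting) term and a
% trispectrum term; illustrative only, symbols as assumed in the lead-in.
C_{ij} \equiv \langle \delta P(k_i)\,\delta P(k_j)\rangle
\;\approx\;
\underbrace{\frac{P^{2}(k_i)}{N_{k_i}}\,\delta_{ij}}_{\text{Gaussian term}}
\;+\;
\underbrace{\frac{\bar{T}(k_i,k_j)}{V}}_{\text{trispectrum term}}
```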
Improved Uncertainty Quantification in Groundwater Flux Estimation Using GRACE
NASA Astrophysics Data System (ADS)
Reager, J. T., II; Rao, P.; Famiglietti, J. S.; Turmon, M.
2015-12-01
Groundwater change is difficult to monitor over large scales. One of the most successful approaches is in the remote sensing of time-variable gravity using NASA Gravity Recovery and Climate Experiment (GRACE) mission data, and successful case studies have created the opportunity to move towards a global groundwater monitoring framework for the world's largest aquifers. To achieve these estimates, several approximations are applied, including those in GRACE processing corrections, the formulation of the formal GRACE errors, destriping and signal recovery, and the numerical model estimation of snow water, surface water and soil moisture storage states used to isolate a groundwater component. A major weakness in these approaches is inconsistency: different studies have used different sources of primary and ancillary data, and may achieve different results based on alternative choices in these approximations. In this study, we present two cases of groundwater change estimation in California and the Colorado River basin, selected for their good data availability and varied climates. We achieve a robust numerical estimate of post-processing uncertainties resulting from land-surface model structural shortcomings and model resolution errors. Groundwater variations should demonstrate less variability than the overlying soil moisture state does, as groundwater has a longer memory of past events due to buffering by infiltration and drainage rate limits. We apply a model ensemble approach in a Bayesian framework constrained by the assumption of decreasing signal variability with depth in the soil column. We also discuss time variable errors vs. time constant errors, across-scale errors v. across-model errors, and error spectral content (across scales and across model). More robust uncertainty quantification for GRACE-based groundwater estimates would take all of these issues into account, allowing for more fair use in management applications and for better integration of GRACE-based measurements with observations from other sources.
Electromagnetic Emissions from a Modular Low Voltage Electro-Impulse De-Icing System
1989-03-01
composite wing section employed in these tests. Cy O'Young of the Boeing Commercial Airplane Company is thanked for his many helpful suggestions during this...a voltage spike which occurred simultaneously with the discharge of the coil. A 2.2 volt spike would be adequate to create a transmission error on a...signal was a voltage spike which occurs simultaneously with discharge of the coil. A 2.2 volt spike would be adequate to create an error on a digital
Reducing Uncertainty in the American Community Survey through Data-Driven Regionalization
Spielman, Seth E.; Folch, David C.
2015-01-01
The American Community Survey (ACS) is the largest survey of US households and is the principal source for neighborhood scale information about the US population and economy. The ACS is used to allocate billions in federal spending and is a critical input to social scientific research in the US. However, estimates from the ACS can be highly unreliable. For example, in over 72% of census tracts, the estimated number of children under 5 in poverty has a margin of error greater than the estimate. Uncertainty of this magnitude complicates the use of social data in policy making, research, and governance. This article presents a heuristic spatial optimization algorithm that is capable of reducing the margins of error in survey data via the creation of new composite geographies, a process called regionalization. Regionalization is a complex combinatorial problem. Here rather than focusing on the technical aspects of regionalization we demonstrate how to use a purpose built open source regionalization algorithm to process survey data in order to reduce the margins of error to a user-specified threshold.
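As a toy illustration of the regionalization idea (not the authors' heuristic spatial optimization algorithm), the sketch below greedily merges the least reliable region into an adjacent one until every region's margin-of-error-to-estimate ratio falls below a user threshold; the root-sum-of-squares rule for combining margins of error follows standard ACS guidance for sums of estimates.

```python
# Toy illustration of MOE-driven regionalization (not the published algorithm).
import math

def merge_until_reliable(tracts, neighbors, max_cv):
    """tracts: {tid: (estimate, moe)}; neighbors: {tid: set of adjacent tids}.
    Greedily merge the least-reliable region into its most reliable neighbor
    until every remaining region satisfies moe/estimate <= max_cv."""
    est = {t: e for t, (e, m) in tracts.items()}
    moe = {t: m for t, (e, m) in tracts.items()}
    nbrs = {t: set(n) for t, n in neighbors.items()}

    def cv(t):
        return moe[t] / est[t] if est[t] > 0 else float("inf")

    while True:
        worst = max(est, key=cv)
        if cv(worst) <= max_cv or not nbrs[worst]:
            break
        partner = min(nbrs[worst], key=cv)               # most reliable neighbor
        est[worst] += est.pop(partner)
        moe[worst] = math.hypot(moe[worst], moe.pop(partner))  # root-sum-of-squares MOE
        nbrs[worst] |= nbrs.pop(partner)
        nbrs[worst].discard(worst)
        nbrs[worst].discard(partner)
        for t in nbrs:                                   # re-point adjacency to the merged region
            if partner in nbrs[t]:
                nbrs[t].discard(partner)
                if t != worst:
                    nbrs[t].add(worst)
    return est, moe

tracts = {"A": (100, 90), "B": (120, 80), "C": (400, 60), "D": (90, 85)}
adj = {"A": {"B", "C"}, "B": {"A", "D"}, "C": {"A", "D"}, "D": {"B", "C"}}
print(merge_until_reliable(tracts, adj, max_cv=0.3))
```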
Troposphere Delay Raytracing Applied in VLBI Analysis
NASA Astrophysics Data System (ADS)
Eriksson, David; MacMillan, Daniel; Gipson, John
2014-12-01
Tropospheric delay modeling error is one of the largest sources of error in VLBI analysis. For standard operational solutions, we use the VMF1 elevation-dependent mapping functions derived from European Centre for Medium Range Forecasting (ECMWF) data. These mapping functions assume that tropospheric delay at a site is azimuthally symmetric. As this assumption does not reflect reality, we have instead determined the raytrace delay along the signal path through the three-dimensional troposphere refractivity field for each VLBI quasar observation. We calculated the troposphere refractivity fields from the pressure, temperature, specific humidity, and geopotential height fields of the NASA GSFC GEOS-5 numerical weather model. We discuss results using raytrace delay in the analysis of the CONT11 R&D sessions. When applied in VLBI analysis, baseline length repeatabilities were better for 70% of baselines with raytraced delays than with VMF1 mapping functions. Vertical repeatabilities were better for 2/3 of all stations. The reference frame scale bias error was 0.02 ppb for raytracing versus 0.08 ppb and 0.06 ppb for VMF1 and NMF, respectively.
Kim, Joo Hyoung; Cha, Jung Yul; Hwang, Chung Ju
2012-12-01
This in vitro study was undertaken to evaluate the physical, chemical, and biological properties of commercially available metal orthodontic brackets in South Korea, because national standards for these products are lacking. Four bracket brands were tested for dimensional accuracy (manufacturing errors in angulation and torque), cytotoxicity, composition, elution, and corrosion: Archist (Daeseung Medical), Victory (3M Unitek), Kosaka (Tomy), and Confidence (Shinye Odontology Materials). The tested brackets showed no significant differences in manufacturing errors in angulation, but Confidence brackets showed a significant difference in manufacturing errors in torque. None of the brackets were cytotoxic to mouse fibroblasts. The metal ion components did not show a regular increasing or decreasing trend of elution over time, but the volume of the total eluted metal ions increased: Archist brackets had the maximal Cr elution and Confidence brackets appeared to have the largest volume of total eluted metal ions because of excessive Ni elution. Confidence brackets showed the lowest corrosion resistance during potentiodynamic polarization. The results of this study could potentially be applied in establishing national standards for metal orthodontic brackets and in evaluating commercially available products.
Cohen, Aaron M
2008-01-01
We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2b2 task organizers, using micro- and macro-averaged F1 as the primary performance metric. Our best performing system achieved a micro-F1 of 0.9000 on the test collection, equivalent to the best performing system submitted to the i2b2 challenge. Hot-spot identification, zero-vector filtering, classifier weighting, and error correcting output coding contributed additively to increased performance, with hot-spot identification having by far the largest positive effect. High performance on automatic identification of patient smoking status from discharge summaries is achievable with the efficient and straightforward machine learning techniques studied here.
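One of the listed ingredients, inverse class frequency weighting, can be sketched with off-the-shelf tools; scikit-learn's class_weight="balanced" option weights each class by n_samples / (n_classes * class_count), which is an inverse-frequency scheme. The documents and labels below are invented placeholders, not i2b2 data, and the pipeline is a generic stand-in for the authors' system.

```python
# Minimal sketch of a class-weighted linear text classifier (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

docs = ["patient denies tobacco use", "20 pack-year smoking history",
        "quit smoking 5 years ago", "no mention of smoking"]
labels = ["non-smoker", "current smoker", "past smoker", "unknown"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LinearSVC(class_weight="balanced"))   # inverse class frequency weights
clf.fit(docs, labels)
print(clf.predict(["smokes one pack per day"]))
```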
A global perspective of the limits of prediction skill based on the ECMWF ensemble
NASA Astrophysics Data System (ADS)
Zagar, Nedjeljka
2016-04-01
This talk presents a new model of global forecast error growth applied to the forecast errors simulated by the ensemble prediction system (ENS) of the ECMWF. The proxy for forecast errors is the total spread of the ECMWF operational ensemble forecasts obtained by decomposing the wind and geopotential fields into normal-mode functions. In this way, the ensemble spread can be quantified separately for the balanced and inertio-gravity (IG) modes for every forecast range. Ensemble reliability is defined for the balanced and IG modes by comparing the ensemble spread with the control analysis at each scale. The results show that initial uncertainties in the ECMWF ENS are largest in the tropical large-scale modes and their spatial distribution is similar to the distribution of the short-range forecast errors. Initially the ensemble spread grows most in the smallest scales and in the synoptic range of the IG modes, but the overall growth is dominated by the increase of spread in balanced modes in synoptic and planetary scales in the midlatitudes. During the forecasts, the distribution of spread in the balanced and IG modes grows towards the climatological spread distribution characteristic of the analyses. The ENS system is found to be somewhat under-dispersive, which is associated with the lack of tropical variability, primarily the Kelvin waves. The new model of forecast error growth has three fitting parameters to parameterize the initial fast growth and a slower exponential error growth later on. The asymptotic values of forecast errors are independent of the exponential growth rate. It is found that the errors due to unbalanced dynamics saturate in around 10 days, while the balanced and total errors saturate in 3 to 4 weeks. Reference: Žagar, N., R. Buizza, and J. Tribbia, 2015: A three-dimensional multivariate modal analysis of atmospheric predictability with application to the ECMWF ensemble. J. Atmos. Sci., 72, 4423-4444.
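As a generic illustration of a three-parameter saturating error-growth curve (a Dalcher-Kalnay-style form chosen here for familiarity, not necessarily the model fitted in the talk), the sketch below integrates dE/dt = (a*E + s)(1 - E/E_inf): the source term s gives the initial fast growth, a the exponential stage, and E_inf the asymptote.

```python
# Generic saturating error-growth curve (illustrative parameters, not fitted values).
import numpy as np
from scipy.integrate import solve_ivp

a, s, E_inf = 0.4, 0.05, 1.0                      # per day, normalized spread
growth = lambda t, E: (a * E + s) * (1.0 - E / E_inf)
sol = solve_ivp(growth, (0, 30), [0.01], t_eval=np.arange(0, 31, 5))
for t, E in zip(sol.t, sol.y[0]):
    print(f"day {t:4.0f}: normalized ensemble spread {E:.2f}")
```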
NASA Astrophysics Data System (ADS)
Kocifaj, Miroslav; Gueymard, Christian A.
2011-02-01
Aerosol optical depth (AOD) has a crucial importance for estimating the optical properties of the atmosphere, and is constantly present in optical models of aerosol systems. Any error in aerosol optical depth (∂AOD) has direct and indirect consequences. On the one hand, such errors affect the accuracy of radiative transfer models (thus implying, e.g., potential errors in the evaluation of radiative forcing by aerosols). Additionally, any error in determining AOD is reflected in the retrieved microphysical properties of aerosol particles, which might therefore be inaccurate. Three distinct effects (circumsolar radiation, optical mass, and solar disk's brightness distribution) affecting ∂AOD are qualified and quantified in the present study. The contribution of circumsolar (CS) radiation to the measured flux density of direct solar radiation has received more attention than the two other effects in the literature. It varies rapidly with meteorological conditions and size distribution of the aerosol particles, but also with instrument field of view. Numerical simulations of the three effects just mentioned were conducted, assuming otherwise "perfect" experimental conditions. The results show that CS is responsible for the largest error in AOD, while the effect of brightness distribution (BD) has only a negligible impact. The optical mass (OM) effect yields negligible errors in AOD generally, but noticeable errors for low sun (within 10° of the horizon). In general, the OM and BD effects result in negative errors in AOD (i.e. the true AOD is smaller than that of the experimental determination), conversely to CS. Although the rapid increase in optical mass at large zenith angles can change the sign of ∂AOD, the CS contribution frequently plays the leading role in ∂AOD. To maximize the accuracy in AOD retrievals, the CS effect should not be ignored. In practice, however, this effect can be difficult to evaluate correctly unless the instantaneous aerosols size distribution is known from, e.g., inversion techniques.
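A minimal sketch of why the CS error has the sign described above, assuming a simplified Beer-Lambert retrieval in which only Rayleigh scattering and aerosol extinction matter (gas absorption omitted): adding a circumsolar contribution to the measured direct-beam signal makes the retrieved AOD smaller than the true value.

```python
# Simplified AOD retrieval via Beer-Lambert, with and without CS contamination.
import numpy as np

def retrieved_aod(V_measured, V0, airmass, tau_rayleigh=0.10):
    total_od = -np.log(V_measured / V0) / airmass
    return total_od - tau_rayleigh

V0, m, tau_r, true_aod = 1.0, 2.0, 0.10, 0.30
V_true = V0 * np.exp(-m * (tau_r + true_aod))     # ideal direct-beam signal
V_with_cs = V_true * 1.02                          # +2% circumsolar contamination (assumed)

print(retrieved_aod(V_true, V0, m))      # ~0.30
print(retrieved_aod(V_with_cs, V0, m))   # < 0.30: AOD underestimated, as discussed above
```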
Yeck, William; Weingarten, Matthew; Benz, Harley M.; McNamara, Daniel E.; Bergman, E.; Herrmann, R.B; Rubinstein, Justin L.; Earle, Paul
2016-01-01
The Mw 5.1 Fairview, Oklahoma, earthquake on 13 February 2016 and its associated seismicity produced the largest moment release in the central and eastern United States since the 2011 Mw 5.7 Prague, Oklahoma, earthquake sequence and is one of the largest earthquakes potentially linked to wastewater injection. This energetic sequence has produced five earthquakes with Mw 4.4 or larger. Almost all of these earthquakes occur in Precambrian basement on a partially unmapped 14 km long fault. Regional injection into the Arbuckle Group increased approximately sevenfold in the 36 months prior to the start of the sequence (January 2015). We suggest far-field pressurization from clustered, high-rate wells greater than 12 km from this sequence induced these earthquakes. As compared to the Fairview sequence, seismicity is diffuse near high-rate wells, where pressure changes are expected to be largest. This points to the critical role that preexisting faults play in the occurrence of large induced earthquakes.
Shortleaf pine seed production in natural stands in the Ouachita and Ozark mountains
Michael G. Shelton; Robert F. Wittwer
1996-01-01
Seed production of shortleaf pine (Pinus echinata Mill.) was monitored from 1965 to 1974 to determine the periodicity of seed crops in both woods-run stands and seed-production areas. One bumper and two good seed crops occurred during the 9-yr period. The two largest crops occurred in successive years, then seed production was low for 4 yr before...
Golz, Jürgen; MacLeod, Donald I A
2003-05-01
We analyze the sources of error in specifying color in CRT displays. These include errors inherent in the use of the color matching functions of the CIE 1931 standard observer when only colorimetric, not radiometric, calibrations are available. We provide transformation coefficients that prove to correct the deficiencies of this observer very well. We consider four different candidate sets of cone sensitivities. Some of these differ substantially; variation among candidate cone sensitivities exceeds the variation among phosphors. Finally, the effects of the recognized forms of observer variation on the visual responses (cone excitations or cone contrasts) generated by CRT stimuli are investigated and quantitatively specified. Cone pigment polymorphism gives rise to variation of a few per cent in relative excitation by the different phosphors--a variation larger than the errors ensuing from the adoption of the CIE standard observer, though smaller than the differences between some candidate cone sensitivities. Macular pigmentation has a larger influence, affecting mainly responses to the blue phosphor. The estimated combined effect of all sources of observer variation is comparable in magnitude with the largest differences between competing cone sensitivity estimates but is not enough to disrupt very seriously the relation between the L and M cone weights and the isoluminance settings of individual observers. It is also comparable with typical instrumental colorimetric errors, but we discuss these only briefly.
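For readers unfamiliar with the bookkeeping, the sketch below shows how cone excitations for a CRT stimulus reduce to a 3x3 matrix applied to linearized gun values once phosphor spectra and cone sensitivities are chosen; the Gaussian spectra are invented placeholders, not any of the candidate sensitivity sets discussed above.

```python
# Minimal sketch: phosphor spectra x cone sensitivities -> 3x3 matrix -> cone excitations.
import numpy as np

wl = np.arange(400, 701, 10)                                  # wavelength grid (nm)
def gaussian(peak, width):
    return np.exp(-0.5 * ((wl - peak) / width) ** 2)

phosphors = np.stack([gaussian(610, 30), gaussian(540, 40), gaussian(450, 25)])  # R, G, B radiance (placeholder)
cones     = np.stack([gaussian(565, 50), gaussian(540, 45), gaussian(445, 30)])  # L, M, S sensitivity (placeholder)

M = cones @ phosphors.T                                       # 3x3: gun values -> cone excitations
rgb_linear = np.array([0.8, 0.5, 0.2])                        # linearized (gamma-corrected) gun values
print(M)
print(M @ rgb_linear)                                         # L, M, S excitations for this stimulus
```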
Post-processing through linear regression
NASA Astrophysics Data System (ADS)
van Schaeybroeck, B.; Vannitsem, S.
2011-03-01
Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-square (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-square method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the Lorenz-63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors play an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR), which yield the correct variability and the largest correlation between ensemble error and spread, should be preferred.
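A minimal sketch of the simplest member of this family, OLS post-processing, with a synthetic biased and damped forecast standing in for the Lorenz-63 experiments: regress past observations on past forecasts, then apply the fitted correction to new forecasts.

```python
# Minimal sketch of OLS forecast post-processing on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
truth = rng.normal(size=500)
forecast = 0.7 * truth + 0.5 + rng.normal(scale=0.3, size=500)   # biased, damped forecast

X = np.column_stack([np.ones_like(forecast), forecast])
coef, *_ = np.linalg.lstsq(X, truth, rcond=None)                 # OLS fit of obs on forecast

new_forecast = np.array([0.2, 1.4, -0.9])
corrected = coef[0] + coef[1] * new_forecast                     # apply the correction
print(coef, corrected)
```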
Altitude Registration of Limb-Scattered Radiation
NASA Technical Reports Server (NTRS)
Moy, Leslie; Bhartia, Pawan K.; Jaross, Glen; Loughman, Robert; Kramarova, Natalya; Chen, Zhong; Taha, Ghassan; Chen, Grace; Xu, Philippe
2017-01-01
One of the largest constraints to the retrieval of accurate ozone profiles from UV backscatter limb sounding sensors is altitude registration. Two methods, Rayleigh scattering attitude sensing (RSAS) and the absolute radiance residual method (ARRM), are able to determine altitude registration to the accuracy necessary for long-term ozone monitoring. The methods compare model calculations of radiances to measured radiances and are independent of onboard tracking devices. RSAS determines absolute altitude errors, but, because the method is susceptible to aerosol interference, it is limited to latitudes and time periods with minimal aerosol contamination. ARRM, a new technique introduced in this paper, can be applied across all seasons and altitudes. However, it is only appropriate for relative altitude error estimates. The application of RSAS to Limb Profiler (LP) measurements from the Ozone Mapping and Profiler Suite (OMPS) on board the Suomi NPP (SNPP) satellite indicates tangent height (TH) errors greater than 1 km with an absolute accuracy of ±200 m. Results using ARRM indicate an intra-orbital TH change of approximately 300-400 m, varying seasonally by ±100 m, likely due to either errors in the spacecraft pointing or in the geopotential height (GPH) data that we use in our analysis. ARRM shows a change of approximately 200 m over 5 years with a relative (long-term) accuracy of 100 m outside the polar regions.
Caprihan, A; Pearlson, G D; Calhoun, V D
2008-08-15
Principal component analysis (PCA) is often used to reduce the dimension of data before applying more sophisticated data analysis methods such as non-linear classification algorithms or independent component analysis. This practice is based on selecting components corresponding to the largest eigenvalues. If the ultimate goal is separation of the data into two groups, then this set of components need not have the most discriminatory power. We measured the distance between two such populations using Mahalanobis distance and chose the eigenvectors to maximize it, a modified PCA method, which we call the discriminant PCA (DPCA). DPCA was applied to diffusion tensor-based fractional anisotropy images to distinguish age-matched schizophrenia subjects from healthy controls. The performance of the proposed method was evaluated by the leave-one-out method. We show that for this fractional anisotropy data set, the classification error with 60 components was close to the minimum error and that the Mahalanobis distance was twice as large with DPCA as with PCA. Finally, by masking the discriminant function with the white matter tracts of the Johns Hopkins University atlas, we identified the left superior longitudinal fasciculus as the tract that gave the least classification error. In addition, with six optimally chosen tracts the classification error was zero.
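A simplified sketch of the DPCA idea (an illustration of the concept, not the authors' exact procedure): compute ordinary principal components, then rank them by a standardized between-group separation in component space rather than by eigenvalue, so that a discriminative but low-variance direction is retained.

```python
# Simplified discriminant-PCA-style ranking of components (illustrative only).
import numpy as np

def dpca_ranking(X, y, n_keep):
    """X: samples x features, y: binary group labels. Return indices of the
    n_keep principal components with the largest standardized group separation."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)     # PCA via SVD
    scores = Xc @ Vt.T                                    # component scores
    g0, g1 = scores[y == 0], scores[y == 1]
    pooled_sd = np.sqrt(0.5 * (g0.var(axis=0) + g1.var(axis=0)))
    separation = np.abs(g0.mean(axis=0) - g1.mean(axis=0)) / (pooled_sd + 1e-12)
    return np.argsort(separation)[::-1][:n_keep]

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 20))
y = np.repeat([0, 1], 40)
X[y == 1, 5] += 0.8            # group difference hidden in a low-variance direction
print(dpca_ranking(X, y, n_keep=5))
```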
Error protection capability of space shuttle data bus designs
NASA Technical Reports Server (NTRS)
Proch, G. E.
1974-01-01
The role of error protection in assuring reliable digital data communications is discussed. The need for error protection on the space shuttle data bus system has been recognized and specified as a hardware requirement. The error protection techniques of particular concern are those designed into the Shuttle Main Engine Interface (MEI) and the Orbiter Multiplex Interface Adapter (MIA). The techniques and circuit design details proposed for this hardware are analyzed in this report to determine their error protection capability. The capability is calculated in terms of the probability of an undetected word error. Calculated results are reported for a noise environment that ranges from the nominal noise level stated in the hardware specifications to burst levels which may occur in extreme or anomalous conditions.
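As a toy example in the same spirit (not the actual MEI/MIA coding schemes), the sketch below computes the probability of an undetected word error for an n-bit word protected by a single parity bit on a binary symmetric channel: an error goes undetected when an even, nonzero number of bits flip.

```python
# Toy undetected-word-error probability for a single parity check on a BSC.
from math import comb

def p_undetected_parity(n_bits, p):
    """Probability that an even, nonzero number of the n_bits flip."""
    return sum(comb(n_bits, k) * p**k * (1 - p)**(n_bits - k)
               for k in range(2, n_bits + 1, 2))

for p in (1e-3, 1e-4, 1e-5):           # illustrative channel bit-error rates
    print(p, p_undetected_parity(28, p))
```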
Segovia Hernández, Manuel
2005-01-01
Legionella, the causative agent of legionnaire's disease (LD), can survive and grow in amoebic cells. Free-living amoebae may play a role in the selection of virulence traits and in adaptation to survival in macrophages, and represent an important reservoir of Legionella. These amoebae may act as a Trojan horse bringing hidden bacteria within the human environments. The community outbreak of LD that occurred in Murcia in July 2001, the largest such outbreak ever reported, afforded an unusual opportunity to improve the knowledge of this disease.
Assessing uncertainty in high-resolution spatial climate data across the US Northeast.
Bishop, Daniel A; Beier, Colin M
2013-01-01
Local and regional-scale knowledge of climate change is needed to model ecosystem responses, assess vulnerabilities and devise effective adaptation strategies. High-resolution gridded historical climate (GHC) products address this need, but come with multiple sources of uncertainty that are typically not well understood by data users. To better understand this uncertainty in a region with a complex climatology, we conducted a ground-truthing analysis of two 4 km GHC temperature products (PRISM and NRCC) for the US Northeast using 51 Cooperative Network (COOP) weather stations utilized by both GHC products. We estimated GHC prediction error for monthly temperature means and trends (1980-2009) across the US Northeast and evaluated any landscape effects (e.g., elevation, distance from coast) on those prediction errors. Results indicated that station-based prediction errors for the two GHC products were similar in magnitude, but on average, the NRCC product predicted cooler than observed temperature means and trends, while PRISM was cooler for means and warmer for trends. We found no evidence for systematic sources of uncertainty across the US Northeast, although errors were largest at high elevations. Errors in the coarse-scale (4 km) digital elevation models used by each product were correlated with temperature prediction errors, more so for NRCC than PRISM. In summary, uncertainty in spatial climate data has many sources and we recommend that data users develop an understanding of uncertainty at the appropriate scales for their purposes. To this end, we demonstrate a simple method for utilizing weather stations to assess local GHC uncertainty and inform decisions among alternative GHC products.
10 CFR 74.59 - Quality assurance and accounting requirements.
Code of Federal Regulations, 2013 CFR
2013-01-01
... occurs which has the potential to affect a measurement result or when program data, generated by tests.../receiver differences, inventory differences, and process differences. (4) Utilize the data generated during... difference (SEID) and the standard error of the process differences. Calibration and measurement error data...
10 CFR 74.59 - Quality assurance and accounting requirements.
Code of Federal Regulations, 2014 CFR
2014-01-01
... occurs which has the potential to affect a measurement result or when program data, generated by tests.../receiver differences, inventory differences, and process differences. (4) Utilize the data generated during... difference (SEID) and the standard error of the process differences. Calibration and measurement error data...
75 FR 37815 - Submission for OMB review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-30
... agencies to annually report error rate measures. Section 2 of the Improper Payments Information Act... requires preparation and submission of a report of errors occurring in the administration of Child Care... the annual Agency Financial Report (AFR) and will provide information necessary to offer technical...
World's largest macroalgal bloom caused by expansion of seaweed aquaculture in China.
Liu, Dongyan; Keesing, John K; Xing, Qianguo; Shi, Ping
2009-06-01
In late June 2008, just weeks before the opening of the Beijing Olympics, a massive green-tide occurred covering about 600 km² along the coast of Qingdao, host city for the Olympic sailing regatta. Coastal eutrophication was quickly blamed by the international media and some scientists. However, we explored an alternative hypothesis: that the green-tide was caused by the rapid expansion of Porphyra yezoensis aquaculture along the coastline over 180 km away from Qingdao, together with oceanographic conditions which favoured rapid growth of the bloom and contributed to its transport north into the Yellow Sea and then onshore northwest to Qingdao. At its peak offshore, the bloom covered 1200 km² and affected 40,000 km². This is the largest green-tide ever reported, the most extensive translocation of a green-tide and the first case of expansive seaweed aquaculture leading to a green-tide. Given similar oceanographic conditions to those that occurred in 2008, these green-tides may re-occur unless mitigation measures such as those proposed here are taken.
Human operator response to error-likely situations in complex engineering systems
NASA Technical Reports Server (NTRS)
Morris, Nancy M.; Rouse, William B.
1988-01-01
The causes of human error in complex systems are examined. First, a conceptual framework is provided in which two broad categories of error are discussed: errors of action, or slips, and errors of intention, or mistakes. Conditions in which slips and mistakes might be expected to occur are identified, based on existing theories of human error. Regarding the role of workload, it is hypothesized that workload may act as a catalyst for error. Two experiments are presented in which humans' responses to error-likely situations were examined. Subjects controlled PLANT under a variety of conditions and periodically provided subjective ratings of mental effort. A complex pattern of results was obtained, which was not consistent with predictions. Generally, the results of this research indicate that: (1) humans respond to conditions in which errors might be expected by attempting to reduce the possibility of error, and (2) adaptation to conditions is a potent influence on human behavior in discretionary situations. Subjects' explanations for changes in effort ratings are also explored.
Issues with data and analyses: Errors, underlying themes, and potential solutions
Allison, David B.
2018-01-01
Some aspects of science, taken at the broadest level, are universal in empirical research. These include collecting, analyzing, and reporting data. In each of these aspects, errors can and do occur. In this work, we first discuss the importance of focusing on statistical and data errors to continually improve the practice of science. We then describe underlying themes of the types of errors and postulate contributing factors. To do so, we describe a case series of relatively severe data and statistical errors coupled with surveys of some types of errors to better characterize the magnitude, frequency, and trends. Having examined these errors, we then discuss the consequences of specific errors or classes of errors. Finally, given the extracted themes, we discuss methodological, cultural, and system-level approaches to reducing the frequency of commonly observed errors. These approaches will plausibly contribute to the self-critical, self-correcting, ever-evolving practice of science, and ultimately to furthering knowledge. PMID:29531079
Mapping DNA polymerase errors by single-molecule sequencing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, David F.; Lu, Jenny; Chang, Seungwoo
Genomic integrity is compromised by DNA polymerase replication errors, which occur in a sequence-dependent manner across the genome. Accurate and complete quantification of a DNA polymerase's error spectrum is challenging because errors are rare and difficult to detect. We report a high-throughput sequencing assay to map in vitro DNA replication errors at the single-molecule level. Unlike previous methods, our assay is able to rapidly detect a large number of polymerase errors at base resolution over any template substrate without quantification bias. To overcome the high error rate of high-throughput sequencing, our assay uses a barcoding strategy in which each replication product is tagged with a unique nucleotide sequence before amplification. Here, this allows multiple sequencing reads of the same product to be compared so that sequencing errors can be found and removed. We demonstrate the ability of our assay to characterize the average error rate, error hotspots and lesion bypass fidelity of several DNA polymerases.
Mapping DNA polymerase errors by single-molecule sequencing
Lee, David F.; Lu, Jenny; Chang, Seungwoo; ...
2016-05-16
Genomic integrity is compromised by DNA polymerase replication errors, which occur in a sequence-dependent manner across the genome. Accurate and complete quantification of a DNA polymerase's error spectrum is challenging because errors are rare and difficult to detect. We report a high-throughput sequencing assay to map in vitro DNA replication errors at the single-molecule level. Unlike previous methods, our assay is able to rapidly detect a large number of polymerase errors at base resolution over any template substrate without quantification bias. To overcome the high error rate of high-throughput sequencing, our assay uses a barcoding strategy in which each replication product is tagged with a unique nucleotide sequence before amplification. Here, this allows multiple sequencing reads of the same product to be compared so that sequencing errors can be found and removed. We demonstrate the ability of our assay to characterize the average error rate, error hotspots and lesion bypass fidelity of several DNA polymerases.
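The barcoding strategy described above works because random sequencing errors rarely recur at the same position across independent reads of the same tagged molecule, so a per-position consensus removes them. A simplified sketch of that consensus step (not the published pipeline; real workflows also handle alignment, indels and quality filtering):

```python
from collections import Counter, defaultdict

def consensus_by_barcode(reads):
    """Group reads by barcode and call a per-position majority consensus.
    `reads` is an iterable of (barcode, sequence) pairs; sequences sharing
    a barcode are assumed to be aligned and of equal length."""
    groups = defaultdict(list)
    for barcode, seq in reads:
        groups[barcode].append(seq)

    consensus = {}
    for barcode, seqs in groups.items():
        calls = []
        for column in zip(*seqs):  # iterate positions across all reads
            base, _ = Counter(column).most_common(1)[0]
            calls.append(base)
        consensus[barcode] = "".join(calls)
    return consensus

# Example: one read of the tagged molecule carries a sequencing error
print(consensus_by_barcode([("BC1", "ACGT"), ("BC1", "ACGA"), ("BC1", "ACGT")]))
```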
NASA Astrophysics Data System (ADS)
Krawczynski, M.; McLean, N.
2017-12-01
One of the most accurate and useful ways of determining the age of rocks that formed more than about 500,000 years ago is uranium-lead (U-Pb) geochronology. Earth scientists use U-Pb geochronology to put together the geologic history of entire regions and of specific events, like the mass extinction of all non-avian dinosaurs about 66 million years ago or the catastrophic eruptions of supervolcanoes like the one currently centered at Yellowstone. The mineral zircon is often utilized because it is abundant, durable, and readily incorporates uranium into its crystal structure. But it excludes thorium, whose isotope 230Th is part of the naturally occurring isotopic decay chain from 238U to 206Pb. Calculating a date from the relative abundances of 206Pb and 238U therefore requires a correction for the missing 230Th. Existing experimental and observational constraints on the way U and Th behave when zircon crystallizes from a melt are not known precisely enough, and thus currently the uncertainty introduced by the 'Th correction' is one of the largest sources of systematic error in determining dates. Here we present preliminary results of our study of actinide partitioning between zircon and melt. Experiments have been conducted to grow zircon from melts doped with U and Th that mimic natural magmas at a range of temperatures and compositions. Synthetic zircons are separated from their coexisting glass and, using high-precision and high-spatial-resolution techniques, the abundance and distribution of U and Th in each phase is determined. These preliminary experiments are the beginning of a study that will result in precise determination of the zircon/melt uranium and thorium partition coefficients under a wide variety of naturally occurring conditions. These data will be fit to a multidimensional surface using maximum likelihood regression techniques, so that the ratio of partition coefficients can be calculated for any set of known parameters. The results of this study will reduce the largest source of uncertainty in dating young zircons and improve the accuracy of U-Pb dates, improving our ability to tell time during geologic processes. More accurate calibration of the geologic timescale is important to geologists of all disciplines, from paleontology to planetary cosmochemistry to geobiology.
Dynamics of the Wulong landslide revealed by broadband seismic records
NASA Astrophysics Data System (ADS)
Li, Zhengyuan; Huang, Xinghui; Xu, Qiang; Yu, Dan; Fan, Junyi; Qiao, Xuejun
2017-02-01
The catastrophic Wulong landslide occurred at 14:51 (Beijing time, UTC+8) on 5 June 2009, in Wulong Prefecture, Southwest China. This rockslide occurred in a complex topographic environment. Seismic signals generated by this event were recorded by the seismic network deployed in the surrounding area, and long-period signals were extracted from 8 broadband seismic stations within 250 km to obtain source time functions by inversion. The location of this event was simultaneously acquired using a stepwise refined grid search approach, with an error of 2.2 km. The estimated source time functions reveal that, according to the movement parameters, this landslide can be divided into three stages with different movement directions, velocities, and increasing inertial forces. The sliding mass moved northward, northeastward and northward in the three stages, with average velocities of 6.5, 20.3, and 13.8 m/s, respectively. The maximum movement velocity of the mass reached 35 m/s before the end of the second stage. The basal friction coefficients were relatively small in the first stage and gradually increasing; large in the second stage, accompanied by the largest variability; and oscillating and gradually decreasing to a stable value in the third stage. Analysis shows that the movement characteristics of these three stages are consistent with the topography of the sliding zone, corresponding to the northward initiation, eastward sliding after being stopped by the west wall, and northward debris flowing after collision with the east slope of the Tiejianggou valley. The maximum movement velocity of the sliding mass results from the largest height difference of the west slope of the Tiejianggou valley. The basal friction coefficients of the three stages represent the thin weak layer in the source zone, the dramatically varying topography of the west slope of the Tiejianggou valley, and the characteristics of the debris flow along the Tiejianggou valley. Based on the above results, it is recognized that the inverted source time functions are consistent with the topography of the sliding zone. Special geological and topographic conditions can have a focusing effect on landslides and are key factors in inducing the major disasters that may follow from them. This landslide was of an unusual nature, and it will be worthwhile to pursue research into its dynamic characteristics more deeply.
Rhythmic chaos: irregularities of computer ECG diagnosis.
Wang, Yi-Ting Laureen; Seow, Swee-Chong; Singh, Devinder; Poh, Kian-Keong; Chai, Ping
2017-09-01
Diagnostic errors can occur when physicians rely solely on computer electrocardiogram interpretation. Cardiologists often receive referrals for computer misdiagnoses of atrial fibrillation. Patients may have been inappropriately anticoagulated for pseudo atrial fibrillation. Anticoagulation carries significant risks, and such errors may carry a high cost. Have we become overreliant on machines and technology? In this article, we illustrate three such cases and briefly discuss how we can reduce these errors. Copyright: © Singapore Medical Association.
Fabbretti, G
2010-06-01
Because of its complex nature, surgical pathology practice is prone to error. In this report, we describe our methods for reducing error as much as possible during the pre-analytical and analytical phases. This was achieved by revising procedures, and by using computer technology and automation. Most mistakes are the result of human error in the identification and matching of patient and samples. To avoid faulty data interpretation, we employed a new comprehensive computer system that acquires all patient ID information directly from the hospital's database with a remote order entry; it also provides label and request forms via Web, where clinical information is required before sending the sample. Both patient and sample are identified directly and immediately at the site where the surgical procedures are performed. Barcode technology is used to input information at every step, and automation is used for sample blocks and slides to avoid errors that occur when information is recorded or transferred by hand. Quality control checks occur at every step of the process to ensure that none of the steps are left to chance and that no phase is dependent on a single operator. The system also provides statistical analysis of errors so that new strategies can be implemented to avoid repetition. In addition, the staff receives frequent training on avoiding errors and on new developments. The results have been promising, with a very low error rate (0.27%). None of these errors compromised patient health, and all were detected before release of the diagnosis report.
Hughes, Charmayne M L; Baber, Chris; Bienkiewicz, Marta; Worthington, Andrew; Hazell, Alexa; Hermsdörfer, Joachim
2015-01-01
Approximately 33% of stroke patients have difficulty performing activities of daily living, often committing errors during the planning and execution of such activities. The objective of this study was to evaluate the ability of the human error identification (HEI) technique SHERPA (Systematic Human Error Reduction and Prediction Approach) to predict errors during the performance of daily activities in stroke patients with left and right hemisphere lesions. Using SHERPA we successfully predicted 36 of the 38 observed errors, with analysis indicating that the proportion of predicted and observed errors was similar for all sub-tasks and severity levels. HEI results were used to develop compensatory cognitive strategies that clinicians could employ to reduce or prevent errors from occurring. This study provides evidence for the reliability and validity of SHERPA in the design of cognitive rehabilitation strategies in stroke populations.
Applying the intention-to-treat principle in practice: Guidance on handling randomisation errors
Sullivan, Thomas R; Voysey, Merryn; Lee, Katherine J; Cook, Jonathan A; Forbes, Andrew B
2015-01-01
Background: The intention-to-treat principle states that all randomised participants should be analysed in their randomised group. The implications of this principle are widely discussed in relation to the analysis, but have received limited attention in the context of handling errors that occur during the randomisation process. The aims of this article are to (1) demonstrate the potential pitfalls of attempting to correct randomisation errors and (2) provide guidance on handling common randomisation errors when they are discovered that maintains the goals of the intention-to-treat principle. Methods: The potential pitfalls of attempting to correct randomisation errors are demonstrated and guidance on handling common errors is provided, using examples from our own experiences. Results: We illustrate the problems that can occur when attempts are made to correct randomisation errors and argue that documenting, rather than correcting these errors, is most consistent with the intention-to-treat principle. When a participant is randomised using incorrect baseline information, we recommend accepting the randomisation but recording the correct baseline data. If ineligible participants are inadvertently randomised, we advocate keeping them in the trial and collecting all relevant data but seeking clinical input to determine their appropriate course of management, unless they can be excluded in an objective and unbiased manner. When multiple randomisations are performed in error for the same participant, we suggest retaining the initial randomisation and either disregarding the second randomisation if only one set of data will be obtained for the participant, or retaining the second randomisation otherwise. When participants are issued the incorrect treatment at the time of randomisation, we propose documenting the treatment received and seeking clinical input regarding the ongoing treatment of the participant. Conclusion: Randomisation errors are almost inevitable and should be reported in trial publications. The intention-to-treat principle is useful for guiding responses to randomisation errors when they are discovered. PMID:26033877
Applying the intention-to-treat principle in practice: Guidance on handling randomisation errors.
Yelland, Lisa N; Sullivan, Thomas R; Voysey, Merryn; Lee, Katherine J; Cook, Jonathan A; Forbes, Andrew B
2015-08-01
The intention-to-treat principle states that all randomised participants should be analysed in their randomised group. The implications of this principle are widely discussed in relation to the analysis, but have received limited attention in the context of handling errors that occur during the randomisation process. The aims of this article are to (1) demonstrate the potential pitfalls of attempting to correct randomisation errors and (2) provide guidance on handling common randomisation errors when they are discovered that maintains the goals of the intention-to-treat principle. The potential pitfalls of attempting to correct randomisation errors are demonstrated and guidance on handling common errors is provided, using examples from our own experiences. We illustrate the problems that can occur when attempts are made to correct randomisation errors and argue that documenting, rather than correcting these errors, is most consistent with the intention-to-treat principle. When a participant is randomised using incorrect baseline information, we recommend accepting the randomisation but recording the correct baseline data. If ineligible participants are inadvertently randomised, we advocate keeping them in the trial and collecting all relevant data but seeking clinical input to determine their appropriate course of management, unless they can be excluded in an objective and unbiased manner. When multiple randomisations are performed in error for the same participant, we suggest retaining the initial randomisation and either disregarding the second randomisation if only one set of data will be obtained for the participant, or retaining the second randomisation otherwise. When participants are issued the incorrect treatment at the time of randomisation, we propose documenting the treatment received and seeking clinical input regarding the ongoing treatment of the participant. Randomisation errors are almost inevitable and should be reported in trial publications. The intention-to-treat principle is useful for guiding responses to randomisation errors when they are discovered. © The Author(s) 2015.
Slow Learner Errors Analysis in Solving Fractions Problems in Inclusive Junior High School Class
NASA Astrophysics Data System (ADS)
Novitasari, N.; Lukito, A.; Ekawati, R.
2018-01-01
A slow learner, whose IQ is between 71 and 89, will have difficulties in solving mathematics problems, which often lead to errors. These errors can be analyzed to determine where they occur and what type they are. This is a qualitative descriptive study which aims to describe the locations, types, and causes of slow learner errors in an inclusive junior high school class when solving fraction problems. The subject of this research is one seventh-grade slow learner, selected through direct observation by the researcher and through discussion with the mathematics teacher and the special tutor who handles slow learner students. Data collection methods used in this study are written tasks and semi-structured interviews. The collected data were analyzed using Newman's Error Analysis (NEA). Results show that there are four locations of errors, namely comprehension, transformation, process skills, and encoding errors. There are four types of errors, namely concept, principle, algorithm, and counting errors. The results of this error analysis will help teachers identify the causes of the errors made by slow learners.
Microbial food-borne illnesses pose a significant health problem in Japan. In 1996 the world's largest outbreak of Escherichia coli food illness occurred in Japan. Since then, new regulatory measures were established, including strict hygiene practices in meat and food processi...
The 1991 eruptions of Mount Pinatubo, Philippines
Wolfe, Edward W.
1992-01-01
Recognition of the volcanic unrest at Mount Pinatubo in the Philippines began when steam explosions occurred on April 2, 1991. The unrest culminated ten weeks later in the world's largest eruption in more than half a century.
Legionnaires' Disease Outbreaks and Cooling Towers, New York City, New York, USA.
Fitzhenry, Robert; Weiss, Don; Cimini, Dan; Balter, Sharon; Boyd, Christopher; Alleyne, Lisa; Stewart, Renee; McIntosh, Natasha; Econome, Andrea; Lin, Ying; Rubinstein, Inessa; Passaretti, Teresa; Kidney, Anna; Lapierre, Pascal; Kass, Daniel; Varma, Jay K
2017-11-01
The incidence of Legionnaires' disease in the United States has been increasing since 2000. Outbreaks and clusters are associated with decorative, recreational, domestic, and industrial water systems, with the largest outbreaks being caused by cooling towers. Since 2006, 6 community-associated Legionnaires' disease outbreaks have occurred in New York City, resulting in 213 cases and 18 deaths. Three outbreaks occurred in 2015, including the largest on record (138 cases). Three outbreaks were linked to cooling towers by molecular comparison of human and environmental Legionella isolates, and the sources for the other 3 outbreaks were undetermined. The evolution of investigation methods and lessons learned from these outbreaks prompted enactment of a new comprehensive law governing the operation and maintenance of New York City cooling towers. Ongoing surveillance and program evaluation will determine if enforcement of the new cooling tower law reduces Legionnaires' disease incidence in New York City.
Legionnaires’ Disease Outbreaks and Cooling Towers, New York City, New York, USA
Fitzhenry, Robert; Cimini, Dan; Balter, Sharon; Boyd, Christopher; Alleyne, Lisa; Stewart, Renee; McIntosh, Natasha; Econome, Andrea; Lin, Ying; Rubinstein, Inessa; Passaretti, Teresa; Kidney, Anna; Lapierre, Pascal; Kass, Daniel; Varma, Jay K.
2017-01-01
The incidence of Legionnaires’ disease in the United States has been increasing since 2000. Outbreaks and clusters are associated with decorative, recreational, domestic, and industrial water systems, with the largest outbreaks being caused by cooling towers. Since 2006, 6 community-associated Legionnaires’ disease outbreaks have occurred in New York City, resulting in 213 cases and 18 deaths. Three outbreaks occurred in 2015, including the largest on record (138 cases). Three outbreaks were linked to cooling towers by molecular comparison of human and environmental Legionella isolates, and the sources for the other 3 outbreaks were undetermined. The evolution of investigation methods and lessons learned from these outbreaks prompted enactment of a new comprehensive law governing the operation and maintenance of New York City cooling towers. Ongoing surveillance and program evaluation will determine if enforcement of the new cooling tower law reduces Legionnaires’ disease incidence in New York City. PMID:29049017
NASA Astrophysics Data System (ADS)
Guilbert, Justin; Betts, Alan K.; Rizzo, Donna M.; Beckage, Brian; Bomblies, Arne
2015-03-01
We present evidence of increasing persistence in daily precipitation in the northeastern United States that suggests that global circulation changes are affecting regional precipitation patterns. Meteorological data from 222 stations in 10 northeastern states are analyzed using Markov chain parameter estimates to demonstrate that a significant mode of precipitation variability is the persistence of precipitation events. We find that the largest region-wide trend in wet persistence (i.e., the probability of precipitation on a given day, given precipitation on the preceding day) occurs in June (+0.9% probability per decade over all stations). We also find that the study region is experiencing an increase in the magnitude of high-intensity precipitation events. The largest increases in the 95th percentile of daily precipitation occurred in April with a trend of +0.7 mm/d/decade. We discuss the implications of the observed precipitation signals for watershed hydrology and flood risk.
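Wet persistence as defined above is a first-order Markov chain transition probability that can be estimated directly from a daily station record. A minimal sketch, assuming a daily precipitation series in millimetres; the wet-day threshold and function name are our choices, not the paper's:

```python
import numpy as np

def wet_persistence(daily_precip_mm, wet_threshold=0.2):
    """Estimate the Markov transition probability P(wet today | wet yesterday)
    from a daily precipitation series. Threshold choice is illustrative."""
    wet = np.asarray(daily_precip_mm, dtype=float) >= wet_threshold
    prev, curr = wet[:-1], wet[1:]
    wet_after_wet = np.sum(prev & curr)   # wet days followed by wet days
    wet_days = np.sum(prev)               # wet days with a following day
    return wet_after_wet / wet_days if wet_days else np.nan
```

Estimating this quantity per station and per month, then regressing against year, would give trends of the kind reported (e.g., +0.9% per decade in June).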
The characteristics on spatiotemporal variations of summer heatwaves in China
NASA Astrophysics Data System (ADS)
Qixiang, C.; Wang, L.; Wu, S., II; Li, Y.
2016-12-01
Summer heatwaves in China have impacts on forestry, agricultural resources, infrastructure, and heat-related illness and mortality. Based on daily air temperature and relative humidity from the Chinese Meteorological Data Sharing Service System, the spatial distribution and trends of the intensity, duration, and frequency of heatwaves in China during 1960-2015 were analyzed. Considering climatic variability, we defined a heatwave as a spell of consecutive days with maximum temperatures exceeding a relative threshold (a temperature percentile). We also consider an index combining hot days and tropical nights (CHT), and the humidity-corrected apparent temperature (AT), to analyze the health impacts of hot days in summer. This study shows that while the average frequency and duration of heatwaves have shown an increasing trend since the 1990s, the North China Plain shows a decreasing trend. This study also shows that the largest CHT values occur in southeast China, and the largest AT values occur in South China.
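Detecting heatwaves defined this way amounts to finding runs of consecutive days above a station-specific percentile threshold. A minimal sketch; the percentile and the minimum spell length are illustrative assumptions, not the study's exact settings:

```python
import numpy as np

def heatwave_events(tmax, percentile=90, min_days=3):
    """Find heatwave spells: runs of at least `min_days` consecutive days with
    daily maximum temperature above the station's percentile threshold.
    Returns (number_of_events, list_of_durations)."""
    tmax = np.asarray(tmax, dtype=float)
    threshold = np.percentile(tmax, percentile)
    hot = tmax > threshold

    durations, run = [], 0
    for flag in hot:
        if flag:
            run += 1
        else:
            if run >= min_days:
                durations.append(run)
            run = 0
    if run >= min_days:          # close a spell that ends the record
        durations.append(run)
    return len(durations), durations
```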
ERIC Educational Resources Information Center
Westerberg, Carmen E.; Hawkins, Christopher A.; Rendon, Lauren
2018-01-01
Reality-monitoring errors occur when internally generated thoughts are remembered as external occurrences. We hypothesized that sleep-dependent memory consolidation could reduce them by strengthening connections between items and their contexts during an afternoon nap. Participants viewed words and imagined their referents. Pictures of the…
New Statistical Techniques for Evaluating Longitudinal Models.
ERIC Educational Resources Information Center
Murray, James R.; Wiley, David E.
A basic methodological approach in developmental studies is the collection of longitudinal data. Behavioral data can take at least two forms, qualitative (or discrete) and quantitative. Both types are fallible. Measurement errors can occur in quantitative data and measures of these are based on error variance. Qualitative or discrete data can…
INCREASING THE ACCURACY OF MAYFIELD ESTIMATES USING KNOWLEDGE OF NEST AGE
This presentation will focus on the error introduced in nest-survival modeling when nest-cycles are assumed to be of constant length. I will present the types of error that may occur, including biases resulting from incorrect estimates of expected values, as well as biases that o...
75 FR 20603 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-20
... requires Federal agencies to annually report error rate measures. Section 2 of the Improper Payments... CFR part 98 requires preparation and submission of a report of errors occurring in the administration... will be used to prepare the annual Agency Financial Report (AFR) and will provide information necessary...
77 FR 35682 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-14
... requires Federal agencies to annually report error rate measures. Section 2 of the Improper Payments... CFR, Part 98 requires preparation and submission of a report of errors occurring in the administration... will be used to prepare the annual Agency Financial Report (AFR) and will provide information necessary...
NASA Astrophysics Data System (ADS)
Yeh, Ta-Kang; Hong, Jing-Shan; Wang, Chuan-Sheng; Chen, Chieh-Hung; Chen, Kwo-Hwa; Fong, Chin-Tzu
2016-06-01
Water vapor plays an important role in weather prediction. Thus, it would be helpful to use precipitable water vapor (PWV) data from Global Positioning System (GPS) signals to understand weather phenomena. Approximately 100 ground GPS stations, together with approximately 500 ground weather stations, were used in this study. The relationship between PWV and rainfall was investigated by analyzing the amplitude and phase that resulted from harmonic analyses. The results indicated that the maximum PWV amplitudes were between 10.98 and 13.10 mm and always occurred at the end of July. The magnitudes of the PWV growth rate were between 0.65 and 0.81 mm/yr. These rates increased from 9.2% to 13.0% between 2006 and 2011. The largest peak PWV amplitude occurred in the Western region, whereas the largest rainfall amplitude occurred in the Southern region. The peak rainfall time agreed with the peak PWV time in the Western, Southern, and Central Mountain regions. Although rainfall decreased with time in Taiwan, this decrease was not large. The greatest rainfall consistently occurred during the months in which typhoons occurred, and the greatest PWV values occurred at the end of July. Although the end of July had the greatest monthly average PWV values, the rainfall magnitude during this period was smaller than that during typhoons, which only occurred for a few days; the PWV also increased during typhoons. Because this effect was short-term, it did not significantly contribute to the PWV monthly average.
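The amplitudes and phases quoted here come from harmonic analysis of the seasonal cycle. A minimal least-squares sketch fitting a single annual harmonic plus a linear trend; the one-harmonic model and variable names are our simplifications, not the study's full analysis:

```python
import numpy as np

def annual_harmonic(t_years, pwv_mm):
    """Fit pwv = a0 + a1*t + A*cos(2*pi*t) + B*sin(2*pi*t) by least squares.
    Returns the annual amplitude (mm), the time of the annual maximum as a
    fraction of the year, and the linear trend a1 (mm per year)."""
    t = np.asarray(t_years, dtype=float)
    y = np.asarray(pwv_mm, dtype=float)
    X = np.column_stack([np.ones_like(t), t,
                         np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
    a0, a1, A, B = np.linalg.lstsq(X, y, rcond=None)[0]

    amplitude = np.hypot(A, B)
    # A*cos(theta) + B*sin(theta) = R*cos(theta - phi), peak at phi/(2*pi)
    peak_time = (np.arctan2(B, A) / (2 * np.pi)) % 1.0
    return amplitude, peak_time, a1
```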
Perceptual learning in children with visual impairment improves near visual acuity.
Huurneman, Bianca; Boonstra, F Nienke; Cox, Ralf F A; van Rens, Ger; Cillessen, Antonius H N
2013-09-17
This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four- to nine-year-old children with visual impairment. Participants were 45 children with visual impairment and 29 children with normal vision. Children with visual impairment were divided into three groups: a magnifier group (n = 12), a crowded perceptual learning group (n = 18), and an uncrowded perceptual learning group (n = 15). Children with normal vision were also divided into three groups, but were measured only at baseline. Dependent variables were single near visual acuity (NVA), crowded NVA, LH line 50% crowding NVA, number of trials, accuracy, performance time, number of small errors, and number of large errors. Children with visual impairment trained for six weeks, twice per week, for 30 minutes per session (12 training sessions). After training, children showed significant improvement of NVA in addition to specific improvements on the training task. The crowded perceptual learning group showed the largest acuity improvements (1.7 logMAR lines on the crowded chart, P < 0.001). Only the children in the crowded perceptual learning group showed improvements on all NVA charts. Children with visual impairment benefit from perceptual training. While task-specific improvements were observed in all training groups, transfer to crowded NVA was largest in the crowded perceptual learning group. To our knowledge, this is the first study to provide evidence for the improvement of NVA by perceptual learning in children with visual impairment. (http://www.trialregister.nl number, NTR2537.).
Splash-cup plants accelerate raindrops to disperse seeds
Amador, Guillermo J.; Yamada, Yasukuni; McCurley, Matthew; Hu, David L.
2013-01-01
The conical flowers of splash-cup plants Chrysosplenium and Mazus catch raindrops opportunistically, exploiting the subsequent splash to disperse their seeds. In this combined experimental and theoretical study, we elucidate their mechanism for maximizing dispersal distance. We fabricate conical plant mimics using three-dimensional printing, and use high-speed video to visualize splash profiles and seed travel distance. Drop impacts that strike the cup off-centre achieve the largest dispersal distances of up to 1 m. Such distances are achieved because splash speeds are three to five times faster than incoming drop speeds, and so faster than the traditionally studied splashes occurring upon horizontal surfaces. This anomalous splash speed is because of the superposition of two components of momentum, one associated with a component of the drop's motion parallel to the splash-cup surface, and the other associated with film spreading induced by impact with the splash-cup. Our model incorporating these effects predicts the observed dispersal distance within 6–18% error. According to our experiments, the optimal cone angle for the splash-cup is 40°, a value consistent with the average of five species of splash-cup plants. This optimal angle arises from the competing effects of velocity amplification and projectile launching angle. PMID:23235266
Analysis of Seasonal Variability in Gulf of Alaska Glacier Mass Balance using GRACE
NASA Astrophysics Data System (ADS)
Arendt, A. A.; Luthcke, S. B.; Oneel, S.; Gardner, A. S.; Hill, D. F.
2011-12-01
Mass variations of glaciers in Alaska/northwestern Canada must be quantified in order to assess impacts on ecosystems, human infrastructure, and global sea level. Here we combine Gravity Recovery and Climate Experiment (GRACE) observations with a wide range of satellite and field data to investigate drivers of these recent changes, with a focus on seasonal variations. Our central focus will be the exceptionally high mass losses of 2009, which do not correlate with weather station temperature and precipitation data, but may be linked to ash fall from the March 31, 2009 eruption of Mt. Redoubt. The eruption resulted in a significant decrease in MODIS-derived surface albedo over many Alaska glacier regions, and likely contributed to some of the 2009 anomalous mass loss observed by GRACE. We also focus on the Juneau and Stikine Icefield regions that are far from the volcanic eruption but experienced the largest mass losses of any region in 2009. Although rapid drawdown of tidewater glaciers was occurring in southeast Alaska during 2009, we show these changes were probably not sufficiently widespread to explain all of the GRACE signal in those regions. We examine additional field and satellite datasets to quantify potential errors in the climate and GRACE fields that could result in the observed discrepancy.
Taylor, C; Parker, J; Stratford, J; Warren, M
2018-05-01
Although all systematic and random positional setup errors can be corrected for in their entirety during on-line image-guided radiotherapy, the use of a specified action level, below which no correction occurs, is also an option. The following service evaluation aimed to investigate the use of a 3 mm action level for on-line image assessment and correction (online, systematic set-up error and weekly evaluation) for lower extremity sarcoma, and to understand the impact on imaging frequency and patient positioning error within one cancer centre. All patients were immobilised using a thermoplastic shell attached to a plastic base and an individually moulded footrest. A retrospective analysis of 30 patients was performed. Patient setup and correction data derived from cone beam CT analysis were retrieved. The timing, frequency and magnitude of corrections were evaluated. The population systematic and random errors were derived. 20% of patients had no systematic corrections over the duration of treatment, and 47% had one. The maximum number of systematic corrections per course of radiotherapy was 4, which occurred for 2 patients. 34% of correction episodes occurred within the first 5 fractions. All patients had at least one observed translational error during their treatment greater than 0.3 cm, and 80% of patients had at least one observed translational error during their treatment greater than 0.5 cm. The population systematic error was 0.14 cm, 0.10 cm, 0.14 cm and the random error was 0.27 cm, 0.22 cm, 0.23 cm in the lateral, caudocranial and anteroposterior directions. The required Planning Target Volume margin for the study population was 0.55 cm, 0.41 cm and 0.50 cm in the lateral, caudocranial and anteroposterior directions. The 3 mm action level for image assessment and correction prior to delivery reduced the imaging burden and focussed intervention on patients that exhibited greater positional variability. This strategy could be an efficient deployment of departmental resources if full daily correction of positional setup error is not possible. Copyright © 2017. Published by Elsevier Ltd.
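The quoted margins are consistent with the widely used van Herk margin recipe M = 2.5Σ + 0.7σ applied to the population systematic (Σ) and random (σ) errors; that this exact recipe was used is our assumption, but it reproduces the published values to within rounding. A worked check:

```python
def ptv_margin(systematic_cm, random_cm):
    """Van Herk-style PTV margin M = 2.5*Sigma + 0.7*sigma, in cm.
    Assumed recipe; see lead-in for the caveat."""
    return 2.5 * systematic_cm + 0.7 * random_cm

for axis, sigma_sys, sigma_rand in [("lateral", 0.14, 0.27),
                                    ("caudocranial", 0.10, 0.22),
                                    ("anteroposterior", 0.14, 0.23)]:
    print(axis, round(ptv_margin(sigma_sys, sigma_rand), 2))
# Prints roughly 0.54, 0.40 and 0.51 cm, close to the quoted
# 0.55, 0.41 and 0.50 cm once rounding of Sigma and sigma is allowed for.
```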
Tailoring a Human Reliability Analysis to Your Industry Needs
NASA Technical Reports Server (NTRS)
DeMott, D. L.
2016-01-01
Accidents caused by human error can have catastrophic consequences across many industries: airline mishaps, medical malpractice, medication mistakes, aerospace failures, major oil spills, transportation mishaps, power production failures and manufacturing facility incidents. Human Reliability Assessment (HRA) is used to analyze the inherent risk of human behavior or actions introducing errors into the operation of a system or process. These assessments can be used to identify where errors are most likely to arise and the potential risks involved if they do occur. Using the basic concepts of HRA, an evolving group of methodologies are used to meet various industry needs. Determining which methodology or combination of techniques will provide a quality human reliability assessment is a key element in developing effective strategies for understanding and dealing with risks caused by human errors. There are a number of concerns and difficulties in "tailoring" a Human Reliability Assessment (HRA) for different industries. Although a variety of HRA methodologies are available to analyze human error events, determining the most appropriate tools to provide the most useful results can depend on industry-specific cultures and requirements. Methodology selection may be based on a variety of factors that include: 1) how people act and react in different industries, 2) expectations based on industry standards, 3) factors that influence how the human errors could occur such as tasks, tools, environment, workplace, support, training and procedure, 4) type and availability of data, 5) how the industry views risk & reliability, and 6) types of emergencies, contingencies and routine tasks. Other considerations for methodology selection should be based on what information is needed from the assessment. If the principal concern is determination of the primary risk factors contributing to the potential human error, a more detailed analysis method may be employed versus a requirement to provide a numerical value as part of a probabilistic risk assessment. Industries involved with humans operating large equipment or transport systems (e.g. railroads or airlines) would have more need to address the man-machine interface than medical workers administering medications. Human error occurs in every industry; in most cases the consequences are relatively benign and occasionally beneficial. In cases where the results can have disastrous consequences, the use of Human Reliability techniques to identify and classify the risk of human errors allows a company more opportunities to mitigate or eliminate these types of risks and prevent costly tragedies.
2016-01-01
Objectives To assess why articles are retracted from BioMed Central journals, whether retraction notices adhered to the Committee on Publication Ethics (COPE) guidelines, and are becoming more frequent as a proportion of published articles. Design/setting Retrospective cross-sectional analysis of 134 retractions from January 2000 to December 2015. Results 134 retraction notices were published during this timeframe. Although they account for 0.07% of all articles published (190 514 excluding supplements, corrections, retractions and commissioned content), the rate of retraction is rising. COPE guidelines on retraction were adhered to in that an explicit reason for each retraction was given. However, some notices did not document who retracted the article (eight articles, 6%) and others were unclear whether the underlying cause was honest error or misconduct (15 articles, 11%). The largest proportion of notices was issued by the authors (47 articles, 35%). The majority of retractions were due to some form of misconduct (102 articles, 76%), that is, compromised peer review (44 articles, 33%), plagiarism (22 articles, 16%) and data falsification/fabrication (10 articles, 7%). Honest error accounted for 17 retractions (13%) of which 10 articles (7%) were published in error. The median number of days from publication to retraction was 337.5 days. Conclusions The most common reason to retract was compromised peer review. However, the majority of these cases date to March 2015 and appear to be the result of a systematic attempt to manipulate peer review across several publishers. Retractions due to plagiarism account for the second largest category and may be reduced by screening manuscripts before publication although this is not guaranteed. Retractions due to problems with the data may be reduced by appropriate data sharing and deposition before publication. Adopting a checklist (linked to COPE guidelines) and templates for various classes of retraction notices would increase transparency of retraction notices in future. PMID:27881524
An a priori model for the reduction of nutation observations: KSV(1994.3) nutation series
NASA Technical Reports Server (NTRS)
Herring, T. A.
1995-01-01
We discuss the formulation of a new nutation series to be used in the reduction of modern space geodetic data. The motivation for developing such a series is to produce a nutation series that has smaller short-period errors than the IAU 1980 nutation series and to provide a series that can be used with techniques such as the Global Positioning System (GPS) that have sensitivity to nutations but cannot directly separate the effects of nutations from errors in the dynamical force models that affect the satellite orbits. A modern nutation series should allow the errors in the force models for GPS to be better understood. The series is constructed by convolving the Kinoshita and Souchay rigid Earth nutation series with an Earth response function whose parameters are partly based on geophysical models of the Earth and partly estimated from a long series (1979-1993) of very long baseline interferometry (VLBI) estimates of nutation angles. Secular rates of change of the nutation angles, representing corrections to the precession constant and a secular change of the obliquity of the ecliptic, are included in the theory. Time-dependent amplitudes of the Free Core Nutation (FCN), which is most likely excited by variations in atmospheric pressure, are included when the geophysical parameters are estimated. The complex components of the prograde annual nutation are estimated simultaneously with the geophysical parameters because of the large contribution to the nutation from the S1 atmospheric tide. The weighted root mean square (WRMS) scatter of the nutation angle estimates about this new model is 0.32 mas, and the largest correction to the series when the amplitudes of the ten largest nutations are estimated is 0.18 ± 0.03 mas for the in-phase component of the prograde 18.6 year nutation.
Quantification of confounding factors in MRI-based dose calculations as applied to prostate IMRT
NASA Astrophysics Data System (ADS)
Maspero, Matteo; Seevinck, Peter R.; Schubert, Gerald; Hoesl, Michaela A. U.; van Asselen, Bram; Viergever, Max A.; Lagendijk, Jan J. W.; Meijer, Gert J.; van den Berg, Cornelis A. T.
2017-02-01
Magnetic resonance (MR)-only radiotherapy treatment planning requires pseudo-CT (pCT) images to enable MR-based dose calculations. To verify the accuracy of MR-based dose calculations, institutions interested in introducing MR-only planning will have to compare pCT-based and computed tomography (CT)-based dose calculations. However, interpreting such comparison studies may be challenging, since potential differences arise from a range of confounding factors which are not necessarily specific to MR-only planning. Therefore, the aim of this study is to identify and quantify the contribution of factors confounding dosimetric accuracy estimation in comparison studies between CT and pCT. The following factors were distinguished: set-up and positioning differences between imaging sessions, MR-related geometric inaccuracy, pCT generation, use of specific calibration curves to convert pCT into electron density information, and registration errors. The study comprised fourteen prostate cancer patients who underwent CT/MRI-based treatment planning. To enable pCT generation, a commercial solution (MRCAT, Philips Healthcare, Vantaa, Finland) was adopted. IMRT plans were calculated on CT (gold standard) and pCTs. Dose difference maps in a high dose region (CTV) and in the body volume were evaluated, and the contribution to dose errors of possible confounding factors was individually quantified. We found that the largest confounding factor leading to dose difference was the use of different calibration curves to convert pCT and CT into electron density (0.7%). The second largest factor was the pCT generation, which resulted in a pCT stratified into a fixed number of tissue classes (0.16%). Inter-scan differences due to patient repositioning, MR-related geometric inaccuracy, and registration errors did not significantly contribute to dose differences (0.01%). The proposed approach successfully identified and quantified the factors confounding accurate MRI-based dose calculation in the prostate. This study will be valuable for institutions interested in introducing MR-only dose planning in their clinical practice.
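Each confounding factor's contribution is ultimately expressed as a dose difference between the CT-based and pCT-based calculations inside a structure. A minimal sketch of a masked mean relative dose difference (array and function names are ours; the study evaluates full difference maps rather than a single summary number):

```python
import numpy as np

def mean_relative_dose_difference(dose_ct, dose_pct, mask):
    """Mean relative dose difference (%) between the CT-based (gold standard)
    and pseudo-CT-based dose grids, evaluated inside a boolean structure mask
    such as the CTV, normalised by the mean CT dose inside that mask."""
    dose_ct = np.asarray(dose_ct, dtype=float)
    dose_pct = np.asarray(dose_pct, dtype=float)
    diff = (dose_pct[mask] - dose_ct[mask]) / dose_ct[mask].mean() * 100.0
    return diff.mean()
```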
NASA Technical Reports Server (NTRS)
Lemoine, Frank G.; Rowlands, David D.; Luthcke, Scott B.; Zelensky, Nikita P.; Chinn, Douglas S.; Pavlis, Despina E.; Marr, Gregory
2001-01-01
The US Navy's GEOSAT Follow-On Spacecraft was launched on February 10, 1998 with the primary objective of the mission to map the oceans using a radar altimeter. Following an extensive set of calibration campaigns in 1999 and 2000, the US Navy formally accepted delivery of the satellite on November 29, 2000. Satellite laser ranging (SLR) and Doppler (Tranet-style) beacons track the spacecraft. Although limited amounts of GPS data were obtained, the primary mode of tracking remains satellite laser ranging. The GFO altimeter measurements are highly precise, with orbit error the largest component in the error budget. We have tuned the non-conservative force model for GFO and the gravity model using SLR, Doppler and altimeter crossover data sampled over one year. Gravity covariance projections to 70x70 show the radial orbit error on GEOSAT was reduced from 2.6 cm in EGM96 to 1.3 cm with the addition of SLR, GFO/GFO and TOPEX/GFO crossover data. Evaluation of the gravity fields using SLR and crossover data support the covariance projections and also show a dramatic reduction in geographically-correlated error for the tuned fields. In this paper, we report on progress in orbit determination for GFO using GFO/GFO and TOPEX/GFO altimeter crossovers. We will discuss improvements in satellite force modeling and orbit determination strategy, which allows reduction in GFO radial orbit error from 10-15 cm to better than 5 cm.
System care improves trauma outcome: patient care errors dominate reduced preventable death rate.
Thoburn, E; Norris, P; Flores, R; Goode, S; Rodriguez, E; Adams, V; Campbell, S; Albrink, M; Rosemurgy, A
1993-01-01
A review of 452 trauma deaths in Hillsborough County, Florida, in 1984 documented that 23% of non-CNS trauma deaths were preventable and occurred because of inadequate resuscitation or delay in proper surgical care. In late 1988 Hillsborough County organized a County Trauma Agency (HCTA) to coordinate trauma care among prehospital providers and state-designated trauma centers. The purpose of this study was to review county trauma deaths after the inception of the HCTA to determine the frequency of preventable deaths. 504 trauma deaths occurring between October 1989 and April 1991 were reviewed. Through committee review, 10 deaths were deemed preventable; 2 occurred outside the trauma system. Of the 10 deaths, 5 preventable deaths occurred late in severely injured patients. The preventable death rate has decreased to 7.0% with system care. The causes of preventable deaths have changed from delayed or inadequate intervention to postoperative care errors.
Error-associated behaviors and error rates for robotic geology
NASA Technical Reports Server (NTRS)
Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin
2004-01-01
This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill-based decisions require the least cognitive effort and knowledge-based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.
Catastrophic lava dome failure at Soufrière Hills Volcano, Montserrat, 12-13 July 2003
Herd, Richard A.; Edmonds, Marie; Bass, Venus A.
2005-01-01
The lava dome collapse of 12–13 July 2003 was the largest of the Soufrière Hills Volcano eruption thus far (1995–2005) and the largest recorded in historical times from any volcano; 210 million m³ of dome material collapsed over 18 h and formed large pyroclastic flows, which reached the sea. The evolution of the collapse can be interpreted with reference to the complex structure of the lava dome, which comprised discrete spines and shear lobes and an apron of talus. Progressive slumping of talus for 10 h at the beginning of the collapse generated low-volume pyroclastic flows. It undermined the massive part of the lava dome and eventually prompted catastrophic failure. From 02:00 to 04:40 13 July 2003 large pyroclastic flows were generated; these reached their largest magnitude at 03:35, when the volume flux of material lost from the lava dome probably approached 16 million m³ over two minutes. The high flux of pyroclastic flows into the sea caused a tsunami and a hydrovolcanic explosion with an associated pyroclastic surge, which flowed inland. A vulcanian explosion occurred during or immediately after the largest pyroclastic flows at 03:35 13 July and four further explosions occurred at progressively longer intervals during 13–15 July 2003. The dome collapse lasted approximately 18 h, but 170 of the total 210 million m³ was removed in only 2.6 h during the most intense stage of the collapse.
Effects of extended work shifts and shift work on patient safety, productivity, and employee health.
Keller, Simone M
2009-12-01
It is estimated 1.3 million health care errors occur each year and of those errors 48,000 to 98,000 result in the deaths of patients (Barger et al., 2006). Errors occur for a variety of reasons, including the effects of extended work hours and shift work. The need for around-the-clock staff coverage has resulted in creative ways to maintain quality patient care, keep health care errors or adverse events to a minimum, and still meet the needs of the organization. One way organizations have attempted to alleviate staff shortages is to create extended work shifts. Instead of the standard 8-hour shift, workers are now working 10, 12, 16, or more hours to provide continuous patient care. Although literature does support these staffing patterns, it cannot be denied that shifts beyond the traditional 8 hours increase staff fatigue, health care errors, and adverse events and outcomes and decrease alertness and productivity. This article includes a review of current literature on shift work, the definition of shift work, error rates and adverse outcomes related to shift work, health effects on shift workers, shift work effects on older workers, recommended optimal shift length, positive and negative effects of shift work on the shift worker, hazards associated with driving after extended shifts, and implications for occupational health nurses. Copyright 2009, SLACK Incorporated.
Koetsier, Antonie; Peek, Niels; de Keizer, Nicolette
2012-01-01
Errors may occur in the registration of in-hospital mortality, making it less reliable as a quality indicator. We assessed the types of errors made in in-hospital mortality registration in the clinical quality registry National Intensive Care Evaluation (NICE) by comparing its mortality data to data from a national insurance claims database. Subsequently, we performed site visits at eleven Intensive Care Units (ICUs) to investigate the number, types and causes of errors made in in-hospital mortality registration. A total of 255 errors were found in the NICE registry. Two different types of software malfunction accounted for almost 80% of the errors. The remaining 20% were five types of manual transcription errors and human failures to record outcome data. Clinical registries should be aware of the possible existence of errors in recorded outcome data and understand their causes. In order to prevent errors, we recommend to thoroughly verify the software that is used in the registration process.
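The first step described above, comparing registry mortality against an external claims database, reduces to a record-level join and a count of disagreements. A minimal sketch; the patient identifier and outcome field names are hypothetical, not those of the NICE registry:

```python
import pandas as pd

def mortality_mismatches(registry: pd.DataFrame, claims: pd.DataFrame) -> pd.DataFrame:
    """Join registry and claims records on a patient identifier and return
    the admissions where the recorded in-hospital mortality disagrees.
    Column names (patient_id, died_in_hospital) are illustrative."""
    merged = registry.merge(claims, on="patient_id",
                            suffixes=("_registry", "_claims"))
    return merged[merged["died_in_hospital_registry"]
                  != merged["died_in_hospital_claims"]]
```

Each mismatched record would then be reviewed on site to classify the cause, for example software malfunction versus manual transcription error.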
The largest glitch observed in the Crab pulsar
NASA Astrophysics Data System (ADS)
Shaw, B.; Lyne, A. G.; Stappers, B. W.; Weltevrede, P.; Bassa, C. G.; Lien, A. Y.; Mickaliger, M. B.; Breton, R. P.; Jordan, C. A.; Keith, M. J.; Krimm, H. A.
2018-05-01
We have observed a large glitch in the Crab pulsar (PSR B0531+21). The glitch occurred around MJD 58064 (2017 November 8) when the pulsar underwent an increase in the rotation rate of Δν = 1.530 × 10⁻⁵ Hz, corresponding to a fractional increase of Δν/ν = 0.516 × 10⁻⁶, making this event the largest glitch ever observed in this source. Due to our high-cadence and long-dwell-time observations of the Crab pulsar we are able to partially resolve a fraction of the total spin-up of the star. This delayed spin-up occurred over a timescale of ~1.7 days and is similar to the behaviour seen in the 1989 and 1996 large Crab pulsar glitches. The spin-down rate also increased at the glitch epoch by Δν̇/ν̇ = 7 × 10⁻³. In addition to being the largest such event observed in the Crab, the glitch occurred after the longest period of glitch inactivity since at least 1984 and we discuss a possible relationship between glitch size and waiting time. No changes to the shape of the pulse profile were observed near the glitch epoch at 610 MHz or 1520 MHz, nor did we identify any changes in the X-ray flux from the pulsar. The long-term recovery from the glitch continues to progress as ν̇ slowly rises towards pre-glitch values. In line with other large Crab glitches, we expect there to be a persistent change to ν̇. We continue to monitor the long-term recovery with frequent, high quality observations.
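As a quick consistency check (ours, not from the paper), the quoted frequency step and fractional size together imply the Crab's spin frequency at the glitch epoch:

\[
\nu = \frac{\Delta\nu}{\Delta\nu/\nu} = \frac{1.530\times10^{-5}\ \mathrm{Hz}}{0.516\times10^{-6}} \approx 29.7\ \mathrm{Hz},
\]

which matches the Crab pulsar's well-known rotation period of roughly 33.7 ms (1/29.7 Hz ≈ 0.034 s).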
2013-01-01
Objectives Health information technology (HIT) research findings suggested that new healthcare technologies could reduce some types of medical errors while at the same time introducing new classes of medical errors (i.e., technology-induced errors). Technology-induced errors have their origins in HIT, and/or HIT contributes to their occurrence. The objective of this paper is to review current trends in the published literature on HIT safety. Methods A review and synthesis of the medical and life sciences literature focusing on the area of technology-induced error was conducted. Results There were four main trends in the literature on technology-induced error. The following areas were addressed in the literature: definitions of technology-induced errors; models, frameworks and evidence for understanding how technology-induced errors occur; a discussion of monitoring; and methods for preventing and learning about technology-induced errors. Conclusions The literature focusing on technology-induced errors continues to grow. Research has focused on defining what an error is, the models and frameworks used to understand these new types of errors, the monitoring of such errors, and methods that can be used to prevent these errors. More research will be needed to better understand and mitigate these types of errors. PMID:23882411
Using walker during walking: a pilot study for health elder.
Po-Chan, Yeh; Cherng-Yee, Leung
2012-01-01
Walker operation relies completely on the walker handles; however, most marketed walkers have two horizontal handles. Several researchers have suggested that horizontal handles might lead to wrist injury. Therefore, the purpose of this study was to assess the relevant design aspects of walkers for elderly people. Twenty-eight elders participated in this study; each subject walked a 3-meter distance on tile twice while using the walker. Data for analysis were selected at the corresponding wrist deviation and vertical force. The results showed that during walker use, the mean wrist deviation was greater than zero. The largest vertical force was significantly larger than the smallest one, and different wrist deviations occurred at three phases: the largest wrist deviation while raising the walker was larger than the smallest one; however, no significant difference was found between the largest and smallest wrist deviation while pressing the walker. No significant correlation occurred between weight and wrist deviation. The correlation between weight and vertical force was significantly positive. Walker use with wrist deviation may cause injury to the upper limb, whereas keeping the wrists in a neutral position during hand movement prevents damage. The findings of this study should improve the design of walker handles to reduce the wrist deviations of users.
Flow-Centric, Back-in-Time Debugging
NASA Astrophysics Data System (ADS)
Lienhard, Adrian; Fierz, Julien; Nierstrasz, Oscar
Conventional debugging tools present developers with means to explore the run-time context in which an error has occurred. In many cases this is enough to help the developer discover the faulty source code and correct it. However, rather often errors occur due to code that has executed in the past, leaving certain objects in an inconsistent state. The actual run-time error only occurs when these inconsistent objects are used later in the program. So-called back-in-time debuggers help developers step back through earlier states of the program and explore execution contexts not available to conventional debuggers. Nevertheless, even Back-in-Time Debuggers do not help answer the question, “Where did this object come from?” The Object-Flow Virtual Machine, which we have proposed in previous work, tracks the flow of objects to answer precisely such questions, but this VM does not provide dedicated debugging support to explore faulty programs. In this paper we present a novel debugger, called Compass, to navigate between conventional run-time stack-oriented control flow views and object flows. Compass enables a developer to effectively navigate from an object contributing to an error back-in-time through all the code that has touched the object. We present the design and implementation of Compass, and we demonstrate how flow-centric, back-in-time debugging can be used to effectively locate the source of hard-to-find bugs.
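Compass and the Object-Flow VM are Smalltalk artifacts; purely to illustrate the underlying idea of recording where an object came from so that its history can be consulted when an error finally surfaces, here is a minimal, hypothetical Python sketch (the registry and function names are invented for illustration and are not the Compass API):

```python
# Minimal, hypothetical sketch of object-flow recording (not the Compass/Object-Flow VM API).
# Each tracked object gets an append-only history of the code locations that produced or touched it.
import inspect

_flows = {}  # id(obj) -> list of "event @ file:line" strings

def track(obj, event):
    """Record that `event` touched `obj` at the caller's source location."""
    frame = inspect.stack()[1]
    _flows.setdefault(id(obj), []).append(f"{event} @ {frame.filename}:{frame.lineno}")
    return obj

def flow_of(obj):
    """Answer 'where did this object come from?' for a tracked object."""
    return _flows.get(id(obj), ["<no recorded flow>"])

# Usage: follow a value that later causes an error back to its origin.
order = track({"total": None}, "created")
order["total"] = -1            # the past defect: an inconsistent state
track(order, "total set")
try:
    assert order["total"] >= 0  # the visible run-time error occurs much later
except AssertionError:
    print("\n".join(flow_of(order)))
```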
DOE Office of Scientific and Technical Information (OSTI.GOV)
Welsh, Lillian; Tanguay, Robert L.; Svoboda, Kurt R.
Zebrafish embryos offer a unique opportunity to investigate the mechanisms by which nicotine exposure impacts early vertebrate development. Embryos exposed to nicotine become functionally paralyzed by 42 hpf, suggesting that the neuromuscular system is compromised in exposed embryos. We previously demonstrated that secondary spinal motoneurons in nicotine-exposed embryos were delayed in development and that their axons made pathfinding errors (Svoboda, K.R., Vijayaraghaven, S., Tanguay, R.L., 2002. Nicotinic receptors mediate changes in spinal motoneuron development and axonal pathfinding in embryonic zebrafish exposed to nicotine. J. Neurosci. 22, 10731-10741). In that study, we did not consider the potential role that altered skeletal muscle development caused by nicotine exposure could play in contributing to the errors in spinal motoneuron axon pathfinding. In this study, we show that an alteration in skeletal muscle development occurs in tandem with alterations in spinal motoneuron development upon exposure to nicotine. The alteration in the muscle involves the binding of nicotine to the muscle-specific AChRs. The nicotine-induced alteration in muscle development does not occur in the zebrafish mutant (sofa potato, [sop]), which lacks muscle-specific AChRs. Even though muscle development is unaffected by nicotine exposure in sop mutants, motoneuron axonal pathfinding errors still occur in these mutants, indicating a direct effect of nicotine exposure on nervous system development.
In-Bed Accountability Development for a Passively Cooled, Electrically Heated Hydride (PACE) Bed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klein, J.E.
A nominal 1500 STP-L PAssively Cooled, Electrically heated hydride (PACE) Bed has been developed for implementation into a new Savannah River Site tritium project. The 1.2 meter (four-foot) long process vessel contains an internal 'U-tube' for tritium In-Bed Accountability (IBA) measurements. IBA will be performed on six, 12.6 kg production metal hydride storage beds. IBA tests were done on a prototype bed using electric heaters to simulate the radiolytic decay of tritium. Tests had gas flows from 10 to 100 SLPM through the U-tube or 100 SLPM through the bed's vacuum jacket. IBA inventory measurement errors at the 95% confidence level were calculated using the correlation of IBA gas temperature rise, or (hydride) bed temperature rise above ambient temperature, versus simulated tritium inventory. Prototype bed IBA inventory errors at 100 SLPM were the largest for gas flows through the vacuum jacket: 15.2 grams for the bed temperature rise and 11.5 grams for the gas temperature rise. For a 100 SLPM U-tube flow, the inventory error was 2.5 grams using bed temperature rise and 1.6 grams using gas temperature rise. For 50 to 100 SLPM U-tube flows, the IBA gas temperature rise inventory errors were nominally one to two grams, increasing above four grams for flows less than 50 SLPM. For 50 to 100 SLPM U-tube flows, the IBA bed temperature rise inventory errors were greater than the gas temperature rise errors, but similar errors were found for both methods at gas flows of 20, 30, and 40 SLPM. Electric heater IBA tests were done for six production hydride beds using a 45 SLPM U-tube gas flow. Of the duplicate runs performed on these beds, five of the six beds produced IBA inventory errors of approximately three grams, consistent with results obtained in the laboratory prototype tests.
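The IBA approach described above amounts to a linear calibration of temperature rise against known (simulated) inventory, with the inventory error taken from the scatter about that calibration. A minimal sketch of that idea follows, using entirely made-up numbers rather than the report's data:

```python
# Hypothetical IBA-style calibration sketch: fit gas temperature rise vs. simulated
# tritium inventory, then report a ~95% inventory uncertainty from the residual scatter.
import numpy as np

inventory_g = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])    # simulated inventory (grams) - made up
delta_T = np.array([0.05, 2.1, 4.0, 6.2, 7.9, 10.1])            # gas temperature rise (K) - made up

slope, intercept = np.polyfit(inventory_g, delta_T, 1)           # calibration line dT = slope*I + intercept
predicted_inventory = (delta_T - intercept) / slope              # invert the calibration
residuals_g = predicted_inventory - inventory_g
error_95_g = 1.96 * residuals_g.std(ddof=2)                      # rough 95% inventory error (grams)
print(f"calibration: dT = {slope:.3f}*I + {intercept:.3f}; ~95% inventory error = {error_95_g:.2f} g")
```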
In-Bed Accountability Development for a Passively Cooled, Electrically Heated Hydride (PACE) Bed
DOE Office of Scientific and Technical Information (OSTI.GOV)
KLEIN, JAMES
A nominal 1500 STP-L PAssively Cooled, Electrically heated hydride (PACE) Bed has been developed for implementation into a new Savannah River Site tritium project. The 1.2 meter (four-foot) long process vessel contains an internal 'U-tube' for tritium In-Bed Accountability (IBA) measurements. IBA will be performed on six, 12.6 kg production metal hydride storage beds. IBA tests were done on a prototype bed using electric heaters to simulate the radiolytic decay of tritium. Tests had gas flows from 10 to 100 SLPM through the U-tube or 100 SLPM through the bed's vacuum jacket. IBA inventory measurement errors at the 95 percent confidence level were calculated using the correlation of IBA gas temperature rise, or (hydride) bed temperature rise above ambient temperature, versus simulated tritium inventory. Prototype bed IBA inventory errors at 100 SLPM were the largest for gas flows through the vacuum jacket: 15.2 grams for the bed temperature rise and 11.5 grams for the gas temperature rise. For a 100 SLPM U-tube flow, the inventory error was 2.5 grams using bed temperature rise and 1.6 grams using gas temperature rise. For 50 to 100 SLPM U-tube flows, the IBA gas temperature rise inventory errors were nominally one to two grams that increased above four grams for flows less than 50 SLPM. For 50 to 100 SLPM U-tube flows, the IBA bed temperature rise inventory errors were greater than the gas temperature rise errors, but similar errors were found for both methods at gas flows of 20, 30, and 40 SLPM. Electric heater IBA tests were done for six production hydride beds using a 45 SLPM U-tube gas flow. Of the duplicate runs performed on these beds, five of the six beds produced IBA inventory errors of approximately three grams: consistent with results obtained in the laboratory prototype tests.
Tropospheric Delay Raytracing Applied in VLBI Analysis
NASA Astrophysics Data System (ADS)
MacMillan, D. S.; Eriksson, D.; Gipson, J. M.
2013-12-01
Tropospheric delay modeling error continues to be one of the largest sources of error in VLBI analysis. For standard operational solutions, we use the VMF1 elevation-dependent mapping functions derived from ECMWF data. These mapping functions assume that tropospheric delay at a site is azimuthally symmetric. As this assumption does not reflect reality, we have determined the raytrace delay along the signal path through the troposphere for each VLBI quasar observation. We determined the troposphere refractivity fields from the pressure, temperature, specific humidity and geopotential height fields of the NASA GSFC GEOS-5 numerical weather model. We discuss results from analysis of the CONT11 R&D and the weekly operational R1+R4 experiment sessions. When applied in VLBI analysis, baseline length repeatabilities were better for 66-72% of baselines with raytraced delays than with VMF1 mapping functions. Vertical repeatabilities were better for 65% of sites.
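For context, the azimuthally symmetric delay model that mapping functions such as VMF1 implement (and that per-observation raytracing replaces) is conventionally written, in the usual geodetic notation, as

$$\Delta L(e, A) \;=\; m_h(e)\,\Delta L^z_h \;+\; m_w(e)\,\Delta L^z_w \;+\; m_g(e)\,\big[G_N\cos A + G_E\sin A\big],$$

where $e$ is the elevation angle, $A$ the azimuth, $\Delta L^z_{h}$ and $\Delta L^z_{w}$ the zenith hydrostatic and wet delays, and $G_{N}, G_{E}$ optional horizontal gradient parameters; raytracing instead integrates the delay along the actual signal path through the refractivity field, so azimuthal asymmetry is captured directly rather than through the gradient terms.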
Density-based penalty parameter optimization on C-SVM.
Liu, Yun; Lian, Jie; Bartolacci, Michael R; Zeng, Qing-An
2014-01-01
The support vector machine (SVM) is one of the most widely used approaches for data classification and regression. SVM achieves the largest distance between the positive and negative support vectors, which neglects the remote instances away from the SVM interface. In order to avoid a position change of the SVM interface caused by a system outlier, C-SVM was implemented to decrease the influence of the system's outliers. Traditional C-SVM holds a uniform penalty parameter C for both positive and negative instances; however, according to the different number proportions and the data distribution, positive and negative instances should be given different weights for the penalty parameter of the error terms. Therefore, in this paper, we propose density-based penalty parameter optimization of C-SVM. The experimental results indicated that our proposed algorithm has outstanding performance with respect to both precision and recall.
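The idea of giving positive and negative instances different penalty weights can be illustrated with scikit-learn, where class_weight multiplies C per class; this is only a hedged sketch of per-class C weighting, not the authors' density-based algorithm:

```python
# Sketch of per-class penalty weighting in C-SVM (illustrative only; the paper's
# density-based choice of weights is not reproduced here).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import precision_score, recall_score

X, y = make_classification(n_samples=600, weights=[0.85, 0.15], random_state=0)  # imbalanced toy data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# class_weight scales the penalty C for each class, so errors on the rare class cost more;
# a density-based scheme would instead derive these weights from the data distribution.
clf = SVC(C=1.0, kernel="rbf", class_weight={0: 1.0, 1: 4.0}).fit(X_tr, y_tr)
y_hat = clf.predict(X_te)
print("precision:", precision_score(y_te, y_hat), "recall:", recall_score(y_te, y_hat))
```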
Applying Intelligent Algorithms to Automate the Identification of Error Factors.
Jin, Haizhe; Qu, Qingxing; Munechika, Masahiko; Sano, Masataka; Kajihara, Chisato; Duffy, Vincent G; Chen, Han
2018-05-03
Medical errors are the manifestation of defects occurring in medical processes. Extracting and identifying defects as medical error factors from these processes is an effective approach to preventing medical errors. However, it is a difficult and time-consuming task and requires an analyst with a professional medical background. The issues of identifying a method to extract medical error factors and of reducing the extraction difficulty need to be resolved. In this research, a systematic methodology to extract and identify error factors in the medical administration process was proposed. The design of the error report, the extraction of the error factors, and the identification of the error factors were analyzed. Based on 624 medical error cases across four medical institutes in Japan and China, 19 error-related items and their levels were extracted. These were then related to 12 error factors. The relational model between the error-related items and error factors was established based on a genetic algorithm (GA)-back-propagation neural network (BPNN) model. Additionally, compared to BPNN, partial least squares regression and support vector regression, GA-BPNN exhibited a higher overall prediction accuracy, being able to promptly identify the error factors from the error-related items. The combination of "error-related items, their different levels, and the GA-BPNN model" was proposed as an error-factor identification technology, which can automatically identify medical error factors.
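As an illustration of the general GA-BPNN pattern (a genetic algorithm searching network settings, with cross-validated error as the fitness), here is a hedged, self-contained sketch on synthetic data; the search space, variable names and data are made up and do not reproduce the study's model:

```python
# Hypothetical GA-over-BPNN sketch: a small genetic algorithm selects the hidden-layer
# size and learning rate of an MLP, using cross-validated accuracy as the fitness.
import random
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = random.Random(0)
X, y = make_classification(n_samples=400, n_features=19, n_informative=8, random_state=0)

HIDDEN = [8, 16, 32, 64]          # candidate hidden-layer sizes
LR = [1e-3, 3e-3, 1e-2, 3e-2]     # candidate learning rates

def fitness(gene):
    h, lr = gene
    net = MLPClassifier(hidden_layer_sizes=(h,), learning_rate_init=lr,
                        max_iter=300, random_state=0)
    return cross_val_score(net, X, y, cv=3).mean()

def mutate(gene):
    h, lr = gene
    return (rng.choice(HIDDEN), lr) if rng.random() < 0.5 else (h, rng.choice(LR))

population = [(rng.choice(HIDDEN), rng.choice(LR)) for _ in range(6)]
for _ in range(5):                                   # a few GA generations
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:3]                             # selection: keep the fittest
    population = parents + [mutate(rng.choice(parents)) for _ in range(3)]

best = max(population, key=fitness)
print("best (hidden units, learning rate):", best, "cv accuracy:", round(fitness(best), 3))
```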
Pourrain, Laure; Serin, Michel; Dautriche, Anne; Jacquetin, Fréderic; Jarny, Christophe; Ballenecker, Isabelle; Bahous, Mickaël; Sgro, Catherine
2018-06-07
Medication errors are the most frequent medical care adverse events in France. The management process used for them in hospitals remains poorly applied in primary ambulatory care. The main objective of our study was to assess medication error management in general ambulatory practice. The secondary objectives were the characterization of the errors and the analysis of their root causes in order to implement corrective measures. The study was performed in a multiprofessional health care house, applying the stages and tools validated by the French high health authority, which we had previously adapted to ambulatory medical care. During the 3-month study, 4712 medical consultations were performed and we collected 64 medication errors. Most of the affected patients were at the extreme ages of life (9.4% before 9 years and 64% after 70 years). Medication errors occurred at home in 39.1% of cases, at the multiprofessional health care house (25.0%) or at the drugstore (17.2%). They led to serious clinical consequences (classified as major, critical or catastrophic) in 17.2% of cases. Drug-induced adverse effects occurred in 5 patients, 3 of them needing hospitalization (1 patient recovered, 1 displayed sequelae and 1 died). In more than half of cases, the errors occurred at the prescribing stage. The most frequent types of error were the use of a wrong drug, different from that indicated for the patient (37.5%), and poor treatment adherence (18.75%). The reported systemic causes were care process dysfunction (in coordination or procedures), the context of the health care action (patient home, unplanned act, professional overwork) and human factors such as the patient's and the professional's condition. The professional team's adherence to the study was excellent. Our study demonstrates, for the first time in France, that medication error management in ambulatory general medical care can be implemented in a multiprofessional health care house under two conditions: the presence of a trained team coordinator, and the use of validated, adapted and simple processes and tools. This study also shows that medication errors in general practice are specific to the organization of the care process. We identified vulnerable points, such as transfer and communication between home and care facilities or conversely, medical coordination, and the involvement of the patient himself in his care. Copyright © 2018 Société française de pharmacologie et de thérapeutique. Published by Elsevier Masson SAS. All rights reserved.
SU-E-T-179: Clinical Impact of IMRT Failure Modes at Or Near TG-142 Tolerance Criteria Levels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faught, J Tonigan; Balter, P; Johnson, J
2015-06-15
Purpose: Quantitatively assess the clinical impact of 11 critical IMRT dose delivery failure modes. Methods: Eleven step-and-shoot IMRT failure modes (FMs) were introduced into twelve Pinnacle v9.8 treatment plans. One standard and one highly modulated plan on the IROC IMRT phantom and ten previous H&N patient treatment plans were used. FMs included physics components covered by basic QA near tolerance criteria levels (TG-142) such as beam energy, MLC positioning, and MLC modeling. Resultant DVHs were compared to those of failure-free plans and the severity of plan degradation was assessed considering PTV coverage and OAR and normal tissue tolerances and used for FMEA severity scoring. Six of these FMs were physically simulated and phantom irradiations performed. TLD and radiochromic film results are used for comparison to treatment planning studies. Results: Based on treatment planning studies, the largest clinical impact from the phantom cases was induced by a 2 mm systematic MLC shift in one bank, with the combination of a D95% target underdose near 16% and an OAR overdose near 8%. Cord overdoses of 5%–11% occurred with gantry angle, collimator angle, couch angle, MLC leaf end modeling, and MLC transmission and leakage modeling FMs. PTV coverage and/or OAR sparing was compromised in all FMs introduced in phantom plans with the exception of CT number to electron density tables, MU linearity, and MLC tongue-and-groove modeling. Physical measurements did not entirely agree with treatment planning results. For example, symmetry errors resulted in the largest physically measured discrepancies of up to 3% in the PTVs while a maximum of 0.5% deviation was seen in the treatment planning studies. Patient treatment plan study results are under analysis. Conclusion: Even in the simplistic anatomy of the IROC phantom, some basic physics FMs, just outside of TG-142 tolerance criteria, appear to have the potential for large clinical implications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pogson, EM; Liverpool and Macarthur Cancer Therapy Centres, Liverpool, NSW; Ingham Institute for Applied Medical Research, Sydney, NSW
Purpose: To identify the robustness of different treatment techniques with respect to simulated linac errors in the dose distribution to the target volume and organs at risk for step and shoot IMRT (ssIMRT), VMAT and Autoplan generated VMAT nasopharynx plans. Methods: A nasopharynx patient dataset was retrospectively replanned with three different techniques: 7 beam ssIMRT, one arc manually generated VMAT and one arc automatically generated VMAT. Simulated treatment uncertainties (gantry, collimator, MLC field size and MLC shifts) were introduced into these plans at increments of 5, 2, 1, −1, −2 and −5 (degrees or mm) and recalculated in Pinnacle. The mean and maximum doses were calculated for the high dose PTV, parotids, brainstem, and spinal cord and then compared to the original baseline plan. Results: Simulated gantry angle errors have <1% effect on the PTV; ssIMRT is most sensitive. The small collimator errors (±1 and ±2 degrees) impacted the mean PTV dose by <2% for all techniques; however, for the ±5 degree errors the mean target dose varied by up to 7% for the Autoplan VMAT and the max dose to the spinal cord and brain stem by up to 10%, seen in all techniques. The simulated MLC shifts introduced the largest errors for the Autoplan VMAT, with the larger MLC modulation presumably being the cause. The most critical error observed was the MLC field size error, where even small errors of 1 mm caused significant changes to both the PTV and the OAR. The ssIMRT is the least sensitive and the Autoplan the most sensitive, with target errors of up to 20% overdose and underdose observed. Conclusion: For a nasopharynx patient the plan robustness observed is highest for the ssIMRT plan and lowest for the Autoplan generated VMAT plan. This could be caused by the more complex MLC modulation seen for the VMAT plans. This project is supported by a grant from NSW Cancer Council.
Reducing medication errors in critical care: a multimodal approach
Kruer, Rachel M; Jarrell, Andrew S; Latif, Asad
2014-01-01
The Institute of Medicine has reported that medication errors are the single most common type of error in health care, representing 19% of all adverse events, while accounting for over 7,000 deaths annually. The frequency of medication errors in adult intensive care units can be as high as 947 per 1,000 patient-days, with a median of 105.9 per 1,000 patient-days. The formulation of drugs is a potential contributor to medication errors. Challenges related to drug formulation are specific to the various routes of medication administration, though errors associated with medication appearance and labeling occur among all drug formulations and routes of administration. Addressing these multifaceted challenges requires a multimodal approach. Changes in technology, training, systems, and safety culture are all strategies to potentially reduce medication errors related to drug formulation in the intensive care unit. PMID:25210478
Olson, Eric J.
2013-06-11
An apparatus, program product, and method that run an algorithm on a hardware-based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware-based processor to heat to a degree that increases the likelihood of hardware errors manifesting, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
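The general heat-then-compare idea can be sketched roughly as follows; this is a hedged illustration of the concept, not the patented implementation, and the workload and digest scheme are invented for the example:

```python
# Rough sketch of heat-then-compare hardware error detection: run a deterministic,
# compute-heavy kernel that stresses the processor, then compare a digest of its
# output against a reference digest obtained on known-good hardware.
import hashlib
import numpy as np

def stress_kernel(seed=0, rounds=200, n=256):
    """Deterministic, CPU-intensive workload; any bit flip changes the digest."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n, n))
    for _ in range(rounds):
        a = np.tanh(a @ a.T / n)          # repeated dense math keeps the core busy and hot
    return hashlib.sha256(a.tobytes()).hexdigest()

REFERENCE_DIGEST = stress_kernel()         # in practice, recorded once on trusted hardware

def hardware_error_detected():
    return stress_kernel() != REFERENCE_DIGEST

print("hardware error detected:", hardware_error_detected())
```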
The current and ideal state of anatomic pathology patient safety.
Raab, Stephen Spencer
2014-01-01
An anatomic pathology diagnostic error may be secondary to a number of active and latent technical and/or cognitive components, which may occur anywhere along the total testing process in clinical and/or laboratory domains. For the pathologist interpretive steps of diagnosis, we examine Kahneman's framework of slow and fast thinking to explain different causes of error in precision (agreement) and in accuracy (truth). The pathologist cognitive diagnostic process involves image pattern recognition and a slow thinking error may be caused by the application of different rationally-constructed mental maps of image criteria/patterns by different pathologists. This type of error is partly related to a system failure in standardizing the application of these maps. A fast thinking error involves the flawed leap from image pattern to incorrect diagnosis. In the ideal state, anatomic pathology systems would target these cognitive error causes as well as the technical latent factors that lead to error.
Considerations for Creating Multi-Language Personality Norms: A Three-Component Model of Error
ERIC Educational Resources Information Center
Meyer, Kevin D.; Foster, Jeff L.
2008-01-01
With the increasing globalization of human resources practices, a commensurate increase in demand has occurred for multi-language ("global") personality norms for use in selection and development efforts. The combination of data from multiple translations of a personality assessment into a single norm engenders error from multiple sources. This…
J.M. Hull; A.M. Fish; J.J. Keane; S.R. Mori; B.J Sacks; A.C. Hull
2010-01-01
One of the primary assumptions associated with many wildlife and population trend studies is that target species are correctly identified. This assumption may not always be valid, particularly for species similar in appearance to co-occurring species. We examined size overlap and identification error rates among Cooper's (Accipiter cooperii...
[Errors in medicine. Causes, impact and improvement measures to improve patient safety].
Waeschle, R M; Bauer, M; Schmidt, C E
2015-09-01
The guarantee of quality of care and patient safety is of major importance in hospitals even though increased economic pressure and work intensification are ubiquitously present. Nevertheless, adverse events still occur in 3-4 % of hospital stays and of these 25-50 % are estimated to be avoidable. The identification of possible causes of error and the development of measures for the prevention of medical errors are essential for patient safety. The implementation and continuous development of a constructive culture of error tolerance are fundamental. The origins of errors can be differentiated into systemic latent and individual active causes, and components of both categories are typically involved when an error occurs. Systemic causes are, for example, outdated structural environments, lack of clinical standards and low personnel density. These causes arise far away from the patient, e.g. in management decisions, and can remain unrecognized for a long time. Individual causes involve, e.g., confirmation bias, fixation errors and prospective memory failure. These causes have a direct impact on patient care and can result in immediate injury to patients. Stress, unclear information, complex systems and a lack of professional experience can promote individual causes. Awareness of possible causes of error is a fundamental precondition to establishing appropriate countermeasures. Error prevention should include actions directly affecting the causes of error and includes checklists and standard operating procedures (SOP) to avoid fixation and prospective memory failure and team resource management to improve communication and the generation of collective mental models. Critical incident reporting systems (CIRS) provide the opportunity to learn from previous incidents without resulting in injury to patients. Information technology (IT) support systems, such as the computerized physician order entry system, assist in the prevention of medication errors by providing information on dosage, pharmacological interactions, side effects and contraindications of medications. The major challenges for quality and risk management, for the heads of departments and the executive board are the implementation and support of the described actions and sustained guidance of the staff involved in the modification management process. The global trigger tool is suitable for improving transparency and objectifying the frequency of medical errors.
Smiley, A M
1990-10-01
In February of 1986 a head-on collision occurred between a freight train and a passenger train in western Canada killing 23 people and causing over $30 million of damage. A Commission of Inquiry appointed by the Canadian government concluded that human error was the major reason for the collision. This report discusses the factors contributing to the human error: mainly poor work-rest schedules, the monotonous nature of the train driving task, insufficient information about train movements, and the inadequate backup systems in case of human error.
NASA Astrophysics Data System (ADS)
Rodríguez, C.; Aragón, E.; Castro, A.; Pedreira, R.; Sánchez-Navas, A.; Díaz-Alvarado, J.; D´Eramo, F.; Pinotti, L.; Aguilera, Y.; Cavarozzi, C.; Demartis, M.; Hernando, I. R.; Fuentes, T.
2017-10-01
The publisher regrets that an error occurred which led to the premature publication of this paper. This error bears no reflection on the article or its authors. The publisher apologizes to the authors and the readers for this unfortunate error in Journal of South American Earth Sciences, 78C (2017) 38-60, http://dx.doi.org/10.1016/j.jsames.2017.06.002.
A Conceptual Framework for Predicting Error in Complex Human-Machine Environments
NASA Technical Reports Server (NTRS)
Freed, Michael; Remington, Roger; Null, Cynthia H. (Technical Monitor)
1998-01-01
We present a Goals, Operators, Methods, and Selection Rules-Model Human Processor (GOMS-MHP) style model-based approach to the problem of predicting human habit capture errors. Habit captures occur when the model fails to allocate limited cognitive resources to retrieve task-relevant information from memory. Lacking the unretrieved information, decision mechanisms act in accordance with implicit default assumptions, resulting in error when relied upon assumptions prove incorrect. The model helps interface designers identify situations in which such failures are especially likely.
Highway safety research : a national agenda : executive summary.
DOT National Transportation Integrated Search
2001-01-01
Motor vehicle-related injury and death is the nation's largest public health problem. The economic costs to society will approach $2 trillion, and an even greater intangible human loss will occur to family and friends of the 33 million victims. ...
Refractive error at birth and its relation to gestational age.
Varughese, Sara; Varghese, Raji Mathew; Gupta, Nidhi; Ojha, Rishikant; Sreenivas, V; Puliyel, Jacob M
2005-06-01
The refractive status of premature infants is not well studied. This study was done to find the norms of refractive error in newborns at different gestational ages. One thousand two hundred three (1203) eyes were examined for refractive error by streak retinoscopy within the first week of life between June 2001 and September 2002. Tropicamide eye drops (0.8%) with phenylephrine 0.5% were used to achieve cycloplegia and mydriasis. The refractive error was measured in the vertical and horizontal meridia in both eyes and was recorded to the nearest dioptre (D). The neonates were grouped in five gestational age groups ranging from 24 weeks to 43 weeks. Extremely preterm babies were found to be myopic with a mean MSE (mean spherical equivalent) of -4.86 D. The MSE was found to progressively decrease (become less myopic) with increasing gestation and was +2.4 D at term. Astigmatism of more than 1 D spherical equivalent was seen in 67.8% of the eyes examined. Among newborns with > 1 D of astigmatism, the astigmatism was with-the-rule (vertical meridian having greater refractive power than horizontal) in 85% and against-the-rule in 15%. Anisometropia of more than 1 D spherical equivalent was seen in 31% babies. Term babies are known to be hypermetropic, and preterm babies with retinopathy of prematurity (ROP) are known to have myopia. This study provides data on the mean spherical equivalent, the degree of astigmatism, and incidence of anisometropia at different gestational ages. This is the largest study in world literature looking at refractive errors at birth against gestational age. It should help understand the norms of refractive errors in preterm babies.
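For reference, the spherical equivalent used to summarize a sphero-cylindrical refraction is conventionally

$$\mathrm{SE} \;=\; S \;+\; \frac{C}{2},$$

so a retinoscopy result of, say, $-4.00\,\mathrm{D}$ sphere with $-1.50\,\mathrm{D}$ cylinder corresponds to $\mathrm{SE} = -4.00 + (-1.50)/2 = -4.75\,\mathrm{D}$ (the example values are illustrative and not taken from the study).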
A strategy for reducing gross errors in the generalized Born models of implicit solvation
Onufriev, Alexey V.; Sigalov, Grigori
2011-01-01
The “canonical” generalized Born (GB) formula [C. Still, A. Tempczyk, R. C. Hawley, and T. Hendrickson, J. Am. Chem. Soc. 112, 6127 (1990)] is known to provide accurate estimates for total electrostatic solvation energies ΔGel of biomolecules if the corresponding effective Born radii are accurate. Here we show that even if the effective Born radii are perfectly accurate, the canonical formula still exhibits a significant number of gross errors (errors larger than 2kBT relative to the numerical Poisson equation reference) in pairwise interactions between individual atomic charges. Analysis of exact analytical solutions of the Poisson equation (PE) for several idealized nonspherical geometries reveals two distinct spatial modes of the PE solution; these modes are also found in realistic biomolecular shapes. The canonical GB Green function misses one of two modes seen in the exact PE solution, which explains the observed gross errors. To address the problem and reduce gross errors of the GB formalism, we have used exact PE solutions for idealized nonspherical geometries to suggest an alternative analytical Green function to replace the canonical GB formula. The proposed functional form is mathematically nearly as simple as the original, but depends not only on the effective Born radii but also on their gradients, which allows for better representation of details of nonspherical molecular shapes. In particular, the proposed functional form captures both modes of the PE solution seen in nonspherical geometries. Tests on realistic biomolecular structures ranging from small peptides to medium size proteins show that the proposed functional form reduces gross pairwise errors in all cases, with the amount of reduction varying from more than an order of magnitude for small structures to a factor of 2 for the largest ones. PMID:21528947
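For orientation, the canonical GB expression referred to above is commonly written (assuming a solute interior dielectric of 1 and solvent dielectric $\varepsilon_w$) as

$$\Delta G_{el} \approx -\frac{1}{2}\left(1 - \frac{1}{\varepsilon_w}\right)\sum_{i,j}\frac{q_i q_j}{f_{GB}(r_{ij}, R_i, R_j)},\qquad
f_{GB} = \left[r_{ij}^2 + R_i R_j\,\exp\!\left(-\frac{r_{ij}^2}{4R_iR_j}\right)\right]^{1/2},$$

where $q_i$ are the atomic partial charges, $r_{ij}$ the interatomic distances and $R_i$ the effective Born radii; the alternative Green function proposed in the abstract additionally depends on the gradients of the $R_i$.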
Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz
2014-07-01
Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate, if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
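A hedged Monte Carlo sketch of the phenomenon follows, using a single-arm, one-sided z-test for simplicity rather than the paper's many-to-one comparison; the "worst case" reassessment rule used here is the standard one that maximizes the conditional error when the naive fixed-design test is applied:

```python
# Illustrative simulation (not the paper's setting): a naive final z-test after a
# data-driven second-stage sample size inflates the one-sided type 1 error above 0.025.
import numpy as np

rng = np.random.default_rng(1)
c, n1, n_sim = 1.96, 50, 200_000          # naive critical value, stage-1 n, replications
rejections = 0

for _ in range(n_sim):
    s1 = rng.normal(0.0, np.sqrt(n1))      # stage-1 sum of n1 N(0,1) observations under H0
    z1 = s1 / np.sqrt(n1)
    if 0.0 < z1 < c:                       # "worst case" reassessment rule
        n2 = max(1.0, n1 * (c**2 / z1**2 - 1.0))
    else:
        n2 = float(n1)                     # otherwise keep the planned size
    s2 = rng.normal(0.0, np.sqrt(n2))      # stage-2 sum, drawn directly to avoid huge arrays
    z = (s1 + s2) / np.sqrt(n1 + n2)       # naive pooled z, as if no adaptation had occurred
    rejections += z > c

print("empirical type 1 error:", rejections / n_sim)   # clearly above the nominal 0.025
```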
Exploring the initial steps of the testing process: frequency and nature of pre-preanalytic errors.
Carraro, Paolo; Zago, Tatiana; Plebani, Mario
2012-03-01
Few data are available on the nature of errors in the so-called pre-preanalytic phase, the initial steps of the testing process. We therefore sought to evaluate pre-preanalytic errors using a study design that enabled us to observe the initial procedures performed in the ward, from the physician's test request to the delivery of specimens in the clinical laboratory. After a 1-week direct observational phase designed to identify the operating procedures followed in 3 clinical wards, we recorded all nonconformities and errors occurring over a 6-month period. Overall, the study considered 8547 test requests, for which 15 917 blood sample tubes were collected and 52 982 tests undertaken. No significant differences in error rates were found between the observational phase and the overall study period, but underfilling of coagulation tubes was found to occur more frequently in the direct observational phase (P = 0.043). In the overall study period, the frequency of errors was found to be particularly high regarding order transmission [29 916 parts per million (ppm)] and hemolysed samples (2537 ppm). The frequency of patient misidentification was 352 ppm, and the most frequent nonconformities were test requests recorded in the diary without the patient's name and failure to check the patient's identity at the time of blood draw. The data collected in our study confirm the relative frequency of pre-preanalytic errors and underline the need to consensually prepare and adopt effective standard operating procedures in the initial steps of laboratory testing and to monitor compliance with these procedures over time.
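The ppm figures quoted here follow the usual convention of nonconformities per million opportunities:

$$\text{rate (ppm)} \;=\; \frac{\text{nonconforming events}}{\text{opportunities}}\times 10^{6};$$

for example, 3 misidentification events among the 8547 test requests would give $3/8547 \times 10^{6} \approx 351$ ppm, in line with the reported 352 ppm (the count of 3 is inferred for illustration and is not stated in the record).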
Paediatric Refractive Errors in an Eye Clinic in Osogbo, Nigeria.
Michaeline, Isawumi; Sheriff, Agboola; Bimbo, Ayegoro
2016-03-01
Paediatric ophthalmology is an emerging subspecialty in Nigeria and as such there is a paucity of data on refractive errors in the country. This study set out to determine the pattern of refractive errors in children attending an eye clinic in South West Nigeria. A descriptive study of 180 consecutive subjects seen over a 2-year period. Presenting complaints, presenting visual acuity (PVA), age and sex were recorded. Clinical examination of the anterior and posterior segments of the eyes, extraocular muscle assessment and refraction were done. The types of refractive errors and their grades were determined. Corrected VA was obtained. Data were analysed using descriptive statistics in proportions and chi square, with p value <0.05. The age range of subjects was between 3 and 16 years with mean age = 11.7 and SD = 0.51, with males making up 33.9%. The commonest presenting complaint was blurring of distant vision (40%) and the commonest presenting visual acuity was 6/9 (33.9%); normal vision constituted >75.0%, visual impairment 20% and low vision 23.3%. Low grade spherical and cylindrical errors occurred most frequently (35.6% and 59.9% respectively). Regular astigmatism was significantly more common, P <0.001. The commonest diagnosis was simple myopic astigmatism (41.1%). Four cases of strabismus were seen. Simple spherical and cylindrical errors were the commonest types of refractive errors seen. Visual impairment and low vision occurred and could be a cause of absenteeism from school. Low-cost spectacle production or dispensing units and health education are advocated for the prevention of visual impairment in a hospital set-up.
Schipler, Agnes; Iliakis, George
2013-09-01
Although the DNA double-strand break (DSB) is defined as a rupture in the double-stranded DNA molecule that can occur without chemical modification in any of the constituent building blocks, it is recognized that this form is restricted to enzyme-induced DSBs. DSBs generated by physical or chemical agents can include at the break site a spectrum of base alterations (lesions). The nature and number of such chemical alterations define the complexity of the DSB and are considered putative determinants for repair pathway choice and the probability that errors will occur during this processing. As the pathways engaged in DSB processing show distinct and frequently inherent propensities for errors, pathway choice also defines the error levels cells opt to accept. Here, we present a classification of DSBs on the basis of increasing complexity and discuss how complexity may affect processing, as well as how it may cause lethal or carcinogenic processing errors. By critically analyzing the characteristics of DSB repair pathways, we suggest that all repair pathways can in principle remove lesions clustering at the DSB but are likely to fail when they encounter clusters of DSBs that cause a local form of chromothripsis. In the same framework, we also analyze the rationale of DSB repair pathway choice.
Gaussian Hypothesis Testing and Quantum Illumination.
Wilde, Mark M; Tomamichel, Marco; Lloyd, Seth; Berta, Mario
2017-09-22
Quantum hypothesis testing is one of the most basic tasks in quantum information theory and has fundamental links with quantum communication and estimation theory. In this paper, we establish a formula that characterizes the decay rate of the minimal type-II error probability in a quantum hypothesis test of two Gaussian states given a fixed constraint on the type-I error probability. This formula is a direct function of the mean vectors and covariance matrices of the quantum Gaussian states in question. We give an application to quantum illumination, which is the task of determining whether there is a low-reflectivity object embedded in a target region with a bright thermal-noise bath. For the asymmetric-error setting, we find that a quantum illumination transmitter can achieve an error probability exponent stronger than a coherent-state transmitter of the same mean photon number, and furthermore, that it requires far fewer trials to do so. This occurs when the background thermal noise is either low or bright, which means that a quantum advantage is even easier to witness than in the symmetric-error setting because it occurs for a larger range of parameters. Going forward from here, we expect our formula to have applications in settings well beyond those considered in this paper, especially to quantum communication tasks involving quantum Gaussian channels.
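For background (this is the asymptotic, first-order version of the setting described above, not the paper's Gaussian-specific formula): quantum Stein's lemma states that under a fixed type-I error constraint $\varepsilon\in(0,1)$, the optimal type-II error exponent between states $\rho$ and $\sigma$ is the quantum relative entropy,

$$\lim_{n\to\infty} -\frac{1}{n}\log \beta_{\varepsilon}\!\left(\rho^{\otimes n}\,\middle\|\,\sigma^{\otimes n}\right) \;=\; D(\rho\|\sigma) \;=\; \operatorname{Tr}\!\left[\rho\left(\log\rho - \log\sigma\right)\right].$$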
The good doctor: the carer's perspective.
Levine, Carol
2004-01-01
Carers are family members, friends, and neighbours who perform medical tasks and personal care, manage housekeeping and financial affairs, and provide emotional support to people who are ill, disabled, or elderly. From a carer's perspective, the primary requisite for a good doctor is competence. Assuming equal technical skills and knowledge, the difference between 'good' and 'bad' doctors comes down to attitudes and behaviour, that is, communication. An important aspect of communication is what doctors say to carers, and how they interpret what carers say to them. Body language (stances, gestures and expression) communicates as well. Good doctors are surrounded by courteous, helpful and efficient assistants. Doctors can make two types of errors in dealing with carers. Type 1 errors occur when doctors exclude the carer from decision making and information. Type 2 errors occur when doctors speak only to the carer and ignore the patient. Good doctors, patients and carers confront the existential meaning of illness together.
Bouhabel, Sarah; Kay-Rivest, Emily; Nhan, Carol; Bank, Ilana; Nugus, Peter; Fisher, Rachel; Nguyen, Lily Hp
2017-06-01
Otolaryngology-head and neck surgery (OTL-HNS) residents face a variety of difficult, high-stress situations, which may occur early in their training. Since these events occur infrequently, simulation-based learning has become an important part of residents' training and is already well established in fields such as anesthesia and emergency medicine. In the domain of OTL-HNS, it is gradually gaining in popularity. Crisis Resource Management (CRM), a program adapted from the aviation industry, aims to improve outcomes of crisis situations by attempting to mitigate human errors. Some examples of CRM principles include cultivating situational awareness; promoting proper use of available resources; and improving rapid decision making, particularly in high-acuity, low-frequency clinical situations. Our pilot project sought to integrate CRM principles into an airway simulation course for OTL-HNS residents, but most importantly, it evaluated whether learning objectives were met, through the use of a novel error identification model.
JaŁoszyŃski, PaweŁ
2018-01-25
To date, the subgenus Rhomboconnus Franz of Euconnus Thomson has been represented by ten species known to occur in Venezuela, Panama and Ecuador. For the first time Rhomboconnus is reported to occur in Peru and Bolivia, and two new species are described: Euconnus wari sp. n. (Peru) and E. inkachakanus sp. n. (Bolivia). The latter species is the largest representative of Rhomboconnus, with a body length exceeding 3 mm.
Farag, Amany; Blegen, Mary; Gedney-Lose, Amalia; Lose, Daniel; Perkhounkova, Yelena
2017-05-01
Medication errors are one of the most frequently occurring errors in health care settings. The complexity of the ED work environment places patients at risk for medication errors. Most hospitals rely on nurses' voluntary medication error reporting, but these errors are under-reported. The purpose of this study was to examine the relationship among work environment (nurse manager leadership style and safety climate), social capital (warmth and belonging relationships and organizational trust), and nurses' willingness to report medication errors. A cross-sectional descriptive design using a questionnaire with a convenience sample of emergency nurses was used. Data were analyzed using descriptive, correlation, Mann-Whitney U, and Kruskal-Wallis statistics. A total of 71 emergency nurses were included in the study. Emergency nurses' willingness to report errors decreased as the nurses' years of experience increased (r = -0.25, P = .03). Their willingness to report errors increased when they received more feedback about errors (r = 0.25, P = .03) and when their managers used a transactional leadership style (r = 0.28, P = .01). ED nurse managers can modify their leadership style to encourage error reporting. Timely feedback after an error report is particularly important. Engaging experienced nurses to understand error root causes could increase voluntary error reporting. Published by Elsevier Inc.
Suba, Eric J; Pfeifer, John D; Raab, Stephen S
2007-10-01
Patient identification errors in surgical pathology often involve switches of prostate or breast needle core biopsy specimens among patients. We assessed strategies for decreasing the occurrence of these uncommon and yet potentially catastrophic events. Root cause analyses were performed following 3 cases of patient identification error involving prostate needle core biopsy specimens. Patient identification errors in surgical pathology result from slips and lapses of automatic human action that may occur at numerous steps during pre-laboratory, laboratory and post-laboratory work flow processes. Patient identification errors among prostate needle biopsies may be difficult to entirely prevent through the optimization of work flow processes. A DNA time-out, whereby DNA polymorphic microsatellite analysis is used to confirm patient identification before radiation therapy or radical surgery, may eliminate patient identification errors among needle biopsies.
Safety Strategies in an Academic Radiation Oncology Department and Recommendations for Action
Terezakis, Stephanie A.; Pronovost, Peter; Harris, Kendra; DeWeese, Theodore; Ford, Eric
2013-01-01
Background Safety initiatives in the United States continue to work on providing guidance as to how the average practitioner might make patients safer in the face of the complex process by which radiation therapy (RT), an essential treatment used in the management of many patients with cancer, is prepared and delivered. Quality control measures can uncover certain specific errors such as machine dose mis-calibration or misalignments of the patient in the radiation treatment beam. However, they are less effective at uncovering less common errors that can occur anywhere along the treatment planning and delivery process, and even when the process is functioning as intended, errors still occur. Prioritizing Risks and Implementing Risk-Reduction Strategies Activities undertaken at the radiation oncology department at the Johns Hopkins Hospital (Baltimore) include Failure Mode and Effects Analysis (FMEA), risk-reduction interventions, and voluntary error and near-miss reporting systems. A visual process map portrayed 269 RT steps occurring among four subprocesses—including consult, simulation, treatment planning, and treatment delivery. Two FMEAs revealed 127 and 159 possible failure modes, respectively. Risk-reduction interventions for 15 “top-ranked” failure modes were implemented. Since the error and near-miss reporting system’s implementation in the department in 2007, 253 events have been logged. However, the system may be insufficient for radiation oncology, for which a greater level of practice-specific information is required to fully understand each event. Conclusions The “basic science” of radiation treatment has received considerable support and attention in developing novel therapies to benefit patients. The time has come to apply the same focus and resources to ensuring that patients safely receive the maximal benefits possible. PMID:21819027