Sample records for calculated human error

  1. Injection Molding Parameters Calculations by Using Visual Basic (VB) Programming

    NASA Astrophysics Data System (ADS)

    Tony, B. Jain A. R.; Karthikeyen, S.; Alex, B. Jeslin A. R.; Hasan, Z. Jahid Ali

    2018-03-01

    Nowadays the manufacturing industry plays a vital role in production sectors. Fabricating a component requires many design calculations, and human errors can occur during these calculations. The aim of this project is to create a dedicated module, written in Visual Basic (VB), that calculates injection molding parameters and thereby avoids human error. To create an injection mold for a spur gear component, the following parameters must be calculated: cooling capacity, cooling channel diameter and length, runner length and diameter, and gate diameter and pressure. A separate module was created in Visual Basic to calculate these injection molding parameters and reduce human error. The module's output dimensions feed the design of the injection molding components, such as the mold cavity, core and ejector plate.

  2. Evaluation of lens distortion errors using an underwater camera system for video-based motion analysis

    NASA Technical Reports Server (NTRS)

    Poliner, Jeffrey; Fletcher, Lauren; Klute, Glenn K.

    1994-01-01

    Video-based motion analysis systems are widely employed to study human movement, using computers to capture, store, process, and analyze video data. These data can be collected in any environment where cameras can be located. One of the NASA facilities where human performance research is conducted is the Weightless Environment Training Facility (WETF), a pool of water which simulates zero gravity through neutral buoyancy. Underwater video collection in the WETF poses some unique problems. This project evaluates the error caused by the lens distortion of the WETF cameras. A grid of points of known dimensions was constructed and videotaped using an underwater video vault system. Recorded images were played back on a VCR and a personal computer grabbed and stored the images on disk. These images were then digitized to give calculated coordinates for the grid points. Errors were calculated as the distance from the known coordinates of the points to the calculated coordinates. It was demonstrated that errors from lens distortion could be as high as 8 percent. By avoiding the outermost regions of a wide-angle lens, the error can be kept smaller.
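
    The error measure described above, the distance from a grid point's known coordinates to its digitized coordinates, can be sketched as follows. The grid coordinates and field-of-view length are hypothetical illustrations, not the actual WETF values:

```python
import math

def point_error(known, measured):
    """Euclidean distance between a known grid point and its digitized estimate."""
    return math.dist(known, measured)

def percent_error(known, measured, reference_length):
    """Digitizing error expressed as a percentage of a reference length
    (e.g. the width of the field of view)."""
    return 100.0 * point_error(known, measured) / reference_length

# Hypothetical grid point: known at (10.0, 10.0) cm, digitized at (10.3, 10.4) cm,
# referenced to a 25 cm field of view -> roughly 2 percent error.
err = percent_error((10.0, 10.0), (10.3, 10.4), 25.0)
```
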

  3. Human Reliability and the Cost of Doing Business

    NASA Technical Reports Server (NTRS)

    DeMott, Diana

    2014-01-01

    Most businesses recognize that people will make mistakes and assume errors are just part of the cost of doing business, but does that cost have to be accepted? Companies facing high risk or major consequences should consider the effect of human error. In a variety of industries, human errors have caused costly failures and workplace injuries: airline mishaps, medical malpractice, medication administration errors and major oil spills have all been blamed on human error. A technique to mitigate or even eliminate some of these costly human errors is Human Reliability Analysis (HRA). Various methodologies are available to perform Human Reliability Assessments, ranging from identifying the most likely areas of concern to detailed assessments in which human error failure probabilities are calculated. Which methodology to use depends on a variety of factors, including: 1) how people react and act in different industries, and differing expectations based on industry standards; 2) factors that influence how human errors could occur, such as tasks, tools, environment, workplace, support, training and procedures; 3) the type and availability of data; and 4) how the industry views risk and reliability influences (types of emergencies, contingencies and routine tasks versus cost-based concerns). A Human Reliability Assessment should be the first step toward reducing, mitigating or eliminating costly mistakes or catastrophic failures. Using Human Reliability techniques to identify and classify human error risks gives a company more opportunities to mitigate or eliminate these risks and prevent costly failures.

  4. Quantification of airport community noise impact in terms of noise levels, population density, and human subjective response

    NASA Technical Reports Server (NTRS)

    Deloach, R.

    1981-01-01

    The Fraction Impact Method (FIM), developed by the National Research Council (NRC) for assessing the amount and physiological effect of noise, is described. Here, the number of people exposed to a given level of noise is multiplied by a weighting factor that depends on noise level. It is pointed out that the Aircraft-noise Levels and Annoyance MOdel (ALAMO), recently developed at NASA Langley Research Center, can perform the NRC fractional impact calculations for given modes of operation at any U.S. airport. The sensitivity of these calculations to errors in estimates of population, noise level, and human subjective response is discussed. It is found that a change in source noise causes a substantially smaller change in contour area than would be predicted simply on the basis of inverse square law considerations. Another finding is that the impact calculations are generally less sensitive to source noise errors than to systematic errors in population or subjective response.

  5. Stopping power and range calculations in human tissues by using the Hartree-Fock-Roothaan wave functions

    NASA Astrophysics Data System (ADS)

    Usta, Metin; Tufan, Mustafa Çağatay

    2017-11-01

    The object of this work is to present stopping power and range values for some human tissues at energies ranging from 1 MeV to 1 GeV and from 1 to 500 MeV, respectively. The tissues considered are lung, intestine, skin, larynx, breast, bladder, prostate and ovary. In this work, the stopping power is calculated using a velocity-dependent effective charge and effective mean excitation energies of the target material. We used Hartree-Fock-Roothaan (HFR) atomic wave functions to determine the charge density, and the continuous slowing down approximation (CSDA) for the calculation of the proton range. The electronic stopping power values of the tissues were compared with the ICRU 44 and 46 reports and with SRIM, Janni and CasP data in terms of percent error; the range values were compared with SRIM, FLUKA and Geant4 data. For the electronic stopping power, the ICRU, SRIM and Janni data showed the best fit with our values at 1-50 MeV, 50-250 MeV and 250 MeV-1 GeV, respectively. For the ranges, the best agreement was found with the SRIM data, with an error level below 10% over the energies used in proton therapy; however, errors greater than 30% were observed at energies of 250 MeV and above.
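
    The CSDA range mentioned above is the integral of the reciprocal stopping power over energy. A minimal numerical sketch, with a toy stopping-power function standing in for the HFR-based S(E) of the paper:

```python
def csda_range(stopping_power, e_min, e_max, steps=10_000):
    """Continuous slowing down approximation: R = integral of dE / S(E),
    evaluated here with the midpoint rule. `stopping_power` is any callable
    mapping energy to S(E) in consistent units."""
    h = (e_max - e_min) / steps
    return sum(h / stopping_power(e_min + (i + 0.5) * h) for i in range(steps))

# Toy check: with constant S(E) = 2, the range over the interval 0-10 is 5.
r = csda_range(lambda e: 2.0, 0.0, 10.0)
```
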

  6. Monte Carlo simulation of expert judgments on human errors in chemical analysis--a case study of ICP-MS.

    PubMed

    Kuselman, Ilya; Pennecchi, Francesca; Epstein, Malka; Fajgelj, Ales; Ellison, Stephen L R

    2014-12-01

    Monte Carlo simulation of expert judgments on human errors in a chemical analysis was used for determination of distributions of the error quantification scores (scores of likelihood and severity, and scores of effectiveness of a laboratory quality system in prevention of the errors). The simulation was based on modeling of an expert behavior: confident, reasonably doubting and irresolute expert judgments were taken into account by means of different probability mass functions (pmfs). As a case study, 36 scenarios of human errors which may occur in elemental analysis of geological samples by ICP-MS were examined. Characteristics of the score distributions for three pmfs of an expert behavior were compared. Variability of the scores, as standard deviation of the simulated score values from the distribution mean, was used for assessment of the score robustness. A range of the score values, calculated directly from elicited data and simulated by a Monte Carlo method for different pmfs, was also discussed from the robustness point of view. It was shown that robustness of the scores, obtained in the case study, can be assessed as satisfactory for the quality risk management and improvement of a laboratory quality system against human errors. Copyright © 2014 Elsevier B.V. All rights reserved.
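
    The simulation approach described above, drawing scores from different probability mass functions per expert behavior and comparing score variability, can be sketched as follows. The pmfs here are hypothetical illustrations, not the ones elicited in the study:

```python
import random

# Hypothetical discrete pmfs over a 1-5 likelihood score for three kinds of
# expert behavior; a confident expert concentrates mass on one score, an
# irresolute expert spreads it uniformly.
PMFS = {
    "confident":  {2: 0.1, 3: 0.8, 4: 0.1},
    "doubting":   {1: 0.05, 2: 0.25, 3: 0.4, 4: 0.25, 5: 0.05},
    "irresolute": {1: 0.2, 2: 0.2, 3: 0.2, 4: 0.2, 5: 0.2},
}

def simulate_scores(pmf, n, seed=0):
    """Draw n scores from a discrete pmf modeling one expert behavior."""
    rng = random.Random(seed)
    values = list(pmf)
    return rng.choices(values, weights=[pmf[v] for v in values], k=n)

def std_dev(xs):
    """Population standard deviation, used here as the robustness measure."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
```

As expected, the simulated scores of the "confident" pmf vary less than those of the "irresolute" one, which is the kind of comparison the study uses to assess score robustness.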

  7. M-Health for Improving Screening Accuracy of Acute Malnutrition in a Community-Based Management of Acute Malnutrition Program in Mumbai Informal Settlements.

    PubMed

    Chanani, Sheila; Wacksman, Jeremy; Deshmukh, Devika; Pantvaidya, Shanti; Fernandez, Armida; Jayaraman, Anuja

    2016-12-01

    Acute malnutrition is linked to child mortality and morbidity. Community-Based Management of Acute Malnutrition (CMAM) programs can be instrumental in large-scale detection and treatment of undernutrition. The World Health Organization (WHO) 2006 weight-for-height/length tables are diagnostic tools available to screen for acute malnutrition. Frontline workers (FWs) in a CMAM program in Dharavi, Mumbai, were using CommCare, a mobile application, for monitoring and case management of children in combination with the paper-based WHO simplified tables. A strategy was undertaken to digitize the WHO tables into the CommCare application. The objective was to measure differences in diagnostic accuracy in community-based screening for acute malnutrition by FWs using a mobile-based solution. Twenty-seven FWs initially used the paper-based tables and then switched to an updated mobile application that included a nutritional grade calculator. Human error rates specifically associated with grade classification were calculated by comparing the grade assigned by the FW to the grade each child should have received based on the same WHO tables. Cohen's kappa coefficient, as well as sensitivity and specificity rates, were also calculated and compared for paper-based grade assignments and calculator grade assignments. Comparing FWs (N = 14) who completed at least 40 screenings without and 40 with the calculator, the error rates were 5.5% and 0.7%, respectively (p < .0001). Interrater reliability (κ) increased to an almost perfect level (>.90), from .79 to .97, after switching to the mobile calculator. Sensitivity and specificity also improved significantly. The mobile calculator significantly reduces an important component of human error in using the WHO tables to assess acute malnutrition at the community level. © The Author(s) 2016.
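
    The interrater-reliability statistic reported above can be computed from paired grade assignments. A minimal sketch of Cohen's kappa, with hypothetical grade labels:

```python
def cohens_kappa(pairs):
    """Cohen's kappa for two raters (e.g. the grade assigned by a frontline
    worker vs. the grade implied by the WHO tables) over paired labels."""
    n = len(pairs)
    observed = sum(a == b for a, b in pairs) / n
    labels = {lab for pair in pairs for lab in pair}
    # Chance agreement: product of each rater's marginal label frequencies.
    expected = sum(
        (sum(a == lab for a, _ in pairs) / n) * (sum(b == lab for _, b in pairs) / n)
        for lab in labels
    )
    return (observed - expected) / (1 - expected)
```

Perfect agreement yields kappa = 1, and agreement no better than chance yields kappa = 0.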

  8. Simplification of the kinematic model of human movement

    NASA Astrophysics Data System (ADS)

    Dusza, Jacek J.; Wawrzyniak, Zbigniew M.; del Prado Martinez, David

    2013-10-01

    The paper presents a method for simplifying a model of human gait. The experimental data were obtained in the laboratory of the SATI group in the Electronics Engineering Department of the University of Valencia. As a result of the Mean Double Step (MDS) procedure, the human motion was described by a matrix containing the Cartesian coordinates of 26 markers placed on the human body, recorded at 100 time points. With these data it was possible to develop a software application which performs a wide variety of tasks, such as array simplification, calculation of the simplification mask and error calculation, as well as tools for signal comparison and animation of the marker movements. Simplifications were made by spectral analysis of the signals and by calculating the standard deviation of the differences between each signal and its approximation. Using this method, the displacement signals can be written as time series limited to a small number of harmonics. This approach allows a high degree of data compression. The model presented in this work can be applied in medical diagnostics or rehabilitation, because the number of harmonics required to reach a given approximation error may reveal abnormalities (orthopaedic symptoms) in the gait cycle.
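
    The simplification idea, approximating a periodic marker signal by its mean plus its first few harmonics and measuring the residual spread, can be sketched with a direct discrete Fourier sum (a schematic illustration, not the SATI application's code):

```python
import math

def harmonic_approximation(signal, k):
    """Approximate a periodic signal (one mean gait cycle of a marker
    coordinate) by its mean value plus its first k harmonics."""
    n = len(signal)
    approx = [sum(signal) / n] * n
    for h in range(1, k + 1):
        a = 2 / n * sum(signal[t] * math.cos(2 * math.pi * h * t / n) for t in range(n))
        b = 2 / n * sum(signal[t] * math.sin(2 * math.pi * h * t / n) for t in range(n))
        for t in range(n):
            approx[t] += a * math.cos(2 * math.pi * h * t / n) + b * math.sin(2 * math.pi * h * t / n)
    return approx

def residual_std(signal, approx):
    """Standard deviation of the differences between signal and approximation."""
    d = [s - a for s, a in zip(signal, approx)]
    m = sum(d) / len(d)
    return (sum((x - m) ** 2 for x in d) / len(d)) ** 0.5

# A signal containing a single harmonic is reproduced exactly with k = 1.
demo_signal = [math.sin(2 * math.pi * t / 50) for t in range(50)]
demo_residual = residual_std(demo_signal, harmonic_approximation(demo_signal, 1))
```
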

  9. New methodology to reconstruct in 2-D the cuspal enamel of modern human lower molars.

    PubMed

    Modesto-Mata, Mario; García-Campos, Cecilia; Martín-Francés, Laura; Martínez de Pinillos, Marina; García-González, Rebeca; Quintino, Yuliet; Canals, Antoni; Lozano, Marina; Dean, M Christopher; Martinón-Torres, María; Bermúdez de Castro, José María

    2017-08-01

    In recent years, different methodologies have been developed to reconstruct worn teeth. In this article, we propose a new 2-D methodology to reconstruct the worn enamel of lower molars. Our main goals are to reconstruct molars with a high level of accuracy when measuring relevant histological variables and to validate the methodology by calculating the errors associated with the measurements. This methodology is based on polynomial regression equations, and has been validated using two different dental variables: cuspal enamel thickness and crown height of the protoconid. In order to perform the validation process, simulated worn modern human molars were employed. The associated errors of the measurements were also estimated by applying methodologies previously proposed by other authors. The mean percentage error estimated in reconstructed molars for these two variables, in comparison with their real values, is -2.17% for the cuspal enamel thickness of the protoconid and -3.18% for the crown height of the protoconid. This error significantly improves on the results of other methodologies, both in the interobserver error and in the accuracy of the measurements. The new methodology based on polynomial regressions can be confidently applied to the reconstruction of cuspal enamel of lower molars, as it improves the accuracy of the measurements and reduces the interobserver error. The present study shows that it is important to validate all methodologies in order to know the associated errors. This new methodology can easily be exported to other modern human populations, the human fossil record and forensic sciences. © 2017 Wiley Periodicals, Inc.
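
    The validation metric used above, a signed mean percentage error of reconstructed against real measurements, is straightforward to compute. The enamel thickness values below are hypothetical, chosen only to illustrate the sign convention (negative means underestimation):

```python
def mean_percentage_error(real, reconstructed):
    """Signed mean percentage error of reconstructed measurements
    against their real values."""
    n = len(real)
    return 100.0 / n * sum((rec - r) / r for r, rec in zip(real, reconstructed))

# Hypothetical cuspal enamel thicknesses (mm): true vs. reconstructed.
mpe = mean_percentage_error([1.20, 1.35, 1.10], [1.17, 1.32, 1.08])
```
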

  10. Evaluation of Flow Biosensor Technology in a Chronically-Instrumented Non-Human Primate Model

    NASA Technical Reports Server (NTRS)

    Koenig, S. C.; Reister, C.; Schaub, J.; Muniz, G.; Ferguson, T.; Fanton, J. W.

    1995-01-01

    The Physiology Research Branch of Brooks AFB conducts both human and non-human primate experiments to determine the effects of microgravity and hypergravity on the cardiovascular system and to identify the particular mechanisms that invoke these responses. Primary investigative research efforts in a non-human primate model require the calculation of total peripheral resistance (TPR), systemic arterial compliance (SAC), and pressure-volume loop characteristics. These calculations require beat-to-beat measurement of aortic flow. We have evaluated commercially available electromagnetic (EMF) and transit-time flow measurement techniques. In vivo and in vitro experiments demonstrated that the average error of these techniques is less than 25 percent for EMF and less than 10 percent for transit-time.

  11. Reliability of drivers in urban intersections.

    PubMed

    Gstalter, Herbert; Fastenmeier, Wolfgang

    2010-01-01

    The concept of human reliability has been widely used in industrial settings by human factors experts to optimise the person-task fit. Reliability is estimated by the probability that a task will successfully be completed by personnel in a given stage of system operation. Human Reliability Analysis (HRA) is a technique used to calculate human error probabilities as the ratio of errors committed to the number of opportunities for that error. To transfer this notion to the measurement of car driver reliability the following components are necessary: a taxonomy of driving tasks, a definition of correct behaviour in each of these tasks, a list of errors as deviations from the correct actions and an adequate observation method to register errors and opportunities for these errors. Use of the SAFE-task analysis procedure recently made it possible to derive driver errors directly from the normative analysis of behavioural requirements. Driver reliability estimates could be used to compare groups of tasks (e.g. different types of intersections with their respective regulations) as well as groups of drivers or individual drivers' aptitudes. This approach was tested in a field study with 62 drivers of different age groups. The subjects drove an instrumented car and had to complete an urban test route, the main features of which were 18 intersections representing six different driving tasks. The subjects were accompanied by two trained observers who recorded driver errors using standardized observation sheets. Results indicate that error indices often vary with both the age group of drivers and the type of driving task. The highest error indices occurred in the non-signalised intersection tasks and the roundabout, which exactly matches the corresponding ratings of task complexity from the SAFE analysis.
A comparison of age groups clearly shows the disadvantage of older drivers, whose error indices in nearly all tasks are significantly higher than those of the other groups. The vast majority of these errors could be explained by high task load in the intersections, as they represent difficult tasks. The discussion shows how reliability estimates can be used in a constructive way to propose changes in car design, intersection layout and regulation as well as driver training.
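
    The error index defined above, errors committed divided by opportunities for error, pooled per driving-task type from observation sheets, can be sketched as follows (the task names and counts are hypothetical):

```python
from collections import defaultdict

def error_indices(observations):
    """Per-task error indices in the HRA sense: errors committed divided by
    opportunities for error. `observations` holds (task_type, errors,
    opportunities) tuples, e.g. one tuple per driver per task."""
    totals = defaultdict(lambda: [0, 0])
    for task, errors, opportunities in observations:
        totals[task][0] += errors
        totals[task][1] += opportunities
    return {task: e / o for task, (e, o) in totals.items()}

# Hypothetical counts pooled over drivers for two intersection tasks.
indices = error_indices([
    ("roundabout", 2, 10),
    ("roundabout", 1, 10),
    ("signalised", 1, 20),
])
```
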

  12. Dilution space ratio of 2H and 18O of doubly labeled water method in humans.

    PubMed

    Sagayama, Hiroyuki; Yamada, Yosuke; Racine, Natalie M; Shriver, Timothy C; Schoeller, Dale A

    2016-06-01

    Variation of the dilution space ratio (Nd/No) between deuterium ((2)H) and oxygen-18 ((18)O) impacts the calculation of total energy expenditure (TEE) by doubly labeled water (DLW). Our aim was to examine the physiological and methodological sources of variation of Nd/No in humans. We analyzed data from 2,297 humans (0.25-89 yr old). This included the variables Nd/No, total body water, TEE, body mass index (BMI), and percent body fat (%fat). To differentiate between physiologic and methodologic sources of variation, the urine samples from 54 subjects were divided and blinded and analyzed separately, and repeated DLW dosing was performed in an additional 55 participants after 6 mo. Sex, BMI, and %fat did not significantly affect Nd/No, for which the interindividual SD was 0.017. The measurement error from the duplicate urine sample sets was 0.010, and the intraindividual SD of Nd/No in repeat experiments was 0.013. An additional SD of 0.008 was contributed by calibration of the DLW dose water. The variation of measured Nd/No in humans was distributed within a small range, and measurement error accounted for 68% of this variation. There was no evidence that Nd/No differed with respect to sex, BMI, or age between 1 and 80 yr, and thus use of a constant value is suggested to minimize the effect of stable isotope analysis error on the calculation of TEE in DLW studies in humans. Based on a review of 103 publications, the average dilution space ratio is 1.036 for individuals between 1 and 80 yr of age. Copyright © 2016 the American Physiological Society.

  13. Approximate error conjugation gradient minimization methods

    DOEpatents

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.

  14. On the Calculation of Uncertainty Statistics with Error Bounds for CFD Calculations Containing Random Parameters and Fields

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2016-01-01

    This chapter discusses the ongoing development of combined uncertainty and error bound estimates for computational fluid dynamics (CFD) calculations subject to imposed random parameters and random fields. An objective of this work is the construction of computable error bound formulas for output uncertainty statistics that guide CFD practitioners in systematically determining how accurately CFD realizations should be approximated and how accurately uncertainty statistics should be approximated for output quantities of interest. Formal error bounds formulas for moment statistics that properly account for the presence of numerical errors in CFD calculations and numerical quadrature errors in the calculation of moment statistics have been previously presented in [8]. In this past work, hierarchical node-nested dense and sparse tensor product quadratures are used to calculate moment statistics integrals. In the present work, a framework has been developed that exploits the hierarchical structure of these quadratures in order to simplify the calculation of an estimate of the quadrature error needed in error bound formulas. When signed estimates of realization error are available, this signed error may also be used to estimate output quantity of interest probability densities as a means to assess the impact of realization error on these density estimates. Numerical results are presented for CFD problems with uncertainty to demonstrate the capabilities of this framework.

  15. Fuzzy risk analysis of a modern γ-ray industrial irradiator.

    PubMed

    Castiglia, F; Giardina, M

    2011-06-01

    Fuzzy fault tree analyses were used to investigate accident scenarios that involve radiological exposure to operators working in industrial γ-ray irradiation facilities. The HEART method, a first generation human reliability analysis method, was used to evaluate the probability of adverse human error in these analyses. This technique was modified on the basis of fuzzy set theory to more directly take into account the uncertainties in the error-promoting factors on which the methodology is based. Moreover, with regard to some identified accident scenarios, fuzzy radiological exposure risk, expressed in terms of potential annual death, was evaluated. The calculated fuzzy risks for the examined plant were determined to be well below the reference risk suggested by International Commission on Radiological Protection.

  16. Paediatric electronic infusion calculator: An intervention to eliminate infusion errors in paediatric critical care.

    PubMed

    Venkataraman, Aishwarya; Siu, Emily; Sadasivam, Kalaimaran

    2016-11-01

    Medication errors, including infusion prescription errors, are a major public health concern, especially in paediatric patients. There is some evidence that electronic or web-based calculators could minimise these errors. To evaluate the impact of an electronic infusion calculator on the frequency of infusion errors in the Paediatric Critical Care Unit of The Royal London Hospital, London, United Kingdom, we devised an electronic infusion calculator that calculates the appropriate concentration, rate and dose for the selected medication based on the recorded weight and age of the child, and then prints a valid prescription chart. The electronic infusion calculator was implemented in the Paediatric Critical Care Unit from April 2015. A prospective study was conducted over the five months before and the five months after implementation. Data on the following variables were collected onto a proforma: medication dose, infusion rate, volume, concentration, diluent, legibility, and missing or incorrect patient details. A total of 132 handwritten prescriptions were reviewed prior to implementation and 119 electronic infusion calculator prescriptions were reviewed afterwards. Handwritten prescriptions had a higher error rate (32.6%) than electronic infusion calculator prescriptions (<1%; p < .001). Electronic infusion calculator prescriptions had no errors in dose, volume or rate calculation, in contrast to handwritten prescriptions, hence warranting very few pharmacy interventions. Use of the electronic infusion calculator for infusion prescriptions significantly reduced the total number of infusion prescribing errors in the Paediatric Critical Care Unit and has enabled more efficient use of medical and pharmacy time resources.
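
    The core arithmetic such a calculator automates is a weight-based dose-to-rate conversion. The sketch below is a generic illustration of that calculation, not the hospital's actual implementation, and the numbers are hypothetical; it is not a clinical tool:

```python
def infusion_rate_ml_per_hr(dose_mcg_kg_min, weight_kg, concentration_mcg_ml):
    """Pump rate in mL/h for a weight-based continuous infusion:
    dose (mcg/kg/min) x weight (kg) x 60 min/h, divided by the
    syringe concentration (mcg/mL)."""
    if weight_kg <= 0 or concentration_mcg_ml <= 0:
        raise ValueError("weight and concentration must be positive")
    return dose_mcg_kg_min * weight_kg * 60.0 / concentration_mcg_ml

# Hypothetical example: 5 mcg/kg/min for a 10 kg child from a 1000 mcg/mL
# syringe gives 3.0 mL/h.
rate = infusion_rate_ml_per_hr(5.0, 10.0, 1000.0)
```

Performing this conversion in software removes the manual arithmetic step where transcription and calculation slips occur.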

  17. Previous Estimates of Mitochondrial DNA Mutation Level Variance Did Not Account for Sampling Error: Comparing the mtDNA Genetic Bottleneck in Mice and Humans

    PubMed Central

    Wonnapinij, Passorn; Chinnery, Patrick F.; Samuels, David C.

    2010-01-01

    In cases of inherited pathogenic mitochondrial DNA (mtDNA) mutations, a mother and her offspring generally have large and seemingly random differences in the amount of mutated mtDNA that they carry. Comparisons of measured mtDNA mutation level variance values have become an important issue in determining the mechanisms that cause these large random shifts in mutation level. These variance measurements have been made with samples of quite modest size, which should be a source of concern because higher-order statistics, such as variance, are poorly estimated from small sample sizes. We have developed an analysis of the standard error of variance from a sample of size n, and we have defined error bars for variance measurements based on this standard error. We calculate variance error bars for several published sets of measurements of mtDNA mutation level variance and show how the addition of the error bars alters the interpretation of these experimental results. We compare variance measurements from human clinical data and from mouse models and show that the mutation level variance is clearly higher in the human data than it is in the mouse models at both the primary oocyte and offspring stages of inheritance. We discuss how the standard error of variance can be used in the design of experiments measuring mtDNA mutation level variance. Our results show that variance measurements based on fewer than 20 measurements are generally unreliable and ideally more than 50 measurements are required to reliably compare variances with less than a 2-fold difference. PMID:20362273
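
    The paper develops its own error-bar analysis for sample variance; as a simple illustration of why small samples estimate variance poorly, the normal-theory approximation to the standard error of a sample variance can be sketched as follows (this standard formula is shown for scale, and is not claimed to be the paper's exact expression):

```python
import math

def variance_standard_error(sample_variance, n):
    """Normal-theory approximation to the standard error of a sample
    variance from n measurements: SE(s^2) = s^2 * sqrt(2 / (n - 1))."""
    if n < 2:
        raise ValueError("need at least two measurements")
    return sample_variance * math.sqrt(2.0 / (n - 1))
```

For n = 3 the standard error equals the variance itself, and it shrinks only slowly with n, which is consistent with the paper's conclusion that fewer than 20 measurements give unreliable variance estimates.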

  18. Operational hydrological forecasting in Bavaria. Part II: Ensemble forecasting

    NASA Astrophysics Data System (ADS)

    Ehret, U.; Vogelbacher, A.; Moritz, K.; Laurent, S.; Meyer, I.; Haag, I.

    2009-04-01

    In part I of this study, the operational flood forecasting system in Bavaria and an approach to identify and quantify forecast uncertainty was introduced. The approach is split into the calculation of an empirical 'overall error' from archived forecasts and the calculation of an empirical 'model error' based on hydrometeorological forecast tests, where rainfall observations were used instead of forecasts. The 'model error' can, especially in upstream catchments where forecast uncertainty is strongly dependent on the current predictability of the atmosphere, be superimposed on the spread of a hydrometeorological ensemble forecast. In Bavaria, two meteorological ensemble prediction systems are currently tested for operational use: the 16-member COSMO-LEPS forecast and a poor man's ensemble composed of DWD GME, DWD Cosmo-EU, NCEP GFS, Aladin-Austria and MeteoSwiss Cosmo-7. The determination of the overall forecast uncertainty depends on the catchment characteristics: 1. Upstream catchment with high influence of weather forecast: a) A hydrological ensemble forecast is calculated using each of the meteorological forecast members as forcing. b) Corresponding to the characteristics of the meteorological ensemble forecast, each resulting forecast hydrograph can be regarded as equally likely. c) The 'model error' distribution, with parameters dependent on hydrological case and lead time, is added to each forecast timestep of each ensemble member. d) For each forecast timestep, the overall error distribution (i.e. over the 'model error' distributions of all ensemble members) is calculated. e) From this distribution, the uncertainty range on a desired level (here: the 10% and 90% percentiles) is extracted and drawn as a forecast envelope. f) As the mean or median of an ensemble forecast does not necessarily exhibit a meteorologically sound temporal evolution, a single hydrological forecast termed the 'lead forecast' is chosen and shown in addition to the uncertainty bounds.
This can be either an intermediate forecast between the extremes of the ensemble spread or a manually selected forecast based on a meteorologist's advice. 2. Downstream catchments with low influence of weather forecast: In downstream catchments with strong human impact on discharge (e.g. by reservoir operation) and large influence of upstream gauge observation quality on forecast quality, the 'overall error' may in most cases be larger than the combination of the 'model error' and an ensemble spread. Therefore, the overall forecast uncertainty bounds are calculated differently: a) A hydrological ensemble forecast is calculated using each of the meteorological forecast members as forcing. Here, additionally, the corresponding inflow hydrographs from all upstream catchments must be used. b) As for an upstream catchment, the uncertainty range is determined by combination of the 'model error' and the ensemble member forecasts. c) In addition, the 'overall error' is superimposed on the 'lead forecast'. For reasons of consistency, the lead forecast must be based on the same meteorological forecast in the downstream and all upstream catchments. d) From the resulting two uncertainty ranges (one from the ensemble forecast and 'model error', one from the 'lead forecast' and 'overall error'), the envelope is taken as the most prudent uncertainty range. In sum, the uncertainty associated with each forecast run is calculated and communicated to the public in the form of 10% and 90% percentiles. As in part I of this study, the methodology as well as the usefulness (or uselessness) of the resulting uncertainty ranges will be presented and discussed using typical examples.
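
    The envelope extraction described above, taking the 10% and 90% percentiles across equally likely ensemble members at each forecast timestep, can be sketched as follows (the 11-member toy ensemble is hypothetical):

```python
from statistics import quantiles

def forecast_envelope(members):
    """10% / 90% uncertainty bounds per timestep from an ensemble of equally
    likely forecast hydrographs (one discharge series per member)."""
    lower, upper = [], []
    for step_values in zip(*members):
        deciles = quantiles(step_values, n=10, method="inclusive")
        lower.append(deciles[0])    # 10th percentile
        upper.append(deciles[-1])   # 90th percentile
    return lower, upper

# Hypothetical 11-member ensemble with member i constant at discharge i,
# over three timesteps.
lo, hi = forecast_envelope([[float(i)] * 3 for i in range(1, 12)])
```
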

  19. Toward a human-centered aircraft automation philosophy

    NASA Technical Reports Server (NTRS)

    Billings, Charles E.

    1989-01-01

    The evolution of automation in civil aircraft is examined in order to discern trends in the respective roles and functions of automation technology and the humans who operate these aircraft. The effects of advances in automation technology on crew reaction is considered and it appears that, though automation may well have decreased the frequency of certain types of human errors in flight, it may also have enabled new categories of human errors, some perhaps less obvious and therefore more serious than those it has alleviated. It is suggested that automation could be designed to keep the pilot closer to the control of the vehicle, while providing an array of information management and aiding functions designed to provide the pilot with data regarding flight replanning, degraded system operation, and the operational status and limits of the aircraft, its systems, and the physical and operational environment. The automation would serve as the pilot's assistant, providing and calculating data, watching for the unexpected, and keeping track of resources and their rate of expenditure.

  20. Ecological footprint model using the support vector machine technique.

    PubMed

    Ma, Haibo; Chang, Wenjuan; Cui, Guangbai

    2012-01-01

    The per capita ecological footprint (EF) is one of the most widely recognized measures of environmental sustainability. It aims to quantify the Earth's biological resources required to support human activity. In this paper, we summarize relevant previous literature and present five factors that influence per capita EF. These factors are: national gross domestic product (GDP), urbanization (independent of economic development), distribution of income (measured by the Gini coefficient), export dependence (measured by the percentage of exports in total GDP), and service intensity (measured by the percentage of services in total GDP). A new ecological footprint model based on a support vector machine (SVM), a machine-learning method based on the structural risk minimization principle from statistical learning theory, was developed to calculate the per capita EF of 24 nations using data from 123 nations. The calculation accuracy was measured by average absolute error and average relative error, which were 0.004883 and 0.351078%, respectively. Our results demonstrate that the EF model based on SVM has good calculation performance.
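The two accuracy metrics quoted above can be reproduced as follows; the EF values used here are invented placeholders, not the study's data:

```python
def error_metrics(actual, predicted):
    """Average absolute error and average relative error (as a fraction)."""
    n = len(actual)
    abs_err = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    rel_err = sum(abs(a - p) / a for a, p in zip(actual, predicted)) / n
    return abs_err, rel_err

# Hypothetical per capita EF values (global hectares): actual vs. model output.
actual = [2.0, 4.0, 1.0, 5.0]
predicted = [2.1, 3.9, 1.05, 4.9]
aae, are = error_metrics(actual, predicted)
```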

  1. Color Compatibility of Gingival Shade Guides and Gingiva-Colored Dental Materials with Healthy Human Gingiva.

    PubMed

    Sarmast, Nima D; Angelov, Nikola; Ghinea, Razvan; Powers, John M; Paravina, Rade D

    The CIELab and CIEDE2000 coverage errors (ΔE*COV and ΔE'COV, respectively) of basic shades of different gingival shade guides and gingiva-colored restorative dental materials (n = 5) were calculated as compared to a previously compiled database on healthy human gingiva. Data were analyzed using analysis of variance with the Tukey-Kramer multiple-comparison test (P < .05). A 50:50% acceptability threshold of 4.6 for ΔE* and 4.1 for ΔE' was used to interpret the results. ΔE*COV/ΔE'COV ranged from 4.4/3.5 to 8.6/6.9. The majority of gingival shade guides and gingiva-colored restorative materials exhibited statistically significant coverage errors above the 50:50% acceptability threshold and uneven shade distribution.
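Coverage error can be illustrated as the mean, over target colors, of the color difference to the nearest shade tab. The sketch below uses the simple CIELab ΔE*ab (Euclidean) metric only (CIEDE2000 is a far more elaborate formula), and the L*a*b* values are invented for illustration:

```python
def delta_e_ab(lab1, lab2):
    """CIELab color difference ΔE*ab: Euclidean distance in L*a*b* space."""
    return sum((x - y) ** 2 for x, y in zip(lab1, lab2)) ** 0.5

def coverage_error(targets, shades):
    """Mean, over target colors, of the ΔE*ab to the closest shade tab."""
    return sum(min(delta_e_ab(t, s) for s in shades) for t in targets) / len(targets)

# Hypothetical L*a*b* values for gingiva samples and a two-tab shade guide.
gingiva = [(45.0, 25.0, 15.0), (50.0, 20.0, 12.0)]
guide = [(44.0, 24.0, 15.0), (52.0, 21.0, 13.0)]
err = coverage_error(gingiva, guide)
```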

  2. A Method of Calculating Motion Error in a Linear Motion Bearing Stage

    PubMed Central

    Khim, Gyungho; Park, Chun Hong; Oh, Jeong Seok

    2015-01-01

    We report a method of calculating the motion error of a linear motion bearing stage. The transfer function method, which exploits reaction forces of individual bearings, is effective for estimating motion errors; however, it requires the rail-form errors. This is not suitable for a linear motion bearing stage because obtaining the rail-form errors is not straightforward. In the method described here, we use the straightness errors of a bearing block to calculate the reaction forces on the bearing block. The reaction forces were compared with those of the transfer function method. Parallelism errors between two rails were considered, and the motion errors of the linear motion bearing stage were measured and compared with the results of the calculations, revealing good agreement. PMID:25705715

  3. Do calculation errors by nurses cause medication errors in clinical practice? A literature review.

    PubMed

    Wright, Kerri

    2010-01-01

    This review aims to examine the available literature to ascertain whether medication errors in clinical practice are the result of nurses miscalculating drug dosages. The research studies highlighting the poor calculation skills of nurses and student nurses have tested these skills using written drug calculation tests in formal classroom settings [Kapborg, I., 1994. Calculation and administration of drug dosage by Swedish nurses, student nurses and physicians. International Journal for Quality in Health Care 6(4), 389-395; Hutton, M., 1998. Nursing mathematics: the importance of application. Nursing Standard 13(11), 35-38; Weeks, K., Lynne, P., Torrance, C., 2000. Written drug dosage errors made by students: the threat to clinical effectiveness and the need for a new approach. Clinical Effectiveness in Nursing 4, 20-29; Wright, K., 2004. Investigation to find strategies to improve student nurses' maths skills. British Journal of Nursing 13(21), 1280-1287; Wright, K., 2005. An exploration into the most effective way to teach drug calculation skills to nursing students. Nurse Education Today 25, 430-436], but there have been no reviews of the literature on medication errors in practice that specifically examine whether medication errors are caused by nurses' poor calculation skills. The databases Medline, CINAHL, British Nursing Index (BNI), Journal of the American Medical Association (JAMA) and Archives, and Cochrane reviews were searched for research studies or systematic reviews which reported on the incidence or causes of drug errors in clinical practice. In total, 33 articles met the criteria for this review. There were no studies that examined nurses' drug calculation errors in practice. As a result, studies and systematic reviews that investigated the types and causes of drug errors were examined to establish whether miscalculations by nurses were the cause of errors. The review found insufficient evidence to suggest that medication errors are caused by nurses' poor calculation skills. Of the 33 studies reviewed, only five articles specifically recorded information relating to calculation errors, and only two of these detected errors using a direct observational approach. The literature suggests that there are other, more pressing aspects of nurses' preparation and administration of medications which contribute to medication errors in practice and require more urgent attention, calling into question the current focus on the calculation and numeracy skills of pre-registration and qualified nurses (NMC 2008). However, more research is required into calculation errors in practice. In particular, there is a need for a direct observational study of paediatric nurses, as there are presently none examining this area of practice.

  4. Investigating the Causes of Medication Errors and Strategies to Prevention of Them from Nurses and Nursing Student Viewpoint

    PubMed Central

    Gorgich, Enam Alhagh Charkhat; Barfroshan, Sanam; Ghoreishi, Gholamreza; Yaghoobi, Maryam

    2016-01-01

    Introduction and Aim: Medication errors are a serious problem worldwide and among the most common medical errors; they threaten patient safety and may even lead to patients' death. The purpose of this study was to investigate the causes of medication errors and strategies for their prevention from the viewpoint of nurses and nursing students. Materials & Methods: This cross-sectional descriptive study was conducted on 327 nursing staff of Khatam-al-Anbia hospital and 62 intern nursing students of the nursing and midwifery school of Zahedan, Iran, enrolled through availability sampling in 2015. The data were collected with a valid and reliable questionnaire. To analyze the data, descriptive statistics, the t-test and ANOVA were applied using SPSS 16 software. Findings: The results showed that the most common cause of medication errors among nurses was tiredness due to increased workload (97.8%), and among nursing students it was drug calculation (77.4%). The most important prevention strategy, in the opinion of nurses and nursing students, was reducing work pressure by increasing personnel in proportion to the number and condition of patients, and also creating a dedicated medication-calculation unit. There was also a significant relationship between the type of ward and the mean number of medication errors in the two groups. Conclusion: Based on the results, it is recommended that nurse managers resolve the human-resources problem and provide workshops and in-service education about preparing medications, side effects of drugs and pharmacological knowledge. Using electronic medication cards is a measure which reduces medication errors. PMID:27045413

  5. Sensitivity to prediction error in reach adaptation

    PubMed Central

    Haith, Adrian M.; Harran, Michelle D.; Shadmehr, Reza

    2012-01-01

    It has been proposed that the brain predicts the sensory consequences of a movement and compares it to the actual sensory feedback. When the two differ, an error signal is formed, driving adaptation. How does an error in one trial alter performance in the subsequent trial? Here we show that the sensitivity to error is not constant but declines as a function of error magnitude. That is, one learns relatively less from large errors compared with small errors. We performed an experiment in which humans made reaching movements and randomly experienced an error in both their visual and proprioceptive feedback. Proprioceptive errors were created with force fields, and visual errors were formed by perturbing the cursor trajectory to create a visual error that was smaller, the same size, or larger than the proprioceptive error. We measured single-trial adaptation and calculated sensitivity to error, i.e., the ratio of the trial-to-trial change in motor commands to error size. We found that for both sensory modalities sensitivity decreased with increasing error size. A reanalysis of a number of previously published psychophysical results also exhibited this feature. Finally, we asked how the brain might encode sensitivity to error. We reanalyzed previously published probabilities of cerebellar complex spikes (CSs) and found that this probability declined with increasing error size. From this we posit that a CS may be representative of the sensitivity to error, and not error itself, a hypothesis that may explain conflicting reports about CSs and their relationship to error. PMID:22773782

  6. A /31,15/ Reed-Solomon Code for large memory systems

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1979-01-01

    This paper describes the encoding and the decoding of a (31,15) Reed-Solomon Code for multiple-burst error correction for large memory systems. The decoding procedure consists of four steps: (1) syndrome calculation, (2) error-location polynomial calculation, (3) error-location numbers calculation, and (4) error values calculation. The principal features of the design are the use of a hardware shift register for both high-speed encoding and syndrome calculation, and the use of a commercially available (31,15) decoder for decoding Steps 2, 3 and 4.
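Step (1), syndrome calculation, can be sketched in software for the (31,15) code over GF(2^5); the paper's design uses a hardware shift register, so the field tables, generator construction, and example message below are purely illustrative:

```python
# GF(2^5) arithmetic with primitive polynomial x^5 + x^2 + 1.
EXP, LOG = [0] * 62, [0] * 32
x = 1
for i in range(31):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0b100000:
        x ^= 0b100101
for i in range(31, 62):          # doubled table avoids a modulo in gmul
    EXP[i] = EXP[i - 31]

def gmul(a, b):
    """Multiply two GF(2^5) elements via log/antilog tables."""
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def poly_eval(poly, a):
    """Horner evaluation of a polynomial (highest-degree coefficient first)."""
    y = 0
    for c in poly:
        y = gmul(y, a) ^ c
    return y

def poly_mul(p, q):
    """Polynomial product over GF(2^5), highest-degree coefficient first."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= gmul(a, b)
    return r

def syndromes(received, n_parity):
    """Decoding step (1): S_i = r(alpha^i) for i = 1..n_parity."""
    return [poly_eval(received, EXP[i]) for i in range(1, n_parity + 1)]

# Generator g(x) = product of (x + alpha^i), i = 1..16 ('+' == '-' in GF(2^m));
# the (31,15) code has 16 parity symbols.
g = [1]
for i in range(1, 17):
    g = poly_mul(g, [1, EXP[i]])

msg = [3, 7, 0, 21, 9] + [0] * 10   # 15 arbitrary message symbols
codeword = poly_mul(msg, g)          # any multiple of g(x) is a valid codeword

assert all(s == 0 for s in syndromes(codeword, 16))   # clean word: all zero
corrupted = list(codeword)
corrupted[4] ^= 5                                     # single symbol error
assert any(s != 0 for s in syndromes(corrupted, 16))  # error detected
```

Steps (2)-(4) (error-location polynomial, error-location numbers, error values) would follow from these syndromes, as in the decoder described above.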

  7. Lack of Precision of Burn Surface Area Calculation by UK Armed Forces Medical Personnel

    DTIC Science & Technology

    2014-03-01

    computer screen or tablet, and therefore the variability in perception and representation inherent in having a human assess and draw the burn remains... Potential solutions to this source of error include 3D MRI and TeraHertz scanning technologies [40], but at the time of writing, these are not yet

  8. A new method for the assessment of patient safety competencies during a medical school clerkship using an objective structured clinical examination

    PubMed Central

    Daud-Gallotti, Renata Mahfuz; Morinaga, Christian Valle; Arlindo-Rodrigues, Marcelo; Velasco, Irineu Tadeu; Arruda Martins, Milton; Tiberio, Iolanda Calvo

    2011-01-01

    INTRODUCTION: Patient safety is seldom assessed using objective evaluations during undergraduate medical education. OBJECTIVE: To evaluate the performance of fifth-year medical students using an objective structured clinical examination focused on patient safety after implementation of an interactive program based on adverse events recognition and disclosure. METHODS: In 2007, a patient safety program was implemented in the internal medicine clerkship of our hospital. The program focused on human error theory, epidemiology of incidents, adverse events, and disclosure. Upon completion of the program, students completed an objective structured clinical examination with five stations and standardized patients. One station focused on patient safety issues, including medical error recognition/disclosure, the patient-physician relationship and humanism issues. A standardized checklist was completed by each standardized patient to assess the performance of each student. The student's global performance at each station and performance in the domains of medical error, the patient-physician relationship and humanism were determined. The correlations between the student performances in these three domains were calculated. RESULTS: A total of 95 students participated in the objective structured clinical examination. The mean global score at the patient safety station was 87.59±1.24 points. Students' performance in the medical error domain was significantly lower than their performance on patient-physician relationship and humanistic issues. Less than 60% of students (n = 54) offered the simulated patient an apology after a medical error occurred. A significant correlation was found between scores obtained in the medical error domains and scores related to both the patient-physician relationship and humanistic domains. CONCLUSIONS: An objective structured clinical examination is a useful tool to evaluate patient safety competencies during the medical student clerkship. 
PMID:21876976

  9. Corrigendum to “Thermophysical properties of U3Si2 to 1773 K”

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Joshua Taylor; Nelson, Andrew Thomas; Dunwoody, John Tyler

    2016-12-01

    An error was discovered by the authors in the calculation of thermal diffusivity in “Thermophysical properties of U3Si2 to 1773 K”. The error was caused by an operator mistake in the entry of parameters used to fit the temperature-rise-versus-time model necessary to calculate the thermal diffusivity. This error propagated to the calculation of thermal conductivity, leading to values that were 18%–28% larger, along with the corresponding calculated Lorenz values.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katsuta, Y; Tohoku University Graduate School of Medicine, Sendai, Miyagi; Kadoya, N

    Purpose: In this study, we developed a system to calculate a three-dimensional (3D) dose distribution that reflects the dosimetric error caused by leaf miscalibration for head-and-neck and prostate volumetric modulated arc therapy (VMAT), in real time and without an additional treatment planning system (TPS) calculation. Methods: An original system, based on Clarkson dose calculation, for computing the dosimetric error caused by leaf miscalibration was developed in MATLAB (MathWorks, Natick, MA). Our program first calculates point doses at the isocenter, using Clarkson dose calculation, for the baseline VMAT plan and for a modified plan generated by inducing MLC errors that enlarge the aperture size by 1.0 mm. Second, the error-induced 3D dose was generated by transforming the TPS baseline 3D dose using the calculated point doses. Results: Mean computing time was less than 5 seconds. For seven head-and-neck and prostate plans, between our method and the TPS-calculated error-induced 3D dose, the 3D gamma passing rates (0.5%/2 mm, global) were 97.6±0.6% and 98.0±0.4%. The dose percentage changes for the dose-volume histogram parameter of mean dose on the target volume were 0.1±0.5% and 0.4±0.3%, and for the generalized equivalent uniform dose on the target volume were −0.2±0.5% and 0.2±0.3%. Conclusion: The erroneous 3D dose calculated by our method is useful for checking the dosimetric error caused by leaf miscalibration before pre-treatment patient QA dosimetry checks.

  11. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: Dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, Sparse tensorization methods[2] utilizing node-nested hierarchies, Sampling methods[4] for high-dimensional random variable spaces.

  12. Prospect theory does not describe the feedback-related negativity value function.

    PubMed

    Sambrook, Thomas D; Roser, Matthew; Goslin, Jeremy

    2012-12-01

    Humans handle uncertainty poorly. Prospect theory accounts for this with a value function in which possible losses are overweighted compared to possible gains, and the marginal utility of rewards decreases with size. fMRI studies have explored the neural basis of this value function. A separate body of research claims that prediction errors are calculated by midbrain dopamine neurons. We investigated whether the prospect theoretic effects shown in behavioral and fMRI studies were present in midbrain prediction error coding by using the feedback-related negativity, an ERP component believed to reflect midbrain prediction errors. Participants' stated satisfaction with outcomes followed prospect theory but their feedback-related negativity did not, instead showing no effect of marginal utility and greater sensitivity to potential gains than losses. Copyright © 2012 Society for Psychophysiological Research.
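The prospect-theory value function referred to above can be written as v(x) = x^α for gains and v(x) = -λ(-x)^α for losses. The sketch below uses Tversky and Kahneman's published median parameter estimates (α ≈ 0.88, λ ≈ 2.25):

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains, convex and
    steeper for losses (loss aversion via the multiplier lam)."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

# Losses are weighted more heavily than equal-sized gains...
assert abs(prospect_value(-10)) > prospect_value(10)
# ...and marginal utility diminishes: doubling a gain less than doubles its value.
assert prospect_value(20) < 2 * prospect_value(10)
```

The study's point is that the feedback-related negativity did not follow this shape, showing no diminishing marginal utility and greater sensitivity to gains than losses.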

  13. Human Error: A Concept Analysis

    NASA Technical Reports Server (NTRS)

    Hansen, Frederick D.

    2007-01-01

    Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition or how to prevent human error. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed and a definition of human error is offered.

  14. Dipole estimation errors due to not incorporating anisotropic conductivities in realistic head models for EEG source analysis

    NASA Astrophysics Data System (ADS)

    Hallez, Hans; Staelens, Steven; Lemahieu, Ignace

    2009-10-01

    EEG source analysis is a valuable tool for brain functionality research and for diagnosing neurological disorders, such as epilepsy. It requires a geometrical representation of the human head or a head model, which is often modeled as an isotropic conductor. However, it is known that some brain tissues, such as the skull or white matter, have an anisotropic conductivity. Many studies reported that the anisotropic conductivities have an influence on the calculated electrode potentials. However, few studies have assessed the influence of anisotropic conductivities on the dipole estimations. In this study, we want to determine the dipole estimation errors due to not taking into account the anisotropic conductivities of the skull and/or brain tissues. Therefore, head models are constructed with the same geometry, but with an anisotropically conducting skull and/or brain tissue compartment. These head models are used in simulation studies where the dipole location and orientation error is calculated due to neglecting anisotropic conductivities of the skull and brain tissue. Results show that not taking into account the anisotropic conductivities of the skull yields a dipole location error between 2 and 25 mm, with an average of 10 mm. When the anisotropic conductivities of the brain tissues are neglected, the dipole location error ranges between 0 and 5 mm. In this case, the average dipole location error was 2.3 mm. In all simulations, the dipole orientation error was smaller than 10°. We can conclude that the anisotropic conductivities of the skull have to be incorporated to improve the accuracy of EEG source analysis. The results of the simulation, as presented here, also suggest that incorporation of the anisotropic conductivities of brain tissues is not necessary. However, more studies are needed to confirm these suggestions.

  15. Estimation of Human Body Volume (BV) from Anthropometric Measurements Based on Three-Dimensional (3D) Scan Technique.

    PubMed

    Liu, Xingguo; Niu, Jianwei; Ran, Linghua; Liu, Taijie

    2017-08-01

    This study aimed to develop estimation formulae for the total human body volume (BV) of adult males using anthropometric measurements based on a three-dimensional (3D) scanning technique. Noninvasive and reliable methods to predict the total BV from anthropometric measurements based on a 3D scan technique were addressed in detail. A regression analysis of BV based on four key measurements was conducted for approximately 160 adult male subjects. Eight models of total human BV show that the predicted results fitted by the regression models were highly correlated with the actual BV (p < 0.001). Two metrics were calculated: the mean absolute difference between the actual and predicted BV (V_error) and the mean ratio between V_error and the actual BV (RV_error). The linear model based on human weight was recommended as optimal due to its simplicity and high efficiency. The proposed estimation formulae are valuable for estimating total body volume in circumstances in which traditional underwater weighing or air displacement plethysmography is not applicable or accessible. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.
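A linear BV-from-weight model of the kind recommended above can be fitted with ordinary least squares; the (weight, volume) pairs below are fabricated for illustration and are not the study's sample:

```python
def fit_line(x, y):
    """Least-squares fit of y = a*x + b via the one-dimensional normal equations."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, y))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Hypothetical (weight in kg, body volume in L) pairs; the study's actual
# coefficients come from its 3D-scan sample of ~160 adult males.
weights = [60.0, 70.0, 80.0, 90.0]
volumes = [57.0, 66.5, 76.0, 85.5]
a, b = fit_line(weights, volumes)
```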

  16. Problems in evaluating radiation dose via terrestrial and aquatic pathways.

    PubMed Central

    Vaughan, B E; Soldat, J K; Schreckhise, R G; Watson, E C; McKenzie, D H

    1981-01-01

    This review is concerned with exposure risk and the environmental pathways models used for predictive assessment of radiation dose. Exposure factors, the adequacy of available data, and the model subcomponents are critically reviewed from the standpoint of absolute error propagation. Although the models are inherently capable of better absolute accuracy, a calculated dose is usually overestimated by from two to six orders of magnitude, in practice. The principal reason for so large an error lies in using "generic" concentration ratios in situations where site specific data are needed. Major opinion of the model makers suggests a number midway between these extremes, with only a small likelihood of ever underestimating the radiation dose. Detailed evaluations are made of source considerations influencing dose (i.e., physical and chemical status of released material); dispersal mechanisms (atmospheric, hydrologic and biotic vector transport); mobilization and uptake mechanisms (i.e., chemical and other factors affecting the biological availability of radioelements); and critical pathways. Examples are shown of confounding in food-chain pathways, due to uncritical application of concentration ratios. Current thoughts of replacing the critical pathways approach to calculating dose with comprehensive model calculations are also shown to be ill-advised, given present limitations in the comprehensive data base. The pathways models may also require improved parametrization, as they are not at present structured adequately to lend themselves to validation. The extremely wide errors associated with predicting exposure stand in striking contrast to the error range associated with the extrapolation of animal effects data to the human being. PMID:7037381

  17. Comparing biomarker measurements to a normal range: when ...

    EPA Pesticide Factsheets

    This commentary is the second of a series outlining one specific concept in interpreting biomarkers data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around the next step, the choice of using standard error of the mean or the calculated standard deviation to compare or predict measurement results. The National Exposure Research Laboratory’s (NERL’s) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of EPA’s mission to protect human health and the environment. HEASD’s research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of EPA’s strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between and characterize processes that link source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.
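The choice discussed above can be made concrete: the standard deviation describes the spread of individual measurements, while the standard error of the mean describes the uncertainty of the mean itself. A minimal sketch with made-up biomarker values:

```python
import math

def sd_and_sem(values):
    """Sample standard deviation (spread of individual measurements) and
    standard error of the mean (uncertainty of the mean itself)."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    sem = sd / math.sqrt(n)
    return sd, sem

# Hypothetical biomarker measurements: compare a single new measurement
# against mean +/- 2*SD, but compare two group means using the SEM.
sd, sem = sd_and_sem([4.1, 3.9, 4.5, 4.0, 4.3, 3.8, 4.2, 4.4])
```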

  18. Error estimates for ice discharge calculated using the flux gate approach

    NASA Astrophysics Data System (ADS)

    Navarro, F. J.; Sánchez Gámez, P.

    2017-12-01

    Ice discharge to the ocean is usually estimated using the flux gate approach, in which ice flux is calculated through predefined flux gates close to the marine glacier front. However, published results usually lack a proper error estimate. In the flux calculation, both errors in cross-sectional area and errors in velocity are relevant. While for estimating the errors in velocity there are well-established procedures, the calculation of the error in the cross-sectional area requires the availability of ground penetrating radar (GPR) profiles transverse to the ice-flow direction. In this contribution, we use IceBridge operation GPR profiles collected in Ellesmere and Devon Islands, Nunavut, Canada, to compare the cross-sectional areas estimated using various approaches with the cross-sections estimated from GPR ice-thickness data. These error estimates are combined with those for ice-velocities calculated from Sentinel-1 SAR data, to get the error in ice discharge. Our preliminary results suggest, regarding area, that the parabolic cross-section approaches perform better than the quartic ones, which tend to overestimate the cross-sectional area for flight lines close to the central flowline. Furthermore, the results show that regional ice-discharge estimates made using parabolic approaches provide reasonable results, but estimates for individual glaciers can have large errors, up to 20% in cross-sectional area.
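A minimal sketch of the flux-gate calculation with a parabolic cross-section and quadrature error propagation; the gate geometry, velocity, and error percentages below are hypothetical, not values from the study:

```python
import math

def parabolic_area(width, center_depth):
    """Cross-sectional area of a parabolic valley profile: A = (2/3) * W * H."""
    return 2.0 / 3.0 * width * center_depth

def discharge_error(Q, rel_err_area, rel_err_velocity):
    """For Q = v * A, independent relative errors combine in quadrature."""
    return Q * math.sqrt(rel_err_area ** 2 + rel_err_velocity ** 2)

# Hypothetical gate: 5 km wide, 400 m deep at the centerline, 150 m/a ice speed.
A = parabolic_area(5000.0, 400.0)        # m^2
Q = 150.0 * A                            # m^3/a
dQ = discharge_error(Q, 0.20, 0.05)      # a 20% area error dominates the total
```

With a 20% cross-sectional-area error (the upper bound reported for individual glaciers) and a 5% velocity error, the discharge error is about 21% of Q, showing why the area term dominates.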

  19. Accuracy and Efficiency of Recording Pediatric Early Warning Scores Using an Electronic Physiological Surveillance System Compared With Traditional Paper-Based Documentation.

    PubMed

    Sefton, Gerri; Lane, Steven; Killen, Roger; Black, Stuart; Lyon, Max; Ampah, Pearl; Sproule, Cathryn; Loren-Gosling, Dominic; Richards, Caitlin; Spinty, Jean; Holloway, Colette; Davies, Coral; Wilson, April; Chean, Chung Shen; Carter, Bernie; Carrol, E D

    2017-05-01

    Pediatric Early Warning Scores are advocated to assist health professionals to identify early signs of serious illness or deterioration in hospitalized children. Scores are derived from the weighting applied to recorded vital signs and clinical observations reflecting deviation from a predetermined "norm." Higher aggregate scores trigger an escalation in care aimed at preventing critical deterioration. Process errors made while recording these data, including plotting or calculation errors, have the potential to impede the reliability of the score. To test this hypothesis, we conducted a controlled study of documentation using five clinical vignettes. We measured the accuracy of vital sign recording, score calculation, and time taken to complete documentation using a handheld electronic physiological surveillance system, VitalPAC Pediatric, compared with traditional paper-based charts. We explored the user acceptability of both methods using a Web-based survey. Twenty-three staff participated in the controlled study. The electronic physiological surveillance system improved the accuracy of vital sign recording, 98.5% versus 85.6%, P < .02, Pediatric Early Warning Score calculation, 94.6% versus 55.7%, P < .02, and saved time, 68 versus 98 seconds, compared with paper-based documentation, P < .002. Twenty-nine staff completed the Web-based survey. They perceived that the electronic physiological surveillance system offered safety benefits by reducing human error while providing instant visibility of recorded data to the entire clinical team.
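The aggregation that the electronic system automates can be sketched as follows; the scoring bands here are hypothetical stand-ins, not a validated PEWS chart:

```python
def pews_heart_rate_subscore(hr, age_band="1-4y"):
    """Hypothetical heart-rate scoring bands for one age group; real PEWS
    charts define validated bands per age group and per vital sign."""
    bands = {"1-4y": [(90, 120, 0), (70, 150, 1), (60, 170, 2)]}
    for low, high, score in bands[age_band]:
        if low <= hr <= high:
            return score
    return 3

def pews_total(subscores):
    """Aggregate score: the sum of the per-observation subscores."""
    return sum(subscores)

# A plotting or arithmetic slip in any subscore changes the aggregate,
# which is what triggers (or fails to trigger) escalation of care.
total = pews_total([pews_heart_rate_subscore(160), 1, 0, 2])
```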

  20. An error analysis perspective for patient alignment systems.

    PubMed

    Figl, Michael; Kaar, Marcus; Hoffman, Rainer; Kratochwil, Alfred; Hummel, Johann

    2013-09-01

    This paper analyses the effects of error sources which can be found in patient alignment systems. As an example, an ultrasound (US) repositioning system and its transformation chain are assessed. The findings of this concept can also be applied to any navigation system. In a first step, all error sources were identified and where applicable, corresponding target registration errors were computed. By applying error propagation calculations on these commonly used registration/calibration and tracking errors, we were able to analyse the components of the overall error. Furthermore, we defined a special situation where the whole registration chain reduces to the error caused by the tracking system. Additionally, we used a phantom to evaluate the errors arising from the image-to-image registration procedure, depending on the image metric used. We have also discussed how this analysis can be applied to other positioning systems such as Cone Beam CT-based systems or Brainlab's ExacTrac. The estimates found by our error propagation analysis are in good agreement with the numbers found in the phantom study but significantly smaller than results from patient evaluations. We probably underestimated human influences such as the US scan head positioning by the operator and tissue deformation. Rotational errors of the tracking system can multiply these errors, depending on the relative position of tracker and probe. We were able to analyse the components of the overall error of a typical patient positioning system. We consider this to be a contribution to the optimization of the positioning accuracy for computer guidance systems.
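The core of such an error-propagation analysis, combining independent error sources along a registration/tracking chain in quadrature, can be sketched as below; the component magnitudes are hypothetical:

```python
import math

def chain_error(component_errors_mm):
    """Combine independent error sources along a registration/tracking chain
    in quadrature (root-sum-square), per standard error propagation."""
    return math.sqrt(sum(e ** 2 for e in component_errors_mm))

# Hypothetical components of a patient-alignment chain (mm):
# calibration, image-to-image registration, tracking.
overall = chain_error([0.5, 1.2, 0.8])

# A rotational tracking error grows with the lever arm between tracker and
# probe: a 0.2 degree error acting over 300 mm adds about 1 mm.
rotational = math.radians(0.2) * 300.0
```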

  1. Lessons learned: wrong intraocular lens.

    PubMed

    Schein, Oliver D; Banta, James T; Chen, Teresa C; Pritzker, Scott; Schachat, Andrew P

    2012-10-01

    To report cases involving the placement of the wrong intraocular lens (IOL) at the time of cataract surgery where human error occurred. Retrospective small case series, convenience sample. Seven surgical cases. Institutional review of errors committed and subsequent improvements to clinical protocols. Lessons learned and changes in procedures adapted. The pathways to a wrong IOL are many but largely reflect some combination of poor surgical team communication, transcription error, lack of preoperative clarity in surgical planning or failure to match the patient, and IOL calculation sheet with 2 unique identifiers. Safety in surgery involving IOLs is enhanced both by strict procedures, such as an IOL-specific "time-out," and the fostering of a surgical team culture in which all members are encouraged to voice questions and concerns. Copyright © 2012 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  2. ANIE: A mathematical algorithm for automated indexing of planar deformation features in quartz grains

    NASA Astrophysics Data System (ADS)

    Huber, Matthew S.; Ferrière, Ludovic; Losiak, Anna; Koeberl, Christian

    2011-09-01

    Planar deformation features (PDFs) in quartz, one of the most commonly used diagnostic indicators of shock metamorphism, are planes of amorphous material that follow crystallographic orientations, and can thus be distinguished from non-shock-induced fractures in quartz. The process of indexing data for PDFs from universal-stage measurements has traditionally been performed using a manual graphical method, a time-consuming process in which errors can easily be introduced. A mathematical method and computer algorithm, which we call the Automated Numerical Index Executor (ANIE) program for indexing PDFs, was produced, and is presented here. The ANIE program is more accurate and faster than the manual graphical determination of Miller-Bravais indices, as it allows control of the exact error used in the calculation and removal of human error from the process.

  3. Comparison of Methodologies Using Estimated or Measured Values of Total Corneal Astigmatism for Toric Intraocular Lens Power Calculation.

    PubMed

    Ferreira, Tiago B; Ribeiro, Paulo; Ribeiro, Filomena J; O'Neill, João G

    2017-12-01

    To compare the prediction error in the calculation of toric intraocular lenses (IOLs) associated with methods that estimate the power of the posterior corneal surface (ie, Barrett toric calculator and Abulafia-Koch formula) with that of methods that consider real measures obtained using Scheimpflug imaging: a software that uses vectorial calculation (Panacea toric calculator: http://www.panaceaiolandtoriccalculator.com) and a ray tracing software (PhacoOptics, Aarhus Nord, Denmark). In 107 eyes of 107 patients undergoing cataract surgery with toric IOL implantation (Acrysof IQ Toric; Alcon Laboratories, Inc., Fort Worth, TX), predicted residual astigmatism by each calculation method was compared with manifest refractive astigmatism. Prediction error in residual astigmatism was calculated using vector analysis. All calculation methods resulted in overcorrection of with-the-rule astigmatism and undercorrection of against-the-rule astigmatism. Both estimation methods resulted in lower mean and centroid astigmatic prediction errors, and a larger number of eyes within 0.50 diopters (D) of absolute prediction error than methods considering real measures (P < .001). Centroid prediction error (CPE) was 0.07 D at 172° for the Barrett toric calculator and 0.13 D at 174° for the Abulafia-Koch formula (combined with Holladay calculator). For methods using real posterior corneal surface measurements, CPE was 0.25 D at 173° for the Panacea calculator and 0.29 D at 171° for the ray tracing software. The Barrett toric calculator and Abulafia-Koch formula yielded the lowest astigmatic prediction errors. Directly evaluating total corneal power for toric IOL calculation was not superior to estimating it. [J Refract Surg. 2017;33(12):794-800.]. Copyright 2017, SLACK Incorporated.
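    The vector analysis referred to above is commonly done in a double-angle representation, in which a cylinder of magnitude M at axis θ maps to the point (M·cos 2θ, M·sin 2θ). A minimal sketch of a centroid-prediction-error computation along these lines (an illustration of the general technique, not the authors' actual code):

    ```python
    import math

    def to_vector(magnitude, axis_deg):
        """Convert cylinder magnitude/axis to double-angle Cartesian components."""
        theta = math.radians(2 * axis_deg)
        return (magnitude * math.cos(theta), magnitude * math.sin(theta))

    def centroid_error(predicted, measured):
        """Mean vector difference between predicted and measured astigmatism.
        Each argument is a list of (magnitude in D, axis in degrees) pairs."""
        dx = dy = 0.0
        for (pm, pa), (mm, ma) in zip(predicted, measured):
            px, py = to_vector(pm, pa)
            mx, my = to_vector(mm, ma)
            dx += px - mx
            dy += py - my
        n = len(predicted)
        dx, dy = dx / n, dy / n
        magnitude = math.hypot(dx, dy)
        axis = (math.degrees(math.atan2(dy, dx)) / 2) % 180
        return magnitude, axis
    ```

    Averaging the per-eye error vectors and converting the mean back to magnitude/axis yields a centroid error directly comparable to the values reported above.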

  4. Common errors of drug administration in infants: causes and avoidance.

    PubMed

    Anderson, B J; Ellis, J F

    1999-01-01

    Drug administration errors are common in infants. Although the infant population has a high exposure to drugs, there are few data concerning pharmacokinetics or pharmacodynamics, or the influence of paediatric diseases on these processes. Children remain therapeutic orphans. Formulations are often suitable only for adults; in addition, the lack of maturation of drug elimination processes, alteration of body composition and influence of size render the calculation of drug doses complex in infants. The commonest drug administration error in infants is one of dose, and the commonest hospital site for this error is the intensive care unit. Drug errors are a consequence of system error, and preventive strategies are possible through system analysis. The goal of a zero drug error rate should be aggressively sought, with systems in place that aim to eliminate the effects of inevitable human error. This involves review of the entire system from drug manufacture to drug administration. The nuclear industry, telecommunications and air traffic control services all practise error reduction policies with zero error as a clear goal, not by finding fault in the individual, but by identifying faults in the system and building into that system mechanisms for picking up faults before they occur. Such policies could be adapted to medicine using interventions both specific (the production of formulations which are for children only and clearly labelled, regular audit by pharmacists, legible prescriptions, standardised dose tables) and general (paediatric drug trials, education programmes, nonpunitive error reporting) to reduce the number of errors made in giving medication to infants.
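    As an illustration of why body size makes infant dosing non-trivial, one published approach is allometric 3/4-power scaling rather than naive per-kilogram linear scaling. The sketch below is illustrative only and is not clinical guidance; the 70 kg reference weight and the exponent are assumptions of the allometric model, not values from this abstract:

    ```python
    def allometric_dose(adult_dose_mg, weight_kg, adult_weight_kg=70.0):
        """Scale an adult dose by (weight / adult weight) ** 0.75,
        the 3/4-power allometric model (illustrative, not clinical guidance)."""
        return adult_dose_mg * (weight_kg / adult_weight_kg) ** 0.75

    def linear_dose(adult_dose_mg, weight_kg, adult_weight_kg=70.0):
        """Naive per-kilogram linear scaling, for comparison."""
        return adult_dose_mg * weight_kg / adult_weight_kg
    ```

    For a small child the two models diverge substantially, which is one reason simple per-kilogram arithmetic can mislead.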

  5. Simplified mathematics for customized refractive surgery.

    PubMed

    Preussner, Paul Rolf; Wahl, Jochen

    2003-03-01

    To describe a simple mathematical approach to customized corneal refractive surgery or customized intraocular lens (IOL) design that allows "hypervision" and to investigate the accuracy limits. University eye hospital, Mainz, Germany. Corneal shape and at least 1 IOL surface are approximated by the well-known Cartesian conic section curves (ellipsoid, paraboloid, or hyperboloid). They are characterized by only 2 parameters, the vertex radius and the numerical eccentricity. Residual refraction errors for this approximation are calculated by numerical ray tracing. These errors can be displayed as a 2-dimensional refraction map across the pupil or by blurring the image of a Landolt ring superimposed on the retinal receptor grid, giving an overall impression of the visual outcome. If the eye is made emmetropic for paraxial rays and if the numerical eccentricities of the cornea and lens are appropriately fitted to each other, the residual refractive errors are small enough to allow hypervision. Visual acuity of at least 2.0 (20/10) appears to be possible, particularly for mesopic pupil diameters. However, customized optics may have limited application due to their sensitivity to misalignment errors such as decentrations or rotations. The mathematical approach described by Descartes 350 years ago is adequate to calculate hypervision optics for the human eye. The availability of suitable mathematical tools should, however, not be viewed with too much optimism as long as the accuracy of the implementation in surgical procedures is limited.

  6. Human Error In Complex Systems

    NASA Technical Reports Server (NTRS)

    Morris, Nancy M.; Rouse, William B.

    1991-01-01

    Report presents results of research aimed at understanding causes of human error in such complex systems as aircraft, nuclear powerplants, and chemical processing plants. Research considered both slips (errors of action) and mistakes (errors of intention), and the influence of workload on them. Results indicated that humans respond to conditions in which errors are expected by attempting to reduce the incidence of errors, and that adaptation to conditions is a potent influence on human behavior in discretionary situations.

  7. Undergraduate paramedic students cannot do drug calculations.

    PubMed

    Eastwood, Kathryn; Boyle, Malcolm J; Williams, Brett

    2012-01-01

    Previous investigation of the drug calculation skills of qualified paramedics has highlighted poor mathematical ability, with no published studies having been undertaken on undergraduate paramedics. There are three major error classifications: conceptual errors involve an inability to formulate an equation from the information given, arithmetical errors involve an inability to operate a given equation, and computation errors are simple errors of addition, subtraction, division and multiplication. The objective of this study was to determine if undergraduate paramedics at a large Australian university could accurately perform common drug calculations and basic mathematical equations normally required in the workplace. A cross-sectional study methodology using a paper-based questionnaire was administered to undergraduate paramedic students to collect demographical data, student attitudes regarding their drug calculation performance, and answers to a series of basic mathematical and drug calculation questions. Ethics approval was granted. The mean score of correct answers was 39.5%, with one student scoring 100%, 3.3% of students (n=3) scoring greater than 90%, and 63% (n=58) scoring 50% or less, despite 62% (n=57) of the students stating they 'did not have any drug calculations issues'. On average, those who had completed a minimum of year 12 Specialist Maths achieved scores over 50%. Conceptual errors made up 48.5% of errors, arithmetical errors 31.1% and computational errors 17.4%. This study suggests undergraduate paramedics have deficiencies in performing accurate calculations, with conceptual errors indicating a fundamental lack of mathematical understanding. The results suggest an unacceptable level of mathematical competence to practice safely in the unpredictable prehospital environment.
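    Of the three error classes above, conceptual errors concern formulating the equation itself. A hypothetical example of the kind of calculation being tested (the function names are ours, not the study's):

    ```python
    def dose_for_weight_mg(weight_kg, mg_per_kg):
        """Weight-based dose; getting this equation wrong is a conceptual error."""
        return weight_kg * mg_per_kg

    def volume_to_draw_ml(dose_mg, stock_mg, stock_ml):
        """Volume to administer = desired dose / stock concentration (mg per mL)."""
        concentration = stock_mg / stock_ml
        return dose_mg / concentration
    ```

    Slips in evaluating these expressions by hand would fall into the arithmetical or computational classes.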

  8. Medication errors in anesthesia: unacceptable or unavoidable?

    PubMed

    Dhawan, Ira; Tewari, Anurag; Sehgal, Sankalp; Sinha, Ashish Chandra

    Medication errors are common causes of patient morbidity and mortality, and they add a financial burden to the institution as well. Though the impact varies from no harm to serious adverse effects, including death, the issue needs attention on a priority basis, since medication errors are preventable. In today's world, where people are aware and medical claims are on the rise, it is of utmost priority that we curb this issue. Individual effort to decrease medication errors alone might not succeed until a change in the existing protocols and systems is incorporated. Often, drug errors that occur cannot be reversed; the best way to 'treat' drug errors is to prevent them. Wrong medication (due to syringe swap), overdose (due to misunderstanding or preconception of the dose, pump misuse and dilution error), incorrect administration route, underdosing and omission are common causes of medication error that occur perioperatively. Drug omission and calculation mistakes occur commonly in the ICU. Medication errors can occur perioperatively during preparation, administration or record keeping. Numerous human and system errors can be blamed for the occurrence of medication errors. The need of the hour is to stop the blame game, accept mistakes and develop a safe and 'just' culture in order to prevent medication errors. Newly devised systems such as VEINROM, a fluid delivery system, offer a novel approach to preventing drug errors involving the most commonly used medications in anesthesia. Such developments, along with vigilant doctors, a safe workplace culture and organizational support, can together help prevent these errors. Copyright © 2016. Published by Elsevier Editora Ltda.

  9. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    NASA Technical Reports Server (NTRS)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of Safety within space exploration ground processing operations, the underlying contributors and causes of human error must be identified and/or classified in order to manage human error. This research provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS) as an analysis tool to identify contributing factors, their impact on human error events, and predict the Human Error Probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.

  11. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    NASA Technical Reports Server (NTRS)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of Quality within space exploration ground processing operations, the underlying contributors and causes of human error must be identified and/or classified in order to manage human error. This presentation provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS) as an analysis tool to identify contributing factors, their impact on human error events, and predict the Human Error Probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.

  12. Performance Data Errors in Air Carrier Operations: Causes and Countermeasures

    NASA Technical Reports Server (NTRS)

    Berman, Benjamin A.; Dismukes, R Key; Jobe, Kimberly K.

    2012-01-01

    Several airline accidents have occurred in recent years as the result of erroneous weight or performance data used to calculate V-speeds, flap/trim settings, required runway lengths, and/or required climb gradients. In this report we consider 4 recent studies of performance data error, report our own study of ASRS-reported incidents, and provide countermeasures that can reduce vulnerability to accidents caused by performance data errors. Performance data are generated through a lengthy process involving several employee groups and computer and/or paper-based systems. Although much of the airline industry's concern has focused on errors pilots make in entering FMS data, we determined that errors occur at every stage of the process and that errors by ground personnel are probably at least as frequent and certainly as consequential as errors by pilots. Most of the errors we examined could in principle have been trapped by effective use of existing procedures or technology; however, the fact that they were not trapped anywhere indicates the need for better countermeasures. Existing procedures are often inadequately designed to mesh with the ways humans process information. Because procedures often do not take into account the ways in which information flows in actual flight operations, or the time pressures and interruptions experienced by pilots and ground personnel, vulnerability to error is greater. Some aspects of NextGen operations may exacerbate this vulnerability. We identify measures to reduce the number of errors and to help catch the errors that occur.

  13. Understanding human management of automation errors

    PubMed Central

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  14. Understanding human management of automation errors.

    PubMed

    McBride, Sara E; Rogers, Wendy A; Fisk, Arthur D

    2014-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance.

  15. TU-AB-BRC-03: Accurate Tissue Characterization for Monte Carlo Dose Calculation Using Dual- and Multi-Energy CT Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lalonde, A; Bouchard, H

    Purpose: To develop a general method for human tissue characterization with dual- and multi-energy CT and evaluate its performance in determining elemental compositions and the associated proton stopping power relative to water (SPR) and photon mass absorption coefficients (EAC). Methods: Principal component analysis is used to extract an optimal basis of virtual materials from a reference dataset of tissues. These principal components (PC) are used to perform two-material decomposition using simulated DECT data. The elemental mass fraction and the electron density in each tissue are retrieved by measuring the fraction of each PC. A stoichiometric calibration method is adapted to the technique to make it suitable for clinical use. The present approach is compared with two others: parametrization and three-material decomposition using the water-lipid-protein (WLP) triplet. Results: Monte Carlo simulations using TOPAS for four reference tissues show that characterizing them with only two PC is enough to get submillimetric precision on proton range prediction. Based on the simulated DECT data of 43 reference tissues, the proposed method is in agreement with theoretical values of proton SPR and low-kV EAC with RMS errors of 0.11% and 0.35%, respectively. In comparison, parametrization and WLP respectively yield RMS errors of 0.13% and 0.29% on SPR, and 2.72% and 2.19% on EAC. Furthermore, the proposed approach shows potential applications for spectral CT: using five PC and five energy bins reduces the SPR RMS error to 0.03%. Conclusion: The proposed method shows good performance in determining elemental compositions from DECT data and physical quantities relevant to radiotherapy dose calculation, and generally shows better accuracy and unbiased results compared to reference methods. The proposed method is particularly suitable for Monte Carlo calculations and shows promise in using more than two energies to characterize human tissue with CT.
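    In the two-energy case, the two-material decomposition step reduces to solving a 2x2 linear system for the fractions of the two basis materials. A self-contained sketch under assumed attenuation values (not the paper's data or its full PCA pipeline):

    ```python
    def decompose(mu_low, mu_high, basis):
        """Solve  mu_E = f1 * b1_E + f2 * b2_E  at two energies for the
        fractions (f1, f2) of two basis materials.
        basis = ((b1_low, b1_high), (b2_low, b2_high))."""
        (a, c), (b, d) = basis          # a,b: low-energy row; c,d: high-energy row
        det = a * d - b * c             # determinant of [[a, b], [c, d]]
        f1 = (mu_low * d - b * mu_high) / det
        f2 = (a * mu_high - c * mu_low) / det
        return f1, f2
    ```

    With more energy bins (spectral CT) the same idea becomes an overdetermined least-squares fit over more principal components.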

  16. Human operator response to error-likely situations in complex engineering systems

    NASA Technical Reports Server (NTRS)

    Morris, Nancy M.; Rouse, William B.

    1988-01-01

    The causes of human error in complex systems are examined. First, a conceptual framework is provided in which two broad categories of error are discussed: errors of action, or slips, and errors of intention, or mistakes. Conditions in which slips and mistakes might be expected to occur are identified, based on existing theories of human error. Regarding the role of workload, it is hypothesized that workload may act as a catalyst for error. Two experiments are presented in which humans' responses to error-likely situations were examined. Subjects controlled PLANT under a variety of conditions and periodically provided subjective ratings of mental effort. A complex pattern of results was obtained, which was not consistent with predictions. Generally, the results of this research indicate that: (1) humans respond to conditions in which errors might be expected by attempting to reduce the possibility of error, and (2) adaptation to conditions is a potent influence on human behavior in discretionary situations. Subjects' explanations for changes in effort ratings are also explored.

  17. Application of human reliability analysis to nursing errors in hospitals.

    PubMed

    Inoue, Kayoko; Koizumi, Akio

    2004-12-01

    Adverse events in hospitals, such as in surgery, anesthesia, radiology, intensive care, internal medicine, and pharmacy, are of worldwide concern and it is important, therefore, to learn from such incidents. There are currently no appropriate tools based on state-of-the-art models available for the analysis of large bodies of medical incident reports. In this study, a new model was developed to facilitate medical error analysis in combination with quantitative risk assessment. This model enables detection of the organizational factors that underlie medical errors, and the expedition of decision making in terms of necessary action. Furthermore, it determines medical tasks as module practices and uses a unique coding system to describe incidents. This coding system has seven vectors for error classification: patient category, working shift, module practice, linkage chain (error type, direct threat, and indirect threat), medication, severity, and potential hazard. Such mathematical formulation permitted us to derive two parameters: error rates for module practices and weights for the aforementioned seven elements. The error rate of each module practice was calculated by dividing the annual number of incident reports of each module practice by the annual number of the corresponding module practice. The weight of a given element was calculated by the summation of incident report error rates for an element of interest. This model was applied specifically to nursing practices in six hospitals over a year; 5,339 incident reports with a total of 63,294,144 module practices conducted were analyzed. Quality assurance (QA) of our model was introduced by checking the records of quantities of practices and reproducibility of analysis of medical incident reports. For both items, QA guaranteed the legitimacy of our model. Error rates for all module practices were approximately of the order of 10(-4) in all hospitals. Three major organizational factors were found to underlie medical errors: "violation of rules" with a weight of 826 x 10(-4), "failure of labor management" with a weight of 661 x 10(-4), and "defects in the standardization of nursing practices" with a weight of 495 x 10(-4).
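    The two parameters defined above are simple ratios and sums; a direct transcription (the counts below are hypothetical, not the study's data):

    ```python
    def error_rate(incident_reports, practices_performed):
        """Error rate of a module practice: annual incident reports divided by
        the annual count of that module practice."""
        return incident_reports / practices_performed

    def element_weight(rates_for_element):
        """Weight of a classification element: the sum of the error rates of the
        incident reports tagged with that element."""
        return sum(rates_for_element)
    ```

    With counts on the order of 10^4 practices per incident, rates of order 10^-4 follow directly, matching the magnitudes reported above.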

  18. On parameters identification of computational models of vibrations during quiet standing of humans

    NASA Astrophysics Data System (ADS)

    Barauskas, R.; Krušinskienė, R.

    2007-12-01

    Vibration of the center of pressure (COP) of the human body on the base of support during quiet standing is a very popular clinical investigation, which provides useful information about the physical and health condition of an individual. In this work, vibrations of the COP of a human body in the forward-backward direction during still standing are generated using a controlled inverted pendulum (CIP) model with a single degree of freedom (dof), supplied with a proportional, integral and differential (PID) controller, which represents the behavior of the central neural system of a human body, and excited by a cumulative disturbance vibration generated within the body due to breathing or any other physical condition. The identification of the model and disturbance parameters is an important stage in creating a close-to-reality computational model able to evaluate features of the disturbance. The aim of this study is to present a CIP model parameter identification approach based on the information captured by time series of the COP signal. The identification procedure is based on an error function minimization. The error function is formulated in terms of the time laws of computed and experimentally measured COP vibrations. As an alternative, the error function is formulated in terms of the stabilogram diffusion function (SDF). The minimization of the error functions is carried out by employing methods based on sensitivity functions of the error with respect to model and excitation parameters. The sensitivity functions are obtained by using variational techniques. The inverse dynamic problem approach has been employed in order to establish the properties of the disturbance time laws ensuring satisfactory coincidence of measured and computed COP vibration laws. The main difficulty of the investigated problem is encountered during the model validation stage, as generally neither the PID controller parameter set nor the disturbance time law is known in advance. In this work, an error function formulated in terms of the time derivative of the disturbance torque has been proposed in order to obtain the PID controller parameters, as well as the reference time law of the disturbance. The disturbance torque is calculated from experimental data using the inverse dynamic approach. Experiments presented in this study revealed that the vibrations of the disturbance torque and the PID controller parameters identified by the method may be qualified as feasible in humans. The presented approach may be easily extended to structural models with any number of dof or higher structural complexity.
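    A minimal forward simulation of the kind of CIP-plus-PID model described above can be sketched as follows; the mass, height, gains, disturbance, and integration scheme are placeholder choices for illustration, not the identified parameters from the paper:

    ```python
    import math

    def simulate_cop(kp, ki, kd, disturbance, dt=0.01, steps=2000,
                     m=70.0, h=1.0, g=9.81):
        """Single-dof inverted pendulum of point mass m at height h, stabilized
        by a PID ankle torque and driven by a disturbance torque (a function of
        time). Returns the sway-angle trajectory, a proxy for the COP signal."""
        inertia = m * h * h                    # moment of inertia about the ankle
        theta, omega, integral = 0.0, 0.0, 0.0
        trajectory = []
        for k in range(steps):
            integral += theta * dt
            control = -(kp * theta + ki * integral + kd * omega)
            torque = m * g * h * math.sin(theta) + control + disturbance(k * dt)
            omega += torque / inertia * dt     # semi-implicit Euler step
            theta += omega * dt
            trajectory.append(theta)
        return trajectory
    ```

    Parameter identification then amounts to adjusting (kp, ki, kd) and the disturbance law until the simulated trajectory matches a measured COP time series, which is what the error-function minimization above formalizes.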

  19. An optimized method to calculate error correction capability of tool influence function in frequency domain

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Hou, Xi; Wan, Yongjian; Shi, Chunyan

    2017-10-01

    An optimized method to calculate the error correction capability of a tool influence function (TIF) under given polishing conditions is proposed, based on a smoothing spectral function. The basic mathematical model for this method is established in theory. A set of polishing experimental data obtained with a rigid conformal tool is used to validate the optimized method. The calculated results can quantitatively indicate the error correction capability of the TIF for different spatial frequency errors under given polishing conditions. Comparative analysis shows that the optimized method is simpler in form and achieves the same accuracy with less calculation time than the previous method.
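    One way to express a per-frequency error correction capability is sketched below, using an illustrative definition (the fractional reduction of each spectral component of the surface error) rather than the paper's exact smoothing spectral function:

    ```python
    import cmath

    def dft_magnitudes(signal):
        """Naive DFT magnitude spectrum (a stand-in for an FFT)."""
        n = len(signal)
        return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                        for t in range(n))) / n
                for f in range(n // 2)]

    def correction_capability(error_before, error_after):
        """Per-frequency capability, defined here as 1 - after/before for each
        spectral component; bins with negligible initial error are reported as 0."""
        before = dft_magnitudes(error_before)
        after = dft_magnitudes(error_after)
        return [1 - a / b if b > 1e-12 else 0.0 for b, a in zip(before, after)]
    ```

    Comparing spectra of the surface error before and after a polishing run gives a frequency-resolved picture of what the TIF can and cannot correct.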

  20. b matrix errors in echo planar diffusion tensor imaging

    PubMed Central

    Boujraf, Saïd; Luypaert, Robert; Osteaux, Michel

    2001-01-01

    Diffusion-weighted magnetic resonance imaging (DW-MRI) is a recognized tool for early detection of infarction of the human brain. DW-MRI uses the signal loss associated with the random thermal motion of water molecules in the presence of magnetic field gradients to derive parameters that reflect the translational mobility of the water molecules in tissues. If diffusion-weighted images with different values of the b matrix are acquired during one individual investigation, it is possible to calculate apparent diffusion coefficient maps that are the elements of the diffusion tensor. The diffusion tensor elements represent the apparent diffusion coefficient of protons of water molecules in each pixel of the corresponding sample. The relation between signal intensity in the diffusion-weighted images, the diffusion tensor, and the b matrix is derived from the Bloch equations. Our goal is to establish the magnitude of the error made in the calculation of the elements of the diffusion tensor when the imaging gradients are ignored. PACS number(s): 87.57.-s, 87.61.-c PMID:11602015
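    In the scalar case the underlying signal model is S = S0 * exp(-b * ADC), so underestimating b by ignoring the imaging gradients' contribution propagates directly into the calculated coefficient. A minimal sketch of the two-point calculation:

    ```python
    import math

    def apparent_diffusion_coefficient(s_low, s_high, b_low, b_high):
        """Scalar ADC from two diffusion weightings, using the monoexponential
        model S = S0 * exp(-b * ADC). If the b values passed in omit the
        imaging-gradient contribution, the returned ADC is biased accordingly,
        which is the class of error the paper quantifies."""
        return math.log(s_low / s_high) / (b_high - b_low)
    ```

    The tensor case generalizes this to six or more gradient directions, with the full b matrix replacing the scalar b in each exponent.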

  1. Human error and human factors engineering in health care.

    PubMed

    Welch, D L

    1997-01-01

    Human error is inevitable. It happens in health care systems as it does in all other complex systems, and no measure of attention, training, dedication, or punishment is going to stop it. The discipline of human factors engineering (HFE) has been dealing with the causes and effects of human error since the 1940's. Originally applied to the design of increasingly complex military aircraft cockpits, HFE has since been effectively applied to the problem of human error in such diverse systems as nuclear power plants, NASA spacecraft, the process control industry, and computer software. Today the health care industry is becoming aware of the costs of human error and is turning to HFE for answers. Just as early experimental psychologists went beyond the label of "pilot error" to explain how the design of cockpits led to air crashes, today's HFE specialists are assisting the health care industry in identifying the causes of significant human errors in medicine and developing ways to eliminate or ameliorate them. This series of articles will explore the nature of human error and how HFE can be applied to reduce the likelihood of errors and mitigate their effects.

  2. Fiber-optic evanescent-wave spectroscopy for fast multicomponent analysis of human blood

    NASA Astrophysics Data System (ADS)

    Simhi, Ronit; Gotshal, Yaron; Bunimovich, David; Katzir, Abraham; Sela, Ben-Ami

    1996-07-01

    A spectral analysis of human blood serum was undertaken by fiber-optic evanescent-wave spectroscopy (FEWS) by the use of a Fourier-transform infrared spectrometer. A special cell for the FEWS measurements was designed and built that incorporates an IR-transmitting silver halide fiber and a means for introducing the blood-serum sample. Further improvements in analysis were obtained by the adoption of multivariate calibration techniques that are already used in clinical chemistry. The partial least-squares algorithm was used to calculate the concentrations of cholesterol, total protein, urea, and uric acid in human blood serum. The estimated prediction errors obtained (in percent from the average value) were 6% for total protein, 15% for cholesterol, 30% for urea, and 30% for uric acid. These results were compared with another independent prediction method that used a neural-network model. This model yielded estimated prediction errors of 8.8% for total protein, 25% for cholesterol, and 21% for uric acid. Keywords: spectroscopy, fiber-optic evanescent-wave spectroscopy, Fourier-transform infrared spectrometer, blood, multivariate calibration, neural networks.
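
The partial least-squares calibration step can be sketched with a minimal NIPALS PLS1 routine. The data below are synthetic stand-ins for serum spectra, not the authors' measurements; with as many latent components as variables the fit reduces to the ordinary least-squares solution, which the check exploits:

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal NIPALS PLS1. Returns (b, x_mean, y_mean) so that
    y_hat = (X_new - x_mean) @ b + y_mean."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    E, f = X - x_mean, y - y_mean          # centered copies
    W, P, q = [], [], []
    for _ in range(n_components):
        w = E.T @ f                        # weight vector
        w /= np.linalg.norm(w)
        t = E @ w                          # score vector
        tt = t @ t
        p = E.T @ t / tt                   # X loading
        qk = f @ t / tt                    # y loading
        E -= np.outer(t, p)                # deflate X
        f -= qk * t                        # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    b = W @ np.linalg.solve(P.T @ W, q)    # regression coefficients
    return b, x_mean, y_mean

# synthetic "spectra": 40 samples x 6 wavelengths, one analyte concentration
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))
beta = np.array([0.5, -1.0, 2.0, 0.0, 0.3, 1.5])
y = X @ beta
b, xm, ym = pls1_fit(X, y, n_components=6)
y_hat = (X - xm) @ b + ym                  # reproduces y exactly here
```

In practice one would use fewer components than variables and pick the count by cross-validation.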

  3. Computer calculated dose in paediatric prescribing.

    PubMed

    Kirk, Richard C; Li-Meng Goh, Denise; Packia, Jeya; Min Kam, Huey; Ong, Benjamin K C

    2005-01-01

    Medication errors are an important cause of hospital-based morbidity and mortality. However, only a few medication error studies have been conducted in children. These have mainly quantified errors in the inpatient setting; there is very little data available on paediatric outpatient and emergency department medication errors and none on discharge medication. This deficiency is of concern because medication errors are more common in children and it has been suggested that the risk of an adverse drug event as a consequence of a medication error is higher in children than in adults. The aims of this study were to assess the rate of medication errors in predominantly ambulatory paediatric patients and the effect of computer calculated doses on medication error rates of two commonly prescribed drugs. This was a prospective cohort study performed in a paediatric unit in a university teaching hospital between March 2003 and August 2003. The hospital's existing computer clinical decision support system was modified so that doctors could choose the traditional prescription method or the enhanced method of computer calculated dose when prescribing paracetamol (acetaminophen) or promethazine. All prescriptions issued to children (<16 years of age) at the outpatient clinic, emergency department and at discharge from the inpatient service were analysed. A medication error was defined to have occurred if there was an underdose (below the agreed value), an overdose (above the agreed value), no frequency of administration specified, no dose given or excessive total daily dose. The medication error rates and the factors influencing medication error rates were determined using SPSS version 12. From March to August 2003, 4281 prescriptions were issued. Seven prescriptions (0.16%) were excluded, hence 4274 prescriptions were analysed. Most prescriptions were issued by paediatricians (including neonatologists and paediatric surgeons) and/or junior doctors.
The error rate in the children's emergency department was 15.7%, for outpatients was 21.5% and for discharge medication was 23.6%. Most errors were the result of an underdose (64%; 536/833). The computer calculated dose error rate was 12.6% compared with the traditional prescription error rate of 28.2%. Logistic regression analysis showed that computer calculated dose was an important and independent variable influencing the error rate (adjusted relative risk = 0.436, 95% CI 0.336, 0.520, p < 0.001). Other important independent variables were seniority and paediatric training of the person prescribing and the type of drug prescribed. Medication error, especially underdose, is common in outpatient, emergency department and discharge prescriptions. Computer calculated doses can significantly reduce errors, but other risk factors have to be concurrently addressed to achieve maximum benefit.
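
The dose-range check at the heart of such a decision-support module can be sketched as follows. The limits dictionary holds per-kg lower and upper single-dose bounds plus a daily maximum; the numbers are illustrative placeholders only, not clinical guidance:

```python
def check_dose(drug, dose_mg, weight_kg, doses_per_day, limits):
    """Flag a single dose against per-kg and daily limits.
    Returns 'underdose', 'overdose' or 'ok'."""
    lo, hi, daily_max = limits[drug]           # mg/kg, mg/kg, mg/kg/day
    per_kg = dose_mg / weight_kg
    if per_kg < lo:
        return "underdose"
    if per_kg > hi or per_kg * doses_per_day > daily_max:
        return "overdose"
    return "ok"

# Illustrative limits only -- NOT clinical guidance.
LIMITS = {"paracetamol": (10.0, 15.0, 60.0)}
print(check_dose("paracetamol", 120, 10.0, 4, LIMITS))  # 12 mg/kg x 4 -> ok
```

Encoding the agreed values this way is what lets the system catch the underdoses that dominated the study's error counts.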

  4. Applications of integrated human error identification techniques on the chemical cylinder change task.

    PubMed

    Cheng, Ching-Min; Hwang, Sheue-Ling

    2015-03-01

    This paper outlines the human error identification (HEI) techniques that currently exist to assess latent human errors. Many formal error identification techniques have existed for years, but few have been validated to cover latent human error analysis in different domains. This study considers many possible error modes and influential factors, including external error modes, internal error modes, psychological error mechanisms, and performance shaping factors, and integrates several execution procedures and frameworks of HEI techniques. The case study in this research was the operational process of changing chemical cylinders in a factory. In addition, the integrated HEI method was used to assess the operational processes and the system's reliability. It was concluded that the integrated method is a valuable aid to develop much safer operational processes and can be used to predict human error rates on critical tasks in the plant. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morley, Steven

    The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias), and for calculating accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts (e.g. forecast error, scaled error) of each metric are also provided. To compare models the package provides a generic skill score and a percent-better measure. Robust measures of scale, including the median absolute deviation, robust standard deviation, robust coefficient of variation and the Sn estimator, are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ratio, threat score, equitable threat score, bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.
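
Two of the accuracy measures named above, written as free-standing NumPy functions rather than as calls into PyForecastTools (whose exact API is not shown in this record):

```python
import numpy as np

def median_symmetric_accuracy(obs, pred):
    """100 * (exp(median(|ln(pred/obs)|)) - 1): a robust, symmetric
    percentage-style error for strictly positive data."""
    log_q = np.abs(np.log(np.asarray(pred, float) / np.asarray(obs, float)))
    return 100.0 * (np.exp(np.median(log_q)) - 1.0)

def mean_absolute_scaled_error(obs, pred):
    """MAE scaled by the in-sample one-step persistence error;
    values below 1 beat the naive forecast."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return np.mean(np.abs(pred - obs)) / np.mean(np.abs(np.diff(obs)))

print(median_symmetric_accuracy([1.0, 2.0, 4.0], [2.0, 4.0, 8.0]))   # 100.0
print(mean_absolute_scaled_error([1.0, 2.0, 3.0, 4.0],
                                 [2.0, 3.0, 4.0, 5.0]))              # 1.0
```

In the first call every prediction is off by a factor of two, so the median symmetric accuracy is exactly 100%; in the second, the constant error of 1 equals the persistence error, giving MASE = 1.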

  6. SPAR-H Step-by-Step Guidance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    W. J. Galyean; A. M. Whaley; D. L. Kelly

    This guide provides step-by-step guidance on the use of the SPAR-H method for quantifying Human Failure Events (HFEs). This guide is intended to be used with the worksheets provided in: 'The SPAR-H Human Reliability Analysis Method,' NUREG/CR-6883, dated August 2005. Each step in the process of producing a Human Error Probability (HEP) is discussed. These steps are: Step 1, categorize the HFE as diagnosis and/or action; Step 2, rate the performance shaping factors (PSFs); Step 3, calculate the PSF-modified HEP; Step 4, account for dependence; and Step 5, apply the minimum value cutoff. The discussions on dependence are extensive and include an appendix that describes insights obtained from the psychology literature.
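
Steps 1-3 can be sketched numerically. The nominal HEPs and the adjustment-factor form below follow the NUREG/CR-6883 structure as I understand it (nominal 1E-2 for diagnosis, 1E-3 for action, with an adjustment applied when three or more PSFs are rated negative); treat the specific numbers as illustrative:

```python
def spar_h_hep(task_type, psf_multipliers):
    """Sketch of a SPAR-H HEP calculation: nominal HEP times the
    composite PSF, with an adjustment factor when 3+ PSFs are
    negative (multiplier > 1) to keep the result below 1.0."""
    nhep = {"diagnosis": 1.0e-2, "action": 1.0e-3}[task_type]
    composite = 1.0
    negative = 0
    for m in psf_multipliers:
        composite *= m
        if m > 1.0:
            negative += 1
    if negative >= 3:
        return nhep * composite / (nhep * (composite - 1.0) + 1.0)
    return min(nhep * composite, 1.0)   # sanity cap

print(spar_h_hep("action", [1.0] * 8))  # all eight PSFs nominal -> 0.001
```

With three strongly negative PSFs (e.g. three multipliers of 10) the adjustment keeps the diagnosis HEP at about 0.91 rather than an impossible 10.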

  7. Guidance for selecting the measurement conditions in the dye-binding method for determining serum protein: theoretical analysis based on the chemical equilibrium of protein error.

    PubMed

    Suzuki, Y

    2001-11-01

    A methodology for selecting the measurement conditions in the dye-binding method for determining serum protein has been studied by a theoretical calculation. This calculation was based on the fact that a protein error occurs because of a reaction between the side chains of a positively charged amino acid residue in a protein molecule and a dissociated dye anion. The calculated characteristics of this method are summarized as follows: (1) Although the reaction between the dye and the protein occurs up to about pH 12, a change in the color shade, called protein error, is observed only in a pH region restricted within narrow limits. (2) Although the apparent absorbance (the absorbance of the test solution measured against a reagent blank) is lower than the true absorbance indicated by the formed dye-protein complex, the apparent absorbance correlates with the true absorbance with a correlation coefficient of 1.0. (3) At a higher dye concentration, the calibration curve is more linear at a higher pH than at a lower pH. Most of these characteristics were similarly observed experimentally in the reactions of BPB, BCG and BCP with human and bovine albumins. It is concluded that in order to ensure the linearity of the calibration curve, the measurement should be performed at a higher dye concentration and sufficiently high pH where the detection sensitivity is satisfied.

  8. Accuracy and Efficiency of Recording Pediatric Early Warning Scores Using an Electronic Physiological Surveillance System Compared With Traditional Paper-Based Documentation

    PubMed Central

    Sefton, Gerri; Lane, Steven; Killen, Roger; Black, Stuart; Lyon, Max; Ampah, Pearl; Sproule, Cathryn; Loren-Gosling, Dominic; Richards, Caitlin; Spinty, Jean; Holloway, Colette; Davies, Coral; Wilson, April; Chean, Chung Shen; Carter, Bernie; Carrol, E.D.

    2017-01-01

    Pediatric Early Warning Scores are advocated to assist health professionals to identify early signs of serious illness or deterioration in hospitalized children. Scores are derived from the weighting applied to recorded vital signs and clinical observations reflecting deviation from a predetermined “norm.” Higher aggregate scores trigger an escalation in care aimed at preventing critical deterioration. Process errors made while recording these data, including plotting or calculation errors, have the potential to impede the reliability of the score. To test this hypothesis, we conducted a controlled study of documentation using five clinical vignettes. We measured the accuracy of vital sign recording, score calculation, and time taken to complete documentation using a handheld electronic physiological surveillance system, VitalPAC Pediatric, compared with traditional paper-based charts. We explored the user acceptability of both methods using a Web-based survey. Twenty-three staff participated in the controlled study. The electronic physiological surveillance system improved the accuracy of vital sign recording, 98.5% versus 85.6%, P < .02, Pediatric Early Warning Score calculation, 94.6% versus 55.7%, P < .02, and saved time, 68 versus 98 seconds, compared with paper-based documentation, P < .002. Twenty-nine staff completed the Web-based survey. They perceived that the electronic physiological surveillance system offered safety benefits by reducing human error while providing instant visibility of recorded data to the entire clinical team. PMID:27832032
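
The aggregate-score arithmetic that the electronic system automates (and that caused the plotting and calculation errors on paper) can be sketched as a band lookup. The bands below are purely illustrative placeholders, not any published PEWS chart:

```python
def band_score(value, bands):
    """Score one vital sign against ordered (low, high, points) bands;
    the first band containing the value wins."""
    for low, high, points in bands:
        if low <= value < high:
            return points
    return 0

# Illustrative bands for one hypothetical age group -- NOT a clinical chart.
HEART_RATE_BANDS = [(0, 50, 3), (50, 60, 1), (60, 140, 0), (140, 160, 1), (160, 999, 3)]
RESP_RATE_BANDS  = [(0, 15, 3), (15, 20, 1), (20, 40, 0), (40, 50, 1), (50, 999, 3)]

def pews(heart_rate, resp_rate):
    """Aggregate score; higher totals would trigger escalation."""
    return band_score(heart_rate, HEART_RATE_BANDS) + band_score(resp_rate, RESP_RATE_BANDS)

print(pews(120, 30))  # both vitals in the 0-point band -> 0
print(pews(150, 55))  # 1 + 3 -> 4
```

Doing the lookup and the addition in software is exactly what removed the 44% calculation-error rate seen with paper charts in the study.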

  9. SU-E-T-329: Dosimetric Impact of Implementing Metal Artifact Reduction Methods and Metal Energy Deposition Kernels for Photon Dose Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, J; Followill, D; Howell, R

    2015-06-15

    Purpose: To investigate two strategies for reducing dose calculation errors near metal implants: use of CT metal artifact reduction methods and implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) method. Methods: Radiochromic film was used to measure the dose upstream and downstream of titanium and Cerrobend implants. To assess the dosimetric impact of metal artifact reduction methods, dose calculations were performed using baseline, uncorrected images and three metal artifact reduction methods: Philips O-MAR, GE’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI imaging with metal artifact reduction software applied (MARs). To assess the impact of metal kernels, titanium and silver kernels were implemented into a commercial collapsed cone C/S algorithm. Results: The CT artifact reduction methods were more successful for titanium than Cerrobend. Interestingly, for beams traversing the metal implant, we found that errors in the dimensions of the metal in the CT images were more important for dose calculation accuracy than reduction of imaging artifacts. The MARs algorithm caused a distortion in the shape of the titanium implant that substantially worsened the calculation accuracy. In comparison to water kernel dose calculations, metal kernels resulted in better modeling of the increased backscatter dose at the upstream interface but decreased accuracy directly downstream of the metal. We also found that the success of metal kernels was dependent on dose grid size, with smaller calculation voxels giving better accuracy. Conclusion: Our study yielded mixed results, with neither the metal artifact reduction methods nor the metal kernels being globally effective at improving dose calculation accuracy. However, some successes were observed. The MARs algorithm decreased errors downstream of Cerrobend by a factor of two, and metal kernels resulted in more accurate backscatter dose upstream of metals.
Thus, these two strategies do have the potential to improve accuracy for patients with metal implants in certain scenarios. This work was supported by Public Health Service grants CA 180803 and CA 10953 awarded by the National Cancer Institute, United States Department of Health and Human Services, and in part by Mobius Medical Systems.

  10. Undergraduate paramedic students cannot do drug calculations

    PubMed Central

    Eastwood, Kathryn; Boyle, Malcolm J; Williams, Brett

    2012-01-01

    BACKGROUND: Previous investigation of drug calculation skills of qualified paramedics has highlighted poor mathematical ability, with no published studies having been undertaken on undergraduate paramedics. There are three major error classifications. Conceptual errors involve an inability to formulate an equation from information given, arithmetical errors involve an inability to operate a given equation, and finally computation errors are simple errors of addition, subtraction, division and multiplication. The objective of this study was to determine if undergraduate paramedics at a large Australian university could accurately perform common drug calculations and basic mathematical equations normally required in the workplace. METHODS: A cross-sectional study was conducted using a paper-based questionnaire administered to undergraduate paramedic students to collect demographic data, student attitudes regarding their drug calculation performance, and answers to a series of basic mathematical and drug calculation questions. Ethics approval was granted. RESULTS: The mean score of correct answers was 39.5% with one student scoring 100%, 3.3% of students (n=3) scoring greater than 90%, and 63% (n=58) scoring 50% or less, despite 62% (n=57) of the students stating they ‘did not have any drug calculations issues’. On average those who completed a minimum of year 12 Specialist Maths achieved scores over 50%. Conceptual errors made up 48.5%, arithmetical 31.1% and computational 17.4%. CONCLUSIONS: This study suggests undergraduate paramedics have deficiencies in performing accurate calculations, with conceptual errors indicating a fundamental lack of mathematical understanding. The results suggest an unacceptable level of mathematical competence to practice safely in the unpredictable prehospital environment. PMID:25215067
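
The drug calculations being tested are typically of this shape (the numbers are illustrative): a conceptual error is choosing the wrong relation altogether, an arithmetical error is misoperating it, and a computation error is a slip in the final arithmetic:

```python
def volume_to_draw(desired_mg, stock_mg, stock_ml):
    """The classic formula: (desired dose / stock strength) * stock volume."""
    return desired_mg / stock_mg * stock_ml

# 75 mg ordered from an ampoule containing 100 mg in 2 mL
print(volume_to_draw(75.0, 100.0, 2.0))  # 1.5 (mL)
```

Formulating this quotient from the order and the ampoule label is the conceptual step on which nearly half of the observed errors occurred.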

  11. Using meta-differential evolution to enhance a calculation of a continuous blood glucose level.

    PubMed

    Koutny, Tomas

    2016-09-01

    We developed a new model of glucose dynamics. The model calculates blood glucose level as a function of transcapillary glucose transport. In previous studies, we validated the model with animal experiments. We used an analytical method to determine model parameters. In this study, we validate the model with subjects with type 1 diabetes. In addition, we combine the analytical method with meta-differential evolution. To validate the model with human patients, we obtained a data set from a type 1 diabetes study coordinated by the Jaeb Center for Health Research. We calculated a continuous blood glucose level from the continuously measured interstitial fluid glucose level. We used 6 different scenarios to ensure robust validation of the calculation. Over 96% of calculated blood glucose levels fit the A+B zones of the Clarke Error Grid. No data set required any correction of model parameters during the time course of measurement. We successfully verified the possibility of calculating a continuous blood glucose level of subjects with type 1 diabetes. This study signals a successful transition of our research from animal experiments to human patients. Researchers can test our model with their data on-line at https://diabetes.zcu.cz. Copyright © 2016 The Author. Published by Elsevier Ireland Ltd. All rights reserved.
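
The evolutionary-search ingredient can be illustrated with plain differential evolution (rand/1/bin). This is a generic sketch on a toy least-squares objective, not the authors' meta-differential evolution variant or their glucose model:

```python
import numpy as np

def differential_evolution(f, bounds, pop=30, gens=200, F=0.8, CR=0.9, seed=0):
    """Plain rand/1/bin differential evolution minimizer."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    dim = len(bounds)
    X = rng.uniform(lo, hi, (pop, dim))
    fit = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            idx = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            a, b, c = X[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)   # donor vector
            cross = rng.random(dim) < CR                # binomial crossover
            cross[rng.integers(dim)] = True             # keep >= 1 donor gene
            trial = np.where(cross, mutant, X[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:                       # greedy selection
                X[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return X[best], fit[best]

# toy parameter fit: recover (1.2, -0.5) by minimizing a squared error
params, err = differential_evolution(
    lambda p: (p[0] - 1.2) ** 2 + (p[1] + 0.5) ** 2,
    [(-5.0, 5.0), (-5.0, 5.0)])
```

In the paper's setting the objective would instead measure the mismatch between the calculated and measured glucose curves.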

  12. Error Propagation Dynamics of PIV-based Pressure Field Calculations: How well does the pressure Poisson solver perform inherently?

    PubMed

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2016-08-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.
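
A minimal sketch of the kind of solver the paper analyzes: a Jacobi iteration for the discrete pressure Poisson equation on the unit square, with synthetic source and boundary data standing in for PIV input. Re-solving with a perturbed source term is one way to watch data error propagate into the pressure field:

```python
import numpy as np

def solve_poisson_dirichlet(f, boundary, n, iters=6000):
    """Jacobi iteration for laplacian(p) = f on an n x n uniform grid
    over the unit square, with Dirichlet values fixed on the boundary."""
    h = 1.0 / (n - 1)
    p = boundary.copy()
    for _ in range(iters):
        p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1] +
                                p[1:-1, 2:] + p[1:-1, :-2] -
                                h * h * f[1:-1, 1:-1])
    return p

# analytic check: p = x^2 + y^2 has laplacian 4 and is represented
# exactly by the 5-point stencil, so the solver should recover it
n = 21
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
exact = X ** 2 + Y ** 2
boundary = exact.copy()
boundary[1:-1, 1:-1] = 0.0           # interior initialized to zero
p = solve_poisson_dirichlet(np.full((n, n), 4.0), boundary, n)
print(np.max(np.abs(p - exact)))     # small once the iteration has converged
```

Adding noise to `f` or to `boundary` and differencing the two solutions gives a numerical feel for the boundary-condition and domain-shape dependence the paper derives analytically.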

  13. Human error in airway facilities.

    DOT National Transportation Integrated Search

    2001-01-01

    This report examines human errors in Airway Facilities (AF) with the intent of preventing these errors from being : passed on to the new Operations Control Centers. To effectively manage errors, they first have to be identified. : Human factors engin...

  14. Latent human error analysis and efficient improvement strategies by fuzzy TOPSIS in aviation maintenance tasks.

    PubMed

    Chiu, Ming-Chuan; Hsieh, Min-Chih

    2016-05-01

    The purposes of this study were to develop a latent human error analysis process, to explore the factors of latent human error in aviation maintenance tasks, and to provide an efficient improvement strategy for addressing those errors. First, we used HFACS and RCA to define the error factors related to aviation maintenance tasks. Fuzzy TOPSIS with four criteria was applied to evaluate the error factors. Results show that 1) adverse physiological states, 2) physical/mental limitations, and 3) coordination, communication, and planning are the factors related to airline maintenance tasks that could be addressed easily and efficiently. This research establishes a new analytic process for investigating latent human error and provides a strategy for analyzing human error using fuzzy TOPSIS. Our analysis process complements shortages in existing methodologies by incorporating improvement efficiency, and it enhances the depth and broadness of human error analysis methodology. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
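
For reference, the crisp TOPSIS ranking that fuzzy TOPSIS extends can be written compactly; the decision matrix, weights and benefit flags below are made-up numbers, not the paper's criteria:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Crisp TOPSIS. matrix: alternatives x criteria; benefit[j] is True
    when larger values of criterion j are better. Returns closeness
    coefficients in [0, 1]; higher means closer to the ideal."""
    M = matrix / np.linalg.norm(matrix, axis=0)       # vector-normalize columns
    V = M * weights                                   # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)         # distance to ideal
    d_neg = np.linalg.norm(V - anti, axis=1)          # distance to anti-ideal
    return d_neg / (d_pos + d_neg)

scores = topsis(np.array([[7.0, 9.0, 9.0],
                          [8.0, 7.0, 8.0],
                          [9.0, 6.0, 8.0]]),
                np.array([0.5, 0.3, 0.2]),
                np.array([True, True, True]))
```

The fuzzy variant replaces the crisp ratings with fuzzy numbers and the Euclidean distances with fuzzy distance measures, but the ideal/anti-ideal structure is the same.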

  15. Optimal algorithm to improve the calculation accuracy of energy deposition for betavoltaic MEMS batteries design

    NASA Astrophysics Data System (ADS)

    Li, Sui-xian; Chen, Haiyang; Sun, Min; Cheng, Zaijun

    2009-11-01

    To improve the accuracy of calculating the energy deposition of electrons traveling in solids, a method we call the optimal subdivision number searching algorithm is proposed. When treating the energy deposition of electrons traveling in solids, large calculation errors are found; these are the result of the dividing and summing performed when calculating the integral. Based on the results of former research, we propose a further subdividing and summing method. For β particles with energies spanning the entire spectrum, the energy data are set to integral multiples of keV, and the subdivision number is set from 1 to 30; the energy deposition calculation error collections are then obtained. Searching for the minimum error in the collections, we obtain the corresponding energy and subdivision number pairs, as well as the optimal subdivision number. The method is carried out in four solid materials, Al, Si, Ni and Au, to calculate energy deposition. The result shows that the calculation error is reduced by one order of magnitude with the improved algorithm.
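
The search itself is easy to sketch: scan candidate subdivision counts and keep the one whose quadrature error against a reference value is smallest. The integrand and the trapezoid rule here are stand-ins; for a smooth toy integrand the minimum lands trivially at the largest count, whereas the paper's stepwise spectrum data is what makes the optimum nontrivial:

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoid rule on sample points x."""
    return float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2.0))

def best_subdivision(f, a, b, exact, n_max=30):
    """Scan subdivision counts 1..n_max; return the count with the
    smallest absolute quadrature error, plus the full error list."""
    errors = []
    for n in range(1, n_max + 1):
        x = np.linspace(a, b, n + 1)
        errors.append(abs(trapezoid(f(x), x) - exact))
    return int(np.argmin(errors)) + 1, errors

n_best, errors = best_subdivision(lambda x: x ** 2, 0.0, 1.0, 1.0 / 3.0)
print(n_best)  # 30 here: for a smooth integrand the error falls monotonically
```

Replacing the toy integrand with tabulated keV-binned energy-deposition data reproduces the paper's setting, where the error-versus-n curve is no longer monotone.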

  16. Experiential Teaching Increases Medication Calculation Accuracy Among Baccalaureate Nursing Students.

    PubMed

    Hurley, Teresa V

    Safe medication administration is an international goal. Calculation errors cause patient harm despite education. The research purpose was to evaluate the effectiveness of an experiential teaching strategy to reduce errors in a sample of 78 baccalaureate nursing students at a Northeastern college. A pretest-posttest design with random assignment into equal-sized groups was used. The experiential strategy was more effective than the traditional method (t = -0.312, df = 37, p = .004, 95% CI) with a reduction in calculation errors. Evaluations of error type and teaching strategies are indicated to facilitate course and program changes.

  17. Deriving concentrations of oxygen and carbon in human tissues using single- and dual-energy CT for ion therapy applications

    NASA Astrophysics Data System (ADS)

    Landry, Guillaume; Parodi, Katia; Wildberger, Joachim E.; Verhaegen, Frank

    2013-08-01

    Dedicated methods of in-vivo verification of ion treatment based on the detection of secondary emitted radiation, such as positron-emission-tomography and prompt gamma detection require high accuracy in the assignment of the elemental composition. This especially concerns the content in carbon and oxygen, which are the most abundant elements of human tissue. The standard single-energy computed tomography (SECT) approach to carbon and oxygen concentration determination has been shown to introduce significant discrepancies in the carbon and oxygen content of tissues. We propose a dual-energy CT (DECT)-based approach for carbon and oxygen content assignment and investigate the accuracy gains of the method. SECT and DECT Hounsfield units (HU) were calculated using the stoichiometric calibration procedure for a comprehensive set of human tissues. Fit parameters for the stoichiometric calibration were obtained from phantom scans. Gaussian distributions with standard deviations equal to those derived from phantom scans were subsequently generated for each tissue for several values of the computed tomography dose index (CTDIvol). The assignment of %weight carbon and oxygen (%wC,%wO) was performed based on SECT and DECT. The SECT scheme employed a HU versus %wC,O approach while for DECT we explored a Zeff versus %wC,O approach and a (Zeff, ρe) space approach. The accuracy of each scheme was estimated by calculating the root mean square (RMS) error on %wC,O derived from the input Gaussian distribution of HU for each tissue and also for the noiseless case as a limiting case. The (Zeff, ρe) space approach was also compared to SECT by comparing RMS error for hydrogen and nitrogen (%wH,%wN). Systematic shifts were applied to the tissue HU distributions to assess the robustness of the method against systematic uncertainties in the stoichiometric calibration procedure. 
In the absence of noise the (Zeff, ρe) space approach showed more accurate %wC,O assignment (largest error of 2%) than the Zeff versus %wC,O and HU versus %wC,O approaches (largest errors of 15% and 30%, respectively). When noise was present, the accuracy of the (Zeff, ρe) space (DECT approach) was decreased but the RMS error over all tissues was lower than the HU versus %wC,O (SECT approach) (5.8%wC versus 7.5%wC at CTDIvol = 20 mGy). The DECT approach showed decreasing RMS error with decreasing image noise (or increasing CTDIvol). At CTDIvol = 80 mGy the RMS error over all tissues was 3.7% for DECT and 6.2% for SECT approaches. However, systematic shifts greater than ±5HU undermined the accuracy gains afforded by DECT at any dose level. DECT provides more accurate %wC,O assignment than SECT when imaging noise and systematic uncertainties in HU values are not considered. The presence of imaging noise degrades the DECT accuracy on %wC,O assignment but it remains superior to SECT. However, DECT was found to be sensitive to systematic shifts of human tissue HU.

  18. The effect of errors in the assignment of the transmission functions on the accuracy of the thermal sounding of the atmosphere

    NASA Technical Reports Server (NTRS)

    Timofeyev, Y. M.

    1979-01-01

    To test the error introduced by assumed values of the transmission function for Soviet and American radiometers sounding the atmosphere thermally from orbiting satellites, the assumptions of the transmission calculation are varied with respect to atmospheric CO2 content, transmission frequency, and atmospheric absorption. The error arising from variations of these assumptions from the standard basic model is calculated.

  19. Mathemagical Computing: Order of Operations and New Software.

    ERIC Educational Resources Information Center

    Ecker, Michael W.

    1989-01-01

    Describes mathematical problems which occur when using the computer as a calculator. Considers errors in BASIC calculation and the order of mathematical operations. Identifies errors in spreadsheet and calculator programs. Comments on sorting programs and provides a source for Mathemagical Black Holes. (MVL)

  20. Information systems and human error in the lab.

    PubMed

    Bissell, Michael G

    2004-01-01

    Health system costs in clinical laboratories are incurred daily due to human error. Indeed, a major impetus for automating clinical laboratories has always been the opportunity it presents to simultaneously reduce cost and improve quality of operations by decreasing human error. But merely automating these processes is not enough: to the extent that the introduction of these systems leaves operators with less practice in dealing with unexpected events, or deskilled in problem solving, new kinds of error will likely appear. Clinical laboratories could potentially benefit by integrating findings on human error from modern behavioral science into their operations. Fully understanding human error requires a deep understanding of human information processing and cognition. Predicting and preventing negative consequences requires application of this understanding to laboratory operations. Although the occurrence of a particular error at a particular instant cannot be absolutely prevented, human error rates can be reduced. The following principles are key: an understanding of the process of learning in relation to error; understanding the origin of errors, since this knowledge can be used to reduce their occurrence; optimal systems should be forgiving to the operator by absorbing errors, at least for a time; although much is known by industrial psychologists about how to write operating procedures and instructions in ways that reduce the probability of error, this expertise is hardly ever put to use in the laboratory; and a feedback mechanism must be designed into the system that enables the operator to recognize in real time that an error has occurred.

  1. Correcting reaction rates measured by saturation-transfer magnetic resonance spectroscopy

    NASA Astrophysics Data System (ADS)

    Gabr, Refaat E.; Weiss, Robert G.; Bottomley, Paul A.

    2008-04-01

    Off-resonance or spillover irradiation and incomplete saturation can introduce significant errors in the estimates of chemical rate constants measured by saturation-transfer magnetic resonance spectroscopy (MRS). Existing methods of correction are effective only over a limited parameter range. Here, a general approach of numerically solving the Bloch-McConnell equations to calculate exchange rates, relaxation times and concentrations for the saturation-transfer experiment is investigated, but found to require more measurements and higher signal-to-noise ratios than in vivo studies can practically afford. As an alternative, correction formulae for the reaction rate are provided which account for the expected parameter ranges and limited measurements available in vivo. The correction term is a quadratic function of experimental measurements. In computer simulations, the new formulae showed negligible bias and reduced the maximum error in the rate constants by about 3-fold compared to traditional formulae, and the error scatter by about 4-fold, over a wide range of parameters for conventional saturation transfer employing progressive saturation, and for the four-angle saturation-transfer method applied to the creatine kinase (CK) reaction in the human heart at 1.5 T. In normal in vivo spectra affected by spillover, the correction increases the mean calculated forward CK reaction rate by 6-16% over traditional and prior correction formulae.

  2. Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks

    PubMed Central

    Besada, Juan A.

    2017-01-01

    In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle-determination device. Distance bias is calculated from the delay of the signal produced by the refractivity index of the atmosphere, and from clock errors, while the altitude bias is calculated taking into account the atmosphere conditions (pressure and temperature). It will be shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation. PMID:28934157

  3. Free kick instead of cross-validation in maximum-likelihood refinement of macromolecular crystal structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pražnikar, Jure; University of Primorska,; Turk, Dušan, E-mail: dusan.turk@ijs.si

    2014-12-01

    The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement by simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of Rfree or may leave it out completely.

  4. Errors induced by the neglect of polarization in radiance calculations for Rayleigh-scattering atmospheres

    NASA Technical Reports Server (NTRS)

    Mishchenko, M. I.; Lacis, A. A.; Travis, L. D.

    1994-01-01

    Although neglecting polarization and replacing the rigorous vector radiative transfer equation by its approximate scalar counterpart has no physical justification, it is a widely used simplification when the incident light is unpolarized and only the intensity of the reflected light is to be computed. We employ accurate vector and scalar multiple-scattering calculations to perform a systematic study of the errors induced by the neglect of polarization in radiance calculations for a homogeneous, plane-parallel Rayleigh-scattering atmosphere (with and without depolarization) above a Lambertian surface. Specifically, we calculate percent errors in the reflected intensity for various directions of light incidence and reflection, optical thicknesses of the atmosphere, single-scattering albedos, depolarization factors, and surface albedos. The numerical data displayed can be used to decide whether or not the scalar approximation may be employed depending on the parameters of the problem. We show that the errors decrease with increasing depolarization factor and/or increasing surface albedo. For conservative or nearly conservative scattering and small surface albedos, the errors are maximum at optical thicknesses of about 1. The calculated errors may be too large for some practical applications, and, therefore, rigorous vector calculations should be employed whenever possible. However, if approximate scalar calculations are used, we recommend avoiding geometries involving phase angles equal to or close to 0 deg and 90 deg, where the errors are especially significant. We propose a theoretical explanation of the large vector/scalar differences in the case of Rayleigh scattering. According to this explanation, the differences are caused by the particular structure of the Rayleigh scattering matrix and come from lower-order (except first-order) light scattering paths involving right scattering angles and right-angle rotations of the scattering plane.

  5. Derivation of simple rules for complex flow vector fields on the lower part of the human face for robot face design.

    PubMed

    Ishihara, Hisashi; Ota, Nobuyuki; Asada, Minoru

    2017-11-27

    It is quite difficult for android robots to replicate the numerous and various types of human facial expressions owing to limitations in terms of space, mechanisms, and materials. This situation could be improved with greater knowledge regarding these expressions and their deformation rules, i.e., by using the biomimetic approach. In a previous study, we investigated 16 facial deformation patterns and found that each facial point moves almost only in its own principal direction and different deformation patterns are created with different combinations of moving lengths. However, the replication errors caused by moving each control point of a face in only its principal direction were not evaluated for each deformation pattern at that time. Therefore, we calculated the replication errors in this study using the second principal component scores of the 16 sets of flow vectors at each point on the face. More than 60% of the errors were within 1 mm, and approximately 90% of them were within 3 mm. The average error was 1.1 mm. These results indicate that robots can replicate the 16 investigated facial expressions with errors within 3 mm and 1 mm for about 90% and 60% of the vectors, respectively, even if each point on the robot face moves in only its own principal direction. This finding seems promising for the development of robots capable of showing various facial expressions because significantly fewer types of movements than previously predicted are necessary.
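The second-principal-component error calculation described above can be sketched as follows. The 16 flow vectors here are synthetic stand-ins (mostly aligned with one assumed principal direction, plus off-axis scatter), not the measured face data; the replication error at a point is the magnitude of its second principal-component score.

```python
import numpy as np

rng = np.random.default_rng(1)

# 16 flow vectors (one per deformation pattern) at a single facial point:
# different "moving lengths" along one principal direction plus small
# off-axis scatter.  Synthetic illustration only.
principal = np.array([0.8, 0.6])                  # assumed unit direction
lengths = rng.uniform(-5, 5, 16)                  # per-pattern moving lengths
noise = rng.normal(0, 0.5, (16, 2))               # off-axis component
vectors = lengths[:, None] * principal + noise    # 16 x 2 flow vectors

# PCA via SVD of the centered vectors.
centered = vectors - vectors.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

# Replication error if the point moves only along the first principal
# direction = magnitude of the second principal-component score.
scores2 = centered @ Vt[1]
errors_mm = np.abs(scores2)
print(f"mean replication error: {errors_mm.mean():.2f} (arbitrary units)")
```

With real data, repeating this at every facial control point yields the error distribution the abstract summarizes (most errors within 1-3 mm).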

  6. A Systematic Error Correction Method for TOVS Radiances

    NASA Technical Reports Server (NTRS)

    Joiner, Joanna; Rokke, Laurie; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Treatment of systematic errors is crucial for the successful use of satellite data in a data assimilation system. Systematic errors in TOVS radiance measurements and radiative transfer calculations can be as large or larger than random instrument errors. The usual assumption in data assimilation is that observational errors are unbiased. If biases are not effectively removed prior to assimilation, the impact of satellite data will be lessened and can even be detrimental. Treatment of systematic errors is important for short-term forecast skill as well as the creation of climate data sets. A systematic error correction algorithm has been developed as part of a 1D radiance assimilation. This scheme corrects for spectroscopic errors, errors in the instrument response function, and other biases in the forward radiance calculation for TOVS. Such algorithms are often referred to as tuning of the radiances. The scheme is able to account for the complex, air-mass dependent biases that are seen in the differences between TOVS radiance observations and forward model calculations. We will show results of systematic error correction applied to the NOAA 15 Advanced TOVS as well as its predecessors. We will also discuss the ramifications of inter-instrument bias with a focus on stratospheric measurements.
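Air-mass dependent bias correction of the kind described above is often implemented as a regression of observed-minus-calculated radiance departures against a few air-mass predictors. The sketch below uses synthetic departures and illustrative predictor names (a layer-thickness term and a scan-angle secant), not the operational TOVS predictor set.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic obs-minus-calculated radiance departures with an air-mass
# dependent bias; predictors are illustrative, not the operational set.
n = 500
thickness = rng.uniform(5, 9, n)                 # layer-thickness predictor
secant = 1.0 / np.cos(rng.uniform(0, 1, n))      # scan-angle (air-mass) predictor
true_bias = 0.4 + 0.15 * thickness - 0.3 * secant
omc = true_bias + rng.normal(0, 0.2, n)          # departures = bias + noise

# Fit bias = b0 + b1*thickness + b2*secant by least squares ("tuning").
X = np.column_stack([np.ones(n), thickness, secant])
coef, *_ = np.linalg.lstsq(X, omc, rcond=None)

corrected = omc - X @ coef
print("departure std before:", omc.std().round(3),
      "after:", corrected.std().round(3))
```

After tuning, the corrected departures are unbiased by construction and their spread shrinks toward the random-error level, which is the precondition for assimilation assumed in the abstract.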

  7. Individual Differences in Mathematical Competence Modulate Brain Responses to Arithmetic Errors: An fMRI Study

    ERIC Educational Resources Information Center

    Ansari, Daniel; Grabner, Roland H.; Koschutnig, Karl; Reishofer, Gernot; Ebner, Franz

    2011-01-01

    Data from both neuropsychological and neuroimaging studies have implicated the left inferior parietal cortex in calculation. Comparatively less attention has been paid to the neural responses associated with the commission of calculation errors and how the processing of arithmetic errors is modulated by individual differences in mathematical…

  8. Stochastic Models of Human Errors

    NASA Technical Reports Server (NTRS)

    Elshamy, Maged; Elliott, Dawn M. (Technical Monitor)

    2002-01-01

    Humans play an important role in the overall reliability of engineering systems. More often than not, accidents and system failures are traced to human errors. Therefore, in order to have meaningful system risk analysis, the reliability of the human element must be taken into consideration. Describing the human error process by mathematical models is a key to analyzing contributing factors. Therefore, the objective of this research effort is to establish stochastic models substantiated by sound theoretic foundation to address the occurrence of human errors in the processing of the space shuttle.
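The abstract does not specify the model family, but a common starting point for stochastic models of error occurrence (an assumption here, not the paper's actual model) is a homogeneous Poisson process: errors arrive independently at a constant rate, so the probability of at least one error in an interval has a closed form that can be checked by simulation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed model: errors occur as a homogeneous Poisson process with rate
# lam per task-hour.  Both numbers below are illustrative.
lam = 0.02            # assumed error rate (errors per hour)
hours = 100.0

# Probability of at least one error in the interval: 1 - exp(-lam*t).
p_at_least_one = 1.0 - np.exp(-lam * hours)

# Monte Carlo check against simulated task campaigns.
counts = rng.poisson(lam * hours, size=100_000)
p_sim = np.mean(counts >= 1)
print(f"analytic {p_at_least_one:.4f}  simulated {p_sim:.4f}")
```

Richer models (nonhomogeneous rates, dependence on workload or fatigue) extend this baseline by letting `lam` vary with contributing factors.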

  9. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki

    1997-01-01

    A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research of flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  10. Reduction of Maintenance Error Through Focused Interventions

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Rosekind, Mark R. (Technical Monitor)

    1997-01-01

    It is well known that a significant proportion of aviation accidents and incidents are tied to human error. In flight operations, research of operational errors has shown that so-called "pilot error" often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: to develop human factors interventions which are directly supported by reliable human error data, and to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  11. Nexus: A modular workflow management system for quantum simulation codes

    NASA Astrophysics Data System (ADS)

    Krogel, Jaron T.

    2016-01-01

    The management of simulation workflows represents a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface, resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.

  12. Technical Note: On the calculation of stopping-power ratio for stoichiometric calibration in proton therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ödén, Jakob; Zimmerman, Jens; Nowik, Patrik

    2015-09-15

    Purpose: The quantitative effects of assumptions made in the calculation of stopping-power ratios (SPRs) are investigated, for stoichiometric CT calibration in proton therapy. The assumptions investigated include the use of the Bethe formula without correction terms, Bragg additivity, the choice of I-value for water, and the data source for elemental I-values. Methods: The predictions of the Bethe formula for SPR (no correction terms) were validated against more sophisticated calculations using the SRIM software package for 72 human tissues. A stoichiometric calibration was then performed at our hospital. SPR was calculated for the human tissues using either the assumption of simple Bragg additivity or the Seltzer-Berger rule (as used in ICRU Reports 37 and 49). In each case, the calculation was performed twice: First, by assuming the I-value of water was an experimentally based value of 78 eV (value proposed in Errata and Addenda for ICRU Report 73) and second, by recalculating the I-value theoretically. The discrepancy between predictions using ICRU elemental I-values and the commonly used tables of Janni was also investigated. Results: Errors due to neglecting the correction terms to the Bethe formula were calculated at less than 0.1% for biological tissues. Discrepancies greater than 1%, however, were estimated due to departures from simple Bragg additivity when a fixed I-value for water was imposed. When the I-value for water was calculated in a consistent manner to that for tissue, this disagreement was substantially reduced. The difference between SPR predictions when using Janni’s or ICRU tables for I-values was up to 1.6%. Experimental data used for materials of relevance to proton therapy suggest that the ICRU-derived values provide somewhat more accurate results (root-mean-square-error: 0.8% versus 1.6%). 
Conclusions: The conclusions from this study are that (1) the Bethe formula can be safely used for SPR calculations without correction terms; (2) simple Bragg additivity can be reasonably assumed for compound materials; (3) if simple Bragg additivity is assumed, then the I-value for water should be calculated in a consistent manner to that of the tissue of interest (rather than using an experimentally derived value); (4) the ICRU Report 37 I-values may provide a better agreement with experiment than Janni’s tables.
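The core SPR calculation discussed above can be sketched with the Bethe formula without correction terms. The electron-density ratio and tissue I-value below are illustrative round numbers, not the paper's calibration data; the 78 eV water I-value is the experimentally based value the abstract mentions.

```python
import numpy as np

ME_C2_EV = 0.511e6            # electron rest energy in eV

def bethe_L(beta2, I_eV):
    """Stopping number L = ln(2*me*c^2*beta^2 / (I*(1-beta^2))) - beta^2."""
    return np.log(2 * ME_C2_EV * beta2 / (I_eV * (1 - beta2))) - beta2

def spr(rel_electron_density, I_tissue_eV, I_water_eV=78.0, beta2=0.4):
    """SPR relative to water: (n_e,tissue/n_e,water) * L_tissue / L_water."""
    return (rel_electron_density
            * bethe_L(beta2, I_tissue_eV) / bethe_L(beta2, I_water_eV))

# Example: a soft-tissue-like material with electron density 1.04 x water
# and I = 75 eV, for a proton with beta^2 = 0.4 (roughly a 270 MeV proton).
print(f"SPR = {spr(1.04, 75.0):.4f}")
```

Because the stopping number depends only logarithmically on the I-value, the SPR is dominated by the electron-density ratio, which is why percent-level I-value choices (Janni vs. ICRU) shift SPR by only about a percent.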

  13. Teaching Statistics Online Using "Excel"

    ERIC Educational Resources Information Center

    Jerome, Lawrence

    2011-01-01

    As anyone who has taught or taken a statistics course knows, statistical calculations can be tedious and error-prone, with the details of a calculation sometimes distracting students from understanding the larger concepts. Traditional statistics courses typically use scientific calculators, which can relieve some of the tedium and errors but…

  14. Human Factors Process Task Analysis: Liquid Oxygen Pump Acceptance Test Procedure at the Advanced Technology Development Center

    NASA Technical Reports Server (NTRS)

    Diorio, Kimberly A.; Voska, Ned (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides information on Human Factors Process Failure Modes and Effects Analysis (HF PFMEA). HF PFMEA includes the following 10 steps: Describe mission; Define system; Identify human-machine; List human actions; Identify potential errors; Identify factors that affect error; Determine likelihood of error; Determine potential effects of errors; Evaluate risk; Generate solutions (manage error). The presentation also describes how this analysis was applied to a liquid oxygen pump acceptance test.

  15. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  16. Global optimization method based on ray tracing to achieve optimum figure error compensation

    NASA Astrophysics Data System (ADS)

    Liu, Xiaolin; Guo, Xuejia; Tang, Tianjin

    2017-02-01

    Figure error degrades the performance of an optical system. When predicting the performance and performing system assembly, compensation by clocking of optical components around the optical axis is a conventional but user-dependent method. Commercial optical software cannot optimize this clocking. Meanwhile, existing automatic figure-error balancing methods introduce approximation errors, and building their optimization models is complex and time-consuming. To overcome these limitations, an accurate and automatic global optimization method of figure error balancing is proposed. This method is based on precise ray tracing to calculate the wavefront error, not approximate calculation, for a given combination of element rotation angles. The composite wavefront error root-mean-square (RMS) acts as the cost function. A simulated annealing algorithm is used to seek the optimal combination of rotation angles of each optical element. This method can be applied to all rotationally symmetric optics. Optimization results show that this method is 49% better than the previous approximate analytical method.
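The simulated-annealing search over clocking angles can be sketched with a toy cost function. The real method evaluates the composite wavefront RMS by exact ray tracing; here each element's figure error is modeled as a simple sinusoidal contribution in its clocking angle, purely for illustration.

```python
import math
import random

random.seed(4)

# Toy stand-in for the ray-traced composite wavefront RMS: each element
# contributes a sinusoidal term in its clocking angle, minimized by some
# "unknown" optimal combination of angles.
def wavefront_rms(angles):
    amp = [1.0, 0.7, 0.5]
    best = [0.3, 2.1, 4.0]        # hidden optimal clockings (radians)
    return sum(a * (1 - math.cos(t - b)) for a, t, b in zip(amp, angles, best))

def anneal(cost, n_elem=3, steps=20_000, T0=1.0):
    x = [random.uniform(0, 2 * math.pi) for _ in range(n_elem)]
    fx = cost(x)
    for k in range(steps):
        T = T0 * (1 - k / steps) + 1e-4          # linear cooling schedule
        y = [(t + random.gauss(0, 0.3)) % (2 * math.pi) for t in x]
        fy = cost(y)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if fy < fx or random.random() < math.exp((fx - fy) / T):
            x, fx = y, fy
    return x, fx

angles, rms = anneal(wavefront_rms)
print("best clocking angles:", [round(a, 2) for a in angles],
      "cost:", round(rms, 4))
```

Swapping the toy cost for a ray-trace call gives the structure of the proposed method; the annealing loop itself is unchanged.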

  17. The contributions of human factors on human error in Malaysia aviation maintenance industries

    NASA Astrophysics Data System (ADS)

    Padil, H.; Said, M. N.; Azizan, A.

    2018-05-01

    Aviation maintenance is a multitasking activity in which individuals perform varied tasks under constant pressure to meet deadlines as well as challenging work conditions. These situational characteristics combined with human factors can lead to various types of human related errors. The primary objective of this research is to develop a structural relationship model that incorporates human factors, organizational factors, and their impact on human errors in aviation maintenance. Towards that end, a questionnaire was developed which was administered to Malaysian aviation maintenance professionals. A Structural Equation Modelling (SEM) approach was used in this study utilizing AMOS software. Results showed a significant relationship between human factors and human errors in the tested model. Human factors had a partial effect on organizational factors, while organizational factors had a direct and positive impact on human errors. It was also revealed that organizational factors contributed to human errors when coupled with the human factors construct. This study has contributed to the advancement of knowledge on human factors affecting safety and has provided guidelines for improving human factors performance relating to aviation maintenance activities, and could be used as a reference for improving safety performance in Malaysian aviation maintenance companies.

  18. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1980-01-01

    Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents is investigated. Correction of the sources of human error requires that one attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  19. Kinematic Model-Based Pedestrian Dead Reckoning for Heading Correction and Lower Body Motion Tracking.

    PubMed

    Lee, Min Su; Ju, Hojin; Song, Jin Woo; Park, Chan Gook

    2015-11-06

    In this paper, we present a method for finding the enhanced heading and position of pedestrians by fusing the Zero velocity UPdaTe (ZUPT)-based pedestrian dead reckoning (PDR) and the kinematic constraints of the lower human body. ZUPT is a well known algorithm for PDR, and provides a sufficiently accurate position solution for short term periods, but it cannot guarantee a stable and reliable heading because it suffers from magnetic disturbance in determining heading angles, which degrades the overall position accuracy as time passes. The basic idea of the proposed algorithm is integrating the left and right foot positions obtained by ZUPTs with the heading and position information from an IMU mounted on the waist. To integrate this information, a kinematic model of the lower human body, which is calculated by using orientation sensors mounted on both thighs and calves, is adopted. We note that the position of the left and right feet cannot be apart because of the kinematic constraints of the body, so the kinematic model generates new measurements for the waist position. The Extended Kalman Filter (EKF) on the waist data that estimates and corrects error states uses these measurements and magnetic heading measurements, which enhances the heading accuracy. The updated position information is fed into the foot mounted sensors, and reupdate processes are performed to correct the position error of each foot. The proposed update-reupdate technique consequently ensures improved observability of error states and position accuracy. Moreover, the proposed method provides all the information about the lower human body, so that it can be applied more effectively to motion tracking. The effectiveness of the proposed algorithm is verified via experimental results, which show that a 1.25% Return Position Error (RPE) with respect to walking distance is achieved.
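The ZUPT stage of the pipeline described above hinges on detecting stance phases from foot-mounted inertial data. The sketch below is a minimal illustration on a synthetic accelerometer trace (stance noise around gravity alternating with a swing burst); thresholds and rates are assumptions, not the paper's tuned values.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic foot-mounted accelerometer magnitude: quiet stance phases
# (low variance around gravity) alternating with a swing phase.
fs = 100                                   # sample rate, Hz (assumed)
stance = 9.81 + rng.normal(0, 0.05, 100)   # 1 s of quiet stance
swing = (9.81 + 4.0 * np.sin(np.linspace(0, 6 * np.pi, 60))
         + rng.normal(0, 0.3, 60))         # 0.6 s of swing
accel = np.concatenate([stance, swing, stance])

def zupt_mask(a, win=10, thresh=0.15):
    """Flag samples where the windowed std of the signal is below thresh."""
    pad = np.pad(a, win // 2, mode="edge")
    std = np.array([pad[i:i + win].std() for i in range(len(a))])
    return std < thresh

mask = zupt_mask(accel)
print(f"{mask.mean():.0%} of samples flagged as zero-velocity")
```

In a full PDR system, each flagged interval triggers a zero-velocity measurement update in the foot-mounted filter, bounding velocity drift between steps; the paper's contribution is fusing those per-foot solutions with the waist IMU through the lower-body kinematic model.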

  20. Human Reliability and the Cost of Doing Business

    NASA Technical Reports Server (NTRS)

    DeMott, D. L.

    2014-01-01

    Human error cannot be defined unambiguously in advance of it happening; an action often becomes an error only after the fact. The same action can result in a tragic accident in one situation or a heroic action given a more favorable outcome. People often forget that we employ humans in business and industry for the flexibility and capability to change when needed. In complex systems, operations are driven by system specifications and structure; people provide the flexibility to make it work. Human error has been reported as being responsible for 60%-80% of failures, accidents and incidents in high-risk industries. We don't have to accept that all human errors are inevitable. Through the use of some basic techniques, many potential human error events can be addressed. There are actions that can be taken to reduce the risk of human error.

  1. Linear modeling of human hand-arm dynamics relevant to right-angle torque tool interaction.

    PubMed

    Ay, Haluk; Sommerich, Carolyn M; Luscher, Anthony F

    2013-10-01

    A new protocol was evaluated for identification of stiffness, mass, and damping parameters employing a linear model for human hand-arm dynamics relevant to right-angle torque tool use. Powered torque tools are widely used to tighten fasteners in manufacturing industries. While these tools increase accuracy and efficiency of tightening processes, operators are repetitively exposed to impulsive forces, posing risk of upper extremity musculoskeletal injury. A novel testing apparatus was developed that closely mimics biomechanical exposure in torque tool operation. Forty experienced torque tool operators were tested with the apparatus to determine model parameters and validate the protocol for physical capacity assessment. A second-order hand-arm model with parameters extracted in the time domain met model accuracy criterion of 5% for time-to-peak displacement error in 93% of trials (vs. 75% for frequency domain). Average time-to-peak handle displacement and relative peak handle force errors were 0.69 ms and 0.21%, respectively. Model parameters were significantly affected by gender and working posture. Protocol and numerical calculation procedures provide an alternative method for assessing mechanical parameters relevant to right-angle torque tool use. The protocol more closely resembles tool use, and calculation procedures demonstrate better performance of parameter extraction using time domain system identification methods versus frequency domain. Potential future applications include parameter identification for in situ torque tool operation and equipment development for human hand-arm dynamics simulation under impulsive forces that could be used for assessing torque tools based on factors relevant to operator health (handle dynamics and hand-arm reaction force).
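The second-order hand-arm model underlying the protocol is a mass-spring-damper driven by the tool reaction force. The sketch below simulates that model's response to a half-sine force pulse and extracts the time-to-peak handle displacement, the quantity the study's accuracy criterion is based on; the parameter values are round illustrative numbers, not the study's fitted results.

```python
import numpy as np

# Second-order linear hand-arm model: m*x'' + c*x' + k*x = F(t),
# driven by a half-sine pulse such as a torque-tool reaction force.
m, c, k = 2.0, 40.0, 4000.0        # kg, N*s/m, N/m  (assumed values)
dt, T = 1e-4, 0.5
n = int(T / dt)
t = np.arange(n) * dt

F = np.where(t < 0.05, 100.0 * np.sin(np.pi * t / 0.05), 0.0)  # force pulse, N

# Semi-implicit Euler integration of the state (x, v).
x = np.zeros(n)
v = np.zeros(n)
for i in range(n - 1):
    a = (F[i] - c * v[i] - k * x[i]) / m
    v[i + 1] = v[i] + a * dt
    x[i + 1] = x[i] + v[i + 1] * dt

t_peak = t[np.argmax(x)]           # time to peak handle displacement
print(f"peak displacement {x.max()*1000:.2f} mm at t = {t_peak*1000:.1f} ms")
```

Parameter identification then amounts to adjusting `m`, `c`, `k` until the simulated time-to-peak and peak force match the measured handle response, which is the time-domain extraction the study found more accurate than the frequency-domain alternative.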

  2. Improved volumetric measurement of brain structure with a distortion correction procedure using an ADNI phantom.

    PubMed

    Maikusa, Norihide; Yamashita, Fumio; Tanaka, Kenichiro; Abe, Osamu; Kawaguchi, Atsushi; Kabasawa, Hiroyuki; Chiba, Shoma; Kasahara, Akihiro; Kobayashi, Nobuhisa; Yuasa, Tetsuya; Sato, Noriko; Matsuda, Hiroshi; Iwatsubo, Takeshi

    2013-06-01

    Serial magnetic resonance imaging (MRI) images acquired from multisite and multivendor MRI scanners are widely used in measuring longitudinal structural changes in the brain. Precise and accurate measurements are important in understanding the natural progression of neurodegenerative disorders such as Alzheimer's disease. However, geometric distortions in MRI images decrease the accuracy and precision of volumetric or morphometric measurements. To solve this problem, the authors suggest a commercially available phantom-based distortion correction method that accommodates the variation in geometric distortion within MRI images obtained with multivendor MRI scanners. The authors' method is based on image warping using a polynomial function. The method detects fiducial points within a phantom image using phantom analysis software developed by the Mayo Clinic and calculates warping functions for distortion correction. To quantify the effectiveness of the authors' method, the authors corrected phantom images obtained from multivendor MRI scanners and calculated the root-mean-square (RMS) of fiducial errors and the circularity ratio as evaluation values. The authors also compared the performance of the authors' method with that of a distortion correction method based on a spherical harmonics description of the generic gradient design parameters. Moreover, the authors evaluated whether this correction improves the test-retest reproducibility of voxel-based morphometry in human studies. A Wilcoxon signed-rank test with uncorrected and corrected images was performed. The root-mean-square errors and circularity ratios for all slices significantly improved (p < 0.0001) after the authors' distortion correction. Additionally, the authors' method was significantly better than a distortion correction method based on a description of spherical harmonics in improving the distortion of root-mean-square errors (p < 0.001 and 0.0337, respectively). 
Moreover, the authors' method reduced the RMS error arising from gradient nonlinearity more than gradwarp methods. In human studies, the coefficient of variation of voxel-based morphometry analysis of the whole brain improved significantly from 3.46% to 2.70% after distortion correction of the whole gray matter using the authors' method (Wilcoxon signed-rank test, p < 0.05). The authors proposed a phantom-based distortion correction method to improve reproducibility in longitudinal structural brain analysis using multivendor MRI. The authors evaluated the authors' method for phantom images in terms of two geometrical values and for human images in terms of test-retest reproducibility. The results showed that distortion was corrected significantly using the authors' method. In human studies, the reproducibility of voxel-based morphometry analysis for the whole gray matter significantly improved after distortion correction using the authors' method.
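The phantom-based correction pipeline reduces to fitting a polynomial warp from detected (distorted) fiducial positions to their known locations and reporting the RMS fiducial error before and after. The sketch below uses a synthetic radial distortion and a 2-D polynomial basis chosen for illustration; the actual basis order and phantom geometry are the authors' choices, not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)

# Known fiducial grid and a synthetic mild radial distortion with
# detection noise (illustrative model, not real MRI distortion).
true_pts = np.array([(i, j) for i in range(-5, 6) for j in range(-5, 6)],
                    dtype=float)
r2 = (true_pts**2).sum(axis=1, keepdims=True)
distorted = true_pts * (1 + 0.0005 * r2) + rng.normal(0, 0.01, true_pts.shape)

def poly_terms(p):
    x, y = p[:, 0], p[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x*x, x*y, y*y,
                            x**3, x*x*y, x*y*y, y**3])

# Least-squares warp coefficients, one set per output coordinate.
A = poly_terms(distorted)
coef, *_ = np.linalg.lstsq(A, true_pts, rcond=None)
corrected = A @ coef

def rms(pts):
    return np.sqrt(((pts - true_pts) ** 2).sum(axis=1).mean())

print(f"RMS fiducial error: {rms(distorted):.3f} -> {rms(corrected):.4f}")
```

Applying the fitted warp to whole images (rather than just fiducials) is what improves the downstream voxel-based morphometry reproducibility reported in the abstract.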

  3. Error Patterns with Fraction Calculations at Fourth Grade as a Function of Students' Mathematics Achievement Status

    ERIC Educational Resources Information Center

    Schumacher, Robin F.; Malone, Amelia S.

    2017-01-01

    The goal of this study was to describe fraction-calculation errors among fourth-grade students and to determine whether error patterns differed as a function of problem type (addition vs. subtraction; like vs. unlike denominators), orientation (horizontal vs. vertical), or mathematics-achievement status (low-, average-, or high-achieving). We…

  4. Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant

    PubMed Central

    Jahangiri, Mehdi; Hoboubi, Naser; Rostamabadi, Akbar; Keshavarzi, Sareh; Hosseini, Ali Akbar

    2015-01-01

    Background A permit to work (PTW) is a formal written system to control certain types of work which are identified as potentially hazardous. However, human error in PTW processes can lead to an accident. Methods This cross-sectional, descriptive study was conducted to estimate the probability of human errors in PTW processes in a chemical plant in Iran. In the first stage, through interviewing the personnel and studying the procedure in the plant, the PTW process was analyzed using the hierarchical task analysis technique. In doing so, PTW was considered as a goal and detailed tasks to achieve the goal were analyzed. In the next step, the standardized plant analysis risk-human (SPAR-H) reliability analysis method was applied for estimation of human error probability. Results The mean probability of human error in the PTW system was estimated to be 0.11. The highest probability of human error in the PTW process was related to flammable gas testing (50.7%). Conclusion The SPAR-H method applied in this study could analyze and quantify the potential human errors and extract the required measures for reducing the error probabilities in PTW system. Some suggestions to reduce the likelihood of errors, especially in the field of modifying the performance shaping factors and dependencies among tasks are provided. PMID:27014485
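The SPAR-H estimation step can be sketched as a nominal human error probability scaled by the product of performance shaping factor (PSF) multipliers, with the standard adjustment that keeps the result a valid probability when PSFs are degrading. The PSF values are illustrative, not the study's assessments, and this sketch applies the adjustment whenever the composite multiplier exceeds 1 (a simplification; SPAR-H prescribes it when three or more PSFs are negative).

```python
# SPAR-H style HEP calculation (simplified illustration).
NOMINAL_HEP = 0.001    # nominal HEP for an action task in SPAR-H

def spar_h_hep(nominal, psf_multipliers):
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    if composite <= 1.0:
        return nominal * composite
    # Adjustment: HEP = NHEP*PSFc / (NHEP*(PSFc - 1) + 1),
    # which bounds the result below 1 for large composite multipliers.
    return nominal * composite / (nominal * (composite - 1.0) + 1.0)

# Example: high stress (x2), poor procedures (x5), other PSFs nominal (x1).
hep = spar_h_hep(NOMINAL_HEP, [2.0, 5.0, 1.0, 1.0])
print(f"adjusted HEP = {hep:.4f}")
```

Summing or combining such per-task HEPs across the hierarchical task analysis of the PTW process yields overall figures like the 0.11 mean probability reported in the study.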

  5. Particle deposition in human respiratory system: deposition of concentrated hygroscopic aerosols.

    PubMed

    Varghese, Suresh K; Gangamma, S

    2009-06-01

    In the nearly saturated human respiratory tract, the presence of water-soluble substances in the inhaled aerosols can cause a change in the size distribution of the particles. This consequently alters the lung deposition profiles of the inhaled airborne particles. Similarly, the presence of a high concentration of hygroscopic aerosols also affects the water vapor and temperature profiles in the respiratory tract. A model is presented to analyze these effects in the human respiratory system. The model simultaneously solves the heat and mass transfer equations to determine the size evolution of respirable particles and gas-phase properties within the human respiratory tract. First, the model predictions for nonhygroscopic aerosols are compared with experimental results. The model results are then compared with experimental results for sodium chloride particles. The model reproduces the major features of the experimental data. The water vapor profile is significantly modified only when a high concentration of particles is present. The model is used to study the effect of equilibrium assumptions on particle deposition. Simulations show that an infinite-dilution solution assumption used to calculate the saturation equilibrium over a droplet could induce errors in estimating particle growth. This error is significant for particles larger than 1 μm and at number concentrations higher than 10^5/cm^3.

  6. Local-search based prediction of medical image registration error

    NASA Astrophysics Data System (ADS)

    Saygili, Görkem

    2018-03-01

    Medical image registration is a crucial task in many medical imaging applications, and considerable work has recently been published that aims to predict the error of a registration without any human effort. If available, these error predictions can be fed back to the registration algorithm to further improve its performance. Recent methods generally start by extracting image-based and deformation-based features, then apply feature pooling, and finally train a Random Forest (RF) regressor to predict the real registration error. Image-based features can be calculated after a single registration but provide limited accuracy, whereas deformation-based features, such as the variation of the deformation vector field, may require up to 20 registrations, which is considerably time-consuming. This paper proposes using features extracted from a local search algorithm as image-based features to estimate the error of a registration. The proposed method uses a local search algorithm to find corresponding voxels between registered image pairs and, based on the amount of shift and on stereo confidence measures, densely predicts the registration error in millimetres using an RF regressor. Compared to other algorithms in the literature, the proposed algorithm does not require multiple registrations, can be efficiently implemented on a Graphics Processing Unit (GPU), and still provides highly accurate error predictions in the presence of large registration errors. Experimental results with real registrations on a public dataset indicate that substantially high accuracy is achieved by using features from the local search algorithm.
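As a concrete illustration of the feature-extraction step, the sketch below runs a 1-D local search between a reference signal and a "registered" signal, recording the best-matching shift and a best-versus-second-best confidence margin per sample. This is a toy analogue of the voxel-wise search described above, not the paper's implementation (the actual method works on 3-D volumes and feeds the features to an RF regressor).

```python
import numpy as np

# Toy 1-D analogue of local-search feature extraction: for each sample
# of the "registered" signal, search a small window in the reference for
# the best match; record the residual shift and a confidence margin.

def local_search_features(reference, registered, radius=3, patch=2):
    n = len(reference)
    shifts = np.zeros(n)
    confidence = np.zeros(n)
    for i in range(patch, n - patch):
        costs = []
        for s in range(-radius, radius + 1):
            j = i + s
            if j - patch < 0 or j + patch >= n:
                costs.append(np.inf)  # candidate window out of bounds
                continue
            a = registered[i - patch:i + patch + 1]
            b = reference[j - patch:j + patch + 1]
            costs.append(float(np.sum((a - b) ** 2)))  # SSD match cost
        costs = np.array(costs)
        best = int(np.argmin(costs))
        shifts[i] = best - radius                  # residual displacement
        second_best = np.partition(costs, 1)[1]
        confidence[i] = second_best - costs[best]  # match distinctiveness
    return shifts, confidence
```

Large residual shifts with high confidence margins are exactly the samples where a regressor would predict a large registration error.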

  7. Managing human fallibility in critical aerospace situations

    NASA Astrophysics Data System (ADS)

    Tew, Larry

    2014-11-01

    Human fallibility is pervasive in the aerospace industry, with over 50% of incidents attributed to human error. Consider the benefits to any organization if those errors were significantly reduced. Aerospace manufacturing involves high-value, high-profile systems with significant complexity and often repetitive build, assembly, and test operations. In spite of extensive analysis, planning, training, and detailed procedures, human factors can cause unexpected errors. Handling such errors involves extensive cause and corrective action analysis and invariably leads to schedule slips and cost growth. We will discuss success stories, including those associated with electro-optical systems, where very significant reductions in human fallibility errors were achieved after receiving adapted and specialized training. In the eyes of company and customer leadership, the steps used to achieve these results led to a major culture change in both the workforce and the supporting management organization. This approach has proven effective in other industries such as medicine, firefighting, law enforcement, and aviation. The roadmap to success and the steps to minimize human error are known. They can be used by any organization willing to accept human fallibility and take a proactive approach to incorporating the steps needed to manage and minimize error.

  8. Prediction of human errors by maladaptive changes in event-related brain networks.

    PubMed

    Eichele, Tom; Debener, Stefan; Calhoun, Vince D; Specht, Karsten; Engel, Andreas K; Hugdahl, Kenneth; von Cramon, D Yves; Ullsperger, Markus

    2008-04-22

    Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we studied error-preceding brain activity on a trial-by-trial basis. We found a set of brain regions in which the temporal evolution of activation predicted performance errors. These maladaptive brain activity changes started to evolve approximately 30 sec before the error. In particular, a coincident decrease of deactivation in default mode regions of the brain, together with a decline of activation in regions associated with maintaining task effort, raised the probability of future errors. Our findings provide insights into the brain network dynamics preceding human performance errors and suggest that monitoring of the identified precursor states may help in avoiding human errors in critical real-world situations.

  9. Prediction of human errors by maladaptive changes in event-related brain networks

    PubMed Central

    Eichele, Tom; Debener, Stefan; Calhoun, Vince D.; Specht, Karsten; Engel, Andreas K.; Hugdahl, Kenneth; von Cramon, D. Yves; Ullsperger, Markus

    2008-01-01

    Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we studied error-preceding brain activity on a trial-by-trial basis. We found a set of brain regions in which the temporal evolution of activation predicted performance errors. These maladaptive brain activity changes started to evolve ≈30 sec before the error. In particular, a coincident decrease of deactivation in default mode regions of the brain, together with a decline of activation in regions associated with maintaining task effort, raised the probability of future errors. Our findings provide insights into the brain network dynamics preceding human performance errors and suggest that monitoring of the identified precursor states may help in avoiding human errors in critical real-world situations. PMID:18427123

  10. Defining the Relationship Between Human Error Classes and Technology Intervention Strategies

    NASA Technical Reports Server (NTRS)

    Wiegmann, Douglas A.; Rantanen, Eas M.

    2003-01-01

    The modus operandi in addressing human error in aviation systems is predominantly that of technological interventions or fixes. Such interventions exhibit considerable variability both in terms of sophistication and application. Some technological interventions address human error directly while others do so only indirectly. Some attempt to eliminate the occurrence of errors altogether whereas others look to reduce the negative consequences of these errors. In any case, technological interventions add to the complexity of the systems and may interact with other system components in unforeseeable ways and often create opportunities for novel human errors. Consequently, there is a need to develop standards for evaluating the potential safety benefit of each of these intervention products so that resources can be effectively invested to produce the biggest benefit to flight safety as well as to mitigate any adverse ramifications. The purpose of this project was to help define the relationship between human error and technological interventions, with the ultimate goal of developing a set of standards for evaluating or measuring the potential benefits of new human error fixes.

  11. Differential sea-state bias: A case study using TOPEX/POSEIDON data

    NASA Technical Reports Server (NTRS)

    Stewart, Robert H.; Devalla, B.

    1994-01-01

    We used selected data from the NASA altimeter TOPEX/POSEIDON to calculate differences in range measured by the C and Ku-band altimeters when the satellite overflew 5 to 15 m waves late at night. The range difference is due to free electrons in the ionosphere and to errors in sea-state bias. For the selected data the ionospheric influence on Ku range is less than 2 cm. Any difference in range over short horizontal distances is due only to a small along-track variability of the ionosphere and to errors in calculating the differential sea-state bias. We find that there is a barely detectable error in the bias in the geophysical data records. The wave-induced error in the ionospheric correction is less than 0.2% of significant wave height. The equivalent error in differential range is less than 1% of wave height. Errors in the differential sea-state bias calculations appear to be small even for extreme wave heights that greatly exceed the conditions on which the bias is based. The results also improved our confidence in the sea-state bias correction used for calculating the geophysical data records. Any error in the correction must influence Ku and C-band ranges almost equally.
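The differential analysis above rests on the standard dual-frequency relation, in which ionospheric range delay scales as 1/f². A minimal sketch using nominal TOPEX Ku- and C-band frequencies follows; the range values used to exercise it are invented.

```python
# Sketch of the standard dual-frequency ionospheric range correction.
# The measured range at frequency f is R_f = R_true + k*TEC/f^2, so the
# Ku-C range difference isolates the frequency-dependent ionospheric
# term. Frequencies are nominal TOPEX values.

F_KU = 13.6e9  # Ku-band frequency, Hz
F_C = 5.3e9    # C-band frequency, Hz

def iono_delay_ku(range_ku_m, range_c_m):
    """Ionospheric range delay on the Ku channel, in meters."""
    return (range_ku_m - range_c_m) * F_C ** 2 / (F_C ** 2 - F_KU ** 2)
```

Because the correction is built from the Ku-C difference, any error that shifts both ranges equally (such as a common sea-state bias) cancels, which is why the abstract can isolate the differential sea-state bias this way.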

  12. Applying lessons learned to enhance human performance and reduce human error for ISS operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, W.R.

    1999-01-01

    A major component of reliability, safety, and mission success for space missions is ensuring that the humans involved (flight crew, ground crew, mission control, etc.) perform their tasks and functions as required. This includes compliance with training and procedures during normal conditions, and successful compensation when malfunctions or unexpected conditions occur. A very significant issue that affects human performance in space flight is human error. Human errors can invalidate carefully designed equipment and procedures. If certain errors combine with equipment failures or design flaws, mission failure or loss of life can occur. The control of human error during operation of the International Space Station (ISS) will be critical to the overall success of the program. As experience from Mir operations has shown, human performance plays a vital role in the success or failure of long duration space missions. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) is developing a systematic approach to enhance human performance and reduce human errors for ISS operations. This approach is based on the systematic identification and evaluation of lessons learned from past space missions such as Mir to enhance the design and operation of ISS. This paper will describe previous INEEL research on human error sponsored by NASA and how it can be applied to enhance human reliability for ISS. © 1999 American Institute of Physics.

  13. Applying lessons learned to enhance human performance and reduce human error for ISS operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, W.R.

    1998-09-01

    A major component of reliability, safety, and mission success for space missions is ensuring that the humans involved (flight crew, ground crew, mission control, etc.) perform their tasks and functions as required. This includes compliance with training and procedures during normal conditions, and successful compensation when malfunctions or unexpected conditions occur. A very significant issue that affects human performance in space flight is human error. Human errors can invalidate carefully designed equipment and procedures. If certain errors combine with equipment failures or design flaws, mission failure or loss of life can occur. The control of human error during operation of the International Space Station (ISS) will be critical to the overall success of the program. As experience from Mir operations has shown, human performance plays a vital role in the success or failure of long duration space missions. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has developed a systematic approach to enhance human performance and reduce human errors for ISS operations. This approach is based on the systematic identification and evaluation of lessons learned from past space missions such as Mir to enhance the design and operation of ISS. This paper describes previous INEEL research on human error sponsored by NASA and how it can be applied to enhance human reliability for ISS.

  14. Vessel Sampling and Blood Flow Velocity Distribution With Vessel Diameter for Characterizing the Human Bulbar Conjunctival Microvasculature.

    PubMed

    Wang, Liang; Yuan, Jin; Jiang, Hong; Yan, Wentao; Cintrón-Colón, Hector R; Perez, Victor L; DeBuc, Delia C; Feuer, William J; Wang, Jianhua

    2016-03-01

    This study determined (1) how many vessels (i.e., the vessel sampling size) are needed to reliably characterize the bulbar conjunctival microvasculature and (2) whether characteristic information can be obtained from the distribution histograms of blood flow velocity and vessel diameter. A functional slit-lamp biomicroscope was used to image hundreds of venules per subject. The bulbar conjunctiva in five healthy human subjects was imaged at six different locations in the temporal bulbar conjunctiva. Histograms of the diameter and velocity were plotted to examine whether the distributions were normal. Standard errors were calculated from the standard deviation and the vessel sample size. The ratio of the standard error of the mean to the population mean was used to determine the sample-size cutoff. The velocity was plotted as a function of the vessel diameter to display the joint distribution of diameter and velocity. The results showed that a sample of approximately 15 vessels generated a standard error equivalent to 15% of the mean of the total vessel population. The distributions of the diameter and velocity were unimodal but somewhat positively skewed and not normal. The blood flow velocity was related to the vessel diameter (r=0.23, P<0.05). This was the first study to determine the sampling size of the vessels and the distribution histograms of blood flow velocity and vessel diameter, which may lead to a better understanding of the human microvascular system of the bulbar conjunctiva.
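The sampling-size criterion above can be sketched numerically: the standard error of the mean is SD/√n, and the cutoff is the smallest n for which SE falls within a target fraction of the mean. The SD-to-mean ratio of 0.58 below is an illustrative value (not the study's data), chosen so the 15% cutoff lands near the reported ~15 vessels.

```python
import math

# Numeric sketch of the sampling-size criterion: SE = SD / sqrt(n),
# expressed as a fraction of the mean. The 0.58 ratio is illustrative.

def se_ratio(sd, mean, n):
    """Standard error of the mean as a fraction of the mean."""
    return (sd / math.sqrt(n)) / mean

def min_sample_size(sd, mean, target_ratio=0.15):
    """Smallest n whose standard error is within target_ratio of the mean."""
    n = 1
    while se_ratio(sd, mean, n) > target_ratio:
        n += 1
    return n

n_needed = min_sample_size(sd=0.58, mean=1.0)  # -> 15
```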

  15. Manually locating physical and virtual reality objects.

    PubMed

    Chen, Karen B; Kimmel, Ryan A; Bartholomew, Aaron; Ponto, Kevin; Gleicher, Michael L; Radwin, Robert G

    2014-09-01

    In this study, we compared how users locate physical objects and equivalent three-dimensional images of virtual objects in a cave automatic virtual environment (CAVE) using the hand, to examine how human performance (accuracy, time, and approach) is affected by object size, location, and distance. Virtual reality (VR) offers the promise of flexibly simulating arbitrary environments for studying human performance. Previously, VR researchers primarily considered differences between virtual and physical distance estimation rather than reaching for close-up objects. Fourteen participants completed manual targeting tasks that involved reaching for corners on equivalent physical and virtual boxes of three different sizes. Predicted errors were calculated from a geometric model based on user interpupillary distance, eye location, and the distances from the eyes to the projector screen and to the object. Users were 1.64 times less accurate (p < .001) and spent 1.49 times more time (p = .01) targeting virtual versus physical box corners using their hands. Predicted virtual targeting errors were on average 1.53 times (p < .05) greater than the observed errors for farther virtual targets but not significantly different for close-up virtual targets. Target size, location, and distance, in addition to binocular disparity, affected virtual object targeting inaccuracy. Observed virtual box inaccuracy was less than predicted for farther locations, suggesting a possible influence of cues other than binocular vision. Human physical interaction with objects in VR for simulation, training, and prototyping, involving reaching for and manually handling virtual objects in a CAVE, is more accurate than predicted when locating farther objects.

  16. Output Error Analysis of Planar 2-DOF Five-bar Mechanism

    NASA Astrophysics Data System (ADS)

    Niu, Kejia; Wang, Jun; Ting, Kwun-Lon; Tao, Fen; Cheng, Qunchao; Wang, Quan; Zhang, Kaiyang

    2018-03-01

    To address the mechanism error caused by joint clearance in a planar 2-DOF five-bar linkage, the clearance of each kinematic pair is modeled as an equivalent virtual link. A structural error model for revolute joint clearance is established based on the N-bar rotation laws and the concept of joint rotation space. The influence of joint clearance on the output error of the mechanism is studied, and the method and basis for calculating the maximum error are given. The error rotation space of the mechanism under the influence of joint clearance is obtained. The results show that this method can accurately calculate the error rotation space, providing a new way to analyze the error of planar parallel mechanisms caused by joint clearance.

  17. Structured methods for identifying and correcting potential human errors in aviation operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, W.R.

    1997-10-01

    Human errors have been identified as the source of approximately 60% of the incidents and accidents that occur in commercial aviation. It can be assumed that a very large number of human errors occur in aviation operations, even though in most cases the redundancies and diversities built into the design of aircraft systems prevent the errors from leading to serious consequences. In addition, when it is acknowledged that many system failures have their roots in human errors that occur in the design phase, it becomes apparent that the identification and elimination of potential human errors could significantly decrease the risks of aviation operations. This will become even more critical during the design of advanced automation-based aircraft systems as well as next-generation systems for air traffic management. Structured methods to identify and correct potential human errors in aviation operations have been developed and are currently undergoing testing at the Idaho National Engineering and Environmental Laboratory (INEEL).

  18. Human errors and violations in computer and information security: the viewpoint of network administrators and security specialists.

    PubMed

    Kraemer, Sara; Carayon, Pascale

    2007-03-01

    This paper describes human errors and violations of end users and network administrators in computer and information security. This information is summarized in a conceptual framework for examining the human and organizational factors contributing to computer and information security. This framework includes human error taxonomies to describe the work conditions that contribute adversely to computer and information security, i.e., to security vulnerabilities and breaches. The issue of human error and violation in computer and information security was explored through a series of 16 interviews with network administrators and security specialists. The interviews were audiotaped, transcribed, and analyzed by coding specific themes in a node structure. The result is an expanded framework that classifies types of human error and identifies specific human and organizational factors that contribute to computer and information security. Network administrators tended to view errors created by end users as more intentional than unintentional, and errors created by network administrators as more unintentional than intentional. Organizational factors, such as communication, security culture, policy, and organizational structure, were the most frequently cited factors associated with computer and information security.

  19. Intervention strategies for the management of human error

    NASA Technical Reports Server (NTRS)

    Wiener, Earl L.

    1993-01-01

    This report examines the management of human error in the cockpit. The principles probably apply to other applications in the aviation realm (e.g., air traffic control, dispatch, weather) as well as to high-risk systems outside of aviation (e.g., shipping, high-technology medical procedures, military operations, nuclear power production). Management of human error is distinguished from error prevention: it is a more encompassing term, which includes not only the prevention of error but also means of preventing an error, once made, from adversely affecting system output. Such techniques include traditional human factors engineering; improvement of feedback and feedforward of information from system to crew; 'error-evident' displays, which make erroneous input more obvious to the crew; trapping of errors within a system; goal-sharing between humans and machines (also called 'intent-driven' systems); paperwork management; and behaviorally based approaches, including procedures, standardization, checklist design, training, cockpit resource management, etc. Fifteen guidelines for the design and implementation of intervention strategies are included.

  20. Error Patterns in Portuguese Students' Addition and Subtraction Calculation Tasks: Implications for Teaching

    ERIC Educational Resources Information Center

    Watson, Silvana Maria R.; Lopes, João; Oliveira, Célia; Judge, Sharon

    2018-01-01

    Purpose: The purpose of this descriptive study is to investigate why some elementary children have difficulties mastering addition and subtraction calculation tasks. Design/methodology/approach: The researchers have examined error types in addition and subtraction calculation made by 697 Portuguese students in elementary grades. Each student…

  1. Comparing errors in ED computer-assisted vs conventional pediatric drug dosing and administration.

    PubMed

    Yamamoto, Loren; Kanemori, Joan

    2010-06-01

    Compared to fixed-dose, single-vial drug administration in adults, pediatric drug dosing and administration requires a series of calculations, all of which are potentially error prone. The purpose of this study is to compare error rates and task completion times for common pediatric medication scenarios using computer program assistance vs conventional methods. Two versions of a 4-part paper-based test were developed. Each part consisted of a set of medication administration and/or dosing tasks. Emergency department and pediatric intensive care unit nurse volunteers completed these tasks using both methods (with the sequence assigned to start with either the conventional or the computer-assisted approach). Completion times, errors, and the reason for each error were recorded. Thirty-eight nurses completed the study. Summing the completion times of all 4 parts, the mean conventional total time was 1243 seconds vs the mean computer program total time of 879 seconds (P < .001). The conventional manual method had a mean of 1.8 errors vs a mean of 0.7 errors for the computer program (P < .001). Of the 97 total errors, 36 were due to misreading the drug concentration on the label, 34 were due to calculation errors, and 8 were due to misplaced decimals. Of the 36 label interpretation errors, 18 (50%) occurred with digoxin or insulin. Computerized assistance reduced errors and the time required for drug administration calculations. A pattern of errors emerged: reading and interpreting certain drug labels was more error prone. Optimizing the layout of drug labels could reduce the error rate for these error-prone labels. Copyright (c) 2010 Elsevier Inc. All rights reserved.
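The chain of calculations the abstract refers to (weight-based dose, optional cap, then volume from vial concentration) can be sketched as follows. This is a hypothetical helper, not the study's software, and the drug order in the example is invented.

```python
# Hypothetical helper illustrating the chained calculations a
# weight-based pediatric order requires; values below are invented.

def dose_volume_ml(weight_kg, dose_mg_per_kg, concentration_mg_per_ml,
                   max_dose_mg=None):
    """Return (dose_mg, volume_ml) for a weight-based order."""
    dose_mg = weight_kg * dose_mg_per_kg
    if max_dose_mg is not None:
        dose_mg = min(dose_mg, max_dose_mg)  # never exceed the cap
    volume_ml = dose_mg / concentration_mg_per_ml
    return dose_mg, round(volume_ml, 2)

# Example: 12 kg child, 0.5 mg/kg order, 2 mg/mL vial -> 6.0 mg in 3.0 mL
```

Each step (weight lookup, multiplication, cap, concentration lookup, division) is one of the error-prone manual operations the computer assistance replaced.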

  2. Error Patterns with Fraction Calculations at Fourth Grade as a Function of Students' Mathematics Achievement Status

    ERIC Educational Resources Information Center

    Schumacher, Robin F.; Malone, Amelia S.

    2017-01-01

    The goal of the present study was to describe fraction-calculation errors among 4th-grade students and determine whether error patterns differed as a function of problem type (addition vs. subtraction; like vs. unlike denominators), orientation (horizontal vs. vertical), or mathematics-achievement status (low- vs. average- vs. high-achieving). We…

  3. Exploring Reactions to Pilot Reliability Certification and Changing Attitudes on the Reduction of Errors

    ERIC Educational Resources Information Center

    Boedigheimer, Dan

    2010-01-01

    Approximately 70% of aviation accidents are attributable to human error. The greatest opportunity for further improving aviation safety is found in reducing human errors in the cockpit. The purpose of this quasi-experimental, mixed-method research was to evaluate whether there was a difference in pilot attitudes toward reducing human error in the…

  4. Evaluating a medical error taxonomy.

    PubMed

    Brixey, Juliana; Johnson, Todd R; Zhang, Jiajie

    2002-01-01

    Healthcare has been slow to use human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need to reduce medication errors, the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy, which provides a standard language for reporting medication errors. This project maps the NCC MERP taxonomy of medication errors to MedWatch medical errors involving infusion pumps. Of particular interest are the human factors associated with medical device errors. The NCC MERP taxonomy is limited in mapping information from MedWatch because of MedWatch's focus on the medical device and its reporting format.

  5. Automated estimation of abdominal effective diameter for body size normalization of CT dose.

    PubMed

    Cheng, Phillip M

    2013-06-01

    Most CT dose data aggregation methods do not currently adjust dose values for patient size. This work proposes a simple heuristic for reliably computing an effective diameter of a patient from an abdominal CT image. Evaluation of this method on 106 patients scanned on Philips Brilliance 64 and Brilliance Big Bore scanners demonstrates close correspondence between computed and manually measured patient effective diameters, with a mean absolute error of 1.0 cm (error range +2.2 to -0.4 cm). This level of correspondence was also demonstrated for 60 patients on Siemens, General Electric, and Toshiba scanners. A calculated effective diameter in the middle slice of an abdominal CT study was found to be a close approximation of the mean calculated effective diameter for the study, with a mean absolute error of approximately 1.0 cm (error range +3.5 to -2.2 cm). Furthermore, the mean absolute error for an adjusted mean volume computed tomography dose index (CTDIvol) using a mid-study calculated effective diameter, versus a mean per-slice adjusted CTDIvol based on the calculated effective diameter of each slice, was 0.59 mGy (error range 1.64 to -3.12 mGy). These results are used to calculate approximate normalized dose length product values in an abdominal CT dose database of 12,506 studies.
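The effective diameter referred to above is conventionally the diameter of the circle whose area equals the patient's cross-sectional area. A minimal sketch follows, computed from a synthetic binary body mask; this is not the paper's heuristic, and the mask is not patient data.

```python
import numpy as np

# Sketch of the effective-diameter concept: d_eff = 2 * sqrt(area / pi),
# with the cross-sectional area taken from a binary body mask of one
# axial slice. The mask below is a synthetic circle, not patient data.

def effective_diameter_cm(body_mask, pixel_spacing_cm):
    """Diameter of the circle with the same area as the mask, in cm."""
    area_cm2 = body_mask.sum() * pixel_spacing_cm ** 2
    return 2.0 * np.sqrt(area_cm2 / np.pi)

# Synthetic slice: a filled circle of radius 50 px at 0.1 cm/px,
# so the true effective diameter is 10 cm
yy, xx = np.mgrid[0:200, 0:200]
mask = (xx - 100) ** 2 + (yy - 100) ** 2 <= 50 ** 2
d_eff = effective_diameter_cm(mask, 0.1)
```

Applying this to the mid-study slice, as the abstract describes, approximates the study-mean effective diameter used for size-normalizing CTDIvol.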

  6. An Evaluation of Departmental Radiation Oncology Incident Reports: Anticipating a National Reporting System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terezakis, Stephanie A., E-mail: stereza1@jhmi.edu; Harris, Kendra M.; Ford, Eric

    Purpose: Systems to ensure patient safety are of critical importance. The electronic incident reporting systems (IRS) of 2 large academic radiation oncology departments were evaluated for events that may be suitable for submission to a national reporting system (NRS). Methods and Materials: All events recorded in the combined IRS were evaluated from 2007 through 2010. Incidents were graded for potential severity using the validated French Nuclear Safety Authority (ASN) 5-point scale. These incidents were categorized into 7 groups: (1) human error, (2) software error, (3) hardware error, (4) error in communication between 2 humans, (5) error at the human-software interface, (6) error at the software-hardware interface, and (7) error at the human-hardware interface. Results: Between the 2 systems, 4407 incidents were reported. Of these events, 1507 (34%) were considered to have the potential for clinical consequences. Of these 1507 events, 149 (10%) were rated as having a potential severity of ≥2. Of these 149 events, the committee determined that 79 (53%) would be submittable to a NRS, of which the majority were related to human error or to the human-software interface. Conclusions: A significant number of incidents were identified in this analysis. The majority of events in this study were related to human error and to the human-software interface, further supporting the need for a NRS to facilitate field-wide learning and system improvement.

  7. Sleep quality, posttraumatic stress, depression, and human errors in train drivers: a population-based nationwide study in South Korea.

    PubMed

    Jeon, Hong Jin; Kim, Ji-Hae; Kim, Bin-Na; Park, Seung Jin; Fava, Maurizio; Mischoulon, David; Kang, Eun-Ho; Roh, Sungwon; Lee, Dongsoo

    2014-12-01

    Human error is defined as an unintended error that is attributable to humans rather than machines, and that is important to avoid in order to prevent accidents. We aimed to investigate the association between sleep quality and human errors among train drivers. Design: Cross-sectional. Setting: Population-based. Participants: A sample of 5,480 subjects actively working as train drivers were recruited in South Korea; the 4,634 drivers who completed all questionnaires (response rate 84.6%) participated. Interventions: None. Measurements: The Pittsburgh Sleep Quality Index (PSQI), the Center for Epidemiologic Studies Depression Scale (CES-D), the Impact of Event Scale-Revised (IES-R), the State-Trait Anxiety Inventory (STAI), and the Korean Occupational Stress Scale (KOSS). Results: Of the 4,634 train drivers, 349 (7.5%) reported more than one human error per 5 y. Human errors were associated with poor sleep quality, higher PSQI total scores, short sleep duration at night, and longer sleep latency. Among train drivers with poor sleep quality, those who had experienced severe posttraumatic stress showed a significantly higher number of human errors than those who had not. Multiple logistic regression analysis showed that human errors were significantly associated with poor sleep quality and posttraumatic stress, whereas there were no significant associations with depression, trait and state anxiety, or work stress after adjusting for age, sex, education years, marital status, and career duration. Conclusions: Poor sleep quality was found to be associated with more human errors in train drivers, especially those who had experienced severe posttraumatic stress. © 2014 Associated Professional Sleep Societies, LLC.

  8. Computer-assisted design and synthesis of molecularly imprinted polymers for selective extraction of acetazolamide from human plasma prior to its voltammetric determination.

    PubMed

    Khodadadian, Mehdi; Ahmadi, Farhad

    2010-06-15

    Molecularly imprinted polymers (MIPs) were computationally designed and synthesized for the selective extraction of a carbonic anhydrase inhibitor, acetazolamide (ACZ), from human plasma. Density functional theory (DFT) calculations were performed to study the intermolecular interactions in the pre-polymerization mixture and to find a suitable functional monomer for MIP preparation. The interaction energies were corrected for the basis set superposition error (BSSE) using the counterpoise (CP) correction. The polymerization solvent was simulated by means of the polarizable continuum model (PCM). Acrylamide (AAM) was found to be the best candidate for preparing the MIPs. To confirm the results of the theoretical calculations, three MIPs were synthesized with different functional monomers and evaluated using the Langmuir-Freundlich (LF) isotherm. The results indicated that the most homogeneous MIP, with the highest number of binding sites, was the one prepared with AAM. This polymer was then used as a selective adsorbent to develop a molecularly imprinted solid-phase extraction procedure followed by differential pulse voltammetry (MISPE-DPV) for clean-up and determination of ACZ in human plasma.
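For context, the counterpoise (CP) scheme referred to above estimates the BSSE by evaluating each fragment in the full complex basis. A standard statement of the CP-corrected interaction energy (generic notation, not taken from this abstract) is:

```latex
\Delta E_{\mathrm{int}}^{\mathrm{CP}} = E_{AB}(AB) - E_{A}(AB) - E_{B}(AB)
```

where E_X(AB) denotes the energy of fragment X (here the template or a functional monomer) computed in the full dimer basis AB, so that the artificial stabilization each fragment gains by borrowing the partner's basis functions cancels out of the interaction energy.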

  9. Computer-assisted design and synthesis of a highly selective smart adsorbent for extraction of clonazepam from human serum.

    PubMed

    Aqababa, Heydar; Tabandeh, Mehrdad; Tabatabaei, Meisam; Hasheminejad, Meisam; Emadi, Masoomeh

    2013-01-01

    A computational approach was applied to screen functional monomers and polymerization solvents for the rational design of molecularly imprinted polymers (MIPs) as smart adsorbents for solid-phase extraction of clonazepam (CLO) from human serum. The computed binding energies of the complexes formed between the template and the functional monomers were compared. The primary computational results were corrected for both the basis set superposition error (BSSE) and the effect of the polymerization solvent, using the counterpoise (CP) correction and the polarizable continuum model, respectively. Based on the theoretical calculations, trifluoromethyl acrylic acid (TFMAA) and acrylonitrile (ACN) were found to be the best and the worst functional monomers, respectively. To test the accuracy of the computational results, three MIPs were synthesized with different functional monomers and their Langmuir-Freundlich (LF) isotherms were studied. The experimental results confirmed the computational ones and indicated that the MIP synthesized using TFMAA had the highest affinity for CLO in human serum despite the presence of a vast spectrum of ions. Copyright © 2012 Elsevier B.V. All rights reserved.

  10. Weighted-MSE based on saliency map for assessing video quality of H.264 video streams

    NASA Astrophysics Data System (ADS)

    Boujut, H.; Benois-Pineau, J.; Hadar, O.; Ahmed, T.; Bonnet, P.

    2011-01-01

    The human visual system is very complex and has been studied for many years, specifically for purposes of efficient encoding of visual content, e.g., video from digital TV. There is physiological and psychological evidence that viewers do not pay equal attention to all exposed visual information, but focus only on certain areas, known as the focus of attention (FOA) or saliency regions. In this work, we propose a novel saliency-based objective quality assessment metric for the perceptual quality of decoded video sequences affected by transmission errors and packet losses. The proposed method weights the Mean Square Error (MSE) at each pixel according to the calculated saliency map, yielding a Weighted-MSE (WMSE). Our method was validated through subjective quality experiments.
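
    The weighting scheme the abstract describes can be sketched in a few lines. The pixel values and saliency weights below are illustrative; a real implementation would operate on full frames and a computed saliency map:

```python
def weighted_mse(reference, decoded, saliency):
    """Saliency-weighted MSE: each pixel's squared error is scaled by its
    saliency weight, so distortions inside focus-of-attention regions
    dominate the score, matching where viewers actually look."""
    num = sum(w * (r - d) ** 2 for r, d, w in zip(reference, decoded, saliency))
    den = sum(saliency)
    return num / den

# Illustrative 4-pixel example: equal raw errors at pixels 0 and 2, but
# pixel 2 is four times as salient, so it contributes four times as much.
score = weighted_mse([10, 20, 30, 40], [12, 20, 28, 40], [1, 1, 4, 4])
```

    With uniform weights this reduces to the ordinary MSE, which makes the metric easy to compare against the unweighted baseline.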

  11. Automation of POST Cases via External Optimizer and "Artificial p2" Calculation

    NASA Technical Reports Server (NTRS)

    Dees, Patrick D.; Zwack, Mathew R.; Michelson, Diane K.

    2017-01-01

    During conceptual design, speed and accuracy are often at odds. In the realm of launch vehicles specifically, optimizing the ascent trajectory requires a large pool of analytical power and expertise. Experienced analysts working on familiar vehicles can produce optimal trajectories in a short time frame; however, whenever either "experienced" or "familiar" does not apply, the optimization process can become quite lengthy. In order to construct a vehicle-agnostic method, an established global optimization algorithm is needed. In this work the authors develop an "artificial" error term that maps arbitrary control vectors to a non-zero error on which a global method can operate. Two global methods are compared alongside Design of Experiments and random sampling and are shown to produce results comparable to analysis done by a human expert.
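
    The abstract does not give the exact form of the "artificial p2" term. One common way to map an arbitrary (possibly infeasible) control vector to a finite, comparable error is a weighted sum of squared terminal-constraint violations; the sketch below is a hypothetical stand-in for illustration, not the authors' formula:

```python
def artificial_p2(terminal_state, targets, weights=None):
    """Hypothetical 'artificial p2'-style error: a weighted sum of squared
    violations of the terminal-state targets. Every control vector, even an
    infeasible one, maps to a finite non-negative error, giving a global
    optimizer a smooth quantity to minimize (zero only at a feasible optimum)."""
    if weights is None:
        weights = [1.0] * len(targets)
    return sum(w * (x - t) ** 2
               for x, t, w in zip(terminal_state, targets, weights))

# Illustrative: altitude target met exactly, velocity target missed by 2 units.
err = artificial_p2([1.0, 2.0], [1.0, 0.0])   # contributes 0 + 4
```

    The key property for a global method is that the error surface is defined everywhere, so candidate control vectors from a Design of Experiments or random sample can always be ranked.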

  12. Theoretical study of the accuracy of the elution by characteristic points method for bi-Langmuir isotherms.

    PubMed

    Ravald, L; Fornstedt, T

    2001-01-26

    The bi-Langmuir equation has recently been proven essential for describing chiral chromatographic surfaces, so we investigated the accuracy of the elution by characteristic points (ECP) method for estimating bi-Langmuir isotherm parameters. The ECP calculations were performed on elution profiles generated by the equilibrium-dispersive model of chromatography for five different sets of bi-Langmuir parameters. The ECP method generates two different errors: (i) the error of the ECP-calculated isotherm and (ii) the model error of the fit to the ECP isotherm. Both errors decreased with increasing column efficiency. Moreover, the model error was strongly affected by the weighting of the fitted bi-Langmuir function. For some bi-Langmuir compositions the error of the ECP-calculated isotherm is too large even at high column efficiencies. Guidelines are given on the surface types to be avoided and on the column efficiencies and loading factors required for adequate parameter estimation with ECP.
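
    The bi-Langmuir isotherm under study is simply the sum of two independent Langmuir terms, e.g. nonselective and enantioselective site types on a chiral phase. A minimal sketch with generic parameter names:

```python
def bi_langmuir(c, qs1, b1, qs2, b2):
    """Bi-Langmuir isotherm: adsorbed amount q as a function of mobile-phase
    concentration c, summing two independent Langmuir site types with
    saturation capacities qs1, qs2 and equilibrium constants b1, b2."""
    return qs1 * b1 * c / (1 + b1 * c) + qs2 * b2 * c / (1 + b2 * c)

# Illustrative parameters: a high-capacity weak site plus a low-capacity
# strong site; at saturation q approaches qs1 + qs2.
q = bi_langmuir(1.0, 10.0, 1.0, 2.0, 4.0)
```

    ECP-estimated parameters can then be checked by comparing the isotherm they generate against the one used to produce the elution profiles.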

  13. Radiative flux and forcing parameterization error in aerosol-free clear skies

    DOE PAGES

    Pincus, Robert; Mlawer, Eli J.; Oreopoulos, Lazaros; ...

    2015-07-03

    This article reports on the accuracy, in aerosol- and cloud-free conditions, of the radiation parameterizations used in climate models. Accuracy is assessed relative to observationally validated reference models for fluxes under present-day conditions and forcing (flux changes) from quadrupled concentrations of carbon dioxide. Agreement among reference models is typically within 1 W/m^2, while parameterized calculations are roughly half as accurate in the longwave and even less accurate, and more variable, in the shortwave. Absorption of shortwave radiation is underestimated by most parameterizations in the present day and has relatively large errors in forcing. Error in present-day conditions is essentially unrelated to error in forcing calculations. Recent revisions to parameterizations have reduced error in most cases. As a result, a dependence on atmospheric conditions, including integrated water vapor, means that global estimates of parameterization error relevant for the radiative forcing of climate change will require much more ambitious calculations.

  14. Head repositioning accuracy to neutral: a comparative study of error calculation.

    PubMed

    Hill, Robert; Jensen, Pål; Baardsen, Tor; Kulvik, Kristian; Jull, Gwendolen; Treleaven, Julia

    2009-02-01

    Deficits in cervical proprioception have been identified in subjects with neck pain through the measure of head repositioning accuracy (HRA). Nevertheless there appears to be no general consensus regarding the construct of measurement of error used for calculating HRA. This study investigated four different mathematical methods of measurement of error to determine if there were any differences in their ability to discriminate between a control group and subjects with a whiplash associated disorder. The four methods for measuring cervical joint position error were calculated using a previous data set consisting of 50 subjects with whiplash complaining of dizziness (WAD D), 50 subjects with whiplash not complaining of dizziness (WAD ND) and 50 control subjects. The results indicated that no one measure of HRA uniquely detected or defined the differences between the whiplash and control groups. Constant error (CE) was significantly different between the whiplash and control groups from extension (p<0.05). Absolute errors (AEs) and root mean square errors (RMSEs) demonstrated differences between the two WAD groups in rotation trials (p<0.05). No differences were seen with variable error (VE). The results suggest that a combination of AE (or RMSE) and CE are probably the most suitable measures for analysis of HRA.
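
    The four error constructs compared in the study have standard definitions, and they are related by RMSE² = CE² + VE². A sketch, assuming a list of signed repositioning errors (e.g. in degrees):

```python
import math

def error_measures(errors):
    """Four joint position error constructs used for head repositioning accuracy:
    CE   - constant error: mean signed error (directional bias)
    AE   - absolute error: mean magnitude, ignoring direction
    RMSE - root mean square error: magnitude, penalizing large errors
    VE   - variable error: SD of signed errors about CE (consistency)."""
    n = len(errors)
    ce = sum(errors) / n
    ae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    ve = math.sqrt(sum((e - ce) ** 2 for e in errors) / n)
    return ce, ae, rmse, ve

# Illustrative trial errors in degrees: zero bias but large variability,
# so CE alone would miss the deficit while AE/RMSE/VE reveal it.
ce, ae, rmse, ve = error_measures([3.0, -1.0, 1.0, -3.0])
```

    This decomposition is why the authors recommend reporting a combination of AE (or RMSE) and CE rather than any single measure.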

  15. Nexus: a modular workflow management system for quantum simulation codes

    DOE PAGES

    Krogel, Jaron T.

    2015-08-24

    The management of simulation workflows is a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.

  16. Longitudinal changes in young children’s 0–100 to 0–1000 number-line error signatures

    PubMed Central

    Reeve, Robert A.; Paul, Jacob M.; Butterworth, Brian

    2015-01-01

    We use a latent difference score (LDS) model to examine changes in young children’s number-line (NL) error signatures (errors marking numbers on a NL) over 18 months. A LDS model (1) overcomes some of the inference limitations of analytic models used in previous research, and in particular (2) provides a more reliable test of hypotheses about the meaning and significance of changes in NL error signatures over time and task. The NL error signatures of 217 children (6-year-olds on the first test occasion) were assessed three times over 18 months, along with their math ability on two occasions. On the first occasion (T1) children completed a 0–100 NL task; on the second (T2), a 0–100 NL and a 0–1000 NL task; on the third (T3), a 0–1000 NL task. On the third and fourth occasions (T3 and T4), children completed mental calculation tasks. Although NL error signatures changed over time, they were predictable from other NL task error signatures, and predicted calculation accuracy at T3, as well as changes in calculation between T3 and T4. Multiple indirect effects (change parameters) showed that associations between initial NL error signatures (0–100 NL) and later mental calculation ability were mediated by error signatures on the 0–1000 NL task. The pattern of findings from the LDS model highlights the value of identifying direct and indirect effects in characterizing changing relationships in cognitive representations over task and time. Substantively, they support the claim that children’s NL error signatures generalize over task and time and thus can be used to predict math ability. PMID:26029152

  17. Analyzing human errors in flight mission operations

    NASA Technical Reports Server (NTRS)

    Bruno, Kristin J.; Welz, Linda L.; Barnes, G. Michael; Sherif, Josef

    1993-01-01

    A long-term program is in progress at JPL to reduce the cost and risk of flight mission operations through a defect prevention/error management program. The main thrust of this program is to create an environment in which the performance of the total system, both the human operator and the computer system, is optimized. To this end, 1580 Incident Surprise Anomaly reports (ISAs) from 1977-1991 were analyzed from the Voyager and Magellan projects. A Pareto analysis revealed that 38 percent of the errors were classified as human errors. A preliminary cluster analysis based on the Magellan human errors (204 ISAs) is presented here. The resulting clusters describe the underlying relationships among the ISAs. Initial models of human error in flight mission operations are presented. Next, the Voyager ISAs will be scored and included in the analysis. Eventually, these relationships will be used to derive a theoretically motivated and empirically validated model of human error in flight mission operations. Ultimately, this analysis will be used to make continuous process improvements to end-user applications and training requirements. This Total Quality Management approach will enable the management and prevention of errors in the future.

  18. Improved accuracy of intraocular lens power calculation with the Zeiss IOLMaster.

    PubMed

    Olsen, Thomas

    2007-02-01

    This study aimed to demonstrate how the accuracy of intraocular lens (IOL) power calculation can be improved with optical biometry using partial optical coherence interferometry (PCI) (Zeiss IOLMaster) and current anterior chamber depth (ACD) prediction algorithms. Intraocular lens power in 461 consecutive cataract operations was calculated using both PCI and ultrasound, and the accuracy of the results of each technique was compared. To illustrate the importance of ACD prediction per se, predictions were calculated using both a recently published 5-variable method and the Haigis 2-variable method, and the results compared. All calculations were optimized in retrospect to account for systematic errors, including IOL constants and other offset errors. The mean absolute IOL prediction error (observed minus expected refraction) was 0.65 dioptres (D) with ultrasound and 0.43 D with PCI using the 5-variable ACD prediction method (p < 0.00001). The number of predictions within +/- 0.5 D, +/- 1.0 D and +/- 2.0 D of the expected outcome was 62.5%, 92.4% and 99.9% with PCI, compared with 45.5%, 77.3% and 98.4% with ultrasound, respectively (p < 0.00001). The 2-variable ACD method resulted in a mean error in PCI predictions of 0.46 D, which was significantly higher than the error of the 5-variable method (p < 0.001). The accuracy of IOL power calculation can be significantly improved using calibrated axial length readings obtained with PCI and modern IOL power calculation formulas incorporating the latest-generation ACD prediction algorithms.
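
    The accuracy statistics reported above, mean absolute prediction error and the percentage of eyes within fixed dioptre thresholds, can be computed as follows. The refraction values in the example are illustrative, not the study's data:

```python
def prediction_stats(observed, expected, thresholds=(0.5, 1.0, 2.0)):
    """Mean absolute refraction prediction error (observed minus expected,
    in dioptres) and the percentage of eyes within each +/- threshold."""
    errors = [o - e for o, e in zip(observed, expected)]
    mae = sum(abs(e) for e in errors) / len(errors)
    within = {t: 100.0 * sum(abs(e) <= t for e in errors) / len(errors)
              for t in thresholds}
    return mae, within

# Illustrative 4-eye example (dioptres): expected refraction taken as 0.
mae, within = prediction_stats([0.0, 0.6, -1.2, 0.3], [0.0, 0.0, 0.0, 0.0])
```

    Reporting both the mean absolute error and the within-threshold percentages, as the study does, captures average accuracy and the tail of large misses.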

  19. SIMULATED HUMAN ERROR PROBABILITY AND ITS APPLICATION TO DYNAMIC HUMAN FAILURE EVENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herberger, Sarah M.; Boring, Ronald L.

    Objectives: Human reliability analysis (HRA) methods typically analyze human failure events (HFEs) at the overall task level. For dynamic HRA, it is important to model human activities at the subtask level. There exists a disconnect between the dynamic subtask level and the static task level that presents issues when modeling dynamic scenarios. For example, the SPAR-H method is typically used to calculate the human error probability (HEP) at the task level. As demonstrated in this paper, quantification in SPAR-H does not translate to the subtask level. Methods: Two different discrete distributions were generated for each SPAR-H Performance Shaping Factor (PSF) to define the frequency of PSF levels. The first was a uniform, or uninformed, distribution that assumed each PSF level was equally likely. The second, non-continuous, distribution took the frequency of PSF levels as identified from an assessment of the HERA database. These two approaches were created to identify the resulting distribution of the HEP; the resulting HEP distribution that appears closer to the known distribution, a log-normal centered on 1E-3, is the more desirable. Each approach then has median, average and maximum HFE calculations applied. To calculate these three values, three events, A, B and C, are generated from the PSF level frequencies comprising the subtasks. The median HFE selects the median PSF level from each PSF and calculates the HEP; the average HFE takes the mean PSF level; and the maximum takes the maximum PSF level. The same data set of subtask HEPs yields starkly different HEPs when aggregated to the HFE level in SPAR-H. Results: Assuming that each PSF level in each HFE is equally likely creates an unrealistic distribution of the HEP that is centered at 1. When the observed frequency of PSF levels was applied instead, the resulting HEP behaved log-normally, with a majority of the values under a 2.5% HEP.
The median, average and maximum HFE calculations yielded different answers for the HFE: the HFE maximum grossly overestimates the HFE, while the HFE distribution lies below the HFE median and above the HFE average. Conclusions: Dynamic task modeling can be pursued through the framework of SPAR-H. The distributions associated with each PSF need to be identified, and may change depending upon the scenario. However, it is very unlikely that each PSF level is equally likely, as the resulting HEP distribution is then strongly centered at 100%, which is unrealistic. Other distributions may need to be identified for PSFs to facilitate the transition to dynamic task modeling. Additionally, discrete distributions need to be exchanged for continuous ones so that simulations of the HFE can advance further. This paper provides a method to explore the dynamic subtask-to-task translation and provides examples of the process using the SPAR-H method.
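
    SPAR-H quantification as referenced above scales a nominal HEP (1E-3 for action tasks) by the product of the PSF multipliers, applying the method's standard adjustment factor when three or more PSFs are degraded so the result stays below 1. A sketch, with illustrative multiplier values rather than frequencies from the HERA assessment:

```python
def spar_h_hep(psf_multipliers, nominal_hep=1e-3):
    """SPAR-H style quantification: the nominal HEP is scaled by the
    product of the PSF multipliers. When three or more PSFs are degraded
    (multiplier > 1), the standard adjustment factor
    NHEP*comp / (NHEP*(comp - 1) + 1) keeps the HEP below 1."""
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    degraded = sum(1 for m in psf_multipliers if m > 1)
    if degraded >= 3:
        return nominal_hep * composite / (nominal_hep * (composite - 1) + 1)
    return min(nominal_hep * composite, 1.0)

# All PSFs nominal -> HEP stays at the nominal 1E-3; three degraded PSFs
# with multiplier 10 each push the HEP toward (but not past) 1.
base = spar_h_hep([1.0, 1.0, 1.0])
high = spar_h_hep([10.0, 10.0, 10.0])
```

    This is exactly why the uniform PSF-level assumption in the abstract piles probability mass near 1: equally weighting the degraded levels makes large composite multipliers common.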

  20. Non-invasive mapping of calculation function by repetitive navigated transcranial magnetic stimulation.

    PubMed

    Maurer, Stefanie; Tanigawa, Noriko; Sollmann, Nico; Hauck, Theresa; Ille, Sebastian; Boeckh-Behrens, Tobias; Meyer, Bernhard; Krieg, Sandro M

    2016-11-01

    Concerning calculation function, studies have already localized computational function in patients and volunteers by functional magnetic resonance imaging and transcranial magnetic stimulation (TMS). However, the development of accurate repetitive navigated TMS (rTMS), with considerably higher spatial resolution, opens a new field in cognitive neuroscience. This study was therefore designed to evaluate the feasibility of rTMS for locating cortical calculation function in healthy volunteers, and to establish this technique for future scientific applications as well as preoperative mapping in brain tumor patients. Twenty healthy subjects underwent rTMS calculation mapping using 5 Hz/10 pulses. Fifty-two previously determined cortical spots across both hemispheres were stimulated. The subjects were instructed to perform a calculation task composed of 80 simple arithmetic operations while rTMS pulses were applied. The highest error rate across all errors and all subjects (80%) was observed in the right ventral precentral gyrus. For the division task, a 45% error rate was found in the left middle frontal gyrus. The subtraction task showed its highest error rate (40%) in the right angular gyrus (anG), the addition task a 35% error rate in the left anterior superior temporal gyrus, and the multiplication task a maximum error rate of 30% in the left anG. rTMS thus seems feasible for locating cortical calculation function. Besides language function, the cortical localizations are well in accordance with the current literature for other modalities and lesion studies.

  1. A decoding procedure for the Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1978-01-01

    A decoding procedure is described for the (n,k) t-error-correcting Reed-Solomon (RS) code, along with an implementation of the (31,15) RS code for the I4-TENEX central system. This code can be used for error correction in large archival memory systems. The principal features of the decoder are a Galois field arithmetic unit implemented by microprogramming a microprocessor, and syndrome calculation using the g(x) encoding shift register. Complete decoding of the (31,15) code is expected to take less than 500 microseconds. The syndrome calculation is performed in hardware using the encoding shift register and a modified Chien search. The error location polynomial is computed using Lin's table, which is an interpretation of Berlekamp's iterative algorithm. The error location numbers are calculated using the Chien search. Finally, the error values are computed using Forney's method.
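
    The syndrome-calculation step described above can be illustrated in software. This minimal GF(2^5) sketch uses log/antilog tables rather than the paper's microprogrammed arithmetic unit and encoding shift register; it is an illustration of the principle, not the I4-TENEX decoder:

```python
# GF(2^5) arithmetic via log/antilog tables, primitive polynomial x^5 + x^2 + 1.
PRIM, N = 0b100101, 31
EXP, LOG = [0] * (2 * N), [0] * (N + 1)
x = 1
for i in range(N):
    EXP[i] = EXP[i + N] = x      # duplicate table avoids a mod-31 on lookup
    LOG[x] = i
    x <<= 1
    if x & 0b100000:             # reduce modulo the primitive polynomial
        x ^= PRIM

def gmul(a, b):
    """Multiply two elements of GF(32) via the log/antilog tables."""
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def poly_eval(p, pt):
    """Evaluate polynomial p (lowest-degree coefficient first) at pt."""
    acc, power = 0, 1
    for c in p:                  # addition in GF(2^m) is XOR
        acc ^= gmul(c, power)
        power = gmul(power, pt)
    return acc

def poly_mul(p, q):
    """Polynomial product over GF(32)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] ^= gmul(a, b)
    return out

# Generator polynomial g(x) with roots alpha^1 .. alpha^16 (n - k = 16).
g = [1]
for j in range(1, 17):
    g = poly_mul(g, [EXP[j], 1])   # multiply by (x + alpha^j)

def syndromes(received, n_minus_k=16):
    """S_j = r(alpha^j) for j = 1..n-k. All-zero syndromes mean no
    detectable errors; nonzero syndromes feed the error-locator
    computation and the Chien/Forney steps."""
    return [poly_eval(received, EXP[j]) for j in range(1, n_minus_k + 1)]
```

    A codeword c(x) = m(x)g(x) evaluates to zero at every root of g(x), so its syndromes vanish; corrupting any single symbol makes at least one syndrome nonzero.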

  2. Distribution of standing-wave errors in real-ear sound-level measurements.

    PubMed

    Richmond, Susan A; Kopun, Judy G; Neely, Stephen T; Tan, Hongyang; Gorga, Michael P

    2011-05-01

    Standing waves can cause measurement errors when sound-pressure level (SPL) measurements are performed in a closed ear canal, e.g., during probe-microphone system calibration for distortion-product otoacoustic emission (DPOAE) testing. Alternative calibration methods, such as forward-pressure level (FPL), minimize the influence of standing waves by calculating the forward-going sound waves separate from the reflections that cause errors. Previous research compared test performance (Burke et al., 2010) and threshold prediction (Rogers et al., 2010) using SPL and multiple FPL calibration conditions, and surprisingly found no significant improvements when using FPL relative to SPL, except at 8 kHz. The present study examined the calibration data collected by Burke et al. and Rogers et al. from 155 human subjects in order to describe the frequency location and magnitude of standing-wave pressure minima to see if these errors might explain trends in test performance. Results indicate that while individual results varied widely, pressure variability was larger around 4 kHz and smaller at 8 kHz, consistent with the dimensions of the adult ear canal. The present data suggest that standing-wave errors are not responsible for the historically poor (8 kHz) or good (4 kHz) performance of DPOAE measures at specific test frequencies.

  3. Human Error and the International Space Station: Challenges and Triumphs in Science Operations

    NASA Technical Reports Server (NTRS)

    Harris, Samantha S.; Simpson, Beau C.

    2016-01-01

    Any system with a human component is inherently risky. Studies in human factors and psychology have repeatedly shown that human operators will inevitably make errors, regardless of how well they are trained. Onboard the International Space Station (ISS), where crew time is arguably the most valuable resource, errors by the crew or ground operators can be costly to critical science objectives. Operations experts at the ISS Payload Operations Integration Center (POIC), located at NASA's Marshall Space Flight Center in Huntsville, Alabama, have learned that from payload concept development through execution, there are countless opportunities to introduce errors that can potentially result in costly losses of crew time and science. To effectively address this challenge, we must approach the design, testing, and operation processes with two specific goals in mind. First, a systematic approach to error and a human-centered design methodology should be implemented to minimize opportunities for user error. Second, we must assume that human errors will be made and enable rapid identification and recoverability when they occur. While a systematic approach and a human-centered development process can go a long way toward eliminating error, the complete exclusion of operator error is not a reasonable expectation. The ISS environment in particular poses challenging conditions, especially for flight controllers and astronauts. Operating a scientific laboratory 250 miles above the Earth is a complicated and dangerous task with high stakes and a steep learning curve. While human error is a reality that may never be fully eliminated, smart implementation of carefully chosen tools and techniques can go a long way toward minimizing risk and increasing the efficiency of NASA's space science operations.

  4. Image based Monte Carlo Modeling for Computational Phantom

    NASA Astrophysics Data System (ADS)

    Cheng, Mengyun; Wang, Wen; Zhao, Kai; Fan, Yanchang; Long, Pengcheng; Wu, Yican

    2014-06-01

    The evaluation of the effects of ionizing radiation and the risk of radiation exposure to the human body has become one of the most important issues in the radiation protection and radiotherapy fields; such evaluation helps avoid unnecessary radiation and decrease harm to the human body. In order to accurately evaluate the dose to the human body, it is necessary to construct a more realistic computational phantom. However, manual description and verification of models for Monte Carlo (MC) simulation are very tedious, error-prone and time-consuming. In addition, it is difficult to locate and fix geometry errors, and difficult to describe material information and assign it to cells. MCAM (CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport Simulation) was developed by the FDS Team (Advanced Nuclear Energy Research Team, http://www.fds.org.cn) as an interface program to achieve both CAD- and image-based automatic modeling. The advanced version (Version 6) of MCAM can automatically convert CT/segmented sectioned images into computational phantoms such as MCNP models. The image-based automatic modeling program (MCAM 6.0) has been tested with several medical images and sectioned images, and it has been applied in the construction of Rad-HUMAN. Following manual segmentation and 3D reconstruction, a whole-body computational phantom of a Chinese adult female, called Rad-HUMAN, was created using MCAM 6.0 from sectioned images of a Chinese visible human dataset. Rad-HUMAN contains 46 organs/tissues, which faithfully represent the average anatomical characteristics of the Chinese female. The dose conversion coefficients (Dt/Ka) from kerma free-in-air to absorbed dose of Rad-HUMAN were calculated. Rad-HUMAN can be applied to predict and evaluate dose distributions in a Treatment Planning System (TPS), as well as radiation exposure for the human body in radiation protection.

  5. SU-F-T-376: The Efficiency of Calculating Photonuclear Reaction On High-Energy Photon Therapy by Monte Carlo Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirayama, S; Fujibuchi, T

    Purpose: Secondary neutrons, which are harmful to the human body, are generated by photonuclear reactions in high-energy photon therapy. Their characteristics are not known in detail, since the calculations needed to evaluate them take a very long time. The PHITS (Particle and Heavy Ion Transport code System) Monte Carlo code, since version 2.80, has a new parameter, "pnimul", which forcibly raises the probability of photonuclear reactions to make the calculation more efficient. We investigated the optimum value of "pnimul" for high-energy photon therapy. Methods: The geometry of an accelerator head based on the specification of a Varian Clinac 21EX was used in PHITS ver. 2.80. A phantom (30 cm x 30 cm x 30 cm) filled with the composition defined by the ICRU (International Commission on Radiation Units) was placed at a source-surface distance of 100 cm. We calculated the neutron energy spectra at the surface of the ICRU phantom with "pnimul" set to 1, 10, 100, 1000 and 10000, and compared the total calculation time and the behavior of photons using PDD (Percentage Depth Dose) and OCR (Off-Center Ratio). Next, cutoff energies of 4, 5, 6 and 7 MeV for photons, electrons and positrons were investigated for calculation efficiency. Results: The total calculation time until the errors of the neutron fluence fell within 1% decreased with increasing "pnimul". PDD and OCR showed no differences due to the parameter. The calculation time decreased as the cutoff energy was increased from 4 to 7 MeV; however, the time until the photon errors fell within 1% did not decrease with the cutoff energy. Conclusion: The optimum values of "pnimul" and the cutoff energy were investigated for high-energy photon therapy. The results suggest that using the optimum "pnimul" improves calculation efficiency. The effect of the cutoff energy needs further investigation.

  6. Modeling human response errors in synthetic flight simulator domain

    NASA Technical Reports Server (NTRS)

    Ntuen, Celestine A.

    1992-01-01

    This paper presents a control-theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling to integrate the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. The models will be verified experimentally in a flight handling qualities simulation.

  7. Approaches to reducing photon dose calculation errors near metal implants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Jessie Y.; Followill, David S.; Howell, Reb

    Purpose: Dose calculation errors near metal implants are caused by limitations of the dose calculation algorithm in modeling tissue/metal interface effects, as well as by density assignment errors caused by imaging artifacts. The purpose of this study was to investigate two strategies for reducing dose calculation errors near metal implants: implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) dose calculation method, and use of metal artifact reduction methods for computed tomography (CT) imaging. Methods: Both error reduction strategies were investigated using a simple geometric slab phantom with a rectangular metal insert (composed of titanium or Cerrobend), as well as two anthropomorphic phantoms (one with spinal hardware and one with dental fillings) designed to mimic relevant clinical scenarios. To assess the dosimetric impact of metal kernels, the authors implemented titanium and silver kernels in a commercial collapsed cone C/S algorithm. To assess the impact of CT metal artifact reduction methods, the authors performed dose calculations using baseline imaging techniques (uncorrected 120 kVp imaging) and three commercial metal artifact reduction methods: Philips Healthcare’s O-MAR, GE Healthcare’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI with metal artifact reduction software (MARS) applied. For the simple geometric phantom, radiochromic film was used to measure dose upstream and downstream of the metal inserts. For the anthropomorphic phantoms, ion chambers and radiochromic film were used to quantify the benefit of the error reduction strategies. Results: Metal kernels did not universally improve accuracy but rather resulted in better accuracy upstream of metal implants and decreased accuracy directly downstream. For the clinical cases (spinal hardware and dental fillings), metal kernels had very little impact on the dose calculation accuracy (<1.0%).
Of the commercial CT artifact reduction methods investigated, the authors found that O-MAR was the most consistent method, resulting in either improved dose calculation accuracy (dental case) or little impact on calculation accuracy (spine case). GSI was unsuccessful at reducing the severe artifacts caused by dental fillings and had very little impact on calculation accuracy. GSI with MARS, on the other hand, gave mixed results, sometimes introducing metal distortion and increasing calculation errors (titanium rectangular implant and titanium spinal hardware) but other times very successfully reducing artifacts (Cerrobend rectangular implant and dental fillings). Conclusions: Though successful at improving dose calculation accuracy upstream of metal implants, metal kernels were not found to substantially improve accuracy for clinical cases. Of the commercial artifact reduction methods investigated, O-MAR was found to be the most consistent candidate for all-purpose CT simulation imaging. The MARS algorithm for GSI should be used with caution for titanium implants, larger implants, and implants located near heterogeneities, as it can distort the size and shape of implants and increase calculation errors.

  8. Defining the Relationship Between Human Error Classes and Technology Intervention Strategies

    NASA Technical Reports Server (NTRS)

    Wiegmann, Douglas A.; Rantanen, Esa; Crisp, Vicki K. (Technical Monitor)

    2002-01-01

    One of the main factors in all aviation accidents is human error. The NASA Aviation Safety Program (AvSP) has therefore identified several human-factors safety technologies to address this issue. Some technologies directly address human error, either by attempting to reduce the occurrence of errors or by mitigating their negative consequences. However, new technologies and system changes may also introduce new error opportunities or even induce different types of errors. Consequently, a thorough understanding of the relationship between error classes and technology "fixes" is crucial for the evaluation of intervention strategies outlined in the AvSP, so that resources can be effectively directed to maximize the benefit to flight safety. The purpose of the present project, therefore, was to examine repositories of human factors data to identify possible relationships between different error classes and technology intervention strategies. The first phase of the project, summarized here, involved the development of prototype data structures or matrices that map errors onto "fixes" (and vice versa), with the hope of facilitating the development of standards for evaluating safety products. Possible follow-on phases of this project are also discussed. These additional efforts include a thorough and detailed review of the literature to fill in the data matrix and the construction of a complete database and standards checklists.

  9. Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error

    NASA Astrophysics Data System (ADS)

    Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi

    2017-12-01

Prediction using a forecasting method is one of the most important activities for an organization. Selecting an appropriate forecasting method is important, but the method's percentage error is more important if decision makers are to act on the forecast with confidence. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to quantify the error of the least-squares method yielded a percentage error of 9.77%, and the least-squares method was judged suitable for time series and trend data.
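The two error measures named in the title can be sketched in a few lines. The following is a minimal illustration computing MAD and MAPE for a least-squares (linear trend) forecast; the sales figures are invented for demonstration and are not the paper's data.

```python
def least_squares_forecast(y):
    """Fit y = a + b*t by ordinary least squares and return the fitted values."""
    n = len(y)
    t = list(range(1, n + 1))
    t_mean = sum(t) / n
    y_mean = sum(y) / n
    b = sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(t, y)) / \
        sum((ti - t_mean) ** 2 for ti in t)
    a = y_mean - b * t_mean
    return [a + b * ti for ti in t]

def mad(actual, forecast):
    """Mean Absolute Deviation: average absolute forecast error."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    return 100 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

sales = [112, 118, 121, 127, 135, 140]   # hypothetical time series
fit = least_squares_forecast(sales)
print(round(mad(sales, fit), 2), round(mape(sales, fit), 2))
```

MAD is reported in the units of the data, while MAPE is unit-free, which is why abstracts like this one quote a single percentage.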

  10. [Comparative volumetry of human testes using special types of testicular sonography, Prader's orchidometer, Schirren's circle and sliding caliber].

    PubMed

    Dörnberger, V; Dörnberger, G

    1987-01-01

Comparative volumetry was performed on 99 testes from corpses (death had occurred between 26 and 86 years of age). With the surrounding capsules left in place (without scrotal skin and tunica dartos), the testes were first measured by real-time sonography in a water bath (7.5 MHz linear scan); afterwards length, breadth, and height were measured with a sliding calliper, the largest diameter (the length) was determined with Schirren's circle, and finally the size was estimated with Prader's orchidometer. The testes were then surgically exposed and their volume determined according to Archimedes' principle. Taking a random mean error of 7% for the Archimedes method as the reference, sonographic volume determination showed a random mean error of 15%. Although the accuracy of both methods increases with increasing volume, they should be used with caution below 4 ml, where the possibilities of error are considerable. Prader's orchidometer overestimated the volume on average (+27%) with a random mean error of 19.5%. Schirren's circle gave an even higher mean value (+52%) relative to the "real" Archimedes volume, with a random mean error of 19%. Calliper measurement of the testes in their retained capsules can be optimized by applying a correction factor f(calliper) = 0.39 when calculating the testis volume as an ellipsoid; this yields the same mean value as Archimedes' principle with a standard mean error of only 9%. If the correction factor for real-time sonography of the testis, f(sono) = 0.65, were applied instead, the calliper-based mean value would be 68.8% too high, with a standard mean error of 20.3%.
For calliper measurements, the smaller factor f(calliper) = 0.39 should therefore be used in the ellipsoid volume calculation, because it accounts for the retained capsules of the testis and the epididymis.

  11. Development and implementation of a human accuracy program in patient foodservice.

    PubMed

    Eden, S H; Wood, S M; Ptak, K M

    1987-04-01

For many years, industry has utilized the concept of human error rates to monitor and minimize human errors in the production process. A consistent quality-controlled product increases consumer satisfaction and repeat purchases. Administrative dietitians have applied the concept of human error rates (the number of errors divided by the number of opportunities for error) at four hospitals, with a total bed capacity of 788, within a tertiary-care medical center. Human error rate was used to monitor and evaluate trayline employee performance and to evaluate the layout and tasks of trayline stations, in addition to evaluating employees in patient service areas. Long-term employees initially opposed the error rate system with some hostility and resentment, while newer employees accepted the system. All employees now believe that the constant feedback given by supervisors enhances their self-esteem and productivity. Employee error rates are monitored daily and are used to counsel employees when necessary; they are also utilized during annual performance evaluation. Average daily error rates for a facility staffed by new employees decreased from 7% to an acceptable 3%. In a facility staffed by long-term employees, the error rate increased, reflecting improper error documentation. Patient satisfaction surveys reveal that satisfaction with tray accuracy increased from 88% to 92% in the facility staffed by long-term employees and has remained above the 90% standard in the facility staffed by new employees.
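The error-rate measure defined above (errors divided by opportunities for error) is simple enough to sketch directly; the tray counts here are hypothetical.

```python
def error_rate(errors, opportunities):
    """Human error rate: number of errors / number of opportunities for error."""
    if opportunities == 0:
        raise ValueError("no opportunities recorded")
    return errors / opportunities

# e.g. 14 item errors across 200 trays with 10 items (opportunities) each
rate = error_rate(14, 200 * 10)
print(f"{rate:.1%}")  # prints 0.7%
```

Counting opportunities rather than trays is what makes the rate comparable across stations with different workloads.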

  12. Error protection capability of space shuttle data bus designs

    NASA Technical Reports Server (NTRS)

    Proch, G. E.

    1974-01-01

Error protection assurance in the reliability of digital data communications is discussed. The need for error protection on the space shuttle data bus system has been recognized and specified as a hardware requirement. The error protection techniques of particular concern are those designed into the Shuttle Main Engine Interface (MEI) and the Orbiter Multiplex Interface Adapter (MIA). The techniques and circuit design details proposed for this hardware are analyzed in this report to determine their error protection capability. The capability is calculated in terms of the probability of an undetected word error. Calculated results are reported for a noise environment that ranges from the nominal noise level stated in the hardware specifications to burst levels which may occur in extreme or anomalous conditions.
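The headline quantity, the probability of an undetected word error, can be illustrated with a deliberately simple scheme. The sketch below assumes a single even-parity bit over a binary symmetric channel with independent bit errors; this is an illustration, not the Shuttle MEI/MIA design. Parity misses exactly the error patterns that flip an even, nonzero number of bits.

```python
from math import comb

def p_undetected(n_data_bits, p):
    """P(undetected word error) for n data bits plus one even-parity bit,
    on a binary symmetric channel with bit error probability p."""
    n = n_data_bits + 1  # include the parity bit
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(2, n + 1, 2))

# For small p the sum is dominated by the 2-bit term, so it grows ~ p^2.
print(p_undetected(28, 1e-4), p_undetected(28, 1e-3))
```

Stronger codes (CRCs, Hamming codes) shrink the set of undetectable patterns, which is why the report's analysis is framed around this probability.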

  13. Benchmarking the pseudopotential and fixed-node approximations in diffusion Monte Carlo calculations of molecules and solids

    DOE PAGES

    Nazarov, Roman; Shulenburger, Luke; Morales, Miguel A.; ...

    2016-03-28

    We performed diffusion Monte Carlo (DMC) calculations of the spectroscopic properties of a large set of molecules, assessing the effect of different approximations. In systems containing elements with large atomic numbers, we show that the errors associated with the use of nonlocal mean-field-based pseudopotentials in DMC calculations can be significant and may surpass the fixed-node error. In conclusion, we suggest practical guidelines for reducing these pseudopotential errors, which allow us to obtain DMC-computed spectroscopic parameters of molecules and equation of state properties of solids in excellent agreement with experiment.

  14. Benchmarking the pseudopotential and fixed-node approximations in diffusion Monte Carlo calculations of molecules and solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nazarov, Roman; Shulenburger, Luke; Morales, Miguel A.

    We performed diffusion Monte Carlo (DMC) calculations of the spectroscopic properties of a large set of molecules, assessing the effect of different approximations. In systems containing elements with large atomic numbers, we show that the errors associated with the use of nonlocal mean-field-based pseudopotentials in DMC calculations can be significant and may surpass the fixed-node error. In conclusion, we suggest practical guidelines for reducing these pseudopotential errors, which allow us to obtain DMC-computed spectroscopic parameters of molecules and equation of state properties of solids in excellent agreement with experiment.

  15. Air and smear sample calculational tool for Fluor Hanford Radiological control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BAUMANN, B.L.

    2003-07-11

A spreadsheet calculation tool was developed to automate the calculations performed for determining the concentration of airborne radioactivity and smear counting as outlined in HNF-13536, Section 5.2.7, "Analyzing Air and Smear Samples". This document reports on the design and testing of the calculation tool. Radiological Control Technicians (RCTs) will save time and reduce handwritten and calculation errors by using an electronic form for documenting and calculating workplace air samples. Current expectations are that RCTs will perform an air sample and collect the filter or perform a smear for surface contamination. RCTs will then survey the filter for gross alpha and beta/gamma radioactivity and, with the gross counts, utilize either a hand-calculation method or a calculator to determine activity on the filter. The electronic form will allow the RCT, with a few keystrokes, to document the individual's name, payroll, gross counts, and instrument identifiers, and produce an error-free record. This productivity gain is realized by the enhanced ability to perform mathematical calculations electronically (reducing errors) while at the same time documenting the air sample.

  16. Reflections on human error - Matters of life and death

    NASA Technical Reports Server (NTRS)

    Wiener, Earl L.

    1989-01-01

The last two decades have witnessed a rapid growth in the introduction of automatic devices into aircraft cockpits, and elsewhere in human-machine systems. This was motivated in part by the assumption that when human functioning is replaced by machine functioning, human error is eliminated. Experience to date shows that this is far from true, and that automation does not replace humans, but changes their role in the system, as well as the types and severity of the errors they make. This altered role may lead to fewer, but more critical, errors. Intervention strategies to prevent these errors, or ameliorate their consequences, include basic human factors engineering of the interface, enhanced warning and alerting systems, and more intelligent interfaces that understand the strategic intent of the crew and can detect and trap inconsistent or erroneous input before it affects the system.

  17. Uncertainty modelling and analysis of volume calculations based on a regular grid digital elevation model (DEM)

    NASA Astrophysics Data System (ADS)

    Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi

    2018-05-01

The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), which is caused by the algorithm adopted for the VC model, 2) discrete error (DE), which is usually caused by DEM resolution and terrain complexity, and 3) propagation error (PE), which is caused by errors in the input variables. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, a method is proposed to quantify the uncertainty of VCs with a confidence interval based on truncation error (TE). In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated by a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte-Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich uncertainty modelling and analysis-related theories of geographic information science.
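A hedged sketch of a grid-based volume calculation using the trapezoidal double rule (TDR) named above: corner nodes weighted 1, edge nodes 2, interior nodes 4, times (dx·dy)/4. The Gaussian test surface mirrors the paper's use of a Gauss synthetic surface, but the specific grid and extent here are assumptions.

```python
import numpy as np

def volume_tdr(z, dx, dy):
    """Composite 2-D trapezoidal rule on a regular grid of heights z."""
    w = np.ones_like(z)
    w[1:-1, :] *= 2
    w[:, 1:-1] *= 2   # interior nodes end up with weight 4
    return (dx * dy / 4.0) * np.sum(w * z)

# Gaussian hill sampled on a regular grid; the exact volume of
# exp(-(x^2 + y^2)) over the whole plane is pi, so a wide grid comes close.
x = np.linspace(-5, 5, 201)
y = np.linspace(-5, 5, 201)
X, Y = np.meshgrid(x, y)
Z = np.exp(-(X**2 + Y**2))
v = volume_tdr(Z, x[1] - x[0], y[1] - y[0])
print(abs(v - np.pi) < 1e-3)
```

Refining the grid shrinks the discrete error (DE), while switching to Simpson's double rule changes the model error (ME), which is how the paper separates the two.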

  18. A Physiologically Based Pharmacokinetic Model to Predict the Pharmacokinetics of Highly Protein-Bound Drugs and Impact of Errors in Plasma Protein Binding

    PubMed Central

    Ye, Min; Nagar, Swati; Korzekwa, Ken

    2015-01-01

Predicting the pharmacokinetics of highly protein-bound drugs is difficult. Also, since historical plasma protein binding data was often collected using unbuffered plasma, the resulting inaccurate binding data could contribute to incorrect predictions. This study uses a generic physiologically based pharmacokinetic (PBPK) model to predict human plasma concentration-time profiles for 22 highly protein-bound drugs. Tissue distribution was estimated from in vitro drug lipophilicity data, plasma protein binding, and blood:plasma ratio. Clearance was predicted with a well-stirred liver model. Underestimated hepatic clearance for acidic and neutral compounds was corrected by an empirical scaling factor. Predicted values (pharmacokinetic parameters, plasma concentration-time profile) were compared with observed data to evaluate model accuracy. Of the 22 drugs, less than a 2-fold error was obtained for terminal elimination half-life (t1/2, 100% of drugs), peak plasma concentration (Cmax, 100%), area under the plasma concentration-time curve (AUC0–t, 95.4%), clearance (CLh, 95.4%), mean residence time (MRT, 95.4%), and steady state volume (Vss, 90.9%). The impact of fup errors on CLh and Vss prediction was evaluated. Errors in fup resulted in proportional errors in clearance prediction for low-clearance compounds, and in Vss prediction for high-volume neutral drugs. For high-volume basic drugs, errors in fup did not propagate to errors in Vss prediction. This is due to the cancellation of errors in the calculations for tissue partitioning of basic drugs. Overall, plasma profiles were well simulated with the present PBPK model. PMID:26531057
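The proportional propagation of fup errors into clearance can be seen in the standard well-stirred liver model the abstract mentions, CLh = Q·fup·CLint / (Q + fup·CLint). The sketch below uses that textbook equation with illustrative parameter values; it is not the authors' full PBPK model.

```python
Q = 90.0  # hepatic blood flow, L/h (approximate adult value; an assumption)

def clh(fup, clint, q=Q):
    """Well-stirred model hepatic clearance."""
    return q * fup * clint / (q + fup * clint)

# A 2-fold error in fup for a low-clearance drug (fup*CLint << Q)
# roughly doubles predicted CLh ...
low = clh(0.01, 50.0) / clh(0.005, 50.0)
# ... but barely changes CLh for a high-clearance drug (fup*CLint >> Q),
# where CLh saturates at Q.
high = clh(0.01, 500000.0) / clh(0.005, 500000.0)
print(round(low, 2), round(high, 2))
```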

  19. Joint Center Estimation Using Single-Frame Optimization: Part 1: Numerical Simulation.

    PubMed

    Frick, Eric; Rahmatalla, Salam

    2018-04-04

The biomechanical models used to refine and stabilize motion capture processes are almost invariably driven by joint center estimates, and any errors in joint center calculation carry over and can be compounded when calculating joint kinematics. Unfortunately, accurate determination of joint centers is a complex task, primarily due to measurements being contaminated by soft-tissue artifact (STA). This paper proposes a novel approach to joint center estimation implemented via sequential application of single-frame optimization (SFO). First, the method minimizes the variance of individual time frames’ joint center estimations via the developed variance minimization method to obtain accurate overall initial conditions. These initial conditions are used to stabilize an optimization-based linearization of human motion that determines a time-varying joint center estimation. In this manner, the complex and nonlinear behavior of human motion contaminated by STA can be captured as a continuous series of unique rigid-body realizations without requiring a complex analytical model to describe the behavior of STA. This article intends to offer proof of concept, and the presented method must be further developed before it can be reasonably applied to human motion. Numerical simulations were introduced to verify and substantiate the efficacy of the proposed methodology. When directly compared with a state-of-the-art inertial method, SFO reduced the error due to soft-tissue artifact in all cases by more than 45%. Instead of producing a single vector value to describe the joint center location during a motion capture trial as existing methods often do, the proposed method produced time-varying solutions that were highly correlated (r > 0.82) with the true, time-varying joint center solution.
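The general idea of estimating a joint center from marker motion can be illustrated with a classic algebraic least-squares sphere fit: for a marker rotating about a fixed center, find the point that keeps the marker-to-center distance as constant as possible. This is a stand-in illustration, not the authors' SFO method, and the synthetic trial below is an assumption.

```python
import numpy as np

def fit_center(points):
    """Kasa-style sphere fit: from |p - c|^2 = r^2, rearrange to the linear
    system |p|^2 = 2 p.c + (r^2 - |c|^2) and solve by least squares."""
    A = np.hstack([2 * points, np.ones((len(points), 1))])
    b = np.sum(points**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]

# Synthetic trial: a marker 0.3 m from a joint center at (0.1, 0.2, 0.3),
# observed over 500 frames with small measurement noise.
rng = np.random.default_rng(1)
true_c = np.array([0.1, 0.2, 0.3])
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
markers = true_c + 0.3 * dirs + 0.001 * rng.normal(size=(500, 3))
est = fit_center(markers)
print(np.linalg.norm(est - true_c) < 0.01)
```

Such fits produce one static center for the whole trial; the paper's contribution is precisely to replace that with a time-varying estimate robust to soft-tissue artifact.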

  20. A stochastic dynamic model for human error analysis in nuclear power plants

    NASA Astrophysics Data System (ADS)

    Delgado-Loperena, Dharma

Nuclear disasters like Three Mile Island and Chernobyl indicate that human performance is a critical safety issue, sending a clear message about the need to include environmental press and competence aspects in research. This investigation was undertaken to serve as a roadmap for studying human behavior through the formulation of a general solution equation. The theoretical model integrates models from two heretofore-disassociated disciplines (behavior specialists and technical specialists) that have historically studied the nature of error and human behavior independently; it incorporates concepts derived from fractal and chaos theory and suggests a re-evaluation of base theory regarding human error. The results of this research were based on a comprehensive analysis of error patterns and the omnipresent underlying structure of chaotic systems. The study of patterns led to a dynamic formulation that can serve as a basis for other formulas used to study the consequences of human error. The literature search regarding error yielded insight into the need to include concepts rooted in chaos theory and strange attractors, heretofore unconsidered by mainstream researchers who investigated human error in nuclear power plants or who employed the ecological model in their work. The study of patterns obtained from a steam generator tube rupture (SGTR) event simulation provided a direct application to control room operations in nuclear power plants. In this way, a conceptual foundation based on understanding the patterns of human error can be gleaned, helping to reduce and prevent undesirable events.

  1. Usefulness of biological fingerprint in magnetic resonance imaging for patient verification.

    PubMed

    Ueda, Yasuyuki; Morishita, Junji; Kudomi, Shohei; Ueda, Katsuhiko

    2016-09-01

The purpose of our study is to investigate the feasibility of automated patient verification using multi-planar reconstruction (MPR) images generated from three-dimensional magnetic resonance (MR) imaging of the brain. Several anatomy-related MPR images generated from the three-dimensional fast scout scan of each MR examination were used as biological fingerprint images in this study. The database of this study consisted of 730 temporal pairs of MR examinations of the brain. We calculated the correlation value between current and prior biological fingerprint images of the same patient, and also for all combinations of two images from different patients, to evaluate the effectiveness of our method for patient verification. The best performance of our system was as follows: a half-total error rate of 1.59% with a false acceptance rate of 0.023% and a false rejection rate of 3.15%, an equal error rate of 1.37%, and a rank-one identification rate of 98.6%. Our method makes it possible to verify the identity of the patient using only existing medical images, without the addition of incidental equipment. Our method will also contribute to the management of patient misidentification errors caused by human error.
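The core mechanism, accepting a claimed identity when the correlation between current and prior images exceeds a threshold, can be sketched as follows. The images here are random stand-ins and the threshold is an illustrative assumption, not the paper's operating point.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

def verify(current, prior, threshold=0.8):
    """Accept the claimed identity if the correlation clears the threshold."""
    return ncc(current, prior) >= threshold

rng = np.random.default_rng(0)
prior = rng.normal(size=(64, 64))
same = prior + 0.1 * rng.normal(size=(64, 64))    # same patient, slight noise
other = rng.normal(size=(64, 64))                 # different patient
print(verify(same, prior), verify(other, prior))
```

Sweeping the threshold trades false acceptances against false rejections, which is exactly what the reported FAR/FRR and equal error rate summarize.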

  2. Estimating Gestational Age With Sonography: Regression-Derived Formula Versus the Fetal Biometric Average.

    PubMed

    Cawyer, Chase R; Anderson, Sarah B; Szychowski, Jeff M; Neely, Cherry; Owen, John

    2018-03-01

To compare the accuracy of a new regression-derived formula developed from the National Fetal Growth Studies data to the common alternative method that uses the average of the gestational ages (GAs) calculated for each fetal biometric measurement (biparietal diameter, head circumference, abdominal circumference, and femur length). This retrospective cross-sectional study identified nonanomalous singleton pregnancies that had a crown-rump length plus at least 1 additional sonographic examination with complete fetal biometric measurements. With the use of the crown-rump length to establish the referent estimated date of delivery, the error of each method (National Institute of Child Health and Human Development regression versus Hadlock average [Radiology 1984; 152:497-501]) at every examination was computed. Error, defined as the difference between the crown-rump length-derived GA and each method's predicted GA (weeks), was compared in 3 GA intervals: 1 (14 weeks-20 weeks 6 days), 2 (21 weeks-28 weeks 6 days), and 3 (≥29 weeks). In addition, the proportion of each method's examinations that had errors outside prespecified (±) day ranges was computed by using odds ratios. A total of 16,904 sonograms were identified. The overall and prespecified GA range subset mean errors were significantly smaller for the regression compared to the average (P < .01), and the regression had significantly lower odds of observing examinations outside the specified range of error in GA intervals 2 (odds ratio, 1.15; 95% confidence interval, 1.01-1.31) and 3 (odds ratio, 1.24; 95% confidence interval, 1.17-1.32) than the average method. In a contemporary unselected population of women dated by a crown-rump length-derived GA, the National Institute of Child Health and Human Development regression formula produced fewer estimates outside a prespecified margin of error than the commonly used Hadlock average; the differences were most pronounced for GA estimates at 29 weeks and later.
© 2017 by the American Institute of Ultrasound in Medicine.

  3. [Can the scattering of differences from the target refraction be avoided?].

    PubMed

    Janknecht, P

    2008-10-01

We wanted to check how the stochastic error is affected by two lens formulae. The power of the intraocular lens was calculated using the SRK-II formula and the Haigis formula after eye-length measurement with ultrasound and the IOL Master. Partial derivatives of both lens formulae were taken and Gauss error analysis was used to examine the propagated error. 61 patients with a mean age of 73.8 years were analysed. The postoperative refraction differed from the calculated refraction after ultrasound biometry using the SRK-II formula by 0.05 D (-1.56 to +1.31, SD: 0.59 D; 92% within ±1.0 D), after IOL Master biometry using the SRK-II formula by -0.15 D (-1.18 to +1.25, SD: 0.52 D; 97% within ±1.0 D), and after IOL Master biometry using the Haigis formula by -0.11 D (-1.14 to +1.14, SD: 0.48 D; 95% within ±1.0 D). The results did not differ from one another. The propagated error of the Haigis formula can be calculated as DeltaP = sqrt[(DeltaL × (-4.206))² + (DeltaVK × 0.9496)² + (DeltaDC × (-1.4950))²] (DeltaL: error in measuring axial length; DeltaVK: error in measuring anterior chamber depth; DeltaDC: error in measuring corneal power), and the propagated error of the SRK-II formula as DeltaP = sqrt[(DeltaL × (-2.5))² + (DeltaDC × (-0.9))²]. The propagated error of the Haigis formula is always larger than that of the SRK-II formula. Scattering of the postoperative difference from the expected refraction cannot be avoided completely. The systematic error can be limited by developing more elaborate formulae like the Haigis formula. However, increasing the number of parameters that must be measured increases the dispersion of the calculated postoperative refraction. A compromise has to be found, and therefore the SRK-II formula is not outdated.
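The Gauss error propagation stated in the abstract can be sketched directly. The coefficients are the partial derivatives quoted above; the measurement errors (axial length DeltaL in mm, anterior chamber depth DeltaVK in mm, corneal power DeltaDC in D) are illustrative assumptions.

```python
from math import sqrt

def dp_haigis(dL, dVK, dDC):
    """Propagated refraction error (D) for the Haigis formula, per the abstract."""
    return sqrt((dL * -4.206)**2 + (dVK * 0.9496)**2 + (dDC * -1.4950)**2)

def dp_srk2(dL, dDC):
    """Propagated refraction error (D) for the SRK-II formula, per the abstract."""
    return sqrt((dL * -2.5)**2 + (dDC * -0.9)**2)

# With the same measurement errors, the Haigis formula always propagates a
# larger error, as the abstract notes.
print(round(dp_haigis(0.1, 0.1, 0.25), 3), round(dp_srk2(0.1, 0.25), 3))
```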

  4. Error reduction in three-dimensional metrology combining optical and touch probe data

    NASA Astrophysics Data System (ADS)

    Gerde, Janice R.; Christens-Barry, William A.

    2010-08-01

Analysis of footwear under the Harmonized Tariff Schedule of the United States (HTSUS) is partly based on identifying the boundary ("parting line") between the "external surface area upper" (ESAU) and the sample's sole. Often, that boundary is obscured. We establish the parting line as the curved intersection between the sample outer surface and its insole surface. The outer surface is determined by discrete point cloud coordinates obtained using a laser scanner. The insole surface is defined by point cloud data obtained using a touch-probe device, a coordinate measuring machine (CMM). Because these point cloud data sets do not overlap spatially, a polynomial surface is fitted to the insole data and extended to intersect a mesh fitted to the outer surface point cloud. This line of intersection defines the ESAU boundary, permitting further fractional area calculations to proceed. The defined parting line location is sensitive to the polynomial used to fit the experimental data, and extrapolation to the intersection with the ESAU can heighten this sensitivity. We discuss a methodology for transforming these data into a common reference frame. Three error sources are considered: measurement error in point cloud coordinates, error from fitting a polynomial surface to a point cloud and then extrapolating beyond the data set, and error from reference frame transformation. These error sources can influence calculated surface areas. We describe experiments to assess error magnitude, the sensitivity of calculated results to these errors, and ways of minimizing error impact on calculated quantities. Ultimately, we must ensure that statistical error from these procedures is minimized and within acceptance criteria.
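The fit-then-extrapolate step described above can be sketched with a low-order polynomial surface fitted by least squares. The quadratic form and the synthetic point cloud are assumptions for illustration, not the authors' data.

```python
import numpy as np

def fit_quadratic_surface(x, y, z):
    """Least-squares fit of z = c0 + c1*x + c2*y + c3*x*y + c4*x^2 + c5*y^2."""
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def eval_surface(c, x, y):
    return c[0] + c[1]*x + c[2]*y + c[3]*x*y + c[4]*x**2 + c[5]*y**2

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 400)
y = rng.uniform(-1, 1, 400)
z_true = 0.5 + 0.1*x - 0.2*y + 0.05*x*y + 0.3*x**2 + 0.2*y**2
c = fit_quadratic_surface(x, y, z_true + 0.001 * rng.normal(size=400))

# Evaluate slightly beyond the data extent, mimicking extrapolation toward
# the intersection with the outer surface; extrapolation amplifies fit error.
true_val = 0.5 + 0.1*1.2 - 0.2*1.2 + 0.05*1.2*1.2 + 0.3*1.2**2 + 0.2*1.2**2
print(abs(eval_surface(c, 1.2, 1.2) - true_val) < 0.05)
```

Higher-order polynomials track the data more closely inside the cloud but diverge faster outside it, which is the sensitivity the abstract warns about.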

  5. A new approach for flow-through respirometry measurements in humans

    PubMed Central

    Ingebrigtsen, Jan P.; Bergouignan, Audrey; Ohkawara, Kazunori; Kohrt, Wendy M.; Lighton, John R. B.

    2010-01-01

Indirect whole room calorimetry is commonly used in studies of human metabolism. These calorimeters can be configured as either push or pull systems. A major obstacle to accurately calculating gas exchange rates in a pull system is that the excurrent flow rate is increased above the incurrent flow rate because the organism produces water vapor, which also dilutes the concentrations of respiratory gases in the excurrent sample. A common approach to this problem is to dry the excurrent gases prior to measurement, but if drying is incomplete, large errors in the calculated oxygen consumption will result. The other major potential source of error is fluctuations in the concentrations of O2 and CO2 in the incurrent airstream. We describe a novel approach to measuring gas exchange using a pull-type whole room indirect calorimeter. Relative humidity and temperature of the incurrent and excurrent airstreams are measured continuously using high-precision relative humidity and temperature sensors, permitting accurate measurement of water vapor pressure. The excurrent flow rates are then adjusted to eliminate the flow contribution from water vapor, and respiratory gas concentrations are adjusted to eliminate the effect of water vapor dilution. In addition, a novel switching approach is used that permits constant, uninterrupted measurement of the excurrent airstream while allowing frequent measurements of the incurrent airstream. To demonstrate the accuracy of this approach, we present the results of validation trials compared with our existing system and metabolic carts, as well as the results of standard propane combustion tests. PMID:20200135
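The two corrections described above, removing the water-vapor contribution from the excurrent flow and undoing the dilution of the gas fractions, can be sketched as follows. The Tetens saturation-pressure approximation and the sample values are assumptions for illustration, not the authors' calibration.

```python
def water_vapor_pressure_kpa(rh_percent, temp_c):
    """Water vapor pressure from relative humidity and temperature
    (Tetens approximation for saturation vapor pressure)."""
    svp = 0.61078 * 10 ** (7.5 * temp_c / (temp_c + 237.3))  # saturation, kPa
    return (rh_percent / 100.0) * svp

def dry_correct(flow_lpm, gas_fraction, rh_percent, temp_c, baro_kpa=101.325):
    """Return (dry flow rate, dry gas fraction): the measured flow minus the
    water-vapor contribution, and the gas fraction with dilution removed."""
    wvp = water_vapor_pressure_kpa(rh_percent, temp_c)
    dilution = 1.0 - wvp / baro_kpa
    return flow_lpm * dilution, gas_fraction / dilution

# Hypothetical excurrent airstream: 500 L/min, 20.40% O2 measured wet,
# at 60% relative humidity and 24 degrees C.
flow_dry, fo2_dry = dry_correct(500.0, 0.2040, rh_percent=60.0, temp_c=24.0)
print(round(flow_dry, 1), round(fo2_dry, 4))
```

Skipping either correction biases VO2 in the same direction, which is why incomplete drying produces the large errors the abstract mentions.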

  6. Usability: Human Research Program - Space Human Factors and Habitability

    NASA Technical Reports Server (NTRS)

    Sandor, Aniko; Holden, Kritina L.

    2009-01-01

    The Usability project addresses the need for research in the area of metrics and methodologies used in hardware and software usability testing in order to define quantifiable and verifiable usability requirements. A usability test is a human-in-the-loop evaluation where a participant works through a realistic set of representative tasks using the hardware/software under investigation. The purpose of this research is to define metrics and methodologies for measuring and verifying usability in the aerospace domain in accordance with FY09 focus on errors, consistency, and mobility/maneuverability. Usability metrics must be predictive of success with the interfaces, must be easy to obtain and/or calculate, and must meet the intent of current Human Systems Integration Requirements (HSIR). Methodologies must work within the constraints of the aerospace domain, be cost and time efficient, and be able to be applied without extensive specialized training.

  7. Human factors process failure modes and effects analysis (HF PFMEA) software tool

    NASA Technical Reports Server (NTRS)

    Chandler, Faith T. (Inventor); Relvini, Kristine M. (Inventor); Shedd, Nathaneal P. (Inventor); Valentino, William D. (Inventor); Philippart, Monica F. (Inventor); Bessette, Colette I. (Inventor)

    2011-01-01

Methods, computer-readable media, and systems for automatically performing Human Factors Process Failure Modes and Effects Analysis for a process are provided. At least one task involved in a process is identified, where the task includes at least one human activity. The human activity is described using at least one verb. A human error potentially resulting from the human activity is automatically identified; the human error is related to the verb used in describing the task. The likelihood of occurrence, detection, and correction of the human error is identified, as is the severity of its effect. From the likelihood of occurrence and the severity, the risk of potential harm is identified. The risk of potential harm is then compared with a risk threshold to determine whether corrective measures are appropriate.
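The final comparison step can be illustrated with the common FMEA-style risk priority number (likelihood × detection × severity). The scales and threshold below are illustrative assumptions, not the patented method's scoring.

```python
def risk_priority(likelihood, detection, severity):
    """FMEA-style risk priority number; each factor on a 1-10 scale,
    higher meaning worse (more likely, harder to detect, more severe)."""
    return likelihood * detection * severity

def needs_corrective_measures(likelihood, detection, severity, threshold=100):
    """Compare the risk of potential harm with a risk threshold."""
    return risk_priority(likelihood, detection, severity) >= threshold

# e.g. a plausible error that is hard to detect and has serious consequences
print(needs_corrective_measures(4, 6, 7))   # 168 >= 100
print(needs_corrective_measures(2, 3, 4))   # 24 < 100
```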

  8. Air Force Academy Homepage

    Science.gov Websites


  9. Development of an FAA-EUROCONTROL technique for the analysis of human error in ATM : final report.

    DOT National Transportation Integrated Search

    2002-07-01

    Human error has been identified as a dominant risk factor in safety-oriented industries such as air traffic control (ATC). However, little is known about the factors leading to human errors in current air traffic management (ATM) systems. The first s...

  10. Human Error: The Stakes Are Raised.

    ERIC Educational Resources Information Center

    Greenberg, Joel

    1980-01-01

    Mistakes related to the operation of nuclear power plants and other technologically complex systems are discussed. Recommendations are given for decreasing the chance of human error in the operation of nuclear plants. The causes of the Three Mile Island incident are presented in terms of the human error element. (SA)

  11. Forces associated with pneumatic power screwdriver operation: statics and dynamics.

    PubMed

    Lin, Jia-Hua; Radwin, Robert G; Fronczak, Frank J; Richard, Terry G

    2003-10-10

    The statics and dynamics of pneumatic power screwdriver operation were investigated in the context of predicting forces acting against the human operator. A static force model is described in the paper, based on tool geometry, mass, orientation in space, feed force, torque build up, and stall torque. Three common power hand tool shapes are considered, including pistol grip, right angle, and in-line. The static model estimates handle force needed to support a power nutrunner when it acts against the tightened fastener with a constant torque. A system of equations for static force and moment equilibrium conditions are established, and the resultant handle force (resolved in orthogonal directions) is calculated in matrix form. A dynamic model is formulated to describe pneumatic motor torque build-up characteristics dependent on threaded fastener joint hardness. Six pneumatic tools were tested to validate the deterministic model. The average torque prediction error was 6.6% (SD = 5.4%) and the average handle force prediction error was 6.7% (SD = 6.4%) for a medium-soft threaded fastener joint. The average torque prediction error was 5.2% (SD = 5.3%) and the average handle force prediction error was 3.6% (SD = 3.2%) for a hard threaded fastener joint. Use of these equations for estimating handle forces based on passive mechanical elements representing the human operator is also described. These models together should be useful for considering tool handle force in the selection and design of power screwdrivers, particularly for minimizing handle forces in the prevention of injuries and work related musculoskeletal disorders.
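A greatly simplified, hedged sketch in the spirit of the static model above, for the pistol-grip case only: assuming a horizontal spindle, the tool weight and the feed force act along orthogonal axes, and the hand supplies their resultant. The full model resolves forces and moments in matrix form from tool geometry; the values below are hypothetical.

```python
from math import hypot

G = 9.81  # gravitational acceleration, m/s^2

def resultant_handle_force(tool_mass_kg, feed_force_n):
    """Magnitude (N) of the handle force balancing tool weight (vertical)
    and feed force (horizontal, along the spindle) in static equilibrium."""
    return hypot(tool_mass_kg * G, feed_force_n)

# e.g. a 1.8 kg pistol-grip tool pushed with 30 N of feed force
print(round(resultant_handle_force(1.8, 30.0), 1))
```

Even this toy balance shows why tool mass and required feed force dominate handle loading before any torque build-up is considered.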

  12. Avoiding Human Error in Mission Operations: Cassini Flight Experience

    NASA Technical Reports Server (NTRS)

    Burk, Thomas A.

    2012-01-01

    Operating spacecraft is a never-ending challenge and the risk of human error is ever-present. Many missions have been significantly affected by human error on the part of ground controllers. The Cassini mission at Saturn has not been immune to human error, but Cassini operations engineers use tools and follow processes that find and correct most human errors before they reach the spacecraft. What is needed are skilled engineers with good technical knowledge, good interpersonal communications, quality ground software, regular peer reviews, up-to-date procedures, as well as careful attention to detail and the discipline to test and verify all commands that will be sent to the spacecraft. Two areas of special concern are changes to flight software and response to in-flight anomalies. The Cassini team has a lot of practical experience in all these areas and they have found that well-trained engineers with good tools who follow clear procedures can catch most errors before they get into command sequences to be sent to the spacecraft. Finally, having a robust and fault-tolerant spacecraft that allows ground controllers excellent visibility of its condition is the most important way to ensure human error does not compromise the mission.

  13. Good people who try their best can have problems: recognition of human factors and how to minimise error.

    PubMed

    Brennan, Peter A; Mitchell, David A; Holmes, Simon; Plint, Simon; Parry, David

    2016-01-01

    Human error is as old as humanity itself and is an appreciable cause of mistakes by both organisations and people. Much of the work related to human factors in causing error has originated from aviation where mistakes can be catastrophic not only for those who contribute to the error, but for passengers as well. The role of human error in medical and surgical incidents, which are often multifactorial, is becoming better understood, and includes both organisational issues (by the employer) and potential human factors (at a personal level). Mistakes as a result of individual human factors and surgical teams should be better recognised and emphasised. Attitudes and acceptance of preoperative briefing have improved since the introduction of the World Health Organization (WHO) surgical checklist. However, this does not address limitations or other safety concerns that are related to performance, such as stress and fatigue, emotional state, hunger, awareness of what is going on (situational awareness), and other factors that could potentially lead to error. Here we attempt to raise awareness of these human factors, highlight how they can lead to error, and show how they can be minimised in our day-to-day practice. Can hospitals move from being "high risk industries" to "high reliability organisations"? Copyright © 2015 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  14. Continuous slope-area discharge records in Maricopa County, Arizona, 2004–2012

    USGS Publications Warehouse

    Wiele, Stephen M.; Heaton, John W.; Bunch, Claire E.; Gardner, David E.; Smith, Christopher F.

    2015-12-29

    Sources of error, the impact that stage-data errors have on calculated discharge time series, and issues in data reduction are considered. Steeper, longer stream reaches are generally less sensitive to measurement error. Other issues considered are pressure-transducer drawdown, capture of flood peaks with discrete stage data, selection of the stage record for development of rating curves, and minimum stages for the calculation of discharge.
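
    The slope-area method underlying these records combines channel geometry, reach slope, and a roughness coefficient. A minimal sketch using Manning's equation for a single rectangular section (the actual computations span surveyed multi-section reaches; the names and example numbers here are illustrative, SI units throughout):

```python
def slope_area_discharge(width_m, stage_m, slope, n_roughness):
    """Manning slope-area discharge estimate for a rectangular channel.

    Q = (1/n) * A * R^(2/3) * S^(1/2) in SI units, where A is the flow
    area, R the hydraulic radius, and S the energy slope.
    """
    area = width_m * stage_m
    wetted_perimeter = width_m + 2.0 * stage_m
    hydraulic_radius = area / wetted_perimeter
    return (1.0 / n_roughness) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5
```

    Because stage enters through both the area and the hydraulic radius, a stage error propagates directly into discharge; perturbing `stage_m` by a plausible transducer error shows the sensitivity the report discusses.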

  15. Using a Delphi Method to Identify Human Factors Contributing to Nursing Errors.

    PubMed

    Roth, Cheryl; Brewer, Melanie; Wieck, K Lynn

    2017-07-01

    The purpose of this study was to identify human factors associated with nursing errors. Using a Delphi technique, this study used feedback from a panel of nurse experts (n = 25) on an initial qualitative survey questionnaire followed by summarizing the results with feedback and confirmation. Synthesized factors regarding causes of errors were incorporated into a quantitative Likert-type scale, and the original expert panel participants were queried a second time to validate responses. The list identified 24 items as most common causes of nursing errors, including swamping and errors made by others that nurses are expected to recognize and fix. The responses provided a consensus top 10 errors list based on means with heavy workload and fatigue at the top of the list. The use of the Delphi survey established consensus and developed a platform upon which future study of nursing errors can evolve as a link to future solutions. This list of human factors in nursing errors should serve to stimulate dialogue among nurses about how to prevent errors and improve outcomes. Human and system failures have been the subject of an abundance of research, yet nursing errors continue to occur. © 2016 Wiley Periodicals, Inc.
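
    The second-round analysis of such a Delphi study reduces to ranking candidate causes by their mean Likert rating. A minimal sketch (the cause names and scores below are hypothetical, not the study's data):

```python
def consensus_ranking(ratings, top_n=10):
    """Rank candidate error causes by mean Likert score, highest first.

    `ratings` maps each cause to the list of panelists' scores; returns
    the top_n causes, mirroring the study's consensus top-10 list.
    """
    means = {cause: sum(scores) / len(scores) for cause, scores in ratings.items()}
    return sorted(means, key=means.get, reverse=True)[:top_n]
```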

  16. FRamework Assessing Notorious Contributing Influences for Error (FRANCIE): Perspective on Taxonomy Development to Support Error Reporting and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lon N. Haney; David I. Gertman

    2003-04-01

    Beginning in the 1980s, a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables often was lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid-1990s Boeing, America West Airlines, NASA Ames Research Center and INEEL partnered in a NASA-sponsored Advanced Concepts grant to: assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included the need for a method to identify and prioritize task and contextual characteristics affecting human reliability. Other needs identified included developing comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant, FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling, are offered as a means to help direct useful data collection strategies.

  17. TH-E-BRE-07: Development of Dose Calculation Error Predictors for a Widely Implemented Clinical Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Egan, A; Laub, W

    2014-06-15

    Purpose: Several shortcomings of the current implementation of the analytic anisotropic algorithm (AAA) may lead to dose calculation errors in highly modulated treatments delivered to highly heterogeneous geometries. Here we introduce a set of dosimetric error predictors that can be applied to a clinical treatment plan and patient geometry in order to identify high risk plans. Once a problematic plan is identified, the treatment can be recalculated with a more accurate algorithm in order to better assess its viability. Methods: Here we focus on three distinct sources of dosimetric error in the AAA algorithm. First, due to a combination of discrepancies in small-field beam modeling as well as volume-averaging effects, dose calculated through small MLC apertures can be underestimated, while that behind small MLC blocks can be overestimated. Second, due to the rectilinear scaling of the Monte Carlo generated pencil beam kernel, energy is not properly transported through heterogeneities near, but not impeding, the central axis of the beamlet. Third, AAA overestimates dose in regions of very low density (< 0.2 g/cm{sup 3}). We have developed an algorithm to detect the location and magnitude of each scenario within the patient geometry, namely the field-size index (FSI), the heterogeneous scatter index (HSI), and the low-density index (LDI), respectively. Results: Error indices successfully identify deviations between AAA and Monte Carlo dose distributions in simple phantom geometries. Algorithms are currently implemented in the MATLAB computing environment and are able to run on a typical RapidArc head and neck geometry in less than an hour. Conclusion: Because these error indices successfully identify each type of error in contrived cases, with sufficient benchmarking, this method can be developed into a clinical tool that may be able to help estimate AAA dose calculation errors and indicate when it might be advisable to use Monte Carlo calculations.
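
    Of the three indices, the low-density index is the simplest to illustrate: it flags regions where mass density falls below the 0.2 g/cm3 threshold named in the abstract. A toy sketch (the authors' implementation is in MATLAB and operates on full 3-D patient geometries; this 2-D version is illustrative only):

```python
def low_density_index(density_grid, threshold=0.2):
    """Flag grid positions where AAA is expected to overestimate dose.

    `density_grid` is a nested list of mass densities (g/cm^3); returns
    (row, col) indices below the low-density threshold from the abstract.
    """
    return [(i, j)
            for i, row in enumerate(density_grid)
            for j, rho in enumerate(row)
            if rho < threshold]
```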

  18. Quantum mechanical calculation of electric fields and vibrational Stark shifts at active site of human aldose reductase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xianwei; State Key Laboratory of Precision Spectroscopy, Institute of Theoretical and Computational Science, East China Normal University, Shanghai 200062; Zhang, John Z. H.

    2015-11-14

    Recent advances in biophysics have made it possible to directly measure site-specific electric fields at internal sites of proteins using molecular probes with C = O or C≡N groups in the context of the vibrational Stark effect. These measurements directly probe changes of electric field at specific protein sites due to, e.g., mutation and are very useful in protein design. Computational simulation of the Stark effect based on force fields such as AMBER and OPLS, while providing good insight, shows large errors in comparison to experimental measurement due to inherent difficulties associated with the point-charge-based representation of force fields. In this study, quantum mechanical calculation of a protein's internal electrostatic properties and vibrational Stark shifts was carried out by using the electrostatically embedded generalized molecular fractionation with conjugate caps method. The quantum-calculated, mutation-induced change of electric field and vibrational Stark shift is reported at the internal probing site of the enzyme human aldose reductase. The quantum result is in much better agreement with experimental data than the force-field predictions, underscoring the deficiency of traditional point charge models describing intra-protein electrostatic properties.

  19. Analysis of measured data of human body based on error correcting frequency

    NASA Astrophysics Data System (ADS)

    Jin, Aiyan; Peipei, Gao; Shang, Xiaomei

    2014-04-01

    Anthropometry is the measurement of all parts of the human body surface, and the measured data are the basis for analysis and study of the human body, for the establishment and modification of garment sizes, and for the formulation and implementation of online clothing stores. In this paper, several groups of measured data are obtained, and the data error is analyzed by examining the error frequency and using the analysis-of-variance method from mathematical statistics. Determination of the accuracy of the measured data and the difficulty of measuring particular parts of the human body, further study of the causes of data errors, and a summary of the key points for minimizing errors are also presented in the paper. This paper analyzes the measured data based on error frequency and, in a way, provides reference elements to promote the development of the garment industry.

  20. Quantifying soil carbon loss and uncertainty from a peatland wildfire using multi-temporal LiDAR

    USGS Publications Warehouse

    Reddy, Ashwan D.; Hawbaker, Todd J.; Wurster, F.; Zhu, Zhiliang; Ward, S.; Newcomb, Doug; Murray, R.

    2015-01-01

    Peatlands are a major reservoir of global soil carbon, yet account for just 3% of global land cover. Human impacts like draining can hinder the ability of peatlands to sequester carbon and expose their soils to fire under dry conditions. Estimating soil carbon loss from peat fires can be challenging due to uncertainty about pre-fire surface elevations. This study uses multi-temporal LiDAR to obtain pre- and post-fire elevations and estimate soil carbon loss caused by the 2011 Lateral West fire in the Great Dismal Swamp National Wildlife Refuge, VA, USA. We also determine how LiDAR elevation error affects uncertainty in our carbon loss estimate by randomly perturbing the LiDAR point elevations and recalculating elevation change and carbon loss, iterating this process 1000 times. We calculated a total loss using LiDAR of 1.10 Tg C across the 25 km2 burned area. The fire burned an average of 47 cm deep, equivalent to 44 kg C/m2, a value larger than the 1997 Indonesian peat fires (29 kg C/m2). Carbon loss via the First-Order Fire Effects Model (FOFEM) was estimated to be 0.06 Tg C. Propagating the LiDAR elevation error to the carbon loss estimates, we calculated a standard deviation of 0.00009 Tg C, equivalent to 0.008% of total carbon loss. We conclude that LiDAR elevation error is not a significant contributor to uncertainty in soil carbon loss under severe fire conditions with substantial peat consumption. However, uncertainties may be more substantial when soil elevation loss is of a similar or smaller magnitude than the reported LiDAR error.
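
    The uncertainty propagation described above (perturb the LiDAR point elevations, recompute the elevation change and carbon loss, and repeat 1000 times) can be sketched as follows. The cell size, bulk-carbon conversion factor, and function names are hypothetical simplifications of the study's workflow:

```python
import random

def carbon_loss_uncertainty(pre_m, post_m, bulk_c_kg_m3, cell_area_m2,
                            elev_sd_m, n_iter=1000, seed=0):
    """Monte Carlo propagation of LiDAR elevation error into carbon loss.

    pre_m/post_m are per-cell surface elevations; each iteration perturbs
    both surfaces with Gaussian noise of the stated elevation error and
    recomputes the total carbon loss. Returns (mean, standard deviation).
    """
    rng = random.Random(seed)
    losses = []
    for _ in range(n_iter):
        loss = 0.0
        for pre, post in zip(pre_m, post_m):
            dz = (pre + rng.gauss(0.0, elev_sd_m)) - (post + rng.gauss(0.0, elev_sd_m))
            loss += dz * cell_area_m2 * bulk_c_kg_m3
        losses.append(loss)
    mean = sum(losses) / n_iter
    sd = (sum((x - mean) ** 2 for x in losses) / (n_iter - 1)) ** 0.5
    return mean, sd
```

    As in the study, the spread of the perturbed estimates is small relative to the mean when the burned depth is large compared with the elevation error.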

  1. Simulation of rare events in quantum error correction

    NASA Astrophysics Data System (ADS)

    Bravyi, Sergey; Vargo, Alexander

    2013-12-01

    We consider the problem of calculating the logical error probability for a stabilizer quantum code subject to random Pauli errors. To access the regime of large code distances where logical errors are extremely unlikely, we adopt the splitting method widely used in Monte Carlo simulations of rare events and Bennett's acceptance ratio method for estimating the free energy difference between two canonical ensembles. To illustrate the power of these methods in the context of error correction, we calculate the logical error probability PL for the two-dimensional surface code on a square lattice with a pair of holes for all code distances d≤20 and all error rates p below the fault-tolerance threshold. Our numerical results confirm the expected exponential decay PL ~ exp[-α(p)d] and provide a simple fitting formula for the decay rate α(p). Both noiseless and noisy syndrome readout circuits are considered.
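
    Extracting the decay rate α(p) amounts to a straight-line fit of log PL against the code distance d. A minimal sketch of that fit (plain least squares; the synthetic data in the test stand in for the surface-code simulations):

```python
import math

def fit_decay_rate(distances, logical_error_probs):
    """Fit log(P_L) = c - alpha * d by least squares; return alpha."""
    ys = [math.log(p) for p in logical_error_probs]
    n = len(distances)
    mean_x = sum(distances) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(distances, ys))
             / sum((x - mean_x) ** 2 for x in distances))
    return -slope  # decay rate alpha
```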

  2. QUANTIFYING UNCERTAINTY DUE TO RANDOM ERRORS FOR MOMENT ANALYSES OF BREAKTHROUGH CURVES

    EPA Science Inventory

    The uncertainty in moments calculated from breakthrough curves (BTCs) is investigated as a function of random measurement errors in the data used to define the BTCs. The method presented assumes moments are calculated by numerical integration using the trapezoidal rule, and is t...
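
    Trapezoidal-rule moments of a breakthrough curve, the quantities whose uncertainty the study analyzes, can be sketched as follows (a simplified illustration of the integration scheme named in the abstract, not the EPA method itself):

```python
def btc_moments(times, concentrations):
    """Zeroth moment (mass) and first normalized moment (mean arrival
    time) of a breakthrough curve, integrated with the trapezoidal rule."""
    def trapz(ys):
        return sum(0.5 * (ys[i] + ys[i + 1]) * (times[i + 1] - times[i])
                   for i in range(len(times) - 1))
    m0 = trapz(concentrations)
    m1 = trapz([t * c for t, c in zip(times, concentrations)])
    return m0, m1 / m0
```

    Adding random noise to `concentrations` and recomputing shows how measurement error feeds into the moment estimates.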

  3. Tailoring a Human Reliability Analysis to Your Industry Needs

    NASA Technical Reports Server (NTRS)

    DeMott, D. L.

    2016-01-01

    Companies at risk of accidents caused by human error that result in catastrophic consequences include: airline industry mishaps, medical malpractice, medication mistakes, aerospace failures, major oil spills, transportation mishaps, power production failures and manufacturing facility incidents. Human Reliability Assessment (HRA) is used to analyze the inherent risk of human behavior or actions introducing errors into the operation of a system or process. These assessments can be used to identify where errors are most likely to arise and the potential risks involved if they do occur. Using the basic concepts of HRA, an evolving group of methodologies are used to meet various industry needs. Determining which methodology or combination of techniques will provide a quality human reliability assessment is a key element to developing effective strategies for understanding and dealing with risks caused by human errors. There are a number of concerns and difficulties in "tailoring" a Human Reliability Assessment (HRA) for different industries. Although a variety of HRA methodologies are available to analyze human error events, determining the most appropriate tools to provide the most useful results can depend on industry specific cultures and requirements. Methodology selection may be based on a variety of factors that include: 1) how people act and react in different industries, 2) expectations based on industry standards, 3) factors that influence how the human errors could occur such as tasks, tools, environment, workplace, support, training and procedure, 4) type and availability of data, 5) how the industry views risk & reliability, and 6) types of emergencies, contingencies and routine tasks. Other considerations for methodology selection should be based on what information is needed from the assessment. 
If the principal concern is determination of the primary risk factors contributing to the potential human error, a more detailed analysis method may be employed versus a requirement to provide a numerical value as part of a probabilistic risk assessment. Industries involved with humans operating large equipment or transport systems (e.g., railroads or airlines) would have more need to address the man-machine interface than medical workers administering medications. Human error occurs in every industry; in most cases the consequences are relatively benign and occasionally beneficial. In cases where the results can have disastrous consequences, the use of Human Reliability techniques to identify and classify the risk of human errors allows a company more opportunities to mitigate or eliminate these types of risks and prevent costly tragedies.

  4. Quantitative evaluation of patient-specific quality assurance using online dosimetry system

    NASA Astrophysics Data System (ADS)

    Jung, Jae-Yong; Shin, Young-Ju; Sohn, Seung-Chang; Min, Jung-Whan; Kim, Yon-Lae; Kim, Dong-Su; Choe, Bo-Young; Suh, Tae-Suk

    2018-01-01

    In this study, we investigated the clinical performance of an online dosimetry system (Mobius FX system, MFX) by 1) dosimetric plan verification using gamma passing rates and dose volume metrics and 2) error-detection capability evaluation by deliberately introduced machine error. Eighteen volumetric modulated arc therapy (VMAT) plans were studied. To evaluate the clinical performance of the MFX, we used gamma analysis and dose volume histogram (DVH) analysis. In addition, to evaluate the error-detection capability, we used gamma analysis and DVH analysis utilizing three types of deliberately introduced errors (Type 1: gantry angle-independent multi-leaf collimator (MLC) error, Type 2: gantry angle-dependent MLC error, and Type 3: gantry angle error). In a dosimetric verification comparison of the physical dosimetry system (Delta4PT) and the online dosimetry system (MFX), the gamma passing rates of the two systems showed very good agreement with the treatment planning system (TPS) calculation. For the average dose difference between the TPS calculation and the MFX measurement, most of the dose metrics showed good agreement within a tolerance of 3%. In the error-detection comparison of the Delta4PT and the MFX, the gamma passing rates of the two systems did not meet the 90% acceptance criterion when the magnitude of error exceeded 2 mm and 1.5°, respectively, for error plans of Types 1, 2, and 3. For delivery with all error types, the average dose difference of the PTV due to error magnitude showed good agreement, within 1%, between the TPS calculation and the MFX measurement. Overall, the results of the online dosimetry system showed very good agreement with those of the physical dosimetry system. Our results suggest that a log file-based online dosimetry system is a very suitable verification tool for accurate and efficient clinical routines for patient-specific quality assurance (QA).
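
    The gamma analysis used here compares each reference dose point against nearby evaluated points under combined dose-difference and distance-to-agreement criteria. A 1-D global-gamma sketch (greatly simplified from the clinical 3-D computation; the defaults mirror common 3%/3 mm criteria):

```python
def gamma_passing_rate(ref, eval_, spacing_mm, dd_pct=3.0, dta_mm=3.0):
    """Fraction of points with gamma <= 1 for two 1-D dose profiles.

    Dose differences are normalized globally to the reference maximum;
    distances are normalized to the distance-to-agreement criterion.
    """
    d_max = max(ref)
    passed = 0
    for i, d_ref in enumerate(ref):
        best = float("inf")
        for j, d_ev in enumerate(eval_):
            dose_term = ((d_ev - d_ref) / (dd_pct / 100.0 * d_max)) ** 2
            dist_term = ((j - i) * spacing_mm / dta_mm) ** 2
            best = min(best, dose_term + dist_term)
        if best <= 1.0:
            passed += 1
    return passed / len(ref)
```

    Identical profiles pass at 100%; a profile shifted or scaled beyond the criteria drives the passing rate down, which is the mechanism behind the 90% acceptance threshold quoted above.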

  5. The Automated Assessment of Postural Stability: Balance Detection Algorithm.

    PubMed

    Napoli, Alessandro; Glass, Stephen M; Tucker, Carole; Obeid, Iyad

    2017-12-01

    Impaired balance is a common indicator of mild traumatic brain injury, concussion and musculoskeletal injury. Given the clinical relevance of such injuries, especially in military settings, it is paramount to develop more accurate and reliable on-field evaluation tools. This work presents the design and implementation of the automated assessment of postural stability (AAPS) system, for on-field evaluations following concussion. The AAPS is a computer system, based on inexpensive off-the-shelf components and custom software, that aims to automatically and reliably evaluate balance deficits by replicating a known on-field clinical test, namely, the Balance Error Scoring System (BESS). The AAPS main innovation is its balance error detection algorithm that has been designed to acquire data from a Microsoft Kinect® sensor and convert them into clinically-relevant BESS scores, using the same detection criteria defined by the original BESS test. In order to assess the AAPS balance evaluation capability, a total of 15 healthy subjects (7 male, 8 female) were required to perform the BESS test, while simultaneously being tracked by a Kinect 2.0 sensor and a professional-grade motion capture system (Qualisys AB, Gothenburg, Sweden). High-definition videos of the BESS trials were scored off-line by three experienced observers for reference scores. AAPS performance was assessed by comparing the AAPS automated scores to those derived by three experienced observers. Our results show that the AAPS error detection algorithm presented here can accurately and precisely detect balance deficits with performance levels that are comparable to those of experienced medical personnel. Specifically, agreement levels between the AAPS algorithm and the human average BESS scores ranging between 87.9% (single-leg on foam) and 99.8% (double-leg on firm ground) were detected. 
Moreover, statistically significant differences in balance scores were not detected by an ANOVA test with alpha equal to 0.05. Despite some level of disagreement between human and AAPS-generated scores, the use of an automated system yields important advantages over currently available human-based alternatives. These results underscore the value of using the AAPS, which can be quickly deployed in the field and/or in outdoor settings with minimal set-up time. Finally, the AAPS can record multiple error types and their time course with extremely high temporal resolution. These features are not achievable by humans, who cannot keep track of multiple balance errors with such a high resolution. Together, these results suggest that computerized BESS calculation may provide more accurate and consistent measures of balance than those derived from human experts.

  6. Using APEX to Model Anticipated Human Error: Analysis of a GPS Navigational Aid

    NASA Technical Reports Server (NTRS)

    VanSelst, Mark; Freed, Michael; Shefto, Michael (Technical Monitor)

    1997-01-01

    The interface development process can be dramatically improved by predicting design-facilitated human error at an early stage in the design process. The approach we advocate is to SIMULATE the behavior of a human agent carrying out tasks with a well-specified user interface, ANALYZE the simulation for instances of human error, and then REFINE the interface or protocol to minimize predicted error. This approach, incorporated into the APEX modeling architecture, differs from past approaches to human simulation in its emphasis on error rather than, e.g., learning rate or speed of response. The APEX model consists of two major components: (1) a powerful action selection component capable of simulating behavior in complex, multiple-task environments; and (2) a resource architecture which constrains cognitive, perceptual, and motor capabilities to within empirically demonstrated limits. The model mimics human errors arising from interactions between limited human resources and elements of the computer interface whose design fails to anticipate those limits. We analyze the design of a hand-held Global Positioning System (GPS) device used for tactical and navigational decisions in small yacht racing. The analysis demonstrates how human system modeling can be an effective design aid, helping to accelerate the process of refining a product (or procedure).

  7. TU-D-201-06: HDR Plan Prechecks Using Eclipse Scripting API

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palaniswaamy, G; Morrow, A; Kim, S

    Purpose: Automate brachytherapy treatment plan quality checks using the Eclipse v13.6 scripting API based on pre-configured rules to minimize human error and maximize efficiency. Methods: The HDR Precheck system is developed based on a rules-driven approach using the Eclipse scripting API. This system checks for critical plan parameters like channel length, first source position, source step size and channel mapping. The planned treatment time is verified independently based on analytical methods. For interstitial or SAVI APBI treatment plans, a Patterson-Parker system calculation is performed to verify the planned treatment time. For endobronchial treatments, an analytical formula from TG-59 is used. Acceptable tolerances were defined based on clinical experiences in our department. The system was designed to show PASS/FAIL status levels. Additional information, if necessary, is indicated appropriately in a separate comments field in the user interface. Results: The HDR Precheck system has been developed and tested to verify the treatment plan parameters that are routinely checked by the clinical physicist. The report also serves as a reminder or checklist for the planner to perform any additional critical checks such as applicator digitization or scenarios where the channel mapping was intentionally changed. It is expected to reduce the current manual plan check time from 15 minutes to <1 minute. Conclusion: Automating brachytherapy plan prechecks significantly reduces treatment plan precheck time and reduces human errors. When fully developed, this system will be able to perform a TG-43-based second check of the treatment planning system's dose calculation using random points in the target and critical structures. A histogram will be generated along with tabulated mean and standard deviation values for each structure. 
A knowledge database will also be developed for Brachyvision plans, which will then be used for knowledge-based plan quality checks to further reduce treatment planning errors and increase confidence in the planned treatment.
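
    The rules-driven core of such a precheck is a table of tolerances applied to plan parameters, with a PASS/FAIL result per rule. A minimal sketch (the parameter names and tolerance values are hypothetical stand-ins for the Eclipse scripting API fields):

```python
def precheck(plan, tolerances):
    """Apply (low, high) tolerance rules to plan parameters.

    Returns a PASS/FAIL status per checked parameter, in the spirit of
    the rules-driven HDR Precheck system described above.
    """
    return {name: "PASS" if lo <= plan[name] <= hi else "FAIL"
            for name, (lo, hi) in tolerances.items()}
```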

  8. The Enigmatic Cornea and Intraocular Lens Calculations: The LXXIII Edward Jackson Memorial Lecture.

    PubMed

    Koch, Douglas D

    2016-11-01

    To review the progress and challenges in obtaining accurate corneal power measurements for intraocular lens (IOL) calculations. Personal perspective, review of literature, case presentations, and personal data. Through literature review findings, case presentations, and data from the author's center, the types of corneal measurement errors that can occur in IOL calculation are categorized and described, along with discussion of future options to improve accuracy. Advances in IOL calculation technology and formulas have greatly increased the accuracy of IOL calculations. Recent reports suggest that over 90% of normal eyes implanted with IOLs may achieve accuracy to within 0.5 diopter (D) of the refractive target. Though errors in estimation of corneal power can cause IOL calculation errors in eyes with normal corneas, greater difficulties in measuring corneal power are encountered in eyes with diseased, scarred, and postsurgical corneas. For these corneas, problematic issues are quantifying anterior corneal power and measuring posterior corneal power and astigmatism. Results in these eyes are improving, but 2 examples illustrate current limitations: (1) spherical accuracy within 0.5 D is achieved in only 70% of eyes with post-refractive surgery corneas, and (2) astigmatism accuracy within 0.5 D is achieved in only 80% of eyes implanted with toric IOLs. Corneal power measurements are a major source of error in IOL calculations. New corneal imaging technology and IOL calculation formulas have improved outcomes and hold the promise of ongoing progress. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. WASP (Write a Scientific Paper) using Excel - 6: Standard error and confidence interval.

    PubMed

    Grech, Victor

    2018-03-01

    The calculation of descriptive statistics includes the calculation of standard error and confidence interval, an inevitable component of data analysis in inferential statistics. This paper provides pointers as to how to do this in Microsoft Excel™. Copyright © 2018 Elsevier B.V. All rights reserved.
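
    The quantities involved are straightforward; a sketch of the same calculation outside Excel (normal-approximation interval, z = 1.96 for a 95% confidence level):

```python
import math

def mean_se_ci(values, z=1.96):
    """Mean, standard error, and normal-approximation confidence interval."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))  # sample SD
    se = sd / math.sqrt(n)
    return mean, se, (mean - z * se, mean + z * se)
```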

  10. Decision Aids for Multiple-Decision Disease Management as Affected by Weather Input Errors

    USDA-ARS?s Scientific Manuscript database

    Many disease management decision support systems (DSS) rely, exclusively or in part, on weather inputs to calculate an indicator for disease hazard. Error in the weather inputs, typically due to forecasting, interpolation or estimation from off-site sources, may affect model calculations and manage...

  11. New algorithm for toric intraocular lens power calculation considering the posterior corneal astigmatism.

    PubMed

    Canovas, Carmen; Alarcon, Aixa; Rosén, Robert; Kasthurirangan, Sanjeev; Ma, Joseph J K; Koch, Douglas D; Piers, Patricia

    2018-02-01

    To assess the accuracy of toric intraocular lens (IOL) power calculations of a new algorithm that incorporates the effect of posterior corneal astigmatism (PCA). Abbott Medical Optics, Inc., Groningen, the Netherlands. Retrospective case report. In eyes implanted with toric IOLs, the exact vergence formula of the Tecnis toric calculator was used to predict refractive astigmatism from preoperative biometry, surgeon-estimated surgically induced astigmatism (SIA), and implanted IOL power, with and without including the new PCA algorithm. For each calculation method, the error in predicted refractive astigmatism was calculated as the vector difference between the prediction and the actual refraction. Calculations were also made using postoperative keratometry (K) values to eliminate the potential effect of incorrect SIA estimates. The study comprised 274 eyes. The PCA algorithm significantly reduced the centroid error in predicted refractive astigmatism (P < .001). With the PCA algorithm, the centroid error reduced from 0.50 @ 1 to 0.19 @ 3 when using preoperative K values and from 0.30 @ 0 to 0.02 @ 84 when using postoperative K values. Patients who had anterior corneal against-the-rule, with-the-rule, and oblique astigmatism had improvement with the PCA algorithm. In addition, the PCA algorithm reduced the median absolute error in all groups (P < .001). The use of the new PCA algorithm decreased the error in the prediction of residual refractive astigmatism in eyes implanted with toric IOLs. Therefore, the new PCA algorithm, in combination with an exact vergence IOL power calculation formula, led to an increased predictability of toric IOL power. Copyright © 2018 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
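
    The "error in predicted refractive astigmatism" above is a vector difference, conventionally computed in double-angle (power-vector) space so that magnitude-and-axis values like "0.19 @ 3" combine correctly. A minimal sketch of that calculation (not the Tecnis toric calculator itself):

```python
import math

def astig_prediction_error(pred_mag, pred_axis_deg, actual_mag, actual_axis_deg):
    """Vector difference between two astigmatism values.

    Each (magnitude, axis) pair is mapped to double-angle Cartesian
    components, subtracted, and mapped back to (magnitude, axis).
    """
    def to_xy(mag, axis_deg):
        return (mag * math.cos(math.radians(2.0 * axis_deg)),
                mag * math.sin(math.radians(2.0 * axis_deg)))
    px, py = to_xy(pred_mag, pred_axis_deg)
    ax, ay = to_xy(actual_mag, actual_axis_deg)
    dx, dy = px - ax, py - ay
    mag = math.hypot(dx, dy)
    axis = (math.degrees(math.atan2(dy, dx)) / 2.0) % 180.0
    return mag, axis
```

    For example, predicting 1.0 D @ 0° when the actual result is 1.0 D @ 90° yields a 2.0 D error, which is why axis misses are penalized so heavily in centroid statistics.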

  12. Exploring human error in military aviation flight safety events using post-incident classification systems.

    PubMed

    Hooper, Brionny J; O'Hare, David P A

    2013-08-01

    Human error classification systems theoretically allow researchers to analyze postaccident data in an objective and consistent manner. The Human Factors Analysis and Classification System (HFACS) framework is one such practical analysis tool that has been widely used to classify human error in aviation; the Cognitive Error Taxonomy (CET) is another. It has been postulated that the focus on interrelationships within HFACS can facilitate the identification of the underlying causes of pilot error, while the CET provides increased granularity at the level of unsafe acts. The aim was to analyze the influence of factors at higher organizational levels on the unsafe acts of front-line operators and to compare the errors of fixed-wing and rotary-wing operations. This study analyzed 288 aircraft incidents involving human error from an Australasian military organization occurring between 2001 and 2008. Action errors accounted for almost twice the proportion of rotary-wing (44%) compared to fixed-wing (23%) incidents. Both classificatory systems showed significant relationships between precursor factors such as the physical environment, mental and physiological states, crew resource management, training and personal readiness, and skill-based, but not decision-based, acts. The CET analysis showed different predisposing factors for different aspects of skill-based behaviors. Skill-based errors in military operations are more prevalent in rotary-wing incidents and are related to higher-level supervisory processes in the organization. The Cognitive Error Taxonomy provides increased granularity to HFACS analyses of unsafe acts.

  13. Statistical physics of interacting neural networks

    NASA Astrophysics Data System (ADS)

    Kinzel, Wolfgang; Metzler, Richard; Kanter, Ido

    2001-12-01

    Recent results on the statistical physics of time series generation and prediction are presented. A neural network is trained on quasi-periodic and chaotic sequences, and its overlap with the sequence generator as well as its prediction errors are calculated numerically. For each network there exists a sequence for which it completely fails to make predictions. Two interacting networks show a transition to perfect synchronization. A pool of interacting networks shows good coordination in the minority game, a model of competition in a closed market. Finally, as a demonstration, a perceptron predicts bit sequences produced by human beings.
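
    The closing result, a perceptron predicting bit sequences, can be illustrated with a toy online perceptron that predicts each bit from a sliding window of previous bits. This is a hypothetical minimal setup for illustration, not the authors' actual model:

```python
def perceptron_bit_predictor(bits, window=5, lr=0.1):
    """Online perceptron: predict each bit from the previous `window` bits,
    updating the weights only on mistakes. Returns prediction accuracy."""
    w = [0.0] * window
    b = 0.0
    correct = 0
    for t in range(window, len(bits)):
        # Encode the history as +/-1 inputs.
        x = [1.0 if v else -1.0 for v in bits[t - window:t]]
        s = b + sum(wi * xi for wi, xi in zip(w, x))
        pred = 1 if s >= 0 else 0
        if pred == bits[t]:
            correct += 1
        else:
            # Classic perceptron update on mistakes only.
            y = 1.0 if bits[t] else -1.0
            w = [wi + lr * y * xi for wi, xi in zip(w, x)]
            b += lr * y
    return correct / (len(bits) - window)
```

    On a perfectly periodic sequence the predictor quickly becomes almost always right; a genuinely random sequence would hold it near 50%, and human-generated "random" sequences typically fall in between.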

  14. Measurement Model and Precision Analysis of Accelerometers for Maglev Vibration Isolation Platforms.

    PubMed

    Wu, Qianqian; Yue, Honghao; Liu, Rongqiang; Zhang, Xiaoyou; Ding, Liang; Liang, Tian; Deng, Zongquan

    2015-08-14

    High precision measurement of acceleration levels is required to allow active control for vibration isolation platforms. A measurement model of the accelerometer configuration that yields such high measurement precision is therefore needed. In this paper, an accelerometer configuration to improve measurement accuracy is proposed. The corresponding calculation formulas of the angular acceleration were derived through theoretical analysis. A method is presented to minimize angular acceleration noise based on analysis of the root mean square noise of the angular acceleration. Moreover, the influence of installation position errors and accelerometer orientation errors on the calculation precision of the angular acceleration is studied. Comparisons of the output differences between the proposed configuration and the previous planar triangle configuration under the same installation errors are conducted by simulation. The simulation results show that installation errors have a relatively small impact on the calculation accuracy of the proposed configuration. To further verify the high calculation precision of the proposed configuration, experiments are carried out for both the proposed configuration and the planar triangle configuration. On the basis of the results of simulations and experiments, it can be concluded that the proposed configuration has higher angular acceleration calculation precision and can be applied to different platforms.

  15. Measurement Model and Precision Analysis of Accelerometers for Maglev Vibration Isolation Platforms

    PubMed Central

    Wu, Qianqian; Yue, Honghao; Liu, Rongqiang; Zhang, Xiaoyou; Ding, Liang; Liang, Tian; Deng, Zongquan

    2015-01-01

    High precision measurement of acceleration levels is required to allow active control for vibration isolation platforms. A measurement model of the accelerometer configuration that yields such high measurement precision is therefore needed. In this paper, an accelerometer configuration to improve measurement accuracy is proposed. The corresponding calculation formulas of the angular acceleration were derived through theoretical analysis. A method is presented to minimize angular acceleration noise based on analysis of the root mean square noise of the angular acceleration. Moreover, the influence of installation position errors and accelerometer orientation errors on the calculation precision of the angular acceleration is studied. Comparisons of the output differences between the proposed configuration and the previous planar triangle configuration under the same installation errors are conducted by simulation. The simulation results show that installation errors have a relatively small impact on the calculation accuracy of the proposed configuration. To further verify the high calculation precision of the proposed configuration, experiments are carried out for both the proposed configuration and the planar triangle configuration. On the basis of the results of simulations and experiments, it can be concluded that the proposed configuration has higher angular acceleration calculation precision and can be applied to different platforms. PMID:26287203
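
    The core idea behind such accelerometer configurations is differential measurement: two linear accelerometers mounted at different radii sense the same linear acceleration plus a tangential term α·r, so their difference isolates the angular acceleration, and averaging redundant pairs reduces RMS noise. The one-axis sketch below is an illustration of that principle, not the paper's actual formulas:

```python
def angular_accel(a1, a2, r1, r2):
    """Differential estimate of angular acceleration alpha.

    Each tangential accelerometer reads a_i = a_lin + alpha * r_i, so the
    common linear term a_lin cancels in the difference."""
    return (a2 - a1) / (r2 - r1)

def averaged_angular_accel(readings):
    """Average several redundant pair estimates (a1, a2, r1, r2) to reduce
    the RMS noise of the angular acceleration estimate."""
    estimates = [angular_accel(a1, a2, r1, r2) for a1, a2, r1, r2 in readings]
    return sum(estimates) / len(estimates)
```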

  16. The application of SHERPA (Systematic Human Error Reduction and Prediction Approach) in the development of compensatory cognitive rehabilitation strategies for stroke patients with left and right brain damage.

    PubMed

    Hughes, Charmayne M L; Baber, Chris; Bienkiewicz, Marta; Worthington, Andrew; Hazell, Alexa; Hermsdörfer, Joachim

    2015-01-01

    Approximately 33% of stroke patients have difficulty performing activities of daily living, often committing errors during the planning and execution of such activities. The objective of this study was to evaluate the ability of the human error identification (HEI) technique SHERPA (Systematic Human Error Reduction and Prediction Approach) to predict errors during the performance of daily activities in stroke patients with left and right hemisphere lesions. Using SHERPA we successfully predicted 36 of the 38 observed errors, with analysis indicating that the proportion of predicted and observed errors was similar for all sub-tasks and severity levels. HEI results were used to develop compensatory cognitive strategies that clinicians could employ to reduce or prevent errors from occurring. This study provides evidence for the reliability and validity of SHERPA in the design of cognitive rehabilitation strategies in stroke populations.

  17. An Analysis of U.S. Army Fratricide Incidents during the Global War on Terror (11 September 2001 to 31 March 2008)

    DTIC Science & Technology

    2010-03-15

    The analysis is based on Reason's "Swiss cheese" model of human error causation (1990), which describes how an accident is likely to occur when all of the errors, or "holes," align. A detailed description of HFACS can be found in Wiegmann and Shappell (2003).

  18. Reliability Evaluation and Improvement Approach of Chemical Production Man - Machine - Environment System

    NASA Astrophysics Data System (ADS)

    Miao, Yongchun; Kang, Rongxue; Chen, Xuefeng

    2017-12-01

    In recent years, with the gradual extension of reliability research, the study of production system reliability has become a hot topic in various industries. The man-machine-environment system is a complex system composed of human factors, machinery equipment and environment. The reliability of each individual factor must be analyzed first in order to transition gradually to the study of three-factor reliability. Meanwhile, the dynamic relationships among man, machine and environment should be considered to establish an effective fuzzy evaluation mechanism that truly and effectively analyzes the reliability of such systems. In this paper, based on systems engineering, fuzzy theory, reliability theory, and theories of human error, environmental impact and machinery equipment failure, the reliabilities of the human factor, machinery equipment and environment of a chemical production system were studied by the method of fuzzy evaluation. Finally, the reliability of the man-machine-environment system was calculated as a weighted result, which indicated that the reliability value of this chemical production system was 86.29. From the given evaluation domain it can be seen that the reliability of the integrated man-machine-environment system is in good status, and effective measures for further improvement were proposed according to the fuzzy calculation results.
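
    The final aggregation step can be sketched as a simple weighted average of per-factor reliability scores. The scores and weights below are hypothetical, chosen only to show how a composite value like 86.29 would be produced; the paper's actual fuzzy membership functions are not reproduced:

```python
def weighted_reliability(scores, weights):
    """Aggregate per-factor reliability scores (0-100 scale) with weights
    that sum to one, as in a weighted fuzzy evaluation."""
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(s * w for s, w in zip(scores, weights))

# Hypothetical factor scores: human, machinery, environment.
system_reliability = weighted_reliability([82.0, 90.0, 88.0], [0.40, 0.35, 0.25])
```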

  19. Why GPS makes distances bigger than they are

    PubMed Central

    Ranacher, Peter; Brunauer, Richard; Trutschnig, Wolfgang; Van der Spek, Stefan; Reich, Siegfried

    2016-01-01

    Global navigation satellite systems such as the Global Positioning System (GPS) are among the most important sensors for movement analysis. GPS is widely used to record the trajectories of vehicles, animals and human beings. However, all GPS movement data are affected by both measurement and interpolation errors. In this article we show that measurement error causes a systematic bias in distances recorded with a GPS; the distance between two points recorded with a GPS is – on average – bigger than the true distance between these points. This systematic ‘overestimation of distance’ becomes relevant if the influence of interpolation error can be neglected, which in practice is the case for movement sampled at high frequencies. We provide a mathematical explanation of this phenomenon and illustrate that it functionally depends on the autocorrelation of GPS measurement error (C). We argue that C can be interpreted as a quality measure for movement data recorded with a GPS. If there is a strong autocorrelation between any two consecutive position estimates, they have very similar errors. This error cancels out when average speed, distance or direction is calculated along the trajectory. Based on our theoretical findings we introduce a novel approach to determine C in real-world GPS movement data sampled at high frequencies. We apply our approach to pedestrian trajectories and car trajectories. We found that the measurement error in the data was strongly spatially and temporally autocorrelated, and we give a quality estimate of the data. Most importantly, our findings are not limited to GPS alone. The systematic bias and its implications are bound to occur in any movement data collected with absolute positioning if interpolation error can be neglected. PMID:27019610
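
    The systematic bias is easy to reproduce in simulation: add independent Gaussian noise to two position fixes, and the expected measured distance exceeds the true distance, because uncorrelated noise enlarges the separation vector in expectation. A minimal sketch with illustrative parameters:

```python
import math
import random

def mean_measured_distance(true_dist, sigma, trials=20000, seed=42):
    """Mean measured distance between two points separated by true_dist,
    with i.i.d. Gaussian position error of std `sigma` on each coordinate."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x1, y1 = rng.gauss(0.0, sigma), rng.gauss(0.0, sigma)
        x2, y2 = true_dist + rng.gauss(0.0, sigma), rng.gauss(0.0, sigma)
        total += math.hypot(x2 - x1, y2 - y1)
    return total / trials
```

    With strongly autocorrelated error the two fixes would share nearly the same offset, the noise terms would largely cancel in the difference, and the bias would shrink, which is why the autocorrelation C acts as a quality measure.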

  20. Analytical study of the effects of soft tissue artefacts on functional techniques to define axes of rotation.

    PubMed

    De Rosario, Helios; Page, Álvaro; Besa, Antonio

    2017-09-06

    The accurate location of the main axes of rotation (AoR) is a crucial step in many applications of human movement analysis. There are different formal methods to determine the direction and position of the AoR, whose performance varies across studies, depending on the pose and the source of errors. Most methods are based on minimizing squared differences between observed and modelled marker positions or rigid motion parameters, implicitly assuming independent and uncorrelated errors, but the largest error usually results from soft tissue artefacts (STA), which do not have such statistical properties and are not effectively cancelled out by such methods. However, with adequate methods it is possible to assume that STA only account for a small fraction of the observed motion and to obtain explicit formulas through differential analysis that relate STA components to the resulting errors in AoR parameters. In this paper such formulas are derived for three different functional calibration techniques (Geometric Fitting, mean Finite Helical Axis, and SARA), to explain why each technique behaves differently from the others, and to propose strategies to compensate for those errors. These techniques were tested with published data from a sit-to-stand activity, where the true axis was defined using bi-planar fluoroscopy. All the methods were able to estimate the direction of the AoR with an error of less than 5°, whereas there were errors in the location of the axis of 30-40mm. Such location errors could be reduced to less than 17mm by the methods based on equations that use rigid motion parameters (mean Finite Helical Axis, SARA) when the translation component was calculated using the three markers nearest to the axis. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. A physiologically based pharmacokinetic model to predict the pharmacokinetics of highly protein-bound drugs and the impact of errors in plasma protein binding.

    PubMed

    Ye, Min; Nagar, Swati; Korzekwa, Ken

    2016-04-01

    Predicting the pharmacokinetics of highly protein-bound drugs is difficult. Also, since historical plasma protein binding data were often collected using unbuffered plasma, the resulting inaccurate binding data could contribute to incorrect predictions. This study uses a generic physiologically based pharmacokinetic (PBPK) model to predict human plasma concentration-time profiles for 22 highly protein-bound drugs. Tissue distribution was estimated from in vitro drug lipophilicity data, plasma protein binding and the blood:plasma ratio. Clearance was predicted with a well-stirred liver model. Underestimated hepatic clearance for acidic and neutral compounds was corrected by an empirical scaling factor. Predicted values (pharmacokinetic parameters, plasma concentration-time profile) were compared with observed data to evaluate the model accuracy. Of the 22 drugs, less than a 2-fold error was obtained for the terminal elimination half-life (t1/2 , 100% of drugs), peak plasma concentration (Cmax , 100%), area under the plasma concentration-time curve (AUC0-t , 95.4%), clearance (CLh , 95.4%), mean residence time (MRT, 95.4%) and steady state volume (Vss , 90.9%). The impact of errors in the fraction unbound in plasma (fup ) on CLh and Vss prediction was evaluated. Errors in fup resulted in proportional errors in clearance prediction for low-clearance compounds, and in Vss prediction for high-volume neutral drugs. For high-volume basic drugs, errors in fup did not propagate to errors in Vss prediction. This is due to the cancellation of errors in the calculations for tissue partitioning of basic drugs. Overall, plasma profiles were well simulated with the present PBPK model. Copyright © 2016 John Wiley & Sons, Ltd.
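
    The error-propagation result for clearance follows from the well-stirred liver model, CLh = Qh·fup·CLint/(Qh + fup·CLint): when fup·CLint is much smaller than Qh, CLh ≈ fup·CLint and an error in fup propagates proportionally, whereas for high-extraction drugs CLh approaches Qh and is insensitive to fup. A sketch with illustrative parameter values (Qh in L/h, not from the paper):

```python
def well_stirred_clh(q_h, fu_p, cl_int):
    """Hepatic clearance from the well-stirred liver model."""
    return q_h * fu_p * cl_int / (q_h + fu_p * cl_int)

# Low-extraction drug: doubling fu_p nearly doubles predicted clearance.
low_ratio = well_stirred_clh(90.0, 0.02, 50.0) / well_stirred_clh(90.0, 0.01, 50.0)

# High-extraction drug: clearance approaches Qh and barely moves.
high_ratio = well_stirred_clh(90.0, 1.0, 5000.0) / well_stirred_clh(90.0, 0.5, 5000.0)
```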

  2. When bad things happen: adverse event reporting and disclosure as patient safety and risk management tools in the neonatal intensive care unit.

    PubMed

    Donn, Steven M; McDonnell, William M

    2012-01-01

    The Institute of Medicine has recommended a change in culture from "name and blame" to patient safety. This will require system redesign to identify and address errors, establish performance standards, and set safety expectations. This approach, however, is at odds with the present medical malpractice (tort) system. The current system is outcomes-based, meaning that health care providers and institutions are often sued despite providing appropriate care. Nevertheless, the focus should remain to provide the safest patient care. Effective peer review may be hindered by the present tort system. Reporting of medical errors is a key piece of peer review and education, and both anonymous reporting and confidential reporting of errors have potential disadvantages. Diagnostic and treatment errors continue to be the leading sources of allegations of malpractice in pediatrics, and the neonatal intensive care unit is uniquely vulnerable. Most errors result from systems failures rather than human error. Risk management can be an effective process to identify, evaluate, and address problems that may injure patients, lead to malpractice claims, and result in financial losses. Risk management identifies risk or potential risk, calculates the probability of an adverse event arising from a risk, estimates the impact of the adverse event, and attempts to control the risk. Implementation of a successful risk management program requires a positive attitude, sufficient knowledge base, and a commitment to improvement. Transparency in the disclosure of medical errors and a strategy of prospective risk management in dealing with medical errors may result in a substantial reduction in medical malpractice lawsuits, lower litigation costs, and a more safety-conscious environment. Thieme Medical Publishers, Inc.

  3. Utilization of electrical impedance imaging for estimation of in-vivo tissue resistivities

    NASA Astrophysics Data System (ADS)

    Eyuboglu, B. Murat; Pilkington, Theo C.

    1993-08-01

    In order to determine the in vivo resistivity of tissues in the thorax, the possibility of combining electrical impedance imaging (EII) techniques with (1) anatomical data extracted from high resolution images, (2) a priori knowledge of tissue resistivities, and (3) a priori noise information was assessed in this study. A Least Square Error Estimator (LSEE) and a statistically constrained Minimum Mean Square Error Estimator (MiMSEE) were implemented to estimate regional electrical resistivities from potential measurements made on the body surface. A two-dimensional boundary element model of the human thorax, which consists of four different conductivity regions (the skeletal muscle, the heart, the right lung, and the left lung), was adopted to simulate the measured EII torso potentials. The calculated potentials were then perturbed by simulated instrumentation noise. The signal information used to form the statistical constraint for the MiMSEE was obtained from a priori knowledge of the physiological range of tissue resistivities. The noise constraint was determined from a priori knowledge of errors due to linearization of the forward problem and to the instrumentation noise.
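
    The contrast between the two estimators can be sketched for a toy one-parameter problem (estimating a single resistivity x from measurements b_i = a_i·x + noise): the LSEE uses the data alone, while the statistically constrained MiMSEE shrinks the estimate toward a prior mean in proportion to the assumed noise and prior variances. The variable names and values are illustrative, not from the study:

```python
def lsee(a, b):
    """Least squares estimate of x from measurements b_i = a_i * x + noise."""
    return sum(ai * bi for ai, bi in zip(a, b)) / sum(ai * ai for ai in a)

def mimsee(a, b, noise_var, prior_mean, prior_var):
    """Statistically constrained MMSE estimate: least squares regularized
    toward a prior mean, weighted by the assumed noise and prior variances."""
    num = sum(ai * bi for ai, bi in zip(a, b)) / noise_var + prior_mean / prior_var
    den = sum(ai * ai for ai in a) / noise_var + 1.0 / prior_var
    return num / den
```

    With a tight prior (small prior variance) the MiMSEE stays near physiologically plausible values even when the data are noisy, which is exactly the motivation for the statistical constraint.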

  4. Two ultraviolet radiation datasets that cover China

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Hu, Bo; Wang, Yuesi; Liu, Guangren; Tang, Liqin; Ji, Dongsheng; Bai, Yongfei; Bao, Weikai; Chen, Xin; Chen, Yunming; Ding, Weixin; Han, Xiaozeng; He, Fei; Huang, Hui; Huang, Zhenying; Li, Xinrong; Li, Yan; Liu, Wenzhao; Lin, Luxiang; Ouyang, Zhu; Qin, Boqiang; Shen, Weijun; Shen, Yanjun; Su, Hongxin; Song, Changchun; Sun, Bo; Sun, Song; Wang, Anzhi; Wang, Genxu; Wang, Huimin; Wang, Silong; Wang, Youshao; Wei, Wenxue; Xie, Ping; Xie, Zongqiang; Yan, Xiaoyuan; Zeng, Fanjiang; Zhang, Fawei; Zhang, Yangjian; Zhang, Yiping; Zhao, Chengyi; Zhao, Wenzhi; Zhao, Xueyong; Zhou, Guoyi; Zhu, Bo

    2017-07-01

    Ultraviolet (UV) radiation has significant effects on ecosystems, environments, and human health, as well as atmospheric processes and climate change. Two ultraviolet radiation datasets are described in this paper. One contains hourly observations of UV radiation measured at 40 Chinese Ecosystem Research Network stations from 2005 to 2015. CUV3 broadband radiometers were used to observe the UV radiation, with an accuracy of 5%, which meets the World Meteorological Organization's measurement standards. The extremum method was used to control the quality of the measured datasets. The other dataset contains daily cumulative UV radiation estimates that were calculated using an all-sky estimation model combined with a hybrid model. The reconstructed daily UV radiation data span from 1961 to 2014. The mean absolute bias error and root-mean-square error are smaller than 30% at most stations, and most of the mean bias error values are negative, which indicates underestimation of the UV radiation intensity. These datasets can improve our basic knowledge of the spatial and temporal variations in UV radiation. Additionally, these datasets can be used in studies of potential ozone formation and atmospheric oxidation, as well as simulations of ecological processes.
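
    The two verification statistics quoted above are standard: the mean bias error (MBE) is signed, so a negative value indicates systematic underestimation, while the root-mean-square error (RMSE) is not. A minimal sketch:

```python
import math

def mbe(pred, obs):
    """Mean bias error: negative means the model underestimates on average."""
    return sum(p - o for p, o in zip(pred, obs)) / len(obs)

def rmse(pred, obs):
    """Root-mean-square error: unsigned overall error magnitude."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))
```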

  5. Knowledge about Human Papillomavirus and Cervical Cancer: Predictors of HPV Vaccination among Dental Students

    PubMed

    Rajiah, Kingston; Maharajan, Mari Kannan; Fang Num, Kelly Sze; How Koh, Raymond Chee

    2017-06-25

    Background: The objective of this study is to determine the influence of dental students’ knowledge and attitude regarding human papillomavirus infection and cervical cancer on willingness to pay for vaccination. Basic research design: A convenience sampling method was used. The minimal sample size of 136 was calculated using the Raosoft calculator with a 5% margin of error and a 95% confidence level. Participants: The study population comprised all final-year dental students from the School of Dentistry. Methods: A self-administered questionnaire was used to measure knowledge levels and attitudes regarding human papillomavirus vaccination. Contingent valuation was conducted for willingness to pay for vaccination. Main outcome measures: The Centers for Disease Control and Prevention has stated that human papillomaviruses are associated with oropharyngeal cancer, and the American Dental Association insists on expanding public awareness of the oncogenic potential of some HPV infections. Thus, as future dental practitioners, dental students should be aware of human papillomaviruses, their links with cancer, and the benefits of vaccination. Results: Knowledge of HPV and cervical cancer did not affect attitudes towards vaccines. However, a significant correlation existed between knowledge and willingness to pay for vaccination. Conclusions: Dental students’ knowledge of HPV and cervical cancer has no influence on their attitude towards HPV vaccines. However, their willingness to pay for HPV vaccination is influenced by their knowledge of cervical cancer and HPV vaccination. Creative Commons Attribution License
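
    The quoted minimum sample size can be reproduced with Cochran's formula plus a finite-population correction, which is what calculators such as Raosoft implement. The population size below (N = 210) is an assumption chosen only to show that the arithmetic lands on roughly 136; the abstract does not state the actual cohort size:

```python
def sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Cochran sample size n0 = z^2 * p * (1 - p) / e^2, reduced by the
    finite-population correction n = n0 / (1 + (n0 - 1) / N)."""
    n0 = z ** 2 * p * (1 - p) / margin ** 2
    return n0 / (1 + (n0 - 1) / population)

# Hypothetical population of 210 final-year students gives ~136.
required_n = sample_size(210)
```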

  6. Systemic errors calibration in dynamic stitching interferometry

    NASA Astrophysics Data System (ADS)

    Wu, Xin; Qi, Te; Yu, Yingjie; Zhang, Linna

    2016-05-01

    Systematic error is the main error source in sub-aperture stitching calculation. In this paper, a systematic error calibration method based on pseudo-shearing is proposed. The method is suitable for dynamic stitching interferometry of large optical flats. Its feasibility is verified by simulations and experiments.

  7. A Quality Improvement Project to Decrease Human Milk Errors in the NICU.

    PubMed

    Oza-Frank, Reena; Kachoria, Rashmi; Dail, James; Green, Jasmine; Walls, Krista; McClead, Richard E

    2017-02-01

    Ensuring safe human milk in the NICU is a complex process with many potential points for error, of which one of the most serious is administration of the wrong milk to the wrong infant. Our objective was to describe a quality improvement initiative that was associated with a reduction in human milk administration errors identified over a 6-year period in a typical, large NICU setting. We employed a quasi-experimental time series quality improvement initiative by using tools from the model for improvement, Six Sigma methodology, and evidence-based interventions. Scanned errors were identified from the human milk barcode medication administration system. Scanned errors of interest were wrong-milk-to-wrong-infant, expired-milk, or preparation errors. The scanned error rate and the impact of additional improvement interventions from 2009 to 2015 were monitored by using statistical process control charts. From 2009 to 2015, the total number of errors scanned declined from 97.1 per 1000 bottles to 10.8. Specifically, the number of expired milk error scans declined from 84.0 per 1000 bottles to 8.9. The number of preparation errors (4.8 per 1000 bottles to 2.2) and wrong-milk-to-wrong-infant errors scanned (8.3 per 1000 bottles to 2.0) also declined. By reducing the number of errors scanned, the number of opportunities for errors also decreased. Interventions that likely had the greatest impact on reducing the number of scanned errors included installation of bedside (versus centralized) scanners and dedicated staff to handle milk. Copyright © 2017 by the American Academy of Pediatrics.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elliott, C.J.; McVey, B.; Quimby, D.C.

    The level of field errors in an FEL is an important determinant of its performance. We have computed the 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code, which now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  9. Studies on time of death estimation in the early post mortem period -- application of a method based on eyeball temperature measurement to human bodies.

    PubMed

    Kaliszan, Michał

    2013-09-01

    This paper presents a verification of the thermodynamic model allowing an estimation of the time of death (TOD) by calculating the post mortem interval (PMI) based on a single eyeball temperature measurement at the death scene. The study was performed on 30 cases with known PMI, ranging from 1h 35min to 5h 15min, using pin probes connected to a high precision electronic thermometer (Dostmann-electronic). The measured eye temperatures ranged from 20.2 to 33.1°C. Rectal temperature was measured at the same time and ranged from 32.8 to 37.4°C. Ambient temperatures, which ranged from -1 to 24°C, environmental conditions (still air to light wind) and the amount of hair on the head were also recorded every time. PMI was calculated using a formula based on Newton's law of cooling, previously derived and successfully tested in comprehensive studies on pigs and a few human cases. Thanks to the significantly faster post mortem decrease of eye temperature, the residual or nonexistent plateau effect in the eye, and the practically negligible influence of body mass, TOD in the human death cases could be estimated with good accuracy. The highest TOD estimation error during post mortem intervals of up to around 5h was 1h 16min, 1h 14min and 1h 03min, respectively, in three cases among the 30, while for the remaining 27 cases it was not more than 47min. The mean error for all 30 cases was ±31min. All this indicates that the proposed method is of quite good precision in the early post mortem period, with an accuracy of ±1h for a 95% confidence interval. On the basis of the presented method, TOD can also be calculated at the death scene with the use of a proposed portable electronic device (TOD-meter). Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
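
    The underlying calculation, inverting Newton's law of cooling T(t) = Ta + (T0 - Ta)·exp(-k·t) for t given a single eye temperature, can be sketched as follows. The initial temperature and cooling constant here are illustrative placeholders, not the calibrated values of the published model:

```python
import math

def pmi_hours(t_eye, t_ambient, t_initial=35.0, k=0.28):
    """Invert Newton's law of cooling, T(t) = Ta + (T0 - Ta) * exp(-k * t),
    for the post mortem interval t in hours. t_initial and k are
    hypothetical placeholders, not the study's calibrated constants."""
    return -math.log((t_eye - t_ambient) / (t_initial - t_ambient)) / k
```

    A round trip through the forward model recovers the elapsed time exactly, which is what makes a single-measurement estimate possible once k has been calibrated for the eye.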

  10. Estimation of daily interfractional larynx residual setup error after isocentric alignment for head and neck radiotherapy: Quality-assurance implications for target volume and organ-at-risk margination using daily CT-on-rails imaging

    PubMed Central

    Baron, Charles A.; Awan, Musaddiq J.; Mohamed, Abdallah S. R.; Akel, Imad; Rosenthal, David I.; Gunn, G. Brandon; Garden, Adam S.; Dyer, Brandon A.; Court, Laurence; Sevak, Parag R; Kocak-Uzel, Esengul; Fuller, Clifton D.

    2016-01-01

    Larynx may alternatively serve as a target or organ-at-risk (OAR) in head and neck cancer (HNC) image-guided radiotherapy (IGRT). The objective of this study was to estimate IGRT parameters required for larynx positional error independent of isocentric alignment and suggest population–based compensatory margins. Ten HNC patients receiving radiotherapy (RT) with daily CT-on-rails imaging were assessed. Seven landmark points were placed on each daily scan. Taking the most superior anterior point of the C5 vertebra as a reference isocenter for each scan, residual displacement vectors to the other 6 points were calculated post-isocentric alignment. Subsequently, using the first scan as a reference, the magnitude of vector differences for all 6 points for all scans over the course of treatment were calculated. Residual systematic and random error, and the necessary compensatory CTV-to-PTV and OAR-to-PRV margins were calculated, using both observational cohort data and a bootstrap-resampled population estimator. The grand mean displacements for all anatomical points was 5.07mm, with mean systematic error of 1.1mm and mean random setup error of 2.63mm, while bootstrapped POIs grand mean displacement was 5.09mm, with mean systematic error of 1.23mm and mean random setup error of 2.61mm. Required margin for CTV-PTV expansion was 4.6mm for all cohort points, while the bootstrap estimator of the equivalent margin was 4.9mm. The calculated OAR-to-PRV expansion for the observed residual set-up error was 2.7mm, and bootstrap estimated expansion of 2.9mm. We conclude that the interfractional larynx setup error is a significant source of RT set-up/delivery error in HNC both when the larynx is considered as a CTV or OAR. We estimate the need for a uniform expansion of 5mm to compensate for set up error if the larynx is a target or 3mm if the larynx is an OAR when using a non-laryngeal bony isocenter. PMID:25679151
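
    The abstract does not name the margin recipes, but the reported numbers are consistent with the standard van Herk CTV-to-PTV formula (2.5Σ + 0.7σ) and the McKenzie OAR-to-PRV formula (1.3Σ + 0.5σ) applied to the quoted systematic (Σ) and random (σ) errors; treat the attribution as an inference, not a statement from the paper:

```python
def ctv_to_ptv_margin(sys_err, rand_err):
    """van Herk CTV-to-PTV margin recipe: 2.5 * Sigma + 0.7 * sigma (mm)."""
    return 2.5 * sys_err + 0.7 * rand_err

def oar_to_prv_margin(sys_err, rand_err):
    """McKenzie OAR-to-PRV margin recipe: 1.3 * Sigma + 0.5 * sigma (mm)."""
    return 1.3 * sys_err + 0.5 * rand_err
```

    Plugging in the cohort values (Σ = 1.1 mm, σ = 2.63 mm) and the bootstrap values (Σ = 1.23 mm, σ = 2.61 mm) reproduces the quoted 4.6/4.9 mm PTV and 2.7/2.9 mm PRV margins.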

  11. Estimation of daily interfractional larynx residual setup error after isocentric alignment for head and neck radiotherapy: quality assurance implications for target volume and organs‐at‐risk margination using daily CT on‐rails imaging

    PubMed Central

    Baron, Charles A.; Awan, Musaddiq J.; Mohamed, Abdallah S.R.; Akel, Imad; Rosenthal, David I.; Gunn, G. Brandon; Garden, Adam S.; Dyer, Brandon A.; Court, Laurence; Sevak, Parag R.; Kocak‐Uzel, Esengul

    2014-01-01

    Larynx may alternatively serve as a target or organs at risk (OAR) in head and neck cancer (HNC) image‐guided radiotherapy (IGRT). The objective of this study was to estimate IGRT parameters required for larynx positional error independent of isocentric alignment and suggest population‐based compensatory margins. Ten HNC patients receiving radiotherapy (RT) with daily CT on‐rails imaging were assessed. Seven landmark points were placed on each daily scan. Taking the most superior‐anterior point of the C5 vertebra as a reference isocenter for each scan, residual displacement vectors to the other six points were calculated postisocentric alignment. Subsequently, using the first scan as a reference, the magnitude of vector differences for all six points for all scans over the course of treatment was calculated. Residual systematic and random error and the necessary compensatory CTV‐to‐PTV and OAR‐to‐PRV margins were calculated, using both observational cohort data and a bootstrap‐resampled population estimator. The grand mean displacements for all anatomical points was 5.07 mm, with mean systematic error of 1.1 mm and mean random setup error of 2.63 mm, while bootstrapped POIs grand mean displacement was 5.09 mm, with mean systematic error of 1.23 mm and mean random setup error of 2.61 mm. Required margin for CTV‐PTV expansion was 4.6 mm for all cohort points, while the bootstrap estimator of the equivalent margin was 4.9 mm. The calculated OAR‐to‐PRV expansion for the observed residual setup error was 2.7 mm and bootstrap estimated expansion of 2.9 mm. We conclude that the interfractional larynx setup error is a significant source of RT setup/delivery error in HNC, both when the larynx is considered as a CTV or OAR. We estimate the need for a uniform expansion of 5 mm to compensate for setup error if the larynx is a target, or 3 mm if the larynx is an OAR, when using a nonlaryngeal bony isocenter. PACS numbers: 87.55.D‐, 87.55.Qr

  12. Analysis of focusing error signals by differential astigmatic method under off-center tracking in the land-groove-type optical disk

    NASA Astrophysics Data System (ADS)

    Shinoda, Masahisa; Nakatani, Hidehiko

    2015-04-01

We theoretically calculate the behavior of the focusing error signal in a land-groove-type optical disk when the objective lens is displaced radially from the disk center, as in off-center tracking. The differential astigmatic method is employed instead of the conventional astigmatic method for generating the focusing error signals. The signal behaviors are compared and analyzed in terms of the gain difference in the slope sensitivity of the focusing error signals from the land and the groove. In our calculation, the format of digital versatile disc-random access memory (DVD-RAM) is adopted as the land-groove-type optical disk model, and advantageous conditions for suppressing the gain difference are investigated. The calculation method and results described in this paper will inform the design of next-generation land-groove-type optical disks.

  13. Human errors and measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Kuselman, Ilya; Pennecchi, Francesca

    2015-04-01

    Evaluating the residual risk of human errors in a measurement and testing laboratory, remaining after the error reduction by the laboratory quality system, and quantifying the consequences of this risk for the quality of the measurement/test results are discussed based on expert judgments and Monte Carlo simulations. A procedure for evaluation of the contribution of the residual risk to the measurement uncertainty budget is proposed. Examples are provided using earlier published sets of expert judgments on human errors in pH measurement of groundwater, elemental analysis of geological samples by inductively coupled plasma mass spectrometry, and multi-residue analysis of pesticides in fruits and vegetables. The human error contribution to the measurement uncertainty budget in the examples was not negligible, yet also not dominant. This was assessed as a good risk management result.
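As a rough illustration of the Monte Carlo approach described above, a residual human-error risk can be propagated into an uncertainty contribution. All numbers below are hypothetical, not the published expert-judgment data:

```python
import random
import statistics

# Illustrative Monte Carlo sketch (hypothetical numbers, not the paper's data):
# a human error of a given kind survives the quality system with residual
# probability p and, when it occurs, shifts the result by a random bias.
# The standard deviation of the simulated shifts estimates that error's
# contribution to the measurement uncertainty budget.

def human_error_contribution(p_error, error_sd, n_trials=100_000, seed=1):
    rng = random.Random(seed)
    shifts = [rng.gauss(0.0, error_sd) if rng.random() < p_error else 0.0
              for _ in range(n_trials)]
    return statistics.pstdev(shifts)

# Hypothetical example: 2% residual risk of an error with SD 0.1 pH units
u_human = human_error_contribution(p_error=0.02, error_sd=0.1)
print(f"estimated contribution: {u_human:.4f} pH units")
```

The analytic value is sqrt(p) x SD, so the simulated contribution should land near 0.014 pH units; whether that is negligible depends on the rest of the budget, as the abstract notes.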

  14. Human factors evaluation of remote afterloading brachytherapy: Human error and critical tasks in remote afterloading brachytherapy and approaches for improved system performance. Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Callan, J.R.; Kelly, R.T.; Quinn, M.L.

    1995-05-01

Remote Afterloading Brachytherapy (RAB) is a medical process used in the treatment of cancer. RAB uses a computer-controlled device to remotely insert and remove radioactive sources close to a target (or tumor) in the body. Some RAB problems affecting the radiation dose to the patient have been reported and attributed to human error. To determine the root cause of human error in the RAB system, a human factors team visited 23 RAB treatment sites in the US. The team observed RAB treatment planning and delivery, interviewed RAB personnel, and performed walk-throughs, during which staff demonstrated the procedures and practices used in performing RAB tasks. Factors leading to human error in the RAB system were identified. The impact of those factors on the performance of RAB was then evaluated and prioritized in terms of safety significance. Finally, the project identified and evaluated alternative approaches for resolving the safety-significant problems related to human error.

  15. Human error analysis of commercial aviation accidents using the human factors analysis and classification system (HFACS)

    DOT National Transportation Integrated Search

    2001-02-01

The Human Factors Analysis and Classification System (HFACS) is a general human error framework originally developed and tested within the U.S. military as a tool for investigating and analyzing the human causes of aviation accidents. Based upon ...

  16. Image based automatic water meter reader

    NASA Astrophysics Data System (ADS)

    Jawas, N.; Indrianto

    2018-01-01

A water meter is used to measure water consumption. It works by utilizing water flow and displays the reading on a mechanical digit counter. In everyday use, an operator manually checks the digit counter periodically and logs the number shown by the water meter to track consumption. This manual operation is time consuming and prone to human error. Therefore, in this paper we propose an automatic water meter digit reader that works from a digital image. The digit sequence is detected by utilizing contour information of the water meter front panel. An OCR method is then used to recognize each digit character. Digit sequence detection is an important part of the overall process and determines the success of the overall system. The results are promising, especially for sequence detection.

  17. SU-E-T-484: In Vivo Dosimetry Tolerances in External Beam Fast Neutron Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, L; Gopan, O

Purpose: Optical stimulated luminescence (OSL) dosimetry with Landauer Al2O3:C nanodots was developed at our institution as a passive in vivo dosimetry (IVD) system for patients treated with fast neutron therapy. The purpose of this study was to establish clinically relevant tolerance limits for detecting treatment errors requiring further investigation. Methods: Tolerance levels were estimated by conducting a series of IVD expected dose calculations for square field sizes ranging between 2.8 and 28.8 cm. For each field size evaluated, doses were calculated for open and internal wedged fields with angles of 30°, 45°, or 60°. Theoretical errors were computed for variations of incorrect beam configurations. Dose errors, defined as the percent difference from the expected dose calculation, were measured with groups of three nanodots placed in a 30 x 30 cm solid water phantom at beam isocenter (150 cm SAD, 1.7 cm Dmax). The tolerances were applied to IVD patient measurements. Results: The overall accuracy of the nanodot measurements is 2–3% for open fields. Measurement errors agreed with calculated errors to within 3%. Theoretical estimates of dosimetric errors showed that IVD measurements with OSL nanodots will detect the absence of an internal wedge or a wrong wedge angle. Incorrect nanodot placement on a wedged field is more likely to be caught if the offset is in the direction of the “toe” of the wedge, where the dose difference is about 12%. Errors caused by an incorrect flattening filter size produced a 2% measurement error that is not detectable by IVD measurement alone. Conclusion: IVD with nanodots will detect treatment errors associated with the incorrect implementation of the internal wedge. The results of this study will streamline the physicists’ investigations in determining the root cause of an IVD reading that is out of normally accepted tolerances.
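The dose-error metric above (percent difference from the expected dose) can be sketched as follows; the 5% action threshold is an illustrative assumption, not a tolerance from the study:

```python
# Sketch of the dose-error metric described above: the percent difference
# between a measured in vivo dose and the expected (calculated) dose, flagged
# against an ASSUMED 5% action level (the abstract reports 2-3% measurement
# accuracy but does not state the final tolerance).

def percent_dose_error(measured_cgy: float, expected_cgy: float) -> float:
    return 100.0 * (measured_cgy - expected_cgy) / expected_cgy

def needs_investigation(measured_cgy, expected_cgy, tolerance_pct=5.0):
    return abs(percent_dose_error(measured_cgy, expected_cgy)) > tolerance_pct

print(percent_dose_error(112.0, 100.0))   # 12.0, e.g. a wedge-offset error
print(needs_investigation(112.0, 100.0))  # True: outside tolerance
print(needs_investigation(102.0, 100.0))  # False: within OSL accuracy
```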

  18. Novel mathematical algorithm for pupillometric data analysis.

    PubMed

    Canver, Matthew C; Canver, Adam C; Revere, Karen E; Amado, Defne; Bennett, Jean; Chung, Daniel C

    2014-01-01

Pupillometry is used clinically to evaluate retinal and optic nerve function by measuring the pupillary response to light stimuli. We have developed a mathematical algorithm to automate and expedite the analysis of non-filtered, non-calculated pupillometric data obtained from mouse pupillary light reflex recordings, i.e., dynamic pupillary diameter recordings following exposure to varying light intensities. The non-filtered, non-calculated pupillometric data are filtered through a low-pass finite impulse response (FIR) filter. Thresholding is used to remove data caused by eye blinking, loss of pupil tracking, and/or head movement. Twelve physiologically relevant parameters were extracted from the collected data: (1) baseline diameter, (2) minimum diameter, (3) response amplitude, (4) re-dilation amplitude, (5) percent of baseline diameter, (6) response time, (7) re-dilation time, (8) average constriction velocity, (9) average re-dilation velocity, (10) maximum constriction velocity, (11) maximum re-dilation velocity, and (12) onset latency. No significant differences were noted between parameters derived from algorithm-calculated values and manually derived results (p ≥ 0.05). This mathematical algorithm will expedite endpoint data derivation and eliminate human error in the manual calculation of pupillometric parameters from non-filtered, non-calculated pupillometric values. Subsequently, these values can be used as reference metrics for characterizing the natural history of retinal disease. Furthermore, it will be instrumental in the assessment of functional visual recovery in humans and pre-clinical models of retinal degeneration and optic nerve disease following pharmacological or gene-based therapies. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
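A minimal sketch of how a few of the listed parameters could be computed from a diameter trace is shown below; the function, sampling scheme, and synthetic data are illustrative, not the published algorithm (which also includes FIR filtering and blink thresholding):

```python
# Minimal sketch of a few of the twelve parameters listed above, computed from
# a SYNTHETIC pupil-diameter trace sampled at a fixed rate. The names and the
# simple min-based logic are illustrative, not the published algorithm.

def pupil_parameters(diam_mm, fs_hz, stim_idx):
    """diam_mm: pupil diameter samples; stim_idx: index of light onset."""
    baseline = sum(diam_mm[:stim_idx]) / stim_idx          # (1) baseline diameter
    minimum = min(diam_mm[stim_idx:])                      # (2) minimum diameter
    min_idx = stim_idx + diam_mm[stim_idx:].index(minimum)
    return {
        "baseline_mm": baseline,
        "min_mm": minimum,
        "response_amplitude_mm": baseline - minimum,       # (3) response amplitude
        "percent_of_baseline": 100.0 * minimum / baseline, # (5) % of baseline
        "response_time_s": (min_idx - stim_idx) / fs_hz,   # (6) response time
    }

# Synthetic trace: 2.0 mm baseline, constriction to 1.2 mm, sampled at 10 Hz
trace = [2.0] * 10 + [1.8, 1.6, 1.4, 1.3, 1.2, 1.3, 1.5, 1.7, 1.9, 2.0]
print(pupil_parameters(trace, fs_hz=10, stim_idx=10))
```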

  19. Age estimation based on pulp chamber volume of first molars from cone-beam computed tomography images.

    PubMed

    Ge, Zhi-pu; Ma, Ruo-han; Li, Gang; Zhang, Ji-zong; Ma, Xu-chen

    2015-08-01

To establish a method for human age estimation based on the pulp chamber volume of first molars, and to identify whether the method is good enough for age estimation in real human cases. CBCT images of 373 maxillary first molars and 372 mandibular first molars were collected to establish the mathematical model, from 190 female and 213 male patients whose ages were between 12 and 69 years. The inclusion criteria for the first molars were: no caries, no excessive tooth wear, no dental restorations, no artifacts due to metal restorative materials present in adjacent teeth, and no pulpal calcification. All the CBCT images were acquired with a CBCT unit NewTom VG (Quantitative Radiology, Verona, Italy) and reconstructed with a voxel size of 0.15 mm. The images were subsequently exported as DICOM data sets and imported into an open-source 3D image semi-automatic segmenting and voxel-counting software, ITK-SNAP 2.4, for the calculation of pulp chamber volumes. A logarithmic regression analysis was conducted with age as the dependent variable and pulp chamber volume as the independent variable to establish a mathematical model for human age estimation. To identify the precision and accuracy of the model, another 104 maxillary first molars and 103 mandibular first molars were collected from 55 female and 57 male patients whose ages were between 12 and 67 years. Mean absolute error and root mean square error between the actual age and the estimated age were used to determine the precision and accuracy of the mathematical model. The study was approved by the Institutional Review Board of Peking University School and Hospital of Stomatology. The suggested mathematical model was: AGE = 117.691 − 26.442 × ln(pulp chamber volume). The regression was statistically significant (p < 0.01). The coefficient of determination (R²) was 0.564.
The mean absolute error was 8.122 and the root mean square error 5.603 between the actual age and the estimated age for all the tested teeth. The pulp chamber volume of the first molar is a useful index for the estimation of human age with reasonable precision and accuracy. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
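The reported regression model can be applied directly; the example volume below is hypothetical:

```python
import math

# The regression model reported above, applied to a HYPOTHETICAL pulp chamber
# volume (mm^3 is assumed as the unit; the abstract does not state it).

def estimate_age(pulp_chamber_volume: float) -> float:
    """AGE = 117.691 - 26.442 * ln(pulp chamber volume)."""
    return 117.691 - 26.442 * math.log(pulp_chamber_volume)

print(round(estimate_age(30.0), 1))  # ~27.8 years for a 30 mm^3 chamber
```

Because the model is logarithmic, the estimate is far more sensitive to volume changes in small (older, calcified) chambers than in large ones.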

  20. Calculating Interaction Energies Using First Principle Theories: Consideration of Basis Set Superposition Error and Fragment Relaxation

    ERIC Educational Resources Information Center

    Bowen, J. Philip; Sorensen, Jennifer B.; Kirschner, Karl N.

    2007-01-01

The analysis explains the basis set superposition error (BSSE) and fragment relaxation involved in calculating interaction energies using various first-principle theories. Treating the interacting fragments at a correlated level and increasing the size of the basis set can help decrease the BSSE to a great extent.
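The abstract does not name a specific BSSE-correction scheme; for orientation, the standard counterpoise correction of Boys and Bernardi estimates the BSSE-free interaction energy by evaluating each monomer in the full dimer basis:

```latex
% Counterpoise-corrected interaction energy (standard Boys-Bernardi form;
% shown for orientation, not taken from the abstract):
\Delta E_{\text{int}}^{\text{CP}} = E_{AB}^{AB} - E_{A}^{AB} - E_{B}^{AB}
% where E_{X}^{Y} is the energy of system X computed in basis Y: the monomer
% terms use the full dimer basis (ghost functions), which removes the BSSE.
```

Fragment relaxation is then accounted for separately, by adding the energy cost of deforming each monomer from its free geometry to its in-complex geometry.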

  1. Progress in the improved lattice calculation of direct CP-violation in the Standard Model

    NASA Astrophysics Data System (ADS)

    Kelly, Christopher

    2018-03-01

    We discuss the ongoing effort by the RBC & UKQCD collaborations to improve our lattice calculation of the measure of Standard Model direct CP violation, ɛ', with physical kinematics. We present our progress in decreasing the (dominant) statistical error and discuss other related activities aimed at reducing the systematic errors.

  2. Calculation of lens alignment errors using the ray transfer matrices for the lens assembly system with an autocollimator and a rotation stage

    NASA Astrophysics Data System (ADS)

    Chu, Jiyoung; Cho, Sungwhi; Joo, Won Don; Jang, Sangdon

    2017-08-01

    One of the most popular methods for high precision lens assembly of an optical system is using an autocollimator and a rotation stage. Some companies provide software for calculating the state of the lens along with their lens assembly systems, but the calculation algorithms used by the software are unknown. In this paper, we suggest a calculation method for lens alignment errors using ray transfer matrices. Alignment errors resulting from tilting and decentering of a lens element can be calculated from the tilts of the front and back surfaces of the lens. The tilt of each surface can be obtained from the position of the reticle image on the CCD camera of the autocollimator. Rays from a reticle of the autocollimator are reflected from the target surface of the lens, which rotates with the rotation stage, and are imaged on the CCD camera. To obtain a clear image, the distance between the autocollimator and the first lens surface should be adjusted according to the focusing lens of the autocollimator and the lens surfaces from the first to the target surface. Ray propagations for the autocollimator and the tilted lens surfaces can be expressed effectively by using ray transfer matrices and lens alignment errors can be derived from them. This method was compared with Zemax simulation for various lenses with spherical or flat surfaces and the error was less than a few percent.
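A minimal sketch of the ray-transfer (ABCD) matrix formalism the paper builds on: a paraxial ray (height y, angle u) is propagated by multiplying 2x2 matrices for each element. The paper's matrices for the autocollimator optics and tilted surfaces are more involved; the free-space and thin-lens matrices below are the textbook forms, with illustrative values.

```python
# Ray-transfer (ABCD) matrix sketch: textbook free-space and thin-lens
# matrices, composed right-to-left in propagation order. Values illustrative.

def mat_mul(m, n):
    """2x2 matrix product m @ n."""
    return [[m[0][0]*n[0][0] + m[0][1]*n[1][0], m[0][0]*n[0][1] + m[0][1]*n[1][1]],
            [m[1][0]*n[0][0] + m[1][1]*n[1][0], m[1][0]*n[0][1] + m[1][1]*n[1][1]]]

def free_space(d_mm):
    return [[1.0, d_mm], [0.0, 1.0]]

def thin_lens(f_mm):
    return [[1.0, 0.0], [-1.0 / f_mm, 1.0]]

# Collimated ray (u = 0) through a 100 mm lens, then 100 mm of free space:
system = mat_mul(free_space(100.0), thin_lens(100.0))
y, u = 5.0, 0.0
y_out = system[0][0] * y + system[0][1] * u
print(y_out)  # ~0: the collimated ray crosses the axis at the focal plane
```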

  3. Eyewitness identification evidence and innocence risk.

    PubMed

    Clark, Steven E; Godfrey, Ryan D

    2009-02-01

    It is well known that the frailties of human memory and vulnerability to suggestion lead to eyewitness identification errors. However, variations in different aspects of the eyewitnessing conditions produce different kinds of errors that are related to wrongful convictions in very different ways. We present a review of the eyewitness identification literature, organized around underlying cognitive mechanisms, memory, similarity, and decision processes, assessing the effects on both correct and mistaken identification. In addition, we calculate a conditional probability we call innocence risk, which is the probability that the suspect is innocent, given that the suspect was identified. Assessment of innocence risk is critical to the theoretical development of eyewitness identification research, as well as to legal decision making and policy evaluation. Our review shows a complex relationship between misidentification and innocence risk, sheds light on some areas of controversy, and suggests that some issues thought to be resolved are in need of additional research.
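The innocence-risk quantity described above is a conditional probability and can be sketched with Bayes' rule; all rates below are hypothetical, not values from the review:

```python
# Sketch of the "innocence risk" described above: P(innocent | identified),
# via Bayes' rule. All rates are HYPOTHETICAL, not values from the review.

def innocence_risk(p_innocent, p_id_given_innocent, p_id_given_guilty):
    p_guilty = 1.0 - p_innocent
    p_id = p_id_given_innocent * p_innocent + p_id_given_guilty * p_guilty
    return p_id_given_innocent * p_innocent / p_id

# Hypothetical rates: 20% of suspects innocent, 10% mistaken-identification
# rate for innocent suspects, 60% correct-identification rate for guilty ones.
risk = innocence_risk(0.20, 0.10, 0.60)
print(round(risk, 3))  # 0.04: 4% of identified suspects are innocent
```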

  4. Automatic and semi-automatic approaches for arteriolar-to-venular computation in retinal photographs

    NASA Astrophysics Data System (ADS)

    Mendonça, Ana Maria; Remeseiro, Beatriz; Dashtbozorg, Behdad; Campilho, Aurélio

    2017-03-01

The Arteriolar-to-Venular Ratio (AVR) is a popular dimensionless measure which allows the assessment of patients' condition for the early diagnosis of different diseases, including hypertension and diabetic retinopathy. This paper presents two new approaches for AVR computation in retinal photographs which include a sequence of automated processing steps: vessel segmentation, caliber measurement, optic disc segmentation, artery/vein classification, region of interest delineation, and AVR calculation. Both approaches have been tested on the INSPIRE-AVR dataset, and compared with a ground-truth provided by two medical specialists. The obtained results demonstrate the reliability of the fully automatic approach, which provides AVR values very similar to those of at least one of the observers. Furthermore, the semi-automatic approach, which includes manual correction of the artery/vein classification if needed, makes it possible to reduce the error significantly, to a level below the human error.

  5. Acousto-thermometric recovery of the deep temperature profile using heat conduction equations

    NASA Astrophysics Data System (ADS)

    Anosov, A. A.; Belyaev, R. V.; Vilkov, V. A.; Dvornikova, M. V.; Dvornikova, V. V.; Kazanskii, A. S.; Kuryatnikova, N. A.; Mansfel'd, A. D.

    2012-09-01

    In a model experiment using the acousto-thermographic method, deep temperature profiles varying in time are recovered. In the recovery algorithm, we used a priori information in the form of a requirement that the calculated temperature must satisfy the heat conduction equation. The problem is reduced to determining two parameters: the initial temperature and the temperature conductivity coefficient of the object under consideration (the plasticine band). During the experiment, there was independent inspection using electronic thermometers mounted inside the plasticine. The error in the temperature conductivity coefficient was about 17% and the error in initial temperature determination was less than one degree. Such recovery results allow application of this approach to solving a number of medical problems. It is experimentally proved that acoustic irregularities influence the acousto-thermometric results as well. It is shown that in the chosen scheme of experiment (which corresponds to measurements of human muscle tissue), this influence can be neglected.

  6. The decline and fall of Type II error rates

    Treesearch

    Steve Verrill; Mark Durst

    2005-01-01

    For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.
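The claim above can be illustrated for a one-sided z-test of a normal mean, where the Type II error rate is available in closed form (a textbook special case, not the paper's general linear model result):

```python
import math

# Illustration of the claim above for a one-sided z-test of a normal mean:
# the Type II error rate beta(n) falls roughly exponentially in the sample
# size n. Here effect = delta/sigma and alpha = 0.05 (z_alpha ~ 1.645).

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def type_ii_error(n, effect, z_alpha=1.645):
    return phi(z_alpha - math.sqrt(n) * effect)

for n in (10, 40, 90, 160):
    print(n, round(type_ii_error(n, effect=0.5), 6))
```

For a half-SD effect, beta drops from roughly one half at n = 10 to essentially zero by n = 160, which is why power calculations matter even for modest designs.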

  7. The MiAge Calculator: a DNA methylation-based mitotic age calculator of human tissue types.

    PubMed

    Youn, Ahrim; Wang, Shuang

    2018-01-01

Cell division is important in human aging and cancer. The estimation of the number of cell divisions (mitotic age) of a given tissue type in individuals is of great interest as it allows not only the study of biological aging (using a new molecular aging target) but also the stratification of prospective cancer risk. Here, we introduce the MiAge Calculator, a mitotic age calculator based on a novel statistical framework, the MiAge model. MiAge is designed to quantitatively estimate mitotic age (total number of lifetime cell divisions) of a tissue using the stochastic replication errors accumulated in the epigenetic inheritance process during cell divisions. With the MiAge model, the MiAge Calculator was built using the training data of DNA methylation measures of 4,020 tumor and adjacent normal tissue samples from eight TCGA cancer types and was tested using the testing data of DNA methylation measures of 2,221 tumor and adjacent normal tissue samples of five other TCGA cancer types. We showed that within each of the thirteen cancer types studied, the estimated mitotic age is universally accelerated in tumor tissues compared to adjacent normal tissues. Across the thirteen cancer types, we showed that worse cancer survivals are associated with more accelerated mitotic age in tumor tissues. Importantly, we demonstrated the utility of mitotic age by showing that the integration of mitotic age and clinical information leads to improved survival prediction in six out of the thirteen cancer types studied. The MiAge Calculator is available at http://www.columbia.edu/~sw2206/softwares.htm.

  8. XCO2 Retrieval Errors from a PCA-based Approach to Fast Radiative Transfer

    NASA Astrophysics Data System (ADS)

    Somkuti, Peter; Boesch, Hartmut; Natraj, Vijay; Kopparla, Pushkar

    2017-04-01

Multiple-scattering radiative transfer (RT) calculations are an integral part of forward models used to infer greenhouse gas concentrations in the shortwave-infrared spectral range from satellite missions such as GOSAT or OCO-2. Such calculations are, however, computationally expensive and, combined with the recent growth in data volume, necessitate the use of acceleration methods in order to make retrievals feasible on an operational level. The principal component analysis (PCA)-based approach to fast radiative transfer introduced by Natraj et al. 2005 is a spectral binning method, in which the many line-by-line monochromatic calculations are replaced by a small set of representative ones. From the PCA performed on the optical layer properties for a scene-dependent atmosphere, the results of the representative calculations are mapped onto all spectral points in the given band. Since this RT scheme is an approximation, the computed top-of-atmosphere radiances exhibit errors compared to the "full" line-by-line calculation. These errors ultimately propagate into the final retrieved greenhouse gas concentrations, and their magnitude depends on scene-dependent parameters such as aerosol loadings or viewing geometry. An advantage of this method is the ability to choose the degree of accuracy by increasing or decreasing the number of empirical orthogonal functions used for the reconstruction of the radiances. We have performed a large set of global simulations based on real GOSAT scenes and assess the retrieval errors induced by the fast RT approximation through linear error analysis. We find that across a wide range of geophysical parameters, the errors are for the most part smaller than ± 0.2 ppm and ± 0.06 ppm (out of roughly 400 ppm) for ocean and land scenes respectively.
A fast RT scheme that produces low errors is important, since regional biases in XCO2 even in the low sub-ppm range can cause significant changes in carbon fluxes obtained from inversions (Chevallier et al. 2007).

  9. Underestimation of Low-Dose Radiation in Treatment Planning of Intensity-Modulated Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jang, Si Young; Liu, H. Helen; Mohan, Radhe

    2008-08-01

Purpose: To investigate potential dose calculation errors in the low-dose regions and identify causes of such errors for intensity-modulated radiotherapy (IMRT). Methods and Materials: The IMRT treatment plans of 23 patients with lung cancer and mesothelioma were reviewed. Of these patients, 15 had severe pulmonary complications after radiotherapy. Two commercial treatment-planning systems (TPSs) and a Monte Carlo system were used to calculate and compare dose distributions and dose-volume parameters of the target volumes and critical structures. The effect of tissue heterogeneity, multileaf collimator (MLC) modeling, beam modeling, and other factors that could contribute to the differences in IMRT dose calculations were analyzed. Results: In the commercial TPS-generated IMRT plans, dose calculation errors primarily occurred in the low-dose regions of IMRT plans (<50% of the radiation dose prescribed for the tumor). Although errors in the dose-volume histograms of the normal lung were small (<5%) above 10 Gy, underestimation of dose <10 Gy was found to be up to 25% in patients with mesothelioma or large target volumes. These errors were found to be caused by inadequate modeling of MLC transmission and leaf scatter in commercial TPSs. The degree of low-dose errors depends on the target volumes and the degree of intensity modulation. Conclusions: Secondary radiation from MLCs contributes a significant portion of low dose in IMRT plans. Dose underestimation could occur in conventional IMRT dose calculations if such low-dose radiation is not properly accounted for.

  10. Novel wave intensity analysis of arterial pulse wave propagation accounting for peripheral reflections

    PubMed Central

    Alastruey, Jordi; Hunt, Anthony A E; Weinberg, Peter D

    2014-01-01

    We present a novel analysis of arterial pulse wave propagation that combines traditional wave intensity analysis with identification of Windkessel pressures to account for the effect on the pressure waveform of peripheral wave reflections. Using haemodynamic data measured in vivo in the rabbit or generated numerically in models of human compliant vessels, we show that traditional wave intensity analysis identifies the timing, direction and magnitude of the predominant waves that shape aortic pressure and flow waveforms in systole, but fails to identify the effect of peripheral reflections. These reflections persist for several cardiac cycles and make up most of the pressure waveform, especially in diastole and early systole. Ignoring peripheral reflections leads to an erroneous indication of a reflection-free period in early systole and additional error in the estimates of (i) pulse wave velocity at the ascending aorta given by the PU–loop method (9.5% error) and (ii) transit time to a dominant reflection site calculated from the wave intensity profile (27% error). These errors decreased to 1.3% and 10%, respectively, when accounting for peripheral reflections. Using our new analysis, we investigate the effect of vessel compliance and peripheral resistance on wave intensity, peripheral reflections and reflections originating in previous cardiac cycles. PMID:24132888

  11. Illusory conjunctions reflect the time course of the attentional blink.

    PubMed

    Botella, Juan; Privado, Jesús; de Liaño, Beatriz Gil-Gómez; Suero, Manuel

    2011-07-01

    Illusory conjunctions in the time domain are binding errors for features from stimuli presented sequentially but in the same spatial position. A similar experimental paradigm is employed for the attentional blink (AB), an impairment of performance for the second of two targets when it is presented 200-500 msec after the first target. The analysis of errors along the time course of the AB allows the testing of models of illusory conjunctions. In an experiment, observers identified one (control condition) or two (experimental condition) letters in a specified color, so that illusory conjunctions in each response could be linked to specific positions in the series. Two items in the target colors (red and white, embedded in distractors of different colors) were employed in four conditions defined according to whether both targets were in the same or different colors. Besides the U-shaped function for hits, the errors were analyzed by calculating several response parameters reflecting characteristics such as the average position of the responses or the attentional suppression during the blink. The several error parameters cluster in two time courses, as would be expected from prevailing models of the AB. Furthermore, the results match the predictions from Botella, Barriopedro, and Suero's (Journal of Experimental Psychology: Human Perception and Performance, 27, 1452-1467, 2001) model for illusory conjunctions.

  12. Measurement uncertainty associated with chromatic confocal profilometry for 3D surface texture characterization of natural human enamel.

    PubMed

    Mullan, F; Bartlett, D; Austin, R S

    2017-06-01

To investigate the measurement performance of a chromatic confocal profilometer for quantification of the surface texture of natural human enamel in vitro. Contributions to the measurement uncertainty from all potential sources of measurement error using a chromatic confocal profilometer and surface metrology software were quantified using a series of surface metrology calibration artifacts and pre-worn enamel samples. The 3D surface texture analysis protocol was optimized across 0.04 mm² of natural and unpolished enamel undergoing dietary acid erosion (pH 3.2, titratable acidity 41.3 mmol OH/L). Flatness deviations due to the x, y stage mechanical movement were the major contribution to the measurement uncertainty, with maximum Sz flatness errors of 0.49 μm, whereas measurement noise, non-linearities in x, y, z, and enamel sample dimensional instability contributed minimal errors. The measurement errors were propagated into an uncertainty budget following a Type B uncertainty evaluation in order to calculate the Standard Combined Uncertainty (u_c), which was ±0.28 μm. Statistically significant increases in the median (IQR) roughness (Sa) of the polished samples occurred after 15 (+0.17 (0.13) μm), 30 (+0.12 (0.09) μm) and 45 (+0.18 (0.15) μm) min of erosion (P<0.001 vs. baseline). In contrast, natural unpolished enamel samples revealed a statistically significant decrease in Sa roughness of −0.14 (0.34) μm only after 45 min of erosion (P<0.05 vs. baseline). The main contribution to measurement uncertainty using chromatic confocal profilometry was from flatness deviations; however, by optimizing measurement protocols the profilometer successfully characterized surface texture changes in enamel from erosive wear in vitro. Copyright © 2017 The Academy of Dental Materials. All rights reserved.
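A Type B uncertainty budget of this kind combines individual contributions by root-sum-of-squares (GUM-style). The contribution values below are illustrative standard uncertainties chosen to land near the reported u_c, not the paper's actual budget:

```python
import math

# Sketch of combining Type B uncertainty contributions by root-sum-of-squares,
# as in a GUM-style uncertainty budget. The individual values are HYPOTHETICAL
# standard uncertainties, not the paper's budget, though they yield a combined
# value near the reported +/-0.28 um.

def combined_standard_uncertainty(contributions_um):
    return math.sqrt(sum(u * u for u in contributions_um))

budget_um = {
    "stage flatness": 0.26,       # dominant term, as in the paper
    "measurement noise": 0.05,
    "z non-linearity": 0.06,
    "sample instability": 0.05,
}
u_c = combined_standard_uncertainty(budget_um.values())
print(round(u_c, 2))  # ~0.28 um, dominated by the flatness term
```

Because contributions add in quadrature, the dominant term (stage flatness here) sets the budget almost single-handedly, which is exactly the paper's finding.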

  13. Complete Tri-Axis Magnetometer Calibration with a Gyro Auxiliary

    PubMed Central

    Yang, Deng; You, Zheng; Li, Bin; Duan, Wenrui; Yuan, Binwen

    2017-01-01

    Magnetometers combined with inertial sensors are widely used for orientation estimation, and calibrations are necessary to achieve high accuracy. This paper presents a complete tri-axis magnetometer calibration algorithm with a gyro auxiliary. The magnetic distortions and sensor errors, including the misalignment error between the magnetometer and assembled platform, are compensated after calibration. With the gyro auxiliary, the magnetometer linear interpolation outputs are calculated, and the error parameters are evaluated under linear operations of magnetometer interpolation outputs. The simulation and experiment are performed to illustrate the efficiency of the algorithm. After calibration, the heading errors calculated by magnetometers are reduced to 0.5° (1σ). This calibration algorithm can also be applied to tri-axis accelerometers whose error model is similar to tri-axis magnetometers. PMID:28587115

  14. Error Analysis of Wind Measurements for the University of Illinois Sodium Doppler Temperature System

    NASA Technical Reports Server (NTRS)

    Pfenninger, W. Matthew; Papen, George C.

    1992-01-01

Four-frequency lidar measurements of temperature and wind velocity require accurate frequency tuning to an absolute reference and long-term frequency stability. We quantify frequency tuning errors for the Illinois sodium system, which uses a sodium vapor cell to measure absolute frequencies and a reference interferometer to measure relative frequencies. To determine laser tuning errors, we monitor the vapor cell and interferometer during lidar data acquisition and analyze the two signals for variations as functions of time. Both the sodium cell and the interferometer are the same as those used to frequency tune the laser. By quantifying the frequency variations of the laser during data acquisition, an error analysis of temperature and wind measurements can be calculated. These error bounds determine the confidence in the calculated temperatures and wind velocities.

  15. The development of a public optometry system in Mozambique: a Cost Benefit Analysis.

    PubMed

    Thompson, Stephen; Naidoo, Kovin; Harris, Geoff; Bilotto, Luigi; Ferrão, Jorge; Loughman, James

    2014-09-23

    The economic burden of uncorrected refractive error (URE) is thought to be high in Mozambique, largely as a consequence of the lack of resources and systems to tackle this largely avoidable problem. The Mozambique Eyecare Project (MEP) has established the first optometry training and human resource deployment initiative to address the burden of URE in Lusophone Africa. The nature of the MEP programme provides the opportunity to determine, using Cost Benefit Analysis (CBA), whether investing in the establishment and delivery of a comprehensive system for optometry human resource development and public sector deployment is economically justifiable for Lusophone Africa. A CBA methodology was applied across the period 2009-2049. Costs associated with establishing and operating a school of optometry, and a programme to address uncorrected refractive error, were included. Benefits were calculated using a human capital approach to valuing sight. Disability weightings from the Global Burden of Disease study were applied. Costs were subtracted from benefits to provide the net societal benefit, which was discounted to provide the net present value using a 3% discount rate. Using the most recently published disability weightings, the potential exists, through the correction of URE in 24.3 million potentially economically productive persons, to achieve a net present value societal benefit of up to $1.1 billion by 2049, at a Benefit-Cost ratio of 14:1. When CBA assumptions are varied as part of the sensitivity analysis, the results suggest the societal benefit could lie in the range of $649 million to $9.6 billion by 2049. This study demonstrates that a programme designed to address the burden of refractive error in Mozambique is economically justifiable in terms of the increased productivity that would result due to its implementation.
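The CBA arithmetic described above (discounting at 3%, net societal benefit, benefit-cost ratio) can be sketched as follows; the cash-flow numbers are hypothetical, not the study's data:

```python
# Sketch of the CBA arithmetic described above: yearly benefits and costs are
# discounted at 3% and combined into a net present value (NPV) and a
# benefit-cost ratio. The cash flows are HYPOTHETICAL, not the study's data.

def present_value(flows, rate=0.03):
    """flows[t] is the amount in year t (year 0 undiscounted)."""
    return sum(x / (1.0 + rate) ** t for t, x in enumerate(flows))

costs = [10.0] * 5 + [2.0] * 35     # heavy setup, then operating costs ($m)
benefits = [0.0] * 5 + [30.0] * 35  # productivity gains once graduates deploy

pv_benefits = present_value(benefits)
pv_costs = present_value(costs)
print(round(pv_benefits - pv_costs, 1))  # net present value ($m)
print(round(pv_benefits / pv_costs, 1))  # benefit-cost ratio
```

Note how discounting compresses benefits that arrive decades later, which is why the study's sensitivity analysis spans such a wide range of outcomes.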

  16. Hartmann Testing of X-Ray Telescopes

    NASA Technical Reports Server (NTRS)

    Saha, Timo T.; Biskasch, Michael; Zhang, William W.

    2013-01-01

    Hartmann testing of x-ray telescopes is a simple test method to retrieve and analyze alignment errors and low-order circumferential errors of x-ray telescopes and their components. A narrow slit is scanned along the circumference of the telescope in front of the mirror and the centroids of the images are calculated. From the centroid data, alignment errors, radius variation errors, and cone-angle variation errors can be calculated. Mean cone angle, mean radial height (average radius), and the focal length of the telescope can also be estimated if the centroid data are measured at multiple focal plane locations. In this paper we present the basic equations that are used in the analysis process. These equations can be applied to full-circumference or segmented x-ray telescopes. We use the Optical Surface Analysis Code (OSAC) to model a segmented x-ray telescope and show that the derived equations and accompanying analysis retrieve the alignment errors and low-order circumferential errors accurately.
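One way to picture the centroid analysis: a decenter-type alignment error produces a once-around (cos θ, sin θ) variation of the centroids measured along the circumference, so low-order terms can be recovered by least squares. The sketch below is an illustrative fit under that assumption, not the OSAC equations from the paper:

```python
import numpy as np

# Hedged sketch: fit low-order circumferential terms to centroid data taken at
# azimuthal slit positions theta. A tilt/decenter-type alignment error appears
# as a once-around (cos/sin) variation; the constant term relates to the mean
# radial height. Illustrative only, not the paper's actual equations.

def fit_low_order(theta, centroid):
    """Least-squares fit of centroid(theta) = a0 + a1*cos(theta) + b1*sin(theta)."""
    A = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
    coeffs, *_ = np.linalg.lstsq(A, centroid, rcond=None)
    return coeffs  # [a0, a1, b1]

theta = np.linspace(0, 2 * np.pi, 36, endpoint=False)
data = 0.5 + 2.0 * np.cos(theta) - 1.0 * np.sin(theta)  # synthetic centroids
a0, a1, b1 = fit_low_order(theta, data)
```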

  17. Systematic sparse matrix error control for linear scaling electronic structure calculations.

    PubMed

    Rubensson, Emanuel H; Sałek, Paweł

    2005-11-30

    Efficient truncation criteria used in multiatom blocked sparse matrix operations for ab initio calculations are proposed. As system size increases, so does the need to control errors while still achieving high performance. A variant of a blocked sparse matrix algebra that achieves strict error control with good performance is proposed. The idea presented is that the condition to drop a certain submatrix should depend not only on the magnitude of that particular submatrix, but also on which other submatrices are dropped. The decision to remove a certain submatrix is based on the contribution the removal would make to the error in the chosen norm. We study the effect of an accumulated truncation error in iterative algorithms like trace-correcting density matrix purification. One way to reduce the initial exponential growth of this error is presented. The presented error control for a sparse blocked matrix toolbox allows optimal performance to be achieved by performing only the operations needed to maintain the requested level of accuracy. Copyright 2005 Wiley Periodicals, Inc.
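The dropping criterion can be sketched in a few lines: submatrices are removed smallest-norm-first, and the decision for each block accounts for the error already accumulated from previously dropped blocks rather than a per-block cutoff. A minimal illustration in the Frobenius norm (function and variable names are mine, not the toolbox's):

```python
import numpy as np

# Hedged sketch of the paper's idea: whether a block is dropped depends on the
# accumulated error of *all* dropped blocks, not on a per-block magnitude
# threshold. Blocks are removed smallest-norm-first while the Frobenius norm of
# the total truncation error stays below the requested threshold.

def truncate_blocks(blocks, threshold):
    """blocks: dict key -> ndarray submatrix. Returns the set of keys to drop."""
    norms = sorted((np.linalg.norm(b), k) for k, b in blocks.items())
    dropped, err_sq = set(), 0.0
    for n, k in norms:
        if err_sq + n * n <= threshold ** 2:  # total error, squared Frobenius norm
            err_sq += n * n
            dropped.add(k)
        else:
            break
    return dropped

blocks = {"a": np.array([[0.1]]), "b": np.array([[0.2]]), "c": np.array([[1.0]])}
dropped = truncate_blocks(blocks, threshold=0.25)
```

With a naive per-block cutoff of 0.25, both small blocks would be dropped independently even if their combined error exceeded the tolerance; here the accumulated norm is what is controlled.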

  18. Compound Stimulus Presentation Does Not Deepen Extinction in Human Causal Learning

    PubMed Central

    Griffiths, Oren; Holmes, Nathan; Westbrook, R. Fred

    2017-01-01

    Models of associative learning have proposed that cue-outcome learning critically depends on the degree of prediction error encountered during training. Two experiments examined the role of error-driven extinction learning in a human causal learning task. Target cues underwent extinction in the presence of additional cues, which differed in the degree to which they predicted the outcome, thereby manipulating outcome expectancy and, in the absence of any change in reinforcement, prediction error. These prediction error manipulations have each been shown to modulate extinction learning in aversive conditioning studies. While both manipulations resulted in increased prediction error during training, neither enhanced extinction in the present human learning task (one manipulation resulted in less extinction at test). The results are discussed with reference to the types of associations that are regulated by prediction error, the types of error terms involved in their regulation, and how these interact with parameters involved in training. PMID:28232809

  19. Formaldehyde Distribution over North America: Implications for Satellite Retrievals of Formaldehyde Columns and Isoprene Emission

    NASA Technical Reports Server (NTRS)

    Millet, Dylan B.; Jacob, Daniel J.; Turquety, Solene; Hudman, Rynda C.; Wu, Shiliang; Anderson, Bruce E.; Fried, Alan; Walega, James; Heikes, Brian G.; Blake, Donald R.

    2006-01-01

    Formaldehyde (HCHO) columns measured from space provide constraints on emissions of volatile organic compounds (VOCs). Quantitative interpretation requires characterization of errors in HCHO column retrievals and relating these columns to VOC emissions. Retrieval error is mainly in the air mass factor (AMF) which relates fitted backscattered radiances to vertical columns and requires external information on HCHO, aerosols, and clouds. Here we use aircraft data collected over North America and the Atlantic to determine the local relationships between HCHO columns and VOC emissions, calculate AMFs for HCHO retrievals, assess the errors in deriving AMFs with a chemical transport model (GEOS-Chem), and draw conclusions regarding space-based mapping of VOC emissions. We show that isoprene drives observed HCHO column variability over North America; HCHO column data from space can thus be used effectively as a proxy for isoprene emission. From observed HCHO and isoprene profiles we find an HCHO molar yield from isoprene oxidation of 1.6 ± 0.5, consistent with current chemical mechanisms. Clouds are the primary error source in the AMF calculation; errors in the HCHO vertical profile and aerosols have comparatively little effect. The mean bias and 1σ uncertainty in the GEOS-Chem AMF calculation increase from <1% and 15% for clear skies to 17% and 24% for half-cloudy scenes. With fitting errors, this gives an overall 1σ error in HCHO satellite measurements of 25-31%. Retrieval errors, combined with uncertainties in the HCHO yield from isoprene oxidation, result in a 40% (1σ) error in inferring isoprene emissions from HCHO satellite measurements.

  20. Comparison of Accuracy in Intraocular Lens Power Calculation by Measuring Axial Length with Immersion Ultrasound Biometry and Partial Coherence Interferometry.

    PubMed

    Ruangsetakit, Varee

    2015-11-01

    To re-examine the relative accuracy of intraocular lens (IOL) power calculation by immersion ultrasound biometry (IUB) and partial coherence interferometry (PCI), based on a new approach that limits its interest to the cases in which the IUB and PCI IOL assignments disagree. Prospective observational study of 108 eyes that underwent cataract surgery at Taksin Hospital. Two halves of the randomly chosen sample eyes were implanted with the IUB- and PCI-assigned lenses, respectively. Postoperative refractive errors were measured in the fifth week. More accurate calculation was based on significantly smaller mean absolute errors (MAEs) and root mean squared errors (RMSEs) away from emmetropia. The distributions of the errors were examined to ensure that the higher accuracy was significant clinically as well. The (MAEs, RMSEs) were smaller for PCI, at (0.5106 diopter (D), 0.6037D), than for IUB, at (0.7000D, 0.8062D). The higher accuracy was principally contributed by negative errors, i.e., myopia. The MAEs and RMSEs for (IUB, PCI)'s negative errors were (0.7955D, 0.5185D) and (0.8562D, 0.5853D). Their differences were significant. Of PCI errors, 72.34% fell within a clinically accepted range of ± 0.50D, whereas only 50% of IUB errors did. PCI's higher accuracy was significant statistically and clinically, meaning that lens implantation based on PCI's assignments could improve postoperative outcomes over those based on IUB's assignments.
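The two accuracy metrics used here are standard; a minimal sketch, with illustrative refractive errors in diopters relative to the emmetropic target of 0 D:

```python
import math

# Hedged sketch of the abstract's accuracy metrics: mean absolute error (MAE)
# and root mean squared error (RMSE) of postoperative refractive errors.
# The sample values are invented for illustration.

def mae(errors):
    return sum(abs(e) for e in errors) / len(errors)

def rmse(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

refractive_errors = [-0.75, -0.25, 0.50, 0.00]  # illustrative values, in D
```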

  1. Human errors and occupational injuries of older female workers in the residential healthcare facilities for the elderly.

    PubMed

    Kim, Jun Sik; Jeong, Byung Yong

    2018-05-03

    The study aimed to describe the characteristics of occupational injuries of female workers in residential healthcare facilities for the elderly, and to analyze human errors as causes of accidents. From national industrial accident compensation data, 506 injuries of female workers were analyzed by age and occupation. The results showed that medical service workers were the most prevalent among the injured (54.1%), followed by social welfare workers (20.4%). Of the injured, 55.7% had less than one year of work experience, and 37.9% were ≥60 years old. Slips/falls were the most common type of accident (42.7%), and the proportion injured by slips/falls increased with age. Among human errors, action errors were the primary cause, followed by perception errors and cognition errors; moreover, the proportions of injuries due to perception errors and to action errors each increased with age. The findings of this study suggest that there is a need to design workplaces that accommodate the characteristics of older female workers.

  2. MkMRCC, APUCC and APUBD approaches to 1,n-didehydropolyene diradicals: the nature of through-bond exchange interactions

    NASA Astrophysics Data System (ADS)

    Nishihara, Satomichi; Saito, Toru; Yamanaka, Shusuke; Kitagawa, Yasutaka; Kawakami, Takashi; Okumura, Mitsutaka; Yamaguchi, Kizashi

    2010-10-01

    Mukherjee-type (Mk) state specific (SS) multi-reference (MR) coupled-cluster (CC) calculations of 1,n-didehydropolyene diradicals were carried out to elucidate singlet-triplet energy gaps via through-bond coupling between terminal radicals. Spin-unrestricted Hartree-Fock (UHF) based coupled-cluster (CC) computations of these diradicals were also performed. Comparison between symmetry-adapted MkMRCC and broken-symmetry (BS) UHF-CC computational results indicated that spin-contamination error remains in UHF-CC solutions at the SD level, although this error had been thought to be negligible for the CC scheme in general. In order to eliminate the spin-contamination error, an approximate spin-projection (AP) scheme was applied to UCC, and the AP procedure indeed eliminated the error, yielding good agreement with MRCC in energy. CCD with spin-unrestricted Brueckner orbitals (UB) was also employed for these polyene diradicals, showing that the large spin-contamination errors of the UHF solutions are dramatically improved, so that the AP scheme for UBD readily removed the remaining spin contamination. Pure- and hybrid-density functional theory (DFT) calculations of the species were also performed. Three different computational schemes for total spin angular momenta were examined for the AP correction of the hybrid DFT. The AP DFT calculations yielded singlet-triplet energy gaps that were in good agreement with those of MRCC, AP UHF-CC and AP UB-CC. Chemical indices such as the diradical character were calculated with all these methods. Implications of the present computational results are discussed in relation to previous RMRCC calculations of diradical species and BS calculations of large exchange-coupled systems.

  3. Towards a systematic assessment of errors in diffusion Monte Carlo calculations of semiconductors: Case study of zinc selenide and zinc oxide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Jaehyung; Wagner, Lucas K.; Ertekin, Elif, E-mail: ertekin@illinois.edu

    2015-12-14

    The fixed node diffusion Monte Carlo (DMC) method has attracted interest in recent years as a way to calculate properties of solid materials with high accuracy. However, the framework for the calculation of properties such as total energies, atomization energies, and excited state energies is not yet fully established. Several outstanding questions remain as to the effect of pseudopotentials, the magnitude of the fixed node error, and the size of supercell finite size effects. Here, we consider in detail the semiconductors ZnSe and ZnO and carry out systematic studies to assess the magnitude of the energy differences arising from controlled and uncontrolled approximations in DMC. The former include time step errors and supercell finite size effects for ground and optically excited states, and the latter include pseudopotentials, the pseudopotential localization approximation, and the fixed node approximation. We find that for these compounds, the errors can be controlled to good precision using modern computational resources and that quantum Monte Carlo calculations using Dirac-Fock pseudopotentials can offer good estimates of both cohesive energy and the gap of these systems. We do however observe differences in calculated optical gaps that arise when different pseudopotentials are used.

  4. Information-Gathering Patterns Associated with Higher Rates of Diagnostic Error

    ERIC Educational Resources Information Center

    Delzell, John E., Jr.; Chumley, Heidi; Webb, Russell; Chakrabarti, Swapan; Relan, Anju

    2009-01-01

    Diagnostic errors are an important source of medical errors. Problematic information-gathering is a common cause of diagnostic errors among physicians and medical students. The objectives of this study were to (1) determine if medical students' information-gathering patterns formed clusters of similar strategies, and if so (2) to calculate the…

  5. [Risk and risk management in aviation].

    PubMed

    Müller, Manfred

    2004-10-01

    RISK MANAGEMENT: The large proportion of human errors in aviation accidents suggested the solution--at first sight brilliant--of replacing the fallible human being with an "infallible" digitally-operating computer. However, even after the introduction of the so-called HITEC airplanes, human error still accounts for 75% of all accidents. Thus, if the computer is ruled out as the ultimate safety system, how else can complex operations involving quick and difficult decisions be controlled? OPTIMIZED TEAM INTERACTION/PARALLEL CONNECTION OF THOUGHT MACHINES: Since a single person is always "highly error-prone", support and control have to be guaranteed by a second person. The independent work of two minds results in a safety network that cushions human errors more effectively. NON-PUNITIVE ERROR MANAGEMENT: To be able to tackle the actual problems, the open discussion of errors that have occurred must not be endangered by the threat of punishment. It has been shown in the past that progress is primarily achieved by investigating and following up mistakes, failures and catastrophes shortly after they happen. HUMAN FACTOR RESEARCH PROJECT: A comprehensive survey showed the following result: By far the most frequent safety-critical situation (37.8% of all events) consists of the following combination of risk factors: 1. A complication develops. 2. In this situation of increased stress a human error occurs. 3. The negative effects of the error cannot be corrected or eased because there are deficiencies in team interaction on the flight deck. This means, for example, that a negative social climate has the effect of a "turbocharger" when a human error occurs. It needs to be pointed out that a negative social climate is not identical with a dispute. In many cases the working climate is burdened without the responsible person even noticing it: a first negative impression, too much or too little respect, contempt, misunderstandings, not voicing unclear concerns, etc. can considerably reduce the efficiency of a team.

  6. Calculation of the Nucleon Axial Form Factor Using Staggered Lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Aaron S.; Hill, Richard J.; Kronfeld, Andreas S.

    The nucleon axial form factor is a dominant contribution to errors in neutrino oscillation studies. Lattice QCD calculations can help control theory errors by providing first-principles information on nucleon form factors. In these proceedings, we present preliminary results on a blinded calculation of g_A and the axial form factor using HISQ staggered baryons with 2+1+1 flavors of sea quarks. Calculations are done using physical light quark masses and are absolutely normalized. We discuss fitting form factor data with the model-independent z expansion parametrization.

  7. Competition between learned reward and error outcome predictions in anterior cingulate cortex.

    PubMed

    Alexander, William H; Brown, Joshua W

    2010-02-15

    The anterior cingulate cortex (ACC) is implicated in performance monitoring and cognitive control. Non-human primate studies of ACC show prominent reward signals, but these are elusive in human studies, which instead show mainly conflict and error effects. Here we demonstrate distinct appetitive and aversive activity in human ACC. The error likelihood hypothesis suggests that ACC activity increases in proportion to the likelihood of an error, and ACC is also sensitive to the consequence magnitude of the predicted error. Previous work further showed that error likelihood effects reach a ceiling as the potential consequences of an error increase, possibly due to reductions in the average reward. We explored this issue by independently manipulating reward magnitude of task responses and error likelihood while controlling for potential error consequences in an Incentive Change Signal Task. The fMRI results ruled out a modulatory effect of expected reward on error likelihood effects in favor of a competition effect between expected reward and error likelihood. Dynamic causal modeling showed that error likelihood and expected reward signals are intrinsic to the ACC rather than received from elsewhere. These findings agree with interpretations of ACC activity as signaling both perceptions of risk and predicted reward. Copyright 2009 Elsevier Inc. All rights reserved.

  8. Quantifying uncertainty in carbon and nutrient pools of coarse woody debris

    NASA Astrophysics Data System (ADS)

    See, C. R.; Campbell, J. L.; Fraver, S.; Domke, G. M.; Harmon, M. E.; Knoepp, J. D.; Woodall, C. W.

    2016-12-01

    Woody detritus constitutes a major pool of both carbon and nutrients in forested ecosystems. Estimating coarse wood stocks relies on many assumptions, even when full surveys are conducted. Researchers rarely report error in coarse wood pool estimates, despite the importance to ecosystem budgets and modelling efforts. To date, no study has attempted a comprehensive assessment of error rates and uncertainty inherent in the estimation of this pool. Here, we use Monte Carlo analysis to propagate the error associated with the major sources of uncertainty present in the calculation of coarse wood carbon and nutrient (i.e., N, P, K, Ca, Mg, Na) pools. We also evaluate individual sources of error to identify the importance of each source of uncertainty in our estimates. We quantify sampling error by comparing the three most common field methods used to survey coarse wood (two transect methods and a whole-plot survey). We quantify the measurement error associated with length and diameter measurement, and technician error in species identification and decay class using plots surveyed by multiple technicians. We use previously published values of model error for the four most common methods of volume estimation: Smalian's, conical frustum, conic paraboloid, and average-of-ends. We also use previously published values for error in the collapse ratio (cross-sectional height/width) of decayed logs that serves as a surrogate for the volume remaining. We consider sampling error in chemical concentration and density for all decay classes, using distributions from both published and unpublished studies. Analytical uncertainty is calculated using standard reference plant material from the National Institute of Standards. Our results suggest that technician error in decay classification can have a large effect on uncertainty, since many of the error distributions included in the calculation (e.g. density, chemical concentration, volume-model selection, collapse ratio) are decay-class specific.
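The propagation approach can be sketched for a single source of uncertainty: sample the measured dimensions with assumed measurement error, push each draw through Smalian's formula V = L(A_base + A_top)/2, and read the spread of the resulting volumes. The error magnitudes below are invented for illustration, not the study's values:

```python
import numpy as np

# Hedged sketch of Monte Carlo error propagation for one log's volume using
# Smalian's formula. Measurement-error standard deviations are illustrative.

rng = np.random.default_rng(42)
N = 10_000

L = rng.normal(4.0, 0.05, N)    # log length, m
d1 = rng.normal(0.30, 0.01, N)  # base diameter, m
d2 = rng.normal(0.22, 0.01, N)  # top diameter, m

area = lambda d: np.pi * d ** 2 / 4.0
volume = L * (area(d1) + area(d2)) / 2.0  # Smalian's formula

mean_v = volume.mean()
lo, hi = np.percentile(volume, [2.5, 97.5])  # 95% uncertainty interval
```

The full analysis would repeat this over every log and every error source (density, chemical concentration, decay class, model choice) and sum the draws into plot-level pools.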

  9. Uncertainties in predicting solar panel power output

    NASA Technical Reports Server (NTRS)

    Anspaugh, B.

    1974-01-01

    The problem of calculating solar panel power output at launch and during a space mission is considered. The major sources of uncertainty and error in predicting the post launch electrical performance of the panel are considered. A general discussion of error analysis is given. Examples of uncertainty calculations are included. A general method of calculating the effect on the panel of various degrading environments is presented, with references supplied for specific methods. A technique for sizing a solar panel for a required mission power profile is developed.
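A typical step in such an uncertainty calculation is combining independent fractional uncertainties in root-sum-square fashion; the contributor names and values below are illustrative, not taken from the report:

```python
import math

# Hedged sketch: independent fractional uncertainties in a power prediction
# (e.g. calibration, temperature coefficient, radiation degradation) combine
# in root-sum-square fashion. Contributors and magnitudes are invented.

def combined_fractional_uncertainty(fractions):
    return math.sqrt(sum(f * f for f in fractions))

contributors = {"calibration": 0.02, "temperature": 0.015, "degradation": 0.03}
u = combined_fractional_uncertainty(contributors.values())
```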

  10. Variation of haemoglobin extinction coefficients can cause errors in the determination of haemoglobin concentration measured by near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Kim, J. G.; Liu, H.

    2007-10-01

    Near-infrared spectroscopy or imaging has been extensively applied to various biomedical applications since it can detect the concentrations of oxyhaemoglobin (HbO2), deoxyhaemoglobin (Hb) and total haemoglobin (Hbtotal) from deep tissues. To quantify concentrations of these haemoglobin derivatives, the extinction coefficient values of HbO2 and Hb have to be employed. However, it has not been well recognized among researchers that small differences in extinction coefficients can cause significant errors in quantifying the concentrations of haemoglobin derivatives. In this study, we derived equations to estimate the errors in haemoglobin derivatives caused by variation of the haemoglobin extinction coefficients. To verify our error analysis, we performed experiments using liquid-tissue phantoms containing 1% Intralipid in a phosphate-buffered saline solution. A gas intervention with pure oxygen was applied to the solution to examine oxygenation changes in the phantom, and 3 mL of human blood was added twice to show the changes in [Hbtotal]. The error calculation shows that even a small variation (0.01 cm-1 mM-1) in extinction coefficients can produce appreciable relative errors in the quantification of Δ[HbO2], Δ[Hb] and Δ[Hbtotal]. We also observed that the error of Δ[Hbtotal] is not always larger than those of Δ[HbO2] and Δ[Hb]. This study concludes that we need to be aware of any variation in haemoglobin extinction coefficients, which could result from changes in temperature, and to utilize the corresponding animal haemoglobin extinction coefficients in animal experiments, in order to obtain more accurate values of Δ[HbO2], Δ[Hb] and Δ[Hbtotal] from in vivo tissue measurements.
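The error mechanism can be sketched numerically: concentration changes are recovered by inverting a Beer-Lambert-type relation at two wavelengths, and a small offset in the extinction coefficient matrix shifts the recovered values. The coefficients below are illustrative, not the published values:

```python
import numpy as np

# Hedged sketch: recover concentration changes of [HbO2, Hb] from optical
# density changes at two wavelengths via dOD = L * E @ dC, then perturb the
# extinction coefficient matrix E by 0.01 cm^-1 mM^-1 (the variation cited in
# the abstract) and observe the error. Coefficient values are illustrative.

L = 1.0                          # effective path length, cm
E = np.array([[1.30, 0.40],      # rows: wavelengths
              [0.80, 1.10]])     # cols: [HbO2, Hb], cm^-1 mM^-1

true_dC = np.array([0.05, -0.02])   # mM changes in [HbO2], [Hb]
dOD = L * E @ true_dC               # simulated optical-density changes

E_pert = E + 0.01                   # small uniform offset in coefficients
recovered = np.linalg.solve(E_pert * L, dOD)
rel_err = np.abs((recovered - true_dC) / true_dC)
```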

  11. Temporal uncertainty analysis of human errors based on interrelationships among multiple factors: a case of Minuteman III missile accident.

    PubMed

    Rong, Hao; Tian, Jin; Zhao, Tingdi

    2016-01-01

    In traditional approaches to human reliability assessment (HRA), the definition of the error producing conditions (EPCs) and the supporting guidance are such that some of the conditions (especially organizational or managerial conditions) can hardly be included; the analysis is therefore incomplete and does not reflect the temporal trend of human reliability. A method based on system dynamics (SD), which highlights interrelationships among technical and organizational aspects that may contribute to human errors, is presented to facilitate quantitative estimation of the human error probability (HEP) and its related variables as they change over a long period. Taking the Minuteman III missile accident in 2008 as a case, the proposed HRA method is applied to assess HEP during missile operations over 50 years by analyzing the interactions among the variables involved in human-related risks; the critical factors are also determined in terms of the impact that the variables have on risks in different time periods. It is indicated that both technical and organizational aspects should be addressed to minimize human errors in the long run. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  12. Systematic analysis of video data from different human-robot interaction studies: a categorization of social signals during error situations.

    PubMed

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human-robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human-robot interaction experiments. For that, we analyzed 201 videos of five human-robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking) when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human-robot interaction systems. Builders need to consider including modules for recognizing and classifying head movements in the robot's input channels. Evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies.

  13. Generalized ray tracing method for the calculation of the peripheral refraction induced by an ophthalmic lens

    NASA Astrophysics Data System (ADS)

    Rojo, Pilar; Royo, Santiago; Caum, Jesus; Ramírez, Jorge; Madariaga, Ines

    2015-02-01

    Peripheral refraction, the refractive error present outside the main direction of gaze, has lately attracted interest due to its alleged relationship with the progression of myopia. The ray tracing procedures involved in its calculation need to follow an approach different from those used in conventional ophthalmic lens design, where refractive errors are compensated only in the main direction of gaze. We present a methodology for the evaluation of the peripheral refractive error in ophthalmic lenses, adapting the conventional generalized ray tracing approach to the requirements of the evaluation of peripheral refraction. The nodal point of the eye and a retinal conjugate surface are used to evaluate the three-dimensional distribution of refractive error around the fovea. The proposed approach enables us to calculate the three-dimensional peripheral refraction induced by any ophthalmic lens at any direction of gaze and to personalize the lens design to the requirements of the user. The complete evaluation process for a given user prescribed with a -5.76D ophthalmic lens for foveal vision is detailed, and comparative results are presented for cases where the geometry of the lens is modified and where the central refractive error is over- or undercorrected. The methodology is also applied to an emmetropic eye to show its applicability to refractive errors other than myopia.

  14. Dosage uniformity problems which occur due to technological errors in extemporaneously prepared suppositories in hospitals and pharmacies

    PubMed Central

    Kalmár, Éva; Lasher, Jason Richard; Tarry, Thomas Dean; Myers, Andrea; Szakonyi, Gerda; Dombi, György; Baki, Gabriella; Alexander, Kenneth S.

    2013-01-01

    The availability of suppositories in Hungary, especially in clinical pharmacy practice, is usually provided by extemporaneous preparations. Due to the known advantages of rectal drug administration, its benefits are frequently utilized in pediatrics. However, errors during the extemporaneous manufacturing process can lead to non-homogenous drug distribution within the dosage units. To determine the root cause of these errors and provide corrective actions, we studied suppository samples prepared with exactly known errors using both cerimetric titration and HPLC technique. Our results show that the most frequent technological error occurs when the pharmacist fails to use the correct displacement factor in the calculations which could lead to a 4.6% increase/decrease in the assay in individual dosage units. The second most important source of error can occur when the molding excess is calculated solely for the suppository base. This can further dilute the final suppository drug concentration causing the assay to be as low as 80%. As a conclusion we emphasize that the application of predetermined displacement factors in calculations for the formulation of suppositories is highly important, which enables the pharmacist to produce a final product containing exactly the determined dose of an active substance despite the different densities of the components. PMID:25161378

  15. Dosage uniformity problems which occur due to technological errors in extemporaneously prepared suppositories in hospitals and pharmacies.

    PubMed

    Kalmár, Eva; Lasher, Jason Richard; Tarry, Thomas Dean; Myers, Andrea; Szakonyi, Gerda; Dombi, György; Baki, Gabriella; Alexander, Kenneth S

    2014-09-01

    The availability of suppositories in Hungary, especially in clinical pharmacy practice, is usually provided by extemporaneous preparations. Due to the known advantages of rectal drug administration, its benefits are frequently utilized in pediatrics. However, errors during the extemporaneous manufacturing process can lead to non-homogenous drug distribution within the dosage units. To determine the root cause of these errors and provide corrective actions, we studied suppository samples prepared with exactly known errors using both cerimetric titration and HPLC technique. Our results show that the most frequent technological error occurs when the pharmacist fails to use the correct displacement factor in the calculations which could lead to a 4.6% increase/decrease in the assay in individual dosage units. The second most important source of error can occur when the molding excess is calculated solely for the suppository base. This can further dilute the final suppository drug concentration causing the assay to be as low as 80%. As a conclusion we emphasize that the application of predetermined displacement factors in calculations for the formulation of suppositories is highly important, which enables the pharmacist to produce a final product containing exactly the determined dose of an active substance despite the different densities of the components.
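The displacement-factor correction the authors emphasize can be sketched as a one-line calculation: the drug displaces base in the calibrated mold, so the base needed per suppository is the blank mold capacity minus the drug mass divided by the displacement factor. Function name and example values are mine, not from the paper:

```python
# Hedged sketch of the displacement-factor correction. The displacement
# factor is taken here in its usual pharmaceutics sense: the mass of drug
# that displaces one unit mass of base. Example values are illustrative.

def base_per_suppository(blank_mass_g, drug_mass_g, displacement_factor):
    """Mass of base needed for one suppository of a calibrated mold."""
    return blank_mass_g - drug_mass_g / displacement_factor
```

Skipping this correction (i.e., using `blank_mass_g - drug_mass_g`) over- or under-fills each unit, which is one way the assay deviations described in the abstract can arise.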

  16. Application of structured support vector machine backpropagation to a convolutional neural network for human pose estimation.

    PubMed

    Witoonchart, Peerajak; Chongstitvatana, Prabhas

    2017-08-01

    In this study, for the first time, we show how to formulate a structured support vector machine (SSVM) as two layers in a convolutional neural network, where the top layer is a loss augmented inference layer and the bottom layer is the normal convolutional layer. We show that a deformable part model can be learned with the proposed structured SVM neural network by backpropagating the error of the deformable part model to the convolutional neural network. The forward propagation calculates the loss augmented inference and the backpropagation calculates the gradient from the loss augmented inference layer to the convolutional layer. Thus, we obtain a new type of convolutional neural network, called a structured SVM convolutional neural network, which we applied to the human pose estimation problem. This new neural network can be used as the final layers in deep learning. Our method jointly learns the structural model parameters and the appearance model parameters. We implemented our method as a new layer in the existing Caffe library. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Human error analysis of commercial aviation accidents: application of the Human Factors Analysis and Classification system (HFACS).

    PubMed

    Wiegmann, D A; Shappell, S A

    2001-11-01

    The Human Factors Analysis and Classification System (HFACS) is a general human error framework originally developed and tested within the U.S. military as a tool for investigating and analyzing the human causes of aviation accidents. Based on Reason's (1990) model of latent and active failures, HFACS addresses human error at all levels of the system, including the condition of aircrew and organizational factors. The purpose of the present study was to assess the utility of the HFACS framework as an error analysis and classification tool outside the military. The HFACS framework was used to analyze human error data associated with aircrew-related commercial aviation accidents that occurred between January 1990 and December 1996 using database records maintained by the NTSB and the FAA. Investigators were able to reliably accommodate all the human causal factors associated with the commercial aviation accidents examined in this study using the HFACS system. In addition, the classification of data using HFACS highlighted several critical safety issues in need of intervention research. These results demonstrate that the HFACS framework can be a viable tool for use within the civil aviation arena. However, additional research is needed to examine its applicability to areas outside the flight deck, such as aircraft maintenance and air traffic control domains.

  18. Combining wrist age and third molars in forensic age estimation: how to calculate the joint age estimate and its error rate in age diagnostics.

    PubMed

    Gelbrich, Bianca; Frerking, Carolin; Weiss, Sandra; Schwerdt, Sebastian; Stellzig-Eisenhauer, Angelika; Tausche, Eve; Gelbrich, Götz

    2015-01-01

    Forensic age estimation in living adolescents is based on several methods, e.g. the assessment of skeletal and dental maturation. Combining several methods is mandatory, since age estimates from a single method are too imprecise owing to biological variability. The correlation of the errors of the methods being combined must be known to calculate the precision of combined age estimates. Our aims were to examine the correlation of the errors of the hand (wrist) and third molar methods and to demonstrate how to calculate the combined age estimate. Clinical routine radiographs of the hand and dental panoramic images of 383 patients (aged 7.8-19.1 years, 56% female) were assessed. The lack of correlation (r = -0.024, 95% CI = -0.124 to +0.076, p = 0.64) allows the combined age estimate to be calculated as the weighted average of the estimates from hand bones and third molars. Combination improved the standard deviations of the errors (hand = 0.97 years, teeth = 1.35 years) to 0.79 years. Uncorrelated errors of the age estimates obtained from both methods allow straightforward determination of the common estimate and its variance. This is also possible when reference data for the hand and third molar methods are established independently of each other, using different samples.
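    Under the uncorrelated-errors assumption, the natural weighted average is the inverse-variance weighted mean. A minimal sketch, which reproduces the reported combined SD of about 0.79 years from the hand (0.97) and teeth (1.35) values; the age inputs in the test are illustrative:

```python
import math

def combine_estimates(x1, sd1, x2, sd2):
    """Inverse-variance weighted average of two uncorrelated estimates.

    Returns the combined estimate and its standard deviation.
    """
    w1, w2 = 1.0 / sd1**2, 1.0 / sd2**2
    combined = (w1 * x1 + w2 * x2) / (w1 + w2)
    combined_sd = math.sqrt(1.0 / (w1 + w2))
    return combined, combined_sd
```

    The more precise method (smaller SD) automatically gets the larger weight, so the combined estimate sits closer to the hand-based age than the molar-based one.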

  19. To Err Is Human; To Structurally Prime from Errors Is Also Human

    ERIC Educational Resources Information Center

    Slevc, L. Robert; Ferreira, Victor S.

    2013-01-01

    Natural language contains disfluencies and errors. Do listeners simply discard information that was clearly produced in error, or can erroneous material persist to affect subsequent processing? Two experiments explored this question using a structural priming paradigm. Speakers described dative-eliciting pictures after hearing prime sentences that…

  20. Human factors analysis and classification system-HFACS.

    DOT National Transportation Integrated Search

    2000-02-01

    Human error has been implicated in 70 to 80% of all civil and military aviation accidents. Yet, most accident : reporting systems are not designed around any theoretical framework of human error. As a result, most : accident databases are not conduci...

  1. Error rate of automated calculation for wound surface area using digital photography.

    PubMed

    Yang, S; Park, J; Lee, H; Lee, J B; Lee, B U; Oh, B H

    2018-02-01

    Although measuring wound size from digital photographs is a quick and simple way to evaluate a skin wound, its accuracy has not been fully validated. We investigated the error rate of our newly developed wound surface area calculation based on digital photography. Using a smartphone and a digital single-lens reflex (DSLR) camera, four photographs each of wounds of various sizes (diameter 0.5-3.5 cm) were taken of a facial skin model together with color patches, and the wound areas were calculated automatically. The relative error (RE) of the method was analyzed with respect to wound size and camera type. The RE of individual calculated areas ranged from 0.0329% (DSLR, diameter 1.0 cm) to 23.7166% (smartphone, diameter 2.0 cm). Despite correction for lens curvature, the smartphone had a significantly higher error rate than the DSLR camera (DSLR 3.9431±2.9772 vs smartphone 8.1303±4.8236). However, for wounds smaller than 3 cm in diameter, the REs of the averaged values of the four photographs were below 5%, and in those cases there was no difference between the average wound areas measured by smartphone and DSLR camera. For follow-up of small skin defects (diameter <3 cm), our newly developed automated wound area calculation can therefore be applied across multiple photographs, and their average value is a useful index of wound healing with an acceptable error rate. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
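    The relative-error metric, and the averaging over several photographs that brought it under 5%, can be sketched as follows (the area values in the test are illustrative, not the study's data):

```python
import statistics

def relative_error_pct(measured, reference):
    """RE (%) of one automated area measurement against the reference area."""
    return abs(measured - reference) / reference * 100.0

def averaged_relative_error_pct(measurements, reference):
    """RE of the mean of several photographs, as used for follow-up."""
    return relative_error_pct(statistics.mean(measurements), reference)
```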

  2. SIMPLIFIED CALCULATION OF SOLAR FLUX ON THE SIDE WALL OF CYLINDRICAL CAVITY SOLAR RECEIVERS

    NASA Technical Reports Server (NTRS)

    Bhandari, P.

    1994-01-01

    The Simplified Calculation of Solar Flux Distribution on the Side Wall of Cylindrical Cavity Solar Receivers program employs a simple solar flux calculation algorithm for a cylindrical cavity type solar receiver. Applications of this program include the study of solar energy, heat transfer, and space power-solar dynamics engineering. The aperture plate of the receiver is assumed to be located in the focal plane of a paraboloidal concentrator, and the geometry is assumed to be axisymmetric. The concentrator slope error is assumed to be the only surface error; it is assumed that there are no pointing or misalignment errors. Using cone optics, the contour error method is utilized to handle the slope error of the concentrator. The flux distribution on the side wall is calculated by integration of the energy incident from cones emanating from all the differential elements on the concentrator. The calculations are done for any set of dimensions and properties of the receiver and the concentrator, and account for any spillover on the aperture plate. The results of this algorithm compared excellently with those predicted by more complicated programs. Because of the utilization of axial symmetry and overall simplification, it is extremely fast. It can be easily extended to other axi-symmetric receiver geometries. The program was written in Fortran 77, compiled using a Ryan McFarland compiler, and run on an IBM PC-AT with a math coprocessor. It requires 60K of memory and has been implemented under MS-DOS 3.2.1. The program was developed in 1988.

  3. Modifying Spearman's Attenuation Equation to Yield Partial Corrections for Measurement Error--With Application to Sample Size Calculations

    ERIC Educational Resources Information Center

    Nicewander, W. Alan

    2018-01-01

    Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either-or-both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient which translates into "increasing the reliability of…
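    Spearman's full correction, together with the partial variant the paper motivates (removing measurement error from only one of the two variables), can be sketched as follows; the reliability values in the test are illustrative:

```python
import math

def disattenuate(r_xy, r_xx=None, r_yy=None):
    """Spearman correction for attenuation.

    Pass reliabilities only for the variables whose measurement error
    should be removed; omitting one gives a partial correction.
    """
    denom = math.sqrt((r_xx if r_xx is not None else 1.0) *
                      (r_yy if r_yy is not None else 1.0))
    return r_xy / denom
```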

  4. Comparative Efficacy of the New Optical Biometer on Intraocular Lens Power Calculation (AL-Scan versus IOLMaster).

    PubMed

    Ha, Ahnul; Wee, Won Ryang; Kim, Mee Kum

    2018-05-15

    To evaluate the agreement in axial length (AL), keratometry, and anterior chamber depth measurements between the AL-Scan and IOLMaster biometers, and to compare the efficacy of the AL-Scan for intraocular lens (IOL) power calculations and refractive outcomes with that of the IOLMaster. Medical records of 48 eyes from 48 patients who underwent uneventful phacoemulsification and IOL insertion were retrospectively reviewed. One of two types of monofocal aspheric IOL was implanted (Tecnis ZCB00 [Tecnis, n = 34] or CT Asphina 509M [Asphina, n = 14]). Two different partial coherence interferometers measured and compared AL, keratometry (2.4 mm), anterior chamber depth, and IOL power calculations with the SRK/T, Hoffer Q, Holladay 2, and Haigis formulas. The difference between expected and actual final refractive error was compared as refractive mean error (ME), refractive mean absolute error (MAE), and median absolute error (MedAE). AL measured by the AL-Scan was shorter than that measured by the IOLMaster (p = 0.029). The IOL power for Tecnis did not differ between the four formulas; however, the Asphina power calculated using Hoffer Q for the AL-Scan was lower (by 0.28 diopters, p = 0.015) than that calculated by the IOLMaster. There were no statistically significant differences in MAE or MedAE for the four formulas in either IOL. In SRK/T, the ME of Tecnis-inserted eyes measured by AL-Scan showed a tendency toward myopia (p = 0.032). Measurement by AL-Scan provides reliable biometry data and power calculations compared with the IOLMaster; however, refractive outcomes of Tecnis-inserted eyes measured by AL-Scan and calculated using SRK/T can show a slight myopic tendency. © 2018 The Korean Ophthalmological Society.
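    The ME/MAE/MedAE statistics compared in the study are straightforward to compute from paired predicted and achieved refractions; a sketch with illustrative diopter values (the sign of ME distinguishes a myopic from a hyperopic bias):

```python
import statistics

def refractive_error_stats(predicted, actual):
    """ME, MAE, and MedAE of (actual - predicted) refraction, in diopters."""
    errors = [a - p for p, a in zip(predicted, actual)]
    abs_errors = [abs(e) for e in errors]
    return (statistics.mean(errors),        # ME: signed bias
            statistics.mean(abs_errors),    # MAE
            statistics.median(abs_errors))  # MedAE
```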

  5. Technical approaches for measurement of human errors

    NASA Technical Reports Server (NTRS)

    Clement, W. F.; Heffley, R. K.; Jewell, W. F.; Mcruer, D. T.

    1980-01-01

    Human error is a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents. The technical details of a variety of proven approaches for the measurement of human errors in the context of the national airspace system are presented. Unobtrusive measurements suitable for cockpit operations and procedures in part- or full-mission simulation are emphasized. Procedure, system performance, and human operator centered measurements are discussed as they apply to the manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations.

  6. An evaluation of the underlying mechanisms of bloodstain pattern analysis error.

    PubMed

    Behrooz, Nima; Hulse-Smith, Lee; Chandra, Sanjeev

    2011-09-01

    An experiment was designed to explore the underlying mechanisms of blood disintegration and its subsequent effect on area of origin (AO) calculations. Blood spatter patterns were created through the controlled application of pressurized air (20-80 kPa) for 0.1 msec onto suspended blood droplets (2.7-3.2 mm diameter). The resulting disintegration process was captured using high-speed photography. Straight-line triangulation resulted in a 50% height overestimation, whereas using the lowest calculated height for each spatter pattern reduced this error to 8%. Incorporation of projectile motion resulted in a 28% height underestimation. The AO xy-coordinate was found to be very accurate with a maximum offset of only 4 mm, while AO size calculations were found to be two- to fivefold greater than expected. Subsequently, reverse triangulation analysis revealed the rotational offset for 26% of stains could not be attributed to measurement error, suggesting that some portion of error is inherent in the disintegration process. © 2011 American Academy of Forensic Sciences.
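    Straight-line triangulation of the kind evaluated here combines the stain's impact angle with the distance to the convergence point; a minimal sketch with illustrative stain dimensions (this ignores projectile motion, which is why it overestimates height):

```python
import math

def impact_angle_deg(width, length):
    """Impact angle of an elliptical stain: alpha = asin(width / length)."""
    return math.degrees(math.asin(width / length))

def straight_line_height(distance_to_convergence, width, length):
    """Height above the convergence point by straight-line triangulation."""
    alpha = math.asin(width / length)
    return distance_to_convergence * math.tan(alpha)
```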

  7. Analysis of behavior of focusing error signals generated by astigmatic method when a focused spot moves beyond the radius of a land-groove-type optical disk

    NASA Astrophysics Data System (ADS)

    Shinoda, Masahisa; Nakatani, Hidehiko; Nakai, Kenya; Ohmaki, Masayuki

    2015-09-01

    We theoretically calculate behaviors of focusing error signals generated by an astigmatic method in a land-groove-type optical disk. The focusing error signal from the land does not coincide with that from the groove. This behavior is enhanced when a focused spot of an optical pickup moves beyond the radius of the optical disk. A gain difference between the slope sensitivities of focusing error signals from the land and the groove is an important factor with respect to stable focusing servo control. In our calculation, the format of digital versatile disc-random access memory (DVD-RAM) is adopted as the land-groove-type optical disk model, and the dependences of the gain difference on various factors are investigated. The gain difference strongly depends on the optical intensity distribution of the laser beam in the optical pickup. The calculation method and results in this paper will be reflected in newly developed land-groove-type optical disks.

  8. Garbage in, Garbage Out: Data Collection, Quality Assessment and Reporting Standards for Social Media Data Use in Health Research, Infodemiology and Digital Disease Detection.

    PubMed

    Kim, Yoonsang; Huang, Jidong; Emery, Sherry

    2016-02-26

    Social media have transformed the communications landscape. People increasingly obtain news and health information online and via social media. Social media platforms also serve as novel sources of rich observational data for health research (including infodemiology, infoveillance, and digital disease detection). While the number of studies using social data is growing rapidly, very few of these studies transparently outline their methods for collecting, filtering, and reporting those data. Keywords and search filters applied to social data form the lens through which researchers may observe what and how people communicate about a given topic. Without a properly focused lens, research conclusions may be biased or misleading. Standards of reporting data sources and quality are needed so that data scientists and consumers of social media research can evaluate and compare methods and findings across studies. We aimed to develop and apply a framework of social media data collection and quality assessment and to propose a reporting standard, which researchers and reviewers may use to evaluate and compare the quality of social data across studies. We propose a conceptual framework consisting of three major steps in collecting social media data: develop, apply, and validate search filters. This framework is based on two criteria: retrieval precision (how much of the retrieved data is relevant) and retrieval recall (how much of the relevant data is retrieved). We then discuss two conditions that estimation of retrieval precision and recall relies on--accurate human coding and full data collection--and how to calculate these statistics in cases that deviate from these two ideal conditions. We then apply the framework to a real-world example using approximately 4 million tobacco-related tweets collected from the Twitter firehose. 
We developed and applied a search filter to retrieve e-cigarette-related tweets from the archive based on three keyword categories: devices, brands, and behavior. The search filter retrieved 82,205 e-cigarette-related tweets from the archive and was validated. Retrieval precision was calculated above 95% in all cases. Retrieval recall was 86% assuming ideal conditions (no human coding errors and full data collection), 75% when unretrieved messages could not be archived, 86% assuming no false negative errors by coders, and 93% allowing both false negative and false positive errors by human coders. This paper sets forth a conceptual framework for the filtering and quality evaluation of social data that addresses several common challenges and moves toward establishing a standard of reporting social data. Researchers should clearly delineate data sources, how data were accessed and collected, and the search filter building process and how retrieval precision and recall were calculated. The proposed framework can be adapted to other public social media platforms.
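    The two criteria at the core of the framework reduce to simple ratios over a hand-coded validation sample; a sketch with illustrative counts (not the study's tweet data):

```python
def precision_recall(true_pos, false_pos, false_neg):
    """Retrieval precision and recall from validated message counts.

    true_pos:  retrieved messages coded as relevant
    false_pos: retrieved messages coded as irrelevant
    false_neg: relevant messages the filter missed
    """
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return precision, recall
```

    The paper's contribution is how to estimate these when coding is imperfect or some messages were never archived; the ideal-condition formulas above are the baseline those adjustments start from.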

  9. Emitter location errors in electronic recognition system

    NASA Astrophysics Data System (ADS)

    Matuszewski, Jan; Dikta, Anna

    2017-04-01

    The paper describes some of the problems associated with emitter location calculations, the most important part of the series of tasks in electronic recognition systems. The basic tasks include: detection of electromagnetic signal emissions, tracking (determining the direction of emitter sources), signal analysis for classifying different emitter types, and identification of emission sources of the same type. The paper presents a brief description of the main methods of emitter localization and the basic mathematical formulae for calculating their location. Error estimation was performed for emitter location with three different methods and different scenarios of emitter and direction-finding (DF) sensor deployment in the electromagnetic environment, using a special computer program. On the basis of extensive numerical calculations, the precision of emitter location in recognition systems was evaluated for different configurations of bearing devices and the emitter. The calculations, based on simulated data for the different location methods, are presented in the figures and respective tables. The obtained results demonstrate that the precision of the calculated emitter location depends on: the number of DF sensors, the distances between the emitter and the DF sensors, their mutual location in the reconnaissance area, and the bearing errors. The precision also varies with the number of obtained bearings: the higher the number of bearings, the better the accuracy of the calculated emitter location, in spite of relatively high bearing errors for each DF sensor.
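    A minimal sketch of the simplest case, intersecting two lines of bearing from known DF sensor positions. For simplicity the angles here are measured from the +x axis rather than the north-referenced bearings real systems use, and the geometry in the test is illustrative:

```python
import math

def locate_emitter(p1, bearing1_deg, p2, bearing2_deg):
    """Intersect two lines of bearing to obtain an emitter fix."""
    d1 = (math.cos(math.radians(bearing1_deg)), math.sin(math.radians(bearing1_deg)))
    d2 = (math.cos(math.radians(bearing2_deg)), math.sin(math.radians(bearing2_deg)))
    # Solve p1 + t*d1 = p2 + s*d2 for t by Cramer's rule.
    det = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; no unique fix")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * (-d2[1]) - ry * (-d2[0])) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```

    With more than two bearings the over-determined system is typically solved by least squares, which is why the paper finds accuracy improving with the number of bearings.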

  10. Analysis on optical heterodyne frequency error of full-field heterodyne interferometer

    NASA Astrophysics Data System (ADS)

    Li, Yang; Zhang, Wenxi; Wu, Zhou; Lv, Xiaoyu; Kong, Xinxin; Guo, Xiaoli

    2017-06-01

    Full-field heterodyne interferometric measurement is becoming more practical because low-frequency heterodyne acousto-optic modulators can replace complex electro-mechanical scanning devices. Because standard matrix detectors such as CCD and CMOS cameras can be used in a heterodyne interferometer, the optical element surface can be acquired directly by synchronously detecting the signal phase at each pixel. Instead of the traditional four-step phase-shifting calculation, Fourier spectral analysis is used for phase extraction, which is less sensitive to sources of uncertainty and yields higher measurement accuracy. In this paper, two types of full-field heterodyne interferometer are described and their advantages and disadvantages specified. A heterodyne interferometer must combine two beams of different frequency to produce interference, which introduces a variety of optical heterodyne frequency errors; frequency mixing error and beat frequency error are two unavoidable kinds. We derive the effect of frequency mixing error on surface measurement and calculate the relationship between phase extraction accuracy and these errors. The tolerances for the extinction ratio of the polarization splitting prism and the signal-to-noise ratio of stray light are given. The phase extraction error that beat frequency shifting causes in the Fourier analysis is derived and calculated, and we propose an improved phase extraction method based on spectrum correction: an amplitude-ratio spectrum correction algorithm using a Hanning window corrects the phase extracted from the heterodyne signal. Simulation results show that this method effectively suppresses the degradation of phase extraction caused by beat frequency error and reduces the measurement uncertainty of the full-field heterodyne interferometer.
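    A minimal sketch of the core idea: extracting the phase of a per-pixel heterodyne beat from a single DFT bin. The synthetic signal here has an integer number of cycles, so a rectangular window suffices with no leakage; the Hanning-window spectrum correction the paper proposes addresses exactly the leakage that a shifted beat frequency would introduce.

```python
import cmath
import math

def dft_bin(signal, k):
    """Single-bin DFT of a real sampled signal."""
    n = len(signal)
    return sum(x * cmath.exp(-2j * math.pi * k * m / n)
               for m, x in enumerate(signal))

# Synthetic heterodyne beat at bin k with a known phase to recover.
N, k, phi = 64, 8, 0.7
signal = [math.cos(2 * math.pi * k * m / N + phi) for m in range(N)]
recovered = cmath.phase(dft_bin(signal, k))  # should equal phi
```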

  11. The dynamics of error processing in the human brain as reflected by high-gamma activity in noninvasive and intracranial EEG.

    PubMed

    Völker, Martin; Fiederer, Lukas D J; Berberich, Sofie; Hammer, Jiří; Behncke, Joos; Kršek, Pavel; Tomášek, Martin; Marusič, Petr; Reinacher, Peter C; Coenen, Volker A; Helias, Moritz; Schulze-Bonhage, Andreas; Burgard, Wolfram; Ball, Tonio

    2018-06-01

    Error detection in motor behavior is a fundamental cognitive function heavily relying on local cortical information processing. Neural activity in the high-gamma frequency band (HGB) closely reflects such local cortical processing, but little is known about its role in error processing, particularly in the healthy human brain. Here we characterize the error-related response of the human brain based on data obtained with noninvasive EEG optimized for HGB mapping in 31 healthy subjects (15 females, 16 males), and additional intracranial EEG data from 9 epilepsy patients (4 females, 5 males). Our findings reveal a multiscale picture of the global and local dynamics of error-related HGB activity in the human brain. On the global level as reflected in the noninvasive EEG, the error-related response started with an early component dominated by anterior brain regions, followed by a shift to parietal regions, and a subsequent phase characterized by sustained parietal HGB activity. This phase lasted for more than 1 s after the error onset. On the local level reflected in the intracranial EEG, a cascade of both transient and sustained error-related responses involved an even more extended network, spanning beyond frontal and parietal regions to the insula and the hippocampus. HGB mapping appeared especially well suited to investigate late, sustained components of the error response, possibly linked to downstream functional stages such as error-related learning and behavioral adaptation. Our findings establish the basic spatio-temporal properties of HGB activity as a neural correlate of error processing, complementing traditional error-related potential studies. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  12. Minimizing treatment planning errors in proton therapy using failure mode and effects analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Yuanshui, E-mail: yuanshui.zheng@okc.procure.com; Johnson, Randall; Larson, Gary

    Purpose: Failure mode and effects analysis (FMEA) is a widely used tool to evaluate safety or reliability in conventional photon radiation therapy. However, reports about FMEA application in proton therapy are scarce. The purpose of this study is to apply FMEA in safety improvement of proton treatment planning at their center. Methods: The authors performed an FMEA analysis of their proton therapy treatment planning process using uniform scanning proton beams. The authors identified possible failure modes in various planning processes, including image fusion, contouring, beam arrangement, dose calculation, plan export, documents, billing, and so on. For each error, the authors estimated the frequency of occurrence, the likelihood of being undetected, and the severity of the error if it went undetected and calculated the risk priority number (RPN). The FMEA results were used to design their quality management program. In addition, the authors created a database to track the identified dosimetric errors. Periodically, the authors reevaluated the risk of errors by reviewing the internal error database and improved their quality assurance program as needed. Results: In total, the authors identified over 36 possible treatment planning related failure modes and estimated the associated occurrence, detectability, and severity to calculate the overall risk priority number. Based on the FMEA, the authors implemented various safety improvement procedures into their practice, such as education, peer review, and automatic check tools. The ongoing error tracking database provided realistic data on the frequency of occurrence with which to reevaluate the RPNs for various failure modes. Conclusions: The FMEA technique provides a systematic method for identifying and evaluating potential errors in proton treatment planning before they result in an error in patient dose delivery. The application of the FMEA framework and the implementation of an ongoing error tracking system at their clinic have proven to be useful in error reduction in proton treatment planning, thus improving the effectiveness and safety of proton therapy.
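    The RPN arithmetic underlying the ranking is simply the product of the three scores; a sketch assuming the conventional 1-10 scales (the abstract does not state the scale the authors used):

```python
def risk_priority_number(occurrence, detectability, severity):
    """RPN = O x D x S, each conventionally scored on a 1-10 scale."""
    for score in (occurrence, detectability, severity):
        if not 1 <= score <= 10:
            raise ValueError("FMEA scores are conventionally 1-10")
    return occurrence * detectability * severity
```

    Failure modes are then addressed in descending RPN order, and the scores are re-estimated as the error-tracking database accumulates real occurrence data.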

  13. Minimizing treatment planning errors in proton therapy using failure mode and effects analysis.

    PubMed

    Zheng, Yuanshui; Johnson, Randall; Larson, Gary

    2016-06-01

    Failure mode and effects analysis (FMEA) is a widely used tool to evaluate safety or reliability in conventional photon radiation therapy. However, reports about FMEA application in proton therapy are scarce. The purpose of this study is to apply FMEA in safety improvement of proton treatment planning at their center. The authors performed an FMEA analysis of their proton therapy treatment planning process using uniform scanning proton beams. The authors identified possible failure modes in various planning processes, including image fusion, contouring, beam arrangement, dose calculation, plan export, documents, billing, and so on. For each error, the authors estimated the frequency of occurrence, the likelihood of being undetected, and the severity of the error if it went undetected and calculated the risk priority number (RPN). The FMEA results were used to design their quality management program. In addition, the authors created a database to track the identified dosimetric errors. Periodically, the authors reevaluated the risk of errors by reviewing the internal error database and improved their quality assurance program as needed. In total, the authors identified over 36 possible treatment planning related failure modes and estimated the associated occurrence, detectability, and severity to calculate the overall risk priority number. Based on the FMEA, the authors implemented various safety improvement procedures into their practice, such as education, peer review, and automatic check tools. The ongoing error tracking database provided realistic data on the frequency of occurrence with which to reevaluate the RPNs for various failure modes. The FMEA technique provides a systematic method for identifying and evaluating potential errors in proton treatment planning before they result in an error in patient dose delivery. 
The application of FMEA framework and the implementation of an ongoing error tracking system at their clinic have proven to be useful in error reduction in proton treatment planning, thus improving the effectiveness and safety of proton therapy.

  14. Development of an errorable car-following driver model

    NASA Astrophysics Data System (ADS)

    Yang, H.-H.; Peng, H.

    2010-06-01

    An errorable car-following driver model is presented in this paper. An errorable driver model is one that emulates a human driver's functions and can generate both nominal (error-free) and devious (with-error) behaviours. This model was developed for the evaluation and design of active safety systems. The car-following data used for developing and validating the model were obtained from a large-scale naturalistic driving database. The stochastic car-following behaviour was first analysed and modelled as a random process. Three error-inducing behaviours were then introduced. First, human perceptual limitation was studied and implemented. Distraction due to non-driving tasks was then identified based on statistical analysis of the driving data. Finally, the time delay of human drivers was estimated through a recursive least-squares identification process. By including these three error-inducing behaviours, rear-end collisions with the lead vehicle could occur. The simulated crash rate was found to be similar to, but somewhat higher than, that reported in traffic statistics.

  15. Testing a new application for TOPSIS: monitoring drought and wet periods in Iran

    NASA Astrophysics Data System (ADS)

    Roshan, Gholamreza; Ghanghermeh, AbdolAzim; Grab, Stefan W.

    2018-01-01

    Globally, droughts are a recurring major natural disaster owing to below-normal precipitation, occasionally associated with high temperatures, which together negatively impact human health and social, economic, and cultural activities. Drought early warning and monitoring is thus essential for reducing such potential impacts on society. To this end, several experimental methods have previously been proposed for calculating drought, yet these are based almost entirely on precipitation alone. Here, for the first time, and in contrast to previous studies, we use seven climate parameters to establish drought/wet periods: Tmin, Tmax, sunshine hours, relative humidity, average rainfall, number of rain days greater than 1 mm, and the ratio of total precipitation to number of days with precipitation, using the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) algorithm. To test the TOPSIS method for different climate zones, six sample stations representing a variety of climate conditions were used; the weights assigned to the climate parameters were varied and applied to the model, together with multivariate regression analysis. For the six stations tested, model results indicate the lowest errors for Zabol station and the maximum errors for Kermanshah. The validation techniques strongly support our proposed new method for calculating and rating drought/wet events using TOPSIS.
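    A minimal TOPSIS sketch using the standard steps (vector normalization, weighting, ideal and anti-ideal solutions, Euclidean distances, closeness coefficient); the decision matrix in the test is illustrative, not the paper's climate data:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.

    matrix:  rows = alternatives, columns = criteria
    weights: one weight per criterion
    benefit: True where larger values are better for that criterion
    """
    n_crit = len(matrix[0])
    # Vector-normalize each column, then apply the weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

    In the paper's application each time period is an alternative, the seven climate parameters are the criteria, and the closeness score is read as a drought/wet rating.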

  16. Alchemy: A Web 2.0 Real-time Quality Assurance Platform for Human Immunodeficiency Virus, Hepatitis C Virus, and BK Virus Quantitation Assays.

    PubMed

    Agosto-Arroyo, Emmanuel; Coshatt, Gina M; Winokur, Thomas S; Harada, Shuko; Park, Seung L

    2017-01-01

    The molecular diagnostics laboratory faces the challenge of improving test turnaround time (TAT). Low and consistent TATs are of great clinical and regulatory importance, especially for molecular virology tests. Laboratory information systems (LISs) contain all the data elements necessary for accurate quality assurance (QA) reporting of TAT and other measures, but these reports are in most cases still prepared manually: a time-consuming and error-prone task. The aim of this study was to develop a web-based real-time QA platform that would automate QA reporting in the molecular diagnostics laboratory at our institution and minimize the time expended in preparing these reports. Using a standard Linux, Nginx, MariaDB, PHP stack virtual machine running atop a Dell Precision 5810, we designed and built a web-based QA platform, code-named Alchemy. Data files pulled periodically from the LIS in comma-separated value format were used to autogenerate QA reports for the human immunodeficiency virus (HIV), hepatitis C virus (HCV), and BK virus (BKV) quantitation assays. Alchemy allowed the user to select a specific timeframe to be analyzed and calculated key QA statistics in real time, including the average TAT in days, tests falling outside the expected TAT ranges, and test result ranges. Before implementing Alchemy, reporting QA for the HIV, HCV, and BKV quantitation assays took 45-60 min of personnel time per test every month. With Alchemy, that time has decreased to 15 min total per month. Alchemy allowed the user to select specific periods of time and analyze the TAT data in depth without the need for extensive manual calculations. Alchemy has significantly decreased the time and the human error associated with QA report generation in our molecular diagnostics laboratory. Other tests will be added to this web-based platform in future updates. 
This effort shows the utility of informatician-supervised resident/fellow programming projects as learning opportunities and workflow improvements in the molecular laboratory.
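    The core TAT statistics Alchemy reports can be sketched in a few lines of Python. The CSV column names and the expected-TAT cutoff below are illustrative assumptions, since the abstract does not publish the LIS export schema:

```python
import csv
import io
from datetime import datetime

# Hypothetical CSV export from the LIS; the real Alchemy field names are
# not published, so "received"/"reported" columns are assumptions.
csv_data = """accession,received,reported
A1,2017-01-02,2017-01-04
A2,2017-01-03,2017-01-08
A3,2017-01-05,2017-01-06
"""

MAX_TAT_DAYS = 4  # assumed expected-TAT limit for the assay

tats = []
for row in csv.DictReader(io.StringIO(csv_data)):
    received = datetime.strptime(row["received"], "%Y-%m-%d")
    reported = datetime.strptime(row["reported"], "%Y-%m-%d")
    tats.append((reported - received).days)

average_tat = sum(tats) / len(tats)               # mean TAT in days
outliers = [t for t in tats if t > MAX_TAT_DAYS]  # tests outside expected TAT
```

    The platform additionally filters rows to the user-selected timeframe before computing these statistics.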

  17. MISSE 2 PEACE Polymers Experiment Atomic Oxygen Erosion Yield Error Analysis

    NASA Technical Reports Server (NTRS)

    McCarthy, Catherine E.; Banks, Bruce A.; de Groh, Kim K.

    2010-01-01

    Atomic oxygen erosion of polymers in low Earth orbit (LEO) poses a serious threat to spacecraft performance and durability. To address this, 40 different polymer samples and a sample of pyrolytic graphite, collectively called the PEACE (Polymer Erosion and Contamination Experiment) Polymers, were exposed to the LEO space environment on the exterior of the International Space Station (ISS) for nearly 4 years as part of the Materials International Space Station Experiment 1 & 2 (MISSE 1 & 2). The purpose of the PEACE Polymers experiment was to obtain accurate mass loss measurements in space to combine with ground measurements in order to accurately calculate the atomic oxygen erosion yields of a wide variety of polymeric materials exposed to the LEO space environment for a long period of time. Error calculations were performed in order to determine the accuracy of the mass measurements and therefore of the erosion yield values. The standard deviation, or error, of each factor was incorporated into the fractional uncertainty of the erosion yield for each of three different situations, depending on the post-flight weighing procedure. The resulting error calculations showed the erosion yield values to be very accurate, with an average error of 3.30 percent.
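    The fractional-uncertainty arithmetic described above can be illustrated as follows: the erosion yield is a product/quotient of mass loss, exposed area, density, and atomic oxygen fluence, so the component fractional errors combine in quadrature. The individual uncertainty values below are invented for illustration, not the PEACE measurements:

```python
import math

# Illustrative fractional uncertainties (not the actual PEACE values) for
# the quantities entering the erosion yield Ey = dM / (A * rho * F).
fractional_uncertainties = {
    "mass_loss": 0.02,
    "area": 0.01,
    "density": 0.015,
    "fluence": 0.02,
}

# For a pure product/quotient, fractional errors combine in quadrature
# (root sum of squares).
ey_fractional_error = math.sqrt(
    sum(f ** 2 for f in fractional_uncertainties.values())
)
ey_percent_error = 100 * ey_fractional_error
```
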

  18. Impact of electronic chemotherapy order forms on prescribing errors at an urban medical center: results from an interrupted time-series analysis.

    PubMed

    Elsaid, K; Truong, T; Monckeberg, M; McCarthy, H; Butera, J; Collins, C

    2013-12-01

    To evaluate the impact of electronic standardized chemotherapy templates on incidence and types of prescribing errors. A quasi-experimental interrupted time series with segmented regression. A 700-bed multidisciplinary tertiary care hospital with an ambulatory cancer center. A multidisciplinary team including oncology physicians, nurses, pharmacists and information technologists. Standardized, regimen-specific, chemotherapy prescribing forms were developed and implemented over a 32-month period. Trend of monthly prevented prescribing errors per 1000 chemotherapy doses during the pre-implementation phase (30 months), immediate change in the error rate from pre-implementation to implementation and trend of errors during the implementation phase. Errors were analyzed according to their types: errors in communication or transcription, errors in dosing calculation and errors in regimen frequency or treatment duration. Relative risk (RR) of errors in the post-implementation phase (28 months) compared with the pre-implementation phase was computed with 95% confidence interval (CI). Baseline monthly error rate was stable with 16.7 prevented errors per 1000 chemotherapy doses. A 30% reduction in prescribing errors was observed with initiating the intervention. With implementation, a negative change in the slope of prescribing errors was observed (coefficient = -0.338; 95% CI: -0.612 to -0.064). The estimated RR of transcription errors was 0.74; 95% CI (0.59-0.92). The estimated RR of dosing calculation errors was 0.06; 95% CI (0.03-0.10). The estimated RR of chemotherapy frequency/duration errors was 0.51; 95% CI (0.42-0.62). Implementing standardized chemotherapy-prescribing templates significantly reduced all types of prescribing errors and improved chemotherapy safety.
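    The relative risks reported above can be reproduced mechanically from event counts with the standard Katz log-scale confidence interval; the counts below are invented for illustration, not the study's data:

```python
import math

def relative_risk_ci(events_post, n_post, events_pre, n_pre, z=1.96):
    """Relative risk of error post- vs pre-implementation, with a 95% CI
    computed on the log scale (standard Katz method)."""
    rr = (events_post / n_post) / (events_pre / n_pre)
    se_log_rr = math.sqrt(
        1 / events_post - 1 / n_post + 1 / events_pre - 1 / n_pre
    )
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lo, hi

# Illustrative counts (prevented errors per chemotherapy doses).
rr, lo, hi = relative_risk_ci(60, 10000, 120, 10000)
```
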

  19. Lost in Translation: the Case for Integrated Testing

    NASA Technical Reports Server (NTRS)

    Young, Aaron

    2017-01-01

    The building of a spacecraft is complex and often involves multiple suppliers and companies that have their own designs and processes. Standards have been developed across the industries to reduce the chances for critical flight errors at the system level, but the spacecraft is still vulnerable to the introduction of critical errors during integration of these systems. Critical errors can occur at any time during the process, and in many cases human reliability analysis (HRA) identifies human error as a risk driver. Most programs have a test plan in place that is intended to catch these errors, but it is not uncommon for schedule and cost stress to result in less testing than initially planned. Therefore, integrated testing, or "testing as you fly," is essential as a final check on the design and assembly to catch any errors prior to the mission. This presentation will outline the unique benefits of integrated testing in catching critical flight errors that can otherwise go undetected, discuss HRA methods used to identify opportunities for human error, and review lessons learned and challenges over ownership of testing.

  20. Image defects from surface and alignment errors in grazing incidence telescopes

    NASA Technical Reports Server (NTRS)

    Saha, Timo T.

    1989-01-01

    The rigid body motions and low frequency surface errors of grazing incidence Wolter telescopes are studied. The analysis is based on surface error descriptors proposed by Paul Glenn. In his analysis, the alignment and surface errors are expressed in terms of Legendre-Fourier polynomials. Individual terms in the expression correspond to rigid body motions (decenter and tilt) and low spatial frequency surface errors of mirrors. With the help of the Legendre-Fourier polynomials and the geometry of grazing incidence telescopes, exact and approximated first order equations are derived in this paper for the components of the ray intercepts at the image plane. These equations are then used to calculate the sensitivities of Wolter type I and II telescopes for the rigid body motions and surface deformations. The rms spot diameters calculated from this theory and OSAC ray tracing code agree very well. This theory also provides a tool to predict how rigid body motions and surface errors of the mirrors compensate each other.

  1. VizieR Online Data Catalog: V and R CCD photometry of visual binaries (Abad+, 2004)

    NASA Astrophysics Data System (ADS)

    Abad, C.; Docobo, J. A.; Lanchares, V.; Lahulla, J. F.; Abelleira, P.; Blanco, J.; Alvarez, C.

    2003-11-01

    Table 1 gives relevant data for the visual binaries observed. Observations were carried out over a short period of time; therefore we assign the mean epoch (1998.58) to the totality of the data. Data for individual stars are presented as average values with errors, by parameter, when several observations have been made, as well as the number of observations involved. Errors corresponding to astrometric relative positions between components are always present. For single observations, parameter fitting errors, especially for the dx and dy parameters, have been calculated by analysing the chi-square statistic around the minimum. Following the rules of error propagation, theta and rho errors can be estimated. Table 1 therefore shows single-observation errors with an additional significant digit. When a star does not have known references, we include it in Table 2, where the J2000 position and magnitudes are from the USNO-A2.0 catalogue (Monet et al., 1998, Cat. ). (2 data files).
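    The theta and rho error propagation mentioned above follows the usual first-order rules for rho = sqrt(dx^2 + dy^2) and theta = atan2(dx, dy); the coordinates and fitting errors below are hypothetical:

```python
import math

# Hypothetical relative position and fitting errors (arcsec).
dx, dy = 1.20, 0.50          # relative coordinates between components
s_dx, s_dy = 0.01, 0.01      # chi-square fitting errors on dx, dy

rho = math.hypot(dx, dy)     # separation
theta = math.atan2(dx, dy)   # position angle (radians)

# First-order (Gaussian) error propagation:
#   d(rho)/d(dx) = dx/rho, d(rho)/d(dy) = dy/rho
#   d(theta)/d(dx) = dy/rho**2, d(theta)/d(dy) = -dx/rho**2
s_rho = math.sqrt((dx * s_dx) ** 2 + (dy * s_dy) ** 2) / rho
s_theta = math.sqrt((dy * s_dx) ** 2 + (dx * s_dy) ** 2) / rho ** 2
```
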

  2. Math Error Types and Correlates in Adolescents with and without Attention Deficit Hyperactivity Disorder

    PubMed Central

    Capodieci, Agnese; Martinussen, Rhonda

    2017-01-01

    Objective: The aim of this study was to examine the types of errors made by youth with and without a parent-reported diagnosis of attention deficit and hyperactivity disorder (ADHD) on a math fluency task and investigate the association between error types and youths’ performance on measures of processing speed and working memory. Method: Participants included 30 adolescents with ADHD and 39 typically developing peers between 14 and 17 years old matched in age and IQ. All youth completed standardized measures of math calculation and fluency as well as two tests of working memory and processing speed. Math fluency error patterns were examined. Results: Adolescents with ADHD showed less proficient math fluency despite having similar math calculation scores as their peers. Group differences were also observed in error types with youth with ADHD making more switch errors than their peers. Conclusion: This research has important clinical applications for the assessment and intervention on math ability in students with ADHD. PMID:29075227

  3. Math Error Types and Correlates in Adolescents with and without Attention Deficit Hyperactivity Disorder.

    PubMed

    Capodieci, Agnese; Martinussen, Rhonda

    2017-01-01

    Objective: The aim of this study was to examine the types of errors made by youth with and without a parent-reported diagnosis of attention deficit and hyperactivity disorder (ADHD) on a math fluency task and investigate the association between error types and youths' performance on measures of processing speed and working memory. Method: Participants included 30 adolescents with ADHD and 39 typically developing peers between 14 and 17 years old matched in age and IQ. All youth completed standardized measures of math calculation and fluency as well as two tests of working memory and processing speed. Math fluency error patterns were examined. Results: Adolescents with ADHD showed less proficient math fluency despite having similar math calculation scores as their peers. Group differences were also observed in error types with youth with ADHD making more switch errors than their peers. Conclusion: This research has important clinical applications for the assessment and intervention on math ability in students with ADHD.

  4. WE-G-213CD-03: A Dual Complementary Verification Method for Dynamic Tumor Tracking on Vero SBRT.

    PubMed

    Poels, K; Depuydt, T; Verellen, D; De Ridder, M

    2012-06-01

    To use complementary cine EPID and gimbals log file analysis for in-vivo tracking accuracy monitoring. A clinical prototype of dynamic tracking (DT) was installed on the Vero SBRT system. This prototype version allowed tumor tracking by gimballed linac rotations using an internal-external correspondence model. The DT prototype software allowed the detailed logging of all applied gimbals rotations during tracking. The integration of an EPID on the Vero system allowed the acquisition of cine EPID images during DT. We quantified the tracking error on cine EPID (E-EPID) by subtracting the target center (fiducial marker detection) and the field centroid. Dynamic gimbals log file information was combined with orthogonal x-ray verification images to calculate the in-vivo tracking error (E-kVLog). The correlation between E-kVLog and E-EPID was calculated for validation of the gimbals log file. Further, we investigated the sensitivity of the log file tracking error by introducing predefined systematic tracking errors. As an application, we calculated the gimbals log file tracking error for dynamic hidden target tests to investigate gravity effects and the decoupling of gimbals rotation from gantry rotation. Finally, the clinical accuracy of dynamic tracking was evaluated by calculating complementary cine EPID and log file tracking errors. A strong correlation was found between the log file and cine EPID tracking error distributions during concurrent measurements (R=0.98). We found sensitivity in the gimbals log files to detect a systematic tracking error down to 0.5 mm. Dynamic hidden target tests showed no gravity influence on tracking performance and a high degree of decoupling between gimbals and gantry rotation during dynamic arc dynamic tracking. A submillimetric agreement between clinical complementary tracking error measurements was found. 
Redundancy of the internal gimbals log file and x-ray verification images with complementary, independent cine EPID images was implemented to monitor the accuracy of gimballed tumor tracking on Vero SBRT. Research was financially supported by the Flemish government (FWO), Hercules Foundation and BrainLAB AG. © 2012 American Association of Physicists in Medicine.
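    The validation step above, correlating the log-file error against the independent cine EPID error, amounts to a Pearson correlation over concurrent samples; the short error traces below are illustrative, not the measured distributions:

```python
# Pearson correlation between the gimbals-log tracking error and the
# cine-EPID tracking error, as used to validate the log file
# (illustrative error traces in mm, not the study's data).
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

e_kvlog = [0.1, 0.3, -0.2, 0.5, -0.4, 0.2]
e_epid = [0.12, 0.28, -0.18, 0.52, -0.35, 0.25]
r = pearson_r(e_kvlog, e_epid)
```
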

  5. Human factors in aircraft incidents - Results of a 7-year study (Andre Allard Memorial Lecture)

    NASA Technical Reports Server (NTRS)

    Billings, C. E.; Reynard, W. D.

    1984-01-01

    It is pointed out that nearly all fatal aircraft accidents are preventable, and that most such accidents are due to human error. The present discussion is concerned with the results of a seven-year study of the data collected by the NASA Aviation Safety Reporting System (ASRS). The Aviation Safety Reporting System was designed to stimulate as large a flow as possible of information regarding errors and operational problems in the conduct of air operations. It was implemented in April, 1976. In the following 7.5 years, 35,000 reports have been received from pilots, controllers, and the armed forces. Human errors are found in more than 80 percent of these reports. Attention is given to the types of events reported, possible causal factors in incidents, the relationship of incidents and accidents, and sources of error in the data. ASRS reports include sufficient detail to permit authorities to institute changes in the national aviation system designed to minimize the likelihood of human error, and to insulate the system against the effects of errors.

  6. Human factors in surgery: from Three Mile Island to the operating room.

    PubMed

    D'Addessi, Alessandro; Bongiovanni, Luca; Volpe, Andrea; Pinto, Francesco; Bassi, PierFrancesco

    2009-01-01

    Human factors is a field that encompasses the science of understanding the properties of human capability, the application of this understanding to the design and development of systems and services, and the art of ensuring their successful application to a program. The field of human factors traces its origins to the Second World War, but Three Mile Island has been the best example of how groups of people react and make decisions under stress: this nuclear accident was exacerbated by wrong decisions made because the operators were overwhelmed with irrelevant, misleading or incorrect information. Errors and their nature are the same in all human activities. The predisposition for error is so intrinsic to human nature that scientifically it is best considered as inherently biologic. The causes of error in medical care may not be easily generalized. Surgery differs in important ways: most errors occur in the operating room and are technical in nature. Commonly, surgical error has been thought of as the consequence of a lack of skill or ability, the result of thoughtless actions. Moreover, the operating theatre has a unique set of team dynamics: professionals from multiple disciplines are required to work in a closely coordinated fashion. This complex environment provides multiple opportunities for unclear communication, clashing motivations, and errors arising not from technical incompetence but from poor interpersonal skills. Surgeons will have to work closely with human factors specialists in future studies. By improving processes already in place in many operating rooms, safety will be enhanced and quality increased.

  7. Minimizing systematic errors from atmospheric multiple scattering and satellite viewing geometry in coastal zone color scanner level IIA imagery

    NASA Technical Reports Server (NTRS)

    Martin, D. L.; Perry, M. J.

    1994-01-01

    Water-leaving radiances and phytoplankton pigment concentrations are calculated from coastal zone color scanner (CZCS) radiance measurements by removing atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. The single greatest source of error in CZCS atmospheric correction algorithms is the assumption that these Rayleigh and aerosol radiances are separable. Multiple-scattering interactions between Rayleigh and aerosol components cause systematic errors in calculated aerosol radiances, and the magnitude of these errors is dependent on aerosol type and optical depth and on satellite viewing geometry. A technique was developed which extends the results of previous radiative transfer modeling by Gordon and Castano to predict the magnitude of these systematic errors for simulated CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere. The simulated image mathematically duplicates the exact satellite, Sun, and pixel locations of an actual CZCS image. Errors in the aerosol radiance at 443 nm are calculated for a range of aerosol optical depths. When pixels in the simulated image exceed an error threshold, the corresponding pixels in the actual CZCS image are flagged and excluded from further analysis or from use in image compositing or compilation of pigment concentration databases. Studies based on time series analyses or compositing of CZCS imagery which do not address Rayleigh-aerosol multiple scattering should be interpreted cautiously, since the fundamental assumption used in their atmospheric correction algorithm is flawed.
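    The flag-and-exclude step described above is a simple threshold mask over the predicted per-pixel error; the threshold and error values below are synthetic, not CZCS data:

```python
# Flagging pixels whose predicted multiple-scattering error in the
# aerosol radiance exceeds a threshold, so they can be excluded from
# compositing or pigment databases (synthetic values, not CZCS data).
ERROR_THRESHOLD = 0.05  # maximum tolerated fractional radiance error

# Predicted fractional aerosol-radiance error per pixel (illustrative).
predicted_error = [
    [0.01, 0.02, 0.06, 0.03],
    [0.02, 0.07, 0.01, 0.02],
    [0.04, 0.03, 0.02, 0.09],
]

flagged = [
    [e > ERROR_THRESHOLD for e in row] for row in predicted_error
]
n_excluded = sum(sum(row) for row in flagged)
```
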

  8. The Human Factors Analysis and Classification System : HFACS : final report.

    DOT National Transportation Integrated Search

    2000-02-01

    Human error has been implicated in 70 to 80% of all civil and military aviation accidents. Yet, most accident reporting systems are not designed around any theoretical framework of human error. As a result, most accident databases are not conducive t...

  9. S-193 scatterometer transfer function analysis for data processing

    NASA Technical Reports Server (NTRS)

    Johnson, L.

    1974-01-01

    A mathematical model for converting raw data measurements of the S-193 scatterometer into processed values of radar scattering coefficient is presented. The argument is based on an approximation derived from the radar equation and the actual operating principles of the S-193 scatterometer hardware. Possible error sources are inaccuracies in transmitted wavelength, range, antenna illumination integrals, and the instrument itself. The dominant source of error in the calculation of the scattering coefficient is the accuracy of the range. All other factors, with the possible exception of the illumination integral, are not considered to cause significant error in the calculation of the scattering coefficient.
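    Because received power scales as the inverse fourth power of range in the radar equation, the scattering coefficient recovered from it inherits a fourth-power sensitivity to range error, which is why range dominates the error budget. A minimal sketch (the 1% figure is illustrative):

```python
import math

# In the radar equation the received power goes as sigma0 / R**4, so the
# recovered scattering coefficient goes as Pr * R**4 and a fractional
# range error enters to the fourth power.
def sigma0_error_db(range_error_fraction):
    """Error in the computed scattering coefficient (dB) caused by a
    fractional error in the measured range."""
    return 40.0 * math.log10(1.0 + range_error_fraction)

# A 1% range error alone biases sigma0 by roughly 0.17 dB.
err_db = sigma0_error_db(0.01)
```
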

  10. Differential reliance of chimpanzees and humans on automatic and deliberate control of motor actions.

    PubMed

    Kaneko, Takaaki; Tomonaga, Masaki

    2014-06-01

    Humans are often unaware of how they control their limb motor movements. People pay attention to their own motor movements only when their usual motor routines encounter errors. Yet little is known about the extent to which voluntary actions rely on automatic control and when automatic control shifts to deliberate control in nonhuman primates. In this study, we demonstrate that chimpanzees and humans showed similar limb motor adjustment in response to feedback error during reaching actions, whereas attentional allocation inferred from gaze behavior differed. We found that humans shifted attention to their own motor kinematics as errors were induced in motor trajectory feedback, regardless of whether the errors actually disrupted the achievement of their action goals. In contrast, chimpanzees shifted attention to motor execution only when errors actually interfered with their achieving a planned action goal. These results indicate that the species differed in their criteria for shifting from automatic to deliberate control of motor actions. It is widely accepted that sophisticated motor repertoires have evolved in humans. Our results suggest that the deliberate monitoring of one's own motor kinematics may have evolved in the human lineage. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Dependence of calculated postshock thermodynamic variables on vibrational equilibrium and input uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, Matthew Frederick; Owen, Kyle G.; Davidson, David F.

    The purpose of this article is to explore the dependence of calculated postshock thermodynamic properties in shock tube experiments upon the vibrational state of the test gas and upon the uncertainties inherent to calculation inputs. This paper first offers a comparison between state variables calculated according to a Rankine–Hugoniot–equation-based algorithm, known as FROSH, and those derived from shock tube experiments on vibrationally nonequilibrated gases. It is shown that incorrect vibrational relaxation assumptions could lead to errors in temperature as large as 8% for 25% oxygen/argon mixtures at 3500 K. Following this demonstration, this article employs the algorithm to show the importance of correct vibrational equilibration assumptions, noting, for instance, that errors in temperature of up to about 2% at 3500 K may be generated for 10% nitrogen/argon mixtures if vibrational relaxation is not treated properly. Lastly, this article presents an extensive uncertainty analysis, showing that postshock temperatures can be calculated with root-of-sum-of-square errors of better than ±1% given sufficiently accurate experimentally measured input parameters.

  12. Dependence of calculated postshock thermodynamic variables on vibrational equilibrium and input uncertainty

    DOE PAGES

    Campbell, Matthew Frederick; Owen, Kyle G.; Davidson, David F.; ...

    2017-01-30

    The purpose of this article is to explore the dependence of calculated postshock thermodynamic properties in shock tube experiments upon the vibrational state of the test gas and upon the uncertainties inherent to calculation inputs. This paper first offers a comparison between state variables calculated according to a Rankine–Hugoniot–equation-based algorithm, known as FROSH, and those derived from shock tube experiments on vibrationally nonequilibrated gases. It is shown that incorrect vibrational relaxation assumptions could lead to errors in temperature as large as 8% for 25% oxygen/argon mixtures at 3500 K. Following this demonstration, this article employs the algorithm to show the importance of correct vibrational equilibration assumptions, noting, for instance, that errors in temperature of up to about 2% at 3500 K may be generated for 10% nitrogen/argon mixtures if vibrational relaxation is not treated properly. Lastly, this article presents an extensive uncertainty analysis, showing that postshock temperatures can be calculated with root-of-sum-of-square errors of better than ±1% given sufficiently accurate experimentally measured input parameters.
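    The root-sum-of-square error combination used in the uncertainty analysis can be sketched as follows; the input uncertainties and sensitivity coefficients are invented placeholders, not FROSH values:

```python
import math

# Root-sum-of-squares combination of independent fractional input
# uncertainties into a postshock-temperature uncertainty. The
# sensitivity coefficients (dT/T per dX/X) are illustrative, not the
# FROSH values.
inputs = {
    # name: (fractional uncertainty, sensitivity)
    "shock_velocity": (0.002, 2.0),
    "initial_temperature": (0.003, 1.0),
    "mixture_composition": (0.005, 0.3),
}

t_fractional_error = math.sqrt(
    sum((u * s) ** 2 for u, s in inputs.values())
)
t_percent_error = 100 * t_fractional_error  # stays below 1% here
```
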

  13. Adjoints and Low-rank Covariance Representation

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.

    2000-01-01

    Quantitative measures of the uncertainty of Earth System estimates can be as important as the estimates themselves. Second moments of estimation errors are described by the covariance matrix, whose direct calculation is impractical when the number of degrees of freedom of the system state is large. Ensemble and reduced-state approaches to prediction and data assimilation replace full estimation error covariance matrices by low-rank approximations. The appropriateness of such approximations depends on the spectrum of the full error covariance matrix, whose calculation is also often impractical. Here we examine the situation where the error covariance is a linear transformation of a forcing error covariance. We use operator norms and adjoints to relate the appropriateness of low-rank representations to the conditioning of this transformation. The analysis is used to investigate low-rank representations of the steady-state response to random forcing of an idealized discrete-time dynamical system.
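    The appropriateness argument above, that a low-rank representation works when the transformed covariance's spectrum decays quickly, can be made concrete with a diagonal toy example (all numbers illustrative):

```python
# Sketch: error covariance P = L Q L^T as a linear transformation of a
# forcing covariance Q, with a low-rank approximation keeping only the
# leading eigen-directions. Diagonal matrices keep the algebra
# transparent (illustrative numbers only).
q_diag = [1.0, 1.0, 1.0, 1.0]   # forcing error covariance (identity)
l_diag = [10.0, 3.0, 0.5, 0.1]  # singular values of the operator L

# For diagonal L and Q = I, the eigenvalues of P = L Q L^T are l_i**2.
p_eigvals = sorted((l * l for l in l_diag), reverse=True)

k = 2  # rank of the reduced representation
captured = sum(p_eigvals[:k]) / sum(p_eigvals)
```

    Because the spectrum of P decays rapidly here, a rank-2 representation captures nearly all of the error variance; a flat spectrum would make the same truncation a poor approximation.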

  14. To image analysis in computed tomography

    NASA Astrophysics Data System (ADS)

    Chukalina, Marina; Nikolaev, Dmitry; Ingacheva, Anastasia; Buzmakov, Alexey; Yakimchuk, Ivan; Asadchikov, Victor

    2017-03-01

    The presence of errors in a tomographic image may lead to misdiagnosis when computed tomography (CT) is used in medicine, or to wrong decisions about the parameters of technological processes when CT is used in industrial applications. Two main causes produce these errors. First, errors occur at the measurement step, e.g. incorrect calibration and estimation of the geometric parameters of the set-up. The second cause is the nature of the tomographic reconstruction step, at which a mathematical model to calculate the projection data is created. The optimization and regularization methods applied, along with the numerical implementations of the method chosen, have their own specific errors. Nowadays, many research teams try to analyze these errors and construct the relations between error sources. In this paper, we do not analyze the nature of the final error, but present a new approach for calculating its distribution in the reconstructed volume. We hope that visualization of the error distribution will allow experts to clarify the medical report impression or expert summary given after analyzing CT results. To illustrate the efficiency of the proposed approach we present both simulation and real data processing results.

  15. Preventable Medical Errors Driven Modeling of Medical Best Practice Guidance Systems.

    PubMed

    Ou, Andrew Y-Z; Jiang, Yu; Wu, Po-Liang; Sha, Lui; Berlin, Richard B

    2017-01-01

    In a medical environment such as an Intensive Care Unit, there are many possible sources of error, and one important source is the effect of human intellectual tasks. When designing an interactive healthcare system such as a medical Cyber-Physical-Human System (CPHSystem), it is important to consider whether the system design can mitigate the errors caused by these tasks. In this paper, we first introduce five categories of generic human intellectual tasks, where tasks in each category may lead to potential medical errors. Then, we present an integrated modeling framework to model a medical CPHSystem and use UPPAAL as the foundation to integrate and verify the whole medical CPHSystem design model. With a verified and comprehensive model capturing the effects of human intellectual tasks, we can design a more accurate and acceptable system. We use a cardiac arrest resuscitation guidance and navigation system (CAR-GNSystem) for such medical CPHSystem modeling. Experimental results show that the CPHSystem models help determine system design flaws and can mitigate the potential medical errors caused by the human intellectual tasks.

  16. Dosimetry audit simulation of treatment planning system in multicenters radiotherapy

    NASA Astrophysics Data System (ADS)

    Kasmuri, S.; Pawiro, S. A.

    2017-07-01

    Treatment Planning System (TPS) is an important modality that determines radiotherapy outcome. A TPS requires input data obtained through commissioning, and errors can potentially be introduced there; an error at this stage may result in a systematic error. The aim of this study was to verify TPS dosimetry to determine the deviation range between calculated and measured dose. This study used the CIRS phantom 002LFC, representing the human thorax, and simulated all stages of external beam radiotherapy. The phantom was scanned using a CT scanner, and 8 test cases similar to clinical practice situations were planned and then tested in four radiotherapy centers. Doses were measured using a 0.6 cc ionization chamber. The results of this study showed that, in general, the deviations of all test cases in the four centers were within the agreement criteria, with average deviations of about -0.17±1.59 %, -1.64±1.92 %, 0.34±1.34 % and 0.13±1.81 %. The conclusion of this study was that all TPSs involved showed good performance. The superposition algorithm showed rather poorer performance than either the analytic anisotropic algorithm (AAA) or the convolution algorithm, with average deviations of about -1.64±1.92 %, -0.17±1.59 % and -0.27±1.51 %, respectively.
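    The deviation statistics above are signed percent differences between calculated and measured dose, averaged per algorithm; a minimal sketch with invented dose pairs:

```python
# Percent deviation between TPS-calculated and chamber-measured dose,
# as used in such audits (illustrative numbers, not the study's data).
def percent_deviation(calculated, measured):
    return 100.0 * (calculated - measured) / measured

# One hypothetical test case per tuple: (TPS dose, measured dose) in Gy.
cases = [(2.00, 2.01), (1.95, 2.00), (2.05, 2.02), (1.98, 1.97)]

deviations = [percent_deviation(c, m) for c, m in cases]
mean_dev = sum(deviations) / len(deviations)
```
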

  17. Double-pass measurement of human eye aberrations: limitations and practical realization

    NASA Astrophysics Data System (ADS)

    Letfullin, Renat R.; Belyakov, Alexey I.; Cherezova, Tatyana Y.; Kudryashov, Alexis V.

    2004-11-01

    The problem of correctly measuring eye aberrations has become increasingly important with the growing use of LASIK (laser-assisted in situ keratomileusis), a surgical procedure for reducing refractive error in the eye. The double-pass technique commonly used for measuring aberrations of a human eye involves some uncertainties. One of them is losing the information about odd human eye aberrations. We report on investigations of the applicability limit of double-pass measurements depending upon the aberration status introduced by the human eye and the actual size of the entrance pupil. We evaluate the double-pass effects for various aberrations and different pupil diameters. It is shown that for small pupils the double-pass effects are negligible. The testing and alignment of the aberrometer were performed using the schematic eye developed in our lab. We also introduce a model of the human eye based on a bimorph flexible mirror. We present calculations to demonstrate that our schematic eye is capable of reproducing the spatial-temporal statistics of aberrations of a living eye with normal vision, or of myopic, hypermetropic, or highly aberrated eyes.

  18. SU-E-T-769: T-Test Based Prior Error Estimate and Stopping Criterion for Monte Carlo Dose Calculation in Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, X; Gao, H; Schuemann, J

    2015-06-15

    Purpose: The Monte Carlo (MC) method is a gold standard for dose calculation in radiotherapy. However, it is not a priori clear how many particles need to be simulated to achieve a given dose accuracy. Prior error estimates and stopping criteria are not well established for MC. This work aims to fill this gap. Methods: Due to the statistical nature of MC, our approach is based on the one-sample t-test. We design the prior error estimate method based on the t-test, and then use this t-test based error estimate to develop a simulation stopping criterion. The three major components are as follows. First, the source particles are randomized in energy, space and angle, so that the dose deposition from a particle to the voxel is independent and identically distributed (i.i.d.). Second, a sample under consideration in the t-test is the mean value of dose deposition to the voxel by a sufficiently large number of source particles. Then, according to the central limit theorem, the sample, as the mean value of i.i.d. variables, is normally distributed with expectation equal to the true deposited dose. Third, the t-test is performed with the null hypothesis that the difference between the sample expectation (the same as the true deposited dose) and the on-the-fly calculated mean sample dose from MC is larger than a given error threshold; in addition, users have the freedom to specify the confidence probability and region of interest in the t-test based stopping criterion. Results: The method is validated for proton dose calculation. The difference between the MC result based on the t-test prior error estimate and the statistical result obtained by repeating numerous MC simulations is within 1%. Conclusion: The t-test based prior error estimate and stopping criterion are developed for MC and validated for proton dose calculation. Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).

  19. An electric field induced in the retina and brain at threshold magnetic flux density causing magnetophosphenes.

    PubMed

    Hirata, Akimasa; Takano, Yukinori; Fujiwara, Osamu; Dovan, Thanh; Kavet, Robert

    2011-07-07

    For magnetic field exposures at extremely low frequencies, the electrostimulatory response with the lowest threshold is the magnetophosphene, a response that corresponds to an adult exposed to a 20 Hz magnetic field of nominally 8.14 mT. In the IEEE standard C95.6 (2002), the corresponding in situ field in the retinal locus of an adult-sized ellipsoid was calculated to be 53 mV m(-1). However, the associated dose in the retina and brain at high resolution in anatomically correct human models is incompletely characterized. Furthermore, dose maxima in tissue computed with voxel human models are prone to staircasing errors, particularly in low-frequency dosimetry. In the analyses presented in this paper, analytical and quasi-static finite-difference time-domain (FDTD) solutions were first compared for a three-layer sphere exposed to a uniform 50 Hz magnetic field. Staircasing errors in the FDTD results were observed at the tissue interfaces and were greatest at the skin-air boundary. The 99th percentile value was within 3% of the analytic maximum, depending on model resolution, and thus may be considered a close approximation of the analytic maximum. For the adult anatomical model TARO exposed to a uniform magnetic field, the differences in the 99th percentile value of in situ electric fields between 2 mm and 1 mm voxel models were at most several per cent. For various human models exposed at the magnetophosphene threshold at three orthogonal field orientations, the in situ electric field in the brain was between 10% and 70% greater than the analytical IEEE threshold of 53 mV m(-1); in the retina it was lower by roughly 50% for the two horizontal orientations (anterior-posterior and lateral) and greater by about 15% for a vertically oriented field. 
    Considering the severalfold reduction (safety) factors applied to electrostimulatory thresholds, the 99th percentile dose to a tissue calculated with voxel human models may be used as an estimate of the tissue's maximum dose.
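The record's use of the 99th percentile as a staircasing-robust surrogate for the tissue maximum can be demonstrated with a toy field sample. Everything here is invented for illustration (the uniform field values and the 20 spiked "boundary voxels"); the point is only that a handful of artefact voxels dominates the raw maximum while barely moving the 99th percentile.

```python
import random

def percentile(values, q):
    """Nearest-rank percentile (no interpolation)."""
    s = sorted(values)
    k = max(0, min(len(s) - 1, round(q / 100 * len(s)) - 1))
    return s[k]

# Toy in situ field sample: a smooth distribution up to ~53 mV/m, plus a few
# voxels inflated by staircasing artefacts at a tissue-air boundary.
random.seed(7)
field = [random.uniform(0.0, 53.0) for _ in range(10000)]
spikes = [53.0 * random.uniform(1.5, 3.0) for _ in range(20)]
corrupted = field + spikes

p99 = percentile(corrupted, 99)   # barely affected by the 20 spiked voxels
raw_max = max(corrupted)          # dominated by the artefacts
```

The raw maximum lands on an artefact voxel, while the 99th percentile stays close to the true field maximum.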

  20. Cross Section Sensitivity and Propagated Errors in HZE Exposures

    NASA Technical Reports Server (NTRS)

    Heinbockel, John H.; Wilson, John W.; Blatnig, Steve R.; Qualls, Garry D.; Badavi, Francis F.; Cucinotta, Francis A.

    2005-01-01

    It has long been recognized that galactic cosmic rays are of such high energy that they tend to pass through available shielding materials, resulting in exposure of astronauts and equipment within space vehicles and habitats. Any protection provided by shielding materials results not so much from stopping such particles as from changing their physical character through interaction with shielding material nuclei, forming, hopefully, less dangerous species. Clearly, the fidelity of the nuclear cross sections is essential to correct specification of shield design, and sensitivity to cross-section error is important in guiding experimental validation of cross-section models and databases. We examine the Boltzmann transport equation, which is used to calculate dose equivalent during solar minimum, with units (cSv/yr), associated with various depths of shielding materials. The dose equivalent is a weighted sum of contributions from neutrons, protons, light ions, medium ions and heavy ions. We investigate the sensitivity of dose equivalent calculations to errors in nuclear fragmentation cross sections. We perform this error analysis for all possible projectile-fragment combinations (14,365 in all) to estimate the sensitivity of the shielding calculations to errors in the nuclear fragmentation cross sections. Numerical differentiation with respect to the cross sections is evaluated in a broad class of materials including polyethylene, aluminum and copper. We identify the most important cross sections for further experimental study and evaluate their impact on propagated errors in shielding estimates.
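The numerical-differentiation step in this record can be sketched with a deliberately crude two-species shield model. The closed-form "dose" below is invented for illustration (it is not the Boltzmann solution); the pattern to note is the central difference dD/dsigma and the first-order propagated error |dD/dsigma| * delta_sigma.

```python
import math

def dose(sigma_frag, sigma_abs=0.05, depth=20.0):
    """Toy two-species shield model: primaries attenuate with the total cross
    section; fragments are produced along the path and weighted by quality."""
    primaries = math.exp(-(sigma_abs + sigma_frag) * depth)
    fragments = sigma_frag * depth * math.exp(-sigma_abs * depth)
    return primaries + 0.3 * fragments

def sensitivity(sigma_frag, h=1e-6):
    """Central-difference derivative dD/dsigma of the toy dose."""
    return (dose(sigma_frag + h) - dose(sigma_frag - h)) / (2.0 * h)

s = sensitivity(0.02)
propagated_error = abs(s) * (0.10 * 0.02)  # dose error from a 10% sigma error
```

For the toy model the derivative is available analytically, so the central difference can be checked term by term before the same pattern is applied where no closed form exists.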

  1. Modeling human tracking error in several different anti-tank systems

    NASA Technical Reports Server (NTRS)

    Kleinman, D. L.

    1981-01-01

    An optimal control model for generating time histories of human tracking errors in antitank systems is outlined. Monte Carlo simulations of human operator responses for three Army antitank systems are compared. System/manipulator dependent data comparisons reflecting human operator limitations in perceiving displayed quantities and executing intended control motions are presented. Motor noise parameters are also discussed.

  2. [Using some modern mathematical models of postmortem cooling of the human body for the time of death determination].

    PubMed

    Vavilov, A Iu; Viter, V I

    2007-01-01

    Mathematical aspects of the errors of modern thermometric models of postmortem cooling of the human body are considered. The main diagnostic body regions used for thermometry are analyzed with a view to minimizing these errors. The authors propose practical recommendations for reducing the error in estimating the time of death.

  3. A viscoelastic model for the prediction of transcranial ultrasound propagation: application for the estimation of shear acoustic properties in the human skull

    NASA Astrophysics Data System (ADS)

    Pichardo, Samuel; Moreno-Hernández, Carlos; Drainville, Robert Andrew; Sin, Vivian; Curiel, Laura; Hynynen, Kullervo

    2017-09-01

    A better understanding of ultrasound transmission through the human skull is fundamental to developing optimal imaging and therapeutic applications. In this study, we present global attenuation values and functions that correlate apparent density calculated from computed tomography scans to the shear speed of sound. For this purpose, we used a model for sound propagation based on the viscoelastic wave equation (VWE) assuming isotropic conditions. The model was validated using a series of measurements with plates of different plastic materials and angles of incidence of 0°, 15° and 50°. The optimal functions for transcranial ultrasound propagation were established using the VWE, scan measurements of transcranial propagation with an angle of incidence of 40° and a genetic optimization algorithm. Ten (10) locations over three (3) skulls were used for ultrasound frequencies of 270 kHz and 836 kHz. Results with plastic materials demonstrated that the viscoelastic modeling predicted both longitudinal and shear propagation with an average (±s.d.) error of 9(±7)% of the wavelength in the predicted delay and an error of 6.7(±5)% in the estimation of transmitted power. Using the new optimal functions of speed of sound and global attenuation for the human skull, the proposed model predicted transcranial ultrasound transmission for a frequency of 270 kHz with an expected error in the predicted delay of 5(±2.7)% of the wavelength. The model predicted the sound propagation accurately regardless of whether shear or longitudinal transmission dominated. For 836 kHz, the model's average error in the predicted delay was 17(±16)% of the wavelength. The results indicate the importance of voxel-level specificity of the information for better understanding ultrasound transmission through the skull. 
    These results and the new model will be valuable tools for the future development of transcranial applications of ultrasound therapy and imaging.

  4. Automatic image registration performance for two different CBCT systems; variation with imaging dose

    NASA Astrophysics Data System (ADS)

    Barber, J.; Sykes, J. R.; Holloway, L.; Thwaites, D. I.

    2014-03-01

    The performance of an automatic image registration algorithm was compared on image sets collected with two commercial CBCT systems, and the relationship with imaging dose was explored. CBCT images of a CIRS Virtually Human Male Pelvis phantom (VHMP) were collected on Varian TrueBeam/OBI and Elekta Synergy/XVI linear accelerators across a range of mAs settings. Each CBCT image was registered 100 times, with random initial offsets introduced. Image registration was performed using the grey value correlation ratio algorithm in the Elekta XVI software, to a mask of the prostate volume with a 5 mm expansion. Residual registration errors were calculated after correcting for the initial introduced phantom set-up error. Registration performance with the OBI images was similar to that with XVI. There was a clear dependence on imaging dose for the XVI images, with residual errors increasing below 4 mGy. It was not possible to acquire images with doses lower than ~5 mGy with the OBI system, and no evidence of reduced performance was observed at this dose. Registration failures (maximum target registration error > 3.6 mm on the surface of a 30 mm sphere) occurred in 5% to 9% of registrations, except for the lowest-dose XVI scan (31%). The uncertainty in automatic image registration with both OBI and XVI images was found to be adequate for clinical use within a normal range of acquisition settings.
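The failure criterion quoted in this record (maximum target registration error on the surface of a 30 mm sphere) can be evaluated with simple geometry. The sketch below is a hypothetical worst-case bound, assuming the rotational displacement (the chord swept by the rotation angle) and the translation add in the same direction; the study's actual TRE computation may differ.

```python
import math

def max_tre_on_sphere(translation_mm, rotation_deg, radius_mm=30.0):
    """Worst-case target registration error on a sphere surface: translation
    plus the chord swept by a rotation about the sphere centre."""
    chord = 2.0 * radius_mm * math.sin(math.radians(rotation_deg) / 2.0)
    return translation_mm + chord

# a residual 1 mm shift combined with a 5 degree rotation
tre = max_tre_on_sphere(1.0, 5.0)
fails = tre > 3.6  # the registration-failure threshold used in the study
```

Under this bound, even a modest 5 degree rotation combined with a 1 mm shift just crosses the 3.6 mm failure threshold.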

  5. The Hinton train disaster.

    PubMed

    Smiley, A M

    1990-10-01

    In February of 1986 a head-on collision occurred between a freight train and a passenger train in western Canada killing 23 people and causing over $30 million of damage. A Commission of Inquiry appointed by the Canadian government concluded that human error was the major reason for the collision. This report discusses the factors contributing to the human error: mainly poor work-rest schedules, the monotonous nature of the train driving task, insufficient information about train movements, and the inadequate backup systems in case of human error.

  6. A Conceptual Framework for Predicting Error in Complex Human-Machine Environments

    NASA Technical Reports Server (NTRS)

    Freed, Michael; Remington, Roger; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    We present a Goals, Operators, Methods, and Selection Rules-Model Human Processor (GOMS-MHP) style model-based approach to the problem of predicting human habit capture errors. Habit captures occur when the model fails to allocate limited cognitive resources to retrieve task-relevant information from memory. Lacking the unretrieved information, decision mechanisms act in accordance with implicit default assumptions, resulting in error when relied upon assumptions prove incorrect. The model helps interface designers identify situations in which such failures are especially likely.

  7. Optimizer convergence and local minima errors and their clinical importance

    NASA Astrophysics Data System (ADS)

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R.

    2003-09-01

    Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.
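The two error types investigated in this record can be reproduced on a one-dimensional toy objective (invented here; the paper's optimizers work on clinical dose distributions). Gradient descent started on either side of the barrier lands in different local minima (the local-minima error), while loosening the stopping tolerance produces only a tiny convergence error, matching the observation that stopping criteria can often be relaxed.

```python
def f(x):
    """Nonconvex toy objective: a double well tilted by a linear term."""
    return (x * x - 1.0) ** 2 + 0.3 * x

def grad(x):
    return 4.0 * x * (x * x - 1.0) + 0.3

def gradient_descent(x, tol, lr=0.01, max_iter=200000):
    """Stop when |gradient| < tol -- a non-zero stopping criterion."""
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:
            break
        x -= lr * g
    return x

left = gradient_descent(-1.5, tol=1e-8)    # converges to the deeper well
right = gradient_descent(+1.5, tol=1e-8)   # trapped in the shallower well
local_minima_error = f(right) - f(left)    # large: depends on the start point
loose = gradient_descent(-1.5, tol=1e-2)   # relaxed stopping criterion
convergence_error = f(loose) - f(left)     # small: tolerance could be relaxed
```

The local-minima gap here is several orders of magnitude larger than the convergence error from the relaxed tolerance, the same qualitative ranking the record reports.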

  8. Optimizer convergence and local minima errors and their clinical importance.

    PubMed

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R

    2003-09-07

    Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.

  9. STAMP-Based HRA Considering Causality Within a Sociotechnical System: A Case of Minuteman III Missile Accident.

    PubMed

    Rong, Hao; Tian, Jin

    2015-05-01

    The study contributes to human reliability analysis (HRA) by proposing a method that focuses more on human error causality within a sociotechnical system, illustrating its rationality and feasibility by using a case of the Minuteman (MM) III missile accident. Due to the complexity and dynamics within a sociotechnical system, previous analyses of accidents involving human and organizational factors clearly demonstrated that the methods using a sequential accident model are inadequate to analyze human error within a sociotechnical system. System-theoretic accident model and processes (STAMP) was used to develop a universal framework of human error causal analysis. To elaborate the causal relationships and demonstrate the dynamics of human error, system dynamics (SD) modeling was conducted based on the framework. A total of 41 contributing factors, categorized into four types of human error, were identified through the STAMP-based analysis. All factors are related to a broad view of sociotechnical systems, and more comprehensive than the causation presented in the accident investigation report issued officially. Recommendations regarding both technical and managerial improvement for a lower risk of the accident are proposed. The interests of an interdisciplinary approach provide complementary support between system safety and human factors. The integrated method based on STAMP and SD model contributes to HRA effectively. The proposed method will be beneficial to HRA, risk assessment, and control of the MM III operating process, as well as other sociotechnical systems. © 2014, Human Factors and Ergonomics Society.

  10. Stitching-error reduction in gratings by shot-shifted electron-beam lithography

    NASA Technical Reports Server (NTRS)

    Dougherty, D. J.; Muller, R. E.; Maker, P. D.; Forouhar, S.

    2001-01-01

    Calculations of the grating spatial-frequency spectrum and the filtering properties of multiple-pass electron-beam writing demonstrate a tradeoff between stitching-error suppression and minimum pitch separation. High-resolution measurements of optical-diffraction patterns show a 25-dB reduction in stitching-error side modes.

  11. A system to use electromagnetic tracking for the quality assurance of brachytherapy catheter digitization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damato, Antonio L., E-mail: adamato@lroc.harvard.edu; Viswanathan, Akila N.; Don, Sarah M.

    2014-10-15

    Purpose: To investigate the use of a system using electromagnetic tracking (EMT), post-processing and an error-detection algorithm for detecting errors and resolving uncertainties in high-dose-rate brachytherapy catheter digitization for treatment planning. Methods: EMT was used to localize 15 catheters inserted into a phantom using a stepwise acquisition technique. Five distinct acquisition experiments were performed. Noise associated with the acquisition was calculated. The dwell location configuration was extracted from the EMT data. A CT scan of the phantom was performed, and five distinct catheter digitization sessions were performed. No a priori registration of the CT scan coordinate system with the EMT coordinate system was performed. CT-based digitization was automatically extracted from the brachytherapy plan DICOM files (CT), and rigid registration was performed between EMT and CT dwell positions. EMT registration error was characterized in terms of the mean and maximum distance between corresponding EMT and CT dwell positions per catheter. An algorithm for error detection and identification was presented. Three types of errors were systematically simulated: swap of two catheter numbers, partial swap of catheter number identification for parts of the catheters (mix), and catheter-tip shift. Error-detection sensitivity (number of simulated scenarios correctly identified as containing an error/number of simulated scenarios containing an error) and specificity (number of scenarios correctly identified as not containing errors/number of correct scenarios) were calculated. Catheter identification sensitivity (number of catheters correctly identified as erroneous across all scenarios/number of erroneous catheters across all scenarios) and specificity (number of catheters correctly identified as correct across all scenarios/number of correct catheters across all scenarios) were calculated. The mean detected and identified shift was calculated. 
    Results: The maximum noise ±1 standard deviation associated with the EMT acquisitions was 1.0 ± 0.1 mm, and the mean noise was 0.6 ± 0.1 mm. Registration of all the EMT and CT dwell positions was associated with a mean catheter error of 0.6 ± 0.2 mm, a maximum catheter error of 0.9 ± 0.4 mm, a mean dwell error of 1.0 ± 0.3 mm, and a maximum dwell error of 1.3 ± 0.7 mm. Error detection and catheter identification sensitivity and specificity of 100% were observed for swap, mix and shift (≥2.6 mm for error detection; ≥2.7 mm for catheter identification) errors. A mean detected shift of 1.8 ± 0.4 mm and a mean identified shift of 1.9 ± 0.4 mm were observed. Conclusions: Registration of the EMT dwell positions to the CT dwell positions was possible with a residual mean error per catheter of 0.6 ± 0.2 mm and a maximum error for any dwell of 1.3 ± 0.7 mm. These low residual registration errors show that quality assurance of the general characteristics of the catheters and of possible errors affecting one specific dwell position is possible. The sensitivity and specificity of the catheter digitization verification algorithm was 100% for swap and mix errors and for shifts ≥2.6 mm. On average, shifts ≥1.8 mm were detected, and shifts ≥1.9 mm were detected and identified.
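The swap-detection logic of this record can be illustrated with a toy version of the per-catheter error check: after rigid registration, compute each catheter's mean EMT-to-CT dwell distance and flag catheters exceeding the ~2.6 mm detection threshold reported above. The two straight five-dwell catheters and their 10 mm spacing are invented for the example.

```python
def catheter_errors(emt, ct):
    """Mean 3D distance between corresponding EMT and CT dwell positions,
    per catheter number (assumes rigid registration is already applied)."""
    errors = {}
    for cid, ct_dwells in ct.items():
        dists = [sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
                 for p, q in zip(emt[cid], ct_dwells)]
        errors[cid] = sum(dists) / len(dists)
    return errors

def flag_suspect_catheters(errors, tol_mm=2.6):
    """Flag catheters whose mean error exceeds the detection threshold."""
    return sorted(cid for cid, e in errors.items() if e > tol_mm)

# two straight catheters 10 mm apart; the CT digitization swapped their numbers
emt = {1: [(0.0, 0.0, float(z)) for z in range(5)],
       2: [(10.0, 0.0, float(z)) for z in range(5)]}
ct = {1: [(10.0, 0.0, float(z)) for z in range(5)],
      2: [(0.0, 0.0, float(z)) for z in range(5)]}
flagged = flag_suspect_catheters(catheter_errors(emt, ct))
```

Both swapped catheters show a 10 mm mean error, far above the threshold, so both are flagged.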

  12. Identification of factors associated with diagnostic error in primary care.

    PubMed

    Minué, Sergio; Bermúdez-Tamayo, Clara; Fernández, Alberto; Martín-Martín, José Jesús; Benítez, Vivian; Melguizo, Miguel; Caro, Araceli; Orgaz, María José; Prados, Miguel Angel; Díaz, José Enrique; Montoro, Rafael

    2014-05-12

    Missed, delayed or incorrect diagnoses are considered to be diagnostic errors. The aim of this paper is to describe the methodology of a study to analyse cognitive aspects of the process by which primary care (PC) physicians diagnose dyspnoea. It examines the possible links between the use of heuristics, suboptimal cognitive acts and diagnostic errors, using Reason's taxonomy of human error (slips, lapses, mistakes and violations). The influence of situational factors (professional experience, perceived overwork and fatigue) is also analysed. Cohort study of new episodes of dyspnoea in patients receiving care from family physicians and residents at PC centres in Granada (Spain). With an initial expected diagnostic error rate of 20%, and a sampling error of 3%, 384 episodes of dyspnoea are calculated to be required. In addition to filling out the electronic medical record of the patients attended, each physician fills out 2 specially designed questionnaires about the diagnostic process performed in each case of dyspnoea. The first questionnaire includes questions on the physician's initial diagnostic impression, the 3 most likely diagnoses (in order of likelihood), and the diagnosis reached after the initial medical history and physical examination. It also includes items on the physicians' perceived overwork and fatigue during patient care. The second questionnaire records the confirmed diagnosis once it is reached. The complete diagnostic process is peer-reviewed to identify and classify the diagnostic errors. The possible use of heuristics of representativeness, availability, and anchoring and adjustment in each diagnostic process is also analysed. Each audit is reviewed with the physician responsible for the diagnostic process. Finally, logistic regression models are used to determine if there are differences in the diagnostic error variables based on the heuristics identified. 
This work sets out a new approach to studying the diagnostic decision-making process in PC, taking advantage of new technologies which allow immediate recording of the decision-making process.

  13. Identification of factors associated with diagnostic error in primary care

    PubMed Central

    2014-01-01

    Background Missed, delayed or incorrect diagnoses are considered to be diagnostic errors. The aim of this paper is to describe the methodology of a study to analyse cognitive aspects of the process by which primary care (PC) physicians diagnose dyspnoea. It examines the possible links between the use of heuristics, suboptimal cognitive acts and diagnostic errors, using Reason’s taxonomy of human error (slips, lapses, mistakes and violations). The influence of situational factors (professional experience, perceived overwork and fatigue) is also analysed. Methods Cohort study of new episodes of dyspnoea in patients receiving care from family physicians and residents at PC centres in Granada (Spain). With an initial expected diagnostic error rate of 20%, and a sampling error of 3%, 384 episodes of dyspnoea are calculated to be required. In addition to filling out the electronic medical record of the patients attended, each physician fills out 2 specially designed questionnaires about the diagnostic process performed in each case of dyspnoea. The first questionnaire includes questions on the physician’s initial diagnostic impression, the 3 most likely diagnoses (in order of likelihood), and the diagnosis reached after the initial medical history and physical examination. It also includes items on the physicians’ perceived overwork and fatigue during patient care. The second questionnaire records the confirmed diagnosis once it is reached. The complete diagnostic process is peer-reviewed to identify and classify the diagnostic errors. The possible use of heuristics of representativeness, availability, and anchoring and adjustment in each diagnostic process is also analysed. Each audit is reviewed with the physician responsible for the diagnostic process. Finally, logistic regression models are used to determine if there are differences in the diagnostic error variables based on the heuristics identified. 
Discussion This work sets out a new approach to studying the diagnostic decision-making process in PC, taking advantage of new technologies which allow immediate recording of the decision-making process. PMID:24884984

  14. A contribution to the calculation of measurement uncertainty and optimization of measuring strategies in coordinate measurement

    NASA Astrophysics Data System (ADS)

    Waeldele, F.

    1983-01-01

    The influence of sample shape deviations on measurement uncertainties and the optimization of computer-aided coordinate measurement were investigated for a circle and a cylinder. Using the complete error propagation law in matrix form, the parameter uncertainties are calculated, taking the correlation between the measurement points into account. Theoretical investigations show that the measuring points should be equidistantly distributed and that, for a cylindrical body, a measuring-point distribution along a cross section is better than one along a helical line. The theoretically derived expressions for calculating the uncertainties prove to be a good basis for estimation, whereas the simple error theory is not satisfactory. The complete statistical data analysis helps to avoid gross measurement errors and to match the number of measuring points to the required measurement uncertainty.
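The value of equidistant points can be checked numerically in a stripped-down setting: radius estimation for a circle with a known centre, where the error-propagation law reduces to sigma/sqrt(n). The unit circle, noise level and point count are invented for illustration; the record's full treatment propagates the complete covariance matrix, including correlations between points.

```python
import math
import random

def fit_radius(points, cx, cy):
    """Least-squares radius for a known centre: mean point-to-centre distance."""
    dists = [math.hypot(x - cx, y - cy) for x, y in points]
    return sum(dists) / len(dists)

# n equidistant points on a unit circle with radial measurement noise sigma
random.seed(0)
n, sigma = 360, 0.01
points = []
for i in range(n):
    t = 2.0 * math.pi * i / n
    r = 1.0 + random.gauss(0.0, sigma)
    points.append((r * math.cos(t), r * math.sin(t)))

r_hat = fit_radius(points, 0.0, 0.0)
predicted_sd = sigma / math.sqrt(n)  # error propagation: sd(r_hat) = sigma/sqrt(n)
```

The estimated radius lands within a few predicted standard deviations of the true value, confirming the propagated-uncertainty expression for this simple case.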

  15. Theoretical precision analysis of RFM localization for satellite remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Jianqing; Xv, Biao

    2009-11-01

    The traditional method of assessing the precision of the Rational Function Model (RFM) uses a large number of check points, calculating the mean square error by comparing computed coordinates with known coordinates. This approach comes from probability theory: the mean square error is estimated statistically from a large number of samples, and the estimate can be considered to approach the true value when the sample is large enough. This paper instead approaches the problem from the perspective of survey adjustment, taking the law of propagation of error as its theoretical basis, and calculates the theoretical precision of RFM localization. SPOT5 three-line-array imagery is then used as experimental data, and the results of the traditional method and the method described in this paper are compared. The comparison confirms that the traditional method is feasible and answers the question of its theoretical precision from the standpoint of survey adjustment.

  16. Statistics of the residual refraction errors in laser ranging data

    NASA Technical Reports Server (NTRS)

    Gardner, C. S.

    1977-01-01

    A theoretical model for the range error covariance was derived by assuming that the residual refraction errors are due entirely to errors in the meteorological data which are used to calculate the atmospheric correction. The properties of the covariance function are illustrated by evaluating the theoretical model for the special case of a dense network of weather stations uniformly distributed within a circle.

  17. Nature of the refractive errors in rhesus monkeys (Macaca mulatta) with experimentally induced ametropias.

    PubMed

    Qiao-Grider, Ying; Hung, Li-Fang; Kee, Chea-Su; Ramamirtham, Ramkumar; Smith, Earl L

    2010-08-23

    We analyzed the contribution of individual ocular components to vision-induced ametropias in 210 rhesus monkeys. The primary contribution to refractive-error development came from vitreous chamber depth; a minor contribution from corneal power was also detected. However, there was no systematic relationship between refractive error and anterior chamber depth or between refractive error and any crystalline lens parameter. Our results are in good agreement with previous studies in humans, suggesting that the refractive errors commonly observed in humans are created by vision-dependent mechanisms that are similar to those operating in monkeys. This concordance emphasizes the applicability of rhesus monkeys in refractive-error studies. Copyright 2010 Elsevier Ltd. All rights reserved.

  18. Nature of the Refractive Errors in Rhesus Monkeys (Macaca mulatta) with Experimentally Induced Ametropias

    PubMed Central

    Qiao-Grider, Ying; Hung, Li-Fang; Kee, Chea-su; Ramamirtham, Ramkumar; Smith, Earl L.

    2010-01-01

    We analyzed the contribution of individual ocular components to vision-induced ametropias in 210 rhesus monkeys. The primary contribution to refractive-error development came from vitreous chamber depth; a minor contribution from corneal power was also detected. However, there was no systematic relationship between refractive error and anterior chamber depth or between refractive error and any crystalline lens parameter. Our results are in good agreement with previous studies in humans, suggesting that the refractive errors commonly observed in humans are created by vision-dependent mechanisms that are similar to those operating in monkeys. This concordance emphasizes the applicability of rhesus monkeys in refractive-error studies. PMID:20600237

  19. Prevalence of refractive errors in the Slovak population calculated using the Gullstrand schematic eye model.

    PubMed

    Popov, I; Valašková, J; Štefaničková, J; Krásnik, V

    2017-01-01

    A substantial part of the population suffers from some kind of refractive error. It is envisaged that their prevalence may change with the development of society. The aim of this study is to determine the prevalence of refractive errors using calculations based on the Gullstrand schematic eye model. We used the Gullstrand schematic eye model to calculate refraction retrospectively. Refraction was expressed as the required spectacle correction at a vertex distance of 12 mm. The necessary data were obtained using the Lenstar LS900 optical biometer; data that could not be obtained because of the limitations of the device were substituted with theoretical values from the Gullstrand schematic eye model. Only analyses from right eyes are presented. The data were interpreted using descriptive statistics, Pearson correlation and the t-test, with statistical tests conducted at a 5% level of significance. Our sample included 1663 patients (665 male, 998 female) aged 19 to 96 years; the average age was 70.8 ± 9.53 years. The average refraction of the eye was 2.73 ± 2.13 D (males 2.49 ± 2.34, females 2.90 ± 2.76). The mean absolute deviation from emmetropia was 3.01 ± 1.58 (males 2.83 ± 2.95, females 3.25 ± 3.35). Of the sample, 89.06% was hyperopic, 6.61% myopic and 4.33% emmetropic. We did not find any correlation between refraction and age. Females were more hyperopic than males. We did not find any statistically significant hypermetropic shift of refraction with age. According to our estimation, the calculations of refractive errors using the Gullstrand schematic eye model showed a significant hypermetropic shift of more than +2 D. Our results could be used in future to compare the prevalence of refractive errors determined with the same methods. Key words: refractive errors, refraction, Gullstrand schematic eye model, population, emmetropia.
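The "correction at a vertex distance of 12 mm" in this record is a standard lens-effectivity conversion, sketched below. The formula (moving a correcting power from the corneal plane out to the spectacle plane) is textbook optics rather than the authors' code, and the +3.00 D example value is invented.

```python
def spectacle_power(corneal_plane_power_D, vertex_mm=12.0):
    """Convert a refractive correction at the corneal plane to the equivalent
    spectacle-plane power at the given vertex distance (lens effectivity)."""
    d = vertex_mm / 1000.0  # vertex distance in metres
    return corneal_plane_power_D / (1.0 + d * corneal_plane_power_D)

# a +3.00 D correction at the cornea needs slightly less plus power in glasses
glasses_D = spectacle_power(3.00)
```

Plus lenses gain effective power as they move away from the eye, so the spectacle-plane prescription for a hyperope is slightly weaker than the corneal-plane value.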

  20. Improvement of drug dose calculations by classroom teaching or e-learning: a randomised controlled trial in nurses.

    PubMed

    Simonsen, Bjoerg O; Daehlin, Gro K; Johansson, Inger; Farup, Per G

    2014-10-24

    Insufficient skills in drug dose calculations increase the risk for medication errors. Even experienced nurses may struggle with such calculations. Learning flexibility and cost considerations make e-learning interesting as an alternative to classroom teaching. This study compared the learning outcome and risk of error after a course in drug dose calculations for nurses with the two methods. In a randomised controlled open study, nurses from hospitals and primary healthcare were randomised to either e-learning or classroom teaching. Before and after a 2-day course, the nurses underwent a multiple choice test in drug dose calculations: 14 tasks with four alternative answers (score 0-14), and a statement regarding the certainty of each answer (score 0-3). High risk of error was defined as being certain that an incorrect answer was correct. The results are given as the mean (SD). 16 men and 167 women participated in the study, aged 42.0 (9.5) years with a working experience of 12.3 (9.5) years. The number of correct answers after e-learning was 11.6 (2.0) and after classroom teaching 11.9 (2.0) (p=0.18, NS); improvements were 0.5 (1.6) and 0.9 (2.2), respectively (p=0.07, NS). Classroom learning was significantly superior to e-learning among participants with a pretest score below 9. In support of e-learning, participants rated it as having specific value for their working situation. There was no difference in risk of error between groups after the course (p=0.77). The study showed no differences in learning outcome or risk of error between e-learning and classroom teaching in drug dose calculations. The overall learning outcome was small. Weak precourse knowledge was associated with better outcome after classroom teaching. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
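    The study's "high risk of error" flag (full certainty in a wrong answer) is straightforward to express directly. A minimal sketch, assuming the study's 0-3 certainty scale; the function name and data layout are hypothetical:

```python
def grade_test(answers, correct, certainty, certain_threshold=3):
    """Score a multiple-choice drug dose test and flag high-risk
    responses: a wrong answer given with full certainty.

    answers/correct: lists of chosen and correct options per task.
    certainty: per-task certainty ratings on a 0-3 scale.
    Returns (score, number_of_high_risk_responses).
    """
    score = sum(a == c for a, c in zip(answers, correct))
    high_risk = sum(
        a != c and cert >= certain_threshold
        for a, c, cert in zip(answers, correct, certainty)
    )
    return score, high_risk

# Three tasks, one confident mistake on the last task:
print(grade_test(["a", "b", "c"], ["a", "b", "d"], [3, 2, 3]))  # (2, 1)
```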

  1. Improvement of drug dose calculations by classroom teaching or e-learning: a randomised controlled trial in nurses

    PubMed Central

    Simonsen, Bjoerg O; Daehlin, Gro K; Johansson, Inger; Farup, Per G

    2014-01-01

    Introduction Insufficient skills in drug dose calculations increase the risk for medication errors. Even experienced nurses may struggle with such calculations. Learning flexibility and cost considerations make e-learning interesting as an alternative to classroom teaching. This study compared the learning outcome and risk of error after a course in drug dose calculations for nurses with the two methods. Methods In a randomised controlled open study, nurses from hospitals and primary healthcare were randomised to either e-learning or classroom teaching. Before and after a 2-day course, the nurses underwent a multiple choice test in drug dose calculations: 14 tasks with four alternative answers (score 0–14), and a statement regarding the certainty of each answer (score 0–3). High risk of error was defined as being certain that an incorrect answer was correct. The results are given as the mean (SD). Results 16 men and 167 women participated in the study, aged 42.0 (9.5) years with a working experience of 12.3 (9.5) years. The number of correct answers after e-learning was 11.6 (2.0) and after classroom teaching 11.9 (2.0) (p=0.18, NS); improvements were 0.5 (1.6) and 0.9 (2.2), respectively (p=0.07, NS). Classroom learning was significantly superior to e-learning among participants with a pretest score below 9. In support of e-learning, participants rated it as having specific value for their working situation. There was no difference in risk of error between groups after the course (p=0.77). Conclusions The study showed no differences in learning outcome or risk of error between e-learning and classroom teaching in drug dose calculations. The overall learning outcome was small. Weak precourse knowledge was associated with better outcome after classroom teaching. PMID:25344483

  2. Decision Making In A High-Tech World: Automation Bias and Countermeasures

    NASA Technical Reports Server (NTRS)

    Mosier, Kathleen L.; Skitka, Linda J.; Burdick, Mark R.; Heers, Susan T.; Rosekind, Mark R. (Technical Monitor)

    1996-01-01

    Automated decision aids and decision support systems have become essential tools in many high-tech environments. In aviation, for example, flight management system computers not only fly the aircraft, but also calculate fuel-efficient paths, detect and diagnose system malfunctions and abnormalities, and recommend or carry out decisions. Air Traffic Controllers will soon be utilizing decision support tools to help them predict and detect potential conflicts and to generate clearances. Other fields as disparate as nuclear power plants and medical diagnostics are similarly becoming more and more automated. Ideally, the combination of human decision maker and automated decision aid should result in a high-performing team, maximizing the advantages of additional cognitive and observational power in the decision-making process. In reality, however, the presence of these aids often short-circuits the way that even very experienced decision makers have traditionally handled tasks and made decisions, and introduces opportunities for new decision heuristics and biases. Results of recent research investigating the use of automated aids have indicated the presence of automation bias, that is, errors made when decision makers rely on automated cues as a heuristic replacement for vigilant information seeking and processing. Automation commission errors, i.e., errors made when decision makers inappropriately follow an automated directive, or automation omission errors, i.e., errors made when humans fail to take action or notice a problem because an automated aid fails to inform them, can result from this tendency. Evidence of the tendency to make automation-related omission and commission errors has been found in pilot self-reports, in studies using pilots in flight simulations, and in non-flight decision-making contexts with student samples.
Considerable research has found that increasing social accountability can successfully ameliorate a broad array of cognitive biases and resultant errors. To what extent these effects generalize to performance situations is not yet empirically established. The two studies to be presented represent concurrent efforts, with student and professional pilot samples, to determine the effects of accountability pressures on automation bias and on the verification of the accurate functioning of automated aids. Students (Experiment 1) and commercial pilots (Experiment 2) performed simulated flight tasks using automated aids. In both studies, participants who perceived themselves as accountable for their strategies of interaction with the automation were significantly more likely to verify its correctness, and committed significantly fewer automation-related errors than those who did not report this perception.

  3. GUM Analysis for TIMS and SIMS Isotopic Ratios in Graphite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heasler, Patrick G.; Gerlach, David C.; Cliff, John B.

    2007-04-01

    This report describes GUM calculations for TIMS and SIMS isotopic ratio measurements of reactor graphite samples. These isotopic ratios are used to estimate reactor burn-up, and currently consist of various ratios of U, Pu, and boron impurities in the graphite samples. The GUM calculation is a propagation-of-error methodology that assigns uncertainties (in the form of standard errors and confidence bounds) to the final estimates.
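    For an isotopic ratio r = x/y with independent inputs, the first-order GUM propagation formula is (u_r/r)² = (u_x/x)² + (u_y/y)². A minimal sketch; the example values and the coverage factor k = 2 (a common convention for an approximate 95% confidence bound) are illustrative, not taken from the report:

```python
import math

def ratio_uncertainty(x, u_x, y, u_y):
    """First-order GUM uncertainty propagation for r = x / y,
    assuming independent inputs:
        (u_r / r)^2 = (u_x / x)^2 + (u_y / y)^2
    Returns (r, u_r) as (value, standard uncertainty)."""
    r = x / y
    u_r = abs(r) * math.sqrt((u_x / x) ** 2 + (u_y / y) ** 2)
    return r, u_r

# Illustrative values only; k = 2 gives an approximate 95% bound.
r, u = ratio_uncertainty(0.00725, 0.00005, 0.99275, 0.00005)
lower, upper = r - 2 * u, r + 2 * u
```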

  4. Nuclear binding energy using semi empirical mass formula

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ankita, E-mail: ankitagoyal@gmail.com; Suthar, B.

    2016-05-06

    In the present communication, a semi-empirical mass formula based on the liquid drop model is presented. Nuclear binding energies are calculated using the semi-empirical mass formula with various sets of constants given by different researchers. We also compare these calculated values with experimental data, and a comparative study for finding suitable constants is added using an error plot. The study is extended to find the most suitable constants to reduce the error.
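    A hedged sketch of such a calculation, using one commonly published set of liquid-drop constants (a_v = 15.75, a_s = 17.8, a_c = 0.711, a_a = 23.7, a_p = 11.18 MeV; other fits differ, which is exactly the comparison the abstract describes):

```python
def semf_binding_energy(Z, A, a_v=15.75, a_s=17.8, a_c=0.711,
                        a_a=23.7, a_p=11.18):
    """Semi-empirical (Bethe-Weizsacker) binding energy in MeV:
    volume, surface, Coulomb, asymmetry, and pairing terms.
    The default constants are one textbook fit, not a unique choice."""
    B = (a_v * A
         - a_s * A ** (2 / 3)
         - a_c * Z * (Z - 1) / A ** (1 / 3)
         - a_a * (A - 2 * Z) ** 2 / A)
    # Pairing term: + for even-even, - for odd-odd, 0 for odd A.
    if A % 2 == 0:
        B += a_p / A ** 0.5 if Z % 2 == 0 else -a_p / A ** 0.5
    return B

# Error plot input: calculated minus experimental, e.g. Fe-56
# (measured B is about 492.26 MeV; this fit lands within a few MeV).
err = semf_binding_energy(26, 56) - 492.26
```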

  5. Nonconvergence of the Wang-Landau algorithms with multiple random walkers.

    PubMed

    Belardinelli, R E; Pereyra, V D

    2016-05-01

    This paper discusses some convergence properties of the entropic sampling Monte Carlo methods with multiple random walkers, particularly the Wang-Landau (WL) and 1/t algorithms. The classical algorithms are modified by the use of m independent random walkers in the energy landscape to calculate the density of states (DOS). The Ising model is used to show the convergence properties in the calculation of the DOS, as well as the critical temperature, while the calculation of the number π by multidimensional integration is used in the continuum approximation. In each case, the error is obtained separately for each walker at a fixed time, t; then, the average over m walkers is performed. It is observed that the error goes as 1/√m. However, if the number of walkers increases above a certain critical value m > m_x, the error reaches a constant value (i.e., it saturates). This occurs for both algorithms; however, it is shown that for a given system, the 1/t algorithm is more efficient and accurate than the analogous version of the WL algorithm. It follows that it makes no sense to increase the number of walkers above the critical value m_x, since doing so does not reduce the error in the calculation. Therefore, increasing the number of walkers does not by itself guarantee convergence.
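    The 1/√m averaging behaviour (below the saturation threshold) can be illustrated with a toy example: m independent walkers each produce a Monte Carlo estimate of π, and the estimates are averaged. This is a sketch of the error-averaging argument only, not the WL or 1/t algorithm itself:

```python
import random

def estimate_pi(n_samples, rng):
    """One walker's hit-or-miss Monte Carlo estimate of pi:
    fraction of random points in the unit square that fall
    inside the quarter circle, times 4."""
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(n_samples))
    return 4.0 * hits / n_samples

def averaged_estimate(m_walkers, n_samples, seed=0):
    """Average m independent walkers. The standard error of the mean
    falls as 1/sqrt(m) -- until some other error source dominates,
    which is the saturation the paper describes."""
    estimates = [estimate_pi(n_samples, random.Random(seed + i))
                 for i in range(m_walkers)]
    return sum(estimates) / m_walkers
```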

  6. An Introduction to Error Analysis for Quantitative Chemistry

    ERIC Educational Resources Information Center

    Neman, R. L.

    1972-01-01

    Describes two formulas for calculating errors due to instrument limitations which are usually found in gravimetric volumetric analysis and indicates their possible applications to other fields of science. (CC)

  7. Assessing the Impact of Analytical Error on Perceived Disease Severity.

    PubMed

    Kroll, Martin H; Garber, Carl C; Bi, Caixia; Suffin, Stephen C

    2015-10-01

    The perception of the severity of disease from laboratory results assumes that the results are free of analytical error; however, analytical error spreads results into a band and thus into a range of perceived disease severity. Our objective was to assess the impact of analytical errors by calculating the change in perceived disease severity, represented by the hazard ratio, using non-high-density lipoprotein (nonHDL) cholesterol as an example. We transformed nonHDL values into ranges using the assumed total allowable errors for total cholesterol (9%) and high-density lipoprotein cholesterol (13%). Using a previously determined relationship between the hazard ratio and nonHDL, we calculated a range of hazard ratios for specified nonHDL concentrations affected by analytical error. Analytical error, within allowable limits, created a band of nonHDL values with a width spanning 30 to 70 mg/dL (0.78-1.81 mmol/L), depending on the cholesterol and high-density lipoprotein cholesterol concentrations. Hazard ratios ranged from 1.0 to 2.9, a 16% to 50% error. Increased bias widens this range and decreased bias narrows it. Error-transformed results produce a spread of values that straddle the various cutoffs for nonHDL. The range of the hazard ratio obscures the meaning of results, because the spreads of ratios at different cutoffs overlap. The magnitude of the perceived hazard ratio error exceeds that of the allowable analytical error, and significantly impacts the perceived cardiovascular disease risk. Evaluating the error in the perceived severity (eg, hazard ratio) provides a new way to assess the impact of analytical error.
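    The band calculation can be sketched by letting total cholesterol and HDL cholesterol err in opposite directions within their total allowable errors (9% and 13%); the function name and example concentrations are illustrative, not from the paper:

```python
def nonhdl_band(tc, hdl, tea_tc=0.09, tea_hdl=0.13):
    """Worst-case band of non-HDL cholesterol (TC - HDL) when both
    measurements err within their total allowable error (TEa), in
    opposite directions. Inputs and outputs in mg/dL."""
    low = tc * (1 - tea_tc) - hdl * (1 + tea_hdl)
    high = tc * (1 + tea_tc) - hdl * (1 - tea_hdl)
    return low, high

# TC 200, HDL 50 mg/dL: true nonHDL is 150, but allowable error
# produces a band of 125.5-174.5 mg/dL (width 49 mg/dL), inside the
# 30-70 mg/dL span reported in the abstract.
low, high = nonhdl_band(200.0, 50.0)
```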

  8. Moments of inclination error distribution computer program

    NASA Technical Reports Server (NTRS)

    Myler, T. R.

    1981-01-01

    A FORTRAN coded computer program is described which calculates orbital inclination error statistics using a closed-form solution. This solution uses a data base of trajectory errors from actual flights to predict the orbital inclination error statistics. The Scott flight history data base consists of orbit insertion errors in the trajectory parameters - altitude, velocity, flight path angle, flight azimuth, latitude and longitude. The methods used to generate the error statistics are of general interest since they have other applications. Program theory, user instructions, output definitions, subroutine descriptions and detailed FORTRAN coding information are included.

  9. Operational hydrological forecasting in Bavaria. Part I: Forecast uncertainty

    NASA Astrophysics Data System (ADS)

    Ehret, U.; Vogelbacher, A.; Moritz, K.; Laurent, S.; Meyer, I.; Haag, I.

    2009-04-01

    In Bavaria, operational flood forecasting has been established since the disastrous flood of 1999. Nowadays, forecasts based on rainfall information from about 700 raingauges and 600 rivergauges are calculated and issued for nearly 100 rivergauges. With the added experience of the 2002 and 2005 floods, awareness grew that the standard deterministic forecast, which neglects the uncertainty associated with each forecast, is misleading, creating a false feeling of unambiguousness. As a consequence, a system to identify, quantify and communicate the sources and magnitude of forecast uncertainty has been developed, which is presented in part I of this study. In this system, the use of ensemble meteorological forecasts plays a key role, which is presented in part II. In developing the system, several constraints stemming from the range of hydrological regimes and operational requirements had to be met. Firstly, operational time constraints obviate varying all components of the modeling chain as would be done in a full Monte Carlo simulation. Therefore, an approach was chosen where only the most relevant sources of uncertainty were dynamically considered, while the others were jointly accounted for by static error distributions from offline analysis. Secondly, the dominant sources of uncertainty vary over the wide range of forecasted catchments: in alpine headwater catchments, typically a few hundred square kilometers in size, rainfall forecast uncertainty is the key factor for forecast uncertainty, with a magnitude that changes dynamically with the prevailing predictability of the atmosphere. In lowland catchments encompassing several thousand square kilometers, forecast uncertainty in the desired range (usually up to two days) mainly depends on upstream gauge observation quality, routing and unpredictable human impact such as reservoir operation.
    The determination of forecast uncertainty comprised the following steps: a) From comparison of gauge observations and several years of archived forecasts, overall empirical error distributions, termed 'overall error', were derived for each gauge for a range of relevant forecast lead times. b) The error distributions vary strongly with the hydrometeorological situation, so a subdivision into the hydrological cases 'low flow', 'rising flood', 'flood' and 'flood recession' was introduced. c) For the sake of numerical compression, theoretical distributions were fitted to the empirical distributions using the method of moments. Here, the normal distribution was generally best suited. d) Further data compression was achieved by representing the distribution parameters as a function (second-order polynomial) of lead time. In general, the 'overall error' obtained from the above procedure is most useful in regions where large human impact occurs and where the influence of the meteorological forecast is limited. In upstream regions, however, forecast uncertainty is strongly dependent on the current predictability of the atmosphere, which is contained in the spread of an ensemble forecast. Including this dynamically in the hydrological forecast uncertainty estimation requires prior elimination of the contribution of the weather forecast to the 'overall error'. This was achieved by calculating long series of hydrometeorological forecast tests, where rainfall observations were used instead of forecasts. The resulting error distribution is termed 'model error' and can be applied to hydrological ensemble forecasts, where ensemble rainfall forecasts are used as forcing. The concept will be illustrated by examples (good and bad ones) covering a wide range of catchment sizes, hydrometeorological regimes and qualities of hydrological model calibration. The methodology to combine the static and dynamic shares of uncertainty is presented in part II of this study.
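    Steps c) and d) above can be sketched briefly: a method-of-moments fit of a normal distribution is simply the sample mean and standard deviation, and the lead-time dependence of those parameters is compressed into a second-order polynomial. A stdlib-only sketch; all names are illustrative, not from the operational system:

```python
from statistics import fmean, pstdev

def moments_fit(errors):
    """Method-of-moments fit of a normal distribution: the parameters
    equal the sample mean and (population) standard deviation."""
    return fmean(errors), pstdev(errors)

def fit_quadratic(ts, ys):
    """Least-squares fit of y = c0 + c1*t + c2*t^2 via the 3x3 normal
    equations, solved with Gaussian elimination (no numpy needed)."""
    s = [sum(t ** k for t in ts) for k in range(5)]
    b = [sum(y * t ** k for t, y in zip(ts, ys)) for k in range(3)]
    a = [[s[i + j] for j in range(3)] for i in range(3)]
    for col in range(3):                      # forward elimination
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = a[r][col] / a[col][col]
            for c in range(col, 3):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                       # back substitution
        coef[r] = (b[r] - sum(a[r][c] * coef[c]
                              for c in range(r + 1, 3))) / a[r][r]
    return coef  # [c0, c1, c2]

# Per lead time: fit the error sample; across lead times: compress the
# fitted means (and likewise the standard deviations) into a polynomial.
mu, sigma = moments_fit([-0.3, 0.1, 0.4, -0.2])
coeffs = fit_quadratic([6, 12, 24, 48], [0.1, 0.2, 0.5, 1.4])
```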

  10. Convergence of highly parallel stray field calculation using the fast multipole method on irregular meshes

    NASA Astrophysics Data System (ADS)

    Palmesi, P.; Abert, C.; Bruckner, F.; Suess, D.

    2018-05-01

    Fast stray field calculation is commonly considered of great importance for micromagnetic simulations, since it is the most time-consuming part of the simulation. The Fast Multipole Method (FMM) has displayed linear O(N) parallelization behavior on many cores. This article investigates the error of a recent FMM approach that approximates sources using linear (instead of constant) finite elements in the singular integral for calculating the stray field and the corresponding potential. Having measured performance in an earlier manuscript, we here investigate the convergence of the relative L2 error for several FMM simulation parameters. Various scenarios, either calculating the stray field directly or via the potential, are discussed.

  11. Improvement of the technique of calculating the energy-force parameters of pinch-pass mills for increasing the efficiency of producing cold-rolled strips

    NASA Astrophysics Data System (ADS)

    Garber, E. A.; Timofeeva, M. A.

    2016-11-01

    New propositions are introduced into the technique of energy-force calculation for pinch-pass mills in order to determine the energy-force and technological parameters of skin rolling of cold-rolled steel strips with minimal error. The application of these propositions decreases the errors in calculating the forces and torques in a working stand by a factor of 3-5 compared to the well-known technique, saves electric power in existing mills, and demonstrates the possibility of decreasing the dimensions of working stands and the power of the rolling mill engine.

  12. Conditional Entropy and Location Error in Indoor Localization Using Probabilistic Wi-Fi Fingerprinting.

    PubMed

    Berkvens, Rafael; Peremans, Herbert; Weyn, Maarten

    2016-10-02

    Localization systems are increasingly valuable, but their location estimates are only useful when the uncertainty of the estimate is known. This uncertainty is currently calculated as the location error given a ground truth, which is then used as a static measure in sometimes very different environments. In contrast, we propose the use of the conditional entropy of a posterior probability distribution as a complementary measure of uncertainty. This measure has the advantage of being dynamic, i.e., it can be calculated during localization based on individual sensor measurements, does not require a ground truth, and can be applied to discrete localization algorithms. Furthermore, for every consistent location estimation algorithm, both the location error and the conditional entropy measures must be related, i.e., a low entropy should always correspond with a small location error, while a high entropy can correspond with either a small or large location error. We validate this relationship experimentally by calculating both measures of uncertainty in three publicly available datasets using probabilistic Wi-Fi fingerprinting with eight different implementations of the sensor model. We show that the discrepancy between these measures, i.e., many location estimates having a high location error while simultaneously having a low conditional entropy, is largest for the least realistic implementations of the probabilistic sensor model. Based on the results presented in this paper, we conclude that conditional entropy, being dynamic, complementary to location error, and applicable to both continuous and discrete localization, provides an important extra means of characterizing a localization method.
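    The uncertainty measure proposed here, the entropy of the posterior distribution given the current measurements, is simple to compute for a discrete set of candidate locations. A minimal sketch, assuming a normalized discrete posterior; the function name is illustrative:

```python
import math

def entropy_bits(posterior):
    """Shannon entropy (in bits) of a discrete posterior over candidate
    locations. Computable online from a single measurement's posterior,
    with no ground truth required."""
    return -sum(p * math.log2(p) for p in posterior if p > 0.0)

# A confident posterior has low entropy; a spread-out one, high entropy.
print(entropy_bits([0.97, 0.01, 0.01, 0.01]))  # about 0.24 bits
print(entropy_bits([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits
```

    As the abstract argues, a low value should accompany a small location error in any consistent estimator, while a high value is compatible with either a small or a large error.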

  13. Conditional Entropy and Location Error in Indoor Localization Using Probabilistic Wi-Fi Fingerprinting

    PubMed Central

    Berkvens, Rafael; Peremans, Herbert; Weyn, Maarten

    2016-01-01

    Localization systems are increasingly valuable, but their location estimates are only useful when the uncertainty of the estimate is known. This uncertainty is currently calculated as the location error given a ground truth, which is then used as a static measure in sometimes very different environments. In contrast, we propose the use of the conditional entropy of a posterior probability distribution as a complementary measure of uncertainty. This measure has the advantage of being dynamic, i.e., it can be calculated during localization based on individual sensor measurements, does not require a ground truth, and can be applied to discrete localization algorithms. Furthermore, for every consistent location estimation algorithm, both the location error and the conditional entropy measures must be related, i.e., a low entropy should always correspond with a small location error, while a high entropy can correspond with either a small or large location error. We validate this relationship experimentally by calculating both measures of uncertainty in three publicly available datasets using probabilistic Wi-Fi fingerprinting with eight different implementations of the sensor model. We show that the discrepancy between these measures, i.e., many location estimates having a high location error while simultaneously having a low conditional entropy, is largest for the least realistic implementations of the probabilistic sensor model. Based on the results presented in this paper, we conclude that conditional entropy, being dynamic, complementary to location error, and applicable to both continuous and discrete localization, provides an important extra means of characterizing a localization method. PMID:27706099

  14. Point of optimal kinematic error: improvement of the instantaneous helical pivot method for locating centers of rotation.

    PubMed

    De Rosario, Helios; Page, Alvaro; Mata, Vicente

    2014-05-07

    This paper proposes a variation of the instantaneous helical pivot technique for locating centers of rotation. The point of optimal kinematic error (POKE), which minimizes the velocity at the center of rotation, may be obtained by just adding a weighting factor equal to the square of angular velocity in Woltring's equation of the pivot of instantaneous helical axes (PIHA). Calculations are simplified with respect to the original method, since it is not necessary to make explicit calculations of the helical axis, and the effect of accidental errors is reduced. The improved performance of this method was validated by simulations based on a functional calibration task for the gleno-humeral joint center. Noisy data caused a systematic dislocation of the calculated center of rotation towards the center of the arm marker cluster. This error in PIHA could even exceed the effect of soft tissue artifacts associated with small and medium deformations, but it was successfully reduced by the POKE estimation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Reliability study of biometrics "do not contact" in myopia.

    PubMed

    Migliorini, R; Fratipietro, M; Comberiati, A M; Pattavina, L; Arrico, L

    The aim of the study is a comparison between the refractive condition of the eye actually achieved after surgery and the expected refractive condition as calculated via a biometer. The study was conducted in a random group of 38 eyes of patients undergoing surgery by phacoemulsification. The mean absolute error between the values predicted from the optical biometer measurements and those obtained post-operatively was around 0.47%. Our study shows results not far from those reported in the literature; in particular, the mean absolute error is among the lowest reported values at 0.47 ± 0.11 SEM.

  16. Efficacy and workload analysis of a fixed vertical couch position technique and a fixed‐action–level protocol in whole‐breast radiotherapy

    PubMed Central

    Verhoeven, Karolien; Weltens, Caroline; Van den Heuvel, Frank

    2015-01-01

    Quantification of the setup errors is vital to define appropriate setup margins preventing geographical misses. The no‐action–level (NAL) correction protocol reduces the systematic setup errors and, hence, the setup margins. The manual entry of the setup corrections in the record‐and‐verify software, however, increases the susceptibility of the NAL protocol to human errors. Moreover, the impact of the skin mobility on the anteroposterior patient setup reproducibility in whole‐breast radiotherapy (WBRT) is unknown. In this study, we therefore investigated the potential of fixed vertical couch position‐based patient setup in WBRT. The possibility to introduce a threshold for correction of the systematic setup errors was also explored. We measured the anteroposterior, mediolateral, and superior–inferior setup errors during fractions 1–12 and weekly thereafter with tangential angled single modality paired imaging. These setup data were used to simulate the residual setup errors of the NAL protocol, the fixed vertical couch position protocol, and the fixed‐action–level protocol with different correction thresholds. Population statistics of the setup errors of 20 breast cancer patients and 20 breast cancer patients with additional regional lymph node (LN) irradiation were calculated to determine the setup margins of each off‐line correction protocol. Our data showed the potential of the fixed vertical couch position protocol to restrict the systematic and random anteroposterior residual setup errors to 1.8 mm and 2.2 mm, respectively. Compared to the NAL protocol, a correction threshold of 2.5 mm reduced the frequency of mediolateral and superior–inferior setup corrections with 40% and 63%, respectively. The implementation of the correction threshold did not deteriorate the accuracy of the off‐line setup correction compared to the NAL protocol. 
The combination of the fixed vertical couch position protocol, for correction of the anteroposterior setup error, and the fixed‐action–level protocol with 2.5 mm correction threshold, for correction of the mediolateral and the superior–inferior setup errors, was proved to provide adequate and comparable patient setup accuracy in WBRT and WBRT with additional LN irradiation. PACS numbers: 87.53.Kn, 87.57.‐s

  17. SU-E-J-30: Benchmark Image-Based TCP Calculation for Evaluation of PTV Margins for Lung SBRT Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, M; Chetty, I; Zhong, H

    2014-06-01

    Purpose: Tumor control probability (TCP) calculated with accumulated radiation doses may help design appropriate treatment margins. Image registration errors, however, may compromise the calculated TCP. The purpose of this study is to develop benchmark CT images to quantify registration-induced errors in the accumulated doses and their corresponding TCP. Methods: 4DCT images were registered from end-inhale (EI) to end-exhale (EE) using a “demons” algorithm. The demons DVFs were corrected by an FEM model to get realistic deformation fields. The FEM DVFs were used to warp the EI images to create the FEM-simulated images. The two images combined with the FEM DVF formed a benchmark model. Maximum intensity projection (MIP) images, created from the EI and simulated images, were used to develop IMRT plans. Two plans with 3 and 5 mm margins were developed for each patient. With these plans, radiation doses were recalculated on the simulated images and warped back to the EI images using the FEM DVFs to get the accumulated doses. The Elastix software was used to register the FEM-simulated images to the EI images. TCPs calculated with the Elastix-accumulated doses were compared with those generated by the FEM to get the TCP error of the Elastix registrations. Results: For six lung patients, the mean Elastix registration error ranged from 0.93 to 1.98 mm. Their relative dose errors in PTV were between 0.28% and 6.8% for 3mm-margin plans, and between 0.29% and 6.3% for 5mm-margin plans. As the PTV margin reduced from 5 to 3 mm, the mean TCP error of the Elastix-reconstructed doses increased from 2.0% to 2.9%, and the mean NTCP errors decreased from 1.2% to 1.1%. Conclusion: Patient-specific benchmark images can be used to evaluate the impact of registration errors on the computed TCPs, and may help select appropriate PTV margins for lung SBRT patients.

  18. Sensitivity Analysis of Nuclide Importance to One-Group Neutron Cross Sections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sekimoto, Hiroshi; Nemoto, Atsushi; Yoshimura, Yoshikane

    The importance of nuclides is useful when investigating nuclide characteristics in a given neutron spectrum. However, it is derived using one-group microscopic cross sections, which may contain large errors or uncertainties. The sensitivity coefficient shows the effect of these errors or uncertainties on the importance. The equations for calculating sensitivity coefficients of importance to one-group nuclear constants are derived using the perturbation method. Numerical values are also evaluated for some important cases for fast and thermal reactor systems. Many characteristics of the sensitivity coefficients are derived from the derived equations and numerical results. The matrix of sensitivity coefficients seems diagonally dominant. However, this is not always satisfied in the detailed structure. The detailed structure of the matrix and the characteristics of the coefficients are given. By using the obtained sensitivity coefficients, some demonstration calculations have been performed. The effects of error and uncertainty of nuclear data, and of the change of one-group cross-section input caused by fuel design changes through the neutron spectrum, are investigated. These calculations show that the sensitivity coefficient is useful when evaluating the error or uncertainty of nuclide importance caused by cross-section data error or uncertainty, and when checking the effectiveness of fuel cell or core design changes for improving neutron economy.

  19. Human emotion detector based on genetic algorithm using lip features

    NASA Astrophysics Data System (ADS)

    Brown, Terrence; Fetanat, Gholamreza; Homaifar, Abdollah; Tsou, Brian; Mendoza-Schrock, Olga

    2010-04-01

    We predicted human emotion using a Genetic Algorithm (GA) based lip feature extractor from facial images to classify all seven universal emotions of fear, happiness, dislike, surprise, anger, sadness and neutrality. First, we isolated the mouth from the input images using methods such as Region of Interest (ROI) acquisition, grayscaling, histogram equalization, filtering, and edge detection. Next, the GA determined the optimal or near-optimal ellipse parameters that circumscribe and separate the mouth into upper and lower lips. The two ellipses then went through fitness calculation, followed by training using a database of Japanese women's faces expressing all seven emotions. Finally, our proposed algorithm was tested using a published database consisting of emotions from several persons. The final results were then presented in confusion matrices. Our results showed an accuracy that varies from 20% to 60% for each of the seven emotions. The errors were mainly due to inaccuracies in the classification, and also due to the different expressions in the given emotion database. Detailed analysis of these errors pointed to the limitation of detecting emotion based on the lip features alone. Similar work [1] has been done in the literature for emotion detection in only one person; we have successfully extended our GA-based solution to include several subjects.

  20. Effect of monochromatic aberrations on photorefractive patterns

    NASA Astrophysics Data System (ADS)

    Campbell, Melanie C. W.; Bobier, W. R.; Roorda, A.

    1995-08-01

    Photorefractive methods have become popular in the measurement of refractive and accommodative states of infants and children owing to their photographic nature and rapid speed of measurement. As in the case of any method that measures the refractive state of the human eye, monochromatic aberrations will reduce the accuracy of the measurement. Monochromatic aberrations cannot be as easily predicted or controlled as chromatic aberrations during the measurement, and accordingly they will introduce measurement errors. This study defines this error or uncertainty by extending the existing paraxial optical analyses of coaxial and eccentric photorefraction. This new optical analysis predicts that, for the amounts of spherical aberration (SA) reported for the human eye, there will be a significant degree of measurement uncertainty introduced for all photorefractive methods. The dioptric amount of this uncertainty may exceed the maximum amount of SA present in the eye. The calculated effects on photorefractive measurement of a real eye with a mixture of spherical aberration and coma are shown to be significant. The ability, developed here, to predict photorefractive patterns corresponding to different amounts and types of monochromatic aberration may in the future lead to an extension of photorefractive methods to the dual measurement of refractive states and aberrations of individual eyes.

  1. Quantitative prediction of ionization effect on human skin permeability.

    PubMed

    Baba, Hiromi; Ueno, Yusuke; Hashida, Mitsuru; Yamashita, Fumiyoshi

    2017-04-30

    Although skin permeability of an active ingredient can be severely affected by its ionization in a dose solution, most of the existing prediction models cannot predict such impacts. To provide reliable predictors, we curated a novel large dataset of in vitro human skin permeability coefficients for 322 entries comprising chemically diverse permeants whose ionization fractions can be calculated. Subsequently, we generated thousands of computational descriptors, including LogD (octanol-water distribution coefficient at a specific pH), and analyzed the dataset using nonlinear support vector regression (SVR) and Gaussian process regression (GPR) combined with greedy descriptor selection. The SVR model was slightly superior to the GPR model, with externally validated squared correlation coefficient, root mean square error, and mean absolute error values of 0.94, 0.29, and 0.21, respectively. These models indicate that LogD is effective for a comprehensive prediction of ionization effects on skin permeability. In addition, the proposed models satisfied the statistical criteria endorsed in recent model validation studies. These models can evaluate virtually generated compounds at any pH; therefore, they can be used for high-throughput evaluations of numerous active ingredients and optimization of their skin permeability with respect to permeant ionization. Copyright © 2017 Elsevier B.V. All rights reserved.
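Since the dataset was restricted to permeants whose ionization fractions can be calculated, a minimal sketch of that calculation for the simplest case of a monoprotic acid, via the Henderson-Hasselbalch relation (the function name and the acid-only case are illustrative assumptions, not from the paper):

```python
def fraction_ionized_acid(pka, ph):
    """Fraction of a monoprotic acid present in ionized form at a given pH,
    from Henderson-Hasselbalch; illustrative helper, not the paper's code."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))
```

At pH = pKa the acid is 50% ionized; three pH units above pKa it is more than 99.9% ionized, which is why pH-dependent descriptors such as LogD matter for permeability.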

  2. Human Error as an Emergent Property of Action Selection and Task Place-Holding.

    PubMed

    Tamborello, Franklin P; Trafton, J Gregory

    2017-05-01

    A computational process model could explain how the dynamic interaction of human cognitive mechanisms produces each of multiple error types. With increasing capability and complexity of technological systems, the potential severity of consequences of human error is magnified. Interruption greatly increases people's error rates, as does the presence of other information to maintain in an active state. The model executed as a software-instantiated Monte Carlo simulation. It drew on theoretical constructs such as associative spreading activation for prospective memory, explicit rehearsal strategies as a deliberate cognitive operation to aid retrospective memory, and decay. The model replicated the 30% effect of interruptions on postcompletion error in Ratwani and Trafton's Stock Trader task, the 45% interaction effect on postcompletion error of working memory capacity and working memory load from Byrne and Bovair's Phaser Task, as well as the 5% perseveration and 3% omission effects of interruption from the UNRAVEL Task. Error classes including perseveration, omission, and postcompletion error fall naturally out of the theory. The model explains post-interruption error in terms of task state representation and priming for recall of subsequent steps. Its performance suggests that task environments providing more cues to current task state will mitigate error caused by interruption. For example, interfaces could provide labeled progress indicators or facilities for operators to quickly write notes about their task states when interrupted.
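The published model is not available as code here; the following toy Monte Carlo (all parameter values invented) only illustrates the general mechanism the abstract describes, in which interruption degrades a noisy task-state activation and omissions occur when it falls below a retrieval threshold:

```python
import random

def omission_rate(n_trials, interrupted, decay=0.15, base=1.0,
                  threshold=0.5, noise=0.3, seed=42):
    """Toy simulation: a step is omitted when its noisy activation falls
    below a retrieval threshold; interruption lowers activation.
    All numeric values are illustrative, not from the published model."""
    rng = random.Random(seed)
    activation = base - (decay if interrupted else 0.0)
    omissions = sum(1 for _ in range(n_trials)
                    if activation + rng.gauss(0.0, noise) < threshold)
    return omissions / n_trials
```

Running the interrupted and uninterrupted conditions with the same seed shows the interruption-driven increase in omission rate that the full model reproduces quantitatively.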

  3. A priori mesh grading for the numerical calculation of the head-related transfer functions

    PubMed Central

    Ziegelwanger, Harald; Kreuzer, Wolfgang; Majdak, Piotr

    2017-01-01

    Head-related transfer functions (HRTFs) describe the directional filtering of the incoming sound caused by the morphology of a listener’s head and pinnae. When an accurate model of a listener’s morphology exists, HRTFs can be calculated numerically with the boundary element method (BEM). However, the general recommendation to model the head and pinnae with at least six elements per wavelength renders the BEM a time-consuming procedure when calculating HRTFs for the full audible frequency range. In this study, a mesh preprocessing algorithm is proposed, viz., a priori mesh grading, which reduces the computational costs in the HRTF calculation process significantly. The mesh grading algorithm deliberately violates the recommendation of at least six elements per wavelength in certain regions of the head and pinnae and varies the size of elements gradually according to an a priori defined grading function. The evaluation of the algorithm involved HRTFs calculated for various geometric objects, including meshes of three human listeners, and various grading functions. The numerical accuracy and the predicted sound-localization performance of calculated HRTFs were analyzed. A priori mesh grading appeared to be suitable for the numerical calculation of HRTFs in the full audible frequency range and outperformed uniform meshes in terms of numerical errors, perception-based predictions of sound-localization performance, and computational costs. PMID:28239186
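The six-elements-per-wavelength rule translates directly into an upper bound on mesh edge length; a minimal sketch (the speed of sound and the 20 kHz upper audible limit are generic acoustics assumptions, not values quoted from the paper):

```python
def max_edge_length(frequency_hz, elements_per_wavelength=6, c=343.0):
    """Largest allowed mesh edge length in meters so that the acoustic
    wavelength at frequency_hz is sampled by at least
    elements_per_wavelength elements (c = speed of sound in air, m/s)."""
    return (c / frequency_hz) / elements_per_wavelength
```

At 20 kHz the bound is under 3 mm over the whole head, which is why uniform meshes become so expensive and graded meshes pay off.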

  4. The reliability of humerothoracic angles during arm elevation depends on the representation of rotations.

    PubMed

    López-Pascual, Juan; Cáceres, Magda Liliana; De Rosario, Helios; Page, Álvaro

    2016-02-08

    The reliability of joint rotation measurements is an issue of major interest, especially in clinical applications. The effect of instrumental errors and soft tissue artifacts on the variability of human motion measures is well known, but the influence of the representation of joint motion has not yet been studied. The aim of the study was to compare the within-subject reliability of three rotation formalisms for the calculation of the shoulder elevation joint angles. Five repetitions of humeral elevation in the scapular plane by 27 healthy subjects were recorded using a stereophotogrammetry system. The humerothoracic joint angles were calculated using the YX'Y" and XZ'Y" Euler angle sequences and the attitude vector. A within-subject repeatability study was performed for the three representations. ICC, SEM and CV were the indices used to estimate the error in the calculation of the angle amplitudes and the angular waveforms with each method. Excellent results were obtained in all representations for the main angle (elevation), but there were remarkable differences for axial rotation and plane of elevation. The YX'Y" sequence generally had the poorest reliability in the secondary angles. The XZ'Y" sequence proved to be the most reliable representation of axial rotation, whereas the attitude vector had the highest reliability in the plane of elevation. These results highlight the importance of selecting the method used to describe the joint motion when within-subject reliability is an important issue in the experiment. This may be of particular importance when the secondary angles of motion are being studied. Copyright © 2016 Elsevier Ltd. All rights reserved.
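For reference, the attitude (axis-angle) vector used as the third representation can be extracted from a rotation matrix as follows (a minimal pure-Python sketch of the standard formula; the paper does not publish code, and the near-180-degree singular case is not handled):

```python
import math

def attitude_vector(R):
    """Attitude vector (rotation axis scaled by the angle, in radians)
    from a 3x3 rotation matrix given as nested lists."""
    trace = R[0][0] + R[1][1] + R[2][2]
    angle = math.acos(max(-1.0, min(1.0, (trace - 1.0) / 2.0)))
    if angle < 1e-12:          # no rotation: zero vector
        return [0.0, 0.0, 0.0]
    s = 2.0 * math.sin(angle)  # valid away from angle = pi
    axis = ((R[2][1] - R[1][2]) / s,
            (R[0][2] - R[2][0]) / s,
            (R[1][0] - R[0][1]) / s)
    return [angle * a for a in axis]
```

Unlike Euler sequences, this representation has no sequence-dependent gimbal ambiguity, which is one reason its secondary-angle reliability can differ.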

  5. Spent fuel pool storage calculations using the ISOCRIT burnup credit tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kucukboyaci, Vefa; Marshall, William BJ J

    2012-01-01

    In order to conservatively apply burnup credit in spent fuel pool criticality safety analyses, Westinghouse has developed a software tool, ISOCRIT, for generating depletion isotopics. This tool is used to create isotopics data based on specific reactor input parameters, such as design basis assembly type; bounding power/burnup profiles; reactor-specific moderator temperature profiles; pellet percent theoretical density; burnable absorbers; axial blanket regions; and bounding ppm boron concentration. ISOCRIT generates burnup-dependent isotopics using PARAGON, Westinghouse's state-of-the-art, licensed lattice physics code. Generation of isotopics and passing of the data to the subsequent 3D KENO calculations are performed in an automated fashion, thus reducing the chance for human error. Furthermore, ISOCRIT provides the means for responding to any customer request regarding re-analysis due to changed parameters (e.g., power uprate, exit temperature changes, etc.) with a quick turnaround.

  6. Vanishing Point Extraction and Refinement for Robust Camera Calibration

    PubMed Central

    Tsai, Fuan

    2017-01-01

    This paper describes a flexible camera calibration method using refined vanishing points without prior information. Vanishing points are estimated from human-made features like parallel lines and repeated patterns. With the vanishing points extracted from the three mutually orthogonal directions, the interior and exterior orientation parameters can be further calculated using collinearity condition equations. A vanishing point refinement process is proposed to reduce the uncertainty caused by vanishing point localization errors. The fine-tuning algorithm is based on the divergence of grouped feature points projected onto the reference plane, minimizing the standard deviation of each of the grouped collinear points with an O(1) computational complexity. This paper also presents an automated vanishing point estimation approach based on the cascade Hough transform. The experiment results indicate that the vanishing point refinement process can significantly improve camera calibration parameters and the root mean square error (RMSE) of the constructed 3D model can be reduced by about 30%. PMID:29280966

  7. Design and Calibration of a New 6 DOF Haptic Device

    PubMed Central

    Qin, Huanhuan; Song, Aiguo; Liu, Yuqing; Jiang, Guohua; Zhou, Bohe

    2015-01-01

    For many applications, such as tele-operational robots and interactions with virtual environments, performance is better with force feedback than without. Haptic devices are force-reflecting interfaces. They can also track human hand positions simultaneously. A new 6 DOF (degree-of-freedom) haptic device was designed and calibrated in this study. It mainly contains a double parallel linkage, a rhombus linkage, a rotating mechanical structure and a grasping interface. Benefiting from this unique design, it is a hybrid-structure device with a large workspace and high output capability. Therefore, it is capable of multi-finger interactions. Moreover, with an adjustable base, operators can change to different postures without interrupting haptic tasks. To investigate the performance regarding position tracking accuracy and static output forces, we conducted experiments on a three-dimensional electric sliding platform and a digital force gauge, respectively. Displacement errors and force errors were calculated and analyzed. To identify the capability and potential of the device, four application examples were programmed. PMID:26690449

  8. Impact of Uncertainties and Errors in Converting NWS Radiosonde Hygristor Resistances to Relative Humidity

    NASA Technical Reports Server (NTRS)

    Westphal, Douglas L.; Russell, Philip (Technical Monitor)

    1994-01-01

    A set of 2,600 6-second National Weather Service soundings from NASA's FIRE-II Cirrus field experiment are used to illustrate previously known errors and new potential errors in the VIZ and SDD brand relative humidity (RH) sensors and the MicroART processing software. The entire spectrum of RH is potentially affected by at least one of these errors. (These errors occur before the data are converted to dew point temperature.) Corrections to the errors are discussed. Examples are given of the effect that these errors and biases may have on numerical weather prediction and radiative transfer. The figure shows the OLR calculated for the corrected and uncorrected soundings using an 18-band radiative transfer code. The OLR differences are sufficiently large to warrant consideration when validating line-by-line radiation calculations that use radiosonde data to specify the atmospheric state, or when validating satellite retrievals. In addition, a comparison of observations of RH during FIRE-II derived from GOES satellite, Raman lidar, MAPS analyses, NCAR CLASS sondes, and the NWS sondes reveals disagreement in the RH distribution and underlines our lack of an understanding of the climatology of water vapor.

  10. The solar vector error within the SNPP Common GEO code, the correction, and the effects on the VIIRS SDR RSB calibration

    NASA Astrophysics Data System (ADS)

    Fulbright, Jon; Anderson, Samuel; Lei, Ning; Efremova, Boryana; Wang, Zhipeng; McIntire, Jeffrey; Chiang, Kwofu; Xiong, Xiaoxiong

    2014-11-01

    Due to a software error, the solar and lunar vectors reported in the on-board calibrator intermediate product (OBC-IP) files for SNPP VIIRS are incorrect. The magnitude of the error is about 0.2 degree, and it is increasing by about 0.01 degree per year. This error, although small, has an effect on the radiometric calibration of the reflective solar bands (RSB) because accurate solar angles are required for calculating the screen transmission functions and for calculating the illumination of the Solar Diffuser panel. In this paper, we describe the error in the Common GEO code and how it may be fixed. We present evidence for the error from within the OBC-IP data. We also describe the effects of the solar vector error on the RSB calibration and the Sensor Data Record (SDR). In order to perform this evaluation, we have reanalyzed the yaw-maneuver data to compute the vignetting functions required for the on-orbit SD RSB radiometric calibration. After the reanalysis, we find an effect of up to 0.5% on the shortwave infrared (SWIR) RSB calibration.

  11. Minimizing the IOL power error induced by keratometric power.

    PubMed

    Camps, Vicente J; Piñero, David P; de Fez, Dolores; Mateo, Verónica

    2013-07-01

    To evaluate theoretically in normal eyes the influence of the use of a keratometric index (nk) on IOL power (PIOL) calculation, and to analyze and preliminarily validate the use of an adjusted keratometric index (nkadj) in IOL power calculation (PIOLadj). A model of variable keratometric index (nkadj) for corneal power calculation (Pc) was used for IOL power calculation (named PIOLadj). Theoretical differences (ΔPIOL) between the new proposed formula (PIOLadj) and the value obtained through Gaussian optics (equation included in the full-text article) were determined using the Gullstrand and Le Grand eye models. The proposed new formula for IOL power calculation (PIOLadj) was prevalidated clinically in 81 eyes of 81 candidates for corneal refractive surgery and compared with the Haigis, HofferQ, Holladay, and SRK/T formulas. A theoretical PIOL underestimation greater than 0.5 diopters was present in most cases when nk = 1.3375 was used. If nkadj was used for Pc calculation, a maximal calculated error in ΔPIOL of ±0.5 diopters at the corneal vertex was observed in most cases, independently of the eye model, r1c, and the desired postoperative refraction. The use of nkadj in IOL power calculation (PIOLadj) could be valid with effective lens position optimization not dependent on the corneal power. The use of a single value of nk for Pc calculation can lead to significant errors in PIOL calculation that may explain some IOL power overestimations with conventional formulas. These inaccuracies can be minimized by using the new PIOLadj based on the algorithm of nkadj.
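The role of the keratometric index can be seen in the classical thin-lens keratometric relation Pc = (nk - 1)/r1c; a minimal sketch of that conventional formula (not the paper's adjusted nkadj algorithm):

```python
def corneal_power(nk, r1c_mm):
    """Keratometric corneal power in diopters from the keratometric index
    nk and the anterior corneal radius r1c in millimeters:
    Pc = (nk - 1) / r1c, with r1c converted to meters."""
    return (nk - 1.0) / (r1c_mm / 1000.0)
```

With the conventional nk = 1.3375 and a typical r1c of 7.8 mm, Pc is about 43.3 D; small changes in the assumed index shift Pc, and hence PIOL, by clinically relevant fractions of a diopter.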

  12. Systematic Error Modeling and Bias Estimation

    PubMed Central

    Zhang, Feihu; Knoll, Alois

    2016-01-01

    This paper analyzes the statistical properties of the systematic error in terms of range and bearing during the transformation process. Furthermore, we rely on a weighted nonlinear least squares method to calculate the biases based on the proposed models. The results show the high performance of the proposed approach for error modeling and bias estimation. PMID:27213386

  13. 7 CFR 275.23 - Determination of State agency program performance.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE FOOD STAMP AND FOOD DISTRIBUTION PROGRAM PERFORMANCE REPORTING... section, the adjusted regressed payment error rate shall be calculated to yield the State agency's payment error rate. The adjusted regressed payment error rate is given by r 1″ + r 2″. (ii) If FNS determines...

  14. Monte Carlo dose calculation in dental amalgam phantom

    PubMed Central

    Aziz, Mohd. Zahri Abdul; Yusoff, A. L.; Osman, N. D.; Abdullah, R.; Rabaie, N. A.; Salikin, M. S.

    2015-01-01

    It has become a great challenge in modern radiation treatment to ensure the accuracy of treatment delivery in electron beam therapy. Tissue inhomogeneity is one of the factors affecting accurate dose calculation, and accounting for it requires a complex calculation algorithm such as Monte Carlo (MC). On the other hand, the computed tomography (CT) images used in the treatment planning system need to be trustworthy, as they are the input in radiotherapy treatment. However, with the presence of metal amalgam in the treatment volume, the CT images showed prominent streak artefacts, which contribute sources of error to the dose calculation. A streak artifact reduction technique was therefore applied to correct the images and, as a result, better images were observed in terms of structure delineation and density assignment. Furthermore, the amalgam density data were corrected to provide amalgam voxels with accurate density values. The resulting dose uncertainties due to metal amalgam were reduced from 46% to as low as 2% at d80 (depth of the 80% dose beyond Zmax) using the presented strategies. Considering the number of vital and radiosensitive organs in the head and neck regions, this correction strategy is suggested for reducing calculation uncertainties in MC calculation. PMID:26500401

  15. [Medication error management climate and perception for system use according to construction of medication error prevention system].

    PubMed

    Kim, Myoung Soo

    2012-08-01

    The purpose of this cross-sectional study was to examine the current status of construction of IT-based medication error prevention systems and the relationships among system construction, medication error management climate, and perception of system use. The participants were 124 patient safety chief managers working for 124 hospitals with over 300 beds in Korea. The characteristics of the participants, construction status and perception of the systems (electronic pharmacopoeia, electronic drug dosage calculation system, computer-based patient safety reporting, and bar-code system), and medication error management climate were measured in this study. The data were collected between June and August 2011. Descriptive statistics, partial Pearson correlation, and MANCOVA were used for data analysis. Electronic pharmacopoeias were constructed in 67.7% of participating hospitals, computer-based patient safety reporting systems in 50.8%, and electronic drug dosage calculation systems in 32.3%. Bar-code systems showed the lowest construction rate, at 16.1% of Korean hospitals. Higher rates of construction of IT-based medication error prevention systems were associated with greater safety and a more positive error management climate. Supportive strategies for improving the perception of IT-based system use would encourage system construction, and a positive error management climate would be more easily promoted.

  16. Simultaneous Control of Error Rates in fMRI Data Analysis

    PubMed Central

    Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David

    2015-01-01

    The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate, and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulated (global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations showing that the likelihood approach is viable, leading to ‘cleaner’ looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. PMID:26272730
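A voxelwise likelihood ratio under a simple normal model can be sketched as follows (the two means, the unit variance, and the one-observation setup are illustrative assumptions, not the paper's actual fMRI model):

```python
import math

def likelihood_ratio(x, mu0=0.0, mu1=1.0, sigma=1.0):
    """Likelihood ratio measuring strength of evidence for H1 (mean mu1)
    over H0 (mean mu0) given one normal observation x. The normalization
    constant is dropped because it cancels in the ratio."""
    def log_kernel(mu):
        return -0.5 * ((x - mu) / sigma) ** 2
    return math.exp(log_kernel(mu1) - log_kernel(mu0))
```

Values above 1 favor H1 and values below 1 favor H0; thresholding this ratio per voxel, rather than a fixed per-comparison Type I rate, is the core of the approach described above.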

  17. Human error and the search for blame

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1989-01-01

    Human error is a frequent topic in discussions about risks in using computer systems. A rational analysis of human error leads through the consideration of mistakes to standards that designers use to avoid mistakes that lead to known breakdowns. The irrational side, however, is more interesting. It conditions people to think that breakdowns are inherently wrong and that there is ultimately someone who is responsible. This leads to a search for someone to blame which diverts attention from: learning from the mistakes; seeing the limitations of current engineering methodology; and improving the discourse of design.

  18. Colour compatibility between teeth and dental shade guides in Quinquagenarians and Septuagenarians.

    PubMed

    Cocking, C; Cevirgen, E; Helling, S; Oswald, M; Corcodel, N; Rammelsberg, P; Reinelt, G; Hassel, A J

    2009-11-01

    The aim of this investigation was to determine colour compatibility between dental shade guides, namely VITA Classical (VC) and VITA 3D-Master (3D), and human teeth in quinquagenarians and septuagenarians. Tooth colour, described in terms of L*a*b* values of the middle third of the facial tooth surface of 1391 teeth, was measured using VITA Easyshade in 195 subjects (48% female). These were compared with the colours (L*a*b* values) of the shade tabs of VC and 3D. The mean coverage error and the percentage of tooth colours lying within a given colour difference (DeltaE(ab)) from the tabs of VC and 3D were calculated. For comparison, hypothetical, optimized, population-specific shade guides were additionally calculated based on discrete optimization techniques for optimizing coverage. Mean coverage error was DeltaE(ab) = 3.51 for VC and DeltaE(ab) = 2.96 for 3D. Coverage of tooth colours by the tabs of VC and 3D within DeltaE(ab) = 2 was 23% and 24%, respectively.

  19. The complexity of hair/blood mercury concentration ratios and its implications.

    PubMed

    Liberda, Eric N; Tsuji, Leonard J S; Martin, Ian D; Ayotte, Pierre; Dewailly, Eric; Nieboer, Evert

    2014-10-01

    The World Health Organization (WHO) recommends a mercury (Hg) hair-to-blood ratio of 250 for the conversion of Hg hair levels to those in whole blood. This encouraged the selection of hair as the preferred analyte because it minimizes collection, storage, and transportation issues. In spite of these advantages, there is concern about inherent uncertainties in the use of this ratio. To evaluate the appropriateness of the WHO ratio, we investigated total hair and total blood Hg concentrations in 1333 individuals from 9 First Nations (Aboriginal) communities in northern Québec, Canada. We grouped participants by sex, age, and community and performed a 3-factor (M)ANOVA for total Hg in hair (0-2 cm), total Hg in blood, and their ratio. In addition, we calculated the percent error associated with the use of the WHO ratio in predicting blood Hg concentrations from hair Hg. For group comparisons, estimated marginal means (EMMs) were calculated following ANOVA. At the community level, the error in blood Hg estimated from hair Hg ranged from -25% to +24%. Systematic underestimation (-8.4%) occurred for females and overestimation for males (+5.8%). At the individual level, the corresponding error range was -98.7% to 1040%, with observed hair-to-blood ratios spanning 3 to 2845. The application of the ratio endorsed by the WHO would be unreliable for determining individual follow-up. We propose that Hg exposure be assessed by blood measurements when there are human health concerns, and that the singular use of hair and the hair-to-blood concentration conversion be discouraged in establishing individual risk. Crown Copyright © 2014. Published by Elsevier Inc. All rights reserved.
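The WHO conversion and the percent-error metric used in the study can be sketched as follows (function names are illustrative, and hair and blood are assumed to be expressed in the same concentration units so the ratio of 250 applies directly):

```python
def blood_from_hair(hair_hg, ratio=250.0):
    """Predict blood Hg from hair Hg (same units) via the WHO
    hair-to-blood concentration ratio of 250."""
    return hair_hg / ratio

def percent_error(predicted, observed):
    """Signed percent error of the ratio-based prediction
    relative to the measured blood concentration."""
    return 100.0 * (predicted - observed) / observed
```

For an individual whose true hair-to-blood ratio is 500 rather than 250, the WHO conversion overestimates blood Hg by 100%, illustrating the individual-level error range reported above.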

  20. Assessment of Mental, Emotional and Physical Stress through Analysis of Physiological Signals Using Smartphones.

    PubMed

    Mohino-Herranz, Inma; Gil-Pita, Roberto; Ferreira, Javier; Rosa-Zurera, Manuel; Seoane, Fernando

    2015-10-08

    Determining the stress level of a subject in real time could be of special interest in certain professional activities to allow the monitoring of soldiers, pilots, emergency personnel and other professionals responsible for human lives. Assessment of current mental fitness for executing a task at hand might avoid unnecessary risks. To obtain this knowledge, two physiological measurements were recorded in this work using customized non-invasive wearable instrumentation that measures electrocardiogram (ECG) and thoracic electrical bioimpedance (TEB) signals. The relevant information from each measurement is extracted via evaluation of a reduced set of selected features. These features are primarily obtained from filtered and processed versions of the raw time measurements with calculations of certain statistical and descriptive parameters. Selection of the reduced set of features was performed using genetic algorithms, thus constraining the computational cost of the real-time implementation. Different classification approaches have been studied, but neural networks were chosen for this investigation because they represent a good tradeoff between the intelligence of the solution and computational complexity. Three different application scenarios were considered. In the first scenario, the proposed system is capable of distinguishing among different types of activity with a 21.2% probability error, for activities coded as neutral, emotional, mental and physical. In the second scenario, the proposed solution distinguishes among the three different emotional states of neutral, sadness and disgust, with a probability error of 4.8%. In the third scenario, the system is able to distinguish between low mental load and mental overload with a probability error of 32.3%. The computational cost was calculated, and the solution was implemented in commercially available Android-based smartphones. 
The results indicate that the computational load of executing such a monitoring solution is negligible compared to the nominal computational load of current smartphones.

  1. Augmented Reality Using Transurethral Ultrasound for Laparoscopic Radical Prostatectomy: Preclinical Evaluation.

    PubMed

    Lanchon, Cecilia; Custillon, Guillaume; Moreau-Gaudry, Alexandre; Descotes, Jean-Luc; Long, Jean-Alexandre; Fiard, Gaelle; Voros, Sandrine

    2016-07-01

    To guide the surgeon during laparoscopic or robot-assisted radical prostatectomy, an innovative laparoscopic/ultrasound fusion platform was developed using a motorized 3-dimensional transurethral ultrasound probe. We present what is to our knowledge the first preclinical evaluation of 3-dimensional prostate visualization using transurethral ultrasound and the preliminary results of this new augmented reality. The transurethral probe and laparoscopic/ultrasound registration were tested on realistic prostate phantoms made of standard polyvinyl chloride. The quality of transurethral ultrasound images and the detection of passive markers placed on the prostate surface were evaluated on 2-dimensional dynamic views and 3-dimensional reconstructions. The feasibility, precision and reproducibility of laparoscopic/transurethral ultrasound registration were then determined using 4, 5, 6 and 7 markers to assess the optimal number needed. The root mean square error was calculated for each registration, and the median root mean square error and IQR were calculated according to the number of markers. The transurethral ultrasound probe was easy to manipulate and the prostatic capsule was well visualized in 2 and 3 dimensions. Passive markers could be precisely localized in the volume. Laparoscopic/transurethral ultrasound registration procedures were performed on 74 phantoms of various sizes and shapes. All were successful. The median root mean square error of 1.1 mm (IQR 0.8-1.4) was significantly associated with the number of landmarks (p = 0.001). The highest accuracy was achieved using 6 markers. However, prostate volume did not affect registration precision. Transurethral ultrasound provided high quality prostate reconstruction and easy marker detection. Laparoscopic/ultrasound registration was successful, with acceptable millimeter precision. Further investigations are necessary to achieve submillimeter accuracy and to assess feasibility in a human model.
Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
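
    As a concrete sketch of the registration metric reported above, the root mean square error over matched marker positions can be computed as follows; the coordinates are hypothetical, not data from the study:

```python
import math

def rms_registration_error(fixed, registered):
    """Root-mean-square distance (mm) between matched marker positions."""
    if len(fixed) != len(registered):
        raise ValueError("marker lists must be the same length")
    squared = [sum((f - r) ** 2 for f, r in zip(fp, rp))
               for fp, rp in zip(fixed, registered)]
    return math.sqrt(sum(squared) / len(squared))

# Hypothetical marker coordinates (mm) after registration, not study data
laparoscopic = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
ultrasound = [(1.0, 0.0, 0.0), (10.0, 1.0, 0.0)]
print(rms_registration_error(laparoscopic, ultrasound))  # → 1.0
```

    The median and IQR reported in the abstract would then be taken over one such RMS value per registration procedure.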

  2. Kinetics and thermodynamics of exonuclease-deficient DNA polymerases

    NASA Astrophysics Data System (ADS)

    Gaspard, Pierre

    2016-04-01

    A kinetic theory is developed for exonuclease-deficient DNA polymerases, based on the experimental observation that the rates depend not only on the newly incorporated nucleotide, but also on the previous one, leading to the growth of Markovian DNA sequences from a Bernoullian template. The dependencies on nucleotide concentrations and template sequence are explicitly taken into account. In this framework, the kinetic and thermodynamic properties of DNA replication, in particular, the mean growth velocity, the error probability, and the entropy production are calculated analytically in terms of the rate constants and the concentrations. Theory is compared with numerical simulations for the DNA polymerases of T7 viruses and human mitochondria.
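
    As a toy illustration (not the paper's analytical kinetic theory), a copy whose misincorporation probability depends on whether the previous incorporation was erroneous turns a Bernoullian template into a Markovian error sequence:

```python
import random

def replicate_errors(n_sites, p_err_after_ok, p_err_after_err, seed=1):
    """Simulate which sites of an n-site copy are misincorporated when the
    error probability depends on the previous incorporation (Markovian copy)."""
    rng = random.Random(seed)
    errors, prev_err = [], False
    for _ in range(n_sites):
        p = p_err_after_err if prev_err else p_err_after_ok
        err = rng.random() < p
        errors.append(err)
        prev_err = err
    return errors

def error_probability(errors):
    """Empirical per-site error probability of the copy."""
    return sum(errors) / len(errors)
```

    With p_err_after_err > p_err_after_ok, errors cluster along the copy, the qualitative signature of the neighbor-dependent rates discussed above.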

  3. IMRT QA: Selecting gamma criteria based on error detection sensitivity.

    PubMed

    Steers, Jennifer M; Fraass, Benedick A

    2016-04-01

    The gamma comparison is widely used to evaluate the agreement between measurements and treatment planning system calculations in patient-specific intensity modulated radiation therapy (IMRT) quality assurance (QA). However, recent publications have raised concerns about the lack of sensitivity when employing commonly used gamma criteria. Understanding the actual sensitivity of a wide range of different gamma criteria may allow the definition of more meaningful gamma criteria and tolerance limits in IMRT QA. We present a method that allows the quantitative determination of gamma criteria sensitivity to induced errors which can be applied to any unique combination of device, delivery technique, and software utilized in a specific clinic. A total of 21 DMLC IMRT QA measurements (ArcCHECK®, Sun Nuclear) were compared to QA plan calculations with induced errors. Three scenarios were studied: MU errors, multi-leaf collimator (MLC) errors, and the sensitivity of the gamma comparison to changes in penumbra width. Gamma comparisons were performed between measurements and error-induced calculations using a wide range of gamma criteria, resulting in a total of over 20 000 gamma comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using 36 different gamma criteria. This study demonstrates that systematic errors and case-specific errors can be detected by the error curve analysis. Depending on the location of the error curve peak (e.g., not centered about zero), 3%/3 mm, threshold = 10%, at 90% pixels passing may miss errors as large as 15% MU errors and ±1 cm random MLC errors for some cases. As the dose threshold parameter was increased for a given %Diff/distance-to-agreement (DTA) setting, error sensitivity was increased by up to a factor of two for select cases.
This increased sensitivity with increasing dose threshold was consistent across all studied combinations of %Diff/DTA. Criteria such as 2%/3 mm and 3%/2 mm with a 50% threshold at 90% pixels passing are shown to be more appropriately sensitive without being overly strict. However, a broadening of the penumbra by as much as 5 mm in the beam configuration was difficult to detect with commonly used criteria, as well as with the previously mentioned criteria utilizing a threshold of 50%. We have introduced the error curve method, an analysis technique which allows the quantitative determination of gamma criteria sensitivity to induced errors. The application of the error curve method using DMLC IMRT plans measured on the ArcCHECK® device demonstrated that large errors can potentially be missed in IMRT QA with commonly used gamma criteria (e.g., 3%/3 mm, threshold = 10%, 90% pixels passing). Additionally, increasing the dose threshold value can offer dramatic increases in error sensitivity. This approach may allow the selection of more meaningful gamma criteria for IMRT QA and is straightforward to apply to other combinations of devices and treatment techniques.
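
    A minimal one-dimensional sketch of the gamma comparison behind these error curves, with a global %Diff criterion, a DTA, and a low-dose threshold; the dose profile below is invented for illustration:

```python
import math

def gamma_pass_rate(measured, calculated, spacing, dd_pct, dta, thresh_pct):
    """1-D gamma passing rate: global %Diff criterion dd_pct, DTA in the same
    units as spacing, and a low-dose threshold as a percent of max dose."""
    dmax = max(measured)
    dd = dd_pct / 100.0 * dmax
    thresh = thresh_pct / 100.0 * dmax
    passed = total = 0
    for i, m in enumerate(measured):
        if m < thresh:
            continue  # below-threshold points are excluded from the comparison
        total += 1
        gamma = min(
            math.sqrt(((m - c) / dd) ** 2 + ((i - j) * spacing / dta) ** 2)
            for j, c in enumerate(calculated))
        if gamma <= 1.0:
            passed += 1
    return passed / total

measured = [10.0, 30.0, 80.0, 100.0, 80.0, 30.0, 10.0]   # made-up profile
scaled = [m * 1.05 for m in measured]                     # 5% MU-like error
print(gamma_pass_rate(measured, scaled, 1.0, 3.0, 3.0, 10.0))
```

    Sweeping the induced error magnitude and plotting the passing rate against it yields one point per magnitude on an error curve of the kind described above.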

  4. GUM Analysis for SIMS Isotopic Ratios in BEP0 Graphite Qualification Samples, Round 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerlach, David C.; Heasler, Patrick G.; Reid, Bruce D.

    2009-01-01

    This report describes GUM calculations for TIMS and SIMS isotopic ratio measurements of reactor graphite samples. These isotopic ratios are used to estimate reactor burn-up, and currently consist of various ratios of U, Pu, and boron impurities in the graphite samples. The GUM calculation is a propagation-of-error methodology that assigns uncertainties (in the form of standard errors and confidence bounds) to the final estimates.
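
    The propagation-of-error step for an isotopic ratio R = A/B with uncorrelated inputs can be sketched as follows (the counts and uncertainties are illustrative, not values from the report):

```python
import math

def ratio_uncertainty(a, u_a, b, u_b, k=2.0):
    """First-order GUM propagation for R = a/b with uncorrelated inputs;
    returns the ratio, its standard uncertainty, and the expanded interval
    for coverage factor k."""
    r = a / b
    u_r = abs(r) * math.sqrt((u_a / a) ** 2 + (u_b / b) ** 2)
    return r, u_r, (r - k * u_r, r + k * u_r)

# Hypothetical ion-count signals (arbitrary units) and standard uncertainties
r, u_r, bounds = ratio_uncertainty(10.0, 0.1, 5.0, 0.05)
print(r, u_r, bounds)
```

    Correlated inputs would add a covariance term to the sum under the square root, per the GUM's general law of propagation.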

  5. Apparatus, Method and Program Storage Device for Determining High-Energy Neutron/Ion Transport to a Target of Interest

    NASA Technical Reports Server (NTRS)

    Wilson, John W. (Inventor); Tripathi, Ram K. (Inventor); Cucinotta, Francis A. (Inventor); Badavi, Francis F. (Inventor)

    2012-01-01

    An apparatus, method and program storage device for determining high-energy neutron/ion transport to a target of interest. Boundaries are defined for calculation of a high-energy neutron/ion transport to a target of interest; the high-energy neutron/ion transport to the target of interest is calculated using numerical procedures selected to reduce local truncation error by including higher order terms and to allow absolute control of propagated error by ensuring truncation error is third order in step size, and using scaling procedures for flux coupling terms modified to improve computed results by adding a scaling factor to terms describing production of j-particles from collisions of k-particles; and the calculated high-energy neutron/ion transport is provided to modeling modules to control an effective radiation dose at the target of interest.

  6. Sensitivity of thermal inertia calculations to variations in environmental factors. [in mapping of Earth's surface by remote sensing

    NASA Technical Reports Server (NTRS)

    Kahle, A. B.; Alley, R. E.; Schieldge, J. P.

    1984-01-01

    The sensitivity of thermal inertia (TI) calculations to errors in the measurement or parameterization of a number of environmental factors is considered here. The factors include effects of radiative transfer in the atmosphere, surface albedo and emissivity, variations in surface turbulent heat flux density, cloud cover, vegetative cover, and topography. The error analysis is based upon data from the Heat Capacity Mapping Mission (HCMM) satellite for July 1978 at three separate test sites in the deserts of the western United States. Results show that typical errors in atmospheric radiative transfer, cloud cover, and vegetative cover can individually cause root-mean-square (RMS) errors of about 10 percent (with atmospheric effects sometimes as large as 30-40 percent) in HCMM-derived thermal inertia images of 20,000-200,000 pixels.

  7. Methodology for Computational Fluid Dynamic Validation for Medical Use: Application to Intracranial Aneurysm.

    PubMed

    Paliwal, Nikhil; Damiano, Robert J; Varble, Nicole A; Tutino, Vincent M; Dou, Zhongwang; Siddiqui, Adnan H; Meng, Hui

    2017-12-01

    Computational fluid dynamics (CFD) is a promising tool to aid in clinical diagnoses of cardiovascular diseases. However, it uses assumptions that simplify the complexities of the real cardiovascular flow. Due to the high stakes of the clinical setting, it is critical to quantify the effect of these assumptions on the CFD simulation results. However, existing CFD validation approaches do not quantify error in the simulation results due to the CFD solver's modeling assumptions. Instead, they directly compare CFD simulation results against validation data. Thus, to quantify the accuracy of a CFD solver, we developed a validation methodology that calculates the CFD model error (arising from modeling assumptions). Our methodology identifies independent error sources in CFD and validation experiments, and calculates the model error by parsing out other sources of error inherent in simulation and experiments. To demonstrate the method, we simulated the flow field of a patient-specific intracranial aneurysm (IA) in the commercial CFD software STAR-CCM+. Particle image velocimetry (PIV) provided validation datasets for the flow field on two orthogonal planes. The average model error in the STAR-CCM+ solver was 5.63 ± 5.49% along the intersecting validation line of the orthogonal planes. Furthermore, we demonstrated that our validation method is superior to existing validation approaches by applying three representative existing validation techniques to our CFD and experimental dataset, and comparing the validation results. Our validation methodology offers a streamlined workflow to extract the "true" accuracy of a CFD solver.
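
    The idea of parsing the model error out from the other error sources can be sketched in the spirit of an ASME V&V 20-style comparison (this is not the authors' exact procedure, and all numbers below are invented):

```python
import math

def model_error_interval(sim, exp, u_num, u_input, u_exp, k=2.0):
    """V&V-20-style bound: comparison error E = S - D plus validation
    uncertainty u_val from numerical, input, and experimental sources; the
    model error is estimated to lie within E +/- k*u_val."""
    e = sim - exp
    u_val = math.sqrt(u_num ** 2 + u_input ** 2 + u_exp ** 2)
    return e - k * u_val, e + k * u_val

# Illustrative numbers: simulated vs PIV-measured velocity (m/s)
print(model_error_interval(1.1, 1.0, u_num=0.02, u_input=0.0, u_exp=0.03))
```

    The interval isolates the modeling-assumption error from numerical and experimental uncertainty rather than lumping them into one raw difference.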

  8. Why do adult dogs (Canis familiaris) commit the A-not-B search error?

    PubMed

    Sümegi, Zsófia; Kis, Anna; Miklósi, Ádám; Topál, József

    2014-02-01

    It has been recently reported that adult domestic dogs, like human infants, tend to commit perseverative search errors; that is, they select the previously rewarded empty location in the Piagetian A-not-B search task because of the experimenter's ostensive communicative cues. There is, however, an ongoing debate over whether these findings reveal that dogs can use human ostensive referential communication as a source of information or whether the phenomenon can be accounted for by "more simple" explanations like insufficient attention and learning based on local enhancement. In 2 experiments the authors systematically manipulated the type of human cueing (communicative or noncommunicative) adjacent to the A hiding place during both the A and B trials. Results highlight 3 important aspects of the dogs' A-not-B error: (a) search errors are influenced to a certain extent by dogs' motivation to retrieve the toy object; (b) human communicative and noncommunicative signals have different error-inducing effects; and (c) communicative signals presented at the A hiding place during the B trials but not during the A trials play a crucial role in inducing the A-not-B error, and it can be induced even without demonstrating repeated hiding events at location A. These findings further confirm the notion that perseverative search error, at least partially, reflects a "ready-to-obey" attitude in the dog rather than insufficient attention and/or working memory.

  9. DNA/RNA transverse current sequencing: intrinsic structural noise from neighboring bases

    PubMed Central

    Alvarez, Jose R.; Skachkov, Dmitry; Massey, Steven E.; Kalitsov, Alan; Velev, Julian P.

    2015-01-01

    Nanopore DNA sequencing via transverse current has emerged as a promising candidate for third-generation sequencing technology. It produces long read lengths which could alleviate problems with assembly errors inherent in current technologies. However, the high error rates of nanopore sequencing have to be addressed. A very important source of the error is the intrinsic noise in the current arising from carrier dispersion along the chain of the molecule, i.e., from the influence of neighboring bases. In this work we perform calculations of the transverse current within an effective multi-orbital tight-binding model derived from first-principles calculations of the DNA/RNA molecules, to study the effect of this structural noise on the error rates in DNA/RNA sequencing via transverse current in nanopores. We demonstrate that a statistical technique, utilizing not only the currents through the nucleotides but also the correlations in the currents, can in principle reduce the error rate below any desired precision. PMID:26150827

  10. Common medial frontal mechanisms of adaptive control in humans and rodents

    PubMed Central

    Frank, Michael J.; Laubach, Mark

    2013-01-01

    In this report, we describe how common brain networks within the medial frontal cortex facilitate adaptive behavioral control in rodents and humans. We demonstrate that low frequency oscillations below 12 Hz are dramatically modulated after errors in humans over mid-frontal cortex and in rats within prelimbic and anterior cingulate regions of medial frontal cortex. These oscillations were phase-locked between medial frontal cortex and motor areas in both rats and humans. In rats, single neurons that encoded prior behavioral outcomes were phase-coherent with low-frequency field oscillations particularly after errors. Inactivating medial frontal regions in rats led to impaired behavioral adjustments after errors, eliminated the differential expression of low frequency oscillations after errors, and increased low-frequency spike-field coupling within motor cortex. Our results describe a novel mechanism for behavioral adaptation via low-frequency oscillations and elucidate how medial frontal networks synchronize brain activity to guide performance. PMID:24141310

  11. #2 - An Empirical Assessment of Exposure Measurement Error ...

    EPA Pesticide Factsheets

    Background: (1) differing degrees of exposure error across pollutants; (2) previous focus on quantifying and accounting for exposure error in single-pollutant models; (3) this work examines exposure errors for multiple pollutants and provides insights on the potential for bias and attenuation of effect estimates in single- and bi-pollutant epidemiological models. The National Exposure Research Laboratory (NERL) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of the EPA's mission to protect human health and the environment. HEASD's research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of the EPA's strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between, and characterize processes that link, source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.

  12. Recent advances in electronic structure theory and their influence on the accuracy of ab initio potential energy surfaces

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    Recent advances in electronic structure theory and the availability of high speed vector processors have substantially increased the accuracy of ab initio potential energy surfaces. The recently developed atomic natural orbital approach for basis set contraction has reduced both the basis set incompleteness and superposition errors in molecular calculations. Furthermore, full CI calculations can often be used to calibrate a CASSCF/MRCI approach that quantitatively accounts for the valence correlation energy. These computational advances also provide a vehicle for systematically improving the calculations and for estimating the residual error in the calculations. Calculations on selected diatomic and triatomic systems will be used to illustrate the accuracy that currently can be achieved for molecular systems. In particular, the F + H2 yields HF + H potential energy hypersurface is used to illustrate the impact of these computational advances on the calculation of potential energy surfaces.

  13. Recent advances in electronic structure theory and their influence on the accuracy of ab initio potential energy surfaces

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1988-01-01

    Recent advances in electronic structure theory and the availability of high speed vector processors have substantially increased the accuracy of ab initio potential energy surfaces. The recently developed atomic natural orbital approach for basis set contraction has reduced both the basis set incompleteness and superposition errors in molecular calculations. Furthermore, full CI calculations can often be used to calibrate a CASSCF/MRCI approach that quantitatively accounts for the valence correlation energy. These computational advances also provide a vehicle for systematically improving the calculations and for estimating the residual error in the calculations. Calculations on selected diatomic and triatomic systems will be used to illustrate the accuracy that currently can be achieved for molecular systems. In particular, the F+H2 yields HF+H potential energy hypersurface is used to illustrate the impact of these computational advances on the calculation of potential energy surfaces.

  14. Effect of correlated observation error on parameters, predictions, and uncertainty

    USGS Publications Warehouse

    Tiedeman, Claire; Green, Christopher T.

    2013-01-01

    Correlations among observation errors are typically omitted when calculating observation weights for model calibration by inverse methods. We explore the effects of omitting these correlations on estimates of parameters, predictions, and uncertainties. First, we develop a new analytical expression for the difference in parameter variance estimated with and without error correlations for a simple one-parameter two-observation inverse model. Results indicate that omitting error correlations from both the weight matrix and the variance calculation can either increase or decrease the parameter variance, depending on the values of error correlation (ρ) and the ratio of dimensionless scaled sensitivities (rdss). For small ρ, the difference in variance is always small, but for large ρ, the difference varies widely depending on the sign and magnitude of rdss. Next, we consider a groundwater reactive transport model of denitrification with four parameters and correlated geochemical observation errors that are computed by an error-propagation approach that is new for hydrogeologic studies. We compare parameter estimates, predictions, and uncertainties obtained with and without the error correlations. Omitting the correlations modestly to substantially changes parameter estimates, and causes both increases and decreases of parameter variances, consistent with the analytical expression. Differences in predictions for the models calibrated with and without error correlations can be greater than parameter differences when both are considered relative to their respective confidence intervals. These results indicate that including observation error correlations in weighting for nonlinear regression can have important effects on parameter estimates, predictions, and their respective uncertainties.
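
    For the one-parameter, two-observation case analyzed above, the generalized least squares parameter variance can be computed directly from the error covariance; with equal sensitivities, ignoring a positive correlation understates the variance (x1, x2 are the sensitivities; all values are illustrative):

```python
def param_variance(x1, x2, sigma, rho):
    """GLS variance of a single parameter estimated from two observations whose
    errors have covariance C = sigma^2 * [[1, rho], [rho, 1]]:
    var = (x^T C^{-1} x)^{-1}."""
    det = sigma ** 4 * (1.0 - rho ** 2)
    w11 = sigma ** 2 / det            # diagonal entries of C^{-1}
    w12 = -rho * sigma ** 2 / det     # off-diagonal entry of C^{-1}
    xtwx = x1 * x1 * w11 + 2.0 * x1 * x2 * w12 + x2 * x2 * w11
    return 1.0 / xtwx

# Equal sensitivities: omitting a positive correlation understates the variance
print(param_variance(1.0, 1.0, 1.0, 0.0))   # → 0.5
print(param_variance(1.0, 1.0, 1.0, 0.9))   # ≈ 0.95
```

    As the abstract notes, for other sensitivity ratios the with-correlation variance can be either larger or smaller than the uncorrelated value, so the sign of the effect depends on rho and on x2/x1.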

  15. A Novel Two-Compartment Model for Calculating Bone Volume Fractions and Bone Mineral Densities From Computed Tomography Images.

    PubMed

    Lin, Hsin-Hon; Peng, Shin-Lei; Wu, Jay; Shih, Tian-Yu; Chuang, Keh-Shih; Shih, Cheng-Ting

    2017-05-01

    Osteoporosis is a disease characterized by a degradation of bone structures. Various methods have been developed to diagnose osteoporosis by measuring the bone mineral density (BMD) of patients. However, BMDs from these methods were not equivalent and were incomparable. In addition, the partial volume effect introduces errors in estimating bone volume from computed tomography (CT) images using image segmentation. In this study, a two-compartment model (TCM) was proposed to calculate bone volume fraction (BV/TV) and BMD from CT images. The TCM considers bones to be composed of two sub-materials. Various equivalent BV/TV and BMD values can be calculated by applying corresponding sub-material pairs in the TCM. In contrast to image segmentation, the TCM prevented the influence of the partial volume effect by calculating the volume percentage of sub-material in each image voxel. Validations of the TCM were performed using bone-equivalent uniform phantoms, a 3D-printed trabecular-structural phantom, a temporal bone flap, and abdominal CT images. By using the TCM, the calculated BV/TVs of the uniform phantoms were within percent errors of ±2%; the percent errors of the structural volumes with various CT slice thicknesses were below 9%; the volume of the temporal bone flap was close to that from micro-CT images with a percent error of 4.1%. No significant difference (p > 0.01) was found between the areal BMD of lumbar vertebrae calculated using the TCM and measured using dual-energy X-ray absorptiometry. In conclusion, the proposed TCM could be applied to diagnose osteoporosis, while providing a basis for comparing various measurement methods.
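
    The two-compartment idea can be sketched as a linear HU mixing model per voxel; the linear mixing and the HU values below are assumptions of this sketch, not the paper's calibration:

```python
def bone_fraction(hu_voxel, hu_bone, hu_marrow):
    """Volume fraction of the 'bone' sub-material in one voxel, assuming the
    voxel HU is a linear mix of the two sub-material HU values."""
    f = (hu_voxel - hu_marrow) / (hu_bone - hu_marrow)
    return min(1.0, max(0.0, f))  # clip: pure marrow -> 0, pure bone -> 1

def bv_tv(hu_values, hu_bone, hu_marrow):
    """Bone volume fraction (BV/TV) averaged over a region of voxels."""
    fracs = [bone_fraction(h, hu_bone, hu_marrow) for h in hu_values]
    return sum(fracs) / len(fracs)

# Hypothetical HU values: marrow-equivalent 0 HU, bone-equivalent 1000 HU
print(bone_fraction(500.0, 1000.0, 0.0))                  # → 0.5
print(bv_tv([0.0, 250.0, 750.0, 1000.0], 1000.0, 0.0))    # → 0.5
```

    Because each voxel contributes a fractional volume rather than a binary label, a partial-volume voxel is handled gracefully instead of being forced into "bone" or "not bone" by a segmentation threshold.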

  16. Cutting the Cord: Discrimination and Command Responsibility in Autonomous Lethal Weapons

    DTIC Science & Technology

    2014-02-13

    machine responses to identical stimuli, and it was the job of a third party human "witness" to determine which participant was man and which was ... machines may be error free, but there are potential benefits to be gained through autonomy if machines can meet or exceed human performance in ... lieu of human operators and reap the benefits that autonomy provides. Human and Machine Error: It would be foolish to assert that either humans

  17. Density Functional Theory Calculation of pKa's of Thiols in Aqueous Solution Using Explicit Water Molecules and the Polarizable Continuum Model.

    PubMed

    Thapa, Bishnu; Schlegel, H Bernhard

    2016-07-21

    The pKa's of substituted thiols are important for understanding their properties and reactivities in applications in chemistry, biochemistry, and materials chemistry. For a collection of 175 different density functionals and the SMD implicit solvation model, the average errors in the calculated pKa's of methanethiol and ethanethiol are almost 10 pKa units higher than for imidazole. A test set of 45 substituted thiols with pKa's ranging from 4 to 12 has been used to assess the performance of 8 functionals with 3 different basis sets. As expected, the basis set needs to include polarization functions on the hydrogens and diffuse functions on the heavy atoms. Solvent cavity scaling was ineffective in correcting the errors in the calculated pKa's. Inclusion of an explicit water molecule that is hydrogen bonded with the H of the thiol group (in neutral) or S(-) (in thiolates) lowers the error by an average of 3.5 pKa units. With one explicit water and the SMD solvation model, pKa's calculated with the M06-2X, PBEPBE, BP86, and LC-BLYP functionals are found to deviate from the experimental values by about 1.5-2.0 pKa units whereas pKa's with the B3LYP, ωB97XD and PBEVWN5 functionals are still in error by more than 3 pKa units. The inclusion of three explicit water molecules lowers the calculated pKa further by about 4.5 pKa units. With the B3LYP and ωB97XD functionals, the calculated pKa's are within one unit of the experimental values whereas most other functionals used in this study underestimate the pKa's. This study shows that the ωB97XD functional with the 6-31+G(d,p) and 6-311++G(d,p) basis sets, and the SMD solvation model with three explicit water molecules hydrogen bonded to the sulfur produces the best result for the test set (average error -0.11 ± 0.50 and +0.15 ± 0.58, respectively). The B3LYP functional also performs well (average error -1.11 ± 0.82 and -0.78 ± 0.79, respectively).
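
    The conversion from a computed aqueous deprotonation free energy to a pKa, common to studies of this kind, is a one-line formula (the example ΔG below is hypothetical, not a value from the paper):

```python
import math

R_KCAL = 1.9872041e-3  # gas constant, kcal/(mol*K)

def pka_from_dg(dg_kcal, temp=298.15):
    """pKa from a computed aqueous deprotonation free energy (kcal/mol):
    pKa = dG / (RT ln 10)."""
    return dg_kcal / (R_KCAL * temp * math.log(10.0))

# Hypothetical deprotonation free energy for a thiol
print(round(pka_from_dg(13.0), 2))  # ≈ 9.53
```

    Since RT ln 10 is about 1.36 kcal/mol at 298 K, a ~1.4 kcal/mol error in ΔG shifts the calculated pKa by roughly one unit, which is why the functional and explicit-water choices above matter so much.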

  18. Verification of calculated skin doses in postmastectomy helical tomotherapy.

    PubMed

    Ito, Shima; Parker, Brent C; Levine, Renee; Sanders, Mary Ella; Fontenot, Jonas; Gibbons, John; Hogstrom, Kenneth

    2011-10-01

    To verify the accuracy of calculated skin doses in helical tomotherapy for postmastectomy radiation therapy (PMRT). In vivo thermoluminescent dosimeters (TLDs) were used to measure the skin dose at multiple points in each of 14 patients throughout the course of treatment on a TomoTherapy Hi·Art II system, for a total of 420 TLD measurements. Five patients were evaluated near the location of the mastectomy scar, whereas 9 patients were evaluated throughout the treatment volume. The measured dose at each location was compared with calculations from the treatment planning system. The mean difference and standard error of the mean difference between measurement and calculation for the scar measurements was -1.8% ± 0.2% (standard deviation [SD], 4.3%; range, -11.1% to 10.6%). The mean difference and standard error of the mean difference between measurement and calculation for measurements throughout the treatment volume was -3.0% ± 0.4% (SD, 4.7%; range, -18.4% to 12.6%). The mean difference and standard error of the mean difference between measurement and calculation for all measurements was -2.1% ± 0.2% (SD, 4.5%; range, -18.4% to 12.6%). The mean difference between measured and calculated TLD doses was statistically significant at two standard deviations of the mean, but was not clinically significant (i.e., was <5%). However, 23% of the measured TLD doses differed from the calculated TLD doses by more than 5%. The mean of the measured TLD doses agreed with TomoTherapy calculated TLD doses within our clinical criterion of 5%. Copyright © 2011 Elsevier Inc. All rights reserved.
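
    The summary statistics used above (mean percent difference, SD, and standard error of the mean) can be sketched as follows; the TLD readings below are invented:

```python
import math
import statistics

def dose_diff_stats(measured, calculated):
    """Percent differences of measured vs calculated dose, summarized as the
    mean, standard deviation, and standard error of the mean."""
    diffs = [100.0 * (m - c) / c for m, c in zip(measured, calculated)]
    mean = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return mean, sd, sd / math.sqrt(len(diffs))

# Hypothetical TLD readings vs planning-system doses (cGy)
mean, sd, se = dose_diff_stats([98.0, 102.0, 95.0, 105.0], [100.0] * 4)
print(mean, round(sd, 2), round(se, 2))
```

    A clinical criterion like the 5% threshold above would then be applied to the mean and to the fraction of individual differences exceeding it.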

  19. Validation of Calculations in a Digital Thermometer Firmware

    NASA Astrophysics Data System (ADS)

    Batagelj, V.; Miklavec, A.; Bojkovski, J.

    2014-04-01

    State-of-the-art digital thermometers are arguably remarkable measurement instruments, measuring outputs from resistance thermometers and/or thermocouples. Not only can they readily achieve measuring accuracies in the parts-per-million range, but they also incorporate sophisticated algorithms for converting the measured resistance or voltage to temperature. These algorithms often include high-order polynomials, exponentials, and logarithms, and must be performed using both standard coefficients and particular calibration coefficients. The numerical accuracy of these calculations and the associated uncertainty component must be much better than the accuracy of the raw measurement in order to be negligible in the total measurement uncertainty. In order for the end-user to gain confidence in these calculations, as well as to conform to the formal requirements of ISO/IEC 17025 and other standards, a way of validating the numerical procedures performed in the firmware of the instrument is required. A software architecture which allows a simple validation of internal measuring instrument calculations is suggested. The digital thermometer should be able to expose all its internal calculation functions to the communication interface, so the end-user can compare the results of the internal measuring instrument calculation with reference results. The method can be regarded as a variation of black-box software validation. Validation results on a thermometer prototype with implemented validation ability show that the calculation error of basic arithmetic operations is within the expected rounding error. For conversion functions, the calculation error is at least ten times smaller than the thermometer's effective resolution for the particular probe type.
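
    A black-box validation of a firmware conversion function can be sketched by comparing the instrument's reported values against an independent reference implementation. Here a Callendar-Van Dusen Pt100 inversion (IEC 60751 coefficients, T >= 0 °C branch) stands in for the reference, and `device_fn` is a hypothetical handle to the instrument's exposed calculation:

```python
def cvd_temp_from_resistance(r, r0=100.0, a=3.9083e-3, b=-5.775e-7):
    """Reference Callendar-Van Dusen inversion for T >= 0 degC (IEC 60751):
    solve r = r0*(1 + a*t + b*t^2) for t, taking the physical root."""
    disc = a * a - 4.0 * b * (1.0 - r / r0)
    return (-a + disc ** 0.5) / (2.0 * b)

def validate(device_fn, resistances, tol_mk=1.0):
    """Black-box check: device conversion vs reference, tolerance in mK."""
    return all(abs(device_fn(r) - cvd_temp_from_resistance(r)) * 1000.0 <= tol_mk
               for r in resistances)

# Sanity check of the reference itself: R(100 degC) = 138.5055 ohm for Pt100
print(round(cvd_temp_from_resistance(138.5055), 3))  # ≈ 100.0
```

    In the architecture described above, `device_fn` would query the thermometer's exposed calculation function over the communication interface and the comparison would run over a grid of resistances covering the probe's range.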

  20. Human error identification for laparoscopic surgery: Development of a motion economy perspective.

    PubMed

    Al-Hakim, Latif; Sevdalis, Nick; Maiping, Tanaphon; Watanachote, Damrongpan; Sengupta, Shomik; Dissaranan, Charuspong

    2015-09-01

    This study postulates that traditional human error identification techniques fail to consider motion economy principles and, accordingly, their applicability in operating theatres may be limited. This study addresses this gap in the literature with a dual aim. First, it identifies the principles of motion economy that suit the operative environment and second, it develops a new error mode taxonomy for human error identification techniques which recognises motion economy deficiencies affecting the performance of surgeons and predisposing them to errors. A total of 30 principles of motion economy were developed and categorised into five areas. A hierarchical task analysis was used to break down main tasks of a urological laparoscopic surgery (hand-assisted laparoscopic nephrectomy) to their elements and the new taxonomy was used to identify errors and their root causes resulting from violation of motion economy principles. The approach was prospectively tested in 12 observed laparoscopic surgeries performed by 5 experienced surgeons. A total of 86 errors were identified and linked to the motion economy deficiencies. Results indicate the developed methodology is promising. Our methodology allows error prevention in surgery and the developed set of motion economy principles could be useful for training surgeons on motion economy principles. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  1. Density Functional Calculations of Native Defects in CH 3 NH 3 PbI 3 : Effects of Spin–Orbit Coupling and Self-Interaction Error

    DOE PAGES

    Du, Mao-Hua

    2015-04-02

    We know that native point defects play an important role in carrier transport properties of CH3NH3PbI3. However, the nature of many important defects remains controversial due partly to the conflicting results reported by recent density functional theory (DFT) calculations. In this Letter, we show that self-interaction error and the neglect of spin–orbit coupling (SOC) in many previous DFT calculations resulted in incorrect positions of valence and conduction band edges, although their difference, which is the band gap, is in good agreement with the experimental value. Moreover, this problem has led to incorrect predictions of defect-level positions. Hybrid density functional calculations, which partially correct the self-interaction error and include the SOC, show that, among native point defects (including vacancies, interstitials, and antisites), only the iodine vacancy and its complexes induce deep electron and hole trapping levels inside of the band gap, acting as nonradiative recombination centers.

  2. Managing human error in aviation.

    PubMed

    Helmreich, R L

    1997-05-01

    Crew resource management (CRM) programs were developed to address team and leadership aspects of piloting modern airplanes. The goal is to reduce errors through team work. Human factors research and social, cognitive, and organizational psychology are used to develop programs tailored for individual airlines. Flight crews study accident case histories, group dynamics, and human error. Simulators provide pilots with the opportunity to solve complex flight problems. CRM in the simulator is called line-oriented flight training (LOFT). In automated cockpits CRM promotes the idea of automation as a crew member. Cultural aspects of aviation include professional, business, and national culture. The aviation CRM model has been adapted for training surgeons and operating room staff in human factors.

  3. Empirical Analysis of Systematic Communication Errors.

    DTIC Science & Technology

    1981-09-01

    human components in communication systems. (Systematic errors were defined to be those that occur regularly in human communication links...phase of the human communication process and focuses on the linkage between a specific piece of information (and the receiver) and the transmission...communication flow. (2) Exchange. Exchange is the next phase in human communication and entails a concerted effort on the part of the sender and receiver to share

  4. HRA Aerospace Challenges

    NASA Technical Reports Server (NTRS)

    DeMott, Diana

    2013-01-01

    Compared to equipment designed to perform the same function over and over, humans are just not as reliable. Computers and machines perform the same action in the same way repeatedly, getting the same result, unless equipment fails or a human interferes. Humans who are supposed to perform the same actions repeatedly often perform them incorrectly due to a variety of issues including: stress, fatigue, illness, lack of training, distraction, acting at the wrong time, not acting when they should, not following procedures, misinterpreting information, or inattention to detail. Why not use robots and automatic controls exclusively if human error is so common? In an emergency or off-normal situation that the computer, robotic element, or automatic control system is not designed to respond to, the result is failure unless a human can intervene. The human in the loop may be more likely to cause an error, but is also more likely to catch the error and correct it. When it comes to unexpected situations, or performing multiple tasks outside the defined mission parameters, humans are the only viable alternative. Human Reliability Assessment (HRA) identifies ways to improve human performance and reliability and can lead to improvements in systems designed to interact with humans. Understanding the context of the situations that can lead to human errors, which include taking the wrong action, taking no action, or making bad decisions, provides additional information to mitigate risks. With improved human reliability comes reduced risk for the overall operation or project.

  5. TU-F-17A-05: Calculating Tumor Trajectory and Dose-Of-The-Day for Highly Mobile Tumors Using Cone-Beam CT Projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, B; Miften, M

    2014-06-15

    Purpose: Cone-beam CT (CBCT) projection images provide anatomical data in real-time over several respiratory cycles, forming a comprehensive picture of tumor movement. We developed a method using these projections to determine the trajectory and dose of highly mobile tumors during each fraction of treatment. Methods: CBCT images of a respiration phantom were acquired, where the trajectory mimicked a lung tumor with high amplitude (2.4 cm) and hysteresis. A template-matching algorithm was used to identify the location of a steel BB in each projection. A Gaussian probability density function for tumor position was calculated which best fit the observed trajectory of the BB in the imager geometry. Two methods to improve the accuracy of tumor track reconstruction were investigated: first, using respiratory phase information to refine the trajectory estimation, and second, using the Monte Carlo method to sample the estimated Gaussian tumor position distribution. 15 clinically-drawn abdominal/lung CTV volumes were used to evaluate the accuracy of the proposed methods by comparing the known and calculated BB trajectories. Results: With all methods, the mean position of the BB was determined with accuracy better than 0.1 mm, and root-mean-square (RMS) trajectory errors were lower than 5% of marker amplitude. Use of respiratory phase information decreased RMS errors by 30%, and decreased the fraction of large errors (>3 mm) by half. Mean dose to the clinical volumes was calculated with an average error of 0.1% and average absolute error of 0.3%. Dosimetric parameters D90/D95 were determined within 0.5% of maximum dose. Monte-Carlo sampling increased RMS trajectory and dosimetric errors slightly, but prevented over-estimation of dose in trajectories with high noise. Conclusions: Tumor trajectory and dose-of-the-day were accurately calculated using CBCT projections.
This technique provides a widely-available method to evaluate highly-mobile tumors, and could facilitate better strategies to mitigate or compensate for motion during SBRT.
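    The Monte Carlo step described above, sampling an estimated Gaussian tumor-position distribution, can be sketched as follows. The mean vector and covariance are hypothetical values chosen for illustration, not data from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical fitted Gaussian for tumor displacement (mm), e.g. as
# estimated from CBCT projections; these numbers are illustrative only.
mean = np.array([2.0, -1.0, 0.5])   # mean displacement (x, y, z)
cov = np.diag([4.0, 1.0, 9.0])      # per-axis variances (mm^2)

# Monte Carlo sampling of the estimated position distribution
samples = rng.multivariate_normal(mean, cov, size=10000)

# Example summary: fraction of sampled positions more than 3 mm from the mean
dist = np.linalg.norm(samples - mean, axis=1)
frac_large = (dist > 3.0).mean()
print(samples.mean(axis=0), frac_large)
```

In a dose-of-the-day calculation, each sampled position would be used to accumulate dose along the sampled trajectory rather than at the planned position.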

  6. Identifying Human Factors Issues in Aircraft Maintenance Operations

    NASA Technical Reports Server (NTRS)

    Veinott, Elizabeth S.; Kanki, Barbara G.; Shafto, Michael G. (Technical Monitor)

    1995-01-01

    Maintenance operations incidents submitted to the Aviation Safety Reporting System (ASRS) between 1986-1992 were systematically analyzed in order to identify issues relevant to human factors and crew coordination. This exploratory analysis involved 95 ASRS reports which represented a wide range of maintenance incidents. The reports were coded and analyzed according to the type of error (e.g., wrong part, procedural error, non-procedural error), contributing factors (e.g., individual, within-team, cross-team, procedure, tools), result of the error (e.g., aircraft damage or not), as well as the operational impact (e.g., aircraft flown to destination, air return, delay at gate). The main findings indicate that procedural errors were most common (48.4%) and that individual and team actions contributed to the errors in more than 50% of the cases. As for operational results, most errors were either corrected after landing at the destination (51.6%) or required the flight crew to stop enroute (29.5%). Interactions among these variables are also discussed. This analysis is a first step toward developing a taxonomy of crew coordination problems in maintenance. By understanding which variables are important and how they are interrelated, we may develop intervention strategies that are better tailored to the human factors issues involved.

  7. Managing Errors to Reduce Accidents in High Consequence Networked Information Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganter, J.H.

    1999-02-01

    Computers have always helped to amplify and propagate errors made by people. The emergence of Networked Information Systems (NISs), which allow people and systems to quickly interact worldwide, has made understanding and minimizing human error more critical. This paper applies concepts from system safety to analyze how hazards (from hackers to power disruptions) penetrate NIS defenses (e.g., firewalls and operating systems) to cause accidents. Such events usually result from both active, easily identified failures and more subtle latent conditions that have resided in the system for long periods. Both active failures and latent conditions result from human errors. We classify these into several types (slips, lapses, mistakes, etc.) and provide NIS examples of how they occur. Next we examine error minimization throughout the NIS lifecycle, from design through operation to reengineering. At each stage, steps can be taken to minimize the occurrence and effects of human errors. These include defensive design philosophies, architectural patterns to guide developers, and collaborative design that incorporates operational experiences and surprises into design efforts. We conclude by looking at three aspects of NISs that will cause continuing challenges in error and accident management: immaturity of the industry, limited risk perception, and resource tradeoffs.

  8. Model Error Estimation for the CPTEC Eta Model

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; daSilva, Arlindo

    1999-01-01

    Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and to differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, under a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.
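    The abstract does not state the two proposed definitions. One plausible form of such a ratio, given purely for illustration, is the mean-squared model error as a fraction of the mean-squared total forecast error:

```latex
\tau \;=\; \frac{\langle \| e_{\mathrm{model}} \|^{2} \rangle}{\langle \| e_{\mathrm{forecast}} \|^{2} \rangle},
\qquad 0 \le \tau \le 1,
```

so that tau near 1 would indicate forecast error dominated by model imperfection rather than by initial-condition error.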

  9. Auto-tracking system for human lumbar motion analysis.

    PubMed

    Sui, Fuge; Zhang, Da; Lam, Shing Chun Benny; Zhao, Lifeng; Wang, Dongjun; Bi, Zhenggang; Hu, Yong

    2011-01-01

    Previous lumbar motion analyses suggest the usefulness of quantitatively characterizing spine motion. However, the application of such measurements is still limited by the lack of user-friendly automatic spine motion analysis systems. This paper describes an automatic analysis system to measure lumbar spine disorders that consists of a spine motion guidance device, an X-ray imaging modality to acquire digitized video fluoroscopy (DVF) sequences, and an automated tracking module with a graphical user interface (GUI). DVF sequences of the lumbar spine are recorded during flexion-extension under a guidance device. The automatic tracking software uses a particle filter to locate the vertebra-of-interest in every frame of the sequence, and the tracking result is displayed on the GUI. Kinematic parameters are also extracted from the tracking results for motion analysis. In a bone model test, the maximum fiducial error was 3.7%, and the maximum repeatability errors in translation and rotation were 1.2% and 2.6%, respectively. In our simulated DVF sequence study, automatic tracking was not successful when the noise intensity was greater than 0.50. In a noisy situation, the maximal difference was 1.3 mm in translation and 1° in rotation angle, with errors in translation (fiducial error: 2.4%, repeatability error: 0.5%) and in rotation angle (fiducial error: 1.0%, repeatability error: 0.7%). However, the automatic tracking software could successfully track simulated sequences contaminated by noise at a density ≤ 0.5 with very high accuracy, providing good reliability and robustness. Ten healthy subjects and 2 lumbar spondylolisthesis patients were enrolled in a clinical trial. Measurement with auto-tracking of DVF provided some information not seen in conventional X-ray. These results suggest the potential of the proposed system for clinical applications.
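    A particle filter of the kind mentioned above can be sketched in one dimension. The paper's tracker is template-based and two-dimensional, so everything here (motion model, noise levels, particle count) is an illustrative simplification rather than the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1D particle filter tracking a vertebra's vertical position
# across fluoroscopy frames; parameters are assumptions, not the paper's.
n_particles = 500
process_sd = 1.0   # assumed frame-to-frame motion noise (pixels)
obs_sd = 2.0       # assumed noise of the per-frame image matcher (pixels)

true_path = np.cumsum(rng.normal(0, 0.5, size=50)) + 100.0  # synthetic motion
observations = true_path + rng.normal(0, obs_sd, size=50)   # noisy detections

particles = np.full(n_particles, observations[0])
estimates = []
for z in observations:
    particles += rng.normal(0, process_sd, size=n_particles)  # predict
    w = np.exp(-0.5 * ((z - particles) / obs_sd) ** 2)        # weight by likelihood
    w /= w.sum()
    idx = rng.choice(n_particles, size=n_particles, p=w)      # resample
    particles = particles[idx]
    estimates.append(particles.mean())

rmse = np.sqrt(np.mean((np.array(estimates) - true_path) ** 2))
print(f"tracking RMSE: {rmse:.2f} px")
```

The filtered estimate should beat the raw per-frame detections because the motion model pools information across frames; that is the same reason a particle filter tolerates the image noise reported in the abstract.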

  10. Investigating mode errors on automated flight decks: illustrating the problem-driven, cumulative, and interdisciplinary nature of human factors research.

    PubMed

    Sarter, Nadine

    2008-06-01

    The goal of this article is to illustrate the problem-driven, cumulative, and highly interdisciplinary nature of human factors research by providing a brief overview of the work on mode errors on modern flight decks over the past two decades. Mode errors on modern flight decks were first reported in the late 1980s. Poor feedback, inadequate mental models of the automation, and the high degree of coupling and complexity of flight deck systems were identified as main contributors to these breakdowns in human-automation interaction. Various improvements of design, training, and procedures were proposed to address these issues. The author describes when and why the problem of mode errors surfaced, summarizes complementary research activities that helped identify and understand the contributing factors to mode errors, and describes some countermeasures that have been developed in recent years. This brief review illustrates how one particular human factors problem in the aviation domain enabled various disciplines and methodological approaches to contribute to a better understanding of, as well as provide better support for, effective human-automation coordination. Converging operations and interdisciplinary collaboration over an extended period of time are hallmarks of successful human factors research. The reported body of research can serve as a model for future research and as a teaching tool for students in this field of work.

  11. Influence of diffuse reflectance measurement accuracy on the scattering coefficient in determination of optical properties with integrating sphere optics (a secondary publication).

    PubMed

    Horibe, Takuro; Ishii, Katsunori; Fukutomi, Daichi; Awazu, Kunio

    2015-12-30

    An estimation error of the scattering coefficient of hemoglobin in the high absorption wavelength range has been observed in optical property calculations of blood-rich tissues. In this study, the relationship between the accuracy of diffuse reflectance measurement in the integrating sphere and calculated scattering coefficient was evaluated with a system to calculate optical properties combined with an integrating sphere setup and the inverse Monte Carlo simulation. Diffuse reflectance was measured with the integrating sphere using a small incident port diameter and optical properties were calculated. As a result, the estimation error of the scattering coefficient was improved by accurate measurement of diffuse reflectance. In the high absorption wavelength range, the accuracy of diffuse reflectance measurement has an effect on the calculated scattering coefficient.

  12. The Swiss cheese model of adverse event occurrence--Closing the holes.

    PubMed

    Stein, James E; Heiss, Kurt

    2015-12-01

    Traditional surgical attitude regarding error and complications has focused on individual failings. Human factors research has brought new and significant insights into the occurrence of error in healthcare, helping us identify systemic problems that injure patients while enhancing individual accountability and teamwork. This article introduces human factors science and its applicability to teamwork, surgical culture, medical error, and individual accountability. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Behind Human Error: Cognitive Systems, Computers and Hindsight

    DTIC Science & Technology

    1994-12-01

    ...evaluations; organize and/or conduct workshops and conferences. CSERIAC is a Department of Defense Information Analysis Center sponsored by the Defense... Contents include: The Error Detection Process; Neutral Observer Criteria; Error Analysis as Causal Judgment; Error as Information; A Fundamental Surprise; What is Human... (Kahnemann, 1974), and in risk analysis (Dougherty and Fragola, 1990). The discussions have continued in a wide variety of forums, including the

  14. Gravity Compensation Using EGM2008 for High-Precision Long-Term Inertial Navigation Systems

    PubMed Central

    Wu, Ruonan; Wu, Qiuping; Han, Fengtian; Liu, Tianyi; Hu, Peida; Li, Haixia

    2016-01-01

    The gravity disturbance vector is one of the major error sources in high-precision and long-term inertial navigation applications. Specific to inertial navigation systems (INSs) with high-order horizontal damping networks, analyses of the error propagation show that the gravity-induced errors exist almost exclusively in the horizontal channels and are mostly caused by deflections of the vertical (DOV). Low-frequency components of the DOV propagate into the latitude and longitude errors at a ratio of 1:1, and time-varying fluctuations in the DOV excite Schuler oscillation. This paper presents two gravity compensation methods using the Earth Gravitational Model 2008 (EGM2008), namely, interpolation from an off-line database and computing gravity vectors directly using the spherical harmonic model. Particular attention is given to the error contribution of the gravity update interval and computing time delay. It is recommended for marine navigation that a gravity vector should be calculated within 1 s and updated every 100 s at most. To meet this demand, the time required to calculate the current gravity vector using EGM2008 has been reduced to less than 1 s by optimizing the calculation procedure. A few off-line experiments were conducted using the data of a shipborne INS collected during an actual sea test. With the aid of EGM2008, most of the low-frequency components of the position errors caused by the gravity disturbance vector have been removed and the Schuler oscillation has been attenuated effectively. In rugged terrain, the horizontal position error could be reduced by at best 48.85% of its regional maximum. The experimental results match the theoretical analysis and indicate that EGM2008 is suitable for gravity compensation of high-precision and long-term INSs. PMID:27999351
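    The first compensation method above, interpolation from an off-line database, can be sketched as a bilinear lookup on a pre-computed DOV grid. The grid values below are fabricated for illustration and are not EGM2008 data:

```python
import numpy as np

def bilinear(grid, lats, lons, lat, lon):
    """Bilinearly interpolate a regular lat/lon grid at (lat, lon)."""
    i = np.clip(np.searchsorted(lats, lat) - 1, 0, len(lats) - 2)
    j = np.clip(np.searchsorted(lons, lon) - 1, 0, len(lons) - 2)
    t = (lat - lats[i]) / (lats[i + 1] - lats[i])
    u = (lon - lons[j]) / (lons[j + 1] - lons[j])
    return ((1 - t) * (1 - u) * grid[i, j] + t * (1 - u) * grid[i + 1, j]
            + (1 - t) * u * grid[i, j + 1] + t * u * grid[i + 1, j + 1])

# Made-up DOV component grid (arcsec) on a regular lat/lon raster,
# standing in for a database pre-computed from a gravity model.
lats = np.linspace(30.0, 31.0, 5)
lons = np.linspace(120.0, 121.0, 5)
xi = 2.0 * (lats[:, None] - 30.0) + 0.5 * (lons[None, :] - 120.0)

print(bilinear(xi, lats, lons, 30.3, 120.7))  # 0.95 for this planar field
```

Because the look-up is a handful of array operations, it easily meets a once-per-100-s update budget; the trade-off versus direct spherical-harmonic evaluation is storage and grid resolution.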

  15. Gravity Compensation Using EGM2008 for High-Precision Long-Term Inertial Navigation Systems.

    PubMed

    Wu, Ruonan; Wu, Qiuping; Han, Fengtian; Liu, Tianyi; Hu, Peida; Li, Haixia

    2016-12-18

    The gravity disturbance vector is one of the major error sources in high-precision and long-term inertial navigation applications. Specific to inertial navigation systems (INSs) with high-order horizontal damping networks, analyses of the error propagation show that the gravity-induced errors exist almost exclusively in the horizontal channels and are mostly caused by deflections of the vertical (DOV). Low-frequency components of the DOV propagate into the latitude and longitude errors at a ratio of 1:1, and time-varying fluctuations in the DOV excite Schuler oscillation. This paper presents two gravity compensation methods using the Earth Gravitational Model 2008 (EGM2008), namely, interpolation from an off-line database and computing gravity vectors directly using the spherical harmonic model. Particular attention is given to the error contribution of the gravity update interval and computing time delay. It is recommended for marine navigation that a gravity vector should be calculated within 1 s and updated every 100 s at most. To meet this demand, the time required to calculate the current gravity vector using EGM2008 has been reduced to less than 1 s by optimizing the calculation procedure. A few off-line experiments were conducted using the data of a shipborne INS collected during an actual sea test. With the aid of EGM2008, most of the low-frequency components of the position errors caused by the gravity disturbance vector have been removed and the Schuler oscillation has been attenuated effectively. In rugged terrain, the horizontal position error could be reduced by at best 48.85% of its regional maximum. The experimental results match the theoretical analysis and indicate that EGM2008 is suitable for gravity compensation of high-precision and long-term INSs.

  16. Garbage in, Garbage Out: Data Collection, Quality Assessment and Reporting Standards for Social Media Data Use in Health Research, Infodemiology and Digital Disease Detection

    PubMed Central

    Huang, Jidong; Emery, Sherry

    2016-01-01

    Background Social media have transformed the communications landscape. People increasingly obtain news and health information online and via social media. Social media platforms also serve as novel sources of rich observational data for health research (including infodemiology, infoveillance, and digital disease detection). While the number of studies using social data is growing rapidly, very few of these studies transparently outline their methods for collecting, filtering, and reporting those data. Keywords and search filters applied to social data form the lens through which researchers may observe what and how people communicate about a given topic. Without a properly focused lens, research conclusions may be biased or misleading. Standards of reporting data sources and quality are needed so that data scientists and consumers of social media research can evaluate and compare methods and findings across studies. Objective We aimed to develop and apply a framework of social media data collection and quality assessment and to propose a reporting standard, which researchers and reviewers may use to evaluate and compare the quality of social data across studies. Methods We propose a conceptual framework consisting of three major steps in collecting social media data: develop, apply, and validate search filters. This framework is based on two criteria: retrieval precision (how much of the retrieved data is relevant) and retrieval recall (how much of the relevant data is retrieved). We then discuss the two conditions that estimation of retrieval precision and recall relies on (accurate human coding and full data collection) and how to calculate these statistics in cases that deviate from the two ideal conditions. We then apply the framework to a real-world example using approximately 4 million tobacco-related tweets collected from the Twitter firehose.
Results We developed and applied a search filter to retrieve e-cigarette–related tweets from the archive based on three keyword categories: devices, brands, and behavior. The search filter retrieved 82,205 e-cigarette–related tweets from the archive and was validated. Retrieval precision was calculated above 95% in all cases. Retrieval recall was 86% assuming ideal conditions (no human coding errors and full data collection), 75% when unretrieved messages could not be archived, 86% assuming no false negative errors by coders, and 93% allowing both false negative and false positive errors by human coders. Conclusions This paper sets forth a conceptual framework for the filtering and quality evaluation of social data that addresses several common challenges and moves toward establishing a standard of reporting social data. Researchers should clearly delineate data sources, how data were accessed and collected, and the search filter building process and how retrieval precision and recall were calculated. The proposed framework can be adapted to other public social media platforms. PMID:26920122
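    The two criteria defined in the Methods section reduce to simple ratios over coded counts. A minimal sketch; the counts below are hypothetical, not the study's:

```python
def retrieval_metrics(retrieved_relevant, retrieved_total, relevant_total):
    """Retrieval precision (relevant fraction of what was retrieved) and
    retrieval recall (retrieved fraction of everything relevant)."""
    precision = retrieved_relevant / retrieved_total
    recall = retrieved_relevant / relevant_total
    return precision, recall

# Hypothetical counts from a human-coded validation sample.
p, r = retrieval_metrics(retrieved_relevant=780, retrieved_total=820,
                         relevant_total=900)
print(f"precision={p:.2%} recall={r:.2%}")
```

In practice `relevant_total` is itself estimated (e.g. by coding a random sample of the unretrieved archive), which is why the paper reports recall under several coding-error assumptions.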

  17. Comparing biomarker measurements to a normal range: when to use standard error of the mean (SEM) or standard deviation (SD) confidence intervals tests.

    PubMed

    Pleil, Joachim D

    2016-01-01

    This commentary is the second of a series outlining one specific concept in interpreting biomarkers data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around the next step, the choice of using standard error of the mean or the calculated standard deviation to compare or predict measurement results.
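    The distinction the commentary draws can be made concrete: SD describes the spread of individual measurements, while SEM = SD/√n describes the uncertainty of the estimated mean, so SEM-based intervals are for comparing means and SD-based intervals for comparing individuals to a normal range. A minimal sketch with made-up biomarker values:

```python
import math

def sd_and_sem(values):
    """Sample standard deviation (SD) and standard error of the mean (SEM)."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))
    return sd, sd / math.sqrt(n)

# Hypothetical biomarker measurements (arbitrary units)
data = [4.1, 3.8, 4.5, 4.0, 4.2, 3.9, 4.4, 4.1]
sd, sem = sd_and_sem(data)
print(f"SD={sd:.3f}  SEM={sem:.3f}")
```

Note that SEM shrinks as n grows while SD does not, which is why using SEM intervals to judge whether a single new measurement is "normal" overstates precision.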

  18. The efficacy of a novel mobile phone application for goldmann ptosis visual field interpretation.

    PubMed

    Maamari, Robi N; D'Ambrosio, Michael V; Joseph, Jeffrey M; Tao, Jeremiah P

    2014-01-01

    To evaluate the efficacy of a novel mobile phone application that calculates superior visual field defects on Goldmann visual field charts. Experimental study in which the mobile phone application and 14 oculoplastic surgeons interpreted the superior visual field defect in 10 Goldmann charts. Percent error of the mobile phone application and of the oculoplastic surgeons' estimates was calculated against computer software computation of the actual defects. Precision and time efficiency of the application were evaluated by processing the same Goldmann visual field chart 10 repeated times. The mobile phone application was associated with a mean percent error of 1.98% (95% confidence interval [CI], 0.87%-3.10%) in superior visual field defect calculation. The average mean percent error of the oculoplastic surgeons' visual estimates was 19.75% (95% CI, 14.39%-25.11%). Oculoplastic surgeons, on average, underestimated the defect in all 10 Goldmann charts. There was high interobserver variance among the oculoplastic surgeons. The percent error of the 10 repeated measurements on a single chart was 0.93% (95% CI, 0.40%-1.46%). The average time to process 1 chart was 12.9 seconds (95% CI, 10.9-15.0 seconds). The mobile phone application was highly accurate, precise, and time-efficient in calculating the percent superior visual field defect from Goldmann charts. Oculoplastic surgeons' visual interpretations were highly inaccurate, highly variable, and usually underestimated the visual field loss.
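    The percent-error metric used to score both the application and the surgeons reduces to a one-line calculation against the software-computed ground truth. The defect areas below are hypothetical:

```python
def percent_error(estimate, actual):
    """Percent error of an estimated field-defect size vs. the true size."""
    return abs(estimate - actual) / actual * 100.0

# Hypothetical defect sizes (% of the superior field), not the study's data:
# an estimate of 38% against a true defect of 40% is off by 5%.
print(percent_error(estimate=38.0, actual=40.0))
```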

  19. Performance Test Data Analysis of Scintillation Cameras

    NASA Astrophysics Data System (ADS)

    Demirkaya, Omer; Mazrou, Refaat Al

    2007-10-01

    In this paper, we present a set of image analysis tools to calculate the performance parameters of gamma camera systems from test data acquired according to the National Electrical Manufacturers Association (NEMA) NU 1-2001 guidelines. The calculation methods are either completely automated or require minimal user interaction, minimizing potential human errors. The developed methods are robust with respect to the varying conditions under which these tests may be performed. The core algorithms have been validated for accuracy and extensively tested on images acquired by gamma cameras from different vendors. All the algorithms are incorporated into a graphical user interface that provides a convenient way to process the data and report the results. The entire application has been developed in the MATLAB programming environment and is compiled to run as a stand-alone program. The developed image analysis tools provide an automated, convenient, and accurate means to calculate the performance parameters of gamma cameras and SPECT systems. The developed application is available upon request for personal or non-commercial use. The results of this study have been partially presented at the Society of Nuclear Medicine Annual Meeting as an InfoSNM presentation.

  20. Human-computer interaction in multitask situations

    NASA Technical Reports Server (NTRS)

    Rouse, W. B.

    1977-01-01

    Human-computer interaction in multitask decisionmaking situations is considered, and it is proposed that humans and computers have overlapping responsibilities. Queueing theory is employed to model this dynamic approach to the allocation of responsibility between human and computer. Results of simulation experiments are used to illustrate the effects of several system variables including number of tasks, mean time between arrivals of action-evoking events, human-computer speed mismatch, probability of computer error, probability of human error, and the level of feedback between human and computer. Current experimental efforts are discussed and the practical issues involved in designing human-computer systems for multitask situations are considered.

  1. Space-Time Error Representation and Estimation in Navier-Stokes Calculations

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2006-01-01

    The mathematical framework for a-posteriori error estimation of functionals elucidated by Eriksson et al. [7] and Becker and Rannacher [3] is revisited in a space-time context. Using these theories, a hierarchy of exact and approximate error representation formulas are presented for use in error estimation and mesh adaptivity. Numerical space-time results for simple model problems as well as compressible Navier-Stokes flow at Re = 300 over a 2D circular cylinder are then presented to demonstrate elements of the error representation theory for time-dependent problems.

  2. Estimating IMU heading error from SAR images.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doerry, Armin Walter

    Angular orientation errors of the real antenna for Synthetic Aperture Radar (SAR) will manifest as undesired illumination gradients in SAR images. These gradients can be measured, and the pointing error can be calculated. This can be done for single images, but done more robustly using multi-image methods. Several methods are provided in this report. The pointing error can then be fed back to the navigation Kalman filter to correct for problematic heading (yaw) error drift. This can mitigate the need for uncomfortable and undesired IMU alignment maneuvers such as S-turns.

  3. An overview of intravenous-related medication administration errors as reported to MEDMARX, a national medication error-reporting program.

    PubMed

    Hicks, Rodney W; Becker, Shawn C

    2006-01-01

    Medication errors can be harmful, especially if they involve the intravenous (IV) route of administration. A mixed-methodology study using a 5-year review of 73,769 IV-related medication errors from a national medication error reporting program indicates that between 3% and 5% of these errors were harmful. The leading type of error was omission, and the leading cause of error involved clinician performance deficit. Using content analysis, three themes (product shortage, calculation errors, and tubing interconnectivity) emerge and appear to predispose patients to harm. Nurses often participate in IV therapy, and these findings have implications for practice and patient safety. Voluntary medication error-reporting programs afford an opportunity to improve patient care and to further understanding about the nature of IV-related medication errors.

  4. The Individual Virtual Eye: a Computer Model for Advanced Intraocular Lens Calculation

    PubMed Central

    Einighammer, Jens; Oltrup, Theo; Bende, Thomas; Jean, Benedikt

    2010-01-01

    Purpose To describe the individual virtual eye, a computer model of a human eye with respect to its optical properties. It is based on measurements of an individual person, and one of its major applications is calculating intraocular lenses (IOLs) for cataract surgery. Methods The model is constructed from an eye's geometry, including axial length and topographic measurements of the anterior corneal surface. All optical components of a pseudophakic eye are modeled with computer scientific methods. A spline-based interpolation method efficiently includes data from corneal topographic measurements. The geometrical optical properties, such as the wavefront aberration, are simulated with real ray-tracing using Snell's law. Optical components can be calculated using computer scientific optimization procedures. The geometry of customized aspheric IOLs was calculated for 32 eyes and the resulting wavefront aberration was investigated. Results The more complex the calculated IOL is, the lower the residual wavefront error. Spherical IOLs are only able to correct for defocus, while toric IOLs also eliminate astigmatism. Spherical aberration is additionally reduced by aspheric and toric aspheric IOLs. The efficient implementation of time-critical numerical ray-tracing and optimization procedures allows for short calculation times, which may lead to a practicable method integrated into a device. Conclusions The individual virtual eye allows for simulations and calculations regarding geometrical optics for individual persons. This leads to clinical applications such as IOL calculation, with the potential to overcome the limitations of current calculation methods based on paraxial optics, as exemplified by calculating customized aspheric IOLs.
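    The "real ray-tracing using Snell's law" mentioned above relies on the vector form of refraction at a surface. A minimal sketch, with an air-to-aqueous interface as an assumed example (this is not the authors' implementation):

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n
    (vector form of Snell's law). n points against the incoming ray;
    n1, n2 are the refractive indices on each side.
    Returns None on total internal reflection."""
    r = n1 / n2
    cos_i = -np.dot(n, d)
    sin2_t = r * r * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection
    return r * d + (r * cos_i - np.sqrt(1.0 - sin2_t)) * n

# Ray hitting a flat air-to-aqueous interface at 20 degrees incidence
# (indices 1.0 and ~1.336 are standard textbook values).
d = np.array([0.0, np.sin(np.radians(20)), -np.cos(np.radians(20))])
t = refract(d, np.array([0.0, 0.0, 1.0]), 1.0, 1.336)
print(t)
```

A full eye model traces each ray through the (spline-interpolated) corneal surface and the IOL surfaces, accumulating optical path length to evaluate the wavefront error being minimized.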

  5. Combining task analysis and fault tree analysis for accident and incident analysis: a case study from Bulgaria.

    PubMed

    Doytchev, Doytchin E; Szwillus, Gerd

    2009-11-01

    Understanding the reasons for incident and accident occurrence is important for an organization's safety. Different methods have been developed to achieve this goal. To better understand human behaviour in incident occurrence, we propose an analysis concept that combines Fault Tree Analysis (FTA) and Task Analysis (TA). The former method identifies the root causes of an accident/incident, while the latter analyses the way people perform tasks in their work environment and how they interact with machines or colleagues. These methods were complemented with the Human Error Identification in System Tools (HEIST) methodology and the concept of Performance Shaping Factors (PSF) to deepen the insight into the error modes of an operator's behaviour. HEIST shows the external error modes that caused the human error and the factors that prompted the human to err. To show the validity of the approach, a case study at a Bulgarian hydro power plant was carried out. An incident, the flooding of the plant's basement, was analysed by combining the afore-mentioned methods. The case study shows that Task Analysis in combination with other methods can be applied successfully to human error analysis, revealing details about erroneous actions in a realistic situation.

  6. Object permanence in adult common marmosets (Callithrix jacchus): not everything is an "A-not-B" error that seems to be one.

    PubMed

    Kis, Anna; Gácsi, Márta; Range, Friederike; Virányi, Zsófia

    2012-01-01

    In this paper, we describe a behaviour pattern similar to the "A-not-B" error found in human infants and young apes in a monkey species, the common marmoset (Callithrix jacchus). In contrast to the classical explanation, it has recently been suggested that the "A-not-B" error committed by human infants is at least partially due to misinterpretation of the hider's ostensively communicated object-hiding actions as potential 'teaching' demonstrations during the A trials. We tested whether this so-called Natural Pedagogy hypothesis would account for the A-not-B error that marmosets commit in a standard object permanence task, but found no support for the hypothesis in this species. Alternatively, we present evidence that lower-level mechanisms, such as attention and motivation, play an important role in committing the "A-not-B" error in marmosets. We argue that these simple mechanisms might contribute to the effect of undeveloped object representational skills in other species, including young non-human primates that commit the A-not-B error.

  7. Computational dosimetry for grounded and ungrounded human models due to contact current

    NASA Astrophysics Data System (ADS)

    Chan, Kwok Hung; Hattori, Junya; Laakso, Ilkka; Hirata, Akimasa; Taki, Masao

    2013-08-01

    This study presents the computational dosimetry of contact currents for grounded and ungrounded human models. The uncertainty of the quasi-static (QS) approximation of the in situ electric field induced in a grounded/ungrounded human body due to the contact current is first estimated. Different scenarios of cylindrical and anatomical human body models are considered, and the results are compared with the full-wave analysis. In the QS analysis, the induced field in the grounded cylindrical model is calculated by the QS finite-difference time-domain (QS-FDTD) method, and compared with the analytical solution. Because no analytical solution is available for the grounded/ungrounded anatomical human body model, the results of the QS-FDTD method are then compared with those of the conventional FDTD method. The upper frequency limit for the QS approximation in the contact current dosimetry is found to be 3 MHz, with a relative local error of less than 10%. The error increases above this frequency, which can be attributed to the neglect of the displacement current. The QS or conventional FDTD method is used for the dosimetry of induced electric field and/or specific absorption rate (SAR) for a contact current injected into the index finger of a human body model in the frequency range from 10 Hz to 100 MHz. The in situ electric fields or SAR are compared with the basic restrictions in the international guidelines/standards. The maximum electric field or the 99th percentile value of the electric fields appear not only in the fat and muscle tissues of the finger, but also around the wrist, forearm, and the upper arm. Some discrepancies are observed between the basic restrictions for the electric field and SAR and the reference levels for the contact current, especially in the extremities. These discrepancies are shown by an equation that relates the current density, tissue conductivity, and induced electric field in the finger with a cross-sectional area of 1 cm2.
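
    The closing relation, E = J/σ = I/(σA), can be evaluated directly; the contact current and conductivity below are illustrative values, not figures from the study:

```python
# In situ electric field from a contact current: E = J/sigma = I/(sigma*A).
# The current and conductivity here are illustrative, not the paper's data.
def induced_field(current_a, conductivity_s_per_m, area_m2=1e-4):
    """Electric field (V/m) in a conductor of cross-sectional area 1 cm^2."""
    current_density = current_a / area_m2          # A/m^2
    return current_density / conductivity_s_per_m  # V/m

# 0.5 mA contact current, muscle-like conductivity of 0.35 S/m.
print(induced_field(0.5e-3, 0.35))  # ≈ 14.3 V/m
```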

  8. Research on Signature Verification Method Based on Discrete Fréchet Distance

    NASA Astrophysics Data System (ADS)

    Fang, J. L.; Wu, W.

    2018-05-01

    This paper proposes a multi-feature signature template based on the discrete Fréchet distance, which breaks through the limitation of traditional signature authentication using a single signature feature. It addresses the heavy computational workload of extracting global feature templates for online handwritten signature authentication, as well as the problem of unreasonable signature feature selection. In the experiments, the false acceptance rate (FAR) and false rejection rate (FRR) are calculated, along with the average equal error rate (AEER). The feasibility of the combined template scheme is verified by comparing the average equal error rate of the combined template with that of the original template.
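
    The discrete Fréchet distance itself follows the standard Eiter-Mannila dynamic-programming recursion; a sketch with toy point sequences (not the paper's signature feature templates):

```python
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between two polylines (Eiter-Mannila)."""
    def d(i, j):
        return math.dist(P[i], Q[j])

    @lru_cache(maxsize=None)
    def c(i, j):
        # Coupling distance up to points P[i], Q[j].
        if i == 0 and j == 0:
            return d(0, 0)
        if i == 0:
            return max(c(0, j - 1), d(0, j))
        if j == 0:
            return max(c(i - 1, 0), d(i, 0))
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d(i, j))

    return c(len(P) - 1, len(Q) - 1)

# Two toy "signature" traces: identical except for one displaced point.
P = [(0, 0), (1, 0), (2, 0)]
Q = [(0, 0), (1, 1), (2, 0)]
print(discrete_frechet(P, Q))  # 1.0
```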

  9. Modeling and calculation of impact friction caused by corner contact in gear transmission

    NASA Astrophysics Data System (ADS)

    Zhou, Changjiang; Chen, Siyu

    2014-09-01

    Corner contact in a gear pair causes vibration and noise, and has attracted much attention. However, tooth errors and deformation make it difficult to determine the point at which corner contact occurs and to study the mechanism of tooth impact friction. Based on the mechanism of corner contact, the process is divided into two stages, impact and scratch, and a calculation model including gear equivalent error and combined deformation is established along the line of action. According to the distributive law, the gear equivalent error is synthesized from the base pitch error, normal backlash, and tooth profile modification on the line of action. The combined tooth compliance of the first point lying in corner contact before the normal path is inverted along the line of action, on the basis of the theory of engagement and the curve of tooth synthetic compliance and load history. By combining the equivalent error with the combined deflection, a criterion for locating the point of corner contact is derived. The impact positions and forces, from the beginning to the end of corner contact before the normal path, are then calculated accurately. From these results, the lash model during corner contact is founded, and the impact force and friction coefficient are quantified. A numerical example is performed and the averaged impact friction coefficient based on the presented calculation method is validated. These results provide a reference for understanding the complex mechanism of tooth impact friction, for quantitative calculation of the friction force and coefficient, and for exact gear design for tribology.

  10. Updating expected action outcome in the medial frontal cortex involves an evaluation of error type.

    PubMed

    Maier, Martin E; Steinhauser, Marco

    2013-10-02

    Forming expectations about the outcome of an action is an important prerequisite for action control and reinforcement learning in the human brain. The medial frontal cortex (MFC) has been shown to play an important role in the representation of outcome expectations, particularly when an update of expected outcome becomes necessary because an error is detected. However, error detection alone is not always sufficient to compute expected outcome because errors can occur in various ways and different types of errors may be associated with different outcomes. In the present study, we therefore investigate whether updating expected outcome in the human MFC is based on an evaluation of error type. Our approach was to consider an electrophysiological correlate of MFC activity on errors, the error-related negativity (Ne/ERN), in a task in which two types of errors could occur. Because the two error types were associated with different amounts of monetary loss, updating expected outcomes on error trials required an evaluation of error type. Our data revealed a pattern of Ne/ERN amplitudes that closely mirrored the amount of monetary loss associated with each error type, suggesting that outcome expectations are updated based on an evaluation of error type. We propose that this is achieved by a proactive evaluation process that anticipates error types by continuously monitoring error sources or by dynamically representing possible response-outcome relations.

  11. Accounting for uncertainty in DNA sequencing data.

    PubMed

    O'Rawe, Jason A; Ferson, Scott; Lyon, Gholson J

    2015-02-01

    Science is defined in part by an honest exposition of the uncertainties that arise in measurements and propagate through calculations and inferences, so that the reliabilities of its conclusions are made apparent. The recent rapid development of high-throughput DNA sequencing technologies has dramatically increased the number of measurements made at the biochemical and molecular level. These data come from many different DNA-sequencing technologies, each with their own platform-specific errors and biases, which vary widely. Several statistical studies have tried to measure error rates for basic determinations, but there are no general schemes to project these uncertainties so as to assess the surety of the conclusions drawn about genetic, epigenetic, and more general biological questions. We review here the state of uncertainty quantification in DNA sequencing applications, describe sources of error, and propose methods that can be used for accounting and propagating these errors and their uncertainties through subsequent calculations. Copyright © 2014 Elsevier Ltd. All rights reserved.
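
    As a small example of the kind of propagation discussed, an uncertain per-base error rate translates directly into uncertainty about whether a whole read is error-free. The read length and interval bounds below are illustrative, not platform-specific figures:

```python
# Probability that a read of given length contains no base-call errors,
# assuming independent per-base errors (an idealization).
def p_error_free(read_length, per_base_error):
    return (1.0 - per_base_error) ** read_length

# Propagate an uncertain per-base error rate as an interval
# (illustrative bounds, not measured platform error rates).
lo, hi = 0.0005, 0.002
print(p_error_free(100, lo))  # ≈ 0.951
print(p_error_free(100, hi))  # ≈ 0.819
```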

  12. Personnel reliability impact on petrochemical facilities monitoring system's failure skipping probability

    NASA Astrophysics Data System (ADS)

    Kostyukov, V. N.; Naumenko, A. P.

    2017-08-01

    The paper dwells upon urgent issues of evaluating impact of actions conducted by complex technological systems operators on their safe operation considering application of condition monitoring systems for elements and sub-systems of petrochemical production facilities. The main task for the research is to distinguish factors and criteria of monitoring system properties description, which would allow to evaluate impact of errors made by personnel on operation of real-time condition monitoring and diagnostic systems for machinery of petrochemical facilities, and find and objective criteria for monitoring system class, considering a human factor. On the basis of real-time condition monitoring concepts of sudden failure skipping risk, static and dynamic error, monitoring systems, one may solve a task of evaluation of impact that personnel's qualification has on monitoring system operation in terms of error in personnel or operators' actions while receiving information from monitoring systems and operating a technological system. Operator is considered as a part of the technological system. Although, personnel's behavior is usually a combination of the following parameters: input signal - information perceiving, reaction - decision making, response - decision implementing. Based on several researches on behavior of nuclear powers station operators in USA, Italy and other countries, as well as on researches conducted by Russian scientists, required data on operator's reliability were selected for analysis of operator's behavior at technological facilities diagnostics and monitoring systems. 
The calculations revealed that for the monitoring system selected as an example, the failure skipping risk for the set values of static (less than 0.01) and dynamic (less than 0.001) errors considering all related factors of data on reliability of information perception, decision-making, and reaction fulfilled is 0.037, in case when all the facilities and error probability are under control - not more than 0.027. In case when only pump and compressor units are under control, the failure skipping risk is not more than 0.022, when the probability of error in operator's action is not more than 0.011. The work output shows that on the basis of the researches results an assessment of operators' reliability can be made in terms of almost any kind of production, but considering only technological capabilities, since operators' psychological and general training considerable vary in different production industries. Using latest technologies of engineering psychology and design of data support systems, situation assessment systems, decision-making and responding system, as well as achievement in condition monitoring in various production industries one can evaluate hazardous condition skipping risk probability considering static, dynamic errors and human factor.

  13. Quantifying errors without random sampling.

    PubMed

    Phillips, Carl V; LaPole, Luwanna M

    2003-06-12

    All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.
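
    A Monte Carlo propagation of the sort described can be sketched in a few lines; the model and input distributions below are hypothetical stand-ins, not the paper's foodborne-illness calculation:

```python
import random

random.seed(1)

# Hypothetical model: incidence = reported cases * underreporting factor,
# with uncertainty in both inputs (illustrative numbers only).
def simulate(n=100_000):
    estimates = []
    for _ in range(n):
        cases = random.normalvariate(10_000, 500)  # surveillance count
        factor = random.uniform(20, 40)            # underreporting multiplier
        estimates.append(cases * factor)
    estimates.sort()
    # Empirical 95% uncertainty interval.
    return estimates[int(0.025 * n)], estimates[int(0.975 * n)]

low, high = simulate()
print(f"95% uncertainty interval: {low:,.0f} - {high:,.0f}")
```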

  14. Linear error analysis of slope-area discharge determinations

    USGS Publications Warehouse

    Kirby, W.H.

    1987-01-01

    The slope-area method can be used to calculate peak flood discharges when current-meter measurements are not possible. This calculation depends on several quantities, such as water-surface fall, that are subject to large measurement errors. Other critical quantities, such as Manning's n, are not even amenable to direct measurement but can only be estimated. Finally, scour and fill may cause gross discrepancies between the observed condition of the channel and the hydraulic conditions during the flood peak. The effects of these potential errors on the accuracy of the computed discharge have been estimated by statistical error analysis using a Taylor-series approximation of the discharge formula and the well-known formula for the variance of a sum of correlated random variates. The resultant error variance of the computed discharge is a weighted sum of covariances of the various observational errors. The weights depend on the hydraulic and geometric configuration of the channel. The mathematical analysis confirms the rule of thumb that relative errors in computed discharge increase rapidly when velocity heads exceed the water-surface fall, when the flow field is expanding and when lateral velocity variation (alpha) is large. It also confirms the extreme importance of accurately assessing the presence of scour or fill. © 1987.
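
    If the covariance terms are dropped (an independence assumption the paper's full analysis does not make), the first-order Taylor-series propagation reduces to a weighted sum of relative variances, with weights given by the exponents in a Manning-type slope-area formula Q = (1.486/n) A R^(2/3) S^(1/2). The input uncertainties below are illustrative:

```python
import math

# First-order error propagation for Q = (1.486/n) * A * R**(2/3) * S**0.5,
# assuming independent observational errors (illustrative simplification).
def relative_sd_Q(rel_sd_n, rel_sd_A, rel_sd_R, rel_sd_S):
    return math.sqrt(
        rel_sd_n**2            # exponent of n is -1
        + rel_sd_A**2          # exponent of A is +1
        + (2/3 * rel_sd_R)**2  # exponent of R is 2/3
        + (0.5 * rel_sd_S)**2  # exponent of S is 1/2
    )

# Illustrative inputs: 15% on n, 5% on area, 5% on hydraulic radius,
# 20% on the fall-derived slope.
print(relative_sd_Q(0.15, 0.05, 0.05, 0.20))  # ≈ 0.19
```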

  15. Accounting for measurement error in human life history trade-offs using structural equation modeling.

    PubMed

    Helle, Samuli

    2018-03-01

    Revealing causal effects from correlative data is very challenging and a contemporary problem in human life history research owing to the lack of experimental approach. Problems with causal inference arising from measurement error in independent variables, whether related either to inaccurate measurement technique or validity of measurements, seem not well-known in this field. The aim of this study is to show how structural equation modeling (SEM) with latent variables can be applied to account for measurement error in independent variables when the researcher has recorded several indicators of a hypothesized latent construct. As a simple example of this approach, measurement error in lifetime allocation of resources to reproduction in Finnish preindustrial women is modelled in the context of the survival cost of reproduction. In humans, lifetime energetic resources allocated in reproduction are almost impossible to quantify with precision and, thus, typically used measures of lifetime reproductive effort (e.g., lifetime reproductive success and parity) are likely to be plagued by measurement error. These results are contrasted with those obtained from a traditional regression approach where the single best proxy of lifetime reproductive effort available in the data is used for inference. As expected, the inability to account for measurement error in women's lifetime reproductive effort resulted in the underestimation of its underlying effect size on post-reproductive survival. This article emphasizes the advantages that the SEM framework can provide in handling measurement error via multiple-indicator latent variables in human life history studies. © 2017 Wiley Periodicals, Inc.
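
    The attenuation that motivates the SEM approach is easy to reproduce by simulation: regressing on an error-laden proxy biases the estimated effect toward zero. The data below are synthetic, not the Finnish records:

```python
import random

random.seed(42)

# Synthetic demonstration of regression attenuation from measurement error.
n = 20_000
true_effort = [random.gauss(0, 1) for _ in range(n)]
outcome = [1.0 * x + random.gauss(0, 1) for x in true_effort]  # true slope = 1
noisy_effort = [x + random.gauss(0, 1) for x in true_effort]   # error variance = 1

def ols_slope(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

print(ols_slope(true_effort, outcome))   # ≈ 1.0
print(ols_slope(noisy_effort, outcome))  # ≈ 0.5 (attenuated by reliability 1/2)
```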

  16. SU-E-T-396: Dosimetric Accuracy of Proton Therapy for Patients with Metal Implants in CT Scans Using Metal Deletion Technique (MDT) Artifacts Reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, X; Kantor, M; Zhu, X

    2014-06-01

    Purpose: To evaluate the dosimetric accuracy for proton therapy patients with metal implants in CT using metal deletion technique (MDT) artifact reduction. Methods: Proton dose accuracy under CT metal artifacts was first evaluated using a water phantom with cylindrical inserts of different materials (titanium and steel). Ranges and dose profiles along different beam angles were calculated using the treatment planning system (Eclipse version 8.9) on uncorrected CT, MDT CT, and manually corrected CT, where true Hounsfield units (water) were assigned to the streak artifacts. In patient studies, the treatment plans were developed on manually corrected CTs, then recalculated on MDT and uncorrected CTs. DVH indices were compared between the dose distributions on all the CTs. Results: For the water phantom study with a 1/2 inch titanium insert, the proton range differences estimated by MDT CT were within 1% for all beam angles, while the range error was up to 2.6% for uncorrected CT. For the study with a 1 inch stainless steel insert, the maximum range error calculated by MDT CT was 1.09% among all beam angles, compared with a maximum range error of 4.7% for uncorrected CT. The dose profiles calculated on MDT CTs for both titanium and steel inserts showed very good agreement with those calculated on manually corrected CTs, while large dose discrepancies calculated using uncorrected CTs were observed in the distal end region of the proton beam. The patient study showed similar dose distributions and DVHs for organs near the metal artifacts recalculated on MDT CT compared with those calculated on manually corrected CT, while the differences between uncorrected and corrected CTs were much more pronounced. Conclusion: In proton therapy, large dose errors can occur due to metal artifacts. MDT CT can be used for proton dose calculation to achieve dose accuracy similar to the current clinical practice of manual correction.

  17. Frontal Theta Links Prediction Errors to Behavioral Adaptation in Reinforcement Learning

    PubMed Central

    Cavanagh, James F.; Frank, Michael J.; Klein, Theresa J.; Allen, John J.B.

    2009-01-01

    Investigations into action monitoring have consistently detailed a fronto-central voltage deflection in the Event-Related Potential (ERP) following the presentation of negatively valenced feedback, sometimes termed the Feedback Related Negativity (FRN). The FRN has been proposed to reflect a neural response to prediction errors during reinforcement learning, yet the single trial relationship between neural activity and the quanta of expectation violation remains untested. Although ERP methods are not well suited to single trial analyses, the FRN has been associated with theta band oscillatory perturbations in the medial prefrontal cortex. Medio-frontal theta oscillations have been previously associated with expectation violation and behavioral adaptation and are well suited to single trial analysis. Here, we recorded EEG activity during a probabilistic reinforcement learning task and fit the performance data to an abstract computational model (Q-learning) for calculation of single-trial reward prediction errors. Single-trial theta oscillatory activities following feedback were investigated within the context of expectation (prediction error) and adaptation (subsequent reaction time change). Results indicate that interactive medial and lateral frontal theta activities reflect the degree of negative and positive reward prediction error in the service of behavioral adaptation. These different brain areas use prediction error calculations for different behavioral adaptations: with medial frontal theta reflecting the utilization of prediction errors for reaction time slowing (specifically following errors), but lateral frontal theta reflecting prediction errors leading to working memory-related reaction time speeding for the correct choice. PMID:19969093
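
    The single-trial prediction errors come from the standard Q-learning update; a stateless (bandit-style) sketch with illustrative parameters:

```python
# Minimal Q-learning update of the kind used to derive single-trial
# reward prediction errors (stateless/bandit simplification,
# illustrative learning rate).
def q_update(q, action, reward, alpha=0.1):
    prediction_error = reward - q[action]  # delta: better/worse than expected
    q[action] += alpha * prediction_error
    return prediction_error

q = {"left": 0.0, "right": 0.0}
# Rewarded choice of "left" on three consecutive trials: the prediction
# error shrinks as the expectation is learned.
deltas = [q_update(q, "left", 1.0) for _ in range(3)]
print(deltas)  # first delta is 1.0; later deltas shrink (≈0.9, ≈0.81)
```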

  18. Evaluation of lens distortion errors in video-based motion analysis

    NASA Technical Reports Server (NTRS)

    Poliner, Jeffrey; Wilmington, Robert; Klute, Glenn K.; Micocci, Angelo

    1993-01-01

    In an effort to study lens distortion errors, a grid of points of known dimensions was constructed and videotaped using a standard and a wide-angle lens. Recorded images were played back on a VCR and stored on a personal computer. Using these stored images, two experiments were conducted. Errors were calculated as the difference in distance from the known coordinates of the points to the calculated coordinates. The purposes of this project were as follows: (1) to develop the methodology to evaluate errors introduced by lens distortion; (2) to quantify and compare errors introduced by use of both a 'standard' and a wide-angle lens; (3) to investigate techniques to minimize lens-induced errors; and (4) to determine the most effective use of calibration points when using a wide-angle lens with a significant amount of distortion. It was seen that when using a wide-angle lens, errors from lens distortion could be as high as 10 percent of the size of the entire field of view. Even with a standard lens, there was a small amount of lens distortion. It was also found that the choice of calibration points influenced the lens distortion error. By properly selecting the calibration points and avoidance of the outermost regions of a wide-angle lens, the error from lens distortion can be kept below approximately 0.5 percent with a standard lens and 1.5 percent with a wide-angle lens.

  19. IMRT QA: Selecting gamma criteria based on error detection sensitivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steers, Jennifer M.; Fraass, Benedick A., E-mail: benedick.fraass@cshs.org

    Purpose: The gamma comparison is widely used to evaluate the agreement between measurements and treatment planning system calculations in patient-specific intensity modulated radiation therapy (IMRT) quality assurance (QA). However, recent publications have raised concerns about the lack of sensitivity when employing commonly used gamma criteria. Understanding the actual sensitivity of a wide range of different gamma criteria may allow the definition of more meaningful gamma criteria and tolerance limits in IMRT QA. We present a method that allows the quantitative determination of gamma criteria sensitivity to induced errors which can be applied to any unique combination of device, delivery technique, and software utilized in a specific clinic. Methods: A total of 21 DMLC IMRT QA measurements (ArcCHECK®, Sun Nuclear) were compared to QA plan calculations with induced errors. Three scenarios were studied: MU errors, multi-leaf collimator (MLC) errors, and the sensitivity of the gamma comparison to changes in penumbra width. Gamma comparisons were performed between measurements and error-induced calculations using a wide range of gamma criteria, resulting in a total of over 20 000 gamma comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using 36 different gamma criteria. Results: This study demonstrates that systematic errors and case-specific errors can be detected by the error curve analysis. Depending on the location of the error curve peak (e.g., not centered about zero), 3%/3 mm threshold = 10% at 90% pixels passing may miss errors as large as 15% MU errors and ±1 cm random MLC errors for some cases. As the dose threshold parameter was increased for a given %Diff/distance-to-agreement (DTA) setting, error sensitivity was increased by up to a factor of two for select cases.
This increased sensitivity with increasing dose threshold was consistent across all studied combinations of %Diff/DTA. Criteria such as 2%/3 mm and 3%/2 mm with a 50% threshold at 90% pixels passing are shown to be more appropriately sensitive without being overly strict. However, a broadening of the penumbra by as much as 5 mm in the beam configuration was difficult to detect with commonly used criteria, as well as with the previously mentioned criteria utilizing a threshold of 50%. Conclusions: We have introduced the error curve method, an analysis technique which allows the quantitative determination of gamma criteria sensitivity to induced errors. The application of the error curve method using DMLC IMRT plans measured on the ArcCHECK® device demonstrated that large errors can potentially be missed in IMRT QA with commonly used gamma criteria (e.g., 3%/3 mm, threshold = 10%, 90% pixels passing). Additionally, increasing the dose threshold value can offer dramatic increases in error sensitivity. This approach may allow the selection of more meaningful gamma criteria for IMRT QA and is straightforward to apply to other combinations of devices and treatment techniques.
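
    For reference, a 1D sketch of the underlying gamma comparison (global normalization, exhaustive search over comparison points; the dose profiles and criteria below are illustrative, not the study's data):

```python
import math

# 1D sketch of the gamma comparison with global normalization.
# A measured point passes when its gamma value is <= 1 for the chosen
# %Diff (dose_crit) and DTA (dist_crit) criteria.
def gamma_1d(positions, measured, calculated, dose_crit, dist_crit):
    d_max = max(calculated)  # global normalization dose
    gammas = []
    for xm, dm in zip(positions, measured):
        best = math.inf
        for xc, dc in zip(positions, calculated):
            dose_term = ((dm - dc) / (dose_crit * d_max)) ** 2
            dist_term = ((xm - xc) / dist_crit) ** 2
            best = min(best, math.sqrt(dose_term + dist_term))
        gammas.append(best)
    return gammas

x = [0.0, 1.0, 2.0, 3.0]     # mm
calc = [1.0, 0.9, 0.5, 0.1]  # relative dose
meas = [1.0, 0.88, 0.52, 0.1]
g = gamma_1d(x, meas, calc, dose_crit=0.03, dist_crit=3.0)
passing = sum(gi <= 1.0 for gi in g) / len(g)
print(passing)  # 1.0 for this small example at 3%/3 mm
```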

  20. Safety coaches in radiology: decreasing human error and minimizing patient harm.

    PubMed

    Dickerson, Julie M; Koch, Bernadette L; Adams, Janet M; Goodfriend, Martha A; Donnelly, Lane F

    2010-09-01

    Successful programs to improve patient safety require a component aimed at improving safety culture and environment, resulting in a reduced number of human errors that could lead to patient harm. Safety coaching provides peer accountability. It involves observing for safety behaviors and use of error prevention techniques and provides immediate feedback. For more than a decade, behavior-based safety coaching has been a successful strategy for reducing error within the context of occupational safety in industry. We describe the use of safety coaches in radiology. Safety coaches are an important component of our comprehensive patient safety program.

  1. Error-associated behaviors and error rates for robotic geology

    NASA Technical Reports Server (NTRS)

    Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin

    2004-01-01

    This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill based decisions require the least cognitive effort and knowledge based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.

  2. Procedural error monitoring and smart checklists

    NASA Technical Reports Server (NTRS)

    Palmer, Everett

    1990-01-01

    Human beings make and usually detect errors routinely. The same mental processes that allow humans to cope with novel problems can also lead to error. Bill Rouse has argued that errors are not inherently bad but their consequences may be. He proposes the development of error-tolerant systems that detect errors and take steps to prevent the consequences of the error from occurring. Research should be done on self and automatic detection of random and unanticipated errors. For self detection, displays should be developed that make the consequences of errors immediately apparent. For example, electronic map displays graphically show the consequences of horizontal flight plan entry errors. Vertical profile displays should be developed to make apparent vertical flight planning errors. Other concepts such as energy circles could also help the crew detect gross flight planning errors. For automatic detection, systems should be developed that can track pilot activity, infer pilot intent and inform the crew of potential errors before their consequences are realized. Systems that perform a reasonableness check on flight plan modifications by checking route length and magnitude of course changes are simple examples. Another example would be a system that checked the aircraft's planned altitude against a data base of world terrain elevations. Information is given in viewgraph form.

  3. Understanding the many-body expansion for large systems. II. Accuracy considerations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lao, Ka Un; Liu, Kuan-Yu; Richard, Ryan M.

    2016-04-28

    To complement our study of the role of finite precision in electronic structure calculations based on a truncated many-body expansion (MBE, or “n-body expansion”), we examine the accuracy of such methods in the present work. Accuracy may be defined either with respect to a supersystem calculation computed at the same level of theory as the n-body calculations, or alternatively with respect to high-quality benchmarks. Both metrics are considered here. In applications to a sequence of water clusters, (H2O)N=6-55, described at the B3LYP/cc-pVDZ level, we obtain mean absolute errors (MAEs) per H2O monomer of ∼1.0 kcal/mol for two-body expansions, where the benchmark is a B3LYP/cc-pVDZ calculation on the entire cluster. Three- and four-body expansions exhibit MAEs of 0.5 and 0.1 kcal/mol/monomer, respectively, without resort to charge embedding. A generalized many-body expansion truncated at two-body terms [GMBE(2)], using 3-4 H2O molecules per fragment, outperforms all of these methods and affords a MAE of ∼0.02 kcal/mol/monomer, also without charge embedding. GMBE(2) requires significantly fewer (although somewhat larger) subsystem calculations as compared to MBE(4), reducing problems associated with floating-point roundoff errors. When compared to high-quality benchmarks, we find that error cancellation often plays a critical role in the success of MBE(n) calculations, even at the four-body level, as basis-set superposition error can compensate for higher-order polarization interactions. A many-body counterpoise correction is introduced for the GMBE, and its two-body truncation [GMBCP(2)] is found to afford good results without error cancellation. Together with a method such as ωB97X-V/aug-cc-pVTZ that can describe both covalent and non-covalent interactions, the GMBE(2)+GMBCP(2) approach provides an accurate, stable, and tractable approach for large systems.
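
    The truncated expansion itself is simple to sketch: monomer energies plus pairwise corrections, with `energy()` standing in for a subsystem electronic-structure calculation. The toy energy function below is constructed so that a two-body expansion is exact for it:

```python
from itertools import combinations

# Two-body truncation of the many-body expansion: total energy is
# approximated by monomer energies plus pairwise corrections.
def mbe2(fragments, energy):
    e1 = {f: energy((f,)) for f in fragments}
    total = sum(e1.values())
    for a, b in combinations(fragments, 2):
        total += energy((a, b)) - e1[a] - e1[b]  # two-body correction
    return total

# Toy "energy function": additive monomer terms plus a -0.1 interaction
# per pair, so MBE(2) reproduces it exactly by construction.
def toy_energy(subsystem):
    n = len(subsystem)
    return -1.0 * n - 0.1 * (n * (n - 1) // 2)

frags = ("w1", "w2", "w3")
print(mbe2(frags, toy_energy))  # ≈ -3.3, equal to toy_energy(frags)
```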

  4. Position Error Covariance Matrix Validation and Correction

    NASA Technical Reports Server (NTRS)

    Frisbee, Joe, Jr.

    2016-01-01

    In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.
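
    One common realism check follows directly from the Gaussian assumption: normalized errors should fall inside their sigma bounds at the predicted rates. The snippet below is a 1D illustration with synthetic errors (the 3D case would instead compare squared Mahalanobis distances against a 3-degree-of-freedom chi-square):

```python
import random

random.seed(7)

# If the claimed 1-sigma uncertainty is well calibrated, about 68.3% of
# 1D Gaussian errors should fall within one sigma (synthetic errors here).
sigma = 10.0  # claimed 1-sigma position uncertainty
errors = [random.gauss(0, sigma) for _ in range(100_000)]
within_1sigma = sum(abs(e) <= sigma for e in errors) / len(errors)
print(within_1sigma)  # ≈ 0.683 when the covariance is realistic
```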

  5. Analysis and Compensation for Lateral Chromatic Aberration in a Color Coding Structured Light 3D Measurement System.

    PubMed

    Huang, Junhui; Xue, Qi; Wang, Zhao; Gao, Jianmin

    2016-09-03

    While color-coding methods have improved the measuring efficiency of a structured light three-dimensional (3D) measurement system, they decreased the measuring accuracy significantly due to lateral chromatic aberration (LCA). In this study, the LCA in a structured light measurement system is analyzed, and a method is proposed to compensate the error caused by the LCA. Firstly, based on the projective transformation, a 3D error map of LCA is constructed in the projector images by using a flat board and comparing the image coordinates of red, green and blue circles with the coordinates of white circles at preselected sample points within the measurement volume. The 3D map consists of the errors, which are the equivalent errors caused by LCA of the camera and projector. Then in measurements, error values of LCA are calculated and compensated to correct the projector image coordinates through the 3D error map and a tri-linear interpolation method. Eventually, 3D coordinates with higher accuracy are re-calculated according to the compensated image coordinates. The effectiveness of the proposed method is verified in the following experiments.
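
    A minimal version of the tri-linear interpolation step, with a toy 2x2x2 error map standing in for the paper's 3D LCA error map:

```python
# Tri-linear interpolation of a correction value stored on a regular
# 3D grid (toy stand-in for a lateral-chromatic-aberration error map).
def trilinear(grid, x, y, z):
    """grid[i][j][k] holds values at integer lattice points; (x, y, z)
    must lie within the grid interior."""
    i, j, k = int(x), int(y), int(z)
    dx, dy, dz = x - i, y - j, z - k
    value = 0.0
    for di, wi in ((0, 1 - dx), (1, dx)):
        for dj, wj in ((0, 1 - dy), (1, dy)):
            for dk, wk in ((0, 1 - dz), (1, dz)):
                value += wi * wj * wk * grid[i + di][j + dj][k + dk]
    return value

# 2x2x2 toy error map whose value grows linearly with each coordinate,
# so interpolation reproduces it exactly.
grid = [[[x + y + z for z in range(2)] for y in range(2)] for x in range(2)]
print(trilinear(grid, 0.5, 0.5, 0.5))  # 1.5
```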

  6. Analysis and Compensation for Lateral Chromatic Aberration in a Color Coding Structured Light 3D Measurement System

    PubMed Central

    Huang, Junhui; Xue, Qi; Wang, Zhao; Gao, Jianmin

    2016-01-01

    While color-coding methods have improved the measuring efficiency of a structured light three-dimensional (3D) measurement system, they decreased the measuring accuracy significantly due to lateral chromatic aberration (LCA). In this study, the LCA in a structured light measurement system is analyzed, and a method is proposed to compensate the error caused by the LCA. Firstly, based on the projective transformation, a 3D error map of LCA is constructed in the projector images by using a flat board and comparing the image coordinates of red, green and blue circles with the coordinates of white circles at preselected sample points within the measurement volume. The 3D map consists of the errors, which are the equivalent errors caused by LCA of the camera and projector. Then in measurements, error values of LCA are calculated and compensated to correct the projector image coordinates through the 3D error map and a tri-linear interpolation method. Eventually, 3D coordinates with higher accuracy are re-calculated according to the compensated image coordinates. The effectiveness of the proposed method is verified in the following experiments. PMID:27598174

  7. Dual-energy X-ray absorptiometry: analysis of pediatric fat estimate errors due to tissue hydration effects.

    PubMed

    Testolin, C G; Gore, R; Rivkin, T; Horlick, M; Arbo, J; Wang, Z; Chiumello, G; Heymsfield, S B

    2000-12-01

    Dual-energy X-ray absorptiometry (DXA) percent (%) fat estimates may be inaccurate in young children, who typically have high tissue hydration levels. This study was designed to provide a comprehensive analysis of pediatric tissue hydration effects on DXA %fat estimates. Phase 1 was experimental and included three in vitro studies to establish the physical basis of DXA %fat-estimation models. Phase 2 extended phase 1 models and consisted of theoretical calculations to estimate the %fat errors emanating from previously reported pediatric hydration effects. Phase 1 experiments supported the two-compartment DXA soft tissue model and established that pixel ratios of low to high energy (R values) are a predictable function of tissue elemental content. In phase 2, modeling of reference body composition values from birth to age 120 mo revealed that %fat errors will arise if a "constant" adult lean soft tissue R value is applied to the pediatric population; the maximum %fat error, approximately 0.8%, would be present at birth. High tissue hydration, as observed in infants and young children, leads to errors in DXA %fat estimates. The magnitude of these errors based on theoretical calculations is small and may not be of clinical or research significance.

  8. A circadian rhythm in skill-based errors in aviation maintenance.

    PubMed

    Hobbs, Alan; Williamson, Ann; Van Dongen, Hans P A

    2010-07-01

    In workplaces where activity continues around the clock, human error has been observed to exhibit a circadian rhythm, with a characteristic peak in the early hours of the morning. Errors are commonly distinguished by the nature of the underlying cognitive failure, particularly the level of intentionality involved in the erroneous action. The Skill-Rule-Knowledge (SRK) framework of Rasmussen is used widely in the study of industrial errors and accidents. The SRK framework describes three fundamental types of error, according to whether behavior is under the control of practiced sensori-motor skill routines with minimal conscious awareness; is guided by implicit or explicit rules or expertise; or where the planning of actions requires the conscious application of domain knowledge. Up to now, examinations of circadian patterns of industrial errors have not distinguished between different types of error. Consequently, it is not clear whether all types of error exhibit the same circadian rhythm. A survey was distributed to aircraft maintenance personnel in Australia. Personnel were invited to anonymously report a safety incident and were prompted to describe, in detail, the human involvement (if any) that contributed to it. A total of 402 airline maintenance personnel reported an incident, providing 369 descriptions of human error in which the time of the incident was reported and sufficient detail was available to analyze the error. Errors were categorized using a modified version of the SRK framework, in which errors are categorized as skill-based, rule-based, or knowledge-based, or as procedure violations. An independent check confirmed that the SRK framework had been applied with sufficient consistency and reliability. Skill-based errors were the most common form of error, followed by procedure violations, rule-based errors, and knowledge-based errors. 
The frequency of errors was adjusted for the estimated proportion of workers present at work at each hour of the day, and the 24 h pattern of each error type was examined. Skill-based errors exhibited a significant circadian rhythm, being most prevalent in the early hours of the morning. Variation in the frequency of rule-based errors, knowledge-based errors, and procedure violations over the 24 h did not reach statistical significance. The results suggest that during the early hours of the morning, maintenance technicians are at heightened risk of "absent minded" errors involving failures to execute action plans as intended.

  9. Error of the modelled peak flow of the hydraulically reconstructed 1907 flood of the Ebro River in Xerta (NE Iberian Peninsula)

    NASA Astrophysics Data System (ADS)

    Lluís Ruiz-Bellet, Josep; Castelltort, Xavier; Carles Balasch, J.; Tuset, Jordi

    2016-04-01

    The estimation of the uncertainty of the results of hydraulic modelling has been deeply analysed, but no clear methodological procedure for its determination has been formulated for historical hydrology. The main objective of this study was to calculate the uncertainty of the resulting peak flow of a typical historical flood reconstruction. The secondary objective was to identify the input variables that influenced the result the most and their contribution to the total peak flow error. The uncertainty of the 21-23 October 1907 flood of the Ebro River (NE Iberian Peninsula) in the town of Xerta (83,000 km2) was calculated with a series of local sensitivity analyses of the main variables affecting the resulting peak flow. In addition, to assess how strongly the result depended on the chosen model, the HEC-RAS peak flow was compared to the ones obtained with the 2D model Iber and with Manning's equation. The peak flow of the 1907 flood of the Ebro River in Xerta, reconstructed with HEC-RAS, was 11500 m3·s-1 and its total error was ±31%. The most influential input variable on the HEC-RAS peak flow was water height; however, the one that contributed most to the peak flow error was Manning's n, because its uncertainty was far greater than that of water height. The main conclusion is that, to ensure the lowest peak flow error, the reliability and precision of the flood mark should be thoroughly assessed. The peak flow was 12000 m3·s-1 when calculated with the 2D model Iber and 11500 m3·s-1 when calculated with the Manning equation.
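The Manning cross-check mentioned at the end can be sketched directly; the channel geometry and roughness below are illustrative placeholders, not the surveyed Xerta cross-section:

```python
def manning_flow(area_m2, hydraulic_radius_m, slope, n):
    """Manning's equation: Q = (1/n) * A * R^(2/3) * S^(1/2)."""
    return (1.0 / n) * area_m2 * hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5

# Hypothetical wide-channel values, chosen only to illustrate the sensitivity.
q = manning_flow(area_m2=3000.0, hydraulic_radius_m=8.0, slope=0.0004, n=0.035)
print(round(q))  # about 6857 m3/s for these made-up inputs

# First-order sensitivity: since Q is proportional to 1/n, a +20% error in
# Manning's n lowers Q by about 17%, which is why the uncertainty in n can
# dominate the peak flow error budget when the flood mark is well constrained.
q_hi = manning_flow(3000.0, 8.0, 0.0004, 0.035 * 1.2)
print(round(100 * (q_hi - q) / q))  # -17
```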

  10. Evaluation of Two Computational Techniques of Calculating Multipath Using Global Positioning System Carrier Phase Measurements

    NASA Technical Reports Server (NTRS)

    Gomez, Susan F.; Hood, Laura; Panneton, Robert J.; Saunders, Penny E.; Adkins, Antha; Hwu, Shian U.; Lu, Ba P.

    1996-01-01

    Two computational techniques are used to calculate differential phase errors on Global Positioning System (GPS) carrier wave phase measurements due to certain multipath-producing objects. The first is a rigorous computational electromagnetics technique called the Geometric Theory of Diffraction (GTD); the other is a simple ray tracing method. The GTD technique has been used successfully to predict microwave propagation characteristics by taking into account the dominant multipath components due to reflections and diffractions from scattering structures. The ray tracing technique only solves for reflected signals. The results from the two techniques are compared to GPS differential carrier phase measurements taken on the ground using a GPS receiver in the presence of typical International Space Station (ISS) interference structures. The calculations produced using the GTD code compared to the measured results better than the ray tracing technique. The agreement was good, demonstrating that the phase errors due to multipath can be modeled and characterized using the GTD technique, and characterized to a lesser fidelity using the DECAT ray tracing technique. However, some discrepancies were observed. Most of the discrepancies occurred at lower elevations and were due either to phase center deviations of the antenna, the background multipath environment, or the receiver itself. Selected measured and predicted differential carrier phase error results are presented and compared. Results indicate that reflections and diffractions caused by the multipath producers, located near the GPS antennas, can produce phase shifts of greater than 10 mm, and as high as 95 mm. It should be noted that the field test configuration was meant to simulate typical ISS structures, but the two environments are not identical. The GTD and DECAT techniques have been used to calculate phase errors due to multipath on the ISS configuration to quantify the expected attitude determination errors.
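A minimal single-reflection model shows why multipath shifts the carrier phase by millimetres. This is the generic textbook relation for one reflected ray of relative amplitude alpha, far simpler than the GTD analysis in the study, and the reflector parameters are made up:

```python
import math

L1_WAVELENGTH_M = 0.1903  # GPS L1 carrier wavelength, about 19.03 cm

def multipath_phase_error_mm(alpha, extra_path_m, wavelength_m=L1_WAVELENGTH_M):
    """Carrier-phase error from one reflected ray of relative amplitude
    alpha (< 1) arriving with extra path length extra_path_m."""
    psi = 2.0 * math.pi * extra_path_m / wavelength_m
    dphi = math.atan2(alpha * math.sin(psi), 1.0 + alpha * math.cos(psi))
    return dphi * wavelength_m / (2.0 * math.pi) * 1000.0  # metres -> mm

# Hypothetical reflector: half-amplitude reflection, 5 cm of extra path.
print(multipath_phase_error_mm(0.5, 0.05))

# The worst case is bounded by arcsin(alpha) regardless of geometry:
worst_mm = math.asin(0.5) * L1_WAVELENGTH_M / (2.0 * math.pi) * 1000.0
print(round(worst_mm, 1))  # about 15.9 mm for alpha = 0.5
```

Shifts beyond this single-ray bound, such as the 95 mm reported above, arise when several reflected and diffracted components combine, which is what the GTD model captures.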

  11. A passive microwave technique for estimating rainfall and vertical structure information from space. Part 1: Algorithm description

    NASA Technical Reports Server (NTRS)

    Kummerow, Christian; Giglio, Louis

    1994-01-01

    This paper describes a multichannel physical approach for retrieving rainfall and vertical structure information from satellite-based passive microwave observations. The algorithm makes use of statistical inversion techniques based upon theoretically calculated relations between rainfall rates and brightness temperatures. Potential errors introduced into the theoretical calculations by the unknown vertical distribution of hydrometeors are overcome by explicitly accounting for diverse hydrometeor profiles. This is accomplished by allowing for a number of different vertical distributions in the theoretical brightness temperature calculations and requiring consistency between the observed and calculated brightness temperatures. This paper will focus primarily on the theoretical aspects of the retrieval algorithm, which includes a procedure used to account for inhomogeneities of the rainfall within the satellite field of view as well as a detailed description of the algorithm as it is applied over both ocean and land surfaces. The residual error between observed and calculated brightness temperatures is found to be an important quantity in assessing the uniqueness of the solution. It is further found that the residual error is a meaningful quantity that can be used to derive expected accuracies from this retrieval technique. Examples comparing the retrieved results as well as the detailed analysis of the algorithm performance under various circumstances are the subject of a companion paper.

  12. Calculating sediment discharge from a highway construction site in central Pennsylvania

    USGS Publications Warehouse

    Reed, L.A.; Ward, J.R.; Wetzel, K.L.

    1985-01-01

    The Pennsylvania Department of Transportation, the Federal Highway Administration, and the U.S. Geological Survey have cooperated in a study to evaluate two methods of predicting sediment yields during highway construction. Sediment yields were calculated using the Universal Soil Loss and the Younkin Sediment Prediction Equations. Results were compared to the actual measured values, and standard errors and coefficients of correlation were calculated. Sediment discharge from the construction area was determined for storms that occurred during construction of Interstate 81 in a 0.38-square mile basin near Harrisburg, Pennsylvania. Precipitation data tabulated included total rainfall, maximum 30-minute rainfall, kinetic energy, and the erosive index of the precipitation. Highway construction data tabulated included the area disturbed by clearing and grubbing, the area in cuts and fills, the average depths of cuts and fills, the area seeded and mulched, and the area paved. Using the Universal Soil Loss Equation, sediment discharge from the construction area was calculated for storms. The standard error of estimate was 0.40 (about 105 percent), and the coefficient of correlation was 0.79. Sediment discharge from the construction area was also calculated using the Younkin Equation. The standard error of estimate of 0.42 (about 110 percent), and the coefficient of correlation of 0.77 are comparable to those from the Universal Soil Loss Equation.
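The Universal Soil Loss Equation used in the comparison above is a simple product of factors. The factor values below are placeholders for illustration, not the Interstate 81 site parameters:

```python
def usle_soil_loss(R, K, LS, C, P):
    """Universal Soil Loss Equation: A = R * K * LS * C * P, where A is soil
    loss (tons/acre/yr in US customary units), R is rainfall erosivity, K soil
    erodibility, LS the slope length/steepness factor, C cover management,
    and P the support practice factor."""
    return R * K * LS * C * P

# Hypothetical construction-site factors: bare, disturbed soil (C = 1.0)
# with no erosion-control practice in place (P = 1.0).
print(usle_soil_loss(R=125.0, K=0.25, LS=2.0, C=1.0, P=1.0))  # 62.5
```

Seeding, mulching, and paving reduce C and P well below 1.0, which is why the study tabulated those treated areas separately.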

  13. Validation of a program for supercritical power plant calculations

    NASA Astrophysics Data System (ADS)

    Kotowicz, Janusz; Łukowicz, Henryk; Bartela, Łukasz; Michalski, Sebastian

    2011-12-01

    This article describes the validation of a supercritical steam cycle. The cycle model was created with the commercial program GateCycle and validated using in-house code of the Institute of Power Engineering and Turbomachinery. The Institute's in-house code has been used extensively for industrial power plant calculations with good results. In the first step of the validation process, assumptions were made about the live steam temperature and pressure, net power, characteristic quantities for high- and low-pressure regenerative heat exchangers and pressure losses in heat exchangers. These assumptions were then used to develop a steam cycle model in GateCycle and a model based on the code developed in-house at the Institute of Power Engineering and Turbomachinery. Properties such as thermodynamic parameters at characteristic points of the steam cycle, net power values and efficiencies, heat provided to the steam cycle and heat taken from the steam cycle were compared. The last step of the analysis was the calculation of relative errors of the compared values. The method used for the relative error calculations is presented in the paper. The calculated relative errors are very small, generally not exceeding 0.1%. Based on our analysis, it can be concluded that using the GateCycle software for calculations of supercritical power plants is possible.

  14. User Performance Evaluation of Four Blood Glucose Monitoring Systems Applying ISO 15197:2013 Accuracy Criteria and Calculation of Insulin Dosing Errors.

    PubMed

    Freckmann, Guido; Jendrike, Nina; Baumstark, Annette; Pleus, Stefan; Liebing, Christina; Haug, Cornelia

    2018-04-01

    The international standard ISO 15197:2013 requires a user performance evaluation to assess if intended users are able to obtain accurate blood glucose measurement results with a self-monitoring of blood glucose (SMBG) system. In this study, user performance was evaluated for four SMBG systems on the basis of ISO 15197:2013, and possibly related insulin dosing errors were calculated. Additionally, accuracy was assessed in the hands of study personnel. Accu-Chek® Performa Connect (A), Contour® plus ONE (B), FreeStyle Optium Neo (C), and OneTouch Select® Plus (D) were evaluated with one test strip lot. After familiarization with the systems, subjects collected a capillary blood sample and performed an SMBG measurement. Study personnel observed the subjects' measurement technique. Then, study personnel performed SMBG measurements and comparison measurements. Number and percentage of SMBG measurements within ±15 mg/dl and ±15% of the comparison measurements at glucose concentrations < 100 and ≥ 100 mg/dl, respectively, were calculated. In addition, insulin dosing errors were modelled. In the hands of lay-users three systems fulfilled ISO 15197:2013 accuracy criteria with the investigated test strip lot showing 96% (A), 100% (B), and 98% (C) of results within the defined limits. All systems fulfilled minimum accuracy criteria in the hands of study personnel [99% (A), 100% (B), 99.5% (C), 96% (D)]. Measurements with all four systems were within zones of the consensus error grid and surveillance error grid associated with no or minimal risk. Regarding calculated insulin dosing errors, all 99% ranges were between dosing errors of -2.7 and +1.4 units for measurements in the hands of lay-users and between -2.5 and +1.4 units for study personnel. Frequent lay-user errors were not checking the test strips' expiry date and applying blood incorrectly. 
Data obtained in this study show that not all available SMBG systems complied with ISO 15197:2013 accuracy criteria when measurements were performed by lay-users. The study was registered at ClinicalTrials.gov (NCT02916576). Ascensia Diabetes Care Deutschland GmbH.
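The ISO 15197:2013 accuracy criterion applied in the study (within ±15 mg/dl of the comparison value below 100 mg/dl, within ±15% at or above) is straightforward to encode; the paired readings below are made up for illustration:

```python
def within_iso15197(measured, reference):
    """ISO 15197:2013 system accuracy criterion for a single reading:
    within +/-15 mg/dl of the comparison value when reference < 100 mg/dl,
    within +/-15% when reference >= 100 mg/dl."""
    if reference < 100.0:
        return abs(measured - reference) <= 15.0
    return abs(measured - reference) <= 0.15 * reference

# Hypothetical paired (meter, laboratory) readings in mg/dl:
pairs = [(92, 85), (110, 100), (160, 190), (250, 240)]
ok = [within_iso15197(m, r) for m, r in pairs]
pct = 100.0 * sum(ok) / len(ok)
print(ok, pct)  # the standard requires >= 95% of results within these limits
```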

  15. Spatial durbin error model for human development index in Province of Central Java.

    NASA Astrophysics Data System (ADS)

    Septiawan, A. R.; Handajani, S. S.; Martini, T. S.

    2018-05-01

    The Human Development Index (HDI) is an indicator used to measure success in building the quality of human life, describing how people access development outcomes in terms of income, health and education. Every year the HDI of Central Java has improved. In 2016, HDI in Central Java was 69.98%, an increase of 0.49% over the previous year. The objective of this study was to apply the spatial Durbin error model, using queen contiguity spatial weights, to model HDI in Central Java Province. The spatial Durbin error model is used because it accounts both for spatial dependence in the errors and for spatial dependence in the independent variables. The factors used are life expectancy, mean years of schooling, expected years of schooling, and purchasing power parity. Based on the results of the research, we obtain a spatial Durbin error model for HDI in Central Java in which the influencing factors are life expectancy, mean years of schooling, expected years of schooling, and purchasing power parity.
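For orientation, the generic form of the spatial Durbin error model in the notation commonly used for this model family is given below; this standard formulation is not reproduced from the abstract itself:

```latex
% Spatial Durbin error model (generic form):
% y: HDI vector over regions, X: regressors (life expectancy, schooling,
% purchasing power parity), W: queen-contiguity spatial weights matrix.
y = X\beta + WX\theta + u, \qquad
u = \lambda W u + \varepsilon, \qquad
\varepsilon \sim N(0, \sigma^2 I)
```

The term $WX\theta$ carries the spatially lagged regressors and $\lambda W u$ the spatially autocorrelated errors, the two effects the abstract says the model addresses.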

  16. Hierarchical learning induces two simultaneous, but separable, prediction errors in human basal ganglia.

    PubMed

    Diuk, Carlos; Tsai, Karin; Wallis, Jonathan; Botvinick, Matthew; Niv, Yael

    2013-03-27

    Studies suggest that dopaminergic neurons report a unitary, global reward prediction error signal. However, learning in complex real-life tasks, in particular tasks that show hierarchical structure, requires multiple prediction errors that may coincide in time. We used functional neuroimaging to measure prediction error signals in humans performing such a hierarchical task involving simultaneous, uncorrelated prediction errors. Analysis of signals in a priori anatomical regions of interest in the ventral striatum and the ventral tegmental area indeed evidenced two simultaneous, but separable, prediction error signals corresponding to the two levels of hierarchy in the task. This result suggests that suitably designed tasks may reveal a more intricate pattern of firing in dopaminergic neurons. Moreover, the need for downstream separation of these signals implies possible limitations on the number of different task levels that we can learn about simultaneously.
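Computationally, the reward prediction error being localized here is the temporal-difference error of reinforcement learning. The sketch below shows the generic quantity, with two independent errors for two hypothetical task levels coinciding in time; it is not the study's imaging analysis:

```python
def td_error(reward, value_next, value_current, gamma=0.9):
    """Temporal-difference reward prediction error:
    delta = r + gamma * V(s') - V(s)."""
    return reward + gamma * value_next - value_current

# Hypothetical hierarchical task: one prediction error at the subtask level
# and an independent one at the overall-task level, arriving simultaneously.
delta_subtask = td_error(reward=1.0, value_next=0.0, value_current=0.4)
delta_task = td_error(reward=0.0, value_next=0.8, value_current=0.9)
print(delta_subtask, delta_task)
```

The point of the study is that signals like these two deltas, though coincident in time, were separably detectable in striatal and tegmental BOLD responses.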

  17. Spectral optimization for measuring electron density by the dual-energy computed tomography coupled with balanced filter method.

    PubMed

    Saito, Masatoshi

    2009-08-01

    Dual-energy computed tomography (DECT) has the potential for measuring electron density distribution in a human body to predict the range of particle beams for treatment planning in proton or heavy-ion radiotherapy. However, thus far, a practical dual-energy method that can be used to precisely determine electron density for treatment planning in particle radiotherapy has not been developed. In this article, another DECT technique involving a balanced filter method using a conventional x-ray tube is described. For the spectral optimization of DECT using balanced filters, the author calculates beam-hardening error and air kerma required to achieve a desired noise level in electron density and effective atomic number images of a cylindrical water phantom with 50 cm diameter. The calculation enables the selection of beam parameters such as tube voltage, balanced filter material, and its thickness. The optimized parameters were applied to cases with different phantom diameters ranging from 5 to 50 cm for the calculations. The author predicts that the optimal combination of tube voltages would be 80 and 140 kV with Tb/Hf and Bi/Mo filter pairs for the 50-cm-diameter water phantom. When a single phantom calibration at a diameter of 25 cm was employed to cover all phantom sizes, maximum absolute beam-hardening errors were 0.3% and 0.03% for electron density and effective atomic number, respectively, over a range of diameters of the water phantom. The beam-hardening errors were 1/10 or less as compared to those obtained by conventional DECT, although the dose was twice that of the conventional DECT case. From the viewpoint of beam hardening and the tube-loading efficiency, the present DECT using balanced filters would be significantly more effective in measuring the electron density than the conventional DECT. 
Nevertheless, further development of low-exposure imaging technology, as well as x-ray tubes with higher outputs, will be necessary to apply DECT coupled with the balanced filter method in clinical use.

  18. Mental representation of symbols as revealed by vocabulary errors in two bonobos (Pan paniscus).

    PubMed

    Lyn, Heidi

    2007-10-01

    Error analysis has been used in humans to detect implicit representations and categories in language use. The present study utilizes the same technique to report on mental representations and categories in symbol use by two bonobos (Pan paniscus). These bonobos have been shown in published reports to comprehend English at the level of a two-and-a-half-year-old child and to use a keyboard with over 200 visuographic symbols (lexigrams). In this study, vocabulary test errors from over 10 years of data revealed auditory, visual, and spatio-temporal generalizations (errors were more likely to be items that looked like, sounded like, or were frequently associated with the sample item in space or in time), as well as hierarchical and conceptual categorizations. These error data, like those of humans, are a result of spontaneous responding rather than specific training and do not solely depend upon the sample mode (e.g. auditory similarity errors are not universally more frequent with an English sample, nor were visual similarity errors universally more frequent with a photograph sample). However, unlike humans, these bonobos do not make errors based on syntactical confusions (e.g. confusing semantically unrelated nouns), suggesting that they may not separate syntactical and semantic information. These data suggest that apes spontaneously create a complex, hierarchical web of representations when exposed to a symbol system.

  19. 78 FR 11237 - Public Hearing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-15

    ... management of human error in its operations and system safety programs, and the status of PTC implementation... UP's safety management policies and programs associated with human error, operational accident and... Chairman of the Board of Inquiry 2. Introduction of the Board of Inquiry and Technical Panel 3...

  20. Accurate Bit Error Rate Calculation for Asynchronous Chaos-Based DS-CDMA over Multipath Channel

    NASA Astrophysics Data System (ADS)

    Kaddoum, Georges; Roviras, Daniel; Chargé, Pascal; Fournier-Prunaret, Daniele

    2009-12-01

    An accurate approach to computing the bit error rate expression for a multiuser chaos-based DS-CDMA system is presented in this paper. For a more realistic communication system, a slow fading multipath channel is considered, together with a simple RAKE receiver structure. Based on the bit-energy distribution, this approach gives accurate results with a low computational cost compared to other computation methods in the literature. Perfect estimation of the channel coefficients with the associated delays and chaos synchronization is assumed. The bit error rate is derived in terms of the bit-energy distribution, the number of paths, the noise variance, and the number of users. Results are illustrated by theoretical calculations and numerical simulations, which point out the accuracy of our approach.
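The key step described above, averaging the Gaussian-noise conditional error probability over the distribution of bit energies, can be sketched with the standard Q-function. The discrete bit-energy distribution below is a made-up stand-in for the chaotic-sequence energy statistics:

```python
import math

def qfunc(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_averaged(energy_pmf, n0):
    """Average the conditional error probability Q(sqrt(2*Eb/N0)) over a
    discrete bit-energy distribution {energy: probability}."""
    return sum(p * qfunc(math.sqrt(2.0 * eb / n0)) for eb, p in energy_pmf.items())

# Made-up bit-energy spread around a mean of 1.0 (chaotic spreading makes
# the energy per bit vary from bit to bit):
pmf = {0.8: 0.25, 1.0: 0.5, 1.2: 0.25}
fixed = qfunc(math.sqrt(2.0 / 0.5))   # constant-energy reference at Eb = 1.0
spread = ber_averaged(pmf, 0.5)       # energy-spread system, same mean energy
print(fixed < spread)  # True at this SNR: energy variation degrades the BER
```

This is why computing the BER from the bit-energy distribution, rather than from the mean energy alone, matters for chaos-based systems.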

  1. Statistical error in simulations of Poisson processes: Example of diffusion in solids

    NASA Astrophysics Data System (ADS)

    Nilsson, Johan O.; Leetmaa, Mikael; Vekilova, Olga Yu.; Simak, Sergei I.; Skorodumova, Natalia V.

    2016-08-01

    Simulations of diffusion in solids often produce poor statistics of diffusion events. We present an analytical expression for the statistical error in ion conductivity obtained in such simulations. The error expression is not restricted to any computational method in particular, but valid in the context of simulation of Poisson processes in general. This analytical error expression is verified numerically for the case of Gd-doped ceria by running a large number of kinetic Monte Carlo calculations.
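The basic statistic behind such error expressions is that a Poisson-distributed event count N has standard deviation sqrt(N), so the relative statistical error of any quantity proportional to the number of observed diffusion events falls off as 1/sqrt(N). A quick pure-Python Monte Carlo check (not the paper's kinetic Monte Carlo):

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's algorithm for a Poisson-distributed count with mean lam."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(42)
lam = 400.0  # expected number of diffusion events per simulation run
counts = [sample_poisson(lam, rng) for _ in range(2000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
rel_err = math.sqrt(var) / mean
print(round(rel_err, 3), round(1.0 / math.sqrt(lam), 3))  # both near 0.05
```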

  2. Errors from approximation of ODE systems with reduced order models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vassilevska, Tanya

    2016-12-30

    This is a code to calculate the error from approximation of systems of ordinary differential equations (ODEs) by using Proper Orthogonal Decomposition (POD) Reduced Order Models (ROM) methods and to compare and analyze the errors for two POD ROM variants. The first variant is the standard POD ROM, the second variant is a modification of the method using the values of the time derivatives (a.k.a. time-derivative snapshots). The code compares the errors from the two variants under different conditions.

  3. Horizon sensor errors calculated by computer models compared with errors measured in orbit

    NASA Technical Reports Server (NTRS)

    Ward, K. A.; Hogan, R.; Andary, J.

    1982-01-01

    Using a computer program to model the earth's horizon and to duplicate the signal processing procedure employed by the ESA (Earth Sensor Assembly), errors due to radiance variation have been computed for a particular time of the year. Errors actually occurring in flight at the same time of year are inferred from integrated rate gyro data for a satellite of the TIROS series of NASA weather satellites (NOAA-A). The predicted performance is compared with actual flight history.

  4. Intravenous Chemotherapy Compounding Errors in a Follow-Up Pan-Canadian Observational Study.

    PubMed

    Gilbert, Rachel E; Kozak, Melissa C; Dobish, Roxanne B; Bourrier, Venetia C; Koke, Paul M; Kukreti, Vishal; Logan, Heather A; Easty, Anthony C; Trbovich, Patricia L

    2018-05-01

    Intravenous (IV) compounding safety has garnered recent attention as a result of high-profile incidents, awareness efforts from the safety community, and increasingly stringent practice standards. New research with more-sensitive error detection techniques continues to reinforce that error rates with manual IV compounding are unacceptably high. In 2014, our team published an observational study that described three types of previously unrecognized and potentially catastrophic latent chemotherapy preparation errors in Canadian oncology pharmacies that would otherwise be undetectable. We expand on this research and explore whether additional potential human failures are yet to be addressed by practice standards. Field observations were conducted in four cancer center pharmacies in four Canadian provinces from January 2013 to February 2015. Human factors specialists observed and interviewed pharmacy managers, oncology pharmacists, pharmacy technicians, and pharmacy assistants as they carried out their work. Emphasis was on latent errors (potential human failures) that could lead to outcomes such as wrong drug, dose, or diluent. Given the relatively short observational period, no active failures or actual errors were observed. However, 11 latent errors in chemotherapy compounding were identified. In terms of severity, all 11 errors create the potential for a patient to receive the wrong drug or dose, which in the context of cancer care, could lead to death or permanent loss of function. Three of the 11 practices were observed in our previous study, but eight were new. Applicable Canadian and international standards and guidelines do not explicitly address many of the potentially error-prone practices observed. We observed a significant degree of risk for error in manual mixing practice. These latent errors may exist in other regions where manual compounding of IV chemotherapy takes place. 
Continued efforts to advance standards, guidelines, technological innovation, and chemical quality testing are needed.

  5. Accuracy of cited “facts” in medical research articles: A review of study methodology and recalculation of quotation error rate

    PubMed Central

    2017-01-01

    Previous reviews estimated that approximately 20 to 25% of assertions cited from original research articles, or “facts,” are inaccurately quoted in the medical literature. These reviews noted that the original studies were dissimilar and only began to compare the methods of the original studies. The aim of this review is to examine the methods of the original studies and provide a more specific rate of incorrectly cited assertions, or quotation errors, in original research articles published in medical journals. Additionally, the estimate of quotation errors calculated here is based on the ratio of quotation errors to quotations examined (a percent) rather than the more prevalent and weighted metric of quotation errors to the references selected. Overall, this resulted in a lower estimate of the quotation error rate in original medical research articles. A total of 15 studies met the criteria for inclusion in the primary quantitative analysis. Quotation errors were divided into two categories: content ("factual") or source (improper indirect citation) errors. Content errors were further subdivided into major and minor errors depending on the degree that the assertion differed from the original source. The rate of quotation errors recalculated here is 14.5% (10.5% to 18.6% at a 95% confidence interval). These content errors are predominantly, 64.8% (56.1% to 73.5% at a 95% confidence interval), major errors or cited assertions in which the referenced source either fails to substantiate, is unrelated to, or contradicts the assertion. Minor errors, which are an oversimplification, overgeneralization, or trivial inaccuracies, are 35.2% (26.5% to 43.9% at a 95% confidence interval). Additionally, improper secondary (or indirect) citations, which are distinguished from calculations of quotation accuracy, occur at a rate of 10.4% (3.4% to 17.5% at a 95% confidence interval). PMID:28910404
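The rate recalculation described above is a ratio of quotation errors to quotations examined with a confidence interval. The sketch below uses a simple normal-approximation (Wald) interval, not necessarily the review's exact pooling method, and the counts are illustrative, chosen only to land near the reported 14.5% figure:

```python
import math

def rate_with_ci(errors, examined, z=1.96):
    """Proportion and 95% normal-approximation (Wald) confidence interval."""
    p = errors / examined
    half = z * math.sqrt(p * (1.0 - p) / examined)
    return p, p - half, p + half

# Illustrative counts only; the actual review pools error and quotation
# counts across the 15 included studies.
p, lo, hi = rate_with_ci(errors=145, examined=1000)
print(f"{100*p:.1f}% (95% CI {100*lo:.1f}% to {100*hi:.1f}%)")
```

Dividing by quotations examined rather than by references selected is exactly why the recalculated rate lands below the 20 to 25% figures from earlier reviews.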

  6. Accuracy of cited "facts" in medical research articles: A review of study methodology and recalculation of quotation error rate.

    PubMed

    Mogull, Scott A

    2017-01-01

    Previous reviews estimated that approximately 20 to 25% of assertions cited from original research articles, or "facts," are inaccurately quoted in the medical literature. These reviews noted that the original studies were dissimilar and only began to compare the methods of the original studies. The aim of this review is to examine the methods of the original studies and provide a more specific rate of incorrectly cited assertions, or quotation errors, in original research articles published in medical journals. Additionally, the estimate of quotation errors calculated here is based on the ratio of quotation errors to quotations examined (a percent) rather than the more prevalent and weighted metric of quotation errors to the references selected. Overall, this resulted in a lower estimate of the quotation error rate in original medical research articles. A total of 15 studies met the criteria for inclusion in the primary quantitative analysis. Quotation errors were divided into two categories: content ("factual") or source (improper indirect citation) errors. Content errors were further subdivided into major and minor errors depending on the degree that the assertion differed from the original source. The rate of quotation errors recalculated here is 14.5% (10.5% to 18.6% at a 95% confidence interval). These content errors are predominantly, 64.8% (56.1% to 73.5% at a 95% confidence interval), major errors or cited assertions in which the referenced source either fails to substantiate, is unrelated to, or contradicts the assertion. Minor errors, which are an oversimplification, overgeneralization, or trivial inaccuracies, are 35.2% (26.5% to 43.9% at a 95% confidence interval). Additionally, improper secondary (or indirect) citations, which are distinguished from calculations of quotation accuracy, occur at a rate of 10.4% (3.4% to 17.5% at a 95% confidence interval).

  7. Evaluation of the predicted error of the soil moisture retrieval from C-band SAR by comparison against modelled soil moisture estimates over Australia

    PubMed Central

    Doubková, Marcela; Van Dijk, Albert I.J.M.; Sabel, Daniel; Wagner, Wolfgang; Blöschl, Günter

    2012-01-01

The Sentinel-1 will carry onboard a C-band radar instrument that will map the European continent once every four days and the global land surface at least once every twelve days at a finest spatial resolution of 5 × 20 m. The high temporal sampling rate and operational configuration make Sentinel-1 of interest for operational soil moisture monitoring. Currently, updated soil moisture data are made available at 1 km spatial resolution as a demonstration service using Global Mode (GM) measurements from the Advanced Synthetic Aperture Radar (ASAR) onboard ENVISAT. The service demonstrates the potential of C-band observations to monitor variations in soil moisture. Importantly, a retrieval error estimate is also available; such estimates are needed to assimilate observations into models. The retrieval error is estimated by propagating sensor errors through the retrieval model. In this work, the existing ASAR GM retrieval error product is evaluated using independent top soil moisture estimates produced by the grid-based landscape hydrological model (AWRA-L) developed within the Australian Water Resources Assessment system (AWRA). The ASAR GM retrieval error estimate, an assumed prior AWRA-L error estimate, and the variance in the respective datasets were used to spatially predict the root mean square error (RMSE) and the Pearson correlation coefficient R between the two datasets. These were compared with the RMSE calculated directly from the two datasets. The predicted and computed RMSE showed a very high level of agreement in spatial patterns as well as good quantitative agreement; the RMSE was predicted within an accuracy of 4% of saturated soil moisture over 89% of the Australian land mass. Predicted and calculated R maps corresponded within an accuracy of 10% over 61% of the continent. The strong correspondence between the predicted and calculated RMSE and R builds confidence in the retrieval error model and the derived ASAR GM error estimates. 
The ASAR GM and Sentinel-1 have the same basic physical measurement characteristics, so a very similar retrieval error estimation method can be applied. Because of the expected improvements in the radiometric resolution of the Sentinel-1 backscatter measurements, soil moisture estimation errors can be expected to be an order of magnitude smaller than those for ASAR GM. This opens the possibility of operationally available medium resolution soil moisture estimates with well-specified errors that can be assimilated into hydrological or crop yield models, with potentially large benefits for monitoring and modelling land-atmosphere fluxes, crop growth, and the water balance. PMID:23483015
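    The two comparison statistics, RMSE and Pearson's R, can be computed directly for co-located soil moisture series; a sketch with illustrative values (expressed as degree of saturation):

```python
import math

def rmse(a, b):
    """Root mean square error between two aligned series."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def pearson_r(a, b):
    """Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

# Illustrative values: SAR retrieval vs. model estimate at one pixel over time.
sar   = [0.20, 0.35, 0.50, 0.40, 0.15]
model = [0.25, 0.30, 0.55, 0.35, 0.20]
print(rmse(sar, model), pearson_r(sar, model))
```

    In the study these statistics are predicted per pixel from the error estimates and variances, then compared against the values computed directly as above.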

  8. A monitor for the laboratory evaluation of control integrity in digital control systems operating in harsh electromagnetic environments

    NASA Technical Reports Server (NTRS)

    Belcastro, Celeste M.; Fischl, Robert; Kam, Moshe

    1992-01-01

    This paper presents a strategy for dynamically monitoring digital controllers in the laboratory for susceptibility to electromagnetic disturbances that compromise control integrity. The integrity of digital control systems operating in harsh electromagnetic environments can be compromised by upsets caused by induced transient electrical signals. Digital system upset is a functional error mode that involves no component damage, can occur simultaneously in all channels of a redundant control computer, and is software dependent. The motivation for this work is the need to develop tools and techniques that can be used in the laboratory to validate and/or certify critical aircraft controllers operating in electromagnetically adverse environments that result from lightning, high-intensity radiated fields (HIRF), and nuclear electromagnetic pulses (NEMP). The detection strategy presented in this paper provides dynamic monitoring of a given control computer for degraded functional integrity resulting from redundancy management errors, control calculation errors, and control correctness/effectiveness errors. In particular, this paper discusses the use of Kalman filtering, data fusion, and statistical decision theory in monitoring a given digital controller for control calculation errors.
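    One generic way to monitor a controller signal for transient upsets, in the spirit of the Kalman-filter-based detection named above, is an innovation (residual) test: a filter tracks the expected output and flags samples whose normalized innovation squared exceeds a chi-square threshold. A minimal scalar sketch; the dynamics, noise values, and threshold below are all hypothetical, not the paper's monitor:

```python
def monitor(measurements, a=1.0, q=0.01, r=0.04, threshold=6.63):
    """Scalar Kalman filter; flag samples whose normalized innovation
    squared exceeds a chi-square threshold (99th percentile, 1 dof)."""
    x, p = measurements[0], 1.0
    flags = []
    for z in measurements[1:]:
        # Predict.
        x, p = a * x, a * p * a + q
        # Innovation and its variance.
        nu, s = z - x, p + r
        flags.append(nu * nu / s > threshold)
        # Update.
        k = p / s
        x, p = x + k * nu, (1 - k) * p
    return flags

readings = [1.0, 1.02, 0.98, 1.01, 3.5, 1.0]  # transient upset at sample 4
print(monitor(readings))
```

    Note that the sample after the upset is also flagged, since the filter state was pulled toward the bad measurement; a practical monitor would gate the update on a flagged innovation.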

  9. Study on the Rationality and Validity of Probit Models of Domino Effect to Chemical Process Equipment caused by Overpressure

    NASA Astrophysics Data System (ADS)

    Sun, Dongliang; Huang, Guangtuan; Jiang, Juncheng; Zhang, Mingguang; Wang, Zhirong

    2013-04-01

Overpressure is an important cause of domino effects in chemical process equipment accidents. Previous studies have proposed models of the propagation probability and threshold values of the domino effect caused by overpressure. To test the rationality and validity of the models reported in the references, the two boundary values separating the three damage degrees were treated as random variables in the interval [0, 100%]. Based on the reported overpressure damage data and damage states, and the calculation method given in the references, the mean square errors of the four categories of overpressure damage probability models were calculated over the random boundary values, yielding a relationship between mean square error and the two boundary values, from which the minimum mean square error was obtained. Compared with the result of the present work, the mean square error decreases by about 3%. This error is within the acceptable range for engineering applications, so the reported models can be considered reasonable and valid.
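    The boundary-value search described above can be sketched generically: treat the two damage-degree boundaries as free variables, compute the model's mean square error against the observed damage states for each candidate pair, and keep the minimum. The toy damage model and data below are entirely hypothetical:

```python
def mse(pred, obs):
    """Mean square error between predictions and observations."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(pred)

def best_boundaries(overpressures, observed_states, model):
    """Grid-search the two damage-degree boundaries b1 < b2 in [0, 1]
    for the pair minimizing the mean square error of `model`."""
    best = (float("inf"), None)
    grid = [i / 100 for i in range(101)]
    for b1 in grid:
        for b2 in grid:
            if b1 >= b2:
                continue
            pred = [model(x, b1, b2) for x in overpressures]
            best = min(best, (mse(pred, observed_states), (b1, b2)))
    return best

# Toy step "model": damage probability rises with each boundary crossed.
def toy(x, b1, b2):
    return 0.1 if x < b1 else (0.5 if x < b2 else 0.9)

data = [0.05, 0.2, 0.4, 0.6, 0.8]   # normalized overpressures (hypothetical)
obs  = [0.1, 0.1, 0.5, 0.9, 0.9]    # observed damage states (hypothetical)
err, (b1, b2) = best_boundaries(data, obs, toy)
```

    The actual study uses probit damage probability models rather than this step function; only the minimize-MSE-over-boundaries idea is illustrated.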

  10. Performance of a laser microsatellite network with an optical preamplifier.

    PubMed

    Arnon, Shlomi

    2005-04-01

Laser satellite communication (LSC) uses free space as a propagation medium for various applications, such as intersatellite communication or satellite networking. An LSC system includes a laser transmitter and an optical receiver. For communication to occur, the lines of sight of the transmitter and the receiver must be aligned. However, mechanical vibration and electronic noise in the control system reduce alignment between the transmitter laser beam and the receiver field of view (FOV), which results in pointing errors. The outcome of pointing errors is fading of the received signal, which leads to impaired link performance. An LSC system is considered in which an optical preamplifier is incorporated into the receiver, and a bit error probability (BEP) model is derived that takes into account the statistics of the pointing error as well as the optical amplifier and communication system parameters. The model and the numerical calculation results indicate that random pointing errors of σχ²G > 0.05 penalize communication performance dramatically for all combinations of optical amplifier gains and noise figures that were calculated.

  11. Spacecraft and propulsion technician error

    NASA Astrophysics Data System (ADS)

    Schultz, Daniel Clyde

Commercial aviation and commercial space similarly launch, fly, and land passenger vehicles. Unlike aviation, the U.S. government has not established maintenance policies for commercial space. This study conducted a mixed methods review of 610 U.S. space launches from 1984 through 2011, which included 31 failures. An analysis of the failure causal factors showed that human error accounted for 76% of those failures, with workmanship error accounting for 29%. With commercial space travel imminent, the increased potential for loss of human life demands changes to standardized procedures, training, and certification to reduce human error and failure rates. This study made several recommendations to the FAA's Office of Commercial Space Transportation, space launch vehicle operators, and maintenance technician schools in an effort to increase the safety of space transportation passengers.

  12. Alchemy: A Web 2.0 Real-time Quality Assurance Platform for Human Immunodeficiency Virus, Hepatitis C Virus, and BK Virus Quantitation Assays

    PubMed Central

    Agosto-Arroyo, Emmanuel; Coshatt, Gina M.; Winokur, Thomas S.; Harada, Shuko; Park, Seung L.

    2017-01-01

Background: The molecular diagnostics laboratory faces the challenge of improving test turnaround time (TAT). Low and consistent TATs are of great clinical and regulatory importance, especially for molecular virology tests. Laboratory information systems (LISs) contain all the data elements necessary to do accurate quality assurance (QA) reporting of TAT and other measures, but these reports are in most cases still prepared manually: a time-consuming and error-prone task. The aim of this study was to develop a web-based real-time QA platform that would automate QA reporting in the molecular diagnostics laboratory at our institution and minimize the time expended in preparing these reports. Methods: Using a standard Linux, Nginx, MariaDB, PHP stack virtual machine running atop a Dell Precision 5810, we designed and built a web-based QA platform, code-named Alchemy. Data files pulled periodically from the LIS in comma-separated value format were used to autogenerate QA reports for the human immunodeficiency virus (HIV), hepatitis C virus (HCV), and BK virus (BKV) quantitation assays. Alchemy allowed the user to select a specific timeframe to be analyzed and calculated key QA statistics in real time, including the average TAT in days, tests falling outside the expected TAT ranges, and test result ranges. Results: Before implementing Alchemy, reporting QA for the HIV, HCV, and BKV quantitation assays took 45–60 min of personnel time per test every month. With Alchemy, that time has decreased to 15 min total per month. Alchemy allowed the user to select specific periods of time and analyze the TAT data in depth without the need for extensive manual calculations. Conclusions: Alchemy has significantly decreased the time and the human error associated with QA report generation in our molecular diagnostics laboratory. Other tests will be added to this web-based platform in future updates. 
This effort shows the utility of informatician-supervised resident/fellow programming projects as learning opportunities and workflow improvements in the molecular laboratory. PMID:28480121
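    The TAT roll-up such a platform automates can be sketched from a LIS comma-separated export; the column names, accession format, and timestamps below are hypothetical:

```python
import csv
import io
from datetime import datetime

def tat_report(csv_text, max_days=7.0):
    """Average turnaround time in days, plus accessions exceeding the expected TAT."""
    fmt = "%Y-%m-%d %H:%M"
    tats, late = [], []
    for row in csv.DictReader(io.StringIO(csv_text)):
        received = datetime.strptime(row["received"], fmt)
        reported = datetime.strptime(row["reported"], fmt)
        days = (reported - received).total_seconds() / 86400
        tats.append(days)
        if days > max_days:
            late.append(row["accession"])
    return sum(tats) / len(tats), late

lis_export = """accession,received,reported
HIV-001,2017-01-02 08:00,2017-01-05 08:00
HCV-002,2017-01-03 09:00,2017-01-12 09:00
BKV-003,2017-01-04 10:00,2017-01-06 10:00
"""
avg, late = tat_report(lis_export)
print(f"mean TAT {avg:.1f} d; outside expected range: {late}")
```

    A dashboard would run this per assay and per user-selected timeframe, which is essentially the statistic the abstract describes computing in real time.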

  13. REGRES: A FORTRAN-77 program to calculate nonparametric and ``structural'' parametric solutions to bivariate regression equations

    NASA Astrophysics Data System (ADS)

    Rock, N. M. S.; Duffy, T. R.

REGRES allows a range of regression equations to be calculated for paired sets of data values in which both variables are subject to error (i.e. neither is the "independent" variable). Nonparametric regressions, based on medians of all possible pairwise slopes and intercepts, are treated in detail. Estimated slopes and intercepts are output, along with confidence limits, Spearman and Kendall rank correlation coefficients. Outliers can be rejected with user-determined stringency. Parametric regressions can be calculated for any value of λ (the ratio of the variances of the random errors for y and x), including: (1) major axis (λ = 1); (2) reduced major axis (λ = variance of y/variance of x); (3) Y on X (λ = ∞); or (4) X on Y (λ = 0) solutions. Pearson linear correlation coefficients also are output. REGRES provides an alternative to conventional isochron assessment techniques where bivariate normal errors cannot be assumed, or weighting methods are inappropriate.
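    The nonparametric fit described is essentially a Theil-Sen estimator: the slope is the median of all pairwise slopes and the intercept the median of y - slope*x. A sketch (in Python rather than FORTRAN-77, and without the confidence limits and outlier rejection REGRES adds):

```python
from statistics import median

def theil_sen(x, y):
    """Median of all pairwise slopes; intercept from the median residual."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x)) for j in range(i + 1, len(x))
              if x[j] != x[i]]
    slope = median(slopes)
    intercept = median(yi - slope * xi for xi, yi in zip(x, y))
    return slope, intercept

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.0, 8.1, 9.9]   # roughly y = 2x with small scatter
slope, intercept = theil_sen(x, y)
```

    Because it uses medians rather than sums of squares, the fit is robust to a minority of outlying points, which is the motivation stated in the abstract for preferring it when bivariate normal errors cannot be assumed.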

  14. The resolution of identity and chain of spheres approximations for the LPNO-CCSD singles Fock term

    NASA Astrophysics Data System (ADS)

    Izsák, Róbert; Hansen, Andreas; Neese, Frank

    2012-10-01

In the present work, the RIJCOSX approximation, developed earlier for accelerating the SCF procedure, is applied to one of the limiting factors of LPNO-CCSD calculations: the evaluation of the singles Fock term. It turns out that the introduction of RIJCOSX in the evaluation of the closed-shell LPNO-CCSD singles Fock term causes errors below the microhartree limit. If the proposed procedure is also combined with RIJCOSX in the SCF, a somewhat larger error occurs, but reaction energy errors still remain negligible. The speedup for the singles Fock term alone is about 9-10 fold for the largest basis set applied. For the case of penicillin using the def2-QZVPP basis set, a single point energy evaluation takes 2 days and 16 h on a single processor, leading to a total speedup of 2.6 as compared to a fully analytic calculation. Using eight processors, the same calculation takes only 14 h.

  15. 42 CFR 1005.23 - Harmless error.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 5 2012-10-01 2012-10-01 false Harmless error. 1005.23 Section 1005.23 Public Health OFFICE OF INSPECTOR GENERAL-HEALTH CARE, DEPARTMENT OF HEALTH AND HUMAN SERVICES OIG AUTHORITIES APPEALS OF EXCLUSIONS, CIVIL MONEY PENALTIES AND ASSESSMENTS § 1005.23 Harmless error. No error in either...

  16. 42 CFR 1005.23 - Harmless error.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 5 2014-10-01 2014-10-01 false Harmless error. 1005.23 Section 1005.23 Public Health OFFICE OF INSPECTOR GENERAL-HEALTH CARE, DEPARTMENT OF HEALTH AND HUMAN SERVICES OIG AUTHORITIES APPEALS OF EXCLUSIONS, CIVIL MONEY PENALTIES AND ASSESSMENTS § 1005.23 Harmless error. No error in either...

  17. 42 CFR 1005.23 - Harmless error.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Harmless error. 1005.23 Section 1005.23 Public Health OFFICE OF INSPECTOR GENERAL-HEALTH CARE, DEPARTMENT OF HEALTH AND HUMAN SERVICES OIG AUTHORITIES APPEALS OF EXCLUSIONS, CIVIL MONEY PENALTIES AND ASSESSMENTS § 1005.23 Harmless error. No error in either...

  18. 42 CFR 1005.23 - Harmless error.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 5 2013-10-01 2013-10-01 false Harmless error. 1005.23 Section 1005.23 Public Health OFFICE OF INSPECTOR GENERAL-HEALTH CARE, DEPARTMENT OF HEALTH AND HUMAN SERVICES OIG AUTHORITIES APPEALS OF EXCLUSIONS, CIVIL MONEY PENALTIES AND ASSESSMENTS § 1005.23 Harmless error. No error in either...

  19. 42 CFR 1005.23 - Harmless error.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 5 2011-10-01 2011-10-01 false Harmless error. 1005.23 Section 1005.23 Public Health OFFICE OF INSPECTOR GENERAL-HEALTH CARE, DEPARTMENT OF HEALTH AND HUMAN SERVICES OIG AUTHORITIES APPEALS OF EXCLUSIONS, CIVIL MONEY PENALTIES AND ASSESSMENTS § 1005.23 Harmless error. No error in either...

  20. 42 CFR 3.552 - Harmless error.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Harmless error. 3.552 Section 3.552 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL PROVISIONS PATIENT SAFETY ORGANIZATIONS AND PATIENT SAFETY WORK PRODUCT Enforcement Program § 3.552 Harmless error. No error in either the...

  1. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  2. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  3. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 1 2012-10-01 2012-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  4. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 1 2011-10-01 2011-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  5. Defense Mapping Agency (DMA) Raster-to-Vector Analysis

    DTIC Science & Technology

    1984-11-30

model) to pinpoint critical deficiencies and understand trade-offs between alternative solutions. This may be exemplified by the allocation of human ...process, prone to errors (i.e., human operator eye/motor control limitations), and its time consuming nature (as a function of data density). It should...achieved through the facilities of computer interactive graphics. Each error or anomaly is individually identified by a human operator and corrected

  6. Uncertainty Propagation in an Ecosystem Nutrient Budget.

    EPA Science Inventory

    New aspects and advancements in classical uncertainty propagation methods were used to develop a nutrient budget with associated error for a northern Gulf of Mexico coastal embayment. Uncertainty was calculated for budget terms by propagating the standard error and degrees of fr...
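    For a budget that sums independent terms, the classical propagation rule combines the standard errors in quadrature; a sketch with hypothetical flux values:

```python
import math

def budget_with_error(terms):
    """Sum budget terms and propagate independent standard errors in quadrature."""
    total = sum(value for value, _ in terms)
    se = math.sqrt(sum(err ** 2 for _, err in terms))
    return total, se

# Hypothetical nutrient fluxes as (value, standard error), e.g. tonnes N per year:
# inputs, export, and burial, with export entered as a negative term.
terms = [(120.0, 10.0), (-45.0, 5.0), (30.0, 12.0)]
total, se = budget_with_error(terms)
print(total, se)
```

    Correlated terms would need covariance cross-terms as well; the quadrature form above assumes independence, which is the textbook starting point for this kind of budget.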

  7. Explicitly Representing the Solvation Shell in Continuum Solvent Calculations

    PubMed Central

    Svendsen, Hallvard F.; Merz, Kenneth M.

    2009-01-01

A method is presented to explicitly represent the first solvation shell in continuum solvation calculations. Initial solvation shell geometries were generated with classical molecular dynamics simulations. Clusters consisting of the solute and 5 solvent molecules were fully relaxed in quantum mechanical calculations. The free energy of solvation of the solute was calculated from the free energy of formation of the cluster and the solvation free energy of the cluster calculated with continuum solvation models. The method has been implemented with two continuum solvation models, a Poisson-Boltzmann model and the IEF-PCM model. Calculations were carried out for a set of 60 ionic species. Implemented with the Poisson-Boltzmann model, the method gave an unsigned average error of 2.1 kcal/mol and an RMSD of 2.6 kcal/mol for anions; for cations, the unsigned average error was 2.8 kcal/mol and the RMSD 3.9 kcal/mol. Similar results were obtained with the IEF-PCM model. PMID:19425558

  8. Development and application of artificial neural network models to estimate values of a complex human thermal comfort index associated with urban heat and cool island patterns using air temperature data from a standard meteorological station

    NASA Astrophysics Data System (ADS)

    Moustris, Konstantinos; Tsiros, Ioannis X.; Tseliou, Areti; Nastos, Panagiotis

    2018-04-01

The present study deals with the development and application of artificial neural network models (ANNs) to estimate the values of a complex human thermal comfort-discomfort index associated with urban heat and cool island conditions inside various urban clusters, using only air temperature data from a standard meteorological station as input. The index used in the study is the Physiologically Equivalent Temperature (PET) index, which requires as inputs, among others, air temperature, relative humidity, wind speed, and radiation (short- and long-wave components). For the estimation of hourly PET values, ANN models were developed, appropriately trained, and tested. Model results are compared to values calculated by the PET index based on field monitoring data for various urban clusters (street, square, park, courtyard, and gallery) in the city of Athens (Greece) during an extremely hot summer period. For the evaluation of the predictive ability of the developed ANN models, several statistical evaluation indices were applied: the mean bias error, the root mean square error, the index of agreement, the coefficient of determination, the true predictive rate, the false alarm rate, and the Success Index. According to the results, ANNs show a remarkable ability to estimate hourly PET values within various urban clusters using only hourly values of air temperature. This is very important in cases where human thermal comfort-discomfort conditions have to be analyzed and the only available parameter is air temperature.
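    The evaluation indices named above have standard definitions; a sketch of three of them (mean bias error, root mean square error, and Willmott's index of agreement), with hypothetical PET values:

```python
import math

def mbe(pred, obs):
    """Mean bias error."""
    return sum(p - o for p, o in zip(pred, obs)) / len(obs)

def rmse(pred, obs):
    """Root mean square error."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def index_of_agreement(pred, obs):
    """Willmott's index of agreement d, bounded in [0, 1]."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((p - o) ** 2 for p, o in zip(pred, obs))
    pot = sum((abs(p - mean_obs) + abs(o - mean_obs)) ** 2 for p, o in zip(pred, obs))
    return 1 - sse / pot

pet_obs = [29.0, 33.5, 38.0, 41.0, 36.5]   # PET from field monitoring (degrees C), hypothetical
pet_ann = [30.0, 33.0, 37.0, 42.0, 36.0]   # ANN estimates, hypothetical
bias = mbe(pet_ann, pet_obs)
err = rmse(pet_ann, pet_obs)
d = index_of_agreement(pet_ann, pet_obs)
print(bias, err, d)
```

    A near-zero MBE with a nonzero RMSE, as in this toy case, is the usual signature of unbiased but scattered estimates; the index of agreement summarizes both effects on a 0-1 scale.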

  9. Development and application of artificial neural network models to estimate values of a complex human thermal comfort index associated with urban heat and cool island patterns using air temperature data from a standard meteorological station.

    PubMed

    Moustris, Konstantinos; Tsiros, Ioannis X; Tseliou, Areti; Nastos, Panagiotis

    2018-04-11

The present study deals with the development and application of artificial neural network models (ANNs) to estimate the values of a complex human thermal comfort-discomfort index associated with urban heat and cool island conditions inside various urban clusters, using only air temperature data from a standard meteorological station as input. The index used in the study is the Physiologically Equivalent Temperature (PET) index, which requires as inputs, among others, air temperature, relative humidity, wind speed, and radiation (short- and long-wave components). For the estimation of hourly PET values, ANN models were developed, appropriately trained, and tested. Model results are compared to values calculated by the PET index based on field monitoring data for various urban clusters (street, square, park, courtyard, and gallery) in the city of Athens (Greece) during an extremely hot summer period. For the evaluation of the predictive ability of the developed ANN models, several statistical evaluation indices were applied: the mean bias error, the root mean square error, the index of agreement, the coefficient of determination, the true predictive rate, the false alarm rate, and the Success Index. According to the results, ANNs show a remarkable ability to estimate hourly PET values within various urban clusters using only hourly values of air temperature. This is very important in cases where human thermal comfort-discomfort conditions have to be analyzed and the only available parameter is air temperature.

  10. Correcting for strong eddy current induced B0 modulation enables two-spoke RF pulse design with parallel transmission: demonstration at 9.4T in the human brain.

    PubMed

    Wu, Xiaoping; Adriany, Gregor; Ugurbil, Kamil; Van de Moortele, Pierre-Francois

    2013-01-01

Successful implementation of homogeneous slice-selective RF excitation in the human brain at 9.4T using 16-channel parallel transmission (pTX) is demonstrated. A novel three-step pulse design method incorporating fast real-time measurement of eddy current induced B0 variations, as well as correction of the resulting phase errors during excitation, is described. To demonstrate the utility of the proposed method, phantom and in-vivo experiments targeting a uniform excitation in an axial slice were conducted using two-spoke pTX pulses. Even with the pre-emphasis activated, eddy current induced B0 variations with peak-to-peak values greater than 4 kHz were observed on our system during the rapid switches of slice-selective gradients. This large B0 variation, when not corrected, resulted in drastically degraded excitation fidelity, with the coefficient of variation (CV) of the flip angle calculated for the region of interest being large (~12% in the phantom and ~35% in the brain). By comparison, excitation fidelity was effectively restored and satisfactory flip angle uniformity was achieved when using the proposed method, with the CV value reduced to ~3% in the phantom and ~8% in the brain. Additionally, experimental results were in good agreement with the numerical predictions obtained from Bloch simulations. Slice-selective flip angle homogenization in the human brain at 9.4T using 16-channel 3D spoke pTX pulses is achievable despite large eddy current induced excitation phase errors; correcting for the latter was critical to this success.
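    The fidelity metric used above, the coefficient of variation of the flip angle over the region of interest, is simply the standard deviation divided by the mean; a sketch with hypothetical flip-angle samples:

```python
import math

def coefficient_of_variation(values):
    """CV = standard deviation / mean, as a percentage."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return 100.0 * math.sqrt(var) / mean

# Hypothetical flip-angle samples (degrees) across an ROI of a flip-angle map.
flip_angles = [9.7, 10.1, 10.3, 9.9, 10.0]
cv = coefficient_of_variation(flip_angles)
print(f"CV = {cv:.1f}%")
```

    In practice the ROI would be a masked 2D flip-angle map with thousands of voxels rather than five samples; the statistic is the same.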

  11. JPL-ANTOPT antenna structure optimization program

    NASA Technical Reports Server (NTRS)

    Strain, D. M.

    1994-01-01

New antenna path-length error and pointing-error structure optimization codes were recently added to the MSC/NASTRAN structural analysis computer program. Path-length and pointing errors are important measures of structure-related antenna performance. The path-length and pointing errors are treated as scalar displacements for static loading cases. These scalar displacements can be subject to constraint during the optimization process. Path-length and pointing-error calculations supplement the other optimization and sensitivity capabilities of NASTRAN. The analysis and design functions were implemented as 'DMAP ALTERs' to the Design Optimization (SOL 200) Solution Sequence of MSC/NASTRAN, Version 67.5.

  12. How we learn to make decisions: rapid propagation of reinforcement learning prediction errors in humans.

    PubMed

    Krigolson, Olav E; Hassall, Cameron D; Handy, Todd C

    2014-03-01

Our ability to make decisions is predicated upon our knowledge of the outcomes of the actions available to us. Reinforcement learning theory posits that actions followed by a reward or punishment acquire value through the computation of prediction errors: discrepancies between the predicted and the actual reward. A multitude of neuroimaging studies have demonstrated that rewards and punishments evoke neural responses that appear to reflect reinforcement learning prediction errors [e.g., Krigolson, O. E., Pierce, L. J., Holroyd, C. B., & Tanaka, J. W. Learning to become an expert: Reinforcement learning and the acquisition of perceptual expertise. Journal of Cognitive Neuroscience, 21, 1833-1840, 2009; Bayer, H. M., & Glimcher, P. W. Midbrain dopamine neurons encode a quantitative reward prediction error signal. Neuron, 47, 129-141, 2005; O'Doherty, J. P. Reward representations and reward-related learning in the human brain: Insights from neuroimaging. Current Opinion in Neurobiology, 14, 769-776, 2004; Holroyd, C. B., & Coles, M. G. H. The neural basis of human error processing: Reinforcement learning, dopamine, and the error-related negativity. Psychological Review, 109, 679-709, 2002]. Here, we used the event-related brain potential (ERP) technique to demonstrate not only that rewards elicit a neural response akin to a prediction error but also that this signal rapidly diminished and propagated to the time of choice presentation with learning. Specifically, in a simple, learnable gambling task, we show that novel rewards elicited a feedback error-related negativity that rapidly decreased in amplitude with learning. Furthermore, we demonstrate the existence of a reward positivity at choice presentation, a previously unreported ERP component with timing and topography similar to the feedback error-related negativity, which increased in amplitude with learning. 
The pattern of results we observed mirrored the output of a computational model that we implemented to compute reward prediction errors and the changes in amplitude of these prediction errors at the time of choice presentation and reward delivery. Our results provide further support that the computations that underlie human learning and decision-making follow reinforcement learning principles.
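    The prediction-error dynamics described above can be sketched with a simple Rescorla-Wagner / TD(0) style update (a generic illustration, not the authors' exact model; the learning rate and reward sequence are hypothetical):

```python
def simulate_learning(rewards, alpha=0.2):
    """Track value V and prediction error delta = r - V across trials."""
    v, deltas, values = 0.0, [], []
    for r in rewards:
        values.append(v)        # value signal available at choice presentation
        delta = r - v           # prediction error at feedback
        deltas.append(delta)
        v += alpha * delta      # Rescorla-Wagner / TD(0) update
    return values, deltas

# Constant reward across ten trials of a learnable gamble.
values, deltas = simulate_learning([1.0] * 10)
```

    With a constant reward, the feedback prediction error decays trial by trial while the value carried to choice presentation grows toward the reward, mirroring the reported shrinking feedback negativity and growing reward positivity.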

  13. Causal Evidence from Humans for the Role of Mediodorsal Nucleus of the Thalamus in Working Memory.

    PubMed

    Peräkylä, Jari; Sun, Lihua; Lehtimäki, Kai; Peltola, Jukka; Öhman, Juha; Möttönen, Timo; Ogawa, Keith H; Hartikainen, Kaisa M

    2017-12-01

    The mediodorsal nucleus of the thalamus (MD), with its extensive connections to the lateral pFC, has been implicated in human working memory and executive functions. However, this understanding is based solely on indirect evidence from human lesion and imaging studies and animal studies. Direct, causal evidence from humans is missing. To obtain direct evidence for MD's role in humans, we studied patients treated with deep brain stimulation (DBS) for refractory epilepsy. This treatment is thought to prevent the generalization of a seizure by disrupting the functioning of the patient's anterior nuclei of the thalamus (ANT) with high-frequency electric stimulation. This structure is located superior and anterior to MD, and when the DBS lead is implanted in ANT, tip contacts of the lead typically penetrate through ANT into the adjoining MD. To study the role of MD in human executive functions and working memory, we periodically disrupted and recovered MD's function with high-frequency electric stimulation using DBS contacts reaching MD while participants performed a cognitive task engaging several aspects of executive functions. We hypothesized that the efficacy of executive functions, specifically working memory, is impaired when the functioning of MD is perturbed by high-frequency stimulation. Eight participants treated with ANT-DBS for refractory epilepsy performed a computer-based test of executive functions while DBS was repeatedly switched ON and OFF at MD and at the control location (ANT). In comparison to stimulation of the control location, when MD was stimulated, participants committed 2.26 times more errors in general (total errors; OR = 2.26, 95% CI [1.69, 3.01]) and 2.86 times more working memory-related errors specifically (incorrect button presses; OR = 2.88, CI [1.95, 4.24]). 
Similarly, participants committed 1.81 times more errors in general (OR = 1.81, CI [1.45, 2.24]) and 2.08 times more working memory-related errors (OR = 2.08, CI [1.57, 2.75]) in comparison to the no-stimulation condition. "Total errors" is a composite score consisting of basic error types and was mostly driven by working memory-related errors. The fact that MD and the control location, ANT, are only a few millimeters apart, yet their stimulation produces very different results, highlights the location-specific effect of DBS rather than a regionally unspecific general effect. In conclusion, disrupting and recovering MD's function with high-frequency electric stimulation modulated participants' online working memory performance, providing causal, in vivo evidence from humans for the role of MD in human working memory.

  14. Relative-Error-Covariance Algorithms

    NASA Technical Reports Server (NTRS)

    Bierman, Gerald J.; Wolff, Peter J.

    1991-01-01

Two algorithms compute error covariance of difference between optimal estimates, based on data acquired during overlapping or disjoint intervals, of state of discrete linear system. Provides quantitative measure of mutual consistency or inconsistency of estimates of states. Relative-error-covariance concept applied to determine degree of correlation between trajectories calculated from two overlapping sets of measurements and to construct real-time test of consistency of state estimates based upon recently acquired data.
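    The underlying relation is simple even though the cited algorithms compute it in a more elaborate, numerically careful form: for two estimates of the same state with error covariances P1 and P2 and cross-covariance C, the covariance of their difference is P1 + P2 - C - C^T. A direct sketch with hypothetical 2x2 matrices as nested lists:

```python
def relative_error_covariance(p1, p2, c):
    """Covariance of (x1_hat - x2_hat): P1 + P2 - C - C^T."""
    n = len(p1)
    return [[p1[i][j] + p2[i][j] - c[i][j] - c[j][i] for j in range(n)]
            for i in range(n)]

p1 = [[2.0, 0.3], [0.3, 1.0]]   # error covariance of estimate 1 (hypothetical)
p2 = [[1.0, 0.1], [0.1, 2.0]]   # error covariance of estimate 2 (hypothetical)
c  = [[0.5, 0.2], [0.0, 0.4]]   # cross-covariance between the two (hypothetical)
result = relative_error_covariance(p1, p2, c)
print(result)
```

    A consistency test then compares the observed estimate difference against this covariance; a difference far outside it flags mutually inconsistent estimates.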

  15. Interruption Practice Reduces Errors

    DTIC Science & Technology

    2014-01-01

dangers of errors at the PCS. Electronic health record systems are used to reduce certain errors related to poor handwriting and dosage...10.16, MSE = .31, p < .05, η² = .18. A significant interaction between the number of interruptions and interrupted trials suggests that trials...the variance when calculating whether a memory has a higher signal than interference. If something in addition to activation contributes to goal

  16. How to deal with multiple binding poses in alchemical relative protein-ligand binding free energy calculations.

    PubMed

    Kaus, Joseph W; Harder, Edward; Lin, Teng; Abel, Robert; McCammon, J Andrew; Wang, Lingle

    2015-06-09

Recent advances in improved force fields and sampling methods have made accurate calculation of protein–ligand binding free energies possible. Alchemical free energy perturbation (FEP) using an explicit solvent model is one of the most rigorous methods to calculate relative binding free energies. However, for cases where there are high energy barriers separating the relevant conformations that are important for ligand binding, the calculated free energy may depend on the initial conformation used in the simulation due to the lack of complete sampling of all the important regions in phase space. This is particularly true for ligands with multiple possible binding modes separated by high energy barriers, making it difficult to sample all relevant binding modes even with modern enhanced sampling methods. In this paper, we apply a previously developed method that provides a corrected binding free energy for ligands with multiple binding modes by combining the free energy results from multiple alchemical FEP calculations starting from all enumerated poses, and the results are compared with Glide docking and MM-GBSA calculations. From these calculations, the dominant ligand binding mode can also be predicted. We apply this method to a series of ligands that bind to c-Jun N-terminal kinase-1 (JNK1) and obtain improved free energy results. The dominant ligand binding modes predicted by this method agree with the available crystallography, while both Glide docking and MM-GBSA calculations incorrectly predict the binding modes for some ligands. The method also helps separate the force field error from the ligand sampling error, such that deviations in the predicted binding free energy from the experimental values likely indicate possible inaccuracies in the force field.
An error in the force field for a subset of the ligands studied was identified using this method, and improved free energy results were obtained by correcting the partial charges assigned to the ligands. This improved the root-mean-square error (RMSE) for the predicted binding free energy from 1.9 kcal/mol with the original partial charges to 1.3 kcal/mol with the corrected partial charges.
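One standard way to combine per-pose results into a single effective binding free energy is Boltzmann weighting of the poses; whether this matches the paper's exact correction is not claimed, so treat the following as a hedged sketch (kT value approximate, pose energies invented):

```python
import math

KT = 0.593  # kcal/mol at ~298 K

def combined_binding_free_energy(pose_dgs, kT=KT):
    """Boltzmann-combine per-pose binding free energies (kcal/mol):
    dG = -kT * ln( sum_i exp(-dG_i / kT) ); the lowest-dG pose dominates."""
    return -kT * math.log(sum(math.exp(-dg / kT) for dg in pose_dgs))

def pose_populations(pose_dgs, kT=KT):
    """Relative population of each pose; the largest weight identifies the
    predicted dominant binding mode."""
    ws = [math.exp(-dg / kT) for dg in pose_dgs]
    z = sum(ws)
    return [w / z for w in ws]
```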

  18. A Method for the Study of Human Factors in Aircraft Operations

    NASA Technical Reports Server (NTRS)

    Barnhart, W.; Billings, C.; Cooper, G.; Gilstrap, R.; Lauber, J.; Orlady, H.; Puskas, B.; Stephens, W.

    1975-01-01

    A method for the study of human factors in the aviation environment is described. A conceptual framework is provided within which pilot and other human errors in aircraft operations may be studied with the intent of finding out how, and why, they occurred. An information processing model of human behavior serves as the basis for the acquisition and interpretation of information relating to occurrences which involve human error. A systematic method of collecting such data is presented and discussed. The classification of the data is outlined.

  19. An interactive framework for acquiring vision models of 3-D objects from 2-D images.

    PubMed

    Motai, Yuichi; Kak, Avinash

    2004-02-01

This paper presents a human-computer interaction (HCI) framework for building vision models of three-dimensional (3-D) objects from their two-dimensional (2-D) images. Our framework is based on two guiding principles of HCI: 1) provide the human with as much visual assistance as possible to help the human make a correct input; and 2) verify each input provided by the human for its consistency with the inputs previously provided. For example, when stereo correspondence information is elicited from a human, his/her job is facilitated by superimposing epipolar lines on the images. Although that reduces the possibility of error in the human marked correspondences, such errors are not entirely eliminated because there can be multiple candidate points close together for complex objects. For another example, when pose-to-pose correspondence is sought from a human, his/her job is made easier by allowing the human to rotate the partial model constructed in the previous pose in relation to the partial model for the current pose. While this facility reduces the incidence of human-supplied pose-to-pose correspondence errors, such errors cannot be eliminated entirely because of confusion created when multiple candidate features exist close together. Each input provided by the human is therefore checked against the previous inputs by invoking situation-specific constraints. Different types of constraints (and different human-computer interaction protocols) are needed for the extraction of polygonal features and for the extraction of curved features. We will show results on both polygonal objects and objects containing curved features.
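The epipolar assistance described above reduces, in two-view geometry, to drawing the line l' = F·x in the second image for a point x marked in the first, and measuring how far a human-marked match lies from it. A minimal sketch, assuming a fundamental matrix F is already available from calibration:

```python
import numpy as np

def epipolar_line(F, x):
    """Epipolar line l' = F @ x in the second image for homogeneous point
    x = (u, v, 1) in the first image; l' = (a, b, c) satisfies
    a*u' + b*v' + c = 0."""
    return F @ np.asarray(x, dtype=float)

def point_line_distance(l, x):
    """Distance of pixel (x[0], x[1]) from line l = (a, b, c); a small
    distance indicates a human-marked correspondence near the epipolar line."""
    a, b, c = l
    return abs(a * x[0] + b * x[1] + c) / np.hypot(a, b)
```

The identity matrix used in a quick check is not a physically meaningful F; it only exercises the arithmetic.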

  20. Favorable ecological circumstances promote life expectancy in chimpanzees similar to that of human hunter-gatherers

    PubMed Central

    Wood, Brian M.; Watts, David P.; Mitani, John C.; Langergraber, Kevin E.

    2017-01-01

Demographic data on wild chimpanzees are crucial for understanding the evolution of chimpanzee and hominin life histories, but most data come from populations affected by disease outbreaks and anthropogenic disturbance. We present survivorship data from a relatively undisturbed and exceptionally large community of eastern chimpanzees (Pan troglodytes schweinfurthii) at Ngogo, Kibale National Park, Uganda. We monitored births, deaths, immigrations, and emigrations in the community between 1995 and 2016. Using known and estimated ages, we calculated survivorship curves for the whole community, for males and females separately, and for individuals ≤2 years old when identified. We used a novel method to address age estimation error by calculating stochastic survivorship curves. We compared Ngogo life expectancy, survivorship, and mortality rates to those from other chimpanzee communities and human hunter-gatherers. Life expectancy at birth for both sexes combined was 32.8 years, far exceeding estimates of chimpanzee life expectancy in other communities, and falling within the range of human hunter-gatherers (i.e., 27–37 years). Overall, the pattern of survivorship at Ngogo was more similar to that of human hunter-gatherers than to other chimpanzee communities. Maximum lifespan for the Ngogo chimpanzees, however, was similar to that reported at other chimpanzee research sites and was less than that of human hunter-gatherers. The absence of predation by large carnivores may contribute to some of the higher survivorship at Ngogo, but this cannot explain the much higher survivorship at Ngogo than at Kanyawara, another chimpanzee community in the same forest, which also lacks large carnivores. Higher survivorship at Ngogo appears to be an adaptive response to a food supply that is more abundant and varies less than that of Kanyawara. Future analyses of hominin life history evolution should take these results into account. PMID:28366199
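The paper's stochastic method for handling age-estimation error is more involved, but the basic life-table relation behind "life expectancy at birth" is just the area under the survivorship curve, e0 = ∫ l(x) dx. A minimal sketch:

```python
def life_expectancy_at_birth(lx, dx=1.0):
    """Life expectancy at birth from a discrete survivorship curve l(x)
    (with l[0] = 1.0), by trapezoidal integration over age in steps of
    dx years."""
    return dx * sum((lx[i] + lx[i + 1]) / 2.0 for i in range(len(lx) - 1))
```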

  1. Error driven remeshing strategy in an elastic-plastic shakedown problem

    NASA Astrophysics Data System (ADS)

    Pazdanowski, Michał J.

    2018-01-01

A shakedown-based approach has been used successfully for many years to calculate the distributions of residual stresses in bodies made of elastic-plastic materials and subjected to cyclic loads exceeding their bearing capacity. The calculations performed indicated the existence of zones of the stress field characterized by extremely high gradients and rapid changes of sign over small areas. To resolve these sign changes, relatively dense nodal meshes had to be used in disproportionately large parts of the considered bodies, resulting in unnecessary expenditure of computer resources. Therefore, an effort was undertaken to limit the areas of high mesh density and to drive the mesh regeneration algorithm by selected error indicators.
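The idea of indicator-driven mesh regeneration can be sketched in one dimension: refine only where the field changes rapidly, leaving the rest of the mesh coarse. The jump-based indicator below is a generic stand-in, not the paper's actual error indicator:

```python
def refine_1d(nodes, values, threshold):
    """One pass of indicator-driven refinement on a sorted 1-D mesh: insert
    a midpoint node wherever the jump in the field between neighbouring
    nodes exceeds the threshold."""
    out = [nodes[0]]
    for i in range(1, len(nodes)):
        if abs(values[i] - values[i - 1]) > threshold:
            out.append(0.5 * (nodes[i - 1] + nodes[i]))
        out.append(nodes[i])
    return out
```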

  2. Calculation of stochastic broadening due to noise and field errors in the simple map in action-angle coordinates

    NASA Astrophysics Data System (ADS)

    Hinton, Courtney; Punjabi, Alkesh; Ali, Halima

    2008-11-01

The simple map is the simplest map that has the topology of divertor tokamaks [1]. Recently, the action-angle coordinates for the simple map were analytically calculated, and the simple map was constructed in action-angle coordinates [2]. Action-angle coordinates for the simple map cannot be inverted to real-space coordinates (R, Z). Because there is a logarithmic singularity on the ideal separatrix, trajectories cannot cross the separatrix [2]. The simple map in action-angle coordinates is applied to calculate stochastic broadening due to magnetic noise and field errors. Mode numbers for noise + field errors from the DIII-D tokamak are used: (m,n) = (3,1), (4,1), (6,2), (7,2), (8,2), (9,3), (10,3), (11,3), (12,3) [3]. The common amplitude δ is varied from 0.8×10⁻⁵ to 2.0×10⁻⁵. For this noise and these field errors, the width of the stochastic layer in the simple map is calculated. This work is supported by US Department of Energy grants DE-FG02-07ER54937, DE-FG02-01ER54624 and DE-FG02-04ER54793. 1. A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Lett. A 364, 140–145 (2007). 2. O. Kerwin, A. Punjabi, and H. Ali, to appear in Physics of Plasmas. 3. A. Punjabi and H. Ali, P1.012, 35th EPS Conference on Plasma Physics, June 9-13, 2008, Hersonissos, Crete, Greece.

  3. Predictive Simulations of Neuromuscular Coordination and Joint-Contact Loading in Human Gait.

    PubMed

    Lin, Yi-Chung; Walter, Jonathan P; Pandy, Marcus G

    2018-04-18

    We implemented direct collocation on a full-body neuromusculoskeletal model to calculate muscle forces, ground reaction forces and knee contact loading simultaneously for one cycle of human gait. A data-tracking collocation problem was solved for walking at the normal speed to establish the practicality of incorporating a 3D model of articular contact and a model of foot-ground interaction explicitly in a dynamic optimization simulation. The data-tracking solution then was used as an initial guess to solve predictive collocation problems, where novel patterns of movement were generated for walking at slow and fast speeds, independent of experimental data. The data-tracking solutions accurately reproduced joint motion, ground forces and knee contact loads measured for two total knee arthroplasty patients walking at their preferred speeds. RMS errors in joint kinematics were < 2.0° for rotations and < 0.3 cm for translations while errors in the model-computed ground-reaction and knee-contact forces were < 0.07 BW and < 0.4 BW, respectively. The predictive solutions were also consistent with joint kinematics, ground forces, knee contact loads and muscle activation patterns measured for slow and fast walking. The results demonstrate the feasibility of performing computationally-efficient, predictive, dynamic optimization simulations of movement using full-body, muscle-actuated models with realistic representations of joint function.
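The tracking-error figures quoted above (< 2.0° in rotations, < 0.4 BW in contact forces) are root-mean-square errors between the model-computed and measured signals over the gait cycle, computed as in this minimal sketch:

```python
import math

def rmse(model, measured):
    """Root-mean-square error between a model-computed signal and a measured
    one, e.g. a joint angle (deg) or a contact force (BW) sampled over a
    gait cycle."""
    pairs = list(zip(model, measured))
    return math.sqrt(sum((m - d) ** 2 for m, d in pairs) / len(pairs))
```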

  4. Efficient time-dependent density functional theory approximations for hybrid density functionals: analytical gradients and parallelization.

    PubMed

    Petrenko, Taras; Kossmann, Simone; Neese, Frank

    2011-02-07

    In this paper, we present the implementation of efficient approximations to time-dependent density functional theory (TDDFT) within the Tamm-Dancoff approximation (TDA) for hybrid density functionals. For the calculation of the TDDFT/TDA excitation energies and analytical gradients, we combine the resolution of identity (RI-J) algorithm for the computation of the Coulomb terms and the recently introduced "chain of spheres exchange" (COSX) algorithm for the calculation of the exchange terms. It is shown that for extended basis sets, the RIJCOSX approximation leads to speedups of up to 2 orders of magnitude compared to traditional methods, as demonstrated for hydrocarbon chains. The accuracy of the adiabatic transition energies, excited state structures, and vibrational frequencies is assessed on a set of 27 excited states for 25 molecules with the configuration interaction singles and hybrid TDDFT/TDA methods using various basis sets. Compared to the canonical values, the typical error in transition energies is of the order of 0.01 eV. Similar to the ground-state results, excited state equilibrium geometries differ by less than 0.3 pm in the bond distances and 0.5° in the bond angles from the canonical values. The typical error in the calculated excited state normal coordinate displacements is of the order of 0.01, and relative error in the calculated excited state vibrational frequencies is less than 1%. The errors introduced by the RIJCOSX approximation are, thus, insignificant compared to the errors related to the approximate nature of the TDDFT methods and basis set truncation. For TDDFT/TDA energy and gradient calculations on Ag-TB2-helicate (156 atoms, 2732 basis functions), it is demonstrated that the COSX algorithm parallelizes almost perfectly (speedup ~26-29 for 30 processors). The exchange-correlation terms also parallelize well (speedup ~27-29 for 30 processors). The solution of the Z-vector equations shows a speedup of ~24 on 30 processors. 
The parallelization efficiency for the Coulomb terms can be somewhat smaller (speedup ~15-25 for 30 processors), but their contribution to the total calculation time is small. Thus, the parallel program completes a Becke3-Lee-Yang-Parr energy and gradient calculation on the Ag-TB2-helicate in less than 4 h on 30 processors. We also present the necessary extension of the Lagrangian formalism, which enables the calculation of the TDDFT excited state properties in the frozen-core approximation. The algorithms described in this work are implemented into the ORCA electronic structure system.

  5. Rate Constants for Fine-Structure Excitations in O - H Collisions with Error Bars Obtained by Machine Learning

    NASA Astrophysics Data System (ADS)

    Vieira, Daniel; Krems, Roman

    2017-04-01

    Fine-structure transitions in collisions of O(3Pj) with atomic hydrogen are an important cooling mechanism in the interstellar medium; knowledge of the rate coefficients for these transitions has a wide range of astrophysical applications. The accuracy of the theoretical calculation is limited by inaccuracy in the ab initio interaction potentials used in the coupled-channel quantum scattering calculations from which the rate coefficients can be obtained. In this work we use the latest ab initio results for the O(3Pj) + H interaction potentials to improve on previous calculations of the rate coefficients. We further present a machine-learning technique based on Gaussian Process regression to determine the sensitivity of the rate coefficients to variations of the underlying adiabatic interaction potentials. To account for the inaccuracy inherent in the ab initio calculations we compute error bars for the rate coefficients corresponding to 20% variation in each of the interaction potentials. We obtain these error bars by fitting a Gaussian Process model to a data set of potential curves and rate constants. We use the fitted model to do sensitivity analysis, determining the relative importance of individual adiabatic potential curves to a given fine-structure transition. NSERC.
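The sensitivity analysis rests on fitting a Gaussian Process to (potential, rate constant) pairs. A minimal self-contained GP regressor with an RBF kernel is sketched below; the inputs here stand in for, say, scaling factors applied to the adiabatic potentials and the outputs for rate constants, and hyperparameters and data are invented, not the paper's:

```python
import numpy as np

def gp_predict(X, y, Xs, length=1.0, sigma_f=1.0, noise=1e-8):
    """Minimal Gaussian Process regression with an RBF kernel: returns the
    posterior mean and standard deviation at test inputs Xs."""
    X = np.asarray(X, float).reshape(len(X), -1)
    Xs = np.asarray(Xs, float).reshape(len(Xs), -1)
    y = np.asarray(y, float)

    def k(A, B):  # squared-exponential kernel matrix
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sigma_f ** 2 * np.exp(-0.5 * d2 / length ** 2)

    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    cov = k(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))
```

The posterior standard deviation is what supplies the error bars; varying the training inputs by ±20% and re-predicting gives the sensitivity of the output to each potential.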

  6. Method for controlling a vehicle with two or more independently steered wheels

    DOEpatents

    Reister, D.B.; Unseren, M.A.

    1995-03-28

    A method is described for independently controlling each steerable drive wheel of a vehicle with two or more such wheels. An instantaneous center of rotation target and a tangential velocity target are inputs to a wheel target system which sends the velocity target and a steering angle target for each drive wheel to a pseudo-velocity target system. The pseudo-velocity target system determines a pseudo-velocity target which is compared to a current pseudo-velocity to determine a pseudo-velocity error. The steering angle targets and the steering angles are inputs to a steering angle control system which outputs to the steering angle encoders, which measure the steering angles. The pseudo-velocity error, the rate of change of the pseudo-velocity error, and the wheel slip between each pair of drive wheels are used to calculate intermediate control variables which, along with the steering angle targets are used to calculate the torque to be applied at each wheel. The current distance traveled for each wheel is then calculated. The current wheel velocities and steering angle targets are used to calculate the cumulative and instantaneous wheel slip and the current pseudo-velocity. 6 figures.

  7. A large-scale test of free-energy simulation estimates of protein-ligand binding affinities.

    PubMed

    Mikulskis, Paulius; Genheden, Samuel; Ryde, Ulf

    2014-10-27

    We have performed a large-scale test of alchemical perturbation calculations with the Bennett acceptance-ratio (BAR) approach to estimate relative affinities for the binding of 107 ligands to 10 different proteins. Employing 20-Å truncated spherical systems and only one intermediate state in the perturbations, we obtain an error of less than 4 kJ/mol for 54% of the studied relative affinities and a precision of 0.5 kJ/mol on average. However, only four of the proteins gave acceptable errors, correlations, and rankings. The results could be improved by using nine intermediate states in the simulations or including the entire protein in the simulations using periodic boundary conditions. However, 27 of the calculated affinities still gave errors of more than 4 kJ/mol, and for three of the proteins the results were not satisfactory. This shows that the performance of BAR calculations depends on the target protein and that several transformations gave poor results owing to limitations in the molecular-mechanics force field or the restricted sampling possible within a reasonable simulation time. Still, the BAR results are better than docking calculations for most of the proteins.
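For reference, the Bennett acceptance-ratio estimate can be written as the root of a self-consistency equation in the forward and reverse work values. The sketch below assumes equal sample sizes and reduced units (β = 1) and solves the equation by bisection; production codes (e.g. pymbar) handle the general case:

```python
import math

def bar_delta_f(w_f, w_r, beta=1.0, tol=1e-10):
    """Bennett acceptance ratio estimate of a free-energy difference from
    forward (0->1) and reverse (1->0) work values, equal sample sizes.
    Solves: sum_i fermi(beta*(w_f_i - dF)) = sum_j fermi(beta*(w_r_j + dF))."""
    fermi = lambda x: 1.0 / (1.0 + math.exp(min(x, 700.0)))  # overflow guard

    def g(dF):  # monotonically increasing in dF
        return (sum(fermi(beta * (w - dF)) for w in w_f)
                - sum(fermi(beta * (w + dF)) for w in w_r))

    lo, hi = -100.0, 100.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```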

  8. Empirical State Error Covariance Matrix for Batch Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joe

    2015-01-01

State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to arrive directly at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple two-observer problem with measurement error only.
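In the same spirit (though not necessarily the paper's exact construction), one can contrast the formal covariance inv(HᵀWH) of a weighted batch least-squares solution with a sandwich-style covariance built from the actual post-fit residuals:

```python
import numpy as np

def batch_lsq_empirical_cov(H, W, y):
    """Weighted batch least squares plus a sandwich-style covariance driven
    by the actual residuals; a sketch of the 'empirical covariance' idea,
    not the paper's exact matrix. Formal covariance would be inv(H'WH)."""
    N = H.T @ W @ H
    x_hat = np.linalg.solve(N, H.T @ W @ y)
    e = y - H @ x_hat                    # post-fit residuals
    G = np.linalg.solve(N, H.T @ W)      # sensitivity of the estimate to y
    P_emp = G @ np.diag(e ** 2) @ G.T    # residual-driven covariance
    return x_hat, P_emp
```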

  9. Safety and Performance Analysis of the Non-Radar Oceanic/Remote Airspace In-Trail Procedure

    NASA Technical Reports Server (NTRS)

    Carreno, Victor A.; Munoz, Cesar A.

    2007-01-01

    This document presents a safety and performance analysis of the nominal case for the In-Trail Procedure (ITP) in a non-radar oceanic/remote airspace. The analysis estimates the risk of collision between the aircraft performing the ITP and a reference aircraft. The risk of collision is only estimated for the ITP maneuver and it is based on nominal operating conditions. The analysis does not consider human error, communication error conditions, or the normal risk of flight present in current operations. The hazards associated with human error and communication errors are evaluated in an Operational Hazards Analysis presented elsewhere.

  10. Improved pKa Prediction of Substituted Alcohols, Phenols, and Hydroperoxides in Aqueous Medium Using Density Functional Theory and a Cluster-Continuum Solvation Model.

    PubMed

    Thapa, Bishnu; Schlegel, H Bernhard

    2017-06-22

Acid dissociation constants (pKa's) are key physicochemical properties that are needed to understand the structure and reactivity of molecules in solution. Theoretical pKa's have been calculated for a set of 72 organic compounds with -OH and -OOH groups (48 with known experimental pKa's). This test set includes 17 aliphatic alcohols, 25 substituted phenols, and 30 hydroperoxides. Calculations in aqueous medium have been carried out with SMD implicit solvation and three hybrid DFT functionals (B3LYP, ωB97XD, and M06-2X) with two basis sets (6-31+G(d,p) and 6-311++G(d,p)). The effect of explicit water molecules on calculated pKa's was assessed by including up to three water molecules. pKa's calculated with only SMD implicit solvation are found to have average errors greater than 6 pKa units. Including one explicit water reduces the error by about 3 pKa units, but the error is still far from chemical accuracy. With B3LYP/6-311++G(d,p) and three explicit water molecules in SMD solvation, the mean signed error and standard deviation are only -0.02 ± 0.55; a linear fit with zero intercept has a slope of 1.005 and R² = 0.97. Thus, this level of theory can be used to calculate pKa's directly without the need for linear correlations or thermodynamic cycles. Estimated pKa values are reported for 24 hydroperoxides that have not yet been determined experimentally.
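The final step of any such calculation is the unit conversion from an aqueous deprotonation free energy to a pKa, via pKa = ΔG/(RT ln 10). The thermodynamic-cycle details behind ΔG (explicit waters, proton solvation free energy) are in the paper; this sketch shows only the conversion:

```python
import math

RT = 0.593  # kcal/mol at 298.15 K (approximate)

def pka_from_dg(dg_deprot):
    """pKa = dG_deprot / (RT ln 10), with dG in kcal/mol for the reaction
    HA(aq) -> A-(aq) + H+(aq)."""
    return dg_deprot / (RT * math.log(10))
```

Roughly 1.37 kcal/mol of deprotonation free energy corresponds to one pKa unit at room temperature.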

  11. A cognitive taxonomy of medical errors.

    PubMed

    Zhang, Jiajie; Patel, Vimla L; Johnson, Todd R; Shortliffe, Edward H

    2004-06-01

Propose a cognitive taxonomy of medical errors at the level of individuals and their interactions with technology. Use cognitive theories of human error and human action to develop the theoretical foundations of the taxonomy, develop the structure of the taxonomy, populate the taxonomy with examples of medical error cases, identify cognitive mechanisms for each category of medical error under the taxonomy, and apply the taxonomy to practical problems. Four criteria were used to evaluate the cognitive taxonomy. The taxonomy should be able (1) to categorize major types of errors at the individual level along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to describe how and explain why a specific error occurs, and (4) to generate intervention strategies for each type of error. The proposed cognitive taxonomy largely satisfies the four criteria at a theoretical and conceptual level. Theoretically, the proposed cognitive taxonomy provides a method to systematically categorize medical errors at the individual level along cognitive dimensions, leads to a better understanding of the underlying cognitive mechanisms of medical errors, and provides a framework that can guide future studies on medical errors. Practically, it provides guidelines for the development of cognitive interventions to decrease medical errors and a foundation for the development of a medical error reporting system that not only categorizes errors but also identifies problems and helps to generate solutions. To validate this model empirically, we will next be performing systematic experimental studies.

  12. Probability of Detection of Genotyping Errors and Mutations as Inheritance Inconsistencies in Nuclear-Family Data

    PubMed Central

    Douglas, Julie A.; Skol, Andrew D.; Boehnke, Michael

    2002-01-01

    Gene-mapping studies routinely rely on checking for Mendelian transmission of marker alleles in a pedigree, as a means of screening for genotyping errors and mutations, with the implicit assumption that, if a pedigree is consistent with Mendel’s laws of inheritance, then there are no genotyping errors. However, the occurrence of inheritance inconsistencies alone is an inadequate measure of the number of genotyping errors, since the rate of occurrence depends on the number and relationships of genotyped pedigree members, the type of errors, and the distribution of marker-allele frequencies. In this article, we calculate the expected probability of detection of a genotyping error or mutation as an inheritance inconsistency in nuclear-family data, as a function of both the number of genotyped parents and offspring and the marker-allele frequency distribution. Through computer simulation, we explore the sensitivity of our analytic calculations to the underlying error model. Under a random-allele–error model, we find that detection rates are 51%–77% for multiallelic markers and 13%–75% for biallelic markers; detection rates are generally lower when the error occurs in a parent than in an offspring, unless a large number of offspring are genotyped. Errors are especially difficult to detect for biallelic markers with equally frequent alleles, even when both parents are genotyped; in this case, the maximum detection rate is 34% for four-person nuclear families. Error detection in families in which parents are not genotyped is limited, even with multiallelic markers. Given these results, we recommend that additional error checking (e.g., on the basis of multipoint analysis) be performed, beyond routine checking for Mendelian consistency. 
Furthermore, our results permit assessment of the plausibility of an observed number of inheritance inconsistencies for a family, allowing the detection of likely pedigree—rather than genotyping—errors in the early stages of a genome scan. Such early assessments are valuable in either the targeting of families for resampling or discontinued genotyping. PMID:11791214
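The detection probabilities above can also be approximated by simulation. The sketch below draws a nuclear family at a biallelic marker, injects one random-allele error into an offspring genotype, and checks Mendelian consistency; the error model is a simplified stand-in for the paper's, so the rates it produces are illustrative only:

```python
import random
from itertools import product

def mendel_consistent(father, mother, child):
    """True if the child's genotype can take one allele from each parent."""
    return any(sorted((fa, mo)) == sorted(child)
               for fa, mo in product(father, mother))

def detection_rate(p=0.5, n_offspring=2, trials=20000, seed=1):
    """Monte Carlo estimate of the chance that one random-allele error in an
    offspring genotype (biallelic marker, allele-1 frequency p, both parents
    genotyped) is detectable as a Mendelian inconsistency."""
    rng = random.Random(seed)
    allele = lambda: int(rng.random() < p)
    detected = 0
    for _ in range(trials):
        father, mother = (allele(), allele()), (allele(), allele())
        kids = [sorted((rng.choice(father), rng.choice(mother)))
                for _ in range(n_offspring)]
        kids[0][rng.randrange(2)] = allele()  # random-allele error in child 1
        if not all(mendel_consistent(father, mother, k) for k in kids):
            detected += 1
    return detected / trials
```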

  13. Differential sensitivity to human communication in dogs, wolves, and human infants.

    PubMed

    Topál, József; Gergely, György; Erdohegyi, Agnes; Csibra, Gergely; Miklósi, Adám

    2009-09-04

    Ten-month-old infants persistently search for a hidden object at its initial hiding place even after observing it being hidden at another location. Recent evidence suggests that communicative cues from the experimenter contribute to the emergence of this perseverative search error. We replicated these results with dogs (Canis familiaris), who also commit more search errors in ostensive-communicative (in 75% of the total trials) than in noncommunicative (39%) or nonsocial (17%) hiding contexts. However, comparative investigations suggest that communicative signals serve different functions for dogs and infants, whereas human-reared wolves (Canis lupus) do not show doglike context-dependent differences of search errors. We propose that shared sensitivity to human communicative signals stems from convergent social evolution of the Homo and the Canis genera.

  14. Metrics for Business Process Models

    NASA Astrophysics Data System (ADS)

    Mendling, Jan

Up until now, there has been little research on why people introduce errors in real-world business process models. In a more general context, Simon [404] points to the limitations of cognitive capabilities and concludes that humans act rationally only to a certain extent. Concerning modeling errors, this argument would imply that human modelers lose track of the interrelations of large and complex models due to their limited cognitive capabilities and introduce errors that they would not insert in a small model. A recent study by Mendling et al. [275] explores the extent to which certain complexity metrics of business process models have the potential to serve as error determinants. The authors conclude that complexity indeed appears to have an impact on error probability. Before we can test such a hypothesis in a more general setting, we have to establish an understanding of how we can define determinants that drive error probability and how we can measure them.
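The hypothesized link between complexity metrics and error probability is typically modeled with logistic regression. A minimal sketch (metric names, weights, and bias invented for illustration):

```python
import math

def error_probability(metrics, weights, bias):
    """Logistic model of the hypothesized link between complexity metrics of
    a process model (e.g. node count, connector count) and the probability
    that the model contains an error."""
    z = bias + sum(w * m for w, m in zip(weights, metrics))
    return 1.0 / (1.0 + math.exp(-z))
```

Fitting the weights on a corpus of models labeled correct/erroneous would quantify how much each metric drives error probability.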

  15. Hierarchical Learning Induces Two Simultaneous, But Separable, Prediction Errors in Human Basal Ganglia

    PubMed Central

    Tsai, Karin; Wallis, Jonathan; Botvinick, Matthew

    2013-01-01

    Studies suggest that dopaminergic neurons report a unitary, global reward prediction error signal. However, learning in complex real-life tasks, in particular tasks that show hierarchical structure, requires multiple prediction errors that may coincide in time. We used functional neuroimaging to measure prediction error signals in humans performing such a hierarchical task involving simultaneous, uncorrelated prediction errors. Analysis of signals in a priori anatomical regions of interest in the ventral striatum and the ventral tegmental area indeed evidenced two simultaneous, but separable, prediction error signals corresponding to the two levels of hierarchy in the task. This result suggests that suitably designed tasks may reveal a more intricate pattern of firing in dopaminergic neurons. Moreover, the need for downstream separation of these signals implies possible limitations on the number of different task levels that we can learn about simultaneously. PMID:23536092
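In reinforcement-learning terms, each level of the hierarchy carries its own temporal-difference prediction error δ = r + γV(s') − V(s). A minimal sketch (not the paper's exact model; γ and the values are placeholders) of the two simultaneous signals that would enter the fMRI analysis as separate regressors:

```python
def td_errors(reward, v_low, v_low_next, v_high, v_high_next, gamma=0.95):
    """Two simultaneous temporal-difference prediction errors, one per level
    of the task hierarchy: delta = r + gamma * V(s') - V(s) at each level."""
    delta_low = reward + gamma * v_low_next - v_low
    delta_high = reward + gamma * v_high_next - v_high
    return delta_low, delta_high
```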

  16. A manufacturing error measurement methodology for a rotary vector reducer cycloidal gear based on a gear measuring center

    NASA Astrophysics Data System (ADS)

    Li, Tianxing; Zhou, Junxiang; Deng, Xiaozhong; Li, Jubo; Xing, Chunrong; Su, Jianxin; Wang, Huiliang

    2018-07-01

A manufacturing error of a cycloidal gear is the key factor affecting the transmission accuracy of a robot rotary vector (RV) reducer. A methodology is proposed to realize the digitized measurement and data processing of the cycloidal gear manufacturing error based on the gear measuring center, which can quickly and accurately measure and evaluate the manufacturing error of the cycloidal gear by using both whole tooth profile measurement and single tooth profile measurement. By analyzing the particularity of the cycloidal profile and its effect on the actual meshing characteristics of the RV transmission, the cycloid profile measurement strategy is planned, and the theoretical profile model and error measurement model of cycloid-pin gear transmission are established. Through digital processing technology, the theoretical trajectory of the probe and the normal vector of the measured point are calculated. By means of precision measurement principles and error compensation theory, a mathematical model for the accurate calculation and data processing of manufacturing error is constructed, and the actual manufacturing error of the cycloidal gear is obtained by an iterative optimization solution. Finally, the measurement experiment of the cycloidal gear tooth profile is carried out on the gear measuring center and the HEXAGON coordinate measuring machine, respectively. The measurement results verify the correctness and validity of the measurement theory and method. This methodology will provide the basis for the accurate evaluation and the effective control of manufacturing precision of the cycloidal gear in a robot RV reducer.

  17. Associations between errors and contributing factors in aircraft maintenance

    NASA Technical Reports Server (NTRS)

    Hobbs, Alan; Williamson, Ann

    2003-01-01

    In recent years cognitive error models have provided insights into the unsafe acts that lead to many accidents in safety-critical environments. Most models of accident causation are based on the notion that human errors occur in the context of contributing factors. However, there is a lack of published information on possible links between specific errors and contributing factors. A total of 619 safety occurrences involving aircraft maintenance were reported using a self-completed questionnaire. Of these occurrences, 96% were related to the actions of maintenance personnel. The types of errors that were involved, and the contributing factors associated with those actions, were determined. Each type of error was associated with a particular set of contributing factors and with specific occurrence outcomes. Among the associations were links between memory lapses and fatigue and between rule violations and time pressure. Potential applications of this research include assisting with the design of accident prevention strategies, the estimation of human error probabilities, and the monitoring of organizational safety performance.
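
    Associations of this kind are typically tested on cross-tabulated occurrence counts, for example with a Pearson chi-square statistic. A minimal sketch; the counts below are invented for illustration, not taken from the study:

```python
def chi_square(table):
    # Pearson chi-square statistic for an r x c contingency table.
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = rows[i] * cols[j] / total
            stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical counts: error type vs. whether fatigue was reported
# as a contributing factor (fatigue present / absent).
table = [[30, 10],   # memory lapses
         [15, 45]]   # rule violations
stat = chi_square(table)  # large values indicate association
```

    A large statistic relative to the chi-square distribution with (r-1)(c-1) degrees of freedom would indicate that the error type and the contributing factor are not independent.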

  18. Neuroanatomical dissociation for taxonomic and thematic knowledge in the human brain

    PubMed Central

    Schwartz, Myrna F.; Kimberg, Daniel Y.; Walker, Grant M.; Brecher, Adelyn; Faseyitan, Olufunsho K.; Dell, Gary S.; Mirman, Daniel; Coslett, H. Branch

    2011-01-01

    It is thought that semantic memory represents taxonomic information differently from thematic information. This study investigated the neural basis for the taxonomic-thematic distinction in a unique way. We gathered picture-naming errors from 86 individuals with poststroke language impairment (aphasia). Error rates were determined separately for taxonomic errors (“pear” in response to apple) and thematic errors (“worm” in response to apple), and their shared variance was regressed out of each measure. With the segmented lesions normalized to a common template, we carried out voxel-based lesion-symptom mapping on each error type separately. We found that taxonomic errors localized to the left anterior temporal lobe and thematic errors localized to the left temporoparietal junction. This is an indication that the contribution of these regions to semantic memory cleaves along taxonomic-thematic lines. Our findings show that a distinction long recognized in the psychological sciences is grounded in the structure and function of the human brain. PMID:21540329
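
    The step of regressing shared variance out of the two error measures amounts to residualizing each measure against the other. A minimal ordinary-least-squares sketch; the per-patient error rates below are invented for illustration:

```python
def residualize(y, x):
    # OLS regression of y on x; returns the residuals of y,
    # i.e. the part of y not shared with x.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    return [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]

# Hypothetical per-patient taxonomic and thematic error rates.
taxonomic = [0.10, 0.30, 0.25, 0.50]
thematic  = [0.05, 0.20, 0.30, 0.40]
taxonomic_unique = residualize(taxonomic, thematic)
```

    The residualized scores are then the inputs to voxel-based lesion-symptom mapping, so each map reflects only the variance unique to one error type.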

  19. Interspecies scaling and prediction of human clearance: comparison of small- and macro-molecule drugs

    PubMed Central

    Huh, Yeamin; Smith, David E.; Feng, Meihau Rose

    2014-01-01

    Human clearance prediction for small- and macro-molecule drugs was evaluated and compared using various scaling methods and statistical analysis. Human clearance is generally well predicted using single- or multiple-species simple allometry for macro- and small-molecule drugs excreted renally. The prediction error is higher for hepatically eliminated small molecules using single- or multiple-species simple allometry scaling, and it appears that the prediction error is mainly associated with drugs with a low hepatic extraction ratio (Eh). The error in human clearance prediction for hepatically eliminated small molecules was reduced using scaling methods with a correction for maximum life span (MLP) or brain weight (BRW). Human clearance of both small- and macro-molecule drugs is well predicted using the monkey liver blood flow method. Predictions using liver blood flow from other species did not work as well, especially for the small-molecule drugs. PMID:21892879
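
    Simple allometry fits CL = a · BW^b as a straight line in log-log space across preclinical species and extrapolates to human body weight. A sketch with invented data; the species weights and clearance values below are illustrative only:

```python
import math

def fit_allometry(bw, cl):
    # Fit CL = a * BW^b by linear regression in log-log space.
    lx = [math.log(w) for w in bw]
    ly = [math.log(c) for c in cl]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical clearances (mL/min) for mouse, rat, monkey body weights (kg).
bw = [0.02, 0.25, 5.0]
cl = [0.9, 8.0, 90.0]
a, b = fit_allometry(bw, cl)
cl_human = a * 70.0 ** b  # extrapolate to a 70 kg human
```

    MLP or BRW corrections modify the quantity being regressed (e.g. CL · MLP instead of CL) before the same log-log fit, which is what reduces the error for hepatically eliminated small molecules.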

  20. Towards a novel look on low-frequency climate reconstructions

    NASA Astrophysics Data System (ADS)

    Kamenik, Christian; Goslar, Tomasz; Hicks, Sheila; Barnekow, Lena; Huusko, Antti

    2010-05-01

    Information on low-frequency (millennial to sub-centennial) climate change is often derived from sedimentary archives, such as peat profiles or lake sediments. Usually, these archives have non-annual and varying time resolution. Their dating is mainly based on radionuclides, which provide probabilistic age-depth relationships with complex error structures. Dating uncertainties impede the interpretation of sediment-based climate reconstructions. They complicate the calculation of time-dependent rates. In most cases, they make any calibration in time impossible. Sediment-based climate proxies are therefore often presented as a single, best-guess time series without proper calibration and error estimation. Errors along time and dating errors that propagate into the calculation of time-dependent rates are neglected. Our objective is to overcome the aforementioned limitations by using a 'swarm' or 'ensemble' of reconstructions instead of a single best-guess. The novelty of our approach is to take into account age-depth uncertainties by permuting through a large number of potential age-depth relationships of the archive of interest. For each individual permutation we can then calculate rates, calibrate proxies in time, and reconstruct the climate-state variable of interest. From the resulting swarm of reconstructions, we can derive realistic estimates of even complex error structures. The likelihood of reconstructions is visualized by a grid of two-dimensional kernels that take into account probabilities along time and the climate-state variable of interest simultaneously. For comparison and regional synthesis, likelihoods can be scored against other independent climate time series.
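
    The permutation idea can be sketched as a Monte Carlo ensemble over age-depth relationships: perturb each dated depth by its dating error, keep only stratigraphically consistent draws, and compute time-dependent rates per draw. All numbers below are illustrative, and a real implementation would sample from the full calibrated radiocarbon probability distributions rather than Gaussians:

```python
import random

def age_ensemble(ages, sigmas, n_draws=200, seed=1):
    # Draw plausible age-depth relationships by perturbing each dated level
    # by its dating error, keeping only monotonically increasing
    # (stratigraphically consistent) draws.
    rng = random.Random(seed)
    draws = []
    while len(draws) < n_draws:
        cand = [rng.gauss(a, s) for a, s in zip(ages, sigmas)]
        if all(a2 > a1 for a1, a2 in zip(cand, cand[1:])):
            draws.append(cand)
    return draws

def rate_spread(depths, draws):
    # Accumulation rate (depth units per year) in the top interval,
    # summarized across the ensemble as (median, min, max).
    rates = sorted((depths[1] - depths[0]) / (d[1] - d[0]) for d in draws)
    return rates[len(rates) // 2], rates[0], rates[-1]

# Hypothetical core: two dated depths (cm) with 1-sigma dating errors (yr).
depths = [0.0, 50.0]
draws = age_ensemble(ages=[100.0, 600.0], sigmas=[5.0, 5.0])
median_rate, lo, hi = rate_spread(depths, draws)
```

    Each draw would then be calibrated and turned into a reconstruction, and the resulting swarm summarized as a two-dimensional likelihood over time and the climate-state variable.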
