Science.gov

Sample records for acceptable error limits

  1. Accepting error to make less error.

    PubMed

    Einhorn, H J

    1986-01-01

    In this article I argue that the clinical and statistical approaches rest on different assumptions about the nature of random error and the appropriate level of accuracy to be expected in prediction. To examine this, a case is made for each approach. The clinical approach is characterized as being deterministic, causal, and less concerned with prediction than with diagnosis and treatment. The statistical approach accepts error as inevitable and in so doing makes less error in prediction. This is illustrated using examples from probability learning and equal weighting in linear models. Thereafter, a decision analysis of the two approaches is proposed. Of particular importance are the errors that characterize each approach: myths, magic, and illusions of control in the clinical; lost opportunities and illusions of the lack of control in the statistical. Each approach represents a gamble with corresponding risks and benefits.
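
    A minimal sketch of the probability-learning point (our illustration with invented numbers, not Einhorn's own example): when one outcome occurs 70% of the time, matching predictions to the observed frequencies accepts error on every trial, while the rule that always predicts the majority outcome accepts being wrong a fixed 30% of the time and thereby makes fewer errors overall.

      import random

      random.seed(1)
      p = 0.7  # true rate of outcome "A" (illustrative)
      trials = [random.random() < p for _ in range(100_000)]

      # "Probability matching": predict A on ~70% of trials, as learners tend to do.
      matching_hits = sum(t == (random.random() < p) for t in trials)

      # Statistical rule: always predict the majority outcome A.
      majority_hits = sum(trials)

      print(f"probability matching accuracy: {matching_hits / len(trials):.3f}")  # ~ p^2 + (1-p)^2 = 0.58
      print(f"always-predict-A accuracy:     {majority_hits / len(trials):.3f}")  # ~ p = 0.70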

  2. What Are Acceptable Limits of Radiation?

    NASA Video Gallery

    Brad Gersey, lead research scientist at the Center for Radiation Engineering and Science for Space Exploration, or CRESSE, at Prairie View A&M University, describes the legal and acceptable limits ...

  3. Towards more complete specifications for acceptable analytical performance - a plea for error grid analysis.

    PubMed

    Krouwer, Jan S; Cembrowski, George S

    2011-07-01

    We examine limitations of common analytical performance specifications for quantitative assays. Specifications can be either clinical or regulatory. Problems with current specifications include specifying limits for only 95% of the results, having only one set of limits that demarcate no harm from minor harm, using incomplete models for total error, not accounting for the potential of user error, and not supplying sufficient protocol requirements. Error grids are recommended to address these problems because error grids account for 100% of the data and stratify errors into different severity categories. Total error estimation from a method comparison can be used to estimate the inner region of an error grid, but the outer region needs to be addressed using risk management techniques. The risk management steps, foreign to many in laboratory medicine, are outlined.
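
    As a schematic of the error grid idea only (the zone boundaries below are invented, not taken from Krouwer and Cembrowski or any published grid): every (reference, measured) pair is assigned to a severity zone, so 100% of results are stratified rather than only the central 95%.

      def error_grid_zone(reference, measured):
          """Toy error grid: classify a result by relative error severity.
          Boundaries are illustrative placeholders, not clinical values."""
          rel_err = abs(measured - reference) / reference
          if rel_err <= 0.10:
              return "A: no harm"
          if rel_err <= 0.25:
              return "B: minor harm possible"
          return "C: serious harm possible"

      # Every result lands in some zone, including gross outliers.
      for ref, meas in [(100, 104), (100, 118), (100, 160)]:
          print(ref, meas, "->", error_grid_zone(ref, meas))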

  4. 12 CFR 250.163 - Inapplicability of amount limitations to “ineligible acceptances.”

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... that may be accepted for any one customer, and (2) A limitation on the aggregate amount of acceptances... congressional attention was on the acceptance powers of national banks.) In the absence of an indication of... limit on the amount of dollar exchange acceptances that may be accepted for any one customer...

  5. Customization of user interfaces to reduce errors and enhance user acceptance.

    PubMed

    Burkolter, Dina; Weyers, Benjamin; Kluge, Annette; Luther, Wolfram

    2014-03-01

    Customization is assumed to reduce error and increase user acceptance in the human-machine relation. Reconfiguration gives the operator the option to customize a user interface according to his or her own preferences. An experimental study with 72 computer science students using a simulated process control task was conducted. The reconfiguration group (RG) interactively reconfigured their user interfaces and used the reconfigured user interface in the subsequent test whereas the control group (CG) used a default user interface. Results showed significantly lower error rates and higher acceptance of the RG compared to the CG while there were no significant differences between the groups regarding situation awareness and mental workload. Reconfiguration seems to be promising and therefore warrants further exploration.

  6. A network model of human aging: Limits, errors, and information

    NASA Astrophysics Data System (ADS)

    Farrell, Spencer; Mitnitski, Arnold; Rockwood, Kenneth; Rutenberg, Andrew

    The Frailty Index (FI) quantifies human aging using the fraction of accumulated age-related deficits. The FI correlates strongly with mortality and accumulates non-linearly and stochastically with age. Clinical data show a nearly universal limit of FI ≤ 0.7. We computationally model an aging population using a network model of interacting deficits. Deficits damage and repair at rates that depend upon the average damage of connected nodes. The model is parametrized to fit clinical data. We find that attribution errors, especially false negatives, allow the model to recover the frailty limit. Mutual information provides a non-parametric measure of how well the FI predicts mortality. We find that attribution errors have a small effect on the mutual information when many deficits are included in the model. The mutual information of our model and of the clinical data are comparable.
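
    A toy version of the attribution-error bookkeeping (our sketch, not the authors' network model; all rates invented): the FI is the fraction of assessed deficits recorded as present, and false negatives scale the observed FI downward, so even a fully damaged individual can present an apparent FI near 0.7.

      import random

      random.seed(0)

      def observed_fi(true_deficits, n_items, p_false_neg):
          """Fraction of deficits recorded when each true deficit is
          missed (false negative) with probability p_false_neg."""
          recorded = sum(1 for _ in range(true_deficits)
                         if random.random() > p_false_neg)
          return recorded / n_items

      n_items = 40
      for true_fi in (0.5, 0.8, 1.0):
          est = sum(observed_fi(int(true_fi * n_items), n_items, 0.3)
                    for _ in range(5000)) / 5000
          print(f"true FI {true_fi:.1f} -> mean observed FI {est:.2f}")  # 1.0 -> ~0.70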

  7. Predetermining acceptable noise limits in EXAFS spectra in the limit of stochastic noise

    SciTech Connect

    Hu, Yung-Jin; Booth, Corwin H

    2009-12-14

    EXAFS measurements are used to probe a variety of experimental systems, but excel at elucidating local structure in samples which have slight disorder or no long-range crystalline order. Of special interest to the authors is the use of EXAFS in understanding the molecular-level binding structure and characteristics of actinides on the surface of environmental minerals and model mineral analogs. In environmental systems the element of interest can be on the order of 10⁻⁷% by weight of the total sample. Obviously such samples would be impossible to measure using EXAFS techniques. It is therefore essential to increase the concentration of the element of interest while still preserving a sample's ability to represent environmental conditions. Under such low concentration limits the collected data are expected to be count-rate, or stochastically, limited. This condition occurs as we approach the signal-to-noise (S/N) limit of the technique, where the random noise of the measurement process dominates over possible systematic errors. When stochastic error is expected to dominate systematic error, it is possible to predict, with the use of simulations, the ability of model fits to tolerate a certain level of stochastic noise. Elsewhere in these proceedings, we discuss how to tell when systematic errors dominate in a measured EXAFS spectrum. Here, we outline a technique for determining the number of EXAFS scans necessary to test the relevance of a given structural model. Appropriate stochastic noise levels are determined for each point in r-space by collecting data on a real system. These noise levels are then applied to EXAFS simulations using a test model. In this way, all significant systematic error sources are eliminated in the simulated data. The structural model is then fit to the simulated data, decreasing the noise and increasing the k-range of the fit until the veracity of the model passes an F-test. This paper outlines a method of testing model systems in EXAFS.
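
    The scan-count logic rests on stochastic noise averaging down as 1/√N. A back-of-envelope sketch (all numbers invented, not the authors' procedure) of choosing the number of scans so that the averaged noise can no longer hide the smallest model difference that must be resolved:

      import math

      sigma_scan = 0.05    # stochastic noise per scan, measured on a real system (illustrative)
      delta_model = 0.02   # smallest model difference we must resolve (illustrative)
      z = 2.0              # ~95% confidence factor

      # Averaging N scans shrinks stochastic noise by 1/sqrt(N);
      # require z * sigma_scan / sqrt(N) <= delta_model.
      n_scans = math.ceil((z * sigma_scan / delta_model) ** 2)
      print("scans needed:", n_scans)  # 25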

  8. 5 CFR 1605.16 - Claims for correction of employing agency errors; time limitations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... of employing agency errors; time limitations. (a) Agency's discovery of error. Upon discovery of an... it, but, in any event, the agency must act promptly in doing so. (b) Participant's discovery of error. If an agency fails to discover an error of which a participant has knowledge involving the correct...

  9. Modernity and acceptance of family limitation in four developing countries.

    PubMed

    Miller, K A; Inkeles, A

    1977-01-01

    The relationship between individual modernity and adoption of family planning was investigated in East Pakistan (Bangladesh), Israel, India, and Nigeria. The survey involved interviews with approximately 1000 males in each country, with an emphasis on industrial, nonindustrial, and agricultural workers. Results indicated that the variables of modernity, i.e., literacy and amount of education received, degree of exposure to mass media, urban residence, white-collar occupation, and a high standard of living, were only slightly significant in explaining the acceptance of family planning. Survey results indicate that modern experiences have their effect in indirect ways through general psychological modernity. Variables related to family and sex roles do not explain attitudes toward family planning. Two variables that did relate to family planning attitudes were belief in science, medicine, and technology, and a secular as opposed to religious life orientation. Implications of the study are that the only way to ensure decreasing birthrates in developing countries is to progress with general economic development. However, mere modernization will not achieve the desired results. There must be an emphasis in communication on the value of science, medicine, and technology.

  10. 75 FR 6371 - Jordan Hydroelectric Limited Partnership; Notice of Application Accepted for Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-09

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Jordan Hydroelectric Limited Partnership; Notice of Application Accepted for...: Jordan Hydroelectric Limited Partnership e. Name of Project: Flannagan Hydroelectric Project f....

  11. WTO accepts rules limiting medicine exports to poor countries.

    PubMed

    James, John S

    2003-09-12

    In a controversial decision on August 30, 2003, the World Trade Organization agreed to complex rules limiting the export of medications to developing countries. Reaction to the decision so far has shown a complete disconnect between trade delegates and the WTO, both of which praise the new rules as a humanitarian advance, and those working in treatment access in poor countries, who believe that they will effectively block treatment from reaching many who need it. We have prepared a background paper that analyzes this decision and its implications and offers the opinions of key figures on both sides of the debate. It is clear that the rules were largely written for and probably by the proprietary pharmaceutical industry, and imposed on the countries in the WTO mainly by the United States. The basic conflict is that this industry does not want the development of international trade in low-cost generic copies of its patented medicines--not even for poor countries, where little or no market exists. Yet millions of people die each year without medication for treatable conditions such as AIDS, and drug pricing remains one of several major obstacles to controlling global epidemics.

  12. DWPF COAL CARBON WASTE ACCEPTANCE CRITERIA LIMIT EVALUATION

    SciTech Connect

    Lambert, D.; Choi, A.

    2010-06-21

    A paper study was completed to assess the impact on the Defense Waste Processing Facility (DWPF)'s Chemical Processing Cell (CPC) acid addition and melter off-gas flammability control strategy in processing Sludge Batch 10 (SB10) to SB13 with an added Fluidized Bed Steam Reformer (FBSR) stream and two Salt Waste Processing Facility (SWPF) products (Strip Effluent and Actinide Removal Stream). In all of the cases that were modeled, an acid mix using formic acid and nitric acid could be achieved that would produce a predicted Reducing/Oxidizing (REDOX) ratio of 0.20 Fe²⁺/ΣFe. There was sufficient formic acid in these combinations to reduce both the manganese and mercury present. Reduction of manganese and mercury is necessary during Sludge Receipt and Adjustment Tank (SRAT) processing; however, other reducing agents such as coal and oxalate are not effective in this reduction. The next phase in this study will be experimental testing with SB10, FBSR, and both SWPF simulants to validate the assumptions in this paper study and determine whether there are any issues in processing these streams simultaneously. The paper study also evaluated a series of abnormal processing conditions to determine whether potential abnormal conditions in FBSR, SWPF or DWPF would produce melter feed that was too oxidizing or too reducing. In most of the cases that were modeled with one parameter at its extreme, an acid mix using formic acid and nitric acid could be achieved that would produce a predicted REDOX of 0.09-0.30 (target 0.20). However, when a run was completed with both high coal and oxalate, with minimum formic acid to reduce mercury and manganese, the final REDOX was predicted to be 0.49 with sludge and FBSR product, and 0.47 with sludge, FBSR product and both SWPF products, which exceeds the upper REDOX limit.

  13. Deconstructing the "reign of error": interpersonal warmth explains the self-fulfilling prophecy of anticipated acceptance.

    PubMed

    Stinson, Danu Anthony; Cameron, Jessica J; Wood, Joanne V; Gaucher, Danielle; Holmes, John G

    2009-09-01

    People's expectations of acceptance often come to create the acceptance or rejection they anticipate. The authors tested the hypothesis that interpersonal warmth is the behavioral key to this acceptance prophecy: If people expect acceptance, they will behave warmly, which in turn will lead other people to accept them; if they expect rejection, they will behave coldly, which will lead to less acceptance. A correlational study and an experiment supported this model. Study 1 confirmed that participants' warm and friendly behavior was a robust mediator of the acceptance prophecy compared to four plausible alternative explanations. Study 2 demonstrated that situational cues that reduced the risk of rejection also increased socially pessimistic participants' warmth and thus improved their social outcomes.

  14. 10 CFR 2.643 - Acceptance and docketing of application for limited work authorization.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... acceptable for processing, the Director of New Reactors or the Director of Nuclear Reactor Regulation will... 10 Energy 1 2013-01-01 2013-01-01 false Acceptance and docketing of application for limited work authorization. 2.643 Section 2.643 Energy NUCLEAR REGULATORY COMMISSION AGENCY RULES OF PRACTICE AND...

  15. Physics of locked modes in ITER: Error field limits, rotation for obviation, and measurement of field errors

    SciTech Connect

    La Haye, R.J.

    1997-02-01

    The existing theoretical and experimental basis for predicting the levels of resonant static error field at different components m,n that stop plasma rotation and produce a locked mode is reviewed. For ITER ohmic discharges, the slow rotation of the very large plasma is predicted to incur a locked mode (and subsequent disastrous large magnetic islands) at a simultaneous weighted error field (Σ_{m=1}^{3} w_{m1} B_{rm1}²)^{1/2}/B_T ≥ 1.9 × 10⁻⁵. Here the weights w_{m1}, empirically determined from measurements on DIII-D to be w_{11} = 0.2, w_{21} = 1.0, and w_{31} = 0.8, indicate the relative importance of the different error field components. The locked mode could be greatly obviated by application of counter-injected neutral beams (which add fluid flow to the natural ohmic electron drift). The addition of 5 MW of 1 MeV beams at 45° injection would increase the error field limit by a factor of 5; 13 MW would produce a factor of 10 improvement. Co-injection beams would also be effective, but less so than counter-injection, as the co direction opposes the intrinsic rotation while the counter direction adds to it. A means for measuring individual PF and TF coil total axisymmetric field error to less than 1 in 10,000 is described. This would allow alignment of coils to mm accuracy and, with correction coils, make possible the very low levels of error field needed.
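
    A worked illustration of the weighted error field criterion (the weights and threshold are from the abstract; the field components and toroidal field are invented for the example):

      import math

      B_T = 5.3                                # toroidal field, tesla (illustrative)
      weights = {1: 0.2, 2: 1.0, 3: 0.8}       # w_m1 empirically determined on DIII-D
      B_rm1 = {1: 4e-5, 2: 6e-5, 3: 5e-5}      # resonant error fields, tesla (made up)

      weighted = math.sqrt(sum(w * B_rm1[m] ** 2 for m, w in weights.items())) / B_T
      print(f"weighted error field / B_T = {weighted:.2e}")
      print("locked mode expected" if weighted >= 1.9e-5 else "below threshold")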

  16. Legitimization of regulatory norms: Waterfowl hunter acceptance of changing duck bag limits

    USGS Publications Warehouse

    Schroeder, Susan A.; Fulton, David C.; Lawrence, Jeffrey S.; Cordts, Steven D.

    2014-01-01

    Few studies have examined response to regulatory change over time, or addressed hunter attitudes about changes in hunting bag limits. This article explores Minnesota waterfowl hunters’ attitudes about duck bag limits, examining attitudes about two state duck bag limits that were initially more restrictive than the maximum set by the U.S. Fish and Wildlife Service (USFWS), but then increased to match federal limits. Results are from four mail surveys that examined attitudes about bag limits over time. Following two bag limit increases, a greater proportion of hunters rated the new bag limit “too high” and a smaller proportion rated it “too low.” Several years following the first bag limit increase, the proportion of hunters who indicated that the limit was “too high” had declined, suggesting hunter acceptance of the new regulation. Results suggest that waterfowl bag limits may represent legal norms that influence hunter attitudes and gain legitimacy over time.

  17. 75 FR 4057 - Jordan Limited Partnership; Notice of Application Accepted for Filing and Soliciting Motions To...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-26

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Jordan Limited Partnership; Notice of Application Accepted for Filing and...: Original Major License. b. Project No.: 12737-002. c. Date filed: April 16, 2009. d. Applicant:...

  18. Error Pattern Analysis of Elementary School-Aged Students with Limited English Proficiency

    ERIC Educational Resources Information Center

    Yang, Chin Wen; Sherman, Helene; Murdick, Nikki

    2011-01-01

    The purpose of this research study was to investigate and classify particular categories of mathematical errors made by students with Limited English Proficiency. Participants included 15 general education teachers, two English as Second Language teachers, and 91 Limited English Proficiency students. General education teachers provided mathematics…

  1. An efficient approach for limited-data chemical species tomography and its error bounds

    PubMed Central

    Polydorides, N.; Tsekenis, S.-A.; McCann, H.; Prat, V.-D. A.; Wright, P.

    2016-01-01

    We present a computationally efficient reconstruction method for the limited-data chemical species tomography problem that incorporates projection of the unknown gas concentration function onto a low-dimensional subspace, and regularization using prior information obtained from a simple flow model. In this context, the contribution of this work is on the analysis of the projection-induced data errors and the calculation of bounds for the overall image error incorporating the impact of projection and regularization errors as well as measurement noise. As an extension to this methodology, we present a variant algorithm that preserves the positivity of the concentration image.
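
    A generic sketch of the two ingredients the abstract names, subspace projection plus prior-based regularization (a toy linear problem of our own, not the paper's algorithm): solve min_c ||APc - y||² + λ||c - c_prior||² for coefficients c in a low-dimensional basis P.

      import numpy as np

      rng = np.random.default_rng(0)
      n_pix, n_rays, n_basis = 400, 32, 10       # limited data: only 32 ray integrals

      A = rng.random((n_rays, n_pix))            # measurement (path) matrix, illustrative
      P = rng.random((n_pix, n_basis))           # low-dimensional concentration basis
      c_true = rng.random(n_basis)
      y = A @ P @ c_true + 0.01 * rng.standard_normal(n_rays)

      lam = 0.1
      c_prior = np.full(n_basis, c_true.mean())  # stand-in for a simple flow model
      M = A @ P
      # Regularized normal equations: (M'M + lam*I) c = M'y + lam*c_prior
      c_hat = np.linalg.solve(M.T @ M + lam * np.eye(n_basis),
                              M.T @ y + lam * c_prior)
      print("coefficient error:", np.linalg.norm(c_hat - c_true))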

  2. Fatigue acceptance test limit criterion for larger diameter rolled thread fasteners

    SciTech Connect

    Kephart, A.R.

    1997-05-01

    This document describes a fatigue lifetime acceptance test criterion by which studs having rolled threads larger than 1.0 inches in diameter can be assured to meet minimum quality attributes associated with a controlled rolling process. This criterion is derived from a stress-dependent, room-temperature air fatigue database for test studs having 0.625 inch diameter threads of Alloys X-750 HTH and direct aged 625. Anticipated fatigue lives of larger threads are based on thread root elastic stress concentration factors, which increase with increasing thread diameter. Over the thread size range of interest, a 30% increase in notch stress is equivalent to a factor of five (5X) reduction in fatigue life. The resulting diameter-dependent fatigue acceptance criterion is normalized to the aerospace rolled thread acceptance standards for a 1.0 inch diameter, 0.125 inch pitch, Unified National thread with a controlled root radius (UNR). Testing was conducted at a stress of 50% of the minimum specified material ultimate strength, 80 ksi, and at a stress ratio (R) of 0.10. Limited test data for fastener diameters of 1.00 to 2.25 inches are compared to the acceptance criterion. Sensitivity of fatigue life of threads to test nut geometry variables was also shown to be dependent on notch stress conditions. Bearing surface concavity of the compression nuts and thread flank contact mismatch conditions can significantly affect the fastener fatigue life. Without improved controls these conditions could potentially provide misleading acceptance data. Alternate test nut geometry features are described and implemented in the rolled thread stud specification, MIL-DTL-24789(SH), to mitigate the potential effects on fatigue acceptance data.

  3. Quantum Error-Correction-Enhanced Magnetometer Overcoming the Limit Imposed by Relaxation

    NASA Astrophysics Data System (ADS)

    Herrera-Martí, David A.; Gefen, Tuvia; Aharonov, Dorit; Katz, Nadav; Retzker, Alex

    2015-11-01

    When incorporated in quantum sensing protocols, quantum error correction can be used to correct for high-frequency noise, as the correction procedure does not depend on the actual shape of the noise spectrum. As such, it provides a powerful way to complement usual refocusing techniques. Relaxation imposes a fundamental limit on the sensitivity of state-of-the-art quantum sensors which cannot be overcome by dynamical decoupling. The only way to overcome this is to utilize quantum error correcting codes. We present a superconducting magnetometry design that incorporates approximate quantum error correction, in which the signal is generated by a two-qubit Hamiltonian term. This two-qubit term is provided by the dynamics of a tunable coupler between two transmon qubits. For fast enough correction, it is possible to lengthen the coherence time of the device beyond the relaxation limit.

  4. Application of Zoning and "Limits of Acceptable Change" to Manage Snorkelling Tourism

    NASA Astrophysics Data System (ADS)

    Roman, George S. J.; Dearden, Philip; Rollins, Rick

    2007-06-01

    Zoning and applying Limits of Acceptable Change (LAC) are two promising strategies for managing tourism in Marine Protected Areas (MPAs). Typically, these management strategies require the collection and integration of ecological and socioeconomic data. This is illustrated by a case study of Koh Chang National Marine Park, Thailand. Biophysical surveys assessed coral communities in the MPA to derive indices of reef diversity and vulnerability. Social surveys assessed visitor perceptions and satisfaction with conditions encountered on snorkelling tours. Notably, increased coral mortality caused a significant decrease in visitor satisfaction. The two studies were integrated to prescribe zoning and LAC standards. As a biophysical indicator, the data suggest a LAC value of 0.35 for the coral mortality index. As a social indicator, the data suggest that a significant fraction of visitors would find a limit of under 30 snorkellers per site acceptable. The draft zoning plan prescribed four different types of zones: (I) a Conservation Zone with no access apart from monitoring or research; (II) Tourism Zones with high tourism intensities at less vulnerable reefs; (III) Ecotourism Zones with a social LAC standard of <30 snorkellers per site; and (IV) General Use Zones to meet local artisanal fishery needs. This study illustrates how ecological and socioeconomic field studies in MPAs can be integrated to craft zoning plans addressing multiple objectives.

  5. Setting limits for acceptable change in sediment particle size composition following marine aggregate dredging.

    PubMed

    Cooper, Keith M

    2012-08-01

    In the UK, Government policy requires marine aggregate extraction companies to leave the seabed in a similar physical condition after the cessation of dredging. This measure is intended to promote recovery, and the return of a similar faunal community to that which existed before dredging. Whilst the policy is sensible, and in line with the principles of sustainable development, the use of the word 'similar' is open to interpretation. There is, therefore, a need to set quantifiable limits for acceptable change in sediment composition. Using a case study site, it is shown how such limits could be defined by the range of sediment particle size composition naturally found in association with the faunal assemblages in the wider region. Whilst the approach offers a number of advantages over the present system, further testing would be required before it could be recommended for use in the regulatory context.
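
    One way to make 'similar' operational in the spirit of the abstract (our sketch with invented numbers, not Cooper's procedure): bound acceptable post-dredging sediment composition by the range observed across reference stations that support the same faunal assemblage.

      # Baseline gravel fractions (%) at reference stations for one assemblage (made up).
      baseline = [12.0, 18.5, 9.8, 14.2, 16.7, 11.3, 13.9]

      lo, hi = min(baseline), max(baseline)  # envelope of natural variability

      def acceptable(sample_gravel_pct):
          """Is the sample within the range naturally associated with the assemblage?"""
          return lo <= sample_gravel_pct <= hi

      for s in (10.5, 22.0):
          print(s, "->", "acceptable" if acceptable(s) else "outside natural range")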

  6. 5 CFR 1605.22 - Claims for correction of Board or TSP record keeper errors; time limitations.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 5 Administrative Personnel 3 2012-01-01 2012-01-01 false Claims for correction of Board or TSP record keeper errors; time limitations. 1605.22 Section 1605.22 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD CORRECTION OF ADMINISTRATIVE ERRORS Board or TSP Record Keeper Errors § 1605.22 Claims for correction of Board or...

  7. Densities mixture unfolding for data obtained from detectors with finite resolution and limited acceptance

    NASA Astrophysics Data System (ADS)

    Gagunashvili, N. D.

    2015-04-01

    A procedure based on a Mixture Density Model for correcting experimental data for distortions due to finite resolution and limited detector acceptance is presented. Addressing the case that the solution is known to be non-negative, in the approach presented here, the true distribution is estimated by a weighted sum of probability density functions with positive weights and with the width of the densities acting as a regularization parameter responsible for the smoothness of the result. To obtain better smoothing in less populated regions, the width parameter is chosen inversely proportional to the square root of the estimated density. Furthermore, the non-negative garrote method is used to find the most economic representation of the solution. Cross-validation is employed to determine the optimal values of the resolution and garrote parameters. The proposed approach is directly applicable to multidimensional problems. Numerical examples in one and two dimensions are presented to illustrate the procedure.
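
    A sketch of the adaptive-width ingredient alone (the garrote and cross-validation steps are omitted; data and widths are invented): choosing kernel widths inversely proportional to the square root of a pilot density estimate smooths sparsely populated regions more strongly.

      import numpy as np

      rng = np.random.default_rng(1)
      x = rng.exponential(1.0, 500)          # dense near 0, sparse in the tail
      grid = np.linspace(0.0, 6.0, 300)

      def kde(points, widths, where):
          """Mean of Gaussian densities with per-point widths."""
          z = (where[:, None] - points[None, :]) / widths[None, :]
          return np.mean(np.exp(-0.5 * z**2) / (widths[None, :] * np.sqrt(2 * np.pi)),
                         axis=1)

      h0 = 0.25
      pilot_at_x = kde(x, np.full_like(x, h0), x)              # pilot density at each point
      widths = h0 / np.sqrt(pilot_at_x / pilot_at_x.mean())    # width ~ 1/sqrt(density)
      adaptive_estimate = kde(x, widths, grid)
      print("tail width / head width:", widths[x.argmax()] / widths[x.argmin()])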

  8. Natural Conception May Be an Acceptable Option in HIV-Serodiscordant Couples in Resource Limited Settings.

    PubMed

    Sun, Lijun; Wang, Fang; Liu, An; Xin, Ruolei; Zhu, Yunxia; Li, Jianwei; Shao, Ying; Ye, Jiangzhu; Chen, Danqing; Li, Zaicun

    2015-01-01

    Many HIV-serodiscordant couples have a strong desire to have their own biological children. Natural conception may be the only choice in some resource limited settings, but data about natural conception are limited. Here, we report our findings on natural conception in HIV-serodiscordant couples. Between January 2008 and June 2014, we retrospectively collected data on 91 HIV-serodiscordant couples presenting to Beijing Youan Hospital with childbearing desires. HIV counseling, effective ART for HIV-infected partners, pre-exposure prophylaxis (PrEP) and post-exposure prophylaxis (PEP) for negative female partners, and timed intercourse were used to maximally reduce the risk of HIV transmission. Of the 91 HIV-serodiscordant couples, the male partner was HIV-positive in 43 and the female partner in 48. There were 196 unprotected vaginal intercourses, 100 natural conceptions and 97 newborns. There were no cases of HIV seroconversion in uninfected sexual partners. Natural conception may be an acceptable option in HIV-serodiscordant couples in resource limited settings if HIV-positive individuals have undetectable viremia on HAART, combined with HIV counseling, PrEP, PEP and timed intercourse.

  9. A hybrid variational-ensemble data assimilation scheme with systematic error correction for limited-area ocean models

    NASA Astrophysics Data System (ADS)

    Oddo, Paolo; Storto, Andrea; Dobricic, Srdjan; Russo, Aniello; Lewis, Craig; Onken, Reiner; Coelho, Emanuel

    2016-10-01

    A hybrid variational-ensemble data assimilation scheme to estimate the vertical and horizontal parts of the background error covariance matrix for an ocean variational data assimilation system is presented and tested in a limited-area ocean model implemented in the western Mediterranean Sea. An extensive data set collected during the Recognized Environmental Picture Experiments conducted in June 2014 by the Centre for Maritime Research and Experimentation has been used for assimilation and validation. The hybrid scheme is used both to correct the systematic error introduced in the system from the external forcing (initialisation, lateral and surface open boundary conditions) and model parameterisation, and to improve the representation of small-scale errors in the background error covariance matrix. An ensemble system, generated through perturbation of assimilated observations, is run offline for further use in the hybrid scheme. Results of four different experiments have been compared. The reference experiment uses the classical stationary formulation of the background error covariance matrix and has no systematic error correction. The other three experiments account for, or not, systematic error correction and a hybrid background error covariance matrix combining the static and the ensemble-derived errors of the day. Results show that the hybrid scheme, when used in conjunction with the systematic error correction, reduces the mean absolute error of the temperature and salinity misfit by 55% and 42%, respectively, versus statistics arising from standard climatological covariances without systematic error correction.
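
    The core of any such hybrid scheme is a blended background error covariance; a generic sketch (dimensions, weights and data invented, not the paper's implementation):

      import numpy as np

      rng = np.random.default_rng(0)
      n, n_ens = 50, 20

      B_static = np.eye(n) * 0.5                    # climatological ("static") covariance
      ensemble = rng.standard_normal((n_ens, n))    # stand-in ensemble of model states
      anomalies = ensemble - ensemble.mean(axis=0)
      B_ens = anomalies.T @ anomalies / (n_ens - 1) # errors-of-the-day estimate

      alpha = 0.5                                   # blending weight (tunable)
      B_hybrid = alpha * B_static + (1 - alpha) * B_ens
      print("trace static/ens/hybrid:",
            B_static.trace(), round(B_ens.trace(), 2), round(B_hybrid.trace(), 2))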

  10. Validation of analytical methods involved in dissolution assays: acceptance limits and decision methodologies.

    PubMed

    Rozet, E; Ziemons, E; Marini, R D; Boulanger, B; Hubert, Ph

    2012-11-01

    Dissolution tests are key elements to ensure continuing product quality and performance. The ultimate goal of these tests is to assure consistent product quality within a defined set of specification criteria. Validation of an analytical method aimed at assessing the dissolution profile of products or at verifying pharmacopoeial compliance should demonstrate that this analytical method is able to correctly declare two dissolution profiles as similar or drug products as compliant with respect to their specifications. It is essential to ensure that these analytical methods are fit for their purpose. Method validation is aimed at providing this guarantee. However, even in the ICH Q2 guideline there is no information explaining how to decide whether the method under validation is valid for its final purpose or not. Are all of the validation criteria needed to ensure that a Quality Control (QC) analytical method for a dissolution test is valid? What acceptance limits should be set on these criteria? How should a method's validity be decided? These are the questions that this work aims at answering. Focus is placed on complying with the current implementation of the Quality by Design (QbD) principles in the pharmaceutical industry in order to allow the Analytical Target Profile (ATP) of analytical methods involved in dissolution tests to be correctly defined. Analytical method validation is then the natural demonstration that the developed methods are fit for their intended purpose, and no longer the indiscriminate checklist validation approach still generally performed to complete the filing required to obtain product marketing authorization.

  11. Acceptability of donated breast milk in a resource limited South African setting

    PubMed Central

    2011-01-01

    Background The importance of breast milk for infants' growth, development and overall health is widely recognized. In situations where women are not able to provide their infants with sufficient amounts of their own breast milk, donor breast milk is the next preferred option. Although there is considerable research on the safety and scientific aspects of donor milk, and the motivations and experiences of donors, there is limited research addressing the attitudes and experiences of the women and families whose infants receive this milk. This study therefore examined attitudes towards donated breast milk among mothers, families and healthcare providers of potential recipient infants. Methods The study was conducted at a public hospital and nearby clinic in Durban, South Africa. The qualitative data was derived from eight focus group discussions which included four groups with mothers; one with male partners; and one with grandmothers, investigating attitudes towards receiving donated breast milk for infants. There was also one group each with nurses and doctors about their attitudes towards donated breast milk and its use in the hospital. The focus groups were conducted in September and October 2009 and each group had between four and eleven participants, leading to a total of 48 participants. Results Although breast milk was seen as important to child health there were concerns about undermining of breast milk because of concerns about HIV and marketing and promotion of formula milks. In addition there were concerns about the safety of donor breast milk and discomfort about using another mother's milk. Participants believed that education on the importance of breast milk and transparency on the processes involved in sourcing and preparing donor milk would improve the acceptability. Conclusions This study has shown that there are obstacles to the acceptability of donor milk, mainly stemming from lack of awareness/familiarity with the processes around donor breast

  12. Understanding the Factors Limiting the Acceptability of Online Courses and Degrees

    ERIC Educational Resources Information Center

    Adams, Jonathan

    2008-01-01

    This study examines prior research conducted on the acceptability of online degrees in hiring situations. In a national survey, a questionnaire was developed for assessing the importance of objections to accepting job candidates with online degrees and sent to university search committee chairs in institutions advertising open faculty positions…

  13. Confidence limits, error bars and method comparison in molecular modeling. Part 2: comparing methods.

    PubMed

    Nicholls, A

    2016-02-01

    The calculation of error bars for quantities of interest in computational chemistry comes in two forms: (1) determining the confidence of a prediction, for instance of the property of a molecule; (2) assessing uncertainty in measuring the difference between properties, for instance between performance metrics of two or more computational approaches. While a former paper in this series concentrated on the first of these, this second paper focuses on comparison, i.e., how to calculate differences between methods in an accurate and statistically valid manner. Described within are classical statistical approaches for comparing widely used metrics such as enrichment, area under the curve and Pearson's product-moment coefficient, as well as generic measures. These are considered over single and multiple sets of data and for two or more methods that evince either independent or correlated behavior. General issues concerning significance testing and confidence limits from a Bayesian perspective are discussed, along with size-of-effect aspects of evaluation.
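
    For two methods evaluated on the same data, the correlated comparison the paper formalizes can be approximated with a paired bootstrap of the metric difference; a minimal sketch with synthetic data (our toy, not the paper's code):

      import numpy as np

      rng = np.random.default_rng(0)
      n = 200
      truth = rng.standard_normal(n)
      method_a = truth + 0.5 * rng.standard_normal(n)  # synthetic predictions
      method_b = truth + 0.7 * rng.standard_normal(n)

      def pearson(u, v):
          return np.corrcoef(u, v)[0, 1]

      diffs = []
      for _ in range(2000):                 # resample items, keeping pairs intact
          idx = rng.integers(0, n, n)
          diffs.append(pearson(truth[idx], method_a[idx])
                       - pearson(truth[idx], method_b[idx]))

      lo, hi = np.percentile(diffs, [2.5, 97.5])
      # If the interval excludes 0, the difference between methods is significant.
      print(f"95% CI for r_A - r_B: [{lo:.3f}, {hi:.3f}]")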

  14. [Limitations of venous function diagnosis using laboratory apparatus - possibility of errors].

    PubMed

    Partsch, H; Santler, R

    1978-01-01

    The authors discuss some questions concerning the limitations and the errors of methods involving Doppler ultrasound, plethysmography and phlebodynamics in phlebological diagnosis. The Doppler technique is the most useful one for detecting a valvular insufficiency or a venous obstruction of the thigh. Plethysmography is a simple method of detecting a pelvic or femoral thrombosis. It does not make it possible, at the level of the calf, to distinguish between a recent thrombosis and a post-thrombotic state. It cannot be used when there is a large edema. It is inadequate for studying the function of the muscular pump of the calf, which can instead be done by phlebodynamics. This method is now the most precise technique for quantitatively studying the total function of the muscular pump of the calf. It can be easily reproduced, and the effect of therapeutic measures can thus be anticipated.

  15. First Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Data Processing Methods and Systematic Error Limits

    NASA Technical Reports Server (NTRS)

    Hinshaw, G.; Barnes, C.; Bennett, C. L.; Greason, M. R.; Halpern, M.; Hill, R. S.; Jarosik, N.; Kogut, A.; Limon, M.; Meyer, S. S.

    2003-01-01

    We describe the calibration and data processing methods used to generate full-sky maps of the cosmic microwave background (CMB) from the first year of Wilkinson Microwave Anisotropy Probe (WMAP) observations. Detailed limits on residual systematic errors are assigned based largely on analyses of the flight data supplemented, where necessary, with results from ground tests. The data are calibrated in flight using the dipole modulation of the CMB due to the observatory's motion around the Sun. This constitutes a full-beam calibration source. An iterative algorithm simultaneously fits the time-ordered data to obtain calibration parameters and pixelized sky map temperatures. The noise properties are determined by analyzing the time-ordered data with this sky signal estimate subtracted. Based on this, we apply a pre-whitening filter to the time-ordered data to remove a low level of 1/f noise. We infer and correct for a small (approx. 1%) transmission imbalance between the two sky inputs to each differential radiometer, and we subtract a small sidelobe correction from the 23 GHz (K band) map prior to further analysis. No other systematic error corrections are applied to the data. Calibration and baseline artifacts, including the response to environmental perturbations, are negligible. Systematic uncertainties are comparable to statistical uncertainties in the characterization of the beam response. Both are accounted for in the covariance matrix of the window function and are propagated to uncertainties in the final power spectrum. We characterize the combined upper limits to residual systematic uncertainties through the pixel covariance matrix.

  16. Fatigue acceptance test limit criteria for larger diameter rolled thread fasteners

    SciTech Connect

    Kephart, A.R.

    1999-05-19

    This document describes a fatigue lifetime acceptance test criterion by which studs having rolled threads, larger than 1.0 inches (25 mm) in diameter, can be assured to meet minimum quality attributes associated with a controlled rolling process.

  17. La composition academique: les limites de l'acceptabilite (Composition for Academic Purposes: Criteria for Acceptability).

    ERIC Educational Resources Information Center

    Grenall, G. M.

    1981-01-01

    Examines the pedagogical approaches and problems attendant to the development of English writing programs for foreign students. Discusses the skills necessary to handle course work, such as essay tests, term papers and reports, theses and dissertations, and focuses particularly on diagnostic problems and acceptability criteria.

  1. 12 CFR 250.163 - Inapplicability of amount limitations to “ineligible acceptances.”

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12th paragraph of sec. 13 of the Federal Reserve Act, which paragraph is omitted from the United States... that “the acceptance power of State member banks is not necessarily confined to the provisions of section 13 (of the Federal Reserve Act), inasmuch as the laws of many States confer broader...

  2. 12 CFR 250.163 - Inapplicability of amount limitations to “ineligible acceptances.”

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... omitted from the United States Code) ... has been of the view that “the acceptance power of State member banks is not necessarily confined to the provisions of section 13 (of the Federal Reserve Act), inasmuch as the laws of many States...

  3. The limits of parental authority to accept or refuse medical treatment.

    PubMed

    Miller, Geoffrey

    2011-06-01

    The legal and ethical right of parents to refuse medical treatment for their children differs from the authority possessed by competent adults with decisional capacity. Parents have a duty to act in the best interests of their children from the children's perspective and not to inflict harm. Best interests are determined by weighing benefits and burdens, which includes using evidence-based outcomes and value judgments. The result is placed along a risk/benefit spectrum. If the result is close to low risk/high benefit, the parents have a strong obligation to accept a health care team recommendation. Otherwise, parents may choose between reasonable medical options without threat or coercion.

  4. Wireless smart meters and public acceptance: the environment, limited choices, and precautionary politics.

    PubMed

    Hess, David J; Coley, Jonathan S

    2014-08-01

    Wireless smart meters (WSMs) promise numerous environmental benefits, but they have been installed without full consideration of public acceptance issues. Although societal-implications research and regulatory policy have focused on privacy, security, and accuracy issues, our research indicates that health concerns have played an important role in the public policy debates that have emerged in California. Regulatory bodies do not recognize non-thermal health effects for non-ionizing electromagnetic radiation, but both homeowners and counter-experts have contested the official assurances that WSMs pose no health risks. Similarities and differences with the existing social science literature on mobile phone masts are discussed, as are the broader political implications of framing an alternative policy based on an opt-out choice. The research suggests conditions under which health-oriented precautionary politics can be particularly effective, namely, if there is a mandatory technology, a network of counter-experts, and a broader context of democratic contestation.

  5. A Complementary Note to 'A Lag-1 Smoother Approach to System-Error Estimation': The Intrinsic Limitations of Residual Diagnostics

    NASA Technical Reports Server (NTRS)

    Todling, Ricardo

    2015-01-01

    Recently, this author studied an approach to the estimation of system error based on combining observation residuals derived from a sequential filter and a fixed lag-1 smoother. While extending the methodology to a variational formulation, experimenting with simple models, and verifying consistency between the sequential and variational formulations, the limitations of the residual-based approach came clearly to the surface. This note uses the sequential assimilation application to simple nonlinear dynamics to highlight the issue. Only when some of the underlying error statistics are assumed known is it possible to estimate the unknown component. In general, when considerable uncertainties exist in the underlying statistics as a whole, attempts to obtain separate estimates of the various error covariances are bound to lead to misrepresentation of errors. The conclusions are particularly relevant to present-day attempts to estimate observation-error correlations from observation residual statistics. A brief illustration of the issue is also provided by comparing estimates of error correlations derived from a quasi-operational assimilation system and a corresponding Observing System Simulation Experiments framework.
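
    The identifiability problem has a one-line scalar illustration (ours, not the note's experiments): innovation statistics constrain only the sum of the background and observation error variances, so no residual diagnostic can split them without assuming one part known.

      import numpy as np

      rng = np.random.default_rng(0)
      B_true, R_true = 2.0, 0.5       # background and observation error variances (toy)
      n = 100_000

      background_err = rng.normal(0, np.sqrt(B_true), n)
      obs_err = rng.normal(0, np.sqrt(R_true), n)
      innovations = obs_err - background_err   # y - H(x_b) for a scalar identity H

      print("var(innovation):", innovations.var().round(3))  # ~ B + R = 2.5
      # (B, R) = (2.0, 0.5), (1.5, 1.0), (0.5, 2.0), ... all reproduce this statistic;
      # separating them requires assuming one of the underlying error variances.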

  6. The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates

    NASA Technical Reports Server (NTRS)

    Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2008-01-01

    We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.
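
    A toy of the bounded-angle idea (our construction, with an invented 4-symbol codebook, not the paper's codes or bound): decode to the nearest codeword only when the received vector lies within a fixed angle of it, otherwise declare a detected error; loosening the angle converts detected failures into undetected ones.

      import numpy as np

      rng = np.random.default_rng(0)
      codebook = np.array([[1, 1, 1, 1], [1, 1, -1, -1],
                           [-1, -1, 1, 1], [-1, -1, -1, -1]], dtype=float)  # toy code

      def ba_ml_decode(r, cos_theta):
          """Return index of nearest codeword, or None (detected error)
          if the angle to it exceeds the bound."""
          cos = codebook @ r / (np.linalg.norm(codebook, axis=1) * np.linalg.norm(r))
          best = int(np.argmax(cos))
          return best if cos[best] >= cos_theta else None

      undetected = detected = 0
      trials = 20_000
      for _ in range(trials):
          sent = rng.integers(len(codebook))
          r = codebook[sent] + rng.normal(0, 1.0, 4)       # AWGN channel
          out = ba_ml_decode(r, cos_theta=0.5)             # tighten toward 1.0 to detect more
          if out is None:
              detected += 1
          elif out != sent:
              undetected += 1

      print(f"detected: {detected/trials:.3f}, undetected: {undetected/trials:.3f}")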

  7. 241-SY-101 DACS High hydrogen abort limit reduction (SCR 473) acceptance test report

    SciTech Connect

    ERMI, A.M.

    1999-09-09

    The capability of the 241-SY-101 Data Acquisition and Control System (DACS) computer system to provide proper control and monitoring of the 241-SY-101 underground storage tank hydrogen monitoring system utilizing the reduced hydrogen abort limit of 0.69% was systematically evaluated by the performance of ATP HNF-4927. This document reports the results of the ATP.

  8. Predicting tool operator capacity to react against torque within acceptable handle deflection limits in automotive assembly.

    PubMed

    Radwin, Robert G; Chourasia, Amrish; Fronczak, Frank J; Subedi, Yashpal; Howery, Robert; Yen, Thomas Y; Sesto, Mary E; Irwin, Curtis B

    2016-05-01

    The proportion of tool operators capable of maintaining published psychophysically derived threaded-fastener tool handle deflection limits was predicted using a biodynamic tool operator model interacting with the tool, task and workstation. Tool parameters, including geometry, speed and torque, were obtained from the specifications for 35 tools used in an auto assembly plant. Tool mass moments of inertia were measured for these tools using a novel device that engages the tool in a rotating system of known inertia. Task parameters, including fastener target torque and joint properties (soft, medium or hard), were ascertained from the vehicle design specifications. Workstation parameters, including vertical and horizontal distances from the operator, were measured using a laser rangefinder for 69 tool installations in the plant. These parameters were entered into the model and tool handle deflection was predicted for each job. While handle deflection for most jobs did not exceed the capacity of 75% of females and 99% of males, six jobs exceeded the deflection criterion. Those tool installations were examined, and modifications in tool speed and operator position brought those jobs within the deflection limits, as predicted by the model. We conclude that biodynamic tool operator models may be useful for identifying stressful tool installations and interventions that bring them within the capacity of most operators.

  9. Fault-tolerant and finite-error localization for point emitters within the diffraction limit.

    PubMed

    Tang, Zong Sheng; Durak, Kadir; Ling, Alexander

    2016-09-19

    We implement a self-interference technique for determining the separation between two incoherent point sources. This method relies on image inversion interferometry and when used with the appropriate data analytics, it yields an estimate of the separation with finite-error, including the case when the sources overlap completely. The experimental results show that the technique has a good tolerance to noise and misalignment, making it an interesting consideration for high resolution instruments in microscopy or astronomy.

  10. X-ray optics metrology limited by random noise, instrumental drifts, and systematic errors

    SciTech Connect

    Yashchuk, Valeriy V.; Anderson, Erik H.; Barber, Samuel K.; Cambie, Rossana; Celestre, Richard; Conley, Raymond; Goldberg, Kenneth A.; McKinney, Wayne R.; Morrison, Gregory; Takacs, Peter Z.; Voronov, Dmitriy L.; Yuan, Sheng; Padmore, Howard A.

    2010-07-09

    Continuous, large-scale efforts to improve and develop third- and fourth-generation synchrotron radiation light sources for unprecedented high-brightness, low-emittance, and coherent x-ray beams demand diffracting and reflecting x-ray optics suitable for micro- and nano-focusing, brightness preservation, and super high resolution. One of the major impediments to the development of x-ray optics with the required beamline performance is the inadequate present level of optical and at-wavelength metrology and the insufficient integration of the metrology into the fabrication process and into beamlines. Based on our experience at the ALS Optical Metrology Laboratory, we review the experimental methods and techniques that allow us to mitigate significant optical metrology problems related to random, systematic, and drift errors with super-high-quality x-ray optics. Measurement errors below 0.2 µrad have become routine. We present recent results from the ALS of temperature-stabilized nano-focusing optics and dedicated at-wavelength metrology. The international effort to develop a next-generation Optical Slope Measuring System (OSMS) to address these problems is also discussed. Finally, we analyze the remaining obstacles to further improvement of beamline x-ray optics and dedicated metrology, and highlight the ways we see to overcome the problems.

  12. Estimation of glucose kinetics in fetal-maternal studies: Potential errors, solutions, and limitations

    SciTech Connect

    Menon, R.K.; Bloch, C.A.; Sperling, M.A. )

    1990-06-01

    We investigated whether errors occur in the estimation of ovine maternal-fetal glucose (Glc) kinetics using the isotope dilution technique when the Glc pool is rapidly expanded by exogenous (protocol A) or endogenous (protocol C) Glc entry and sought possible solutions (protocol B). In protocol A (n = 8), after attaining steady-state Glc specific activity (SA) by (U-14C)glucose (period 1), infusion of Glc (period 2) predictably decreased Glc SA, whereas (U-14C)glucose concentration unexpectedly rose from 7,208 +/- 367 (means +/- SE) in period 1 to 8,558 +/- 308 disintegrations/min (dpm) per ml in period 2 (P less than 0.01). Fetal endogenous Glc production (EGP) was negligible during period 1 (0.44 +/- 1.0) but yielded a physiologically impossible negative value of -2.1 +/- 0.72 mg.kg-1.min-1 during period 2. When the fall in Glc SA during Glc infusion was prevented by addition of (U-14C)glucose admixed with the exogenous Glc (protocol B; n = 7), EGP was no longer negative. In protocol C (n = 6), sequential infusions of four increasing doses of epinephrine serially decreased SA, whereas tracer Glc increased from 7,483 +/- 608 to 11,525 +/- 992 dpm/ml plasma (P less than 0.05), imposing an obligatory underestimation of EGP. Thus a tracer mixing problem leads to erroneous estimations of fetal Glc utilization and Glc production via the three-compartment model in sheep when the Glc pool is expanded exogenously or endogenously. These errors can be minimized by maintaining the Glc SA relatively constant.
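
    For context, the steady-state isotope-dilution relations that underlie such estimates take the standard textbook form (not quoted from this paper); with F the tracer infusion rate, SA the plasma glucose specific activity, and I_exo the exogenous glucose infusion rate:

        R_a = \frac{F}{SA}, \qquad EGP = R_a - I_{\mathrm{exo}}

    When tracer mixing is incomplete and SA is mismeasured, R_a is biased, and the subtraction can yield the kind of spurious negative EGP values reported above.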

  13. 20 CFR 404.780 - Evidence of “good cause” for exceeding time limits on accepting proof of support or application...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... limits on accepting proof of support or application for a lump-sum death payment. 404.780 Section 404.780 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950... accepting proof of support or application for a lump-sum death payment. (a) When evidence of good cause...

  14. Internal consistency of fault-tolerant quantum error correction in light of rigorous derivations of the quantum Markovian limit

    SciTech Connect

    Alicki, Robert; Lidar, Daniel A.; Zanardi, Paolo

    2006-05-15

    We critically examine the internal consistency of a set of minimal assumptions entering the theory of fault-tolerant quantum error correction for Markovian noise. These assumptions are fast gates, a constant supply of fresh and cold ancillas, and a Markovian bath. We point out that these assumptions may not be mutually consistent in light of rigorous formulations of the Markovian approximation. Namely, Markovian dynamics requires either the singular coupling limit (high temperature), or the weak coupling limit (weak system-bath interaction). The former is incompatible with the assumption of a constant and fresh supply of cold ancillas, while the latter is inconsistent with fast gates. We discuss ways to resolve these inconsistencies. As part of our discussion we derive, in the weak coupling limit, a new master equation for a system subject to periodic driving.

  15. Evaluation and Acceptability of a Simplified Test of Visual Function at Birth in a Limited-Resource Setting.

    PubMed

    Carrara, Verena I; Darakomon, Mue Chae; Thin, Nant War War; Paw, Naw Ta Kaw; Wah, Naw; Wah, Hser Gay; Helen, Naw; Keereecharoen, Suporn; Paw, Naw Ta Mlar; Jittamala, Podjanee; Nosten, François H; Ricci, Daniela; McGready, Rose

    2016-01-01

    Neurological examination, including visual fixation and tracking of a target, is routinely performed in the Shoklo Malaria Research Unit postnatal care units on the Thailand-Myanmar border. We aimed to evaluate a simple visual newborn test developed in Italy and performed by non-specialized personnel working in neonatal care units. An intensive training of local health staff in Thailand was conducted prior to performing assessments at 24, 48 and 72 hours of life in healthy, low-risk term singletons. The 48- and 72-hour results were then compared to values obtained in Italy. Parents and staff administering the test reported on acceptability. One hundred and seventy-nine newborns, between June 2011 and October 2012, participated in the study. The test was rapidly completed if the infant remained in an optimal behavioral stage (7 ± 2 minutes), but the test duration increased significantly (12 ± 4 minutes, p < 0.001) if the infant's behavior changed. Infants were able to fix a target and to discriminate a colored face at 24 hours of life. Horizontal tracking of a target was achieved by 96% (152/159) of the infants at 48 hours. Circular tracking, stripe discrimination and attention to distance significantly improved between each 24-hour test period. The test was easily performed by non-specialized local staff and well accepted by the parents. Healthy term singletons in this limited-resource setting have a visual response similar to that obtained in gestational-age-matched newborns in Italy. It is possible to use these results as a reference set of values for the visual assessment of Karen and Burmese infants in the first 72 hours of life. The utility of the 24-hour test should be pursued. PMID:27300137

  17. 20 CFR 404.780 - Evidence of “good cause” for exceeding time limits on accepting proof of support or application...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Evidence of “good cause” for exceeding time limits on accepting proof of support or application for a lump-sum death payment. 404.780 Section 404.780...- ) Evidence Other Evidence Requirements § 404.780 Evidence of “good cause” for exceeding time limits...

  18. 20 CFR 404.780 - Evidence of “good cause” for exceeding time limits on accepting proof of support or application...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Evidence of “good cause” for exceeding time limits on accepting proof of support or application for a lump-sum death payment. 404.780 Section 404.780...- ) Evidence Other Evidence Requirements § 404.780 Evidence of “good cause” for exceeding time limits...

  19. Analysis and mitigation of systematic errors in spectral shearing interferometry of pulses approaching the single-cycle limit [Invited

    SciTech Connect

    Birge, Jonathan R.; Kaertner, Franz X.

    2008-06-15

    We derive an analytical approximation for the measured pulse width error in spectral shearing methods, such as spectral phase interferometry for direct electric-field reconstruction (SPIDER), caused by an anomalous delay between the two sheared pulse components. This analysis suggests that, as pulses approach the single-cycle limit, the resulting requirements on the calibration and stability of this delay become significant, requiring precision orders of magnitude higher than the scale of a wavelength. This is demonstrated by numerical simulations of SPIDER pulse reconstruction using actual data from a sub-two-cycle laser. We briefly propose methods to minimize the effects of this sensitivity in SPIDER and review variants of spectral shearing that attempt to avoid this difficulty.

  20. Errors, limitations, and pitfalls in the diagnosis of central and peripheral nervous system lesions in intraoperative cytology and frozen sections

    PubMed Central

    Chand, Priyanka; Amit, Sonal; Gupta, Raghvendra; Agarwal, Asha

    2016-01-01

    Context: Intraoperative cytology and frozen section play an important role in the diagnosis of neurosurgical specimens. There are limitations in both these procedures, but understanding the errors and pitfalls may help in increasing the diagnostic yield. Aims: To find the diagnostic accuracy of intraoperative cytology and frozen section for central and peripheral nervous system (PNS) lesions and analyze the errors, pitfalls, and limitations in these procedures. Settings and Design: Eighty cases were included in this prospective study over a span of 1.5 years. Materials and Methods: The crush preparations and the frozen sections were stained with the hematoxylin and eosin method. The diagnoses from the crush smears and the frozen sections were compared with the diagnosis from the paraffin section, which was considered the gold standard. Statistical Analyses Used: Diagnostic accuracy, sensitivity, and specificity. Results: The diagnostic accuracy of crush smears was 91.25% with a sensitivity of 95.5% and specificity of 100%. In the frozen sections, the overall diagnostic accuracy was 95%, sensitivity was 96.8%, and specificity was 100%. The categories of pitfalls noted in this study were categorization of spindle cell lesions, differentiation of oligodendroglioma from astrocytoma in frozen sections, differentiation of coagulative tumor necrosis of glioblastoma multiforme (GBM) from the caseous necrosis of tuberculosis, grading of gliomas in frozen section, and differentiation of the normal granular cells of the cerebellum from lymphocytes in cytological smears. Conclusions: Crush smear and frozen section are complementary procedures. When both are used together, the diagnostic yield is substantially increased. PMID:27279685

  1. On the construction and analysis of stochastic models: Characterization and propagation of the errors associated with limited data

    SciTech Connect

    Ghanem, Roger G. (E-mail: ghanem@usc.edu); Doostan, Alireza (E-mail: doostan@jhu.edu)

    2006-09-01

    This paper investigates the predictive accuracy of stochastic models. In particular, a formulation is presented for the impact of data limitations associated with the calibration of parameters for these models, on their overall predictive accuracy. In the course of this development, a new method for the characterization of stochastic processes from corresponding experimental observations is obtained. Specifically, polynomial chaos representations of these processes are estimated that are consistent, in some useful sense, with the data. The estimated polynomial chaos coefficients are themselves characterized as random variables with known probability density function, thus permitting the analysis of the dependence of their values on further experimental evidence. Moreover, the error in these coefficients, associated with limited data, is propagated through a physical system characterized by a stochastic partial differential equation (SPDE). This formalism permits the rational allocation of resources in view of studying the possibility of validating a particular predictive model. A Bayesian inference scheme is relied upon as the logic for parameter estimation, with its computational engine provided by a Metropolis-Hastings Markov chain Monte Carlo procedure.
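
    The parameter-estimation engine named above is a Metropolis-Hastings Markov chain Monte Carlo procedure. A minimal random-walk Metropolis-Hastings sketch in Python follows; the one-parameter Gaussian prior/likelihood and the data are illustrative stand-ins for the paper's polynomial chaos/SPDE setting, not its actual model.

        # Minimal random-walk Metropolis-Hastings sampler for one parameter.
        import math
        import random

        data = [1.2, 0.8, 1.1, 0.9, 1.3]   # hypothetical observations

        def log_posterior(theta):
            # Standard-normal prior plus unit-variance Gaussian likelihood.
            lp = -0.5 * theta ** 2
            lp += sum(-0.5 * (x - theta) ** 2 for x in data)
            return lp

        theta, step, samples = 0.0, 0.5, []
        for _ in range(10000):
            proposal = theta + random.gauss(0.0, step)
            # Accept with probability min(1, posterior ratio).
            if math.log(random.random()) < log_posterior(proposal) - log_posterior(theta):
                theta = proposal
            samples.append(theta)

        burn = samples[2000:]   # discard burn-in
        print("posterior mean ~", sum(burn) / len(burn))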

  2. Water-balance uncertainty in Honduras: a limits-of-acceptability approach to model evaluation using a time-variant rating curve

    NASA Astrophysics Data System (ADS)

    Westerberg, I.; Guerrero, J.-L.; Beven, K.; Seibert, J.; Halldin, S.; Lundin, L.-C.; Xu, C.-Y.

    2009-04-01

    The climate of Central America is highly variable both spatially and temporally; extreme events like floods and droughts are recurrent phenomena posing great challenges to regional water-resources management. Scarce and low-quality hydro-meteorological data complicate hydrological modelling and few previous studies have addressed the water-balance in Honduras. In the alluvial Choluteca River, the river bed changes over time as fill and scour occur in the channel, leading to a fast-changing relation between stage and discharge and difficulties in deriving consistent rating curves. In this application of a four-parameter water-balance model, a limits-of-acceptability approach to model evaluation was used within the General Likelihood Uncertainty Estimation (GLUE) framework. The limits of acceptability were determined for discharge alone for each time step, and ideally a simulated result should always be contained within the limits. A moving-window weighted fuzzy regression of the ratings, based on estimated uncertainties in the rating-curve data, was used to derive the limits. This provided an objective way to determine the limits of acceptability and handle the non-stationarity of the rating curves. The model was then applied within GLUE and evaluated using the derived limits. Preliminary results show that the best simulations are within the limits 75-80% of the time, indicating that precipitation data and other uncertainties like model structure also have a significant effect on predictability.
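
    A minimal sketch of the limits-of-acceptability check described above: at each time step a simulated discharge must fall between observation-derived bounds, and the score is the fraction of steps inside the limits. All series values are invented for illustration.

        # Fraction of time steps at which a simulation stays inside
        # per-step limits of acceptability (hypothetical numbers).
        lower = [ 8.0, 10.0,  9.5, 12.0, 11.0]   # lower limit per step
        upper = [12.0, 14.0, 13.5, 16.0, 15.0]   # upper limit per step
        simulated = [11.0, 13.2, 14.0, 12.5, 14.8]

        inside = sum(lo <= q <= hi for q, lo, hi in zip(simulated, lower, upper))
        score = inside / len(simulated)
        print(f"simulation within limits {100 * score:.0f}% of the time")
        # In GLUE, parameter sets scoring below a chosen threshold (e.g. the
        # 75-80% reported here) would be rejected as non-behavioural.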

  3. Achieving the Complete-Basis Limit in Large Molecular Clusters: Computationally Efficient Procedures to Eliminate Basis-Set Superposition Error

    NASA Astrophysics Data System (ADS)

    Richard, Ryan M.; Herbert, John M.

    2013-06-01

    Previous electronic structure studies that have relied on fragmentation have been primarily interested in those methods' abilities to replicate the supersystem energy (or a related energy difference), without regard for whether those supersystem results replicate experiment or high-accuracy benchmarks. Here we focus on replicating accurate ab initio benchmarks that are suitable for comparison to experimental data. In doing this it becomes imperative that we correct our methods for basis-set superposition errors (BSSE) in a computationally feasible way. This criterion leads us to develop a new method for BSSE correction, which we term the many-body counterpoise correction, or MBn for short. MBn is truncated at order n, in much the same manner as a normal many-body expansion, leading to a decrease in computational time. Furthermore, its formulation in terms of fragments makes it especially suitable for use with pre-existing fragment codes. A secondary focus of this study is directed at assessing fragment methods' abilities to extrapolate to the complete basis set (CBS) limit as well as to compute approximate triples corrections. Ultimately, by analysis of (H_2O)_6 and (H_2O)_{10}F^- systems, it is concluded that with large enough basis sets (triple- or quadruple-zeta) fragment-based methods can replicate high-level benchmarks in a fraction of the time.
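
    For reference, MBn generalizes the standard Boys-Bernardi counterpoise correction; for a dimer AB the standard form reads (with E_S^B denoting the energy of system S computed in basis B, all at the dimer geometry):

        E_{\mathrm{int}}^{\mathrm{CP}} = E_{AB}^{AB} - E_{A}^{AB} - E_{B}^{AB}

    The MBn scheme truncates the analogous fragment-basis corrections at n-body terms, so the cost grows with the chosen order rather than with the full cluster size.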

  4. Effect of model error on precipitation forecasts in the high-resolution limited area ensemble prediction system of the Korea Meteorological Administration

    NASA Astrophysics Data System (ADS)

    Kim, SeHyun; Kim, Hyun Mee

    2015-04-01

    In numerical weather prediction using convective-scale model resolution, forecast uncertainties are caused by initial condition error, boundary condition error, and model error. Because convective-scale forecasts are influenced by subgrid scale processes which cannot be resolved easily, the model error becomes more important than the initial and boundary condition errors. To consider the model error, multi-model and multi-physics methods use several models and physics schemes and the stochastic physics method uses random numbers to create a noise term in the model equations (e.g. Stochastic Perturbed Parameterization Tendency (SPPT), Stochastic Kinetic Energy Backscatter (SKEB), Stochastic Convective Vorticity (SCV), and Random Parameters (RP)). In this study, the RP method was used to consider the model error in the high-resolution limited area ensemble prediction system (EPS) of the Korea Meteorological Administration (KMA). The EPS has 12 ensemble members with 3 km horizontal resolution which generate 48 h forecasts. The initial and boundary conditions were provided by the global EPS of the KMA. The RP method was applied to microphysics and boundary layer schemes, and the ensemble forecasts using RP were compared with those without RP during July 2013. Both Root Mean Square Error (RMSE) and spread of wind at 10 m verified by surface Automatic Weather System (AWS) observations decreased when using RP. However, for 1 hour accumulated precipitation, the spread increased with RP and Equitable Threat Score (ETS) showed different results for each rainfall event.
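
    A toy illustration of the Random Parameters idea: each ensemble member carries physics parameters drawn from expert-specified ranges and evolved as a bounded random walk during the forecast. Parameter names, ranges, and shock sizes below are invented, not the KMA configuration.

        # Illustrative RP perturbation of per-member physics parameters.
        import random

        RANGES = {"entrainment": (0.5, 1.5), "autoconversion": (0.8, 1.2)}

        def perturb(params):
            new = {}
            for name, value in params.items():
                lo, hi = RANGES[name]
                value += random.uniform(-0.05, 0.05)   # small shock each update
                new[name] = min(hi, max(lo, value))    # keep within the range
            return new

        member = {name: random.uniform(*rng) for name, rng in RANGES.items()}
        for step in range(5):
            member = perturb(member)
            print(step, member)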

  5. 5 CFR 1605.22 - Claims for correction of Board or TSP record keeper errors; time limitations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... participant or beneficiary. (b) Board's or TSP record keeper's discovery of error. (1) Upon discovery of an... before its discovery, the Board or the TSP record keeper may exercise sound discretion in deciding... error if it is discovered before 30 days after the issuance of the earlier of the most recent...

  6. 5 CFR 1605.22 - Claims for correction of Board or TSP record keeper errors; time limitations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... participant or beneficiary. (b) Board's or TSP record keeper's discovery of error. (1) Upon discovery of an... before its discovery, the Board or the TSP record keeper may exercise sound discretion in deciding... error if it is discovered before 30 days after the issuance of the earlier of the most recent...

  7. 5 CFR 1605.22 - Claims for correction of Board or TSP record keeper errors; time limitations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... participant or beneficiary. (b) Board's or TSP record keeper's discovery of error. (1) Upon discovery of an... before its discovery, the Board or the TSP record keeper may exercise sound discretion in deciding... error if it is discovered before 30 days after the issuance of the earlier of the most recent...

  8. Error Analysis

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Input data as well as the results of elementary operations have to be represented by machine numbers, the subset of real numbers which is used by the arithmetic unit of today's computers. Generally this generates rounding errors. This kind of numerical error can be avoided in principle by using arbitrary-precision arithmetic or symbolic algebra programs. But this is impractical in many cases due to the increase in computing time and memory requirements. Results from more complex operations like square roots or trigonometric functions can have even larger errors, since series expansions have to be truncated and iterations accumulate the errors of the individual steps. In addition, the precision of input data from an experiment is limited. In this chapter we study the influence of numerical errors on the uncertainties of the calculated results and the stability of simple algorithms.
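
    A small Python illustration of the two effects described: accumulation of rounding error under repeated addition of a value with no exact binary representation, and truncation error from a short series expansion.

        # 0.1 has no exact binary representation; repeated addition drifts.
        total = 0.0
        for _ in range(10_000):
            total += 0.1
        print(total)             # slightly off from 1000.0
        print(total == 1000.0)   # False

        # Truncating a series adds method error that grows with x.
        import math
        x = 2.0
        approx = x - x**3 / 6 + x**5 / 120   # 3-term Taylor series for sin(x)
        print(abs(approx - math.sin(x)))     # noticeable truncation error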

  9. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki

    1997-01-01

    A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research of flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  10. 5 CFR 1605.22 - Claims for correction of Board or TSP record keeper errors; time limitations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... participant or beneficiary. (b) Board's or TSP record keeper's discovery of error. (1) Upon discovery of an... before its discovery, the Board or the TSP record keeper may exercise sound discretion in deciding... correct it, but, in any event, must act promptly in doing so. (c) Participant's or beneficiary's...

  11. DWPF COAL-CARBON WASTE ACCEPTANCE CRITERIA LIMIT EVALUATION BASED ON EXPERIMENTAL WORK (TANK 48 IMPACT STUDY)

    SciTech Connect

    Lambert, D.; Choi, A.

    2010-10-15

    This report summarizes the results of both experimental and modeling studies performed using Sludge Batch 10 (SB10) simulants and FBSR product from Tank 48 simulant testing in order to develop higher levels of coal-carbon that can be managed by DWPF. Once the Fluidized Bed Steam Reforming (FBSR) process starts up for treatment of Tank 48 legacy waste, the FBSR product stream will contribute higher levels of coal-carbon in the sludge batch for processing at DWPF. Coal-carbon is added into the FBSR process as a reductant and some of it will be present in the FBSR product as unreacted coal. The FBSR product will be slurried in water, transferred to Tank Farm and will be combined with sludge and washed to produce the sludge batch that DWPF will process. The FBSR product is high in both water soluble sodium carbonate and unreacted coal-carbon. Most of the sodium carbonate is removed during washing but all of the coal-carbon will remain and become part of the DWPF sludge batch. A paper study was performed earlier to assess the impact of FBSR coal-carbon on the DWPF Chemical Processing Cell (CPC) operation and melter off-gas flammability by combining it with SB10-SB13. The results of the paper study are documented in Ref. 7 and the key findings included that SB10 would be the most difficult batch to process with the FBSR coal present and up to 5,000 mg/kg of coal-carbon could be fed to the melter without exceeding the off-gas flammability safety basis limits. In the present study, a bench-scale demonstration of the DWPF CPC processing was performed using SB10 simulants spiked with varying amounts of coal, and the resulting seven CPC products were fed to the DWPF melter cold cap and off-gas dynamics models to determine the maximum coal that can be processed through the melter without exceeding the off-gas flammability safety basis limits. Based on the results of these experimental and modeling studies, the presence of coal-carbon in the sludge feed to DWPF is found to have

  12. Action errors, error management, and learning in organizations.

    PubMed

    Frese, Michael; Keith, Nina

    2015-01-01

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  13. Adaptive tracking control for double-pendulum overhead cranes subject to tracking error limitation, parametric uncertainties and external disturbances

    NASA Astrophysics Data System (ADS)

    Zhang, Menghua; Ma, Xin; Rong, Xuewen; Tian, Xincheng; Li, Yibin

    2016-08-01

    In practical applications, overhead cranes are usually subject to system parameter uncertainties, such as uncertain payload masses, cable lengths, and frictions, and to external disturbances, such as air resistance. Most existing crane control methods treat the payload swing as that of a single pendulum. However, certain types of payloads and hoisting mechanisms result in double-pendulum dynamics, which cause most existing crane control methods to fail to work normally. Therefore, an adaptive tracking controller for double-pendulum overhead cranes subject to parametric uncertainties and external disturbances is developed in this paper. The proposed adaptive tracking control method guarantees that the trolley tracking error always remains within prescribed bounds and converges to zero rapidly. The asymptotic stability of the closed-loop system's equilibrium point is assured by Lyapunov techniques and Barbalat's Lemma. Simulation results show that the proposed adaptive tracking control method is robust with respect to system parametric uncertainties and external disturbances.

  14. Pitfalls in Inversion and Interpretation of Continuous Resistivity Profiling Data: Effects of Resolution Limitations and Measurement Error

    NASA Astrophysics Data System (ADS)

    Lane, J. W.; Day-Lewis, F. D.; Loke, M. H.; White, E. A.

    2005-12-01

    Water-borne continuous resistivity profiling (CRP), also called marine or streaming resistivity, increasingly is used to support hydrogeophysical studies in freshwater and saltwater environments. CRP can provide resistivity tomograms for delineation of focused ground-water discharge, identification of sediment types, and mapping the near-shore freshwater/saltwater interface. Data collection, performed with a boat-towed electrode streamer, is commonly fast and relatively straightforward. In contrast, data processing and interpretation are potentially time consuming and subject to pitfalls. Data analysis is difficult due to the underdetermined nature of the tomographic inverse problem and the poorly understood resolution of tomograms, which is a function of the measurement physics, survey geometry, measurement error, and inverse problem parameterization and regularization. CRP data analysis in particular is complicated by noise in the data, sources of which include water leaking into the electrode cable, inefficient data collection geometry, and electrode obstruction by vegetation in the water column. Preliminary modeling has shown that, as in other types of geotomography, inversions of CRP data tend to overpredict the extent of and underpredict the magnitude of resistivity anomalies. Previous work also has shown that the water layer has a strong effect on the measured apparent resistivity values as it commonly has a much lower resistivity than the subsurface. Here we use synthetic examples and inverted field data sets to (1) assess the ability of CRP to resolve hydrogeophysical targets of interest for a range of water depths and salinities; and (2) examine the effects of CRP streamer noise on inverted resistivity sections. Our results show that inversion and interpretation of CRP data should be guided by hydrologic insight, available data for bathymetry and water layer resistivity, and a reliable model of measurement errors.

  15. Limiter

    DOEpatents

    Cohen, S.A.; Hosea, J.C.; Timberlake, J.R.

    1984-10-19

    A limiter with a specially contoured front face is provided. The front face of the limiter (the plasma-side face) is flat with a central indentation. In addition, the limiter shape is cylindrically symmetric so that the limiter can be rotated for greater heat distribution. This limiter shape accommodates the various power scrape-off distances λp, which depend on the parallel velocity, V∥, of the impacting particles.

  16. Limiter

    DOEpatents

    Cohen, Samuel A.; Hosea, Joel C.; Timberlake, John R.

    1986-01-01

    A limiter with a specially contoured front face accommodates the various power scrape-off distances λp, which depend on the parallel velocity, V∥, of the impacting particles. The front face of the limiter (the plasma-side face) is flat with a central indentation. In addition, the limiter shape is cylindrically symmetric so that the limiter can be rotated for greater heat distribution.

  17. Pilot of the brief behavioral activation treatment for depression in latinos with limited english proficiency: preliminary evaluation of efficacy and acceptability.

    PubMed

    Collado, Anahi; Castillo, Soraida D; Maero, Fabian; Lejuez, C W; Macpherson, Laura

    2014-01-01

    Latinos with limited English proficiency (LEP) experience multiple barriers to accessing efficacious mental health treatments. Using a stage model of behavior therapy research, this Stage I investigation evaluated the Brief Behavioral Activation Treatment for Depression (BATD), an intervention that may be well equipped to address existing treatment barriers. A sample of 10 Latinos with LEP and depressive symptomatology participated in a 10-session, direct (i.e., literal) Spanish-language translation of BATD, with no other cultural modifications. Participants were assessed at each session for depressive symptomatology and for the proposed BATD mechanisms: activity engagement and environmental reward. One month after treatment, participants were reassessed and interviewed to elicit feedback about BATD. Hierarchical linear model analyses were used to measure BATD outcomes. Results showed depressive symptomatology decreased (p<.001), while both activation (p=.04) and environmental reward (p=.02) increased over the course of BATD. Increases in activation corresponded concurrently with decreases in depression (p=.01), while environmental reward preceded decreases in depressive symptomatology (all p's ≤ .04). Follow-up analyses revealed sustained clinical gains in depression and activation, and an increase in environmental reward at follow-up. Participant interviews conducted 1 month after treatment conclusion indicated that BATD is an acceptable treatment for our sample of interest. Despite the limitations inherent in a study restricted to a sample of 10, preliminary outcomes of this Stage I research suggest that members of this otherwise underserved group showed improvements in depressive symptomatology and are willing to participate in and adhere to BATD. The study's positive outcomes suggest that a Stage II randomized clinical trial is a logical next step.

  19. Reduction of Maintenance Error Through Focused Interventions

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Rosekind, Mark R. (Technical Monitor)

    1997-01-01

    It is well known that a significant proportion of aviation accidents and incidents are tied to human error. In flight operations, research of operational errors has shown that so-called "pilot error" often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: to develop human factors interventions which are directly supported by reliable human error data, and to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  1. On the validity of the basis set superposition error and complete basis set limit extrapolations for the binding energy of the formic acid dimer.

    PubMed

    Miliordos, Evangelos; Xantheas, Sotiris S

    2015-03-01

    We report the variation of the binding energy of the Formic Acid Dimer with the size of the basis set at the Coupled Cluster with iterative Singles, Doubles and perturbatively connected Triple replacements [CCSD(T)] level of theory, estimate the Complete Basis Set (CBS) limit, and examine the validity of the Basis Set Superposition Error (BSSE)-correction for this quantity that was previously challenged by Kalescky, Kraka, and Cremer (KKC) [J. Chem. Phys. 140, 084315 (2014)]. Our results indicate that the BSSE correction, including terms that account for the substantial geometry change of the monomers due to the formation of two strong hydrogen bonds in the dimer, is indeed valid for obtaining accurate estimates for the binding energy of this system as it exhibits the expected decrease with increasing basis set size. We attribute the discrepancy between our current results and those of KKC to their use of a valence basis set in conjunction with the correlation of all electrons (i.e., including the 1s of C and O). We further show that the use of a core-valence set in conjunction with all electron correlation converges faster to the CBS limit, as the BSSE correction is less than half that of the valence-electron/valence-basis-set case. The uncorrected and BSSE-corrected binding energies were found to produce the same (within 0.1 kcal/mol) CBS limits. We obtain CCSD(T)/CBS best estimates for De = -16.1 ± 0.1 kcal/mol and for D0 = -14.3 ± 0.1 kcal/mol, the latter in excellent agreement with the experimental value of -14.22 ± 0.12 kcal/mol.
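
    As a reference for the "terms that account for the substantial geometry change of the monomers", a counterpoise-corrected binding energy with monomer relaxation can be written in the standard way (our notation, not the paper's): with E_S^B(G) the energy of system S in basis B at geometry G,

        \Delta E^{\mathrm{CP}} = E_{AB}^{AB}(AB) - E_{A}^{AB}(AB) - E_{B}^{AB}(AB)
            + \sum_{X=A,B} \left[ E_{X}^{X}(AB) - E_{X}^{X}(X) \right]

    where the bracketed terms are the deformation energies of the monomers between their in-dimer and fully relaxed geometries.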

  2. Uncertainty quantification and error analysis

    SciTech Connect

    Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  3. Evaluation by Monte Carlo simulations of the power limits and bit-error rate degradation in wavelength-division multiplexing networks caused by four-wave mixing.

    PubMed

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2004-09-10

    Fiber nonlinearities can degrade the performance of a wavelength-division multiplexing optical network. For high input power, a low chromatic dispersion coefficient, or low channel spacing, the most severe penalties are due to four-wave mixing (FWM). To compute the bit-error rate that is due to FWM noise, one must evaluate accurately the probability-density functions (pdf) of both the space and the mark states. An accurate evaluation of the pdf of the FWM noise in the space state is given, for the first time to the authors' knowledge, by use of Monte Carlo simulations. Additionally, it is shown that the pdf in the mark state is not symmetric as had been assumed in previous studies. Diagrams are presented that permit estimation of the pdf, given the number of channels in the system. The accuracy of the previous models is also investigated, and finally the results of this study are used to estimate the power limits of a wavelength-division multiplexing system. PMID:15468703
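
    A toy Monte Carlo sketch of the pdf-estimation idea: treat the aggregate FWM field in the space state as a sum of many components with random phases and histogram the resulting intensity. Component count, amplitude, and binning are illustrative, not the paper's system parameters.

        # Monte Carlo histogram of FWM-like intensity noise.
        import cmath
        import random

        N_COMPONENTS = 50      # interfering FWM products (assumed)
        AMPLITUDE = 0.02       # per-component field amplitude (assumed)
        TRIALS = 20_000

        intensities = []
        for _ in range(TRIALS):
            # Sum of equal-amplitude phasors with uniformly random phases.
            field = sum(AMPLITUDE * cmath.exp(1j * random.uniform(0, 2 * cmath.pi))
                        for _ in range(N_COMPONENTS))
            intensities.append(abs(field) ** 2)

        # Crude pdf estimate via a fixed-bin histogram.
        nbins, top = 40, max(intensities)
        hist = [0] * nbins
        for i in intensities:
            hist[min(nbins - 1, int(nbins * i / top))] += 1
        print([round(h / TRIALS, 4) for h in hist[:10]])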

  4. Evaluation by Monte Carlo simulations of the power limits and bit-error rate degradation in wavelength-division multiplexing networks caused by four-wave mixing.

    PubMed

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2004-09-10

    Fiber nonlinearities can degrade the performance of a wavelength-division multiplexing optical network. For high input power, a low chromatic dispersion coefficient, or low channel spacing, the most severe penalties are due to four-wave mixing (FWM). To compute the bit-error rate that is due to FWM noise, one must evaluate accurately the probability-density functions (pdf) of both the space and the mark states. An accurate evaluation of the pdf of the FWM noise in the space state is given, for the first time to the authors' knowledge, by use of Monte Carlo simulations. Additionally, it is shown that the pdf in the mark state is not symmetric as had been assumed in previous studies. Diagrams are presented that permit estimation of the pdf, given the number of channels in the system. The accuracy of the previous models is also investigated, and finally the results of this study are used to estimate the power limits of a wavelength-division multiplexing system.

  5. Randomized Trial of a Computerized Touch Screen Decision Aid to Increase Acceptance of Colonoscopy Screening in an African American Population with Limited Literacy.

    PubMed

    Ruzek, Sheryl B; Bass, Sarah Bauerle; Greener, Judith; Wolak, Caitlin; Gordon, Thomas F

    2016-10-01

    The goal of this study was to assess the effectiveness of a touch screen decision aid to increase acceptance of colonoscopy screening among African American patients with low literacy, developed and tailored using perceptual mapping methods grounded in Illness Self-Regulation and Information-Communication Theories. The pilot randomized controlled trial investigated the effects of a theory-based intervention on patients' acceptance of screening, including their perceptions of educational value, feelings about colonoscopy, likelihood to undergo screening, and decisional conflict about colonoscopy screening. Sixty-one African American patients with low literacy, aged 50-70 years, with no history of colonoscopy, were randomly assigned to receive a computerized touch screen decision aid (CDA; n = 33) or a literacy appropriate print tool (PT; n = 28) immediately before a primary care appointment in an urban, university-affiliated general internal medicine clinic. Patients rated the CDA significantly higher than the PT on all indicators of acceptance, including the helpfulness of the information for making a screening decision, and reported positive feelings about colonoscopy, greater likelihood to be screened, and lower decisional conflict. Results showed that a touch screen decision tool is acceptable to African American patients with low literacy and, by increasing intent to screen, may increase rates of colonoscopy screening.

  6. 13 CFR 124.504 - What circumstances limit SBA's ability to accept a procurement for award as an 8(a) contract?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...) procedures. (c) Adverse impact. SBA has made a written determination that acceptance of the procurement for 8(a) award would have an adverse impact on an individual small business, a group of small businesses located in a specific geographical location, or other small business programs. The adverse impact...

  8. 13 CFR 124.504 - What circumstances limit SBA's ability to accept a procurement for award as an 8(a) contract?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... requirement using appropriate competitive 8(a) procedures. (c) Adverse impact. SBA has made a written determination that acceptance of the procurement for 8(a) award would have an adverse impact on an individual... business programs. The adverse impact concept is designed to protect small business concerns which...

  9. The Insufficiency of Error Analysis

    ERIC Educational Resources Information Center

    Hammarberg, B.

    1974-01-01

    The position here is that error analysis is inadequate, particularly from the language-teaching point of view. Non-errors must be considered in specifying the learner's current command of the language, its limits, and his learning tasks. A cyclic procedure of elicitation and analysis, to secure evidence of errors and non-errors, is outlined.…

  10. Refractive Errors

    MedlinePlus

    ... and lens of your eye helps you focus. Refractive errors are vision problems that happen when the shape ... cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close up ...

  11. Limits of Expertise: Rethinking Pilot Error and the Causes of Airline Accidents. CRM/HF Conference, Held in Denver, Colorado on April 16-17, 2006

    NASA Technical Reports Server (NTRS)

    Dismukes, Key; Berman, Ben; Loukopoulos, Loukisa

    2007-01-01

    Reviewed NTSB reports of the 19 U.S. airline accidents between 1991 and 2000 attributed primarily to crew error. Asked: why might any airline crew in the situation of the accident crew, knowing only what they knew, be vulnerable? We can never know with certainty why an accident crew made specific errors, but we can determine why the population of pilots is vulnerable. Considers the variability of expert performance as a function of the interplay of multiple factors.

  12. Validation of stroke volume and cardiac output by electrical interrogation of the brachial artery in normals: assessment of strengths, limitations, and sources of error.

    PubMed

    Bernstein, Donald P; Henry, Isaac C; Lemmens, Harry J; Chaltas, Janell L; DeMaria, Anthony N; Moon, James B; Kahn, Andrew M

    2015-12-01

    The goal of this study is to validate a new, continuous, noninvasive stroke volume (SV) method, known as transbrachial electrical bioimpedance velocimetry (TBEV). TBEV SV was compared to SV obtained by cardiac magnetic resonance imaging (cMRI) in normal humans devoid of clinically apparent heart disease. Thirty-two (32) volunteers were enrolled in the study. Each subject was evaluated by echocardiography to assure that no aortic or mitral valve disease was present. Subsequently, each subject underwent electrical interrogation of the brachial artery by means of a high frequency, low amplitude alternating current. A first TBEV SV estimate was obtained. Immediately after the initial TBEV study, subjects underwent cMRI, using steady-state precession imaging to obtain a volumetric estimate of SV. Following cMRI, the TBEV SV study was repeated. For comparison with the cMRI-derived SV, the two TBEV estimates were averaged and compared against the cMRI standard. CO was computed as the product of SV and heart rate. Statistical methods consisted of Bland-Altman and linear regression analysis. TBEV SV and CO estimates were obtained in 30 of the 32 subjects enrolled. Bland-Altman analysis of pre- and post-cMRI TBEV SV showed a mean bias of 2.87% (2.05 mL), precision of 13.59% (11.99 mL) and 95% limits of agreement (LOA) of +29.51% (25.55 mL) and -23.77% (-21.45 mL). Regression analysis for pre- and post-cMRI TBEV SV values yielded y = 0.76x + 25.1 and r(2) = 0.71 (r = 0.84). Bland-Altman analysis comparing cMRI SV with averaged TBEV SV showed a mean bias of -1.56% (-1.53 mL), precision of 13.47% (12.84 mL), 95% LOA of +24.85% (+23.64 mL) and -27.97% (-26.7 mL) and percent error = 26.2%. For correlation analysis, the regression equation was y = 0.82x + 19.1 and correlation coefficient r(2) = 0.61 (r = 0.78). Bland-Altman analysis of averaged pre- and post-cMRI TBEV CO versus cMRI CO yielded a mean bias of 5.01% (0.32 L min(-1)), precision of 12.85% (0.77 L min(-1)), 95% LOA
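
    The Bland-Altman quantities reported above are straightforward to compute; a minimal sketch with invented SV values follows (the paper reports bias and precision as percentages as well; here they are shown in mL).

        # Bland-Altman summary: bias, precision (SD of differences), 95% LOA.
        import statistics

        cmri = [72.0, 85.0, 64.0, 90.0, 78.0]   # reference SV, mL (hypothetical)
        tbev = [70.5, 88.0, 61.0, 93.5, 80.0]   # test SV, mL (hypothetical)

        diffs = [t - r for t, r in zip(tbev, cmri)]
        bias = statistics.mean(diffs)
        sd = statistics.stdev(diffs)
        loa = (bias - 1.96 * sd, bias + 1.96 * sd)
        # Critchley-style percent error relative to the reference mean.
        pct_error = 1.96 * sd / statistics.mean(cmri) * 100

        print(f"bias {bias:.2f} mL, precision {sd:.2f} mL")
        print(f"95% LOA {loa[0]:.2f} to {loa[1]:.2f} mL, percent error {pct_error:.1f}%")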

  13. Enhanced notification of infusion pump programming errors.

    PubMed

    Evans, R Scott; Carlson, Rick; Johnson, Kyle V; Palmer, Brent K; Lloyd, James F

    2010-01-01

    Hospitalized patients receive countless doses of medications through manually programmed infusion pumps. Many medication errors are the result of programming incorrect pump settings. When used appropriately, smart pumps have the potential to detect some programming errors. However, based on the current use of smart pumps, there are conflicting reports on their ability to prevent patient harm without additional capabilities and interfaces to electronic medical records (EMR). We developed a smart system, connected to the EMR including medication charting, that can detect and alert on potential pump programming errors. Acceptable programming limits for dose-rate increases, as well as for initial drug doses, are monitored for 23 high-risk medications. During 22.5 months in a 24-bed ICU, 970 alerts (4% of 25,040 doses; 1.4 alerts per day) were generated for pump settings programmed outside acceptable limits, of which 137 (14%) were found to have prevented potential harm. Monitoring pump programming at the system level rather than at the pump provides access to additional patient data in the EMR, including previous dosage levels, other concurrent medications and caloric intake, age, gender, vitals and laboratory results.
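
    A minimal sketch of the system-level check described: a programmed dose rate and its step-up from the previous rate are compared against per-drug limits. Drug names and limit values are illustrative, not the hospital's actual table.

        # Check a programmed rate against per-drug absolute and step-up limits.
        LIMITS = {
            # drug: (max dose rate [units/h], max fractional increase per change)
            "heparin": (2500.0, 0.25),
            "insulin": (20.0, 0.50),
        }

        def check_program(drug, new_rate, previous_rate=None):
            max_rate, max_step = LIMITS[drug]
            alerts = []
            if new_rate > max_rate:
                alerts.append(f"{drug}: rate {new_rate} exceeds limit {max_rate}")
            if previous_rate and new_rate > previous_rate * (1 + max_step):
                alerts.append(f"{drug}: increase from {previous_rate} exceeds "
                              f"{int(max_step * 100)}% step limit")
            return alerts

        print(check_program("heparin", 3000.0))                   # rate too high
        print(check_program("insulin", 12.0, previous_rate=6.0))  # step too large
        print(check_program("insulin", 7.0, previous_rate=6.0))   # no alert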

  14. The Error in Total Error Reduction

    PubMed Central

    Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.

    2013-01-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modelling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. PMID:23891930
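
    A toy Python contrast between the two update rules: under TER (Rescorla-Wagner style) two cues trained in compound share one summed prediction error, whereas under LER each cue reduces its own discrepancy from the outcome. Learning rate and trial count are arbitrary.

        # TER vs LER weight updates for two cues A and B trained in compound.
        LR, OUTCOME, TRIALS = 0.1, 1.0, 50

        # TER: both cues share one prediction error for the compound.
        vA = vB = 0.0
        for _ in range(TRIALS):
            error = OUTCOME - (vA + vB)    # total (summed) error
            vA += LR * error
            vB += LR * error
        print(f"TER weights: A={vA:.2f}, B={vB:.2f}")   # weights sum to ~1.0

        # LER: each cue reduces its own cue-specific error.
        wA = wB = 0.0
        for _ in range(TRIALS):
            wA += LR * (OUTCOME - wA)
            wB += LR * (OUTCOME - wB)
        print(f"LER weights: A={wA:.2f}, B={wB:.2f}")   # each approaches 1.0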

  15. 13 CFR 124.504 - What circumstances limit SBA's ability to accept a procurement for award as an 8(a) contract?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Assistance SMALL BUSINESS ADMINISTRATION 8(a) BUSINESS DEVELOPMENT/SMALL DISADVANTAGED BUSINESS STATUS DETERMINATIONS 8(a) Business Development Contractual Assistance § 124.504 What circumstances limit SBA's ability... requirement to the concern's business development needs against the business development needs of...

  16. Medication Errors

    MedlinePlus

    ... to reduce the risk of medication errors to industry and others at FDA. Additionally, DMEPA prospectively reviews ... List of Abbreviations Regulations and Guidances Guidance for Industry: Safety Considerations for Product Design to Minimize Medication ...

  17. Medication Errors

    MedlinePlus

    Medicines cure infectious diseases, prevent problems from chronic diseases, and ease pain. But medicines can also cause harmful reactions if not used ... You can help prevent errors by Knowing your medicines. Keep a list of the names of your ...

  18. Preventing medication errors in cancer chemotherapy.

    PubMed

    Cohen, M R; Anderson, R W; Attilio, R M; Green, L; Muller, R J; Pruemer, J M

    1996-04-01

    Recommendations for preventing medication errors in cancer chemotherapy are made. Before a health care provider is granted privileges to prescribe, dispense, or administer antineoplastic agents, he or she should undergo a tailored educational program and possibly testing or certification. Appropriate reference materials should be developed. Each institution should develop a dose-verification process with as many independent checks as possible. A detailed checklist covering prescribing, transcribing, dispensing, and administration should be used. Oral orders are not acceptable. All doses should be calculated independently by the physician, the pharmacist, and the nurse. Dosage limits should be established and a review process set up for doses that exceed the limits. These limits should be entered into pharmacy computer systems, listed on preprinted order forms, stated on the product packaging, placed in strategic locations in the institution, and communicated to employees. The prescribing vocabulary must be standardized. Acronyms, abbreviations, and brand names must be avoided and steps taken to avoid other sources of confusion in the written orders, such as trailing zeros. Preprinted antineoplastic drug order forms containing checklists can help avoid errors. Manufacturers should be encouraged to avoid or eliminate ambiguities in drug names and dosing information. Patients must be educated about all aspects of their cancer chemotherapy, as patients represent a last line of defense against errors. An interdisciplinary team at each practice site should review every medication error reported. Pharmacists should be involved at all sites where antineoplastic agents are dispensed. Although it may not be possible to eliminate all medication errors in cancer chemotherapy, the risk can be minimized through specific steps. Because of their training and experience, pharmacists should take the lead in this effort. PMID:8697025

  19. Quasi-analytical determination of noise-induced error limits in lidar retrieval of aerosol backscatter coefficient by the elastic, two-component algorithm.

    PubMed

    Sicard, Michaël; Comerón, Adolfo; Rocadenbosch, Francisco; Rodríguez, Alejandro; Muñoz, Constantino

    2009-01-10

    The elastic, two-component algorithm is the most common inversion method for retrieving the aerosol backscatter coefficient from ground- or space-based backscatter lidar systems. A quasi-analytical formulation of the statistical error associated with the aerosol backscatter coefficient caused by the use of real, noise-corrupted lidar signals in the two-component algorithm is presented. The error expression depends on the signal-to-noise ratio along the inversion path and takes into account "instantaneous" effects, the effect of the signal-to-noise ratio at the range where the aerosol backscatter coefficient is being computed, as well as "memory" effects, namely, both the effect of the signal-to-noise ratio in the cell where the inversion is started and the cumulative effect of the noise between that cell and the actual cell where the aerosol backscatter coefficient is evaluated. An example is shown to illustrate how the "instantaneous" effect is reduced when averaging the noise-contaminated signal over a number of cells around the range where the inversion is started.
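
    A toy numeric illustration, not the paper's quasi-analytical formulas, of why averaging the signal over N cells around the inversion start range suppresses that cell's noise contribution by roughly 1/sqrt(N):

```python
import numpy as np

rng = np.random.default_rng(0)
true_signal, sigma, n_cells = 100.0, 5.0, 16

# Single-cell estimate vs. an average over n_cells neighboring range cells:
single = true_signal + rng.normal(0, sigma)
averaged = np.mean(true_signal + rng.normal(0, sigma, n_cells))

# The standard error of the averaged start value shrinks ~ sigma/sqrt(N),
# damping the "memory" of starting-cell noise carried through the inversion.
print(f"single-cell error: {abs(single - true_signal):.2f}")
print(f"averaged error:    {abs(averaged - true_signal):.2f}  (expected sd {sigma/np.sqrt(n_cells):.2f})")
```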

  20. Error compensation for thermally induced errors on a machine tool

    SciTech Connect

    Krulewich, D.A.

    1996-11-08

    Heat flow from internal and external sources and the environment creates machine deformations, resulting in positioning errors between the tool and workpiece. There is no industrially accepted method for thermal error compensation. A simple model has been selected that linearly relates discrete temperature measurements to the deflection. The main difficulty is determining how many temperature sensors are required and where to locate them. This research develops a method to determine the number and location of temperature measurements.
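
    A sketch of the kind of linear compensation model described, assuming synthetic temperature and deflection data; the coefficients and sensor count are illustrative:

```python
import numpy as np

# Synthetic example: deflection assumed linear in a few discrete
# temperature readings (sensor count/placement is the open question).
rng = np.random.default_rng(1)
T = rng.uniform(20, 45, size=(50, 3))                   # 50 samples, 3 sensors (deg C)
true_coeffs = np.array([2.0, -0.5, 1.2])                # um per deg C (made up)
deflection = T @ true_coeffs + rng.normal(0, 1.0, 50)   # measured error (um)

# Fit the linear compensation model by least squares; a constant column
# is appended so an offset term is estimated too.
A = np.hstack([T, np.ones((50, 1))])
coeffs, *_ = np.linalg.lstsq(A, deflection, rcond=None)
predicted = A @ coeffs                                  # compensation to subtract
print("fitted coefficients:", np.round(coeffs, 2))
print("residual RMS after compensation:", np.round(np.std(deflection - predicted), 2), "um")
```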

  1. TU-C-BRE-08: IMRT QA: Selecting Meaningful Gamma Criteria Based On Error Detection Sensitivity

    SciTech Connect

    Steers, J; Fraass, B

    2014-06-15

    Purpose: To develop a strategy for defining meaningful tolerance limits and studying the sensitivity of IMRT QA gamma criteria by inducing known errors in QA plans. Methods: IMRT QA measurements (ArcCHECK, Sun Nuclear) were compared to QA plan calculations with induced errors. Many (>24) gamma comparisons between data and calculations were performed for each of several kinds of cases and classes of induced error types with varying magnitudes (e.g. MU errors ranging from -10% to +10%), resulting in over 3,000 comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using various gamma criteria. Results: This study demonstrates that random, case-specific, and systematic errors can be detected by the error curve analysis. Depending on location of the peak of the error curve (e.g., not centered about zero), 3%/3mm threshold=10% criteria may miss MU errors of up to 10% and random MLC errors of up to 5 mm. Additionally, using larger dose thresholds for specific devices may increase error sensitivity (for the same X%/Ymm criteria) by up to a factor of two. This analysis will allow clinics to select more meaningful gamma criteria based on QA device, treatment techniques, and acceptable error tolerances. Conclusion: We propose a strategy for selecting gamma parameters based on the sensitivity of gamma criteria and individual QA devices to induced calculation errors in QA plans. Our data suggest large errors may be missed using conventional gamma criteria and that using stricter criteria with an increased dose threshold may reduce the range of missed errors. This approach allows quantification of gamma criteria sensitivity and is straightforward to apply to other combinations of devices and treatment techniques.
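
    A simplified one-dimensional global gamma evaluation illustrating how an error curve is built by scanning induced error magnitudes; the dose profile, criteria, and induced MU errors are illustrative, and real QA devices implement gamma over 2D/3D measurements:

```python
import numpy as np

def gamma_pass_rate(ref, meas, positions, dose_pct=3.0, dta_mm=3.0, threshold_pct=10.0):
    """Simplified 1D global gamma: for each reference point above the dose
    threshold, take min over measured points of sqrt((dD/3%)^2 + (dx/3mm)^2)."""
    d_crit = dose_pct / 100.0 * ref.max()
    keep = ref >= threshold_pct / 100.0 * ref.max()   # dose threshold
    gammas = []
    for i in np.where(keep)[0]:
        dd = (meas - ref[i]) / d_crit
        dx = (positions - positions[i]) / dta_mm
        gammas.append(np.min(np.sqrt(dd**2 + dx**2)))
    return 100.0 * np.mean(np.array(gammas) <= 1.0)

x = np.linspace(-20, 20, 201)                          # mm
reference = np.exp(-x**2 / 50.0)                       # toy dose profile
for mu_err in (0.0, 0.03, 0.10):                       # induced MU-scaling errors
    rate = gamma_pass_rate(reference, reference * (1 + mu_err), x)
    print(f"MU error {mu_err:+.0%}: pass rate {rate:.1f}%")
```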

  2. Thermodynamics of Error Correction

    NASA Astrophysics Data System (ADS)

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  3. Development of an iterative reconstruction method to overcome 2D detector low resolution limitations in MLC leaf position error detection for 3D dose verification in IMRT.

    PubMed

    Visser, R; Godart, J; Wauben, D J L; Langendijk, J A; Van't Veld, A A; Korevaar, E W

    2016-05-21

    The objective of this study was to introduce a new iterative method to reconstruct multileaf collimator (MLC) positions based on low-resolution ionization detector array measurements and to evaluate its error detection performance. The iterative reconstruction method consists of a fluence model, a detector model and an optimizer. Expected detector response was calculated using a radiotherapy treatment plan in combination with the fluence model and detector model. MLC leaf positions were reconstructed by minimizing differences between expected and measured detector response. The iterative reconstruction method was evaluated for an Elekta SLi with 10.0 mm MLC leaves in combination with the COMPASS system and the MatriXX Evolution (IBA Dosimetry) detector with a spacing of 7.62 mm. The detector was positioned in such a way that each leaf pair of the MLC was aligned with one row of ionization chambers. Known leaf displacements were introduced in various field geometries ranging from -10.0 mm to 10.0 mm. Error detection performance was tested for MLC leaf position dependency relative to the detector position, gantry angle dependency, monitor unit dependency, and for ten clinical intensity modulated radiotherapy (IMRT) treatment beams. For one clinical head and neck IMRT treatment beam, influence of the iterative reconstruction method on existing 3D dose reconstruction artifacts was evaluated. The described iterative reconstruction method was capable of individual MLC leaf position reconstruction with millimeter accuracy, independent of the relative detector position within the range of clinically applied MUs for IMRT. Dose reconstruction artifacts in a clinical IMRT treatment beam were considerably reduced as compared to the current dose verification procedure. The iterative reconstruction method allows high accuracy 3D dose verification by including actual MLC leaf positions reconstructed from low-resolution 2D measurements. PMID:27100169
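
    A hypothetical one-dimensional sketch of the reconstruct-by-optimization idea, assuming a smooth toy fluence/detector model and scipy's least_squares as the optimizer; this is not the authors' COMPASS implementation, and all geometry is illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

x = np.linspace(-50.0, 50.0, 1001)         # fine fluence grid (mm)

def detector_response(leaf_left, leaf_right):
    """Toy model: fluence is open between the two leaf tips with a smooth
    sigmoid penumbra, then sampled at a coarse ~7.6 mm detector pitch."""
    fluence = (1.0 / (1.0 + np.exp(-(x - leaf_left) / 2.0))
               * 1.0 / (1.0 + np.exp((x - leaf_right) / 2.0)))
    return fluence[::76]

measured = detector_response(-14.0, 21.0)   # "true" (displaced) leaf positions

# Reconstruct the tips by minimizing expected-minus-measured response,
# starting from the planned positions:
fit = least_squares(lambda p: detector_response(p[0], p[1]) - measured,
                    x0=[-10.0, 10.0])
print("reconstructed leaf tips (mm):", np.round(fit.x, 2))
```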

  4. Development of an iterative reconstruction method to overcome 2D detector low resolution limitations in MLC leaf position error detection for 3D dose verification in IMRT

    NASA Astrophysics Data System (ADS)

    Visser, R.; Godart, J.; Wauben, D. J. L.; Langendijk, J. A.; van't Veld, A. A.; Korevaar, E. W.

    2016-05-01

    The objective of this study was to introduce a new iterative method to reconstruct multileaf collimator (MLC) positions based on low-resolution ionization detector array measurements and to evaluate its error detection performance. The iterative reconstruction method consists of a fluence model, a detector model and an optimizer. Expected detector response was calculated using a radiotherapy treatment plan in combination with the fluence model and detector model. MLC leaf positions were reconstructed by minimizing differences between expected and measured detector response. The iterative reconstruction method was evaluated for an Elekta SLi with 10.0 mm MLC leaves in combination with the COMPASS system and the MatriXX Evolution (IBA Dosimetry) detector with a spacing of 7.62 mm. The detector was positioned in such a way that each leaf pair of the MLC was aligned with one row of ionization chambers. Known leaf displacements were introduced in various field geometries ranging from -10.0 mm to 10.0 mm. Error detection performance was tested for MLC leaf position dependency relative to the detector position, gantry angle dependency, monitor unit dependency, and for ten clinical intensity modulated radiotherapy (IMRT) treatment beams. For one clinical head and neck IMRT treatment beam, influence of the iterative reconstruction method on existing 3D dose reconstruction artifacts was evaluated. The described iterative reconstruction method was capable of individual MLC leaf position reconstruction with millimeter accuracy, independent of the relative detector position within the range of clinically applied MUs for IMRT. Dose reconstruction artifacts in a clinical IMRT treatment beam were considerably reduced as compared to the current dose verification procedure. The iterative reconstruction method allows high accuracy 3D dose verification by including actual MLC leaf positions reconstructed from low-resolution 2D measurements.

  6. Uncorrected refractive errors.

    PubMed

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of these, 670 million are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship. PMID:22944755

  7. Ultimate limits to error probabilities for ionospheric models based on solar geophysical indices and how these compare with the state of the art

    NASA Technical Reports Server (NTRS)

    Nisbet, J. S.; Stehle, C. G.

    1981-01-01

    An ideal model based on a given set of geophysical indices is defined as a model that provides a least squares fit to the data set as a function of the indices considered. Satellite measurements of electron content for three stations at different magnetic latitudes were used to provide such data sets which were each fitted to the geophysical indices. The magnitude of the difference between the measured value and the derived equation for the data set was used to estimate the probability of making an error greater than a given magnitude for such an ideal model. Atmospheric Explorer C data is used to examine the causes of the fluctuations and suggestions are made about how real improvements can be made in ionospheric forecasting ability. Joule heating inputs in the auroral electrojets are related to the AL and AU magnetic indices. Magnetic indices based on the time integral of the energy deposited in the electrojets are proposed for modeling processes affected by auroral zone heating.
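
    A sketch of the "ideal model" construction, assuming synthetic data: fit the observable to the indices by least squares, then read the probability of exceeding a given error magnitude off the empirical residual distribution:

```python
import numpy as np

# Synthetic stand-in: electron content driven by two geophysical indices.
rng = np.random.default_rng(2)
n = 500
indices = rng.normal(size=(n, 2))                     # e.g. solar-flux / magnetic style drivers
tec = 3.0 + 1.5 * indices[:, 0] + 0.4 * indices[:, 1] + rng.normal(0, 0.8, n)

# "Ideal" model = least-squares fit of the data set to the indices.
A = np.hstack([np.ones((n, 1)), indices])
coef, *_ = np.linalg.lstsq(A, tec, rcond=None)
residuals = tec - A @ coef

# Empirical probability that even the ideal model errs by more than a threshold:
for thresh in (1.0, 2.0):
    print(f"P(|error| > {thresh}) ~= {np.mean(np.abs(residuals) > thresh):.3f}")
```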

  8. Final Environmental Assessment and Finding of No Significant Impact: The Implementation of the Authorized Limits Process for Waste Acceptance at the C-746-U Landfill Paducah Gaseous Diffusion Plant Paducah, Kentucky

    SciTech Connect

    N /A

    2002-08-06

    The US Department of Energy (DOE) has completed an environmental assessment (DOE/EA-1414) for the proposed implementation of the authorized limits process for waste acceptance at the C-746-U Landfill at the Paducah Gaseous Diffusion Plant (PGDP) in Paducah, Kentucky. Based on the results of the impact analysis reported in the EA, which is incorporated herein by this reference, DOE has determined that the proposed action is not a major Federal action that would significantly affect the quality of the human environment within the context of the ''National Environmental Policy Act of 1969'' (NEPA). Therefore preparation of an environmental impact statement is not necessary, and DOE is issuing this Finding of No Significant Impact (FONSI).

  9. On the validity of the basis set superposition error and complete basis set limit extrapolations for the binding energy of the formic acid dimer

    SciTech Connect

    Miliordos, Evangelos; Xantheas, Sotiris S.

    2015-03-07

    We report the variation of the binding energy of the formic acid dimer at the CCSD(T)/complete basis set (CBS) limit and examine the validity of the BSSE correction, previously challenged by Kalescky, Kraka and Cremer [J. Chem. Phys. 140 (2014) 084315]. Our best estimate of D0=14.3±0.1 kcal/mol is in excellent agreement with the experimental value of 14.22±0.12 kcal/mol. The BSSE correction is indeed valid for this system since it exhibits the expected behavior of decreasing with increasing basis set size and its inclusion produces the same limit (within 0.1 kcal/mol) as the one obtained from extrapolation of the uncorrected binding energy. This work was supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences and Biosciences. Pacific Northwest National Laboratory (PNNL) is a multiprogram national laboratory operated for DOE by Battelle. A portion of this research was performed using the Molecular Science Computing Facility (MSCF) in EMSL, a national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research and located at PNNL.
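
    For readers unfamiliar with CBS extrapolation, a common two-point inverse-cubic scheme is sketched below; the energies are made up, and the paper's extrapolation protocol may differ in detail:

```python
def cbs_two_point(e_x: float, x: int, e_y: float, y: int) -> float:
    """Two-point inverse-cubic extrapolation, E(X) = E_CBS + A * X**-3,
    commonly applied to correlation energies with cc-pVXZ basis sets."""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# Illustrative (made-up) correlation energies for triple- and quadruple-zeta:
print(f"E_CBS ~= {cbs_two_point(-0.5210, 3, -0.5345, 4):.4f} hartree")
```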

  10. Dialogues on prediction errors.

    PubMed

    Niv, Yael; Schoenbaum, Geoffrey

    2008-07-01

    The recognition that computational ideas from reinforcement learning are relevant to the study of neural circuits has taken the cognitive neuroscience community by storm. A central tenet of these models is that discrepancies between actual and expected outcomes can be used for learning. Neural correlates of such prediction-error signals have been observed now in midbrain dopaminergic neurons, striatum, amygdala and even prefrontal cortex, and models incorporating prediction errors have been invoked to explain complex phenomena such as the transition from goal-directed to habitual behavior. Yet, like any revolution, the fast-paced progress has left an uneven understanding in its wake. Here, we provide answers to ten simple questions about prediction errors, with the aim of exposing both the strengths and the limitations of this active area of neuroscience research.
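
    A minimal temporal-difference learning loop showing the prediction-error signal these models are built on; the states, reward placement, and rates are illustrative:

```python
import numpy as np

# Minimal TD(0) loop on a 5-state ring: the prediction error
# delta = r + gamma * V(s') - V(s) drives every value update.
n_states, gamma, alpha = 5, 0.9, 0.1
V = np.zeros(n_states)
rng = np.random.default_rng(3)

for _ in range(2000):
    s = rng.integers(n_states)
    s_next = (s + 1) % n_states
    r = 1.0 if s_next == 0 else 0.0          # reward on returning to state 0
    delta = r + gamma * V[s_next] - V[s]     # the prediction-error signal
    V[s] += alpha * delta

print(np.round(V, 2))   # values ramp up as states approach the reward
```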

  11. IWGT report on quantitative approaches to genotoxicity risk assessment II. Use of point-of-departure (PoD) metrics in defining acceptable exposure limits and assessing human risk.

    PubMed

    MacGregor, James T; Frötschl, Roland; White, Paul A; Crump, Kenny S; Eastmond, David A; Fukushima, Shoji; Guérard, Melanie; Hayashi, Makoto; Soeteman-Hernández, Lya G; Johnson, George E; Kasamatsu, Toshio; Levy, Dan D; Morita, Takeshi; Müller, Lutz; Schoeny, Rita; Schuler, Maik J; Thybaud, Véronique

    2015-05-01

    This is the second of two reports from the International Workshops on Genotoxicity Testing (IWGT) Working Group on Quantitative Approaches to Genetic Toxicology Risk Assessment (the QWG). The first report summarized the discussions and recommendations of the QWG related to the need for quantitative dose-response analysis of genetic toxicology data, the existence and appropriate evaluation of threshold responses, and methods to analyze exposure-response relationships and derive points of departure (PoDs) from which acceptable exposure levels could be determined. This report summarizes the QWG discussions and recommendations regarding appropriate approaches to evaluate exposure-related risks of genotoxic damage, including extrapolation below identified PoDs and across test systems and species. Recommendations include the selection of appropriate genetic endpoints and target tissues, uncertainty factors and extrapolation methods to be considered, the importance and use of information on mode of action, toxicokinetics, metabolism, and exposure biomarkers when using quantitative exposure-response data to determine acceptable exposure levels in human populations or to assess the risk associated with known or anticipated exposures. The empirical relationship between genetic damage (mutation and chromosomal aberration) and cancer in animal models was also examined. It was concluded that there is a general correlation between cancer induction and mutagenic and/or clastogenic damage for agents thought to act via a genotoxic mechanism, but that the correlation is limited due to an inadequate number of cases in which mutation and cancer can be compared at a sufficient number of doses in the same target tissues of the same species and strain exposed under directly comparable routes and experimental protocols.

  12. Improved Error Thresholds for Measurement-Free Error Correction

    NASA Astrophysics Data System (ADS)

    Crow, Daniel; Joynt, Robert; Saffman, M.

    2016-09-01

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10⁻³ to 10⁻⁴, comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.

  13. Rectifying calibration error of Goldmann applanation tonometer is easy!

    PubMed

    Choudhari, Nikhil S; Moorthy, Krishna P; Tungikar, Vinod B; Kumar, Mohan; George, Ronnie; Rao, Harsha L; Senthil, Sirisha; Vijaya, Lingam; Garudadri, Chandra Sekhar

    2014-11-01

    Purpose: Goldmann applanation tonometer (GAT) is the current gold standard tonometer. However, its calibration error is common and can go unnoticed in clinics. Repair by the manufacturer has limitations. The purpose of this report is to describe a self-taught technique for rectifying the calibration error of GAT. Materials and Methods: Twenty-nine slit-lamp-mounted Haag-Streit Goldmann tonometers (Model AT 900 C/M; Haag-Streit, Switzerland) were included in this cross-sectional interventional pilot study. The technique for rectifying the calibration error of the tonometer involved cleaning and lubrication of the instrument, followed by alignment of the weights when lubrication alone did not suffice. We followed the South East Asia Glaucoma Interest Group's definition of calibration error tolerance (acceptable GAT calibration error within ±2, ±3 and ±4 mm Hg at the 0, 20 and 60-mm Hg testing levels, respectively). Results: Twelve of 29 (41.3%) GATs were out of calibration. The range of positive and negative calibration error at the clinically most important 20-mm Hg testing level was 0.5 to 20 mm Hg and -0.5 to -18 mm Hg, respectively. Cleaning and lubrication alone sufficed to rectify the calibration error of 11 (91.6%) faulty instruments. Only one (8.3%) faulty GAT required alignment of the counterweight. Conclusions: Rectification of calibration error of GAT is possible in-house. Cleaning and lubrication of GAT can be carried out even by eye care professionals and may suffice to rectify calibration error in the majority of faulty instruments. Such an exercise may drastically reduce the downtime of the gold standard tonometer.
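
    The tolerance definition used in the study translates directly into a check; the function below encodes the stated ±2/±3/±4 mm Hg limits at the 0, 20 and 60 mm Hg test levels:

```python
# SEAGIG-style calibration-error tolerance used in the study:
# acceptable error of +/-2, +/-3 and +/-4 mm Hg at the
# 0, 20 and 60 mm Hg test levels respectively.
TOLERANCE = {0: 2.0, 20: 3.0, 60: 4.0}

def within_tolerance(level_mmhg: int, reading_mmhg: float) -> bool:
    """True if the tonometer reading at a test level is acceptable."""
    return abs(reading_mmhg - level_mmhg) <= TOLERANCE[level_mmhg]

print(within_tolerance(20, 22.5))   # True: error +2.5 within +/-3
print(within_tolerance(20, 24.0))   # False: error +4.0 exceeds +/-3
```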

  14. 42 CFR 431.960 - Types of payment errors.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... payment errors. (c) Medical review errors. (1) A medical review error is an error resulting in an...) Medical review errors include, but are not limited to the following: (i) Lack of documentation. (ii... 42 Public Health 4 2014-10-01 2014-10-01 false Types of payment errors. 431.960 Section...

  15. 42 CFR 431.960 - Types of payment errors.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... payment errors. (c) Medical review errors. (1) A medical review error is an error resulting in an...) Medical review errors include, but are not limited to the following: (i) Lack of documentation. (ii... 42 Public Health 4 2013-10-01 2013-10-01 false Types of payment errors. 431.960 Section...

  16. 42 CFR 431.960 - Types of payment errors.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... payment errors. (c) Medical review errors. (1) A medical review error is an error resulting in an...) Medical review errors include, but are not limited to the following: (i) Lack of documentation. (ii... 42 Public Health 4 2012-10-01 2012-10-01 false Types of payment errors. 431.960 Section...

  17. 42 CFR 431.960 - Types of payment errors.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... payment errors. (c) Medical review errors. (1) A medical review error is an error resulting in an...) Medical review errors include, but are not limited to the following: (i) Lack of documentation. (ii... 42 Public Health 4 2011-10-01 2011-10-01 false Types of payment errors. 431.960 Section...

  18. Error and its meaning in forensic science.

    PubMed

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes.

  19. Definition of the limit of quantification in the presence of instrumental and non-instrumental errors. Comparison among various definitions applied to the calibration of zinc by inductively coupled plasma-mass spectrometry

    NASA Astrophysics Data System (ADS)

    Badocco, Denis; Lavagnini, Irma; Mondin, Andrea; Favaro, Gabriella; Pastore, Paolo

    2015-12-01

    A definition of the limit of quantification (LOQ) in the presence of instrumental and non-instrumental errors is proposed. It was derived theoretically by combining the two-component variance regression and LOQ schemas already present in the literature, and applied to the calibration of zinc by the ICP-MS technique. At low concentration levels, the two-component variance LOQ definition should always be used, especially when a clean room is not available. Three LOQ definitions were considered: one in the concentration domain and two in the signal domain. The LOQ computed in the concentration domain, proposed by Currie, was completed by adding the third-order terms in the Taylor expansion, because they are of the same order of magnitude as the second-order terms and therefore cannot be neglected. In this context, the error propagation was simplified by eliminating the correlation contributions through the use of independent random variables. Among the signal-domain definitions, particular attention was devoted to the recently proposed approach based on requiring at least one significant digit in the measurement; the resulting LOQ values were very large, precluding quantitative analysis. The Currie schemas in the signal and concentration domains gave similar LOQ values, but the former formulation is to be preferred as it is more easily computed.
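
    A hedged numeric sketch of a two-component-variance LOQ, assuming a noise model s(c) = sqrt(s0^2 + (s1 * signal)^2) and a 10% relative-uncertainty criterion; the parameters are illustrative and the paper's exact schemas differ:

```python
import numpy as np

# Two-component variance model for the signal noise: a constant floor s0
# plus a term proportional to the signal (values are illustrative).
s0, s1, slope = 50.0, 0.02, 1.0e4      # counts, relative, counts per (ng/mL)

def rel_uncertainty(c):
    """Relative standard uncertainty of the estimated concentration,
    propagating the two-component signal noise through the slope."""
    s_signal = np.sqrt(s0**2 + (s1 * slope * c)**2)
    return (s_signal / slope) / c

# Currie-style criterion: LOQ = lowest concentration quantifiable with
# <= 10% relative uncertainty; solve numerically on a grid.
grid = np.linspace(1e-4, 1.0, 100000)
loq = grid[np.argmax(rel_uncertainty(grid) <= 0.10)]
print(f"LOQ ~= {loq:.3f} ng/mL")
```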

  1. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-01

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors. PMID:26649954

  2. Manson's triple error.

    PubMed

    Delaporte, F

    2008-09-01

    The author discusses the significance, implications and limitations of Manson's work. How did Patrick Manson resolve some of the major problems raised by the filarial worm life cycle? The Amoy physician showed that circulating embryos could only leave the blood via the percutaneous route, thereby requiring a bloodsucking insect. The discovery of a new autonomous, airborne, active host undoubtedly had a considerable impact on the history of parasitology, but the way in which Manson formulated and solved the problem of the transfer of filarial worms from the body of the mosquito to man resulted in failure. This article shows how the epistemological transformation operated by Manson was indissociably related to a series of errors and how a major breakthrough can be the result of a series of false proposals and, consequently, that the history of truth often involves a history of error. PMID:18814729

  3. Sun compass error model

    NASA Technical Reports Server (NTRS)

    Blucker, T. J.; Ferry, W. W.

    1971-01-01

    An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.

  4. Offer/Acceptance Ratio.

    ERIC Educational Resources Information Center

    Collins, Mimi

    1997-01-01

    Explores how human resource professionals, with above average offer/acceptance ratios, streamline their recruitment efforts. Profiles company strategies with internships, internal promotion, cooperative education programs, and how to get candidates to accept offers. Also discusses how to use the offer/acceptance ratio as a measure of program…

  5. Unforced errors and error reduction in tennis

    PubMed Central

    Brody, H

    2006-01-01

    Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors. PMID:16632568

  6. Error in radiology.

    PubMed

    Goddard, P; Leslie, A; Jones, A; Wakeley, C; Kabala, J

    2001-10-01

    The level of error in radiology has been tabulated from articles on error and on "double reporting" or "double reading". The level of error varies depending on the radiological investigation, but the range is 2-20% for clinically significant or major error. The greatest reduction in error rates will come from changes in systems.

  7. Error growth in operational ECMWF forecasts

    NASA Technical Reports Server (NTRS)

    Kalnay, E.; Dalcher, A.

    1985-01-01

    A parameterization scheme used at the European Centre for Medium-Range Weather Forecasts (ECMWF) to model the average growth of the difference between forecasts on consecutive days was extended to include the effect of forecast model deficiencies on error growth. Error was defined as the difference between the forecast and analysis fields at verification time. Systematic and random errors were considered separately in calculating the error variance for a 10 day operational forecast. A good fit was obtained to measured forecast errors, and a satisfactory trend was achieved in the difference between forecasts. Fitting six parameters to forecast errors and differences separately for each wavenumber revealed that the error growth rate increases with wavenumber. The saturation error decreased with the total wavenumber, and the limit of predictability, i.e., the time at which error variance reaches 95 percent of saturation, decreased monotonically with the total wavenumber.
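
    A sketch of the style of error-growth parameterization used in this line of work, assuming logistic growth plus a model-deficiency source term; the coefficients are illustrative, not the fitted ECMWF values:

```python
import numpy as np

def error_growth(v0, alpha, s, v_inf, days=20.0, dt=0.05):
    """Integrate dV/dt = (alpha*V + S) * (1 - V/V_inf): exponential internal
    growth plus a model-deficiency source S, saturating at V_inf."""
    v, trace = v0, []
    for _ in np.arange(0.0, days, dt):
        v += dt * (alpha * v + s) * (1.0 - v / v_inf)
        trace.append(v)
    return np.array(trace)

v = error_growth(v0=5.0, alpha=0.5, s=2.0, v_inf=150.0)   # illustrative values
# Limit of predictability ~ time at which variance reaches 95% of saturation:
t95 = 0.05 * (np.argmax(v >= 0.95 * 150.0) + 1)
print(f"95% of saturation reached after ~{t95:.1f} days")
```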

  8. 12 CFR 250.164 - Bankers' acceptances.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 4 2012-01-01 2012-01-01 false Bankers' acceptances. 250.164 Section 250.164... reserve requirements under section 7 of the International Banking Act of 1978 (12 U.S.C. 3105). The Board..., Form FR Y-7, are also to be used in the calculation of the acceptance limits applicable to...

  9. Robust characterization of leakage errors

    NASA Astrophysics Data System (ADS)

    Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph

    2016-04-01

    Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational ‘qubit’, the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor to achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.
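
    A generic sketch of estimating a leakage-style decay from randomized sequences by fitting p(m) = A + B*lambda^m to the surviving subspace population; the published protocol's estimator differs in its details, and the data here are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(m, a, b, lam):
    """Population remaining in the qubit subspace after m random gates."""
    return a + b * lam**m

rng = np.random.default_rng(4)
m = np.arange(1, 201, 10)
data = model(m, 0.5, 0.5, 0.995) + rng.normal(0, 0.005, m.size)  # synthetic

(a, b, lam), _ = curve_fit(model, m, data, p0=[0.5, 0.5, 0.99])
# 1 - lambda characterizes the combined leakage/return rate per gate
# (up to normalization by the asymptote A).
print(f"A={a:.2f}, lambda={lam:.4f} -> per-gate leakage ~ {1 - lam:.4f}")
```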

  10. Acceptability of BCG vaccination.

    PubMed

    Mande, R

    1977-01-01

    The acceptability of BCG vaccination varies a great deal according to the country and to the period when the vaccine is given. The incidence of complications has not always a direct influence on this acceptability, which depends, for a very large part, on the risk of tuberculosis in a given country at a given time.

  11. ATLAS ACCEPTANCE TEST

    SciTech Connect

    Cochrane, J. C. , Jr.; Parker, J. V.; Hinckley, W. B.; Hosack, K. W.; Mills, D.; Parsons, W. M.; Scudder, D. W.; Stokes, J. L.; Tabaka, L. J.; Thompson, M. C.; Wysocki, Frederick Joseph; Campbell, T. N.; Lancaster, D. L.; Tom, C. Y.

    2001-01-01

    The acceptance test program for Atlas, a 23 MJ pulsed power facility for use in the Los Alamos High Energy Density Hydrodynamics program, has been completed. Completion of this program officially releases Atlas from the construction phase and readies it for experiments. Details of the acceptance test program results and of machine capabilities for experiments will be presented.

  12. Some legal implications of pilot error.

    PubMed

    Hill, I R; Pile, R L

    1982-07-01

    Pilots are not expected to be superhuman beings, and it must therefore be accepted that they will make mistakes, some of which may have disastrous consequences. If it can be proven that the error equates with negligence in the pursuance of their duties, then they may be subjected to the full force of the law. However, because pilot error is a multifactorial phenomenon which is imperfectly understood, the initiation of legal proceedings may be difficult. If a penalty is to be imposed, the law demands a degree of proof which may be greater than that demanded by some investigating authorities before the label 'pilot error' can be applied.

  13. Estimation of flood warning runoff thresholds in ungauged basins with asymmetric error functions

    NASA Astrophysics Data System (ADS)

    Toth, Elena

    2016-06-01

    In many real-world flood forecasting systems, the runoff thresholds for activating warnings or mitigation measures correspond to the flow peaks with a given return period (often 2 years, which may be associated with the bankfull discharge). At locations where the historical streamflow records are absent or very limited, the threshold can be estimated with regionally derived empirical relationships between catchment descriptors and the desired flood quantile. Whatever the function form, such models are generally parameterised by minimising the mean square error, which assigns equal importance to overprediction or underprediction errors. Considering that the consequences of an overestimated warning threshold (leading to the risk of missing alarms) generally have a much lower level of acceptance than those of an underestimated threshold (leading to the issuance of false alarms), the present work proposes to parameterise the regression model through an asymmetric error function, which penalises the overpredictions more. The estimates by models (feedforward neural networks) with increasing degree of asymmetry are compared with those of a traditional, symmetrically trained network, in a rigorous cross-validation experiment referred to a database of catchments covering Italy. The analysis shows that the use of the asymmetric error function can substantially reduce the number and extent of overestimation errors, if compared to the use of the traditional square errors. Of course such reduction is at the expense of increasing underestimation errors, but the overall accuracy is still acceptable and the results illustrate the potential value of choosing an asymmetric error function when the consequences of missed alarms are more severe than those of false alarms.
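
    A minimal sketch of an asymmetric squared-error function of the kind described; the overprediction weight is illustrative, and in training it would replace the mean square error inside the network's loss:

```python
import numpy as np

def asymmetric_sq_error(y_true, y_pred, w_over=4.0):
    """Squared error that penalises overpredictions (y_pred > y_true, i.e.
    thresholds set too high, risking missed alarms) w_over times more
    than underpredictions."""
    err = y_pred - y_true
    weights = np.where(err > 0, w_over, 1.0)
    return np.mean(weights * err**2)

y_true = np.array([100.0, 80.0, 120.0])
print(asymmetric_sq_error(y_true, y_true + 10))   # overprediction: heavy penalty
print(asymmetric_sq_error(y_true, y_true - 10))   # underprediction: standard penalty
```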

  14. Estimation of flood warning runoff thresholds in ungauged basins with asymmetric error functions

    NASA Astrophysics Data System (ADS)

    Toth, E.

    2015-06-01

    In many real-world flood forecasting systems, the runoff thresholds for activating warnings or mitigation measures correspond to the flow peaks with a given return period (often the 2-year one, which may be associated with the bankfull discharge). At locations where the historical streamflow records are absent or very limited, the threshold can be estimated with regionally-derived empirical relationships between catchment descriptors and the desired flood quantile. Whatever the function form, such models are generally parameterised by minimising the mean square error, which assigns equal importance to overprediction or underprediction errors. Considering that the consequences of an overestimated warning threshold (leading to the risk of missing alarms) generally have a much lower level of acceptance than those of an underestimated threshold (leading to the issuance of false alarms), the present work proposes to parameterise the regression model through an asymmetric error function, which penalises overpredictions more. The estimates by models (feedforward neural networks) with increasing degree of asymmetry are compared with those of a traditional, symmetrically-trained network, in a rigorous cross-validation experiment referred to a database of catchments covering Italy. The analysis shows that the use of the asymmetric error function can substantially reduce the number and extent of overestimation errors, if compared to the use of the traditional square errors. Of course such reduction is at the expense of increasing underestimation errors, but the overall accuracy is still acceptable and the results illustrate the potential value of choosing an asymmetric error function when the consequences of missed alarms are more severe than those of false alarms.

  15. Quantifying errors without random sampling

    PubMed Central

    Phillips, Carl V; LaPole, Luwanna M

    2003-01-01

    Background: All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. Discussion: We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Summary: Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research. PMID:12892568
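
    A minimal Monte Carlo sketch in the spirit of the foodborne-illness example, assuming invented input distributions that encode how well each quantity is known:

```python
import numpy as np

# Monte Carlo propagation of non-sampling uncertainty: each input is a
# distribution expressing how well the quantity is known, not a sample.
rng = np.random.default_rng(5)
n = 100_000

cases_reported = rng.normal(10_000, 500, n)      # surveillance count (made up)
underreport_factor = rng.uniform(5, 20, n)       # wide interval: poorly known
true_incidence = cases_reported * underreport_factor

lo, mid, hi = np.percentile(true_incidence, [2.5, 50, 97.5])
print(f"incidence estimate {mid:,.0f} (95% uncertainty interval {lo:,.0f} - {hi:,.0f})")
```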

  16. Applying the intention-to-treat principle in practice: Guidance on handling randomisation errors

    PubMed Central

    Sullivan, Thomas R; Voysey, Merryn; Lee, Katherine J; Cook, Jonathan A; Forbes, Andrew B

    2015-01-01

    Background: The intention-to-treat principle states that all randomised participants should be analysed in their randomised group. The implications of this principle are widely discussed in relation to the analysis, but have received limited attention in the context of handling errors that occur during the randomisation process. The aims of this article are to (1) demonstrate the potential pitfalls of attempting to correct randomisation errors and (2) provide guidance on handling common randomisation errors, when they are discovered, in a way that maintains the goals of the intention-to-treat principle. Methods: The potential pitfalls of attempting to correct randomisation errors are demonstrated and guidance on handling common errors is provided, using examples from our own experiences. Results: We illustrate the problems that can occur when attempts are made to correct randomisation errors and argue that documenting, rather than correcting these errors, is most consistent with the intention-to-treat principle. When a participant is randomised using incorrect baseline information, we recommend accepting the randomisation but recording the correct baseline data. If ineligible participants are inadvertently randomised, we advocate keeping them in the trial and collecting all relevant data but seeking clinical input to determine their appropriate course of management, unless they can be excluded in an objective and unbiased manner. When multiple randomisations are performed in error for the same participant, we suggest retaining the initial randomisation and either disregarding the second randomisation if only one set of data will be obtained for the participant, or retaining the second randomisation otherwise. When participants are issued the incorrect treatment at the time of randomisation, we propose documenting the treatment received and seeking clinical input regarding the ongoing treatment of the participant. Conclusion: Randomisation errors are almost inevitable and, when discovered, are best documented rather than corrected, with participants retained and analysed as randomised in keeping with the intention-to-treat principle.

  17. Field error lottery

    NASA Astrophysics Data System (ADS)

    James Elliott, C.; McVey, Brian D.; Quimby, David C.

    1991-07-01

    The level of field errors in a free electron laser (FEL) is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is use of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond convenient mechanical tolerances of ± 25 μm, and amelioration of these may occur by a procedure using direct measurement of the magnetic fields at assembly time.

  18. Field error lottery

    NASA Astrophysics Data System (ADS)

    Elliott, C. James; McVey, Brian D.; Quimby, David C.

    1990-11-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement, and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time.

  19. Field error lottery

    SciTech Connect

    Elliott, C.J.; McVey, B. ); Quimby, D.C. )

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  20. Acceptance procedures: Microfilm printer

    NASA Technical Reports Server (NTRS)

    Lockwood, H. E.

    1973-01-01

    Acceptance tests were made for a special order automatic additive color microfilm printer. Tests include film capacity, film transport, resolution, illumination uniformity, exposure range checks, and color cuing considerations.

  1. Inborn errors of metabolism

    MedlinePlus

    Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine . 25th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 205. Rezvani I, Rezvani G. An ...

  2. Irreducible error rate in aeronautical satellite channels

    NASA Technical Reports Server (NTRS)

    Davarian, F.

    1988-01-01

    The irreducible error rate in aeronautical satellite systems is experimentally investigated. It is shown that the introduction of a delay in the multipath component of a Rician channel increases the channel irreducible error rate. However, since the carrier/multipath ratio is usually large for aeronautical applications, this rise in the irreducible error rate should not be interpreted as a practical limitation of aeronautical satellite communications.

  3. Current limiters

    SciTech Connect

    Loescher, D.H.; Noren, K.

    1996-09-01

    During electrical tests conducted on nuclear explosives at the DOE Pantex facility, the current that flows between the electrical test equipment and the nuclear explosive must be limited to safe levels. The safest way to limit the current is to use batteries that can provide only acceptably low current into a short circuit; unfortunately this is not always possible. When it is not possible, current limiters, along with other design features, are used to limit the current. Three types of current limiters, the fuse blower, the resistor limiter, and the MOSFET-pass-transistor limiter, are used extensively in Pantex test equipment. Detailed failure mode and effects analyses were conducted on these limiters. Two other types of limiters were also analyzed. It was found that there is no single best type of limiter for all applications. The fuse blower has advantages when many circuits must be monitored, a low insertion voltage drop is important, and size and weight must be kept low. However, this limiter has many failure modes that can lead to the loss of overcurrent protection. The resistor limiter is simple and inexpensive, but is normally usable only on circuits for which the nominal current is less than a few tens of milliamperes. The MOSFET limiter can be used on high current circuits, but it has a number of single-point failure modes that can lead to a loss of protective action. Because bad component placement or poor wire routing can defeat any limiter, placement and routing must be designed carefully and documented thoroughly.
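
    For the resistor limiter, the sizing arithmetic is simple worst-case Ohm's law; the voltage and current values below are illustrative:

```python
def resistor_limiter_ohms(source_v: float, max_short_circuit_ma: float) -> float:
    """Resistance that caps the short-circuit current at the allowed level:
    worst case is the full source voltage across the resistor alone."""
    return source_v / (max_short_circuit_ma / 1000.0)

# e.g. a 28 V test source limited to 10 mA into a dead short:
print(f"{resistor_limiter_ohms(28.0, 10.0):.0f} ohms minimum")
```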

  4. Detection and avoidance of errors in computer software

    NASA Technical Reports Server (NTRS)

    Kinsler, Les

    1989-01-01

    The acceptance test errors of a computer software project were examined to determine whether the errors could have been detected or avoided in earlier phases of development. GROAGSS (Gamma Ray Observatory Attitude Ground Support System) was selected as the software project to be examined. The development of the software followed the standard Flight Dynamics Software Development methods. GROAGSS was developed between August 1985 and April 1989. The project comprises approximately 250,000 lines of code, of which approximately 43,000 lines are reused from previous projects. GROAGSS had a total of 1715 Change Report Forms (CRFs) submitted during the entire development and testing. These changes contained 936 errors. Of these 936 errors, 374 were found during the acceptance testing. These acceptance test errors were first categorized into methods of avoidance, including: more clearly written requirements; detailed review; code reading; structural unit testing; and functional system integration testing. The errors were later broken down in terms of effort to detect and correct, class of error, and probability that the prescribed detection method would be successful. These determinations were based on Software Engineering Laboratory (SEL) documents and interviews with the project programmers. A summary of the results of the categorizations is presented. The number of programming errors at the beginning of acceptance testing can be significantly reduced. The results of the existing development methodology are examined for ways of improvement. A basis is provided for the definition of a new development/testing paradigm. Monitoring of the new scheme will objectively determine its effectiveness in avoiding and detecting errors.

  5. Drug Errors in Anaesthesiology

    PubMed Central

    Jain, Rajnish Kumar; Katiyar, Sarika

    2009-01-01

    Summary: Medication errors are a leading cause of morbidity and mortality in hospitalized patients. The incidence of these drug errors during anaesthesia is not certain. They impose a considerable financial burden on health care systems, apart from the harm to patients. Common causes of these errors and their prevention are discussed. PMID:20640103

  6. SU-D-BRD-07: Evaluation of the Effectiveness of Statistical Process Control Methods to Detect Systematic Errors For Routine Electron Energy Verification

    SciTech Connect

    Parker, S

    2015-06-15

    Purpose: To evaluate the ability of statistical process control methods to detect systematic errors when using a two dimensional (2D) detector array for routine electron beam energy verification. Methods: Electron beam energy constancy was measured using an aluminum wedge and a 2D diode array on four linear accelerators. Process control limits were established. Measurements were recorded in control charts and compared with both calculated process control limits and TG-142 recommended specification limits. The data was tested for normality, process capability and process acceptability. Additional measurements were recorded while systematic errors were intentionally introduced. Systematic errors included shifts in the alignment of the wedge, incorrect orientation of the wedge, and incorrect array calibration. Results: Control limits calculated for each beam were smaller than the recommended specification limits. Process capability and process acceptability ratios were greater than one in all cases. All data was normally distributed. Shifts in the alignment of the wedge were most apparent for low energies. The smallest shift (0.5 mm) was detectable using process control limits in some cases, while the largest shift (2 mm) was detectable using specification limits in only one case. The wedge orientation tested did not affect the measurements as this did not affect the thickness of aluminum over the detectors of interest. Array calibration dependence varied with energy and selected array calibration. 6 MeV was the least sensitive to array calibration selection while 16 MeV was the most sensitive. Conclusion: Statistical process control methods demonstrated that the data were normally distributed, the process was capable of meeting specifications, and that the process was centered within the specification limits. Though not all systematic errors were distinguishable from random errors, process control limits increased the ability to detect systematic errors.
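
    A sketch of individuals-chart control limits and a capability ratio as used in such analyses, assuming synthetic baseline wedge measurements and an illustrative ±2% specification:

```python
import numpy as np

def control_limits(baseline, sigma_mult=3.0):
    """Individuals-chart style control limits from baseline QA data."""
    mu, sd = np.mean(baseline), np.std(baseline, ddof=1)
    return mu - sigma_mult * sd, mu + sigma_mult * sd

def cp_ratio(baseline, spec_lo, spec_hi):
    """Process capability: > 1 means the process spread fits the specs."""
    return (spec_hi - spec_lo) / (6 * np.std(baseline, ddof=1))

rng = np.random.default_rng(6)
energy_ratio = rng.normal(1.000, 0.004, 30)     # baseline wedge measurements
lcl, ucl = control_limits(energy_ratio)
print(f"control limits: {lcl:.3f} - {ucl:.3f}")
print(f"Cp vs +/-2% spec: {cp_ratio(energy_ratio, 0.98, 1.02):.1f}")
```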

  7. A Fourier analysis on the maximum acceptable grid size for discrete proton beam dose calculation

    SciTech Connect

    Li, Haisen S.; Romeijn, H. Edwin; Dempsey, James F.

    2006-09-15

    We developed an analytical method for determining the maximum acceptable grid size for discrete dose calculation in proton therapy treatment plan optimization, so that the accuracy of the optimized dose distribution is guaranteed in the phase of dose sampling and superfluous computational work is avoided. The accuracy of dose sampling was judged by the criterion that the continuous dose distribution could be reconstructed from the discrete dose within a 2% error limit. To keep the error caused by discrete dose sampling under the 2% limit, the dose grid size cannot exceed a maximum acceptable value. The method was based on Fourier analysis and the Shannon-Nyquist sampling theorem, as an extension of our previous analysis for photon beam intensity modulated radiation therapy [J. F. Dempsey, H. E. Romeijn, J. G. Li, D. A. Low, and J. R. Palta, Med. Phys. 32, 380-388 (2005)]. The proton beam model used for the analysis was a nearly mono-energetic (of width about 1% of the incident energy) and monodirectional infinitesimal (nonintegrated) pencil beam in a water medium. By monodirectional, we mean that the protons travel in the same direction before entering the water medium; scattering prior to entering the water is not taken into account. In intensity modulated proton therapy, the elementary intensity modulation entity is either an infinitesimal or a finite sized beamlet. Since a finite sized beamlet is a superposition of infinitesimal pencil beams, the maximum acceptable grid size obtained for the infinitesimal pencil beam also applies to finite sized beamlets. The analytic Bragg curve function proposed by Bortfeld [T. Bortfeld, Med. Phys. 24, 2024-2033 (1997)] was employed. The lateral profile was approximated by a depth-dependent Gaussian distribution. The model included the spreads of the Bragg peak and the lateral profiles due to multiple Coulomb scattering. The dependence of the maximum acceptable dose grid size on the
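
    A minimal sketch of the Nyquist-style grid-size argument, assuming a Gaussian lateral profile of standard deviation sigma and treating the frequency at which the Gaussian's spectrum falls to the 2% tolerance as the effective bandwidth; the function name and the 3 mm sigma are illustrative, not the paper's derivation or values:

        import numpy as np

        def max_grid_size(sigma_mm, tol=0.02):
            """Largest grid spacing that keeps the reconstruction error of a
            Gaussian lateral profile (std dev sigma_mm) under `tol`.

            The Gaussian's spectrum is exp(-2*pi^2*sigma^2*f^2); the frequency
            where it falls to `tol` is taken as the effective bandwidth f_c,
            and the Shannon-Nyquist criterion gives spacing <= 1/(2*f_c).
            """
            f_c = np.sqrt(np.log(1.0 / tol)) / (np.pi * sigma_mm * np.sqrt(2.0))
            return 1.0 / (2.0 * f_c)

        # e.g. an assumed 3 mm lateral sigma near the Bragg peak
        print(f"maximum acceptable grid ~ {max_grid_size(3.0):.2f} mm")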

  8. [Medical errors in obstetrics].

    PubMed

    Marek, Z

    1984-08-01

    Errors in medicine may fall into 3 main categories: 1) medical errors made only by physicians, 2) technical errors made by physicians and other health care specialists, and 3) organizational errors associated with mismanagement of medical facilities. This classification of medical errors, as well as their definition and treatment, fully applies to obstetrics. However, obstetrics differs from other fields of medicine in that the obstetrician usually deals with healthy women. At the same time, professional risk in obstetrics is very high, as errors and malpractice can lead to very serious complications. Observations show that the most frequent obstetrical errors occur in induced abortions, diagnosis of pregnancy, selection of optimal delivery techniques, treatment of hemorrhages, and other complications. Therefore, the obstetrician should be prepared to use intensive care procedures similar to those used for resuscitation.

  9. [Errors in laboratory daily practice].

    PubMed

    Larrose, C; Le Carrer, D

    2007-01-01

    Legislation set by the GBEA (Guide de bonne exécution des analyses) requires that, before performing an analysis, laboratory directors check both the nature of the samples and the patient's identity. The data processing of requisition forms, which identifies key errors, was established in 2000 and 2002 by the specialized biochemistry laboratory, with the contribution of the reception centre for biological samples. The laboratories follow strict acceptability criteria at reception when checking requisition forms and biological samples. All errors are logged in the laboratory database, and analysis reports are sent to the care unit specifying the problems and their consequences for the analysis. The data are then assessed by the laboratory directors to produce monthly or annual statistical reports. These indicate the number of errors, indexed to patient files to reveal specific problem areas, allowing the laboratory directors to instruct the nurses and enable corrective action.

  10. Aircraft system modeling error and control error

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.

  11. A propagation of error analysis of the enzyme activity expression. A model for determining the total system random error of a kinetic enzyme analyzer.

    PubMed

    Tiffany, T O; Thayer, P C; Coelho, C M; Manning, G B

    1976-09-01

    We present a total system evaluation of random error, based on a propagation of error analysis of the expression for the calculation of enzyme activity. A simple expression is derived that contains terms for photometric error, timing uncertainty, temperature-control error, sample and reagent volume errors, and pathlength error. This error expression was developed in general to provide a simple means of evaluating the magnitude of random error in an analytical system, and in particular to provide an error evaluation protocol for assessing the error components in a prototype Miniature Centrifugal Analyzer system. Individual system components of error are measured. These measured error components are combined in the error expression to predict performance. Enzyme activity measurements are made to correlate with the projected error data. In conclusion, it is demonstrated that this is one method for permitting the clinical chemist and the instrument manufacturer to establish reasonable error limits. PMID:954193
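
    A minimal sketch of the propagation-of-error combination described above, assuming the enzyme-activity expression is a product/quotient of its factors so that independent relative variances add; the component CVs below are hypothetical, not the paper's measured values:

        import math

        def total_relative_error(*rel_errors):
            """Propagation of error for a product/quotient expression:
            independent relative variances add, so the total relative
            error is the root sum of squares of the component CVs."""
            return math.sqrt(sum(e * e for e in rel_errors))

        # Hypothetical component CVs: photometric, timing, temperature
        # control, sample volume, reagent volume, pathlength.
        components = [0.005, 0.002, 0.003, 0.004, 0.004, 0.001]
        print(f"predicted total random error (CV): "
              f"{total_relative_error(*components):.3%}")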

  12. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    NASA Astrophysics Data System (ADS)

    Gao, J.

    2014-12-01

    Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: when the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so it is often used as a substitute with little scientific justification. Existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error at each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that not only do the two metrics measure the characteristics of the probability distributions of modeling errors differently, but the effects of these characteristics on the overall expected error also differ. Most notably, under SQ error, bias, variance, and noise all increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially-explicit comparison for each error component showed that SQ error overstates all error components in comparison to ABS error, especially variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a
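
    A minimal sketch of the empirical bias-variance decomposition of SQ error that serves as the point of comparison above (expected SQ error = bias^2 + variance + noise, all additive); the ensemble and its bias are simulated purely for illustration:

        import numpy as np

        def bvd_squared_error(preds, y_true, noise_var=0.0):
            """Empirical bias-variance decomposition of squared error.

            preds: (n_models, n_points) predictions from models fit on
            resampled training sets. Under SQ error the decomposition is
            purely additive -- bias^2, variance, and noise all increase
            the expected error, with no subtractive terms."""
            mean_pred = preds.mean(axis=0)
            bias2 = np.mean((mean_pred - y_true) ** 2)
            variance = np.mean(preds.var(axis=0))
            return bias2, variance, bias2 + variance + noise_var

        rng = np.random.default_rng(1)
        y = np.linspace(0.0, 1.0, 50)
        preds = y + 0.1 + rng.normal(0, 0.05, size=(200, 50))  # biased ensemble
        bias2, var, total = bvd_squared_error(preds, y)
        print(f"bias^2={bias2:.4f}  variance={var:.4f}  expected SQ error={total:.4f}")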

  13. Smaller hospitals accept advertising.

    PubMed

    Mackesy, R

    1988-07-01

    Administrators at small- and medium-sized hospitals gradually have accepted the role of marketing in their organizations, albeit at a much slower rate than larger institutions. This update of a 1983 survey tracks the increasing competitiveness, complexity and specialization of providing health care and of advertising a small hospital's services. PMID:10288550

  14. Students Accepted on Probation.

    ERIC Educational Resources Information Center

    Lorberbaum, Caroline S.

    This report is a justification of the Dalton Junior College admissions policy designed to help students who had had academic and/or social difficulties at other schools. These students were accepted on probation, their problems carefully analyzed, and much effort devoted to those with low academic potential. They received extensive academic and…

  15. Approaches to acceptable risk

    SciTech Connect

    Whipple, C.

    1997-04-30

    Several alternative approaches to address the question "How safe is safe enough?" are reviewed and an attempt is made to apply the reasoning behind these approaches to the issue of acceptability of radiation exposures received in space. The approaches to the issue of the acceptability of technological risk described here are primarily analytical, and are drawn from examples in the management of environmental health risks. These include risk-based approaches, in which specific quantitative risk targets determine the acceptability of an activity, and cost-benefit and decision analysis, which generally focus on the estimation and evaluation of risks, benefits and costs, in a framework that balances these factors against each other. These analytical methods tend by their quantitative nature to emphasize the magnitude of risks, costs and alternatives, and to downplay other factors, especially those that are not easily expressed in quantitative terms, that affect acceptance or rejection of risk. Such other factors include the issues of risk perceptions and how and by whom risk decisions are made.

  16. Why was Relativity Accepted?

    NASA Astrophysics Data System (ADS)

    Brush, S. G.

    Historians of science have published many studies of the reception of Einstein's special and general theories of relativity. Based on a review of these studies, and my own research on the role of the light-bending prediction in the reception of general relativity, I discuss the role of three kinds of reasons for accepting relativity: (1) empirical predictions and explanations; (2) social-psychological factors; and (3) aesthetic-mathematical factors. According to the historical studies, acceptance was a three-stage process. First, a few leading scientists adopted the special theory for aesthetic-mathematical reasons. In the second stage, their enthusiastic advocacy persuaded other scientists to work on the theory and apply it to problems currently of interest in atomic physics. The special theory was accepted by many German physicists by 1910 and had begun to attract some interest in other countries. In the third stage, the confirmation of Einstein's light-bending prediction attracted much public attention and forced all physicists to take the general theory of relativity seriously. In addition to light-bending, the explanation of the advance of Mercury's perihelion was considered strong evidence by theoretical physicists. The American astronomers who conducted successful tests of general relativity became defenders of the theory. There is little evidence that relativity was 'socially constructed', but its initial acceptance was facilitated by the prestige and resources of its advocates.

  17. Software error detection

    NASA Technical Reports Server (NTRS)

    Buechler, W.; Tucker, A. G.

    1981-01-01

    Several methods were employed to detect both the occurrence and source of errors in the operational software of the AN/SLQ-32, a large embedded real-time electronic warfare command and control system for the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field size errors. Finally, data are saved to provide information about the status of the system when an error is detected. This information includes I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging problems, and segment simulation on a nontarget computer are discussed. These error detection techniques were a major factor in finding the primary cause of error in 98% of over 500 system dumps.

  18. Error detection method

    DOEpatents

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
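
    A minimal sketch of the patent's general idea, under the assumption that the workload is deterministic on the unit under test: run a compute-heavy algorithm intended to heat the processor, digest its entire output stream, and compare against a reference run, so that an error occurring anywhere during the run is visible at the end (the specific workload and digest are illustrative choices, not the patented method):

        import hashlib
        import numpy as np

        def stress_run(seed=42, iterations=200):
            """Deterministic, compute-heavy workload intended to heat the
            processor. The digest covers the full result stream, so an error
            occurring anywhere during the run is visible in the output."""
            rng = np.random.default_rng(seed)
            a = rng.random((256, 256))
            h = hashlib.sha256()
            for _ in range(iterations):
                a = (a @ a) % 1.0  # bounded, repeatable matrix workload
                h.update(np.ascontiguousarray(a).tobytes())
            return h.hexdigest()

        reference = stress_run()       # e.g. recorded on known-good hardware
        if stress_run() != reference:  # re-run on the unit under test
            print("hardware error detected during stress run")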

  19. Medication errors: definitions and classification.

    PubMed

    Aronson, Jeffrey K

    2009-06-01

    1. To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. 2. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey-Lewis method (based on an understanding of theory and practice). 3. A medication error is 'a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient'. 4. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is 'a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient'. The converse of this, 'balanced prescribing' is 'the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm'. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. 5. A prescription error is 'a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription'. The 'normal features' include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. 6. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies.

  1. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

    An error budget is a commonly used tool in the design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through the use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high performance space systems.
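
    A minimal sketch of the flow-down arithmetic behind such a budget, assuming independent contributors combined by root-sum-square; the subsystem names and allocations are invented for illustration:

        import math

        def rss(components):
            """Root-sum-square combination of independent error contributions."""
            return math.sqrt(sum(c * c for c in components))

        # Hypothetical flow-down: a 10-unit system-level allowable error is
        # allocated to three subsystems; the RSS of the allocations must not
        # exceed the system-level allocation.
        allocations = {"optics": 6.0, "structure": 5.0, "control": 5.0}
        used = rss(allocations.values())
        assert used <= 10.0
        print(f"budget used: {used:.2f} of 10.00")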

  2. Numerical modelling errors in electrical impedance tomography.

    PubMed

    Dehghani, Hamid; Soleimani, Manuchehr

    2007-07-01

    Electrical impedance tomography (EIT) is a non-invasive technique that aims to reconstruct images of internal impedance values of a volume of interest, based on measurements taken on the external boundary. Since most reconstruction algorithms rely on model-based approximations, it is important to ensure numerical accuracy for the model being used. This work demonstrates and highlights the importance of accurate modelling in terms of model discretization (meshing) and shows that although the predicted boundary data from a forward model may be within an accepted error, the calculated internal field, which is often used for image reconstruction, may contain errors, based on the mesh quality that will result in image artefacts.

  3. Acceptability of human risk.

    PubMed Central

    Kasperson, R E

    1983-01-01

    This paper has three objectives: to explore the nature of the problem implicit in the term "risk acceptability," to examine the possible contributions of scientific information to risk standard-setting, and to argue that societal response is best guided by considerations of process rather than formal methods of analysis. Most technological risks are not accepted but are imposed. There is also little reason to expect consensus among individuals on their tolerance of risk. Moreover, debates about risk levels are often at base debates over the adequacy of the institutions which manage the risks. Scientific information can contribute three broad types of analyses to risk-setting deliberations: contextual analysis, equity assessment, and public preference analysis. More effective risk-setting decisions will involve attention to the process used, particularly in regard to the requirements of procedural justice and democratic responsibility. PMID:6418541

  4. [Analysis of variance of bacterial counts in milk. 1. Characterization of total variance and the variance components: random sampling error, methodological error and variation between parallel samples during storage].

    PubMed

    Böhmer, L; Hildebrandt, G

    1998-01-01

    In contrast to today's largely automated chemical analytical methods, classical microbiological techniques are subject to considerable material- and operator-dependent sources of error. These effects must be objectively considered when assessing the reliability and representativeness of a test result. As an example of error analysis, the deviation of bacterial counts and the influence of the time of testing, the bacterial group involved (total bacterial count, coliform count) and the detection method used (pour-/spread-plate) were determined by repeated testing of parallel samples of pasteurized (stored for 8 days at 10 degrees C) and raw (stored for 3 days at 6 degrees C) milk. Separate characterization of the deviation components, namely the unavoidable random sampling error as well as the methodological error and the variation between parallel samples, was made possible by a test design to which variance analysis was applied. Based on the results of the study, the following conclusions can be drawn: 1. Immediately after filling, the total count deviation in milk mainly followed the POISSON-distribution model and allowed a reliable hygienic evaluation of lots even with few samples. Accordingly, regardless of the examination procedure used, setting up parallel dilution series can be dispensed with. 2. With increasing storage time, bacterial multiplication, especially of psychrotrophs, leads to unpredictable changes in the bacterial profile and density. With the increase in error between samples, it is common to find packages of acceptable microbiological quality alongside packages already spoiled before the labeled expiry date. As a consequence, a uniform acceptance or rejection of the batch is seldom possible. 3. Because the contamination level of coliforms in certified raw milk mostly lies near the detection limit, coliform counts with high relative deviation are to be expected in milk directly after filling. Since no bacterial multiplication takes place

  5. Preventing errors in laterality.

    PubMed

    Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie

    2015-04-01

    An error in laterality is the reporting of a finding that is present on the right side as being on the left, or vice versa. While various medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in separate colors. This allows the radiologist to check all detected laterality terms of the report against the images open in PACS and correct them before the report is finalized. The system was monitored, and every detected error in laterality was recorded. The system detected 32 errors in laterality over a 7-month period (a rate of 0.0007%), with CT having the highest error detection rate of all modalities. Significantly more errors were detected in male patients than in female patients. In conclusion, our study demonstrated that with our system, laterality errors can be detected and corrected prior to finalizing reports.
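
    A minimal sketch of the kind of laterality-term detection such a system performs; the term list and context window are assumptions, not the authors' implementation:

        import re

        LATERALITY = re.compile(r"\b(left|right|bilateral)\b", re.IGNORECASE)

        def flag_laterality(report_text, window=30):
            """Return each laterality term with surrounding context so it can
            be checked against the images before the report is finalized."""
            return [(m.group(0),
                     report_text[max(0, m.start() - window):m.end() + window])
                    for m in LATERALITY.finditer(report_text)]

        report = ("There is a 5 mm nodule in the left upper lobe. "
                  "The right kidney is unremarkable.")
        for term, context in flag_laterality(report):
            print(f"{term!r}: ...{context}...")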

  6. Refractive error blindness.

    PubMed Central

    Dandona, R.; Dandona, L.

    2001-01-01

    Recent data suggest that a large number of people are blind in different parts of the world due to high refractive error because they are not using appropriate refractive correction. Refractive error as a cause of blindness has been recognized only recently with the increasing use of presenting visual acuity for defining blindness. In addition to blindness due to naturally occurring high refractive error, inadequate refractive correction of aphakia after cataract surgery is also a significant cause of blindness in developing countries. Blindness due to refractive error in any population suggests that eye care services in general in that population are inadequate since treatment of refractive error is perhaps the simplest and most effective form of eye care. Strategies such as vision screening programmes need to be implemented on a large scale to detect individuals suffering from refractive error blindness. Sufficient numbers of personnel to perform reasonable quality refraction need to be trained in developing countries. Also adequate infrastructure has to be developed in underserved areas of the world to facilitate the logistics of providing affordable reasonable-quality spectacles to individuals suffering from refractive error blindness. Long-term success in reducing refractive error blindness worldwide will require attention to these issues within the context of comprehensive approaches to reduce all causes of avoidable blindness. PMID:11285669

  7. Everyday Scale Errors

    ERIC Educational Resources Information Center

    Ware, Elizabeth A.; Uttal, David H.; DeLoache, Judy S.

    2010-01-01

    Young children occasionally make "scale errors"--they attempt to fit their bodies into extremely small objects or attempt to fit a larger object into another, tiny, object. For example, a child might try to sit in a dollhouse-sized chair or try to stuff a large doll into it. Scale error research was originally motivated by parents' and…

  8. Age and Acceptance of Euthanasia.

    ERIC Educational Resources Information Center

    Ward, Russell A.

    1980-01-01

    Study explores relationship between age (and sex and race) and acceptance of euthanasia. Women and non-Whites were less accepting because of religiosity. Among older people less acceptance was attributable to their lesser education and greater religiosity. Results suggest that quality of life in old age affects acceptability of euthanasia. (Author)

  9. Proofreading for word errors.

    PubMed

    Pilotti, Maura; Chodorow, Martin; Agpawa, Ian; Krajniak, Marta; Mahamane, Salif

    2012-04-01

    Proofreading (i.e., reading text for the purpose of detecting and correcting typographical errors) is viewed as a component of the activity of revising text and thus is a necessary (albeit not sufficient) procedural step for enhancing the quality of a written product. The purpose of the present research was to test competing accounts of word-error detection which predict factors that may influence reading and proofreading differently. Word errors, which change a word into another word (e.g., from --> form), were selected for examination because they are unlikely to be detected by automatic spell-checking functions. Consequently, their detection still rests mostly in the hands of the human proofreader. Findings highlighted the weaknesses of existing accounts of proofreading and identified factors, such as length and frequency of the error in the English language relative to frequency of the correct word, which might play a key role in detection of word errors.

  10. The incidence of diagnostic error in medicine.

    PubMed

    Graber, Mark L

    2013-10-01

    A wide variety of research studies suggest that breakdowns in the diagnostic process result in a staggering toll of harm and patient deaths. These include autopsy studies, case reviews, surveys of patients and physicians, voluntary reporting systems, studies using standardised patients, second reviews, diagnostic testing audits and closed claims reviews. Although these different approaches provide important information and unique insights regarding diagnostic errors, each has limitations and none is well suited to establishing the incidence of diagnostic error in actual practice, or the aggregate rate of error and harm. We argue that being able to measure the incidence of diagnostic error is essential to enable research studies on diagnostic error and to initiate quality improvement projects aimed at reducing the risk of error and harm. Three approaches appear most promising in this regard: (1) using 'trigger tools' to identify from electronic health records cases at high risk for diagnostic error; (2) using standardised patients (secret shoppers) to study the rate of error in practice; (3) encouraging both patients and physicians to voluntarily report errors they encounter, and facilitating this process. PMID:23771902

  11. Errors in neuroradiology.

    PubMed

    Caranci, Ferdinando; Tedeschi, Enrico; Leone, Giuseppe; Reginelli, Alfonso; Gatta, Gianluca; Pinto, Antonio; Squillaci, Ettore; Briganti, Francesco; Brunese, Luca

    2015-09-01

    Approximately 4% of radiologic interpretations in daily practice contain errors, and discrepancies occur in 2-20% of reports. Fortunately, most of these are minor errors or, if serious, are found and corrected with sufficient promptness; diagnostic errors become critical when misinterpretation or misidentification significantly delays medical or surgical treatment. Errors can be summarized into four main categories: observer errors, errors in interpretation, failure to suggest the next appropriate procedure, and failure to communicate in a timely and clinically appropriate manner. The misdiagnosis/misinterpretation rate rises in the emergency setting and in the first stages of the learning curve, as in residency. Para-physiological and pathological pitfalls in neuroradiology include calcifications and brain stones, pseudofractures, enlargement of subarachnoid or epidural spaces, ventricular system abnormalities, vascular system abnormalities, intracranial lesions or pseudolesions, and neuroradiological emergencies. In order to minimize the possibility of error, it is important to be aware of the various presentations of pathology, obtain clinical information, know current practice guidelines, review a diagnostic study after interpreting it, suggest follow-up studies when appropriate, and communicate significant abnormal findings appropriately and in a timely fashion directly to the treatment team.

  12. Error Prevention Aid

    NASA Technical Reports Server (NTRS)

    1987-01-01

    In a complex computer environment there is ample opportunity for error, a mistake by a programmer, or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect the customer service or its profitability.

  13. Evaluating mixed samples as a source of error in non-invasive genetic studies using microsatellites

    USGS Publications Warehouse

    Roon, David A.; Thomas, M.E.; Kendall, K.C.; Waits, L.P.

    2005-01-01

    The use of noninvasive genetic sampling (NGS) for surveying wild populations is increasing rapidly. Currently, only a limited number of studies have evaluated potential biases associated with NGS. This paper evaluates the potential errors associated with analysing mixed samples drawn from multiple animals. Most NGS studies assume that mixed samples will be identified and removed during the genotyping process. We evaluated this assumption by creating 128 mixed samples of extracted DNA from brown bear (Ursus arctos) hair samples. These mixed samples were genotyped and screened for errors at six microsatellite loci according to protocols consistent with those used in other NGS studies. Five mixed samples produced acceptable genotypes after the first screening. However, all mixed samples produced multiple alleles at one or more loci, amplified as only one of the source samples, or yielded inconsistent electropherograms by the final stage of the error-checking process. These processes could potentially reduce the number of individuals observed in NGS studies, but errors should be conservative within demographic estimates. Researchers should be aware of the potential for mixed samples and carefully design gel analysis criteria and error checking protocols to detect mixed samples.

  14. ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES

    SciTech Connect

    Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.; Xu, Jin; Connors, Alanna; Freeman, Peter E.; Zezas, Andreas

    2010-08-10

    A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper
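
    A minimal sketch of the procedure described above, for Poisson counts with a known mean background: fix the detection threshold from the Type I error rate alpha, then find the smallest source intensity that is detected with probability at least 1 - beta; the step size and example numbers are illustrative, not the paper's recipe:

        from scipy.stats import poisson

        def detection_threshold(background, alpha=0.05):
            """Smallest count n* with P(N >= n* | background) <= alpha,
            i.e. a Type I (false positive) rate of at most alpha."""
            n = int(background)
            while poisson.sf(n - 1, background) > alpha:
                n += 1
            return n

        def upper_limit(background, alpha=0.05, beta=0.5, step=0.01):
            """Smallest source intensity detected with probability >= 1 - beta
            at the alpha-level threshold (Type II error at most beta)."""
            n_star = detection_threshold(background, alpha)
            s = 0.0
            while poisson.sf(n_star - 1, background + s) < 1.0 - beta:
                s += step
            return s

        print(upper_limit(background=3.0))  # illustrative mean background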

  15. Baby-Crying Acceptance

    NASA Astrophysics Data System (ADS)

    Martins, Tiago; de Magalhães, Sérgio Tenreiro

    A baby's crying is its most important means of communication. Crying monitoring performed by existing devices does not by itself ensure the safety of the child; these technological resources must be coupled with means of communicating the results to the caregiver, which involves digital processing of the information available in the crying. The survey carried out made it possible to assess the level of adoption, in the continental territory of Portugal, of a technology able to perform such digital processing. The Technology Acceptance Model (TAM) was used as the theoretical framework. The statistical analysis showed a good probability of acceptance of such a system.

  16. High acceptance recoil polarimeter

    SciTech Connect

    The HARP Collaboration

    1992-12-05

    In order to detect neutrons and protons in the 50 to 600 MeV energy range and measure their polarization, an efficient, low-noise, self-calibrating device is being designed. This detector, known as the High Acceptance Recoil Polarimeter (HARP), is based on the recoil principle of proton detection from np→n′p′ or pp→p′p′ scattering (detected particles are underlined), which intrinsically yields polarization information on the incoming particle. HARP will be commissioned to carry out experiments in 1994.

  17. The Errors of Our Ways: Understanding Error Representations in Cerebellar-Dependent Motor Learning

    PubMed Central

    Popa, Laurentiu S.; Streng, Martha L.; Hewitt, Angela L.; Ebner, Timothy J.

    2015-01-01

    The cerebellum is essential for error-driven motor learning and is strongly implicated in detecting and correcting for motor errors. Therefore, elucidating how motor errors are represented in the cerebellum is essential in understanding cerebellar function, in general, and its role in motor learning, in particular. This review examines how motor errors are encoded in the cerebellar cortex in the context of a forward internal model that generates predictions about the upcoming movement and drives learning and adaptation. In this framework, sensory prediction errors, defined as the discrepancy between the predicted consequences of motor commands and the sensory feedback, are crucial for both on-line movement control and motor learning. While many studies support the dominant view that motor errors are encoded in the complex spike discharge of Purkinje cells, others have failed to relate complex spike activity with errors. Given these limitations, we review recent findings in the monkey showing that complex spike modulation is not necessarily required for motor learning or for simple spike adaptation. Also, new results demonstrate that the simple spike discharge provides continuous error signals that both lead and lag the actual movements in time, suggesting errors are encoded as both an internal prediction of motor commands and the actual sensory feedback. These dual error representations have opposing effects on simple spike discharge, consistent with the signals needed to generate sensory prediction errors used to update a forward internal model. PMID:26112422

  18. Understanding and Confronting Our Mistakes: The Epidemiology of Error in Radiology and Strategies for Error Reduction.

    PubMed

    Bruno, Michael A; Walker, Eric A; Abujudeh, Hani H

    2015-10-01

    Arriving at a medical diagnosis is a highly complex process that is extremely error prone. Missed or delayed diagnoses often lead to patient harm and missed opportunities for treatment. Since medical imaging is a major contributor to the overall diagnostic process, it is also a major potential source of diagnostic error. Although some diagnoses may be missed because of the technical or physical limitations of the imaging modality, including image resolution, intrinsic or extrinsic contrast, and signal-to-noise ratio, most missed radiologic diagnoses are attributable to image interpretation errors by radiologists. Radiologic interpretation cannot be mechanized or automated; it is a human enterprise based on complex psychophysiologic and cognitive processes and is itself subject to a wide variety of error types, including perceptual errors (those in which an important abnormality is simply not seen on the images) and cognitive errors (those in which the abnormality is visually detected but the meaning or importance of the finding is not correctly understood or appreciated). The overall prevalence of radiologists' errors in practice does not appear to have changed since it was first estimated in the 1960s. The authors review the epidemiology of errors in diagnostic radiology, including a recently proposed taxonomy of radiologists' errors, as well as research findings, in an attempt to elucidate possible underlying causes of these errors. The authors also propose strategies for error reduction in radiology. On the basis of current understanding, specific suggestions are offered as to how radiologists can improve their performance in practice. PMID:26466178

  19. Quantum Error Correction for Metrology

    NASA Astrophysics Data System (ADS)

    Sushkov, Alex; Kessler, Eric; Lovchinsky, Igor; Lukin, Mikhail

    2014-05-01

    The question of the best achievable sensitivity in a quantum measurement is of great experimental relevance, and has seen a lot of attention in recent years. Recent studies [e.g., Nat. Phys. 7, 406 (2011), Nat. Comms. 3, 1063 (2012)] suggest that in most generic scenarios any potential quantum gain (e.g. through the use of entangled states) vanishes in the presence of environmental noise. To overcome these limitations, we propose and analyze a new approach to improve quantum metrology based on quantum error correction (QEC). We identify the conditions under which QEC allows one to improve the signal-to-noise ratio in quantum-limited measurements, and we demonstrate that it enables, in certain situations, Heisenberg-limited sensitivity. We discuss specific applications to nanoscale sensing using nitrogen-vacancy centers in diamond in which QEC can significantly improve the measurement sensitivity and bandwidth under realistic experimental conditions.

  20. On-Machine Acceptance

    SciTech Connect

    Arnold, K.F.

    2000-02-14

    Probing processes are used intermittently and not effectively as an on-line measurement device. This project was needed to evolve machine probing from merely a setup aid to an on-the-machine inspection system. Use of probing for on-machine inspection would significantly decrease cycle time by elimination of the need for first-piece inspection (at a remote location). Federal Manufacturing and Technologies (FM and T) had the manufacturing facility and the ability to integrate the system into production. The Contractor had a system that could optimize the machine tool to compensate for thermal growth and related error.

  1. Acceptance threshold theory can explain occurrence of homosexual behaviour.

    PubMed

    Engel, Katharina C; Männer, Lisa; Ayasse, Manfred; Steiger, Sandra

    2015-01-01

    Same-sex sexual behaviour (SSB) has been documented in a wide range of animals, but its evolutionary causes are not well understood. Here, we investigated SSB in the light of Reeve's acceptance threshold theory. When recognition is not error-proof, the acceptance threshold used by males to recognize potential mating partners should be flexibly adjusted to maximize the fitness pay-off between the costs of erroneously accepting males and the benefits of accepting females. By manipulating male burying beetles' search time for females and their reproductive potential, we influenced their perceived costs of making an acceptance or rejection error. As predicted, when the costs of rejecting females increased, males exhibited more permissive discrimination decisions and showed high levels of SSB; when the costs of accepting males increased, males were more restrictive and showed low levels of SSB. Our results support the idea that in animal species, in which the recognition cues of females and males overlap to a certain degree, SSB is a consequence of an adaptive discrimination strategy to avoid the costs of making rejection errors.
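
    A minimal sketch of the acceptance-threshold logic, assuming female and male recognition cues are overlapping Gaussians and that the optimal threshold minimizes the expected cost of the two error types; all distribution parameters and costs are illustrative, not the study's data:

        import numpy as np
        from scipy.stats import norm

        def optimal_threshold(cost_reject_female, cost_accept_male,
                              p_female=0.5, mu_f=0.0, mu_m=1.0, sd=1.0):
            """Acceptance threshold t (accept if cue < t) minimizing the
            expected cost of rejection errors (females with cue >= t) and
            acceptance errors (males with cue < t)."""
            ts = np.linspace(-3.0, 4.0, 1000)
            cost = (cost_reject_female * p_female * norm.sf(ts, mu_f, sd)
                    + cost_accept_male * (1 - p_female) * norm.cdf(ts, mu_m, sd))
            return ts[np.argmin(cost)]

        # Costly rejection errors -> permissive threshold (more SSB);
        # costly acceptance errors -> restrictive threshold (less SSB).
        print(optimal_threshold(cost_reject_female=5.0, cost_accept_male=1.0))
        print(optimal_threshold(cost_reject_female=1.0, cost_accept_male=5.0))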

  2. Estimating Bias Error Distributions

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Finley, Tom D.

    2001-01-01

    This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as the solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are applicable to virtually any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

  3. The surveillance error grid.

    PubMed

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  4. Empirical Tests of Acceptance Sampling Plans

    NASA Technical Reports Server (NTRS)

    White, K. Preston, Jr.; Johnson, Kenneth L.

    2012-01-01

    Acceptance sampling is a quality control procedure applied as an alternative to 100% inspection. A random sample of items is drawn from a lot to determine the fraction of items which have a required quality characteristic. Both the number of items to be inspected and the criterion for determining conformance of the lot to the requirement are given by an appropriate sampling plan with specified risks of Type I and Type II sampling errors. In this paper, we present the results of empirical tests of the accuracy of selected sampling plans reported in the literature. These plans are for measurable quality characteristics which are known to have either binomial, exponential, normal, gamma, Weibull, inverse Gaussian, or Poisson distributions. In the main, the results support the accepted wisdom that variables acceptance plans are superior to attributes (binomial) acceptance plans, in the sense that they provide comparable protection against risks at reduced sampling cost. For the Gaussian and Weibull plans, however, there are ranges of the shape parameters for which the required sample sizes are in fact larger than those of the corresponding attributes plans, dramatically so for instances of large skew. Tests further confirm that the published inverse-Gaussian (IG) plan is flawed, as reported by White and Johnson (2011).
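
    A minimal sketch of how an attributes (binomial) single-sampling plan trades off the two risks; the plan (n = 125, c = 3) and the quality levels are illustrative, not taken from the paper:

        from scipy.stats import binom

        def prob_accept(n, c, p):
            """Operating characteristic: probability of accepting a lot with
            defect fraction p when n items are inspected and the lot is
            accepted if at most c defects are found."""
            return binom.cdf(c, n, p)

        n, c = 125, 3                         # illustrative single-sampling plan
        alpha = 1 - prob_accept(n, c, 0.01)   # Type I risk at good quality (1%)
        beta = prob_accept(n, c, 0.06)        # Type II risk at poor quality (6%)
        print(f"reject a good lot: {alpha:.3f}, accept a bad lot: {beta:.3f}")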

  5. Effects of Error Experience When Learning to Simulate Hypernasality

    ERIC Educational Resources Information Center

    Wong, Andus W.-K.; Tse, Andy C.-Y.; Ma, Estella P.-M.; Whitehill, Tara L.; Masters, Rich S. W.

    2013-01-01

    Purpose: The purpose of this study was to evaluate the effects of error experience on the acquisition of hypernasal speech. Method: Twenty-eight healthy participants were asked to simulate hypernasality in either an "errorless learning" condition (in which the possibility for errors was limited) or an "errorful learning"…

  6. How to Leverage the Potential of Mathematical Errors

    ERIC Educational Resources Information Center

    Bray, Wendy S.

    2013-01-01

    Telling children that they can learn from their mistakes is common practice. Yet research indicates that many teachers in the United States limit public attention to errors during mathematics lessons (Bray 2011; Santagata 2005). Some believe that drawing attention to errors publicly may embarrass error makers or may be confusing to struggling…

  7. Alcohol and error processing.

    PubMed

    Holroyd, Clay B; Yeung, Nick

    2003-08-01

    A recent study indicates that alcohol consumption reduces the amplitude of the error-related negativity (ERN), a negative deflection in the electroencephalogram associated with error commission. Here, we explore possible mechanisms underlying this result in the context of two recent theories about the neural system that produces the ERN - one based on principles of reinforcement learning and the other based on response conflict monitoring.

  8. Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Brun, Todd A.

    2013-09-01

    Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and

  9. LHC INTERACTION REGION QUADRUPOLE ERROR IMPACT STUDIES

    SciTech Connect

    FISCHER,W.; PTITSIN,V.; WEI,J.

    1999-09-07

    The performance of the Large Hadron Collider (LHC) at collision energy is limited by the field quality of the interaction region (IR) quadrupoles and dipoles. In this paper the authors study the impact of the expected field errors of these magnets on the dynamic aperture. The authors investigate different magnet arrangements and error strength. Based on the results they propose and evaluate a corrector layout to meet the required dynamic aperture performance in a companion paper.

  10. Phasing piston error in segmented telescopes.

    PubMed

    Jiang, Junlun; Zhao, Weirui

    2016-08-22

    To achieve diffraction-limited imaging, the piston errors between the segments of a segmented-primary-mirror telescope should be reduced to λ/40 RMS. We propose a method to detect the piston error by analyzing the intensity distribution on the image plane according to Fourier optics principles; the method can capture segments with piston errors as large as the coherence length of the input light and reduce them to 0.026λ RMS (λ = 633 nm). This method is adaptable to any segmented and deployable primary mirror telescope. Experiments have been carried out to validate the feasibility of the method. PMID:27557192
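
    A minimal one-dimensional Fourier-optics sketch of why the image-plane intensity encodes piston: two subapertures with a piston step produce a far-field pattern whose on-axis intensity falls as the piston grows toward half a wave; the geometry and sampling are illustrative, not the authors' method:

        import numpy as np

        def farfield_intensity(piston_waves, n=512):
            """Image-plane intensity for two side-by-side subapertures with a
            piston step, via Fourier optics (far field = FFT of pupil field)."""
            x = np.linspace(-1.0, 1.0, n)
            pupil = np.zeros(n, dtype=complex)
            pupil[(x > -0.9) & (x < 0.0)] = 1.0  # segment 1
            pupil[(x > 0.0) & (x < 0.9)] = np.exp(2j * np.pi * piston_waves)
            field = np.fft.fftshift(np.fft.fft(pupil))
            return np.abs(field) ** 2

        for piston in (0.0, 0.25, 0.5):  # piston step, in waves
            intensity = farfield_intensity(piston)
            print(piston, intensity[len(intensity) // 2])  # on-axis value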

  11. Initial Proposal for MPI 3.0 Error Handling

    SciTech Connect

    Bronevetsky, G

    2008-07-07

    The MPI 2 spec contains error handling and notification mechanisms that have a number of limitations from the point of view of application fault tolerance: (1) The specification makes no demands on MPI to survive failures. Although MPI implementers are encouraged to 'circumscribe the impact of an error, so that normal processing can continue after an error handler was invoked', nothing more is specified in the standard. In particular, the defined MPI error classes are used only to clarify to the user the source of the error and do not describe the MPI functionality that is not available as a result of the error. (2) All errors must somehow be associated with some specific MPI call. As such, (A) It is difficult for MPI to notify users of failures in asynchronous calls, such as an MPI{_}Rsend call, which may return immediately after the message data is sent along the wire but before it is successfully delivered; (B) There is no provision for asynchronous error notification regarding errors that will affect future calls, such as notifying process p of the failure of process q before p tries to communicate with q. (3) There is no description of when error notification will happen relative to the occurrence of the error. In particular, the specification does not state whether an error that would cause MPI functions to return an error code under the MPI{_}ERRORS{_}RETURN error handler would cause a user-defined error handler to be called during the same MPI function or at some earlier or later point in time. (4) Although MPI makes it possible for libraries to define their own error classes and invoke application error handlers, it is not possible for the application to define new error notification patterns either within or across processes. This means that it is not possible for one application process to ask to be informed of errors on other processes or for the application to be informed of specific classes of errors.
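
    A minimal sketch of the MPI-2 behavior criticized above, using mpi4py for brevity: with MPI.ERRORS_RETURN installed, an error is surfaced only as the failure of a specific call, with an error class identifying its source; the ring exchange is just an illustrative call site:

        # Requires an MPI launcher, e.g.: mpiexec -n 2 python this_script.py
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        comm.Set_errhandler(MPI.ERRORS_RETURN)  # report per call, don't abort

        try:
            # Illustrative call site: a ring exchange. If a peer has failed,
            # the error surfaces here (if at all) -- there is no asynchronous
            # notification beforehand.
            msg = comm.sendrecv(b"payload",
                                dest=(comm.rank + 1) % comm.size,
                                source=(comm.rank - 1) % comm.size)
        except MPI.Exception as e:
            # The error class identifies the source of the error, not which
            # MPI functionality remains usable afterwards.
            print("MPI error class:", e.Get_error_class())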

  13. Error reduction in EMG signal decomposition.

    PubMed

    Kline, Joshua C; De Luca, Carlo J

    2014-12-01

    Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061 motor-unit action-potential trains (MUAPTs), obtained by decomposing surface EMG (sEMG) signals recorded during human voluntary contractions. Decomposition errors were classified into two general categories: location errors representing variability in the temporal localization of each motor-unit firing instance and identification errors consisting of falsely detected or missed firing instances. To mitigate these errors, we developed an error-reduction algorithm that combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances with fewer errors. The performance of the algorithm is governed by a trade-off between the yield of MUAPTs obtained above a given accuracy level and the time required to perform the decomposition. When applied to a set of sEMG signals synthesized from real MUAPTs, the identification error was reduced by an average of 1.78%, improving the accuracy to 97.0%, and the location error was reduced by an average of 1.66 ms. The error-reduction algorithm in this study is not limited to any specific decomposition strategy. Rather, we propose it be used for other decomposition methods, especially when analyzing precise motor-unit firing instances, as occurs when measuring synchronization.

  14. Human error in aviation operations

    NASA Technical Reports Server (NTRS)

    Nagel, David C.

    1988-01-01

    The role of human error in commercial and general aviation accidents and the techniques used to evaluate it are reviewed from a human-factors perspective. Topics addressed include the general decline in accidents per million departures since the 1960s, the increase in the proportion of accidents due to human error, methods for studying error, theoretical error models, and the design of error-resistant systems. Consideration is given to information acquisition and processing errors, visually guided flight, disorientation, instrument-assisted guidance, communication errors, decision errors, debiasing, and action errors.

  15. Error monitoring in musicians.

    PubMed

    Maidhof, Clemens

    2013-01-01

    To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e., the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. Electroencephalography (EEG) studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e., attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions for the processing of auditory information. Furthermore, recent methodological advances like the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domains during error monitoring. Finally, outstanding questions and future directions in this context will be discussed.

  16. Errata: Papers in Error Analysis.

    ERIC Educational Resources Information Center

    Svartvik, Jan, Ed.

    Papers presented at the symposium of error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…

  17. Computation of Standard Errors

    PubMed Central

    Dowd, Bryan E; Greene, William H; Norton, Edward C

    2014-01-01

    Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject and average effect for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, choice of the computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values.
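
    As a rough illustration of one of the three approaches, the sketch below computes a bootstrap standard error for a predicted value that is a function of estimated parameters. The data-generating model and all numbers are invented for the example; the paper's own code targets Stata 12 and LIMDEP 10/NLOGIT 5.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical data: y = 1 + 2x + noise; the function of interest is
        # the predicted value of y at x = 1.5, a function of the estimates.
        n = 200
        x = rng.normal(size=n)
        y = 1.0 + 2.0 * x + rng.normal(size=n)

        def predict_at(x_new, xs, ys):
            """OLS fit, then the predicted value at x_new."""
            slope, intercept = np.polyfit(xs, ys, 1)
            return intercept + slope * x_new

        # Bootstrap: resample (x, y) pairs and recompute the function.
        reps = 2000
        stats = np.empty(reps)
        for r in range(reps):
            idx = rng.integers(0, n, size=n)
            stats[r] = predict_at(1.5, x[idx], y[idx])

        print("bootstrap SE of prediction at x = 1.5:", stats.std(ddof=1))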

  18. Compact disk error measurements

    NASA Technical Reports Server (NTRS)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.

  19. The Michelson Stellar Interferometer Error Budget for Triple Triple-Satellite Configuration

    NASA Technical Reports Server (NTRS)

    Marathay, Arvind S.; Shiefman, Joe

    1996-01-01

    This report presents the results of a study of the instrumentation tolerances for a conventional style Michelson stellar interferometer (MSI). The method used to determine the tolerances was to determine the change, due to the instrument errors, in the measured fringe visibility and phase relative to the ideal values. The ideal values are those values of fringe visibility and phase that would be measured by a perfect MSI and are attributable solely to the object being detected. Once the functional relationship for changes in visibility and phase as a function of various instrument errors is understood, it is then possible to set limits on the instrument errors in order to ensure that the measured visibility and phase differ from the ideal values by no more than some specified amount. This was done as part of this study. The limits we obtained are based on a visibility error of no more than 1% and a phase error of no more than 0.063 radians (1% of 2π radians). The choice of these 1% limits is supported in the literature. The approach employed in the study involved the use of ASAP (Advanced System Analysis Program) software provided by Breault Research Organization, Inc., in conjunction with parallel analytical calculations. The interferometer accepts object radiation into two separate arms, each consisting of an outer mirror, an inner mirror, a delay line (made up of two movable mirrors and two static mirrors), and a 10:1 afocal reduction telescope. The radiation coming out of both arms is incident on a slit plane which is opaque with two openings (slits). One of the two slits is centered directly under one of the two arms of the interferometer and the other slit is centered directly under the other arm. The slit plane is followed immediately by an ideal combining lens which images the radiation in the fringe plane (also referred to subsequently as the detector plane).

  20. Experimental Quantum Error Detection

    PubMed Central

    Jin, Xian-Min; Yi, Zhen-Huan; Yang, Bin; Zhou, Fei; Yang, Tao; Peng, Cheng-Zhi

    2012-01-01

    Faithful transmission of quantum information is a crucial ingredient in quantum communication networks. To overcome the unavoidable decoherence in a noisy channel, to date, many efforts have been made to transmit one state by consuming large numbers of time-synchronized ancilla states. However, such huge demands of quantum resources are hard to meet with current technology and this restricts practical applications. Here we experimentally demonstrate quantum error detection, an economical approach to reliably protecting a qubit against bit-flip errors. Arbitrary unknown polarization states of single photons and entangled photons are converted into time bins deterministically via a modified Franson interferometer. Noise arising in both 10 m and 0.8 km fiber, which induces associated errors on the reference frame of time bins, is filtered when photons are detected. The demonstrated resource efficiency and state independence make this protocol a promising candidate for implementing a real-world quantum communication network.

  1. Good people who try their best can have problems: recognition of human factors and how to minimise error.

    PubMed

    Brennan, Peter A; Mitchell, David A; Holmes, Simon; Plint, Simon; Parry, David

    2016-01-01

    Human error is as old as humanity itself and is an appreciable cause of mistakes by both organisations and people. Much of the work related to human factors in causing error has originated from aviation, where mistakes can be catastrophic not only for those who contribute to the error, but for passengers as well. The role of human error in medical and surgical incidents, which are often multifactorial, is becoming better understood, and includes both organisational issues (by the employer) and potential human factors (at a personal level). Mistakes resulting from individual human factors within surgical teams should be better recognised and emphasised. Attitudes towards, and acceptance of, preoperative briefing have improved since the introduction of the World Health Organization (WHO) surgical checklist. However, this does not address limitations or other safety concerns that are related to performance, such as stress and fatigue, emotional state, hunger, awareness of what is going on (situational awareness), and other factors that could potentially lead to error. Here we attempt to raise awareness of these human factors, highlight how they can lead to error, and show how they can be minimised in our day-to-day practice. Can hospitals move from being "high risk industries" to "high reliability organisations"?

  2. Error Free Software

    NASA Technical Reports Server (NTRS)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  3. Sonic boom acceptability studies

    NASA Astrophysics Data System (ADS)

    Shepherd, Kevin P.; Sullivan, Brenda M.; Leatherwood, Jack D.; McCurdy, David A.

    1992-04-01

    The determination of the magnitude of sonic boom exposure which would be acceptable to the general population requires, as a starting point, a method to assess and compare individual sonic booms. There is no consensus within the scientific and regulatory communities regarding an appropriate sonic boom assessment metric. Loudness, being a fundamental and well-understood attribute of human hearing was chosen as a means of comparing sonic booms of differing shapes and amplitudes. The figure illustrates the basic steps which yield a calculated value of loudness. Based upon the aircraft configuration and its operating conditions, the sonic boom pressure signature which reaches the ground is calculated. This pressure-time history is transformed to the frequency domain and converted into a one-third octave band spectrum. The essence of the loudness method is to account for the frequency response and integration characteristics of the auditory system. The result of the calculation procedure is a numerical description (perceived level, dB) which represents the loudness of the sonic boom waveform.
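
    A schematic sketch of the spectrum step described above, assuming a uniformly sampled pressure signature; the frequency range and band-edge convention are illustrative. The final perceived level additionally requires a loudness model that weights the band levels by the frequency response of the ear.

        import numpy as np

        def third_octave_spectrum(p, fs, f_min=20.0, f_max=4000.0):
            """Collapse the power spectrum of a pressure-time history p
            (sampled at fs Hz) into one-third octave band levels in dB."""
            spec = np.abs(np.fft.rfft(p)) ** 2
            freqs = np.fft.rfftfreq(len(p), d=1.0 / fs)

            centers, levels = [], []
            fc = f_min
            while fc <= f_max:
                lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)  # band edges
                band = spec[(freqs >= lo) & (freqs < hi)].sum()
                centers.append(fc)
                levels.append(10 * np.log10(band + 1e-30))     # dB, arb. ref.
                fc *= 2 ** (1 / 3)                             # next band
            return np.array(centers), np.array(levels)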

  5. Reducing prospective memory error and costs in simulated air traffic control: External aids, extending practice, and removing perceived memory requirements.

    PubMed

    Loft, Shayne; Chapman, Melissa; Smith, Rebekah E

    2016-09-01

    In air traffic control (ATC), forgetting to perform deferred actions, known as prospective memory (PM) errors, can have severe consequences. PM demands can also interfere with ongoing tasks (costs). We examined the extent to which PM errors and costs were reduced in simulated ATC by providing extended practice, or by providing external aids combined with extended practice, or by providing external aids combined with instructions that removed perceived memory requirements. Participants accepted/handed-off aircraft and detected conflicts. For the PM task, participants were required to substitute alternative actions for routine actions when accepting aircraft. In Experiment 1, when no aids were provided, PM errors and costs were not reduced by practice. When aids were provided, costs observed early in practice were eliminated with practice, but residual PM errors remained. Experiment 2 provided more limited practice with aids, but instructions that did not frame the PM task as a "memory" task led to high PM accuracy without costs. Attention-allocation policies that participants set based on expected PM demands were modified as individuals were increasingly exposed to reliable aids, or were given instructions that removed perceived memory requirements. These findings have implications for the design of aids for individuals who monitor multi-item dynamic displays.

  6. Automatically generated acceptance test: A software reliability experiment

    NASA Technical Reports Server (NTRS)

    Protzel, Peter W.

    1988-01-01

    This study presents results of a software reliability experiment investigating the feasibility of a new error detection method. The method can be used as an acceptance test and is solely based on empirical data about the behavior of internal states of a program. The experimental design uses the existing environment of a multi-version experiment previously conducted at the NASA Langley Research Center, in which the launch interceptor problem is used as a model. This allows the controlled experimental investigation of versions with well-known single and multiple faults, and the availability of an oracle permits the determination of the error detection performance of the test. Fault interaction phenomena are observed that have an amplifying effect on the number of error occurrences. Preliminary results indicate that all faults examined so far are detected by the acceptance test. This shows promise for further investigations, and for the employment of this test method on other applications.

  7. Human Factors Process Task Analysis: Liquid Oxygen Pump Acceptance Test Procedure at the Advanced Technology Development Center

    NASA Technical Reports Server (NTRS)

    Diorio, Kimberly A.; Voska, Ned (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides information on Human Factors Process Failure Modes and Effects Analysis (HF PFMEA). HF PFMEA includes the following 10 steps: Describe mission; Define system; Identify human-machine interfaces; List human actions; Identify potential errors; Identify factors that affect error; Determine likelihood of error; Determine potential effects of errors; Evaluate risk; Generate solutions (manage error). The presentation also describes how this analysis was applied to a liquid oxygen pump acceptance test.
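
    A toy illustration of the risk-evaluation steps (determine likelihood, determine effects, evaluate risk): each potential human error receives a likelihood and a severity rating whose product prioritises mitigation. The error descriptions, 1-5 scales, and action threshold are all assumed; HF PFMEA practice defines its own scales.

        # Hypothetical error list: (description, likelihood 1-5, severity 1-5)
        errors = [
            ("misread pressure gauge",          3, 4),
            ("skip valve line-up verification", 2, 5),
            ("transpose test-step order",       2, 3),
        ]

        # Rank by a simple risk index and flag items needing mitigation.
        for desc, likelihood, severity in sorted(
                errors, key=lambda e: e[1] * e[2], reverse=True):
            risk = likelihood * severity
            action = "mitigate" if risk >= 10 else "monitor"
            print(f"{desc:35s} risk={risk:2d} -> {action}")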

  8. Detention and deception: limits of ethical acceptability in detention research.

    PubMed

    Minas, I H

    2004-10-01

    The core of Australia's response to asylum seekers who arrive in an unauthorised manner has been to detain them in immigration detention centres until they are judged to engage Australia's protection obligations or, if they do not, until they are returned to their country of origin. For a number of asylum seekers this has resulted in very prolonged detention. This policy has aroused a storm of controversy, with very polarised positions being taken by participants in the debate. In particular, the claim has frequently been made (including by this author) that the circumstances and duration of immigration detention cause substantial harm to the mental health of a significant number of detained asylum seekers. A rational debate on the effects of detention has been hampered by the fact that the Australian government has not allowed researchers access to the detention centres in spite of persistent requests for access by professional bodies. This paper is written in response to the following questions posed by the Journal: Is there a case to be made for individuals agreeing to participate in research studies and for the wider population of current and future detainees to be involved in research without informing either the detention provider or the host nation? Is it legitimate for a researcher to engage in potentially deceptive actions in order to obtain access to such detention facilities to undertake research? What ethical framework should underpin such research? Although there is very little guidance in the literature on the ethical conduct of research in settings such as immigration detention centres, a consideration of the ethical implications of carrying out research in the manner raised by these questions leads this author to conclude that such research cannot be ethically justified. Governments must be persuaded to allow, and to provide substantial support for, ethically conducted research on all aspects of detention. There is also a need for the development of an explicit ethical framework for the conduct of research in settings characterised by a very problematic human rights context.

  9. [Medical device use errors].

    PubMed

    Friesdorf, Wolfgang; Marsolek, Ingo

    2008-01-01

    Medical devices define our everyday patient treatment processes. But despite their beneficial effect, every use can also lead to damage. Use errors are thus often explained by human failure. But human errors can never be completely eliminated, especially in work processes as complex as those in medicine, which often involve time pressure. Therefore we need error-tolerant work systems in which potential problems are identified and solved as early as possible. In this context human engineering uses the TOP principle: technological before organisational and then person-related solutions. But especially in everyday medical work we realise that error-prone usability concepts can often only be counterbalanced by organisational or person-related measures. Thus human failure is pre-programmed. In addition, many medical workplaces represent a somewhat chaotic accumulation of individual devices with totally different user interaction concepts. There is not only a lack of holistic workplace concepts, but of holistic process and system concepts as well. However, this can only be achieved through the co-operation of producers, healthcare providers and clinical users, by systematically analyzing and iteratively optimizing the underlying treatment processes from both a technological and organizational perspective. What we need is a joint platform like medilab V of the TU Berlin, in which the entire medical treatment chain can be simulated in order to discuss, experiment and model--a key to a safe and efficient healthcare system of the future.

  10. Orwell's Instructive Errors

    ERIC Educational Resources Information Center

    Julian, Liam

    2009-01-01

    In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…

  11. Help prevent hospital errors

    MedlinePlus

  12. The influence of the IMRT QA set-up error on the 2D and 3D gamma evaluation method as obtained by using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Kim, Kyeong-Hyeon; Kim, Dong-Su; Kim, Tae-Ho; Kang, Seong-Hee; Cho, Min-Seok; Suh, Tae Suk

    2015-11-01

    The phantom-alignment error is one of the factors affecting delivery quality assurance (QA) accuracy in intensity-modulated radiation therapy (IMRT). Accordingly, spatial information may be used inadequately in the gamma evaluation for patient-specific IMRT QA. The influence of the phantom-alignment error on the gamma evaluation can be demonstrated experimentally by using the gamma passing rate and the gamma value. However, such experimental methods are limited for intrinsically verifying the influence of the phantom set-up error because the phantom-alignment error cannot be measured exactly in an experiment. To overcome this limitation, we aimed to verify the effect of the phantom set-up error within the gamma evaluation formula by using a Monte Carlo simulation. Artificial phantom set-up errors were simulated, and the concept of the true point (TP) was used to represent the actual coordinates of the measurement point when mathematically modeling the effects of these errors on the gamma. Using dose distributions acquired from the Monte Carlo simulation, we performed gamma evaluations in 2D and 3D. The results of the gamma evaluations and the dose differences at the TP were classified to verify how well the dose at the TP was reflected. The 2D and 3D gamma errors were defined by comparing gamma values between the case of the imposed phantom set-up error and the TP in order to investigate the effect of the set-up error on the gamma value. According to the results for the gamma errors, the 3D gamma evaluation reflected the dose at the TP better than the 2D one. Moreover, the gamma passing rates were higher for 3D than for 2D, as is widely known. Thus, the 3D gamma evaluation can increase the precision of patient-specific IMRT QA by applying stringent acceptance criteria and setting a reasonable action level for the 3D gamma passing rate.
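
    For reference, a brute-force sketch of the gamma evaluation that the study builds on, reduced to 1-D dose profiles for brevity. The 3 mm / 3% criteria, the grid, and the simulated set-up shift are assumptions; clinical tools use optimized 2D/3D searches.

        import numpy as np

        def gamma_index(dose_ref, dose_eval, dx, dta=3.0, dd=0.03):
            """Global gamma for 1-D profiles: dx is the grid spacing (mm),
            dta the distance-to-agreement (mm), dd the dose criterion as a
            fraction of the maximum reference dose."""
            x = np.arange(len(dose_ref)) * dx
            dmax = dose_ref.max()
            gammas = np.empty(len(dose_ref))
            for i, (xi, di) in enumerate(zip(x, dose_ref)):
                dist2 = ((x - xi) / dta) ** 2
                dose2 = ((dose_eval - di) / (dd * dmax)) ** 2
                gammas[i] = np.sqrt((dist2 + dose2).min())
            return gammas

        ref = np.exp(-np.linspace(-3, 3, 121) ** 2)   # toy reference profile
        ev = np.roll(ref, 2)                          # mimic a set-up shift
        g = gamma_index(ref, ev, dx=1.0)
        print("gamma passing rate:", (g <= 1).mean()) # fraction with gamma<=1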

  13. Challenge and Error: Critical Events and Attention-Related Errors

    ERIC Educational Resources Information Center

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error → attention-lapse; Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  14. Inborn Errors of Metabolism.

    PubMed

    Ezgu, Fatih

    2016-01-01

    Inborn errors of metabolism are single gene disorders resulting from defects in the biochemical pathways of the body. Although these disorders are individually rare, collectively they account for a significant portion of childhood disability and deaths. Most of the disorders are inherited as autosomal recessive, whereas autosomal dominant and X-linked disorders are also present. The clinical signs and symptoms arise from the accumulation of the toxic substrate, deficiency of the product, or both. Depending on the residual activity of the deficient enzyme, the onset of the clinical picture may vary from the newborn period up until adulthood. Hundreds of disorders have been described to date, and there is considerable clinical overlap between certain inborn errors. As a result, the definite diagnosis of inborn errors depends on enzyme assays or genetic tests. Especially during recent years, significant achievements have been made in the biochemical and genetic diagnosis of inborn errors. Techniques such as tandem mass spectrometry and gas chromatography for biochemical diagnosis, and microarrays and next-generation sequencing for genetic diagnosis, have enabled rapid and accurate diagnosis. These achievements have also enabled newborn screening and prenatal diagnosis. Parallel to the development of diagnostic methods, significant progress has also been made in treatment. Treatment approaches such as special diets, enzyme replacement therapy, substrate inhibition, and organ transplantation have been widely used. It is obvious that, with the help of the preclinical and clinical research carried out on inborn errors, better diagnostic methods and better treatment approaches will very likely become available.

  15. Treatment Acceptability of Healthcare Services for Children with Cerebral Palsy

    ERIC Educational Resources Information Center

    Dahl, Norm; Tervo, Raymond; Symons, Frank J.

    2007-01-01

    Background: Although treatment acceptability scales in intellectual and developmental disabilities research have been used in large- and small-scale applications, large-scale application has been limited to analogue (i.e. contrived) investigations. This study extended the application of treatment acceptability by assessing a large sample of care…

  16. Increasing Our Acceptance as Parents of Children with Special Needs

    ERIC Educational Resources Information Center

    Loewenstein, David

    2007-01-01

    Accepting the limitations of a child whose life was supposed to be imbued with endless possibilities requires parents to come to terms with expectations of themselves and the world around them. In this article, the author offers some helpful strategies for fostering acceptance and strengthening family relationships: (1) Remember that parenting is…

  17. Functional Error Models to Accelerate Nested Sampling

    NASA Astrophysics Data System (ADS)

    Josset, L.; Elsheikh, A. H.; Demyanov, V.; Lunati, I.

    2014-12-01

    Within the Nested Sampling algorithm, each proposed geostatistical realization is first evaluated through the approximate model to decide whether or not it is worth performing a full physics simulation. This improves the acceptance rate of full physics simulations and opens the door to iteratively testing the performance and improving the quality of the error model.
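
    A minimal sketch of the two-stage acceptance idea, with stand-in functions for the expensive simulator and the cheap error model; every name and threshold here is hypothetical.

        import random

        def full_physics(realization):
            # Stand-in for an expensive full-physics flow simulation.
            return sum(v * v for v in realization)

        def error_model(realization):
            # Cheap approximation of full_physics; here just a noisy
            # surrogate standing in for the functional error model.
            return sum(v * v for v in realization) + random.gauss(0.0, 0.5)

        def two_stage_accept(realization, threshold):
            """Screen with the approximate model; only promising proposals
            trigger a full-physics run, raising the acceptance rate of the
            full simulations."""
            if error_model(realization) > threshold:
                return False                      # rejected cheaply
            return full_physics(realization) <= threshold

        random.seed(1)
        proposals = [[random.gauss(0, 1) for _ in range(4)] for _ in range(100)]
        accepted = [p for p in proposals if two_stage_accept(p, threshold=4.0)]
        print(len(accepted), "of", len(proposals), "proposals accepted")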

  18. The effect of biomechanical variables on force sensitive resistor error: Implications for calibration and improved accuracy.

    PubMed

    Schofield, Jonathon S; Evans, Katherine R; Hebert, Jacqueline S; Marasco, Paul D; Carey, Jason P

    2016-03-21

    Force Sensitive Resistors (FSRs) are commercially available thin-film polymer sensors commonly employed in a multitude of biomechanical measurement environments. Reasons for such widespread usage lie in the versatility, small profile, and low cost of these sensors. Yet FSRs have limitations. It is commonly accepted that temperature, curvature, and biological tissue compliance may impact sensor conductance and the resulting force readings. The effect of these variables and the degree to which they interact has yet to be comprehensively investigated and quantified. This work systematically assesses varying levels of temperature, sensor curvature, and surface compliance using a full factorial design-of-experiments approach. Three models of Interlink FSRs were evaluated. Calibration equations under 12 unique combinations of temperature, curvature, and compliance were determined for each sensor. Root mean squared error, mean absolute error, and maximum error were quantified as measures of the impact these thermo/mechanical factors have on sensor performance. It was found that all three variables have the potential to affect FSR calibration curves. The FSR model and corresponding sensor geometry are sensitive to these three mechanical factors at varying levels. Experimental results suggest that reducing sensor error requires calibration of each sensor in an environment as close to its intended use as possible and, if multiple FSRs are used in a system, they must be calibrated independently.
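
    An illustrative sketch of per-condition calibration and the three error measures quantified in the study. The power-law calibration form, the synthetic data, and all numbers are assumptions, not the paper's results.

        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical calibration data for one FSR under one combination of
        # temperature / curvature / backing compliance.
        force = np.linspace(5, 50, 25)                         # N, reference
        reading = 0.8 * force ** 0.9 + rng.normal(0, 0.4, 25)  # sensor output

        # Fit reading = a * force**b (a line in log-log space), then invert
        # the curve to predict force from the sensor reading.
        b, log_a = np.polyfit(np.log(force), np.log(reading), 1)
        predicted = (reading / np.exp(log_a)) ** (1.0 / b)

        err = predicted - force
        print("RMSE:", np.sqrt(np.mean(err ** 2)))
        print("MAE :", np.mean(np.abs(err)))
        print("max :", np.abs(err).max())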

  19. Medical error and human factors engineering: where are we now?

    PubMed

    Gawron, Valerie J; Drury, Colin G; Fairbanks, Rollin J; Berger, Roseanne C

    2006-01-01

    The goal of human factors engineering is to optimize the relationship between humans and systems by studying human behavior, abilities, and limitations and using this knowledge to design systems for safe and effective human use. With the assumption that the human component of any system will inevitably produce errors, human factors engineers design systems and human/machine interfaces that are robust enough to reduce error rates and the effect of the inevitable error within the system. In this article, we review the extent and nature of medical error and then discuss human factors engineering tools that have potential applicability. These tools include taxonomies of human and system error and error data collection and analysis methods. Finally, we describe studies that have examined medical error, and on the basis of these studies, present conclusions about how human factors engineering can significantly reduce medical errors and their effects.

  20. Tropical errors and convection

    NASA Astrophysics Data System (ADS)

    Bechtold, P.; Bauer, P.; Engelen, R. J.

    2012-12-01

    Tropical convection is analysed in the ECMWF Integrated Forecast System (IFS) through tropical errors and their evolution during the last decade as a function of model resolution and model changes. As the characterization of these errors is particularly difficult over tropical oceans due to sparse in situ upper-air data, more weight compared to the middle latitudes is given in the analysis to the underlying forecast model. Therefore, special attention is paid to available near-surface observations and to comparison with analyses from other centers. There is a systematic lack of low-level wind convergence in the Intertropical Convergence Zone (ITCZ) in the IFS, leading to a spindown of the Hadley cell. Critical areas with strong cross-equatorial flow and large wind errors are the Indian Ocean with large interannual variations in forecast errors, and the East Pacific with persistent systematic errors that have evolved little during the last decade. The analysis quality in the East Pacific is affected by observation errors inherent to the atmospheric motion vector wind product. The model's tropical climate and its variability and teleconnections are also evaluated, with a particular focus on the Madden-Julian Oscillation (MJO) during the Year of Tropical Convection (YOTC). The model is shown to reproduce the observed tropical large-scale wave spectra and teleconnections, but overestimates the precipitation during the South-East Asian summer monsoon. The recent improvements in tropical precipitation, convectively coupled wave and MJO predictability are shown to be strongly related to improvements in the convection parameterization that realistically represents the convection sensitivity to environmental moisture, and the large-scale forcing due to the use of strong entrainment and a variable adjustment time-scale. There is however a remaining slight moistening tendency and low-level wind imbalance in the model that is responsible for the Asian Monsoon bias and for too

  1. Speech Errors across the Lifespan

    ERIC Educational Resources Information Center

    Vousden, Janet I.; Maylor, Elizabeth A.

    2006-01-01

    Dell, Burger, and Svec (1997) proposed that the proportion of speech errors classified as anticipations (e.g., "moot and mouth") can be predicted solely from the overall error rate, such that the greater the error rate, the lower the anticipatory proportion (AP) of errors. We report a study examining whether this effect applies to changes in error…

  2. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  3. Marking Errors: A Simple Strategy

    ERIC Educational Resources Information Center

    Timmons, Theresa Cullen

    1987-01-01

    Indicates that using highlighters to mark errors produced a 76% class improvement in removing comma errors and a 95.5% improvement in removing apostrophe errors. Outlines two teaching procedures, to be followed before introducing this tool to the class, that enable students to remove errors at this effective rate. (JD)

  4. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
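
    A minimal interval type conveying the flavor of the technique (the paper itself uses INTLAB): worst-case bounds are carried through a formula automatically, with no manual error-propagation algebra.

        class Interval:
            """Tracks worst-case lower/upper bounds through arithmetic."""
            def __init__(self, lo, hi):
                self.lo, self.hi = lo, hi

            def __add__(self, other):
                return Interval(self.lo + other.lo, self.hi + other.hi)

            def __mul__(self, other):
                ps = [self.lo * other.lo, self.lo * other.hi,
                      self.hi * other.lo, self.hi * other.hi]
                return Interval(min(ps), max(ps))

            def __repr__(self):
                return f"[{self.lo:.6g}, {self.hi:.6g}]"

        # Measurements with uncertainties: x = 2.00 +/- 0.01, y = 3.00 +/- 0.02.
        x = Interval(1.99, 2.01)
        y = Interval(2.98, 3.02)
        print(x * y + x)   # guaranteed bounds on the expression x*y + x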

  5. Neural Correlates of Reach Errors

    PubMed Central

    Hashambhoy, Yasmin; Rane, Tushar; Shadmehr, Reza

    2005-01-01

    Reach errors may be broadly classified into errors arising from unpredictable changes in target location, called target errors, and errors arising from miscalibration of internal models, called execution errors. Execution errors may be caused by miscalibration of dynamics (e.g., when a force field alters limb dynamics) or by miscalibration of kinematics (e.g., when prisms alter visual feedback). While all types of errors lead to similar online corrections, we found that the motor system showed strong trial-by-trial adaptation in response to random execution errors but not in response to random target errors. We used fMRI and a compatible robot to study brain regions involved in processing each kind of error. Both kinematic and dynamic execution errors activated regions along the central and the post-central sulci and in lobules V, VI, and VIII of the cerebellum, making these areas possible sites of plastic changes in internal models for reaching. Only activity related to kinematic errors extended into parietal area 5. These results are inconsistent with the idea that kinematics and dynamics of reaching are computed in separate neural entities. In contrast, only target errors caused increased activity in the striatum and the posterior superior parietal lobule. The cerebellum and motor cortex were as strongly activated as with execution errors. These findings indicate a neural and behavioral dissociation between errors that lead to switching of behavioral goals, and errors that lead to adaptation of internal models of limb dynamics and kinematics.

  6. A Simple Approach to Experimental Errors

    ERIC Educational Resources Information Center

    Phillips, M. D.

    1972-01-01

    Classifies experimental error into two main groups: systematic error (instrument, personal, inherent, and variational errors) and random errors (reading and setting errors) and presents mathematical treatments for the determination of random errors. (PR)

  7. Modular error embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
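
    In the spirit of the patent (a generic sketch, not the claimed algorithm): rather than overwriting the low-order bits, the host value is nudged to the nearest value congruent to the payload symbol, so the embedding error stays within half the modulus.

        def embed_modular(host, symbol, m=8):
            """Nudge `host` to the nearest value v with v % m == symbol;
            the introduced error is at most m // 2, smaller on average
            than plain low-order-bit replacement."""
            delta = (symbol - host) % m
            if delta > m // 2:
                delta -= m            # step down instead of up if closer
            return host + delta

        def extract_modular(stego, m=8):
            return stego % m          # recover the embedded symbol

        pixel = 200
        marked = embed_modular(pixel, symbol=5)
        assert extract_modular(marked) == 5
        print(pixel, "->", marked, "(error", marked - pixel, ")")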

  8. Error-Free Software

    NASA Technical Reports Server (NTRS)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  9. Enteral feeding pumps: efficacy, safety, and patient acceptability

    PubMed Central

    White, Helen; King, Linsey

    2014-01-01

    Enteral feeding is a long established practice across pediatric and adult populations, to enhance nutritional intake and prevent malnutrition. Despite recognition of the importance of nutrition within the modern health agenda, evaluation of the efficacy of how such feeds are delivered is more limited. The accuracy, safety, and consistency with which enteral feed pump systems dispense nutritional formulae are important determinants of their use and acceptability. Enteral feed pump safety has received increased interest in recent years as enteral pumps are used across hospital and home settings. Four areas of enteral feed pump safety have emerged: the consistent and accurate delivery of formula; the minimization of errors associated with tube misconnection; the impact of continuous feed delivery itself (via an enteral feed pump); and the chemical composition of the casing used in enteral feed pump manufacture. The daily use of pumps in delivery of enteral feeds in a home setting predominantly falls to the hands of parents and caregivers. Their understanding of the use and function of their pump is necessary to ensure appropriate, safe, and accurate delivery of enteral nutrition; their experience with this is important in informing clinicians and manufacturers of the emerging needs and requirements of this diverse patient population. The review highlights current practice and areas of concern and establishes our current knowledge in this field.

  10. The Relative Frequency of Spanish Pronunciation Errors.

    ERIC Educational Resources Information Center

    Hammerly, Hector

    Types of hierarchies of pronunciation difficulty are discussed, and a hierarchy based on contrastive analysis plus informal observation is proposed. This hierarchy is less one of initial difficulty than of error persistence. One feature of this hierarchy is that, because of lesser learner awareness and very limited functional load, errors…

  11. [The notion and classification of expert errors].

    PubMed

    Klevno, V A

    2012-01-01

    The author presents an analysis of the legal and forensic medical literature concerning currently accepted concepts and the classification of expert malpractice. He proposes a new, easy-to-remember definition of expert error and considers the classification of such mistakes. The analysis of cases of erroneous application of the medical criteria for estimation of the harm to health made it possible to reveal and systematize the causes accounting for the cases of expert malpractice committed by forensic medical experts and health providers when determining the degree of harm to human health.

  12. Type I error control for tree classification.

    PubMed

    Jung, Sin-Ho; Chen, Yong; Ahn, Hongshik

    2014-01-01

    Binary tree classification has been useful for classifying a whole population based on the levels of an outcome variable that is associated with chosen predictors. Often we start a classification with a large number of candidate predictors, and each predictor takes a number of different cutoff values. Because of these types of multiplicity, the binary tree classification method is subject to a severely inflated type I error probability. Nonetheless, there have not been many publications addressing this issue. In this paper, we propose a binary tree classification method that controls the probability of falsely accepting a predictor at or below a certain level, say 5%.

  13. Variation transmission model for setting acceptance criteria in a multi-staged pharmaceutical manufacturing process.

    PubMed

    Montes, Richard O

    2012-03-01

    Pharmaceutical manufacturing processes consist of a series of stages (e.g., reaction, workup, isolation) to generate the active pharmaceutical ingredient (API). Outputs at intermediate stages (in-process control) and API need to be controlled within acceptance criteria to assure final drug product quality. In this paper, two methods based on tolerance interval to derive such acceptance criteria will be evaluated. The first method is serial worst case (SWC), an industry risk minimization strategy, wherein input materials and process parameters of a stage are fixed at their worst-case settings to calculate the maximum level expected from the stage. This maximum output then becomes input to the next stage wherein process parameters are again fixed at worst-case setting. The procedure is serially repeated throughout the process until the final stage. The calculated limits using SWC can be artificially high and may not reflect the actual process performance. The second method is the variation transmission (VT) using autoregressive model, wherein variation transmitted up to a stage is estimated by accounting for the recursive structure of the errors at each stage. Computer simulations at varying extent of variation transmission and process stage variability are performed. For the scenarios tested, VT method is demonstrated to better maintain the simulated confidence level and more precisely estimate the true proportion parameter than SWC. Real data examples are also presented that corroborate the findings from the simulation. Overall, VT is recommended for setting acceptance criteria in a multi-staged pharmaceutical manufacturing process.
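
    A hedged sketch of the variation-transmission idea under an assumed first-order autoregressive structure: the variance transmitted to the final stage is computed analytically and checked by simulation. Transmission coefficients, stage targets, and error SDs are illustrative only.

        import numpy as np

        rng = np.random.default_rng(7)

        phi = [0.6, 0.5]          # assumed stage-to-stage transmission
        sigma = [1.0, 0.8, 0.5]   # assumed stage-specific error SDs
        n = 100_000

        # Three stages: each output carries a fraction of the upstream
        # deviation plus its own error (the recursive error structure).
        s1 = 10.0 + rng.normal(0, sigma[0], n)
        s2 = 5.0 + phi[0] * (s1 - 10.0) + rng.normal(0, sigma[1], n)
        s3 = 2.0 + phi[1] * (s2 - 5.0) + rng.normal(0, sigma[2], n)

        var_out = ((phi[0] * phi[1]) ** 2 * sigma[0] ** 2
                   + phi[1] ** 2 * sigma[1] ** 2 + sigma[2] ** 2)
        print("analytic SD:", var_out ** 0.5, " simulated SD:", s3.std())

        # Acceptance limits set from this transmitted variance track actual
        # performance; serial worst case would fix each input at its extreme
        # and produce wider, often unrealistically conservative limits.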

  14. Cone penetrometer acceptance test report

    SciTech Connect

    Boechler, G.N.

    1996-09-19

    This Acceptance Test Report (ATR) documents the results of acceptance test procedure WHC-SD-WM-ATR-151. Included in this report are a summary of the tests, the results and issues, the signature and sign-off ATP pages, and a summary table mapping each specification to the ATP section that satisfied it.

  15. We need to talk about error: causes and types of error in veterinary practice.

    PubMed

    Oxtoby, C; Ferguson, E; White, K; Mossop, L

    2015-10-31

    Patient safety research in human medicine has identified the causes and common types of medical error and subsequently informed the development of interventions which mitigate harm, such as the WHO's safe surgery checklist. There is no such evidence available to the veterinary profession. This study therefore aims to identify the causes and types of errors in veterinary practice, and presents an evidence-based system for their classification. Causes of error were identified from a retrospective record review of 678 claims to the profession's leading indemnity insurer, and nine focus groups (average N per group = 8) with vets, nurses and support staff were conducted using the critical incident technique. Reason's (2000) Swiss cheese model of error was used to inform the interpretation of the data. Types of error were extracted from 2978 claims records reported between the years 2009 and 2013. The major classes of error causation were identified, with mistakes involving surgery the most common type of error. The results were triangulated with findings from the medical literature and highlight the importance of cognitive limitations, deficiencies in non-technical skills, and a systems approach to veterinary error.

  17. Dissolution test acceptance sampling plans.

    PubMed

    Tsong, Y; Hammerstrom, T; Lin, K; Ong, T E

    1995-07-01

    The U.S. Pharmacopeia (USP) general monograph provides a standard for dissolution compliance with the requirements as stated in the individual USP monograph for a tablet or capsule dosage form. The acceptance rules recommended by the USP play an important role in the quality control process. The USP rules and their modifications are often used as an industrial lot release sampling plan, where a lot is accepted when the tablets or capsules sampled comply with the requirement. In this paper, the operating characteristics of the USP acceptance rules are reviewed and compared to a selected modification. The operating characteristic curves show that the USP acceptance rules are sensitive to the true mean dissolution but do not reject a lot or batch that has a large percentage of tablets dissolving below the dissolution specification.
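
    To illustrate an operating characteristic curve, the sketch below pushes simulated lots through three-stage acceptance rules in the style of the USP dissolution test. The staged rules are paraphrased from memory and the lot parameters are invented, so treat this as a schematic rather than the monograph's exact procedure.

        import numpy as np

        rng = np.random.default_rng(3)

        def usp_style_accept(mu, sd, q=80.0):
            """One simulated lot through staged rules resembling USP <711>."""
            u = rng.normal(mu, sd, 24)                 # % dissolved, 24 units
            if (u[:6] >= q + 5).all():                 # stage 1: 6 units
                return True
            if u[:12].mean() >= q and (u[:12] >= q - 15).all():   # stage 2
                return True
            return (u.mean() >= q                      # stage 3: all 24 units
                    and (u < q - 15).sum() <= 2
                    and (u >= q - 25).all())

        # Operating characteristic: acceptance probability vs. true mean.
        for mu in (78, 82, 86, 90):
            p = np.mean([usp_style_accept(mu, sd=6.0) for _ in range(5000)])
            print(f"true mean {mu}%: P(accept) = {p:.3f}")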

  18. GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS

    EPA Science Inventory

    Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...

  19. The Location of Error: Reflections on a Research Project

    ERIC Educational Resources Information Center

    Cook, Devan

    2010-01-01

    Andrea Lunsford and Karen Lunsford conclude "Mistakes Are a Fact of Life: A National Comparative Study," a discussion of their research project exploring patterns of formal grammar and usage error in first-year writing, with an invitation to "conduct a local version of this study." The author was eager to accept their invitation; learning and…

  20. Errors and mistakes in breast ultrasound diagnostics.

    PubMed

    Jakubowski, Wiesław; Dobruch-Sobczak, Katarzyna; Migda, Bartosz

    2012-09-01

    Sonomammography is often the first additional examination performed in the diagnostics of breast diseases. The development of ultrasound imaging techniques, particularly the introduction of high-frequency transducers, matrix transducers, harmonic imaging and, finally, elastography, has improved the diagnostics of breast disease. Nevertheless, as in each imaging method, there are errors and mistakes resulting from the technical limitations of the method, breast anatomy (fibrous remodeling), and insufficient sensitivity and, in particular, specificity. Errors in breast ultrasound diagnostics can be divided into those impossible to avoid and those that can potentially be reduced. In this article the most frequently made errors in ultrasound have been presented, including the ones caused by the presence of artifacts resulting from volumetric averaging in the near and far field, artifacts in cysts or in dilated lactiferous ducts (reverberations, comet tail artifacts, lateral beam artifacts), and improper settings of overall gain, the time-gain curve, or the depth range. Errors dependent on the examiner, resulting in a wrong BIRADS-usg classification, are divided into negative and positive errors, and the sources of these errors have been listed. The methods of minimizing the number of errors made have been discussed, including those related to appropriate examination technique, taking into account data from the case history, and using the greatest possible number of additional options such as harmonic imaging, color and power Doppler, and elastography. Examples of errors resulting from the technical conditions of the method have been presented, as well as those dependent on the examiner, which are related to the great diversity and variation of ultrasound images of pathological breast lesions.

  1. Quantum Error Correction with Biased Noise

    NASA Astrophysics Data System (ADS)

    Brooks, Peter

    Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security. At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level. In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations. In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction. In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates, and a second rate for errors in the distilled

  2. Laser Phase Errors in Seeded FELs

    SciTech Connect

    Ratner, D.; Fry, A.; Stupakov, G.; White, W.; /SLAC

    2012-03-28

    Harmonic seeding of free electron lasers has attracted significant attention from the promise of transform-limited pulses in the soft X-ray region. Harmonic multiplication schemes extend seeding to shorter wavelengths, but also amplify the spectral phase errors of the initial seed laser, and may degrade the pulse quality. In this paper we consider the effect of seed laser phase errors in high gain harmonic generation and echo-enabled harmonic generation. We use simulations to confirm analytical results for the case of linearly chirped seed lasers, and extend the results for arbitrary seed laser envelope and phase.

  3. Human error and the search for blame

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1989-01-01

    Human error is a frequent topic in discussions about risks in using computer systems. A rational analysis of human error leads through the consideration of mistakes to standards that designers use to avoid mistakes that lead to known breakdowns. The irrational side, however, is more interesting. It conditions people to think that breakdowns are inherently wrong and that there is ultimately someone who is responsible. This leads to a search for someone to blame which diverts attention from: learning from the mistakes; seeing the limitations of current engineering methodology; and improving the discourse of design.

  4. Extending the Technology Acceptance Model: Policy Acceptance Model (PAM)

    NASA Astrophysics Data System (ADS)

    Pierce, Tamra

    There has been extensive research on how new ideas and technologies are accepted in society. This has resulted in the creation of many models that are used to discover and assess the contributing factors. The Technology Acceptance Model (TAM) is one that is a widely accepted model. This model examines people's acceptance of new technologies based on variables that directly correlate to how the end user views the product. This paper introduces the Policy Acceptance Model (PAM), an expansion of TAM, which is designed for the analysis and evaluation of acceptance of new policy implementation. PAM includes the traditional constructs of TAM and adds the variables of age, ethnicity, and family. The model is demonstrated using a survey of people's attitude toward the upcoming healthcare reform in the United States (US) from 72 survey respondents. The aim is that the theory behind this model can be used as a framework that will be applicable to studies looking at the introduction of any new or modified policies.

  5. Characterization of the error budget of Alba-NOM

    NASA Astrophysics Data System (ADS)

    Nicolas, Josep; Martínez, Juan Carlos

    2013-05-01

    The Alba-NOM instrument is a high-accuracy scanning machine capable of measuring the slope profile of long mirrors with resolution below the nanometer scale and for a wide range of curvatures. We present the characterization of the different sources of error that contribute to the uncertainty of the instrument. We have investigated three main contributions to the uncertainty of the measurements: errors introduced by the scanning system and the pentaprism, errors due to environmental conditions, and optical errors of the autocollimator. These sources of error have been investigated by measuring the corresponding motion errors with a high-accuracy differential interferometer and by simulating their impact on the measurements by means of ray-tracing. Optical error contributions have been extracted from the analysis of redundant measurements of test surfaces. The methods and results are presented, as well as an example of an application that has benefited from the achieved accuracy.
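
    Once the individual contributions are characterized, independent terms combine in quadrature. A minimal sketch (Python; the numbers are placeholders, not the instrument's published budget):

      import math

      budget_nrad = {              # hypothetical slope-error terms, nrad RMS
          "scanning system and pentaprism": 40.0,
          "environmental conditions": 25.0,
          "autocollimator optics": 50.0,
      }
      total = math.sqrt(sum(v**2 for v in budget_nrad.values()))
      print(f"combined uncertainty: {total:.1f} nrad RMS")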

  6. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
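
    The covariance point can be reproduced with a small propagation-of-error sketch (Python; the intake/output statistics are invented for illustration): for a balance B = intake - output, var(B) = var(intake) + var(output) - 2 cov(intake, output), and the last term is the "interaction" contribution the abstract reports as under 10%.

      import numpy as np

      rng = np.random.default_rng(0)
      intake = rng.normal(2700.0, 120.0, 10_000)           # assumed g/day
      output = 0.2 * intake + rng.normal(2160.0, 150.0, 10_000)
      balance = intake - output

      independent_part = intake.var() + output.var()
      covariance_part = -2.0 * np.cov(intake, output)[0, 1]
      print(balance.var(), independent_part + covariance_part)  # these agree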

  7. [Dealing with errors in medicine].

    PubMed

    Schoenenberger, R A; Perruchoud, A P

    1998-12-24

    Iatrogenic disease is probably more commonly than assumed the consequence of errors and mistakes committed by physicians and other medical personnel. Traditionally, strategies to prevent errors in medicine focus on inspection and rely on the professional ethos of health care personnel. The increasingly complex nature of medical practice and the multitude of interventions that each patient receives increase the likelihood of error. More efficient approaches to dealing with errors have been developed. The methods include routine identification of errors (critical incident reporting), systematic monitoring of multiple-step processes in medical practice, system analysis, and system redesign. A search for underlying causes of errors (rather than distal causes) will enable organizations to learn collectively without denying the inevitable occurrence of human error. Errors and mistakes may become precious chances to increase the quality of medical care.

  8. How to limit clinical errors in interpretation of data.

    PubMed

    Wright, P; Jansen, C; Wyatt, J C

    1998-11-01

    We all assume that we can understand and correctly interpret what we read. However, interpretation is a collection of subtle processes that are easily influenced by poor presentation or wording of information. This article examines how evidence-based principles of information design can be applied to medical records to enhance clinical understanding and accuracy in interpretation of the detailed data that they contain.

  9. Error suppression and error correction in adiabatic quantum computation: non-equilibrium dynamics

    NASA Astrophysics Data System (ADS)

    Sarovar, Mohan; Young, Kevin C.

    2013-12-01

    While adiabatic quantum computing (AQC) has some robustness to noise and decoherence, it is widely believed that encoding, error suppression and error correction will be required to scale AQC to large problem sizes. Previous works have established at least two different techniques for error suppression in AQC. In this paper we derive a model for describing the dynamics of encoded AQC and show that previous constructions for error suppression can be unified with this dynamical model. In addition, the model clarifies the mechanisms of error suppression and allows the identification of its weaknesses. In the second half of the paper, we utilize our description of non-equilibrium dynamics in encoded AQC to construct methods for error correction in AQC by cooling local degrees of freedom (qubits). While this is shown to be possible in principle, we also identify the key challenge to this approach: the requirement of high-weight Hamiltonians. Finally, we use our dynamical model to perform a simplified thermal stability analysis of concatenated-stabilizer-code encoded many-body systems for AQC or quantum memories. This work is a companion paper to ‘Error suppression and error correction in adiabatic quantum computation: techniques and challenges (2013 Phys. Rev. X 3 041013)’, which provides a quantum information perspective on the techniques and limitations of error suppression and correction in AQC. In this paper we couch the same results within a dynamical framework, which allows for a detailed analysis of the non-equilibrium dynamics of error suppression and correction in encoded AQC.

  10. Error Sources in Asteroid Astrometry

    NASA Technical Reports Server (NTRS)

    Owen, William M., Jr.

    2000-01-01

    Asteroid astrometry, like any other scientific measurement process, is subject to both random and systematic errors, not all of which are under the observer's control. To design an astrometric observing program or to improve an existing one requires knowledge of the various sources of error, how different errors affect one's results, and how various errors may be minimized by careful observation or data reduction techniques.

  11. Reducing nurse medicine administration errors.

    PubMed

    Ofosu, Rose; Jarrett, Patricia

    Errors in administering medicines are common and can compromise the safety of patients. This review discusses the causes of drug administration error in hospitals by student and registered nurses, and the practical measures educators and hospitals can take to improve nurses' knowledge and skills in medicines management, and reduce drug errors.

  12. Error Bounds for Interpolative Approximations.

    ERIC Educational Resources Information Center

    Gal-Ezer, J.; Zwas, G.

    1990-01-01

    Elementary error estimation in the approximation of functions by polynomials as a computational assignment, error-bounding functions and error bounds, and the choice of interpolation points are discussed. Precalculus and computer instruction are used on some of the calculations. (KR)
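
    For a concrete instance of such an error bound (Python; the function, interval, and node count are choices made here, not taken from the article), compare the actual interpolation error for f = sin with the classical bound max|f^(n+1)|/(n+1)! x max|prod(x - x_i)|:

      import math
      import numpy as np

      nodes = np.linspace(0.0, 2.0, 5)                  # 5 nodes, degree-4 polynomial
      coeffs = np.polyfit(nodes, np.sin(nodes), deg=4)  # interpolating polynomial
      x = np.linspace(0.0, 2.0, 2001)

      actual = np.max(np.abs(np.sin(x) - np.polyval(coeffs, x)))
      omega = np.max(np.abs(np.prod([x - xi for xi in nodes], axis=0)))
      bound = omega / math.factorial(5)                 # |sin^(5)| <= 1 everywhere
      print(actual, bound)                              # actual never exceeds bound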

  13. L-286 Acceptance Test Record

    SciTech Connect

    HARMON, B.C.

    2000-01-14

    This document provides a detailed account of how the acceptance testing was conducted for Project L-286, "200E Area Sanitary Water Plant Effluent Stream Reduction". The testing of the L-286 instrumentation system was conducted under direct supervision.

  14. Accepted scientific research works (abstracts).

    PubMed

    2014-01-01

    These are the 39 accepted abstracts for IAYT's Symposium on Yoga Research (SYR) September 24-24, 2014 at the Kripalu Center for Yoga & Health and published in the Final Program Guide and Abstracts. PMID:25645134

  15. Preventing Communication Errors in Telephone Medicine

    PubMed Central

    Reisman, Anna B; Brown, Karen E

    2005-01-01

    Errors in telephone communication can result in outcomes ranging from inconvenience and anxiety to serious compromises in patient safety. Although 25% of interactions between physicians and patients take place on the telephone, little has been written about telephone communication and medical mishaps. Similarly, training in telephone medicine skills is limited; only 6% of residency programs teach any aspect of telephone medicine. Increasing familiarity with common telephone challenges with patients may help physicians decrease the likelihood of negative outcomes. We use case vignettes to highlight communication errors in common telephone scenarios. These scenarios include giving sensitive test results, requests for narcotics, managing ill patients who are not sick enough for the emergency room, dealing with late-night calls, communicating with unintelligible patients, and handling calls from family members. We provide management strategies to minimize the occurrence of these errors. PMID:16191150

  16. Analysis of error-correction constraints in an optical disk.

    PubMed

    Roberts, J D; Ryley, A; Jones, D M; Burke, D

    1996-07-10

    The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check. PMID:21102793
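
    The protective role of interleaving against bursts can be shown without a full Reed-Solomon decoder (Python; the code length, interleaver depth, and burst position are illustrative, and the decoder is abstracted as "corrects up to t symbol errors per codeword"):

      n, depth, t = 28, 4, 2          # codeword length, interleaver depth, RS-like t
      burst_len, start = 7, 13        # channel burst (symbols) and its position

      # A depth x n block interleaver transmits column by column, so consecutive
      # channel symbols belong to different codewords (rows).
      tx_order = [(row, col) for col in range(n) for row in range(depth)]

      hits = [0] * depth
      for k in range(start, start + burst_len):
          row, _ = tx_order[k]
          hits[row] += 1              # burst errors landing in each codeword

      print(hits)                                   # [1, 2, 2, 2]
      print("all correctable:", max(hits) <= t)     # True with interleaving
      print("correctable without interleaving:", burst_len <= t)  # False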

  17. Beta systems error analysis

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous-wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was simultaneously employed. The results of these two methods differed by slightly less than an order of magnitude. The measurement uncertainties or other errors in the results of the two methods are examined.

  18. Errors inducing radiation overdoses.

    PubMed

    Grammaticos, Philip C

    2013-01-01

    There is no doubt that equipment that delivers radiation for therapeutic purposes should be checked often for the possibility of administering radiation overdoses to patients. Technologists, radiation safety officers, radiologists, medical physicists, healthcare providers and administrators should take proper care of this issue. "We must be beneficial and not harmful to the patients", according to the Hippocratic doctrine. A series of cases of radiation overdose has recently been reported, and the doctors who were responsible received heavy punishments. It is much better to prevent than to treat an error or a disease. A Personal Smart Card or Score Card has been suggested for every patient undergoing therapeutic and/or diagnostic procedures that use radiation. Taxonomy may also help. PMID:24251304

  19. The physiological basis for spacecraft environmental limits

    NASA Technical Reports Server (NTRS)

    Waligora, J. M. (Compiler)

    1979-01-01

    Limits for operational environments are discussed in terms of acceptable physiological changes. The environmental factors considered are pressure, contaminants, temperature, acceleration, noise, rf radiation, and weightlessness.

  20. Measuring Cyclic Error in Laser Heterodyne Interferometers

    NASA Technical Reports Server (NTRS)

    Ryan, Daniel; Abramovici, Alexander; Zhao, Feng; Dekens, Frank; An, Xin; Azizi, Alireza; Chapsky, Jacob; Halverson, Peter

    2010-01-01

    An improved method and apparatus have been devised for measuring cyclic errors in the readouts of laser heterodyne interferometers that are configured and operated as displacement gauges. The cyclic errors arise as a consequence of mixing of spurious optical and electrical signals in beam launchers that are subsystems of such interferometers. The conventional approach to measurement of cyclic error involves phase measurements and yields values precise to within about 10 pm over air optical paths at laser wavelengths in the visible and near infrared. The present approach, which involves amplitude measurements instead of phase measurements, yields values precise to about 0.1 pm, about 100 times the precision of the conventional approach. In a displacement gauge of the type of interest here, the laser heterodyne interferometer is used to measure any change in distance along an optical axis between two corner-cube retroreflectors. One of the corner-cube retroreflectors is mounted on a piezoelectric transducer, which is used to introduce a low-frequency periodic displacement that can be measured by the gauges. The transducer is excited at a frequency of 9 Hz by a triangular waveform to generate a 9-Hz triangular-wave displacement having an amplitude of 25 microns. The displacement gives rise to both amplitude and phase modulation of the heterodyne signals in the gauges. The modulation includes cyclic error components, and the magnitude of the cyclic-error component of the phase modulation is what one needs to measure in order to determine the magnitude of the cyclic displacement error.

  1. Register file soft error recovery

    SciTech Connect

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
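
    A minimal software analogue of the scheme (Python; the parity check and register width are assumptions, since the record describes hardware circuitry) keeps a mirror copy and repairs a corrupted read before it reaches the pipeline:

      class MirroredRegisterFile:
          def __init__(self, size):
              self.primary = [0] * size
              self.mirror = [0] * size
              self.parity = [0] * size              # even parity per register

          def write(self, idx, value):
              self.primary[idx] = value
              self.mirror[idx] = value
              self.parity[idx] = bin(value).count("1") & 1

          def read(self, idx):
              value = self.primary[idx]
              if bin(value).count("1") & 1 != self.parity[idx]:
                  # Detected soft error: the recovery step replaces the
                  # corrupted entry with the mirror copy.
                  value = self.mirror[idx]
                  self.primary[idx] = value
              return value

      rf = MirroredRegisterFile(32)
      rf.write(5, 0b1011)
      rf.primary[5] ^= 0b0100                       # inject a single bit flip
      print(rf.read(5) == 0b1011)                   # True: repaired from mirror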

  2. Rapid mapping of volumetric errors

    SciTech Connect

    Krulewich, D.; Hale, L.; Yordy, D.

    1995-09-13

    This paper describes a relatively inexpensive, fast, and easy to execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on length measurements throughout the work volume; and (3) optimizing the model to the particular machine.
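
    Step (3) can be illustrated with an ordinary least-squares fit (Python; the one-axis error model and synthetic data below are assumptions, the paper's model is more general):

      import numpy as np

      rng = np.random.default_rng(1)
      x = rng.uniform(0.0, 500.0, 40)               # positions along one axis, mm
      true_offset, true_scale = 2.0e-3, 25e-6       # assumed systematic errors
      measured = true_offset + true_scale * x + rng.normal(0.0, 1e-3, x.size)

      A = np.column_stack([np.ones_like(x), x])     # model: error = a + b*x
      (a, b), *_ = np.linalg.lstsq(A, measured, rcond=None)
      print(a, b)    # recovered offset and scale terms, usable for compensation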

  3. Spin glasses and error-correcting codes

    NASA Technical Reports Server (NTRS)

    Belongie, M. L.

    1994-01-01

    In this article, we study a model for error-correcting codes that comes from spin glass theory and leads to both new codes and a new decoding technique. Using the theory of spin glasses, it has been proven that a simple construction yields a family of binary codes whose performance asymptotically approaches the Shannon bound for the Gaussian channel. The limit is approached as the number of information bits per codeword approaches infinity while the rate of the code approaches zero. Thus, the codes rapidly become impractical. We present simulation results that show the performance of a few manageable examples of these codes. In the correspondence that exists between spin glasses and error-correcting codes, the concept of a thermal average leads to a method of decoding that differs from the standard method of finding the most likely information sequence for a given received codeword. Whereas the standard method corresponds to calculating the thermal average at temperature zero, calculating the thermal average at a certain optimum temperature results instead in the sequence of most likely information bits. Since linear block codes and convolutional codes can be viewed as examples of spin glasses, this new decoding method can be used to decode these codes in a way that minimizes the bit error rate instead of the codeword error rate. We present simulation results that show a small improvement in bit error rate by using the thermal average technique.
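
    The decoding distinction can be demonstrated by brute force on a tiny code (Python; the generator matrix, received word, and temperature are invented for illustration): zero-temperature decoding returns the single closest codeword, while the thermal average weights every codeword and decides each bit from its marginal.

      import itertools
      import numpy as np

      G = np.array([[1, 0, 1, 1, 0],                 # hypothetical [5,2] code
                    [0, 1, 0, 1, 1]])
      codewords = np.array([(m @ G) % 2
                            for m in itertools.product([0, 1], repeat=2)])

      received = np.array([1, 0, 1, 0, 1])           # assumed channel output
      dist = np.sum(codewords != received, axis=1)   # Hamming distances

      ml_codeword = codewords[np.argmin(dist)]       # temperature-zero decoding

      beta = 1.2                                     # inverse temperature
      weights = np.exp(-beta * dist)
      marginals = weights @ codewords / weights.sum()  # P(bit = 1) per position
      bitwise = (marginals > 0.5).astype(int)        # minimizes bit error rate
      print(ml_codeword, bitwise)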

  4. Entanglement-assisted zero-error codes

    NASA Astrophysics Data System (ADS)

    Matthews, William; Mancinska, Laura; Leung, Debbie; Ozols, Maris; Roy, Aidan

    2011-03-01

    Zero-error information theory studies the transmission of data over noisy communication channels with strictly zero error probability. For classical channels and data, much of the theory can be studied in terms of combinatorial graph properties and is a source of hard open problems in that domain. In recent work, we investigated how entanglement between sender and receiver can be used in this task. We found that entanglement-assisted zero-error codes (which are still naturally studied in terms of graphs) sometimes offer an increased bit rate of zero-error communication even in the large block length limit. The assisted codes that we have constructed are closely related to Kochen-Specker proofs of non-contextuality as studied in the context of foundational physics, and our results on asymptotic rates of assisted zero-error communication yield non-contextuality proofs which are particularly 'strong' in a certain quantitative sense. I will also describe formal connections to the multi-prover games known as pseudo-telepathy games.
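
    For intuition on the unassisted baseline, one-shot zero-error codes are independent sets in the confusability graph; below is a brute-force sketch (Python) for Shannon's pentagon channel, which is standard background rather than this paper's construction:

      import itertools

      n = 5
      edges = {(i, (i + 1) % n) for i in range(n)}   # 5-cycle confusability graph

      def independent(subset):
          return all((a, b) not in edges and (b, a) not in edges
                     for a, b in itertools.combinations(subset, 2))

      alpha = max(len(s) for r in range(n + 1)
                  for s in itertools.combinations(range(n), r)
                  if independent(s))
      print(alpha)    # 2, i.e. log2(2) = 1 zero-error bit per channel use

    The paper's point is that for some channels, shared entanglement raises the achievable zero-error rate beyond what such independent-set codes allow.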

  5. Position error propagation in the simplex strapdown navigation system

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The results of an analysis of the effects of deterministic error sources on position error in the simplex strapdown navigation system were documented. Improving the long term accuracy of the system was addressed in two phases: understanding and controlling the error within the system, and defining methods of damping the net system error through the use of an external reference velocity or position. Review of the flight and ground data revealed error containing the Schuler frequency as well as non-repeatable trends. The only unbounded terms are those involving gyro bias and azimuth error coupled with velocity. All forms of Schuler-periodic position error were found to be sufficiently large to require update or damping capability unless the source coefficients can be limited to values less than those used in this analysis for misalignment and gyro and accelerometer bias. The first-order effects of the deterministic error sources were determined with a simple error propagator which provided plots of error time functions in response to various source error values.

  6. Evaluating innovation. Part 1: The concept of progressive scholarly acceptance.

    PubMed

    Schnurman, Zane; Kondziolka, Douglas

    2016-01-01

    Understanding how the relevant medical community accepts new therapies is vital to patients, physicians, and society. Increasingly, focus is placed on how medical innovations are evaluated. But recognizing when a treatment has become accepted practice (essentially, acceptance by the scientific community) remains a challenge and a barrier to investigating treatment development. This report aims to demonstrate the theory, method, and limitations of a model for measuring a new metric that the authors term "progressive scholarly acceptance." A model was developed to identify when the scientific community has accepted an innovation, by observing when researchers have moved beyond the initial study of efficacy. This model could enable further investigations into the methods and influences of treatment development.

  7. Evaluating the acceptability of recreation rationing policies used on rivers

    NASA Astrophysics Data System (ADS)

    Wikle, Thomas A.

    1991-05-01

    Research shows that users and managers have different perceptions of acceptable policies that ration or limit recreational use on rivers. The acceptability of seven rationing policies was evaluated using Thurstone's method of paired comparisons, which provided a rank ordering of advance reservation, lottery, first-come/first-served, merit, priority for first time users, zoning, and price. Chi-squared tests were used to determine if users and managers have significantly different levels of acceptability for the policies. River users and managers were found to be significantly different according to their evaluation of advance reservation, zoning, and merit. The results also indicated that river users collectively divide the policies into three categories corresponding to high, moderate, and low levels of acceptability, while river managers divide the policies into two levels corresponding to acceptable and unacceptable.
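
    Thurstone's method can be sketched compactly (Python with scipy; the preference proportions below are invented, not the study's survey data): transform pairwise proportions to z-scores and average to obtain interval-scale values.

      import numpy as np
      from scipy.stats import norm

      # P[i, j] = proportion of respondents preferring policy j over policy i.
      P = np.array([[0.50, 0.65, 0.80],
                    [0.35, 0.50, 0.70],
                    [0.20, 0.30, 0.50]])

      Z = norm.ppf(P)                  # Case V: probit-transform the proportions
      scale = Z.mean(axis=0)           # scale value for each policy
      print(scale - scale.min())       # anchor the least acceptable policy at 0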

  8. Error, signal, and the placement of Ctenophora sister to all other animals.

    PubMed

    Whelan, Nathan V; Kocot, Kevin M; Moroz, Leonid L; Halanych, Kenneth M

    2015-05-01

    Elucidating relationships among early animal lineages has been difficult, and recent phylogenomic analyses place Ctenophora sister to all other extant animals, contrary to the traditional view of Porifera as the earliest-branching animal lineage. To date, phylogenetic support for either ctenophores or sponges as sister to other animals has been limited and inconsistent among studies. Lack of agreement among phylogenomic analyses using different data and methods obscures how complex traits, such as epithelia, neurons, and muscles evolved. A consensus view of animal evolution will not be accepted until datasets and methods converge on a single hypothesis of early metazoan relationships and putative sources of systematic error (e.g., long-branch attraction, compositional bias, poor model choice) are assessed. Here, we investigate possible causes of systematic error by expanding taxon sampling with eight novel transcriptomes, strictly enforcing orthology inference criteria, and progressively examining potential causes of systematic error while using both maximum-likelihood with robust data partitioning and Bayesian inference with a site-heterogeneous model. We identified ribosomal protein genes as possessing a conflicting signal compared with other genes, which caused some past studies to infer ctenophores and cnidarians as sister. Importantly, biases resulting from elevated compositional heterogeneity or elevated substitution rates are ruled out. Placement of ctenophores as sister to all other animals, and sponge monophyly, are strongly supported under multiple analyses, herein. PMID:25902535

  9. An Introduction to Error Analysis for Quantitative Chemistry

    ERIC Educational Resources Information Center

    Neman, R. L.

    1972-01-01

    Describes two formulas for calculating errors due to instrument limitations which are usually found in gravimetric and volumetric analysis and indicates their possible applications to other fields of science. (CC)

  10. How perioperative nurses define, attribute causes of, and react to intraoperative nursing errors.

    PubMed

    Chard, Robin

    2010-01-01

    Errors in nursing practice pose a continuing threat to patient safety. A descriptive, correlational study was conducted to examine the definitions, circumstances, and perceived causes of intraoperative nursing errors; reactions of perioperative nurses to intraoperative nursing errors; and the relationships among coping with intraoperative nursing errors, emotional distress, and changes in practice made as a result of error. The results indicate that strategies of accepting responsibility and using self-control are significant predictors of emotional distress. Seeking social support and planful problem solving emerged as significant predictors of constructive changes in practice. Most predictive of defensive changes was the strategy of escape/avoidance.

  11. Modeling error analysis of stationary linear discrete-time filters

    NASA Technical Reports Server (NTRS)

    Patel, R.; Toda, M.

    1977-01-01

    The performance of Kalman-type, linear, discrete-time filters in the presence of modeling errors is considered. The discussion is limited to stationary performance, and bounds are obtained for the performance index, the mean-squared error of estimates for suboptimal and optimal (Kalman) filters. The computation of these bounds requires information on only the model matrices and the range of errors for these matrices. Consequently, a designer can easily compare the performance of a suboptimal filter with that of the optimal filter, when only the range of errors in the elements of the model matrices is available.
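
    A scalar illustration of the comparison (Python; the system and mismatch are assumptions, the paper derives general matrix bounds): design a steady-state gain from a possibly wrong model, then evaluate its mean-squared error on the true system.

      def steady_state_mse(a_design, a_true, q=1.0, r=1.0, steps=500):
          """x' = a*x + w, y = x + v; filter with a fixed steady-state gain."""
          p = 1.0                                  # design the gain
          for _ in range(steps):
              p_pred = a_design**2 * p + q
              k = p_pred / (p_pred + r)
              p = (1.0 - k) * p_pred
          p = 1.0                                  # evaluate it on the true system
          for _ in range(steps):
              p_pred = a_true**2 * p + q
              p = (1.0 - k)**2 * p_pred + k**2 * r
          return p

      print(steady_state_mse(0.95, 0.95))          # matched (Kalman) filter
      print(steady_state_mse(0.80, 0.95))          # mismatched (suboptimal) filter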

  12. Errors in clinical laboratories or errors in laboratory medicine?

    PubMed

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  13. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
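
    The optional erosion step can be sketched with standard image-morphology tools (Python with scipy; the grids are synthetic stand-ins, and erosion also shaves one pixel from the rim of the large onshore region, which is the known cost of the filter):

      import numpy as np
      from scipy.ndimage import binary_erosion

      cols = np.arange(40)
      D = np.tile((cols > 18).astype(int), (40, 1))   # forecast: onshore sector
      d = np.tile((cols > 20).astype(int), (40, 1))   # observed: shifted boundary
      d[5:7, 2:4] = 1                                 # spurious river-breeze patch

      d_clean = binary_erosion(d, structure=np.ones((3, 3))).astype(int)
      print(np.mean(D == d), np.mean(D == d_clean))   # agreement before/after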

  14. Optimal input design for aircraft instrumentation systematic error estimation

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1991-01-01

    A new technique for designing optimal flight test inputs for accurate estimation of instrumentation systematic errors was developed and demonstrated. A simulation model of the F-18 High Angle of Attack Research Vehicle (HARV) aircraft was used to evaluate the effectiveness of the optimal input compared to input recorded during flight test. Instrumentation systematic error parameter estimates and their standard errors were compared. It was found that, for a fixed input duration, the optimal input design improved the error parameter estimates and their accuracies. Pilot acceptability of the optimal input design was demonstrated using a six degree-of-freedom fixed-base piloted simulation of the F-18 HARV. The technique described in this work provides a practical, optimal procedure for designing inputs for data compatibility experiments.

  15. Sepsis: Medical errors in Poland.

    PubMed

    Rorat, Marta; Jurek, Tomasz

    2016-01-01

    Health, safety and medical errors are currently the subject of worldwide discussion. The authors analysed medico-legal opinions trying to determine types of medical errors and their impact on the course of sepsis. The authors carried out a retrospective analysis of 66 medico-legal opinions issued by the Wroclaw Department of Forensic Medicine between 2004 and 2013 (at the request of the prosecutor or court) in cases examined for medical errors. Medical errors were confirmed in 55 of the 66 medico-legal opinions. The age of victims varied from 2 weeks to 68 years; 49 patients died. The analysis revealed medical errors committed by 113 health-care workers: 98 physicians, 8 nurses and 8 emergency medical dispatchers. In 33 cases, an error was made before hospitalisation. Hospital errors occurred in 35 victims. Diagnostic errors were discovered in 50 patients, including 46 cases of sepsis being incorrectly recognised and insufficient diagnoses in 37 cases. Therapeutic errors occurred in 37 victims, organisational errors in 9 and technical errors in 2. In addition to sepsis, 8 patients also had a severe concomitant disease and 8 had a chronic disease. In 45 cases, the authors observed glaring errors, which could incur criminal liability. There is an urgent need to introduce a system for reporting and analysing medical errors in Poland. The development and popularisation of standards for identifying and treating sepsis across basic medical professions is essential to improve patient safety and survival rates. Procedures should be introduced to prevent health-care workers from administering incorrect treatment in cases of sepsis.

  16. From requirements to acceptance tests

    NASA Technical Reports Server (NTRS)

    Baize, Lionel; Pasquier, Helene

    1993-01-01

    From user requirements definition to the accepted software system, software project management wants to be sure that the system will meet the requirements. For the development of a telecommunications satellite Control Centre, C.N.E.S. has used new rules to make the use of a tracing matrix easier. From requirements to acceptance tests, each item of a document must have an identifier. A single matrix traces the system and allows the consequences of a change in the requirements to be tracked. A tool has been developed to import documents into a relational database; each record of the database corresponds to an item of a document, and the access key is the item identifier. The tracing matrix is also processed, automatically providing links between the different documents, which enables traced items to be read on the same screen. For example, one can read simultaneously the User Requirements items, the corresponding Software Requirements items, and the Acceptance Tests.

  17. 7 CFR 1728.30 - Inclusion of an item for listing or technical acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... acceptance. Items accepted subject to certain conditions, such as limited use to gain service experience, or... submitting a letter to the Administrator requesting such a review. (h) Change in Design. RUS acceptance of an item will be conditioned on the understanding that no design changes (material or dimensions)...

  18. Realtime mitigation of GPS SA errors using Loran-C

    NASA Technical Reports Server (NTRS)

    Braasch, Soo Y.

    1994-01-01

    The hybrid use of Loran-C with the Global Positioning System (GPS) was shown to be capable of providing a sole means of enroute air radionavigation. By allowing pilots to fly direct to their destinations, use of this system results in significant time savings and therefore fuel savings as well. However, a major error source limiting the accuracy of GPS is the intentional degradation of the GPS signal known as Selective Availability (SA). SA-induced position errors are highly correlated and far exceed all other error sources (horizontal position error: 100 meters, 95 percent). Realtime mitigation of SA errors from the position solution is highly desirable. This paper discusses how that can be achieved by exploiting the stability of Loran-C signals to reduce SA errors. The theory behind this technique is discussed and results using bench and flight data are given.

  1. Dopamine reward prediction error coding.

    PubMed

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
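
    The three response patterns fall out of the simplest prediction-error learning rule (Python; the learning rate and trial schedule are assumptions for illustration):

      alpha = 0.2                            # learning rate
      V = 0.0                                # predicted value of the cue
      for trial in range(1, 41):
          r = 1.0 if trial <= 30 else 0.0    # reward is omitted on late trials
          delta = r - V                      # dopamine-like prediction error
          V += alpha * delta
          if trial in (1, 30, 31):
              print(trial, round(delta, 3))  # positive, then ~0, then negative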

  2. Writing errors by normal subjects.

    PubMed

    Moretti, Rita; Torre, Paola; Antonello, Rodolfo M; Fabbro, Franco; Cazzato, Giuseppe; Bava, Antonio

    2003-08-01

    Writing is a complex process requiring visual memory, attention, phonological and semantic operations, and motor performance. For that reason, it can easily be disturbed by interfering with attention or memory, by interfering subvocalization, and so on. With 16 female third-year students (23.4 +/- 0.8 yr.) from the University of Trieste, we investigated the production of errors in three experimental conditions (control, articulatory suppression, and tapping). In the articulatory suppression condition, the participants produced significantly more linguistic impairments (such as agrammatism, unrelated substitutions, sentence omissions, and semantically deviant sentences), which are similar to the linguistic impairments found in aphasia. In the tapping condition there were more perseverations, deletions, and substitutions of both letters and words. These data suggest that writing is not an automatic skill. Only after many years of experience and practice of processing information (through cortical to subcortical channels) can writing be considered an automatic skill. Limited experimental conditions can disrupt the writing system of normal subjects, probably by interfering with the cortical to subcortical loops, and link normality to pathology. PMID:14604043

  3. Medication Errors in Outpatient Pediatrics.

    PubMed

    Berrier, Kyla

    2016-01-01

    Medication errors may occur during parental administration of prescription and over-the-counter medications in the outpatient pediatric setting. Misinterpretation of medication labels and dosing errors are two types of errors in medication administration. Health literacy may play an important role in parents' ability to safely manage their child's medication regimen. There are several proposed strategies for decreasing these medication administration errors, including using standardized dosing instruments, using strictly metric units for medication dosing, and providing parents and caregivers with picture-based dosing instructions. Pediatric healthcare providers should be aware of these strategies and seek to implement many of them into their practices. PMID:27537086

  4. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1980-01-01

    Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents, is investigated. Correction of the sources of human error requires that one attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  5. Motion estimation performance models with application to hardware error tolerance

    NASA Astrophysics Data System (ADS)

    Cheong, Hye-Yeon; Ortega, Antonio

    2007-01-01

    The progress of VLSI technology towards deep sub-micron feature sizes, e.g., sub-100 nanometer technology, has created a growing impact of hardware defects and fabrication process variability, which lead to reductions in yield rate. To address these problems, a new approach, system-level error tolerance (ET), has been recently introduced. Considering that a significant percentage of the entire chip production is discarded due to minor imperfections, this approach is based on accepting imperfect chips that introduce imperceptible/acceptable system-level degradation; this leads to increases in overall effective yield. In this paper, we investigate the impact of hardware faults on the video compression performance, with a focus on the motion estimation (ME) process. More specifically, we provide an analytical formulation of the impact of single and multiple stuck-at-faults within ME computation. We further present a model for estimating the system-level performance degradation due to such faults, which can be used for the error tolerance based decision strategy of accepting a given faulty chip. We also show how different faults and ME search algorithms compare in terms of error tolerance and define the characteristics of search algorithms that lead to increased error tolerance. Finally, we show that different hardware architectures performing the same metric computation have different error tolerance characteristics and we present the optimal ME hardware architecture in terms of error tolerance. While we focus on ME hardware, our work could also be applied to systems (e.g., classifiers, matching pursuits, vector quantization) where a selection is made among several alternatives (e.g., class label, basis function, quantization codeword) based on which choice minimizes an additive metric of interest.
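
    The fault-injection idea can be mimicked in software (Python; the paper's analysis is analytical, and the block data, search window, and fault site here are invented): force one output bit of the SAD metric to 1 and see whether the selected motion vector changes.

      import numpy as np

      rng = np.random.default_rng(3)
      target = rng.integers(0, 256, (8, 8))
      candidates = {(dx, dy): rng.integers(0, 256, (8, 8))
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
      candidates[(0, 0)] = target.copy()      # make (0, 0) the true best match

      def sad(a, b, stuck_bit=None):
          s = int(np.abs(a.astype(int) - b.astype(int)).sum())
          if stuck_bit is not None:
              s |= 1 << stuck_bit             # stuck-at-1 fault on one output bit
          return s

      for bit in (None, 2, 12):
          best = min(candidates, key=lambda mv: sad(target, candidates[mv], bit))
          print(bit, best)    # low-order faults rarely change the decision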

  6. Measurement Error and Equating Error in Power Analysis

    ERIC Educational Resources Information Center

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…
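
    The mechanism is the classic attenuation effect; a back-of-envelope sketch (Python with scipy; sample size, effect, and reliability are assumed values) shows how unreliable measurement shrinks the detectable effect and the resulting power of a two-sample test:

      from scipy.stats import norm

      def power_two_sample(d, n_per_group, alpha=0.05):
          """Normal approximation to two-sample test power with equal n."""
          ncp = d * (n_per_group / 2) ** 0.5
          z_crit = norm.ppf(1 - alpha / 2)
          return 1 - norm.cdf(z_crit - ncp) + norm.cdf(-z_crit - ncp)

      d_true, reliability, n = 0.5, 0.75, 64
      d_observed = d_true * reliability ** 0.5   # attenuation by measurement error
      print(power_two_sample(d_true, n))         # power with perfect measurement
      print(power_two_sample(d_observed, n))     # power after attenuation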

  7. Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?

    ERIC Educational Resources Information Center

    Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.

    2007-01-01

    This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…

  8. Critical evidence for the prediction error theory in associative learning.

    PubMed

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-01-01

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning systems: Prediction error theory stems from the finding of a blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and rejecting alternative theories, we constructed a neural model to match the prediction error theory, by modifying our previous model of learning in crickets, and we tested a prediction from the model: the model predicts that pharmacological intervention of octopaminergic transmission during appetitive conditioning impairs learning but not formation of reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an "auto-blocking", which could be accounted for by the prediction error theory but not by other competitive theories to account for blocking. This study unambiguously demonstrates validity of the prediction error theory in associative learning. PMID:25754125
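
    The blocking design at the heart of the argument is easy to reproduce with the Rescorla-Wagner rule (Python; parameter values are assumptions): pretraining on cue A leaves no prediction error during compound AB training, so cue B acquires almost nothing.

      alpha, lam = 0.3, 1.0                  # learning rate, reward magnitude
      V = {"A": 0.0, "B": 0.0}               # associative strengths

      for _ in range(30):                    # phase 1: A alone -> reward
          delta = lam - V["A"]
          V["A"] += alpha * delta

      for _ in range(30):                    # phase 2: compound AB -> reward
          delta = lam - (V["A"] + V["B"])    # prediction error is already ~0
          V["A"] += alpha * delta
          V["B"] += alpha * delta

      print(round(V["A"], 3), round(V["B"], 3))   # B stays near zero: blocking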

  9. Imaginary Companions and Peer Acceptance

    ERIC Educational Resources Information Center

    Gleason, Tracy R.

    2004-01-01

    Early research on imaginary companions suggests that children who create them do so to compensate for poor social relationships. Consequently, the peer acceptance of children with imaginary companions was compared to that of their peers. Sociometrics were conducted on 88 preschool-aged children; 11 had invisible companions, 16 had personified…

  10. Acceptance of Others (Number Form).

    ERIC Educational Resources Information Center

    Masters, James R.; Laverty, Grace E.

    As part of the instrumentation to assess the effectiveness of the Schools Without Failure (SWF) program in 10 elementary schools in the New Castle, Pa. School District, the Acceptance of Others (Number Form) was prepared to determine pupil's attitudes toward classmates. Given a list of all class members, pupils are asked to circle a number from 1…

  11. W-025, acceptance test report

    SciTech Connect

    Roscha, V.

    1994-10-04

    This acceptance test report (ATR) has been prepared to establish the results of the field testing conducted on W-025 to demonstrate that the electrical/instrumentation systems functioned as intended by design. This is part of the RMW Land Disposal Facility.

  12. Euthanasia Acceptance: An Attitudinal Inquiry.

    ERIC Educational Resources Information Center

    Klopfer, Fredrick J.; Price, William F.

    The study presented was conducted to examine potential relationships between attitudes regarding the dying process, including acceptance of euthanasia, and other attitudinal or demographic attributes. The data of the survey was comprised of responses given by 331 respondents to a door-to-door interview. Results are discussed in terms of preferred…

  13. Helping Our Children Accept Themselves.

    ERIC Educational Resources Information Center

    Gamble, Mae

    1984-01-01

    Parents of a child with muscular dystrophy recount their reactions to learning of the diagnosis, their gradual acceptance, and their son's resistance, which was gradually lessened when he was provided with more information and treated more normally as a member of the family. (CL)

  14. Acceptance and Commitment Therapy: Introduction

    ERIC Educational Resources Information Center

    Twohig, Michael P.

    2012-01-01

    This is the introductory article to a special series in Cognitive and Behavioral Practice on Acceptance and Commitment Therapy (ACT). Instead of each article herein reviewing the basics of ACT, this article contains that review. This article provides a description of where ACT fits within the larger category of cognitive behavior therapy (CBT):…

  15. Who accepts first aid training?

    PubMed

    Pearn, J; Dawson, B; Leditschke, F; Petrie, G; Nixon, J

    1980-09-01

    The percentage of individuals trained in first aid skills in the general community is inadequate. We report here a study to investigate factors which influence motivation to accept voluntary training in first aid. A group of 700 randomly selected owners of inground swimming pools (a parental high-risk group) was offered a course of formal first aid instruction. Nine per cent attended the offered training course. The time commitment involved in traditional courses (eight training nights spread over four weeks) is not a deterrent, the same percentage accepting such courses as accept a course of one night's instruction. Cost is an important deterrent factor, consumer resistance rising over 15 cost units (one cost unit = the price of a loaf of bread). The level of competent first aid training within the community can be raised by (a) keeping to traditional course content, and (b) ensuring a higher acceptance rate of first aid courses through a new approach to publicity campaigns that convinces prospective students of the real worth of first aid training. Questions concerning who should be taught first aid, and factors influencing motivation, are discussed.

  19. Improving medication administration error reporting systems. Why do errors occur?

    PubMed

    Wakefield, B J; Wakefield, D S; Uden-Holman, T

    2000-01-01

    Monitoring medication administration errors (MAE) is often included as part of the hospital's risk management program. While observation of actual medication administration is the most accurate way to identify errors, hospitals typically rely on voluntary incident reporting processes. Although incident reporting systems are more economical than other methods of error detection, incident reporting can also be a time-consuming process depending on the complexity or "user-friendliness" of the reporting system. Accurate incident reporting systems are also dependent on the ability of the practitioner to: 1) recognize an error has actually occurred; 2) believe the error is significant enough to warrant reporting; and 3) overcome the embarrassment of having committed a MAE and the fear of punishment for reporting a mistake (either one's own or another's mistake).

  20. Predictive error analysis for a water resource management model

    NASA Astrophysics Data System (ADS)

    Gallagher, Mark; Doherty, John

    2007-02-01

    Summary: In calibrating a model, a set of parameters is assigned to the model which will be employed for the making of all future predictions. If these parameters are estimated through solution of an inverse problem, formulated to be properly posed through either pre-calibration or mathematical regularisation, then solution of this inverse problem will, of necessity, lead to a simplified parameter set that omits the details of reality, while still fitting historical data acceptably well. Furthermore, estimates of parameters so obtained will be contaminated by measurement noise. Both of these phenomena will lead to errors in predictions made by the model, with the potential for error increasing with the hydraulic property detail on which the prediction depends. Integrity of model usage demands that model predictions be accompanied by some estimate of the possible errors associated with them. The present paper applies theory developed in a previous work to the analysis of predictive error associated with a real-world water resource management model. The analysis offers many challenges, including the fact that the model is a complex one that was partly calibrated by hand. Nevertheless, it is typical of models which are commonly employed as the basis for the making of important decisions, and for which such an analysis must be made. The potential errors associated with point-based and averaged water level and creek inflow predictions are examined, together with the dependence of these errors on the amount of averaging involved. Error variances associated with predictions made by the existing model are compared with "optimized error variances" that could have been obtained had calibration been undertaken in such a way as to minimize predictive error variance. The contributions by different parameter types to the overall error variance of selected predictions are also examined.

  1. Beam lifetime and limitations during low-energy RHIC operation

    SciTech Connect

    Fedotov, A.V.; Bai, M.; Blaskiewicz, M.; Fischer, W.; Kayran, D.; Montag, C.; Satogata, T.; Tepikian, S.; Wang, G.

    2011-03-28

    The low-energy physics program at the Relativistic Heavy Ion Collider (RHIC), motivated by a search for the QCD phase transition critical point, requires operation at low energies. At these energies, large nonlinear magnetic field errors and large beam sizes produce low beam lifetimes. A variety of beam dynamics effects such as Intrabeam Scattering (IBS), space charge and beam-beam forces also contribute. All these effects are important to understand beam lifetime limitations in RHIC at low energies. During the low-energy RHIC physics run in May-June 2010 at beam γ = 6.1 and γ = 4.1, gold beam lifetimes were measured for various values of space-charge tune shifts, transverse acceptance limitation by collimators, synchrotron tunes and RF voltage. This paper summarizes our observations and initial findings.

  2. Frequency analysis of nonlinear oscillations via the global error minimization

    NASA Astrophysics Data System (ADS)

    Kalami Yazdi, M.; Hosseini Tehrani, P.

    2016-06-01

    The capacity and effectiveness of a modified variational approach, namely global error minimization (GEM), is illustrated in this study. For this purpose, the free oscillations of a rod rocking on a cylindrical surface and the Duffing-harmonic oscillator are treated. In order to validate and exhibit the merit of the method, the obtained result is compared with both the exact frequency and the outcome of other well-known analytical methods. The comparison reveals that the first-order approximation leads to an acceptable relative error, especially for large initial conditions. The procedure can be promisingly applied to conservative nonlinear problems.
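
    As a rough numerical illustration of the GEM idea (the paper's treatment is analytical), the frequency of the Duffing-harmonic oscillator u'' + u^3/(1 + u^2) = 0 can be estimated by minimizing the integral of the squared residual of a one-term trial solution u = A*cos(w*t) over one period; the amplitude, search bounds, and grid size below are illustrative assumptions:

        # Global error minimization (GEM) with a first-order trial solution.
        import numpy as np
        from scipy.optimize import minimize_scalar

        A = 1.0                                    # oscillation amplitude, u(0) = A

        def global_error(w):
            t = np.linspace(0.0, 2*np.pi/w, 2001)  # one full period
            u = A*np.cos(w*t)
            residual = -A*w**2*np.cos(w*t) + u**3/(1.0 + u**2)   # u'' + f(u)
            return np.trapz(residual**2, t)        # integral of squared residual

        res = minimize_scalar(global_error, bounds=(0.1, 1.0), method="bounded")
        print("first-order GEM frequency:", res.x)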

  3. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

    When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2^8) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
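
    As one concrete piece of the concatenated scheme, the innermost (n, n-16) code appends 16 CRC parity bits to each frame. A minimal sketch of such a check (the CCSDS code is conventionally a CRC-16 with the CCITT polynomial 0x1021 and an all-ones preset; treat that convention, and the frame contents, as assumptions here):

        # CRC-16 with polynomial x^16 + x^12 + x^5 + 1 (0x1021), MSB first.
        def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
            for byte in data:
                crc ^= byte << 8
                for _ in range(8):
                    crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
            return crc

        frame = b"telemetry payload"
        parity = crc16_ccitt(frame)            # the 16 parity bits for this frame
        # Recomputing over the frame plus its appended parity yields 0 when error-free:
        assert crc16_ccitt(frame + parity.to_bytes(2, "big")) == 0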

  4. Error coding simulations in C

    NASA Astrophysics Data System (ADS)

    Noble, Viveca K.

    1994-10-01

    When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2^8) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.

  5. Passport officers' errors in face matching.

    PubMed

    White, David; Kemp, Richard I; Jenkins, Rob; Matheson, Michael; Burton, A Mike

    2014-01-01

    Photo-ID is widely used in security settings, despite research showing that viewers find it very difficult to match unfamiliar faces. Here we test participants with specialist experience and training in the task: passport-issuing officers. First, we ask officers to compare photos to live ID-card bearers, and observe high error rates, including 14% false acceptance of 'fraudulent' photos. Second, we compare passport officers with a set of student participants, and find equally poor levels of accuracy in both groups. Finally, we observe that passport officers show no performance advantage over the general population on a standardised face-matching task. Across all tasks, we observe very large individual differences: while average performance of passport staff was poor, some officers performed very accurately--though this was not related to length of experience or training. We propose that improvements in security could be made by emphasising personnel selection.

  6. An adaptive error-resilient video encoder

    NASA Astrophysics Data System (ADS)

    Cheng, Liang; El Zarki, Magda

    2003-06-01

    When designing an encoder for a real-time video application over a wireless channel, we must take into consideration the unpredictable fluctuation of the quality of the channel and its impact on the transmitted video data. This uncertainty motivates the development of an adaptive video encoding mechanism that can compensate for the infidelity caused by data loss and/or by post-processing (error concealment) at the decoder. In this paper, we first explore the major factors that cause quality degradation. We then propose an adaptive progressive replenishment algorithm for a packet loss rate (PLR) feedback-enabled system. Assuming the availability of a feedback channel, we discuss a video quality assessment method, which allows the encoder to be aware of the decoder-side perceptual quality. Finally, we present a novel dual-feedback mechanism that guarantees an acceptable level of quality at the receiver side with a modest increase in the complexity of the encoder.

  7. Passport Officers’ Errors in Face Matching

    PubMed Central

    White, David; Kemp, Richard I.; Jenkins, Rob; Matheson, Michael; Burton, A. Mike

    2014-01-01

    Photo-ID is widely used in security settings, despite research showing that viewers find it very difficult to match unfamiliar faces. Here we test participants with specialist experience and training in the task: passport-issuing officers. First, we ask officers to compare photos to live ID-card bearers, and observe high error rates, including 14% false acceptance of ‘fraudulent’ photos. Second, we compare passport officers with a set of student participants, and find equally poor levels of accuracy in both groups. Finally, we observe that passport officers show no performance advantage over the general population on a standardised face-matching task. Across all tasks, we observe very large individual differences: while average performance of passport staff was poor, some officers performed very accurately – though this was not related to length of experience or training. We propose that improvements in security could be made by emphasising personnel selection. PMID:25133682

  8. Human Error: A Concept Analysis

    NASA Technical Reports Server (NTRS)

    Hansen, Frederick D.

    2007-01-01

    Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition or how to prevent human error. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed and a definition of human error is offered.

  9. Explaining Errors in Children's Questions

    ERIC Educational Resources Information Center

    Rowland, Caroline F.

    2007-01-01

    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that,…

  10. Dual Processing and Diagnostic Errors

    ERIC Educational Resources Information Center

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  11. Quantifying error distributions in crowding.

    PubMed

    Hanus, Deborah; Vul, Edward

    2013-03-22

    When multiple objects are in close proximity, observers have difficulty identifying them individually. Two classes of theories aim to account for this crowding phenomenon: spatial pooling and spatial substitution. Variations of these accounts predict different patterns of errors in crowded displays. Here we aim to characterize the kinds of errors that people make during crowding by comparing a number of error models across three experiments in which we manipulate flanker spacing, display eccentricity, and precueing duration. We find that both spatial intrusions and individual letter confusions play a considerable role in errors. Moreover, we find no evidence that a naïve pooling model that predicts errors based on a nonadditive combination of target and flankers explains errors better than an independent intrusion model (indeed, in our data, an independent intrusion model is slightly, but significantly, better). Finally, we find that manipulating trial difficulty in any way (spacing, eccentricity, or precueing) produces homogenous changes in error distributions. Together, these results provide quantitative baselines for predictive models of crowding errors, suggest that pooling and spatial substitution models are difficult to tease apart, and imply that manipulations of crowding all influence a common mechanism that impacts subject performance.

  12. Children's Scale Errors with Tools

    ERIC Educational Resources Information Center

    Casler, Krista; Eshleman, Angelica; Greene, Kimberly; Terziyan, Treysi

    2011-01-01

    Children sometimes make "scale errors," attempting to interact with tiny object replicas as though they were full size. Here, we demonstrate that instrumental tools provide special insight into the origins of scale errors and, moreover, into the broader nature of children's purpose-guided reasoning and behavior with objects. In Study 1, 1.5- to…

  13. Acceptance of colonoscopy requires more than test tolerance

    PubMed Central

    Condon, Amanda; Graff, Lesley; Elliot, Lawrence; Ilnyckyj, Alexandra

    2008-01-01

    BACKGROUND: Colon cancer screening, including colonoscopy, lags behind other forms of cancer screening for participation rates. The intrinsic nature of the endoscopic procedure may be an important barrier that limits patients from finding this test acceptable and affects willingness to undergo screening. With colon cancer screening programs emerging in Canada, test characteristics and their impact on acceptance warrant consideration. OBJECTIVES: To measure the acceptability of colonoscopy and define factors that contribute to procedural acceptability, in relation to another invasive gastrointestinal scope procedure, gastroscopy. PATIENTS AND METHODS: Consecutive patients undergoing a colonoscopy (n=55) or a gastroscopy (n=33) were recruited. Their procedural experience was evaluated and compared pre-endoscopy, immediately before testing and postendoscopy. Questionnaires were used to capture multiple domains of the endoscopy experience and patient characteristics. RESULTS: Patient scope groups did not differ preprocedurally for general or procedure-specific anxiety. However, the colonoscopy group did anticipate more pain. Those who had a gastroscopy demonstrated higher preprocedural acceptance than those who had a colonoscopy. The colonoscopy group had a significant decrease in scope concerns and anxiety postprocedurally. As well, they reported less pain than they anticipated. Regardless, postprocedurally, the colonoscopy group’s acceptance did not increase significantly, whereas the gastroscopy group was almost unanimous in their test acceptance. The best predictor of pretest acceptability of colonoscopy was anticipated pain. CONCLUSIONS: The findings indicate that concerns that relate specifically to colonoscopy, including anticipated pain, influence acceptability of the procedure. However, the experience of a colonoscopy does not lead to improved test acceptance, despite decreases in procedural anxiety and pain. Patients’ preprocedural views of the test are

  14. Acceptability of HPV vaccine implementation among parents in India

    PubMed Central

    Paul, Proma; Tanner, Amanda E.; Gravitt, Patti E.; Vijayaraghavan, K; Shah, Keerti V.; Zimet, Gregory D.

    2015-01-01

    Due to high cervical cancer rates and limited research on human papillomavirus (HPV) vaccine acceptability in India, the research team examined parental attitudes towards HPV vaccines. Thirty-six interviews with parents were conducted to assess STI-related knowledge and HPV-specific vaccine awareness and acceptability. Despite limited knowledge, parents had positive views toward HPV vaccines. Common barriers included: concerns about side effects, vaccine cost, and missing work to receive the vaccine. Parents were strongly influenced by healthcare providers’ recommendations. Our findings suggest that addressing parental concerns, health worker training and policies, and efforts to minimize cost will be central to successful HPV vaccine implementation. PMID:23611111

  15. Acceptability of HPV vaccine implementation among parents in India.

    PubMed

    Paul, Proma; Tanner, Amanda E; Gravitt, Patti E; Vijayaraghavan, K; Shah, Keerti V; Zimet, Gregory D; Study Group, Catch

    2014-01-01

    Due to high cervical cancer rates and limited research on human papillomavirus (HPV) vaccine acceptability in India, the research team examined parental attitudes toward HPV vaccines. Thirty-six interviews with parents were conducted to assess sexually transmitted infection (STI)-related knowledge and HPV-specific vaccine awareness and acceptability. Despite limited knowledge, parents had positive views toward HPV vaccines. Common barriers included concerns about side effects, vaccine cost, and missing work to receive the vaccine. Parents were strongly influenced by health care providers' recommendations. Our findings suggest that addressing parental concerns, health worker training and policies, and efforts to minimize cost will be central to successful HPV vaccine implementation. PMID:23611111

  16. Challenge and error: critical events and attention-related errors.

    PubMed

    Cheyne, James Allan; Carriere, Jonathan S A; Solman, Grayden J F; Smilek, Daniel

    2011-12-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error↔attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention lapses; resource-depleting cognitions interfering with attention to subsequent task challenges. Attention lapses lead to errors, and errors themselves are a potent consequence often leading to further attention lapses potentially initiating a spiral into more serious errors. We investigated this challenge-induced error↔attention-lapse model using the Sustained Attention to Response Task (SART), a GO-NOGO task requiring continuous attention and response to a number series and withholding of responses to a rare NOGO digit. We found response speed and increased commission errors following task challenges to be a function of temporal distance from, and prior performance on, previous NOGO trials. We conclude by comparing and contrasting the present theory and findings to those based on choice paradigms and argue that the present findings have implications for the generality of conflict monitoring and control models.

  17. Human error in recreational boating.

    PubMed

    McKnight, A James; Becker, Wayne W; Pettit, Anthony J; McKnight, A Scott

    2007-03-01

    Each year over 600 people die and more than 4000 are reported injured in recreational boating accidents. As with most other accidents, human error is the major contributor. U.S. Coast Guard reports of 3358 accidents were analyzed to identify errors in each of the boat types by which statistics are compiled: auxiliary (motor) sailboats, cabin motorboats, canoes and kayaks, house boats, personal watercraft, open motorboats, pontoon boats, row boats, sail-only boats. The individual errors were grouped into categories on the basis of similarities in the behavior involved. Those presented here are the categories accounting for at least 5% of all errors when summed across boat types. The most revealing and significant finding is the extent to which the errors vary across types. Since boating is carried out with one or two types of boats for long periods of time, effective accident prevention measures, including safety instruction, need to be geared to individual boat types.

  18. Angle interferometer cross axis errors

    SciTech Connect

    Bryan, J.B.; Carter, D.L.; Thompson, S.L.

    1994-01-01

    Angle interferometers are commonly used to measure surface plate flatness. An error can exist when the centerline of the double corner cube mirror assembly is not square to the surface plate and the guide bar for the mirror sled is curved. Typical errors can be one to two microns per meter. A similar error can exist in the calibration of rotary tables when the centerline of the double corner cube mirror assembly is not square to the axes of rotation of the angle calibrator and the calibrator axis is not parallel to the rotary table axis. Commercial double corner cube assemblies typically have non-parallelism errors of ten milli-radians between their centerlines and their sides and similar values for non-squareness between their centerlines and end surfaces. The authors have developed a simple method for measuring these errors and correcting them by remachining the reference surfaces.

  19. Onorbit IMU alignment error budget

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

    Error sources from the Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU), which together form a complex navigation system, were combined. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.

  20. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  1. Adherence to balance tolerance limits at the Upper Mississippi Science Center, La Crosse, Wisconsin.

    USGS Publications Warehouse

    Myers, C.T.; Kennedy, D.M.

    1998-01-01

    Verification of balance accuracy entails applying a series of standard masses to a balance prior to use and recording the measured values. The recorded values for each standard should have lower and upper weight limits or tolerances that are accepted as verification of balance accuracy under normal operating conditions. Balance logbooks for seven analytical balances at the Upper Mississippi Science Center were checked over a 3.5-year period to determine if the recorded weights were within the established tolerance limits. A total of 9435 measurements were checked. There were 14 instances in which the balance malfunctioned and operators recorded a rationale in the balance logbook. Sixty-three recording errors were found. Twenty-eight operators were responsible for two types of recording errors: Measurements of weights were recorded outside of the tolerance limit but not acknowledged as an error by the operator (n = 40); and measurements were recorded with the wrong number of decimal places (n = 23). The adherence rate for following tolerance limits was 99.3%. To ensure the continued adherence to tolerance limits, the quality-assurance unit revised standard operating procedures to require more frequent review of balance logbooks.
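
    Such a check is easy to mechanize so that out-of-tolerance entries are flagged at recording time rather than found later in a logbook audit; a toy sketch (standard names and limits are illustrative, not the Center's actual tolerances):

        # Flag logbook entries that fall outside the accepted tolerance window.
        TOLERANCES = {"1 g": (0.9995, 1.0005), "10 g": (9.9990, 10.0010)}  # grams

        def within_tolerance(standard: str, measured: float) -> bool:
            lo, hi = TOLERANCES[standard]
            return lo <= measured <= hi

        logbook = [("1 g", 1.0002), ("10 g", 10.0020), ("1 g", 0.9998)]
        flagged = [entry for entry in logbook if not within_tolerance(*entry)]
        print(flagged)   # entries that must be acknowledged, not silently recorded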

  2. Data Acceptance Criteria for Standardized Human-Associated Fecal Source Identification Quantitative Real-Time PCR Methods.

    PubMed

    Shanks, Orin C; Kelty, Catherine A; Oshiro, Robin; Haugland, Richard A; Madi, Tania; Brooks, Lauren; Field, Katharine G; Sivaganesan, Mano

    2016-05-01

    There is growing interest in the application of human-associated fecal source identification quantitative real-time PCR (qPCR) technologies for water quality management. The transition from a research tool to a standardized protocol requires a high degree of confidence in data quality across laboratories. Data quality is typically determined through a series of specifications that ensure good experimental practice and the absence of bias in the results due to DNA isolation and amplification interferences. However, there is currently a lack of consensus on how best to evaluate and interpret human fecal source identification qPCR experiments. This is, in part, due to the lack of standardized protocols and information on interlaboratory variability under conditions for data acceptance. The aim of this study is to provide users and reviewers with a complete series of conditions for data acceptance derived from a multiple laboratory data set using standardized procedures. To establish these benchmarks, data from HF183/BacR287 and HumM2 human-associated qPCR methods were generated across 14 laboratories. Each laboratory followed a standardized protocol utilizing the same lot of reference DNA materials, DNA isolation kits, amplification reagents, and test samples to generate comparable data. After removal of outliers, a nested analysis of variance (ANOVA) was used to establish proficiency metrics that include lab-to-lab, replicate testing within a lab, and random error for amplification inhibition and sample processing controls. Other data acceptance measurements included extraneous DNA contamination assessments (no-template and extraction blank controls) and calibration model performance (correlation coefficient, amplification efficiency, and lower limit of quantification). To demonstrate the implementation of the proposed standardized protocols and data acceptance criteria, comparable data from two additional laboratories were reviewed. The data acceptance criteria
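
    Of the calibration-model metrics listed (correlation coefficient, amplification efficiency, lower limit of quantification), the first two come directly from a log-linear standard curve; a minimal sketch using the standard efficiency formula E = 10^(-1/slope) - 1, with made-up Cq values:

        # Standard-curve metrics for a qPCR calibration model.
        import numpy as np

        log10_copies = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # standard dilutions
        cq = np.array([33.1, 29.8, 26.4, 23.1, 19.8])        # measured Cq values

        slope, intercept = np.polyfit(log10_copies, cq, 1)
        r = np.corrcoef(log10_copies, cq)[0, 1]
        efficiency = 10**(-1.0/slope) - 1.0

        print(f"slope={slope:.2f}  R^2={r**2:.4f}  efficiency={efficiency:.1%}")
        # Acceptance windows on the order of 0.90-1.10 efficiency and R^2 >= 0.98
        # are commonly cited; the criteria in the study are the authoritative ones.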

  3. Data Acceptance Criteria for Standardized Human-Associated Fecal Source Identification Quantitative Real-Time PCR Methods

    PubMed Central

    Kelty, Catherine A.; Oshiro, Robin; Haugland, Richard A.; Madi, Tania; Brooks, Lauren; Field, Katharine G.; Sivaganesan, Mano

    2016-01-01

    There is growing interest in the application of human-associated fecal source identification quantitative real-time PCR (qPCR) technologies for water quality management. The transition from a research tool to a standardized protocol requires a high degree of confidence in data quality across laboratories. Data quality is typically determined through a series of specifications that ensure good experimental practice and the absence of bias in the results due to DNA isolation and amplification interferences. However, there is currently a lack of consensus on how best to evaluate and interpret human fecal source identification qPCR experiments. This is, in part, due to the lack of standardized protocols and information on interlaboratory variability under conditions for data acceptance. The aim of this study is to provide users and reviewers with a complete series of conditions for data acceptance derived from a multiple laboratory data set using standardized procedures. To establish these benchmarks, data from HF183/BacR287 and HumM2 human-associated qPCR methods were generated across 14 laboratories. Each laboratory followed a standardized protocol utilizing the same lot of reference DNA materials, DNA isolation kits, amplification reagents, and test samples to generate comparable data. After removal of outliers, a nested analysis of variance (ANOVA) was used to establish proficiency metrics that include lab-to-lab, replicate testing within a lab, and random error for amplification inhibition and sample processing controls. Other data acceptance measurements included extraneous DNA contamination assessments (no-template and extraction blank controls) and calibration model performance (correlation coefficient, amplification efficiency, and lower limit of quantification). To demonstrate the implementation of the proposed standardized protocols and data acceptance criteria, comparable data from two additional laboratories were reviewed. The data acceptance criteria

  4. Error diffusion with a more symmetric error distribution

    NASA Astrophysics Data System (ADS)

    Fan, Zhigang

    1994-05-01

    In this paper a new error diffusion algorithm is presented that effectively eliminates the 'worm' artifacts appearing in the standard methods. The new algorithm processes each scanline of the image in two passes, a forward pass followed by a backward one. This enables the error made at one pixel to be propagated to all the 'future' pixels. A much more symmetric error distribution is achieved than that of the standard methods. The frequency response of the noise shaping filter associated with the new algorithm is mirror-symmetric in magnitude.
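
    For reference, the standard single-pass baseline that the two-pass method improves on pushes each pixel's quantization error onto not-yet-processed neighbours using the Floyd-Steinberg weights; a compact sketch (the flat gray test patch is illustrative):

        # One-pass Floyd-Steinberg error diffusion on a grayscale image in [0, 1].
        import numpy as np

        def floyd_steinberg(img):
            out = img.astype(float).copy()
            h, w = out.shape
            for y in range(h):
                for x in range(w):
                    old = out[y, x]
                    new = 1.0 if old >= 0.5 else 0.0       # binarize this pixel
                    out[y, x] = new
                    err = old - new                        # diffuse the error forward
                    if x + 1 < w:               out[y, x+1]   += err * 7/16
                    if y + 1 < h and x > 0:     out[y+1, x-1] += err * 3/16
                    if y + 1 < h:               out[y+1, x]   += err * 5/16
                    if y + 1 < h and x + 1 < w: out[y+1, x+1] += err * 1/16
            return out

        gray = np.full((8, 8), 0.3)               # flat 30% gray patch
        print(floyd_steinberg(gray).mean())       # ~0.3: the mean tone is preserved

    Because all of the error flows rightward and downward, correlated "worm" patterns can form; the paper's forward-plus-backward pass spreads the error more symmetrically along each scanline.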

  5. Errors as allies: error management training in health professions education.

    PubMed

    King, Aimee; Holder, Michael G; Ahmed, Rami A

    2013-06-01

    This paper adopts methods from the organisational team training literature to outline how health professions education can improve patient safety. We argue that health educators can improve training quality by intentionally encouraging errors during simulation-based team training. Preventable medical errors are inevitable, but encouraging errors in low-risk settings like simulation can allow teams to have better emotional control and foresight to manage the situation if it occurs again with live patients. Our paper outlines an innovative approach for delivering team training.

  6. Predicting the acceptance of advanced rider assistance systems.

    PubMed

    Huth, Véronique; Gelau, Christhard

    2013-01-01

    The strong prevalence of human error as a crash causation factor in motorcycle accidents calls for countermeasures that help tackle this issue. Advanced rider assistance systems pursue this goal, providing the riders with support and thus contributing to the prevention of crashes. However, the systems can only enhance riding safety if the riders use them. For this reason, acceptance is a decisive aspect to be considered in the development process of such systems. In order to be able to improve behavioural acceptance, the factors that influence the intention to use the system need to be identified. This paper examines the particularities of motorcycle riding and the characteristics of this user group that should be considered when predicting the acceptance of advanced rider assistance systems. Founded on theories predicting behavioural intention, the acceptance of technologies and the acceptance of driver support systems, a model of the acceptance of advanced rider assistance systems is proposed, including the perceived safety when riding without support, the interface design and the social norm as determinants of the usage intention. Since actual usage cannot be measured in the development stage of the systems, the willingness to have the system installed on one's own motorcycle and the willingness to pay for the system are analyzed, constituting relevant conditions that allow for actual usage at a later stage. Its validation with the results from user tests on four advanced rider assistance systems confirms the social norm and the interface design as powerful predictors of the acceptance of ARAS, while the extent of perceived safety when riding without support had no predictive value in the present study.

  7. Theoretical analysis of errors when estimating snow distribution through point measurements

    NASA Astrophysics Data System (ADS)

    Trujillo, E.; Lehning, M.

    2015-06-01

    In recent years, marked improvements in our knowledge of the statistical properties of the spatial distribution of snow properties have been achieved thanks to improvements in measuring technologies (e.g., LIDAR, terrestrial laser scanning (TLS), and ground-penetrating radar (GPR)). Despite this, objective and quantitative frameworks for the evaluation of errors in snow measurements have been lacking. Here, we present a theoretical framework for quantitative evaluations of the uncertainty in average snow depth derived from point measurements over a profile section or an area. The error is defined as the expected value of the squared difference between the real mean of the profile/field and the sample mean from a limited number of measurements. The model is tested for one- and two-dimensional survey designs that range from a single measurement to an increasing number of regularly spaced measurements. Using high-resolution (~ 1 m) LIDAR snow depths at two locations in Colorado, we show that the sample errors follow the theoretical behavior. Furthermore, we show how the determination of the spatial location of the measurements can be reduced to an optimization problem for the case of a predefined number of measurements, or to the designation of an acceptable uncertainty level to determine the total number of regularly spaced measurements required to achieve such an error. On this basis, a series of figures are presented as an aid for snow survey design under the conditions described, and under the assumption of prior knowledge of the spatial covariance/correlation properties. With this methodology, better objective survey designs can be accomplished that are tailored to the specific applications for which the measurements are going to be used. The theoretical framework can be extended to other spatially distributed snow variables (e.g., SWE - snow water equivalent) whose statistical properties are comparable to those of snow depth.
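
    The error measure defined above (the expected squared difference between the true profile mean and the mean of n regularly spaced samples) can be checked by Monte Carlo on a synthetic correlated profile; the profile length, correlation length, and trial count are illustrative assumptions:

        # Expected squared sampling error of the mean for a correlated 1-D field.
        import numpy as np

        rng = np.random.default_rng(0)
        L, corr_len = 1000, 20                    # profile length, correlation length
        x = np.arange(L)
        cov = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
        chol = np.linalg.cholesky(cov + 1e-10*np.eye(L))

        def expected_sq_error(n, trials=200):
            idx = np.linspace(0, L - 1, n).astype(int)    # regular sampling design
            errs = []
            for _ in range(trials):
                field = chol @ rng.standard_normal(L)     # zero-mean correlated field
                errs.append((field.mean() - field[idx].mean())**2)
            return np.mean(errs)

        for n in (2, 5, 10, 50):
            print(n, expected_sq_error(n))        # error shrinks as samples are added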

  8. Accepting the T3D

    SciTech Connect

    Rich, D.O.; Pope, S.C.; DeLapp, J.G.

    1994-10-01

    In April, a 128 PE Cray T3D was installed at Los Alamos National Laboratory's Advanced Computing Laboratory as part of the DOE's High-Performance Parallel Processor Program (H4P). In conjunction with CRI, the authors implemented a 30 day acceptance test. The test was constructed in part to help them understand the strengths and weaknesses of the T3D. In this paper, they briefly describe the H4P and its goals. They discuss the design and implementation of the T3D acceptance test and detail issues that arose during the test. They conclude with a set of system requirements that must be addressed as the T3D system evolves.

  9. Sweeteners: consumer acceptance in tea.

    PubMed

    Sprowl, D J; Ehrcke, L A

    1984-09-01

    Sucrose, fructose, aspartame, and saccharin were compared for consumer preference, aftertaste, and cost to determine the acceptability of the sweeteners. A 23-member taste panel evaluated tea samples for preference and aftertaste. The mean retail cost of each sweetener was calculated and adjusted to take sweetening power into consideration. Sucrose was the least expensive and most preferred sweetener. No significant difference in preference for fructose and aspartame was found, but both sweeteners were rated significantly lower than sucrose. Saccharin was the most disliked sweetener. Fructose was the most expensive sweetener and aspartame the next most expensive. Scores for aftertaste followed the same pattern as those for preference. Thus, a strong, unpleasant aftertaste seems to be associated with a dislike for a sweetener. From the results of this study, it seems that there is no completely acceptable low-calorie substitute for sucrose available to consumers.

  10. Acceptability of reactors in space

    SciTech Connect

    Buden, D.

    1981-01-01

    Reactors are the key to our future expansion into space. However, there has been some confusion in the public as to whether they are a safe and acceptable technology for use in space. The answers to these questions are explored. The US position is that when reactors are the preferred technical choice, they can be used safely. In fact, it does not appear that reactors add measurably to the risk associated with the Space Transportation System.

  11. Acceptability of reactors in space

    SciTech Connect

    Buden, D.

    1981-04-01

    Reactors are the key to our future expansion into space. However, there has been some confusion in the public as to whether they are a safe and acceptable technology for use in space. The answers to these questions are explored. The US position is that when reactors are the preferred technical choice, they can be used safely. In fact, it does not appear that reactors add measurably to the risk associated with the Space Transportation System.

  12. Exploring Discretization Error in Simulation-Based Aerodynamic Databases

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Nemec, Marian

    2010-01-01

    This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals, and we use adaptive mesh refinement to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database computed for a NACA 0012 airfoil consisting of 120 cases. We compare the cost and accuracy of two approaches for aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs in both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or the stiffness in the governing equations near the incompressible limit is shown to dramatically increase discretization error, requiring additional mesh resolution to control. Results show that such pathologies lead to error levels that vary by over a factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes which span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error estimation in database quality.
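
    The paper estimates discretization error with adjoint-weighted residuals; as a simpler generic illustration of the same goal, Richardson extrapolation estimates the error in an output functional from three successively refined meshes (the refinement ratio and the drag-like output values below are made up):

        # Discretization-error estimate by Richardson extrapolation; this is NOT
        # the adjoint-weighted residual method used in the paper.
        import math

        def richardson(f_coarse, f_medium, f_fine, r=2.0):
            """Outputs computed on meshes with spacing h, h/r, h/r^2."""
            p = math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)
            f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)  # extrapolated value
            return f_exact, abs(f_fine - f_exact), p   # estimate, fine-mesh error, order

        print(richardson(0.02860, 0.02710, 0.02672))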

  13. Designing to Control Flight Crew Errors

    NASA Technical Reports Server (NTRS)

    Schutte, Paul C.; Willshire, Kelli F.

    1997-01-01

    It is widely accepted that human error is a major contributing factor in aircraft accidents. There has been a significant amount of research into why these errors occur, and many reports state that the design of the flight deck can actually predispose humans to err. This research has led to calls for changes in design according to human factors and human-centered principles. The National Aeronautics and Space Administration's (NASA) Langley Research Center has initiated an effort to design a human-centered flight deck from a clean slate (i.e., without the constraints of existing designs). The effort will be based on recent research in human-centered design philosophy and mission management categories. This design will match the human's model of the mission and function of the aircraft to reduce unnatural or non-intuitive interfaces. The product of this effort will be a flight deck design description, including training and procedures, a cross-reference or paper trail back to design hypotheses, and an evaluation of the design. The present paper will discuss the philosophy, process, and status of this design effort.

  14. 48 CFR 12.402 - Acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    (a) The acceptance paragraph in 52.212-4 is based upon the assumption that the Government will rely on the contractor's assurances that the commercial item tendered for acceptance conforms to...

  15. Regulatory perspectives on acceptability testing of dosage forms in children.

    PubMed

    Kozarewicz, Piotr

    2014-08-01

    Current knowledge about the age-appropriateness of different dosage forms is still fragmented or limited. Applicants are asked to demonstrate that the target age group(s) can manage the dosage form or to propose an alternative strategy. However, questions remain about how far the applicant must go and what percentage of patients must find the strategy 'acceptable'. The aim of this overview is to provide an update on current thinking and understanding of the problem, and to discuss issues relating to acceptability testing. This overview should be considered a means to start a wider discussion which, it is hoped, will result in a harmonised, globally acceptable approach to confirmation of acceptability in the future.

  16. Reduced error signalling in medication-naive children with ADHD: associations with behavioural variability and post-error adaptations

    PubMed Central

    Plessen, Kerstin J.; Allen, Elena A.; Eichele, Heike; van Wageningen, Heidi; Høvik, Marie Farstad; Sørensen, Lin; Worren, Marius Kalsås; Hugdahl, Kenneth; Eichele, Tom

    2016-01-01

    Background: We examined the blood-oxygen level–dependent (BOLD) activation in brain regions that signal errors and their association with intraindividual behavioural variability and adaptation to errors in children with attention-deficit/hyperactivity disorder (ADHD). Methods: We acquired functional MRI data during a Flanker task in medication-naive children with ADHD and healthy controls aged 8–12 years and analyzed the data using independent component analysis. For components corresponding to performance monitoring networks, we compared activations across groups and conditions and correlated them with reaction times (RT). Additionally, we analyzed post-error adaptations in behaviour and motor component activations. Results: We included 25 children with ADHD and 29 controls in our analysis. Children with ADHD displayed reduced activation to errors in cingulo-opercular regions and higher RT variability, but no differences in interference control. Larger BOLD amplitude to error trials significantly predicted reduced RT variability across all participants. Neither group showed evidence of post-error response slowing; however, post-error adaptation in motor networks was significantly reduced in children with ADHD. This adaptation was inversely related to activation of the right-lateralized ventral attention network (VAN) on error trials and to task-driven connectivity between the cingulo-opercular system and the VAN. Limitations: Our study was limited by the modest sample size and imperfect matching across groups. Conclusion: Our findings show a deficit in cingulo-opercular activation in children with ADHD that could relate to reduced signalling for errors. Moreover, the reduced orienting of the VAN signal may mediate deficient post-error motor adaptations. Pinpointing general performance monitoring problems to specific brain regions and operations in error processing may help to guide the targets of future treatments for ADHD. PMID:26441332

  17. Medical error reduction and tort reform through private, contractually-based quality medicine societies.

    PubMed

    MacCourt, Duncan; Bernstein, Joseph

    2009-01-01

    The current medical malpractice system is broken. Many patients injured by malpractice are not compensated, whereas some patients who recover in tort have not suffered medical negligence; furthermore, the system's failures demoralize patients and physicians. But most importantly, the system perpetuates medical error because the adversarial nature of litigation induces a so-called "Culture of Silence" in physicians eager to shield themselves from liability. This silence leads to the pointless repetition of error, as the open discussion and analysis of the root causes of medical mistakes does not take place as fully as it should. In 1993, President Clinton's Task Force on National Health Care Reform considered a solution characterized by Enterprise Medical Liability (EML), Alternative Dispute Resolution (ADR), some limits on recovery for non-pecuniary damages (Caps), and offsets for collateral source recovery. Yet this list of ingredients did not include a strategy to surmount the difficulties associated with each element. Specifically, EML might be efficient, but none of the enterprises contemplated to assume responsibility, i.e., hospitals and payers, control physician behavior enough so that it would be fair to foist liability on them. Likewise, although ADR might be efficient, it will be resisted by individual litigants who perceive themselves as harmed by it. Finally, while limitations on collateral source recovery and damages might effectively reduce costs, patients and trial lawyers likely would not accept them without recompense. The task force also did not place error reduction at the center of malpractice tort reform - a logical and strategic error, in our view. In response, we propose a new system that employs the ingredients suggested by the task force but also addresses the problems with each. We also explicitly consider steps to rebuff the Culture of Silence and promote error reduction. We assert that patients would be better off with a system where

  18. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    PubMed

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests. PMID:23999403

  1. BFC: correcting Illumina sequencing errors

    PubMed Central

    2015-01-01

    Summary: BFC is a free, fast and easy-to-use sequencing error corrector designed for Illumina short reads. It uses a non-greedy algorithm but still maintains a speed comparable to implementations based on greedy methods. In evaluations on real data, BFC appears to correct more errors with fewer overcorrections in comparison to existing tools. It particularly does well in suppressing systematic sequencing errors, which helps to improve the base accuracy of de novo assemblies. Availability and implementation: https://github.com/lh3/bfc Contact: hengli@broadinstitute.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25953801
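
    The abstract above describes a k-mer-spectrum corrector. As a rough illustration of that general idea (not BFC's actual non-greedy algorithm), the following Python sketch counts k-mers across reads, treats rare k-mers as suspect, and greedily substitutes bases until a read is covered by "solid" k-mers; the toy k, threshold, and reads are assumptions:

        from collections import Counter

        K = 5  # toy k-mer size; real correctors use much larger k

        def kmers(seq):
            return [seq[i:i + K] for i in range(len(seq) - K + 1)]

        def build_spectrum(reads, min_count=2):
            counts = Counter(km for r in reads for km in kmers(r))
            return {km for km, c in counts.items() if c >= min_count}  # "solid" k-mers

        def correct_read(read, solid):
            read = list(read)
            for i in range(len(read)):
                starts = range(max(0, i - K + 1), min(i, len(read) - K) + 1)
                if any("".join(read[j:j + K]) in solid for j in starts):
                    continue  # position already covered by a solid k-mer
                for base in "ACGT":  # greedy single-base substitution
                    trial = read[:]
                    trial[i] = base
                    if any("".join(trial[j:j + K]) in solid for j in starts):
                        read = trial
                        break
            return "".join(read)

        reads = ["ACGTACGTAC"] * 8 + ["ACGTTCGTAC"]  # last read has one substitution error
        solid = build_spectrum(reads)
        print(correct_read("ACGTTCGTAC", solid))  # -> ACGTACGTAC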

  2. Altimeter error sources at the 10-cm performance level

    NASA Technical Reports Server (NTRS)

    Martin, C. F.

    1977-01-01

    Error sources affecting the calibration and operational use of a 10 cm altimeter are examined to determine the magnitudes of current errors and the investigations necessary to reduce them to acceptable bounds. Errors considered include those affecting operational data pre-processing, and those affecting altitude bias determination, with error budgets developed for both. The most significant error sources affecting pre-processing are bias calibration, propagation corrections for the ionosphere, and measurement noise. No ionospheric models are currently validated at the required 10-25% accuracy level. The optimum smoothing to reduce the effects of measurement noise is investigated and found to be on the order of one second, based on the TASC model of geoid undulations. The 10 cm calibrations are found to be feasible only through the use of altimeter passes at very high elevation above a tracking station that tracks very close to the time of the altimeter pass, such as a high-elevation pass across the island of Bermuda. By far the largest error source, based on the current state-of-the-art, is the location of the island tracking station relative to mean sea level in the surrounding ocean areas.

  3. Sensitivity of feedforward neural networks to weight errors

    SciTech Connect

    Stevenson, M.; Widrow, B.; Winter, R.

    1990-03-01

    An important consideration when implementing neural networks with digital or analog hardware of limited precision is the sensitivity of neural networks to weight errors. In this paper, the authors analyze the sensitivity of feedforward layered networks of Adaline elements (threshold logic units) to weight errors. An approximation is derived which expresses the probability of error for an output neuron of a large network (a network with many neurons per layer) as a function of the percentage change in the weights. As would be expected, the probability of error increases with the number of layers in the network and with the percentage change in the weights. Surprisingly, the probability of error is essentially independent of the number of weights per neuron and of the number of neurons per layer, as long as these numbers are large (on the order of 100 or more).
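
    A Monte Carlo sketch of the quantity this paper approximates analytically: the probability that a two-layer network of Adaline (sign-threshold) units changes its output when every weight is perturbed by a given percentage. The layer sizes, weight distribution, and perturbation model below are illustrative assumptions, not the authors' derivation:

        import numpy as np

        rng = np.random.default_rng(0)

        def madaline(x, W1, W2):
            # two-layer feedforward network of sign-threshold units
            return np.sign(W2 @ np.sign(W1 @ x))

        def error_probability(n_in=100, n_hidden=100, pct=0.05, trials=2000):
            errors = 0
            for _ in range(trials):
                W1 = rng.standard_normal((n_hidden, n_in))
                W2 = rng.standard_normal((1, n_hidden))
                x = np.sign(rng.standard_normal(n_in))
                y = madaline(x, W1, W2)
                # zero-mean relative perturbation of every weight
                W1p = W1 * (1 + pct * rng.standard_normal(W1.shape))
                W2p = W2 * (1 + pct * rng.standard_normal(W2.shape))
                errors += (madaline(x, W1p, W2p) != y).any()
            return errors / trials

        print(error_probability(pct=0.05))  # grows with pct and with network depth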

  4. Acceptability of bio-engineered vaccines.

    PubMed

    Danner, K

    1997-01-01

    For hundreds of years bacterial and viral vaccines have been, in a way, bio-engineered and were generally well received by the public, the authorities, and the medical profession. Today, additional tools, e.g. molecular biology, enable new approaches to the development of better and safer products. Various vaccines derived from gene technology have now been licensed for commercial use and are acknowledged within the scientific community. Acceptance by the public and the politicians is, however, negatively influenced by the discussions encompassing gene manipulation in man and animals, transgenic plants, and "novel food". Lack of information leads to confusion and fear. Concurrently, the absence of spectacular and life-threatening epidemics limits the perceived value of immune prophylaxis and its benefits. Scientists in institutes and industry are in a position to stimulate acceptability of bio-engineered vaccines by following some simple rules: (1) adherence to the principles of safety; (2) establishment of analytical and control methods; (3) well functioning regulatory and reporting systems; (4) demonstration of usefulness and economic benefits; (5) open communication; and (6) correct and prudent wording. PMID:9023035

  5. Total error vs. measurement uncertainty: revolution or evolution?

    PubMed

    Oosterhuis, Wytze P; Theodorsson, Elvar

    2016-02-01

    The first strategic EFLM conference "Defining analytical performance goals, 15 years after the Stockholm Conference" was held in the autumn of 2014 in Milan. It maintained the Stockholm 1999 hierarchy of performance goals but rearranged them and established five task and finish groups to work on topics related to analytical performance goals, including one on the "total error" theory. Jim Westgard recently wrote a comprehensive overview of performance goals and of the total error theory that is critical of the results and intentions of the Milan 2014 conference. The "total error" theory originated by Jim Westgard and co-workers has a dominating influence on the theory and practice of clinical chemistry but is not accepted in other fields of metrology. The generally accepted uncertainty theory, however, suffers from complex mathematics and perceived impracticability in clinical chemistry. The pros and cons of the total error theory need to be debated, making way for methods that can incorporate all relevant causes of uncertainty when making medical diagnoses and monitoring treatment effects. This development should preferably proceed not as a revolution but as an evolution.

  6. Correcting for particle counting bias error in turbulent flow

    NASA Technical Reports Server (NTRS)

    Edwards, R. V.; Baratuci, W.

    1985-01-01

    Even an ideal seeding device generating particles that exactly follow the flow would still leave a major source of error: a particle counting bias, wherein the probability of measuring velocity is a function of velocity. The error in the measured mean can be as much as 25%. Many schemes have been put forward to correct for this error, but there is no universal agreement as to the acceptability of any one method. In particular, it is sometimes difficult to know if the assumptions required in the analysis are fulfilled by any particular flow measurement system. To check various correction mechanisms in an ideal way and to gain some insight into how to correct with the fewest initial assumptions, a computer simulation is constructed to simulate laser anemometer measurements in a turbulent flow. That simulator and the results of its use are discussed.
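
    A minimal sketch of the bias and of one family of corrections, inverse-velocity (residence-time) weighting; the one-dimensional velocity population and the sampling-probability model are toy assumptions, not the paper's simulator:

        import numpy as np

        rng = np.random.default_rng(1)

        v = rng.normal(10.0, 3.0, 200_000)   # "true" velocities, ~30% turbulence intensity
        v = v[v > 0]                          # keep the toy example positive and 1-D

        # Counting bias: probability of measuring a particle proportional to its speed
        sampled = rng.choice(v, size=20_000, p=v / v.sum())

        naive = sampled.mean()                      # biased high
        w = 1.0 / sampled                           # inverse-velocity weights
        corrected = (w * sampled).sum() / w.sum()   # recovers the true mean

        print(f"true {v.mean():.2f}  naive {naive:.2f}  corrected {corrected:.2f}")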

  7. Controlling type-1 error rates in whole effluent toxicity testing

    SciTech Connect

    Smith, R.; Johnson, S.C.

    1995-12-31

    A form of variability, called the dose x test interaction, has been found to affect the variability of the mean differences from control in the statistical tests used to evaluate Whole Effluent Toxicity Tests for compliance purposes. Since the dose x test interaction is not included in these statistical tests, the assumed type-1 and type-2 error rates can be incorrect. The accepted type-1 error rate for these tests is 5%. Analysis of over 100 Ceriodaphnia, fathead minnow, and sea urchin fertilization tests showed that when the dose x test interaction term was not included in the calculations, the type-1 error rate was inflated to as high as 20%. In a compliance setting, this problem may lead to incorrect regulatory decisions. Statistical tests are proposed that properly incorporate the dose x test interaction variance.
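
    A toy simulation of the mechanism: with no true dose effect, a shared test-level random component (the dose x test interaction) inflates the realized type-1 rate of a t-test that only sees within-test replicates. The variance components and replicate count are illustrative assumptions:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)

        def one_wet_test(sigma_test=0.05, sigma_rep=0.2, reps=10):
            # independent test-level effects for control and dose mimic the interaction
            control = 1.0 + rng.normal(0, sigma_test) + rng.normal(0, sigma_rep, reps)
            dose = 1.0 + rng.normal(0, sigma_test) + rng.normal(0, sigma_rep, reps)
            return stats.ttest_ind(control, dose).pvalue

        pvals = np.array([one_wet_test() for _ in range(5000)])
        print("nominal alpha 0.05, realized type-1 rate:", (pvals < 0.05).mean())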

  8. FORCE: FORtran for Cosmic Errors

    NASA Astrophysics Data System (ADS)

    Colombi, Stéphane; Szapudi, István

    We review the theory of cosmic errors we have recently developed for count-in-cells statistics. The corresponding FORCE package provides a simple and useful way to compute cosmic covariance on factorial moments and cumulants measured in galaxy catalogs.
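
    For orientation, a short numpy sketch of the statistics FORCE operates on, the factorial moments of counts-in-cells; FORCE itself computes the cosmic covariances of such quantities, which this sketch does not attempt:

        import numpy as np

        def factorial_moments(counts, kmax=3):
            """F_k = < N (N-1) ... (N-k+1) > over a sample of cell counts."""
            counts = np.asarray(counts, dtype=float)
            out = []
            for k in range(1, kmax + 1):
                falling = np.ones_like(counts)
                for j in range(k):
                    falling *= counts - j
                out.append(falling.mean())
            return out

        rng = np.random.default_rng(3)
        N = rng.poisson(5.0, 100_000)        # unclustered (Poisson) cell counts
        F1, F2, F3 = factorial_moments(N)
        print(F1, F2 / F1**2, F3 / F1**3)    # ~5, ~1, ~1; clustering raises the ratios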

  9. Human errors and measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Kuselman, Ilya; Pennecchi, Francesca

    2015-04-01

    The evaluation of the residual risk of human errors in a measurement and testing laboratory, remaining after the error reduction by the laboratory quality system, and the quantification of the consequences of this risk for the quality of the measurement/test results are discussed based on expert judgments and Monte Carlo simulations. A procedure for evaluation of the contribution of the residual risk to the measurement uncertainty budget is proposed. Examples are provided using earlier published sets of expert judgments on human errors in pH measurement of groundwater, elemental analysis of geological samples by inductively coupled plasma mass spectrometry, and multi-residue analysis of pesticides in fruits and vegetables. The human error contribution to the measurement uncertainty budget in the examples was not negligible, yet also not dominant. This was assessed as a good risk management result.
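
    A hedged sketch of the kind of Monte Carlo budget described: a small residual probability of an undetected human blunder is mixed into otherwise Gaussian results, and the combined standard uncertainty is compared with the purely analytical one. All numbers are hypothetical:

        import numpy as np

        rng = np.random.default_rng(4)

        n = 200_000
        u_analytical = 0.02      # standard uncertainty of the method (pH units, assumed)
        p_blunder = 0.005        # residual risk of an undetected human error (assumed)
        shift = 0.15             # typical size of such a blunder (assumed)

        results = rng.normal(0.0, u_analytical, n)
        mask = rng.random(n) < p_blunder
        results[mask] += rng.choice([-1.0, 1.0], mask.sum()) * shift

        print("u, analytical only:", u_analytical)
        print("u, incl. residual human error:", results.std(ddof=1))
        # the contribution is noticeable but not dominant, echoing the paper's finding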

  10. Quantile Regression With Measurement Error

    PubMed Central

    Wei, Ying; Carroll, Raymond J.

    2010-01-01

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. PMID:20305802
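
    A small simulation of the problem addressed: naive quantile regression on an error-contaminated covariate attenuates the slope. The median fit below uses statsmodels' QuantReg, and the variances are illustrative; the paper's joint estimating-equation correction is not reproduced here:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(5)
        n = 5000
        x = rng.normal(0, 1, n)              # true covariate
        w = x + rng.normal(0, 1, n)          # observed with measurement error
        y = 1.0 + 2.0 * x + rng.normal(0, 1, n)

        def median_slope(cov):
            fit = sm.QuantReg(y, sm.add_constant(cov)).fit(q=0.5)
            return fit.params[1]

        print("slope with true x:", median_slope(x))   # ~2.0
        print("naive slope with w:", median_slope(w))  # attenuated toward ~1.0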

  11. Static Detection of Disassembly Errors

    SciTech Connect

    Krishnamoorthy, Nithya; Debray, Saumya; Fligg, Alan K

    2009-10-13

    Static disassembly is a crucial first step in reverse engineering executable files, and there is a considerable body of work in reverse-engineering of binaries, as well as areas such as semantics-based security analysis, that assumes that the input executable has been correctly disassembled. However, disassembly errors, e.g., arising from binary obfuscations, can render this assumption invalid. This work describes a machine-learning-based approach, using decision trees, for statically identifying possible errors in a static disassembly; such potential errors may then be examined more closely, e.g., using dynamic analyses. Experimental results using a variety of input executables indicate that our approach performs well, correctly identifying most disassembly errors with relatively few false positives.

  12. Dual processing and diagnostic errors.

    PubMed

    Norman, Geoff

    2009-09-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process, called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and these remain available and retrievable individually. I then review studies of clinical reasoning based on these theories, and show that the two processes are equally effective; System 1, despite its reliance on idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions directed at encouraging the clinician to explicitly use both strategies can lead to a consistent reduction in error rates.

  13. Prospective errors determine motor learning

    PubMed Central

    Takiyama, Ken; Hirashima, Masaya; Nozaki, Daichi

    2015-01-01

    Diverse features of motor learning have been reported by numerous studies, but no single theoretical framework concurrently accounts for these features. Here, we propose a model for motor learning to explain these features in a unified way by extending a motor primitive framework. The model assumes that the recruitment pattern of motor primitives is determined by the predicted movement error of an upcoming movement (prospective error). To validate this idea, we perform a behavioural experiment to examine the model’s novel prediction: after experiencing an environment in which the movement error is more easily predictable, subsequent motor learning should become faster. The experimental results support our prediction, suggesting that the prospective error might be encoded in the motor primitives. Furthermore, we demonstrate that this model has a strong explanatory power to reproduce a wide variety of motor-learning-related phenomena that have been separately explained by different computational models. PMID:25635628

  14. Orbital and Geodetic Error Analysis

    NASA Technical Reports Server (NTRS)

    Felsentreger, T.; Maresca, P.; Estes, R.

    1985-01-01

    Results that previously required several runs are determined in a more computer-efficient manner. The multiple runs are performed only once with GEODYN and stored on tape. ERODYN then performs the matrix partitioning and linear algebra required for each individual error-analysis run.

  15. Abundance recovery error analysis using simulated AVIRIS data

    NASA Technical Reports Server (NTRS)

    Stoner, William W.; Harsanyi, Joseph C.; Farrand, William H.; Wong, Jennifer A.

    1992-01-01

    Measurement noise and imperfect atmospheric correction translate directly into errors in the determination of the surficial abundance of materials from imaging spectrometer data. The effects of errors on abundance recovery were investigated previously using Monte Carlo simulation methods by Sabol et al. The drawback of the Monte Carlo approach is that thousands of trials are needed to develop good statistics on the probable error in abundance recovery. This computational burden invariably limits the number of scenarios of interest that can practically be investigated. A more efficient approach is based on covariance analysis. The covariance analysis approach expresses errors in abundance as a function of noise in the spectral measurements and provides a closed-form result, eliminating the need for multiple trials. Monte Carlo simulation and covariance analysis are used to predict confidence limits for abundance recovery for a scenario modeled as being derived from Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data.
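
    A minimal numpy sketch of the covariance-analysis idea for a linear mixing model r = Ea + n with white noise of standard deviation sigma: the closed-form abundance covariance sigma^2 (E^T E)^(-1) is checked once against a Monte Carlo run. The endmember matrix and noise level are hypothetical:

        import numpy as np

        rng = np.random.default_rng(6)

        bands, nend = 224, 3                     # AVIRIS-like band count, 3 endmembers
        E = rng.random((bands, nend))            # hypothetical endmember spectra (columns)
        a_true = np.array([0.5, 0.3, 0.2])
        sigma = 0.01                             # per-band measurement noise

        cov_closed = sigma**2 * np.linalg.inv(E.T @ E)   # closed form, no trials needed

        trials = 5000                            # Monte Carlo check
        a_hat = np.empty((trials, nend))
        for t in range(trials):
            r = E @ a_true + rng.normal(0, sigma, bands)
            a_hat[t] = np.linalg.lstsq(E, r, rcond=None)[0]

        print(np.sqrt(np.diag(cov_closed)))                   # predicted abundance std devs
        print(np.sqrt(np.diag(np.cov(a_hat, rowvar=False))))  # simulation agrees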

  16. 26 CFR 1.1314(a)-1 - Ascertainment of amount of adjustment in year of error.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... of error. 1.1314(a)-1 Section 1.1314(a)-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF... Years and Special Limitations § 1.1314(a)-1 Ascertainment of amount of adjustment in year of error. (a... ascertained the amount of the tax previously determined for the taxpayer as to whom the error was made for...

  17. 26 CFR 1.1314(a)-1 - Ascertainment of amount of adjustment in year of error.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... of error. 1.1314(a)-1 Section 1.1314(a)-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF... Years and Special Limitations § 1.1314(a)-1 Ascertainment of amount of adjustment in year of error. (a... ascertained the amount of the tax previously determined for the taxpayer as to whom the error was made for...

  18. 26 CFR 1.1314(a)-1 - Ascertainment of amount of adjustment in year of error.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... of error. 1.1314(a)-1 Section 1.1314(a)-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF... Years and Special Limitations § 1.1314(a)-1 Ascertainment of amount of adjustment in year of error. (a... ascertained the amount of the tax previously determined for the taxpayer as to whom the error was made for...

  19. 26 CFR 1.1314(a)-1 - Ascertainment of amount of adjustment in year of error.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... of error. 1.1314(a)-1 Section 1.1314(a)-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF... Years and Special Limitations § 1.1314(a)-1 Ascertainment of amount of adjustment in year of error. (a... ascertained the amount of the tax previously determined for the taxpayer as to whom the error was made for...

  20. 19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... determination may submit comments concerning a significant ministerial error in such calculations. A party to... determination or the final results of a review may submit comments concerning any ministerial error in such... the time limit for filing comments concerning a ministerial error in a final determination or...

  1. 19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... determination may submit comments concerning a significant ministerial error in such calculations. A party to... determination or the final results of a review may submit comments concerning any ministerial error in such... the time limit for filing comments concerning a ministerial error in a final determination or...

  2. 19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... determination may submit comments concerning a significant ministerial error in such calculations. A party to... determination or the final results of a review may submit comments concerning any ministerial error in such... the time limit for filing comments concerning a ministerial error in a final determination or...

  3. Impact of CMOS Scaling on Single-Event Hard Errors in Space Systems

    NASA Technical Reports Server (NTRS)

    Johnston, A. H.; Swift, G. M.; Shaw, D. C.

    1995-01-01

    Applications of highly scaled devices in space systems are shown to be limited by hard errors from cosmic rays. Hard errors were first observed in 0.8 μm DRAMs. For feature sizes below 0.5 μm, scaling theory predicts that low power devices will have much lower hard error rates than devices optimized for high speed.

  4. Optical synthetic-aperture radar processor architecture with quadratic phase-error correction

    SciTech Connect

    Dickey, F.M.; Mason, J.J.

    1990-10-15

    Uncompensated phase errors limit the image quality of synthetic-aperture radar. We present an acousto-optic synthetic-aperture radar processor architecture capable of measuring the quadratic phase error. This architecture allows for the error signal to be fed back to the processor to generate the corrected image.

  5. Optical synthetic-aperture radar processor architecture with quadratic phase-error correction.

    PubMed

    Dickey, F M; Mason, J J

    1990-10-15

    Uncompensated phase errors limit the image quality of synthetic-aperture radar. We present an acousto-optic synthetic-aperture radar processor architecture capable of measuring the quadratic phase error. This architecture allows for the error signal to be fed back to the processor to generate the corrected image.

  6. Interpolation Errors in Spectrum Analyzers

    NASA Technical Reports Server (NTRS)

    Martin, J. L.

    1996-01-01

    To obtain the proper measurement amplitude with a spectrum analyzer, the correct frequency-dependent transducer factor must be added to the voltage measured by the transducer. This report examines how entering transducer factors into a spectrum analyzer can cause significant errors in field amplitude due to the misunderstanding of the analyzer's interpolation methods. It also discusses how to reduce these errors to obtain a more accurate field amplitude reading.
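
    A short illustration, with hypothetical calibration points, of why the interpolation convention matters: the same transducer-factor table interpolated linearly in frequency versus linearly in log-frequency yields corrections that differ by several dB between the calibration points:

        import numpy as np

        f_cal = np.array([1e6, 1e7, 1e8])        # calibration frequencies, Hz (assumed)
        tf_cal = np.array([10.0, 25.0, 34.0])    # transducer factors, dB (assumed)

        f = 3e6                                  # measurement frequency between points
        tf_lin = np.interp(f, f_cal, tf_cal)                       # linear in f
        tf_log = np.interp(np.log10(f), np.log10(f_cal), tf_cal)   # linear in log f

        print(f"linear-f: {tf_lin:.2f} dB, log-f: {tf_log:.2f} dB, "
              f"difference: {tf_lin - tf_log:.2f} dB")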

  7. Relative-Error-Covariance Algorithms

    NASA Technical Reports Server (NTRS)

    Bierman, Gerald J.; Wolff, Peter J.

    1991-01-01

    Two algorithms compute the error covariance of the difference between optimal estimates of the state of a discrete linear system, based on data acquired during overlapping or disjoint intervals. This provides a quantitative measure of the mutual consistency or inconsistency of the state estimates. The relative-error-covariance concept is applied to determine the degree of correlation between trajectories calculated from two overlapping sets of measurements and to construct a real-time test of the consistency of state estimates based upon recently acquired data.

  8. Error resilient image transmission based on virtual SPIHT

    NASA Astrophysics Data System (ADS)

    Liu, Rongke; He, Jie; Zhang, Xiaolin

    2007-02-01

    SPIHT is one of the most efficient image compression algorithms. It has been successfully applied to a wide variety of images, such as medical and remote sensing images. However, it is highly susceptible to channel errors: a single bit error could potentially lead to decoder derailment. In this paper, we integrate new error-resilient tools into the wavelet coding algorithm and present an error-resilient image transmission scheme based on virtual set partitioning in hierarchical trees (SPIHT), EREC, and a self-truncation mechanism. After wavelet decomposition, the virtual spatial-orientation trees in the wavelet domain are individually encoded using virtual SPIHT. Since the self-similarity across subbands is preserved, a high source coding efficiency can be achieved. The scheme is essentially a tree-based coding, so error propagation is limited within each virtual tree. The number of virtual trees may be adjusted according to the channel conditions: when the channel is excellent, we may decrease the number of trees to further improve the compression efficiency; otherwise, we increase the number of trees to guarantee error resilience on the channel. EREC is also adopted to enhance the error resilience capability of the compressed bit streams. At the receiving side, a self-truncation mechanism based on the self-constraint of set partition trees is introduced: the decoding of any sub-tree halts when a violation of the self-constraint relationship occurs in the tree, so the bits affected by error propagation are limited and more likely located in the low bit-layers. In addition, an inter-tree interpolation method is applied, so some errors are compensated. Preliminary experimental results demonstrate that the proposed scheme achieves substantial error-resilience benefits.

  9. Modelling non-Gaussianity of background and observational errors by the Maximum Entropy method

    NASA Astrophysics Data System (ADS)

    Pires, Carlos; Talagrand, Olivier; Bocquet, Marc

    2010-05-01

    The Best Linear Unbiased Estimator (BLUE) has widely been used in atmospheric-oceanic data assimilation. However, when data errors have non-Gaussian pdfs, the BLUE differs from the absolute Minimum Variance Unbiased Estimator (MVUE), minimizing the mean square analysis error. The non-Gaussianity of errors can be due to the statistical skewness and positiveness of some physical observables (e.g. moisture, chemical species) or due to the nonlinearity of the data assimilation models and observation operators acting on Gaussian errors. Non-Gaussianity of assimilated data errors can be justified from a priori hypotheses or inferred from statistical diagnostics of innovations (observation minus background). Following this rationale, we compute measures of innovation non-Gaussianity, namely its skewness and kurtosis, relating it to: a) the non-Gaussianity of the individual errors themselves, b) the correlation between nonlinear functions of errors, and c) the heteroscedasticity of errors within diagnostic samples. Those relationships impose bounds for skewness and kurtosis of errors which are critically dependent on the error variances, thus leading to a necessary tuning of error variances in order to accomplish consistency with innovations. We evaluate the sub-optimality of the BLUE as compared to the MVUE, in terms of excess of error variance, under the presence of non-Gaussian errors. The error pdfs are obtained by the maximum entropy method constrained by error moments up to fourth order, from which the Bayesian probability density function and the MVUE are computed. The impact is higher for skewed extreme innovations and grows in average with the skewness of data errors, especially if those skewnesses have the same sign. Application has been performed to the quality-accepted ECMWF innovations of brightness temperatures of a set of High Resolution Infrared Sounder channels. In this context, the MVUE has led in some extreme cases to a potential reduction of 20-60% error
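
    A minimal sketch of the diagnostic step described: estimating the skewness and excess kurtosis of an innovation sample. The Gaussian-plus-gamma mixture that mimics a skewed channel is an assumption for illustration:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)

        # hypothetical innovations: mostly Gaussian, with a skewed tail population
        d = np.concatenate([rng.normal(0.0, 1.0, 9000),
                            rng.gamma(2.0, 2.0, 1000) - 4.0])

        print("skewness:", stats.skew(d))             # 0 for Gaussian errors
        print("excess kurtosis:", stats.kurtosis(d))  # 0 for Gaussian errors
        # significant departures from (0, 0) motivate a maximum-entropy error pdf
        # constrained by moments up to fourth order, as in the study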

  10. Medical Error and Moral Luck.

    PubMed

    Hubbeling, Dieneke

    2016-09-01

    This paper addresses the concept of moral luck. Moral luck is discussed in the context of medical error, especially an error of omission that occurs frequently, but only rarely has adverse consequences. As an example, a failure to compare the label on a syringe with the drug chart results in the wrong medication being administered and the patient dies. However, this error may have previously occurred many times with no tragic consequences. Discussions on moral luck can highlight conflicting intuitions. Should perpetrators receive a harsher punishment because of an adverse outcome, or should they be dealt with in the same way as colleagues who have acted similarly, but with no adverse effects? An additional element to the discussion, specifically with medical errors, is that according to the evidence currently available, punishing individual practitioners does not seem to be effective in preventing future errors. The following discussion, using relevant philosophical and empirical evidence, posits a possible solution for the moral luck conundrum in the context of medical error: namely, making a distinction between the duty to make amends and assigning blame. Blame should be assigned on the basis of actual behavior, while the duty to make amends is dependent on the outcome. PMID:26662613

  11. Error image aware content restoration

    NASA Astrophysics Data System (ADS)

    Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee

    2015-12-01

    As the resolution of TV has significantly increased, content consumers have become increasingly sensitive to the subtlest defect in TV contents. This rising standard in quality demanded by consumers has posed a new challenge in today's context where the tape-based process has transitioned to the file-based process: the transition necessitated digitalizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors require a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing system), which is a familiar tool for quality control agents.

  12. Authentic tolerance: between forbearance and acceptance.

    PubMed

    Von Bergen, C W; Von Bergen, Beth A; Stubblefield, Claire; Bandow, Diane

    2012-01-01

    Promoting tolerance is seen as a key weapon in battling prejudice in diversity and multicultural training but its meaning has been modified recently. The classical definition of tolerance meant that others are entitled to their opinions and have the right to express them and that even though one may disagree with them, one can live in peace with such differences. In recent years, however, tolerance has come to mean that all ideas and practices must be accepted and affirmed and where appreciation and valuing of differences is the ultimate virtue. Such a neo-classical definition has alienated many who value equality and justice and limits the effectiveness of diversity initiatives that teach the promotion of tolerance. The authors offer authentic tolerance as an alternative, incorporating respect and civility toward others, not necessarily approval of their beliefs and behavior. All persons are equal, but all opinions and conduct are not equal.

  13. Fairness and the development of inequality acceptance.

    PubMed

    Almås, Ingvild; Cappelen, Alexander W; Sørensen, Erik Ø; Tungodden, Bertil

    2010-05-28

    Fairness considerations fundamentally affect human behavior, but our understanding of the nature and development of people's fairness preferences is limited. The dictator game has been the standard experimental design for studying fairness preferences, but it only captures a situation where there is broad agreement that fairness requires equality. In real life, people often disagree on what is fair because they disagree on whether individual achievements, luck, and efficiency considerations of what maximizes total benefits can justify inequalities. We modified the dictator game to capture these features and studied how inequality acceptance develops in adolescence. We found that as children enter adolescence, they increasingly view inequalities reflecting differences in individual achievements, but not luck, as fair, whereas efficiency considerations mainly play a role in late adolescence. PMID:20508132

  14. 48 CFR 28.202 - Acceptability of corporate sureties.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... GENERAL CONTRACTING REQUIREMENTS BONDS AND INSURANCE Sureties and Other Security for Bonds 28.202... Federal Bonds and Acceptable Reinsuring Companies.” (2) The penal amount of the bond should not exceed the surety's underwriting limit stated in the Department of the Treasury circular. If the penal...

  15. Acceptance and Commitment Therapy (ACT) as a Career Counselling Strategy

    ERIC Educational Resources Information Center

    Hoare, P. Nancey; McIlveen, Peter; Hamilton, Nadine

    2012-01-01

    Acceptance and commitment therapy (ACT) has potential to contribute to career counselling. In this paper, the theoretical tenets of ACT and a selection of its counselling techniques are overviewed along with a descriptive case vignette. There is limited empirical research into ACT's application in career counselling. Accordingly, a research agenda…

  16. Acceptance-Enhanced Behavior Therapy for Trichotillomania in Adolescents

    ERIC Educational Resources Information Center

    Fine, Kathi M.; Walther, Michael R.; Joseph, Jessica M.; Robinson, Jordan; Ricketts, Emily J.; Bowe, William E.; Woods, Douglas W.

    2012-01-01

    Although several studies have examined the efficacy of Acceptance Enhanced Behavior Therapy (AEBT) for the treatment of trichotillomania (TTM) in adults, data are limited with respect to the treatment of adolescents. Our case series illustrates the use of AEBT for TTM in the treatment of two adolescents. The AEBT protocol (Woods & Twohig, 2008) is…

  17. Acceptance of Online Degrees by Undergraduate Mexican Students

    ERIC Educational Resources Information Center

    Padilla Rodriguez, Brenda Cecilia; Adams, Jonathan

    2014-01-01

    The quality and acceptance of online degree programs are still controversial issues. In Mexico, where access to technology is limited, there are few studies on the matter. Undergraduate students (n = 104) answered a survey that aimed to evaluate their knowledge of virtual education, their likelihood of enrollment in an online degree program, and…

  18. Improving end of life care: an information systems approach to reducing medical errors.

    PubMed

    Tamang, S; Kopec, D; Shagas, G; Levy, K

    2005-01-01

    Chronic and terminally ill patients are disproportionately affected by medical errors. In addition, the elderly suffer more preventable adverse events than younger patients. Targeting system-wide "error-reducing" reforms to vulnerable populations can significantly reduce the incidence and prevalence of human error in medical practice. Recent developments in health informatics, particularly the application of artificial intelligence (AI) techniques such as data mining, neural networks, and case-based reasoning (CBR), present tremendous opportunities for mitigating error in disease diagnosis and patient management. Additionally, the ubiquity of the Internet creates the possibility of an almost ideal network for the dissemination of medical information. We explore the capacity and limitations of web-based palliative information systems (IS) to transform the delivery of care, streamline processes, and improve the efficiency and appropriateness of medical treatment. As a result, medical errors that occur with patients dealing with severe, chronic illness and the frail elderly can be reduced. The palliative model grew out of the need for pain relief and comfort measures for patients diagnosed with cancer. Applied definitions of palliative care extend this convention, but there is no widely accepted definition. This research discusses the development life cycle of two palliative information systems: the CONFER QOLP management information system (MIS), currently used by a community-based palliative care program in Brooklyn, New York, and the CAREN case-based reasoning prototype. CONFER is a web platform based on the idea of "eCare". CONFER uses XML (extensible markup language), a W3C-endorsed standard markup language, to define system data. The second system, CAREN, is a CBR prototype designed for palliative care patients in the cancer trajectory. CBR is a technique which tries to exploit the similarities of two situations and match decision-making to the best

  19. Reversal of Photon-Scattering Errors in Atomic Qubits

    NASA Astrophysics Data System (ADS)

    Akerman, N.; Kotler, S.; Glickman, Y.; Ozeri, R.

    2012-09-01

    Spontaneous photon scattering by an atomic qubit is a notable example of environment-induced error and is a fundamental limit to the fidelity of quantum operations. In the scattering process, the qubit loses its distinctive and coherent character owing to its entanglement with the photon. Using a single trapped ion, we show that by utilizing the information carried by the photon, we are able to coherently reverse this process and correct for the scattering error. We further used quantum process tomography to characterize the photon-scattering error and its correction scheme and demonstrate a correction fidelity greater than 85% whenever a photon was measured.

  20. The Study of Prescribing Errors Among General Dentists

    PubMed Central

    Araghi, Solmaz; Sharifi, Rohollah; Ahmadi, Goran; Esfehani, Mahsa; Rezaei, Fatemeh

    2016-01-01

    Introduction: In dentistry, medicines are often prescribed to relieve pain and treat infections. A wrong prescription can therefore lead to a range of problems, including inadequate pain relief, antimicrobial treatment failure, and the development of resistance to antibiotics. Materials and Methods: The aim of this cross-sectional study was to evaluate common errors in prescriptions written by general dentists in Kermanshah in 2014. Dentists received a questionnaire describing five hypothetical patients and were asked to write the appropriate prescription for each patient. Information about age, gender, work experience, and mode of admission to university was collected. The frequency of errors in the prescriptions was determined. Data were analyzed with SPSS 20 statistical software using the t-test, chi-square test, and Pearson correlation (P < 0.05). Results: A total of 180 dentists (62.6% male and 37.4% female) with a mean age of 39.2 ± 8.2 years participated in this study. Prescription errors included wrong pharmaceutical form (11%), failure to write the therapeutic dose (13%), wrong dose (14%), typos (15%), wrong prescription (23%), and wrong number of drugs (24%). Errors were most frequent in the administration of antiviral drugs (31%), followed by antifungal drugs (30%), analgesics (23%), and antibiotics (16%). Male dentists showed more frequent errors than female dentists (P=0.046). Error frequency had a statistically significant relationship with long work history (P<0.001) and with university admission outside the entrance examination (P=0.041). Conclusion: This study showed that the prescriptions written by the general dentists examined contained significant errors, and improving prescribing through continuing education of dentists is essential. PMID:26573049

  1. Estimation of rod scale errors in geodetic leveling

    USGS Publications Warehouse

    Craymer, Michael R.; Vaníček, Petr; Castle, Robert O.

    1995-01-01

    Comparisons among repeated geodetic levelings have often been used for detecting and estimating residual rod scale errors in leveled heights. Individual rod-pair scale errors are estimated by a two-step procedure using a model based on either differences in heights, differences in section height differences, or differences in section tilts. It is shown that the estimated rod-pair scale errors derived from each model are identical only when the data are correctly weighted, and the mathematical correlations are accounted for in the model based on heights. Analyses based on simple regressions of changes in height versus height can easily lead to incorrect conclusions. We also show that the statistically estimated scale errors are not a simple function of height, height difference, or tilt. The models are valid only when terrain slope is constant over adjacent pairs of setups (i.e., smoothly varying terrain). In order to discriminate between rod scale errors and vertical displacements due to crustal motion, the individual rod-pairs should be used in more than one leveling, preferably in areas of contrasting tectonic activity. From an analysis of 37 separately calibrated rod-pairs used in 55 levelings in southern California, we found eight statistically significant coefficients that could be reasonably attributed to rod scale errors, only one of which was larger than the expected random error in the applied calibration-based scale correction. However, significant differences with other independent checks indicate that caution should be exercised before accepting these results as evidence of scale error. Further refinements of the technique are clearly needed if the results are to be routinely applied in practice.

  2. Surface errors in the course of machining precision optics

    NASA Astrophysics Data System (ADS)

    Biskup, H.; Haberl, A.; Rascher, R.

    2015-08-01

    Precision optical components are usually machined by grinding and polishing in several steps of increasing accuracy. Spherical surfaces are finished in a last step with large tools to smooth the surface. The required surface accuracy of non-spherical surfaces can only be achieved with tools in point contact with the surface. So-called mid-spatial-frequency errors (MSFE) can accumulate in such zonal processes. This work examines the formation of surface errors from grinding to polishing by analyzing the surfaces at each machining step with non-contact interferometric methods. The errors on the surface can be classified as described in DIN 4760, whereby 2nd- to 3rd-order errors are the so-called MSFE. By appropriate filtering of the measured data, error frequencies can be suppressed so that only defined spatial frequencies are shown in the surface plot. Some frequencies may already be formed in the early machining steps such as grinding and main polishing. Additionally, it is known that MSFE can be produced by the process itself and by other side effects. Besides a description of surface errors based on the limits of measurement technologies, different formation mechanisms for selected spatial frequencies are presented. A correction is only possible with tools that have a lateral size below the wavelength of the error structure. The presented considerations may be used to develop proposals for handling surface errors.
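
    A hedged sketch of the filtering step, using a difference-of-Gaussians bandpass to isolate mid-spatial-frequency content from a synthetic surface map; the surface model and filter widths are illustrative assumptions, not the authors' procedure:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(8)

        n = 512
        x = np.linspace(0.0, 1.0, n)
        X, Y = np.meshgrid(x, x)
        surface = (2.0 * (X**2 + Y**2)                    # low-order form residual
                   + 0.05 * np.sin(2 * np.pi * 40 * X)    # zonal MSFE ripple
                   + 0.01 * rng.standard_normal((n, n)))  # high-frequency roughness

        smooth = gaussian_filter(surface, sigma=2)    # suppresses roughness
        form = gaussian_filter(surface, sigma=20)     # keeps only the form error
        msfe = smooth - form                          # isolated mid-frequency band

        print("rms of isolated MSFE band:", msfe.std())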

  3. Error propagation in energetic carrying capacity models

    USGS Publications Warehouse

    Pearse, Aaron T.; Stafford, Joshua D.

    2014-01-01

    Conservation objectives derived from carrying capacity models have been used to inform management of landscapes for wildlife populations. Energetic carrying capacity models are particularly useful in conservation planning for wildlife; these models use estimates of food abundance and energetic requirements of wildlife to target conservation actions. We provide a general method for incorporating a foraging threshold (i.e., density of food at which foraging becomes unprofitable) when estimating food availability with energetic carrying capacity models. We use a hypothetical example to describe how past methods for adjustment of foraging thresholds biased results of energetic carrying capacity models in certain instances. Adjusting foraging thresholds at the patch level of the species of interest provides results consistent with ecological foraging theory. Two case studies suggest variation in bias which, in certain instances, created large errors in conservation objectives and may have led to inefficient allocation of limited resources. Our results also illustrate how small errors or biases in application of input parameters, when extrapolated to large spatial extents, propagate errors in conservation planning and can have negative implications for target populations.
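
    A minimal sketch of the general method proposed: apply the foraging threshold at the patch level before summing available energy, rather than to the pooled mean density. All densities, areas, and energy values below are hypothetical:

        def available_energy(patches, threshold, energy_per_unit):
            """Energetic carrying capacity with a patch-level foraging threshold.
            patches: (food_density, area) pairs; threshold in density units."""
            return sum(max(0.0, density - threshold) * area * energy_per_unit
                       for density, area in patches)

        patches = [(50.0, 2.0), (5.0, 8.0)]   # kg/ha and ha, hypothetical wetlands
        print(available_energy(patches, threshold=10.0, energy_per_unit=3.0))

        # contrast: subtracting the threshold from the pooled mean density instead
        total_area = sum(a for _, a in patches)
        mean_density = sum(d * a for d, a in patches) / total_area
        print(max(0.0, mean_density - 10.0) * total_area * 3.0)  # biased estimate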

  4. Explaining errors in children's questions.

    PubMed

    Rowland, Caroline F

    2007-07-01

    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust, B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813-842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children's speech, and that errors occur when children resort to other operations to produce questions [e.g. Dabrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83-102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157-181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.

  5. Influence of Gender and Computer Teaching Efficacy on Computer Acceptance among Malaysian Student Teachers: An Extended Technology Acceptance Model

    ERIC Educational Resources Information Center

    Wong, Kung-Teck; Teo, Timothy; Russo, Sharon

    2012-01-01

    The purpose of this study is to validate the technology acceptance model (TAM) in an educational context and explore the role of gender and computer teaching efficacy as external variables. From the literature, it appeared that only limited studies had developed models to explain statistically the chain of influence of computer teaching efficacy…

  6. Error-associated behaviors and error rates for robotic geology

    NASA Technical Reports Server (NTRS)

    Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin

    2004-01-01

    This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill based decisions require the least cognitive effort and knowledge based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.

  7. North error estimation based on solar elevation errors in the third step of sky-polarimetric Viking navigation

    NASA Astrophysics Data System (ADS)

    Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Egri, Ádám; Horváth, Gábor

    2016-07-01

    The theory of sky-polarimetric Viking navigation has been widely accepted for decades without any information about the accuracy of this method. Previously, we have measured the accuracy of the first and second steps of this navigation method in psychophysical laboratory and planetarium experiments. Now, we have tested the accuracy of the third step in a planetarium experiment, assuming that the first and second steps are errorless. Using the fists of their outstretched arms, 10 test persons had to estimate the elevation angles (measured in numbers of fists and fingers) of black dots (representing the position of the occluded Sun) projected onto the planetarium dome. The test persons performed 2400 elevation estimations, 48% of which were more accurate than ±1°. We selected three test persons with the (i) largest and (ii) smallest elevation errors and (iii) highest standard deviation of the elevation error. From the errors of these three persons, we calculated their error function, from which the North errors (the angles with which they deviated from the geographical North) were determined for summer solstice and spring equinox, two specific dates of the Viking sailing period. The range of possible North errors ΔωN was the lowest and highest at low and high solar elevations, respectively. At high elevations, the maximal ΔωN was 35.6° and 73.7° at summer solstice and 23.8° and 43.9° at spring equinox for the best and worst test person (navigator), respectively. Thus, the best navigator was twice as good as the worst one. At solstice and equinox, high elevations occur the most frequently during the day, thus high North errors could occur more frequently than expected before. According to our findings, the ideal periods for sky-polarimetric Viking navigation are immediately after sunrise and before sunset, because the North errors are the lowest at low solar elevations.

  8. Statistical error analysis of surface-structure parameters determined by low-energy electron and positron diffraction: Data errors

    NASA Astrophysics Data System (ADS)

    Duke, C. B.; Lazarides, A.; Paton, A.; Wang, Y. R.

    1995-11-01

    An error-analysis procedure that gives statistically significant error estimates for surface-structure parameters extracted from analyses of measured low-energy electron and positron diffraction (LEED and LEPD) intensities is proposed. This procedure is applied to a surface-structure analysis of Cu(100) in which experimental data are simulated by adding Gaussian-distributed random errors to the calculated intensities for relaxed surface structures. Quantitative expressions for the variances in the surface-structural parameters are given and shown to obey the expected scaling laws for Gaussian errors in the experimental data. The procedure is shown to describe rigorously parameter errors in the limit that the errors in the measured intensities are described by uncorrelated Gaussian statistics. The analysis is valid for structure determinations that are of sufficient quality to admit errors that have magnitudes within the region of convergence of a linear theory that relates perturbations of diffracted intensities to perturbations in structural parameters. It is compared with previously proposed error-estimation techniques used in LEED, LEPD, and x-ray intensity analyses.

  9. Computer acceptance of older adults.

    PubMed

    Nägle, Sibylle; Schmidt, Ludger

    2012-01-01

    Even though computers play a massive role in everyday life of modern societies, older adults, and especially older women, are less likely to use a computer, and they perform fewer activities on it than younger adults. To get a better understanding of the factors affecting older adults' intention towards and usage of computers, the Unified Theory of Acceptance and Usage of Technology (UTAUT) was applied as part of a more extensive study with 52 users and non-users of computers, ranging in age from 50 to 90 years. The model covers various aspects of computer usage in old age via four key constructs, namely performance expectancy, effort expectancy, social influences, and facilitating conditions, as well as the moderating variables gender, age, experience, and voluntariness of use. Interestingly, next to performance expectancy, facilitating conditions showed the strongest correlation with use as well as with intention. Effort expectancy showed no significant correlation with the intention of older adults to use a computer.

  10. Force Limited Vibration Testing

    NASA Technical Reports Server (NTRS)

    Scharton, Terry; Chang, Kurng Y.

    2005-01-01

    This slide presentation reviews the concept and applications of force limited vibration testing. The goal of vibration testing of aerospace hardware is to identify problems that would result in flight failures. The commonly used aerospace vibration test uses artificially high shaker forces and responses at the resonance frequencies of the test item. It has therefore become common to limit the acceleration responses in the test to those predicted for flight; this requires an analysis of the acceleration response and requires placing accelerometers on the test item. With the advent of piezoelectric force gages, it has become possible to improve vibration testing further. The basic equations are reviewed. Force limits are analogous and complementary to the acceleration specifications used in conventional vibration testing. Just as the acceleration specification is the frequency spectrum envelope of the in-flight acceleration at the interface between the test item and the flight mounting structure, the force limit is the envelope of the in-flight force at the interface. In force limited vibration tests, both the acceleration and force specifications are needed, and the force specification is generally based on and proportional to the acceleration specification. Therefore, force limiting does not compensate for errors in the development of the acceleration specification, e.g., too much conservatism or the lack thereof; these errors will carry over into the force specification. Since in-flight vibratory force data are scarce, force limits are often derived from coupled system analyses and impedance information obtained from measurements or finite element models (FEM). Fortunately, data on the interface forces between systems and components are now available from system acoustic and vibration tests of development test models and from a few flight experiments. Semi-empirical methods of predicting force limits are currently being developed on the basis of the limited flight and system test
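
    A hedged sketch of a semi-empirical force-limit spectrum of the kind discussed: proportional to the acceleration specification below the first resonance and rolled off above it. The constant C, roll-off exponent, and all numbers are assumptions for illustration, not a flight specification:

        import numpy as np

        def force_limit_psd(freq, accel_psd, m0, f0, c=1.4, rolloff=2.0):
            """freq in Hz, accel_psd in g^2/Hz, m0 = test-item mass,
            f0 = first major resonance, c = empirical constant (assumed)."""
            f = np.asarray(freq, dtype=float)
            s_ff = (c * m0)**2 * np.asarray(accel_psd, dtype=float)
            above = f > f0
            s_ff[above] *= (f0 / f[above])**rolloff   # roll-off above the resonance
            return s_ff

        freq = np.array([20.0, 50.0, 100.0, 200.0, 400.0, 800.0, 2000.0])
        accel = np.full_like(freq, 0.04)     # flat 0.04 g^2/Hz spec (hypothetical)
        print(force_limit_psd(freq, accel, m0=25.0, f0=100.0))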

  11. Influence of Errors in Tactile Sensors on Some High Level Parameters Used for Manipulation with Robotic Hands

    PubMed Central

    Sánchez-Durán, José A.; Hidalgo-López, José A.; Castellanos-Ramos, Julián; Oballe-Peinado, Óscar; Vidal-Verdú, Fernando

    2015-01-01

    Tactile sensors suffer from many types of interference and errors like crosstalk, non-linearity, drift or hysteresis, therefore calibration should be carried out to compensate for these deviations. However, this procedure is difficult in sensors mounted on artificial hands for robots or prosthetics for instance, where the sensor usually bends to cover a curved surface. Moreover, the calibration procedure should be repeated often because the correction parameters are easily altered by time and surrounding conditions. Furthermore, this intensive and complex calibration could be less determinant, or at least simpler. This is because manipulation algorithms do not commonly use the whole data set from the tactile image, but only a few parameters such as the moments of the tactile image. These parameters could be changed less by common errors and interferences, or at least their variations could be in the order of those caused by accepted limitations, like reduced spatial resolution. This paper shows results from experiments to support this idea. The experiments are carried out with a high performance commercial sensor as well as with a low-cost error-prone sensor built with a common procedure in robotics. PMID:26295393
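
    A minimal sketch of the high-level parameters in question, the moments of a tactile image (a total-force proxy, the contact centroid, and the contact orientation), computed for a clean and a noise-perturbed synthetic pressure map; the array size and noise level are assumptions:

        import numpy as np

        def tactile_moments(img):
            """Raw and central moments of a tactile pressure map."""
            img = np.asarray(img, dtype=float)
            y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
            m00 = img.sum()                                   # ~ total applied force
            cx, cy = (x * img).sum() / m00, (y * img).sum() / m00
            mu20 = ((x - cx)**2 * img).sum() / m00
            mu02 = ((y - cy)**2 * img).sum() / m00
            mu11 = ((x - cx) * (y - cy) * img).sum() / m00
            theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)   # contact orientation
            return m00, (cx, cy), theta

        rng = np.random.default_rng(9)
        img = np.zeros((16, 16))
        img[5:9, 4:12] = 1.0                       # synthetic rectangular contact
        noisy = img + rng.normal(0, 0.05, img.shape)

        print(tactile_moments(img))
        print(tactile_moments(noisy))   # the moments shift far less than single taxels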

  12. Influence of Errors in Tactile Sensors on Some High Level Parameters Used for Manipulation with Robotic Hands.

    PubMed

    Sánchez-Durán, José A; Hidalgo-López, José A; Castellanos-Ramos, Julián; Oballe-Peinado, Óscar; Vidal-Verdú, Fernando

    2015-08-19

    Tactile sensors suffer from many types of interference and errors like crosstalk, non-linearity, drift or hysteresis, therefore calibration should be carried out to compensate for these deviations. However, this procedure is difficult in sensors mounted on artificial hands for robots or prosthetics for instance, where the sensor usually bends to cover a curved surface. Moreover, the calibration procedure should be repeated often because the correction parameters are easily altered by time and surrounding conditions. Furthermore, this intensive and complex calibration could be less determinant, or at least simpler. This is because manipulation algorithms do not commonly use the whole data set from the tactile image, but only a few parameters such as the moments of the tactile image. These parameters could be changed less by common errors and interferences, or at least their variations could be in the order of those caused by accepted limitations, like reduced spatial resolution. This paper shows results from experiments to support this idea. The experiments are carried out with a high performance commercial sensor as well as with a low-cost error-prone sensor built with a common procedure in robotics.

  13. Spacecraft and propulsion technician error

    NASA Astrophysics Data System (ADS)

    Schultz, Daniel Clyde

    Commercial aviation and commercial space similarly launch, fly, and land passenger vehicles. Unlike aviation, the U.S. government has not established maintenance policies for commercial space. This study conducted a mixed methods review of 610 U.S. space launches from 1984 through 2011, which included 31 failures. An analysis of the failure causal factors showed that human error accounted for 76% of those failures, which included workmanship error accounting for 29% of the failures. With the imminent future of commercial space travel, the increased potential for the loss of human life demands that changes be made to the standardized procedures, training, and certification to reduce human error and failure rates. Several recommendations were made by this study to the FAA's Office of Commercial Space Transportation, space launch vehicle operators, and maintenance technician schools in an effort to increase the safety of the space transportation passengers.

  14. Synthetic aperture interferometry: error analysis

    SciTech Connect

    Biswas, Amiya; Coupland, Jeremy

    2010-07-10

    Synthetic aperture interferometry (SAI) is a novel way of testing aspherics and has a potential for in-process measurement of aspherics [Appl. Opt. 42, 701 (2003)]. A method to measure steep aspherics using the SAI technique has been previously reported [Appl. Opt. 47, 1705 (2008)]. Here we investigate the computation of surface form using the SAI technique in different configurations and discuss the computational errors. A two-pass measurement strategy is proposed to reduce the computational errors, and a detailed investigation is carried out to determine the effect of alignment errors on the measurement process.

  15. Orbit IMU alignment: Error analysis

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

    A comprehensive accuracy analysis of orbit inertial measurement unit (IMU) alignments using the shuttle star trackers was completed and the results are presented. Monte Carlo techniques were used in a computer simulation of the IMU alignment hardware and software systems to: (1) determine the expected Space Transportation System 1 Flight (STS-1) manual mode IMU alignment accuracy; (2) investigate the accuracy of alignments in later shuttle flights when the automatic mode of star acquisition may be used; and (3) verify that an analytical model previously used for estimating the alignment error is a valid model. The analysis results do not differ significantly from expectations. The standard deviation in the IMU alignment error for STS-1 alignments was determined to be 68 arc seconds per axis. This corresponds to a 99.7% probability that the magnitude of the total alignment error is less than 258 arc seconds.
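
    The two quoted figures are mutually consistent under the standard assumption of independent zero-mean Gaussian per-axis errors: with a standard deviation of 68 arc seconds per axis, the magnitude of the three-axis error vector follows a Maxwell distribution whose 99.7th percentile is about 3.8 sigma, roughly 258 arc seconds. A quick Monte Carlo check (an illustration, not the original shuttle simulation):

      import numpy as np

      rng = np.random.default_rng(0)
      sigma = 68.0                                    # per-axis standard deviation, arc sec
      err = rng.normal(0.0, sigma, size=(1_000_000, 3))
      mag = np.linalg.norm(err, axis=1)               # total alignment error magnitude
      print(np.percentile(mag, 99.7))                 # ~256-258 arc seconds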

  16. Error Field Correction in ITER

    SciTech Connect

    Park, Jong-kyu; Boozer, Allen H.; Menard, Jonathan E.; Schaffer, Michael J.

    2008-05-22

    A new method for correcting magnetic field errors in the ITER tokamak is developed using the Ideal Perturbed Equilibrium Code (IPEC). The dominant external magnetic field for driving islands is shown to be localized to the outboard midplane for three ITER equilibria that represent the projected range of operational scenarios. The coupling matrices between the poloidal harmonics of the external magnetic perturbations and the resonant fields on the rational surfaces that drive islands are combined for different equilibria and used to determine an ordered list of the dominant errors in the external magnetic field. It is found that efficient and robust error field correction is possible with a fixed setting of the correction currents relative to the currents in the main coils across the range of ITER operating scenarios that was considered.

  17. Reward positivity: Reward prediction error or salience prediction error?

    PubMed

    Heydari, Sepideh; Holroyd, Clay B

    2016-08-01

    The reward positivity is a component of the human ERP elicited by feedback stimuli in trial-and-error learning and guessing tasks. A prominent theory holds that the reward positivity reflects a reward prediction error signal that is sensitive to outcome valence, being larger for unexpected positive events relative to unexpected negative events (Holroyd & Coles, 2002). Although the theory has found substantial empirical support, most of these studies have utilized either monetary or performance feedback to test the hypothesis. However, in apparent contradiction to the theory, a recent study found that unexpected physical punishments also elicit the reward positivity (Talmi, Atkinson, & El-Deredy, 2013). The authors of this report argued that the reward positivity reflects a salience prediction error rather than a reward prediction error. To investigate this finding further, in the present study participants navigated a virtual T maze and received feedback on each trial under two conditions. In a reward condition, the feedback indicated that they would either receive a monetary reward or not and in a punishment condition the feedback indicated that they would receive a small shock or not. We found that the feedback stimuli elicited a typical reward positivity in the reward condition and an apparently delayed reward positivity in the punishment condition. Importantly, this signal was more positive to the stimuli that predicted the omission of a possible punishment relative to stimuli that predicted a forthcoming punishment, which is inconsistent with the salience hypothesis. PMID:27184070
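
    The reward prediction error invoked by the Holroyd and Coles (2002) theory is conventionally formalized with a Rescorla-Wagner or temporal-difference style update (a generic textbook sketch, not code from either study):

      def update_value(value, reward, alpha=0.1):
          """One learning step: delta is positive for unexpected reward,
          negative for unexpected omission; alpha is the learning rate."""
          delta = reward - value          # signed reward prediction error
          return value + alpha * delta, delta

    A salience prediction error, by contrast, would scale with how unexpected an outcome is regardless of its valence, e.g. abs(reward - value), which is the distinction the T-maze experiment was designed to test.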

  19. 20 Tips to Help Prevent Medical Errors

    MedlinePlus

    20 Tips to Help Prevent Medical Errors: Patient Fact Sheet. Medical errors can occur anywhere in the health care ...

  20. Waste-acceptance criteria for radioactive waste disposal

    SciTech Connect

    Gilbert, T.L.; Meshkov, N.K.

    1987-02-01

    A method has been developed for establishing waste-acceptance criteria based on quantitative performance factors that characterize the confinement capabilities of a disposal facility for radioactive waste. The method starts from the objective of protecting public health and safety by assuring that disposal of the waste will not result in a radiation dose to any member of the general public, in either the short or long term, in excess of an established basic dose limit. A key aspect of the method is the introduction of a confinement factor that characterizes the overall confinement capability of a particular disposal facility and can be used for quantitative performance assessments as well as for establishing facility-specific waste-acceptance criteria. Confinement factors enable direct and simple conversion of a basic dose limit into waste-acceptance criteria, specified as concentration limits on radionuclides in the waste streams. Waste-acceptance criteria can be represented visually as activity/time plots for various waste streams. These plots show the concentrations of radionuclides in a waste stream as a function of time and permit a visual, quantitative assessment of long-term performance, relative risks from different radionuclides in the waste stream, and contributions from ingrowth. Application of the method to generic facility designs provides a rational basis for a waste classification system. 14 refs.

  1. Analysis of Medication Error Reports

    SciTech Connect

    Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.

    2004-11-15

    In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.

  2. How psychotherapists handle treatment errors – an ethical analysis

    PubMed Central

    2013-01-01

    Background Dealing with errors in psychotherapy is challenging, both ethically and practically. There is almost no empirical research on this topic. We aimed (1) to explore psychotherapists’ self-reported ways of dealing with an error made by themselves or by colleagues, and (2) to reconstruct their reasoning according to the two principle-based ethical approaches that are dominant in the ethics discourse of psychotherapy, Beauchamp & Childress (B&C) and Lindsay et al. (L). Methods We conducted 30 semi-structured interviews with 30 psychotherapists (physicians and non-physicians) and analysed the transcripts using qualitative content analysis. Answers were deductively categorized according to the two principle-based ethical approaches. Results Most psychotherapists reported that they preferred to disclose an error to the patient. They justified this by spontaneous intuitions and common values in psychotherapy, rarely using explicit ethical reasoning. The answers were attributed to the following categories with descending frequency: 1. Respect for patient autonomy (B&C; L), 2. Non-maleficence (B&C) and Responsibility (L), 3. Integrity (L), 4. Competence (L) and Beneficence (B&C). Conclusions Psychotherapists need specific ethical and communication training to complement and articulate their moral intuitions as a support when disclosing their errors to the patients. Principle-based ethical approaches seem to be useful for clarifying the reasons for disclosure. Further research should help to identify the most effective and acceptable ways of error disclosure in psychotherapy. PMID:24321503

  3. 48 CFR 2911.103 - Market acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... offered have either achieved commercial market acceptance or been satisfactorily supplied to an agency... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Market acceptance. 2911... DESCRIBING AGENCY NEEDS Selecting And Developing Requirements Documents 2911.103 Market acceptance....

  4. 48 CFR 11.103 - Market acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Market acceptance. 11.103... DESCRIBING AGENCY NEEDS Selecting and Developing Requirements Documents 11.103 Market acceptance. (a) Section... either— (i) Achieved commercial market acceptance; or (ii) Been satisfactorily supplied to an...

  5. 48 CFR 2911.103 - Market acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 7 2011-10-01 2011-10-01 false Market acceptance. 2911... DESCRIBING AGENCY NEEDS Selecting And Developing Requirements Documents 2911.103 Market acceptance. The... offered have either achieved commercial market acceptance or been satisfactorily supplied to an...

  6. Older Adults' Acceptance of Information Technology

    ERIC Educational Resources Information Center

    Wang, Lin; Rau, Pei-Luen Patrick; Salvendy, Gavriel

    2011-01-01

    This study investigated variables contributing to older adults' information technology acceptance through a survey, which was used to find factors explaining and predicting older adults' information technology acceptance behaviors. Four factors, including needs satisfaction, perceived usability, support availability, and public acceptance, were…

  7. Apollo experience report environmental acceptance testing

    NASA Technical Reports Server (NTRS)

    Laubach, C. H. M.

    1976-01-01

    Environmental acceptance testing was used extensively to screen selected spacecraft hardware for workmanship defects and manufacturing flaws. The minimum acceptance levels and durations and methods for their establishment are described. Component selection and test monitoring, as well as test implementation requirements, are included. Apollo spacecraft environmental acceptance test results are summarized, and recommendations for future programs are presented.

  8. 48 CFR 245.606-3 - Acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Acceptance. 245.606-3..., DEPARTMENT OF DEFENSE CONTRACT MANAGEMENT GOVERNMENT PROPERTY Reporting, Redistribution, and Disposal of Contractor Inventory 245.606-3 Acceptance. (a) If the schedules are acceptable, the plant clearance...

  9. 46 CFR 28.73 - Accepted organizations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 1 2012-10-01 2012-10-01 false Accepted organizations. 28.73 Section 28.73 Shipping... INDUSTRY VESSELS General Provisions § 28.73 Accepted organizations. An organization desiring to be designated by the Commandant as an accepted organization must request such designation in writing. As...

  10. 46 CFR 28.73 - Accepted organizations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 1 2011-10-01 2011-10-01 false Accepted organizations. 28.73 Section 28.73 Shipping... INDUSTRY VESSELS General Provisions § 28.73 Accepted organizations. An organization desiring to be designated by the Commandant as an accepted organization must request such designation in writing. As...

  11. 46 CFR 28.73 - Accepted organizations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false Accepted organizations. 28.73 Section 28.73 Shipping... INDUSTRY VESSELS General Provisions § 28.73 Accepted organizations. An organization desiring to be designated by the Commandant as an accepted organization must request such designation in writing. As...

  12. 46 CFR 28.73 - Accepted organizations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 1 2013-10-01 2013-10-01 false Accepted organizations. 28.73 Section 28.73 Shipping... INDUSTRY VESSELS General Provisions § 28.73 Accepted organizations. An organization desiring to be designated by the Commandant as an accepted organization must request such designation in writing. As...

  13. 46 CFR 28.73 - Accepted organizations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 1 2014-10-01 2014-10-01 false Accepted organizations. 28.73 Section 28.73 Shipping... INDUSTRY VESSELS General Provisions § 28.73 Accepted organizations. An organization desiring to be designated by the Commandant as an accepted organization must request such designation in writing. As...

  14. Toward a theoretical approach to medical error reporting system research and design.

    PubMed

    Karsh, Ben-Tzion; Escoto, Kamisha Hamilton; Beasley, John W; Holden, Richard J

    2006-05-01

    The release of the Institute of Medicine (Kohn et al., 2000) report "To Err is Human", brought attention to the problem of medical errors, which led to a concerted effort to study and design medical error reporting systems for the purpose of capturing and analyzing error data so that safety interventions could be designed. However, to make real gains in the efficacy of medical error or event reporting systems, it is necessary to begin developing a theory of reporting systems adoption and use and to understand how existing theories may play a role in explaining adoption and use. This paper presents the results of a 9-month study exploring the barriers and facilitators for the design of a statewide medical error reporting system and discusses how several existing theories of technology acceptance, adoption and implementation fit with many of the results. In addition we present an integrated theoretical model of medical error reporting system design and implementation. PMID:16182233

  15. Automatic-repeat-request error control schemes

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.; Miller, M. J.

    1983-01-01

    Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
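
    The basic ARQ idea the survey covers: the transmitter appends a detection code (here CRC-32 stands in for the linear block codes discussed), and the receiver requests retransmission whenever the check fails. A self-contained stop-and-wait sketch with a simulated noisy channel (illustrative framing, not from the report):

      import random
      import zlib

      def send_with_arq(payload: bytes, p_bit_error=0.001, max_tries=10):
          frame = payload + zlib.crc32(payload).to_bytes(4, "big")
          for attempt in range(1, max_tries + 1):
              received = bytearray(frame)                  # pass through a noisy channel
              for i in range(len(received) * 8):
                  if random.random() < p_bit_error:
                      received[i // 8] ^= 1 << (i % 8)     # independent bit flips
              data, crc = bytes(received[:-4]), bytes(received[-4:])
              if zlib.crc32(data).to_bytes(4, "big") == crc:
                  return data, attempt                     # receiver would send ACK
          raise RuntimeError("retry limit exceeded")       # persistent NAK

    Hybrid ARQ schemes add forward error correction so that only residual, uncorrectable errors trigger retransmission.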

  16. Measurement Accuracy Limitation Analysis on Synchrophasors

    SciTech Connect

    Zhao, Jiecheng; Zhan, Lingwei; Liu, Yilu; Qi, Hairong; Gracia, Jose R; Ewing, Paul D

    2015-01-01

    This paper analyzes the theoretical accuracy limitation of synchrophasor measurements of the phase angle and frequency of the power grid. Factors that cause measurement error are analyzed, including error sources in the instruments and in the power grid signal. Different scenarios of these factors are evaluated according to the normal operating status of power grid measurement. Based on the evaluation and simulation, the errors in phase angle and frequency caused by each factor are calculated and discussed.
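
    One of the signal-side factors such an analysis must cover is off-nominal frequency: a phasor estimator windowed at the nominal frequency acquires a phase-angle bias when the grid drifts. A toy numerical illustration (not the paper's model):

      import numpy as np

      fs, f0, f = 6000, 60.0, 60.5            # sample rate, nominal and actual frequency (Hz)
      n = int(fs / f0)                        # one nominal cycle
      t = np.arange(n) / fs
      x = np.cos(2 * np.pi * f * t + 0.3)     # signal with true phase 0.3 rad
      phasor = 2 / n * np.sum(x * np.exp(-2j * np.pi * f0 * t))
      print(np.angle(phasor) - 0.3)           # ~0.03 rad of angle bias plus leakage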

  17. System-related factors contributing to diagnostic errors.

    PubMed

    Thammasitboon, Satid; Thammasitboon, Supat; Singhal, Geeta

    2013-10-01

    Several studies in primary care, internal medicine, and emergency departments show that rates of errors in test requests and result interpretations are unacceptably high and translate into missed, delayed, or erroneous diagnoses. Ineffective follow-up of diagnostic test results could lead to patient harm if appropriate therapeutic interventions are not delivered in a timely manner. The frequency of system-related factors that contribute directly to diagnostic errors depends on the types and sources of errors involved. Recent studies reveal that the errors and patient harm in the diagnostic testing loop have occurred mainly at the pre- and post-analytic phases, which are directed primarily by clinicians who may have limited expertise in the rapidly expanding field of clinical pathology. These errors may include inappropriate test requests, failure/delay in receiving results, and erroneous interpretation and application of test results to patient care. Efforts to address system-related factors often focus on technical errors in laboratory testing or failures in delivery of intended treatment. System-improvement strategies related to diagnostic errors tend to focus on technical aspects of laboratory medicine or delivery of treatment after completion of the diagnostic process. System failures and cognitive errors, more often than not, coexist and together contribute to the incidents of errors in diagnostic process and in laboratory testing. The use of highly structured hand-off procedures and pre-planned follow-up for any diagnostic test could improve efficiency and reliability of the follow-up process. Many feedback pathways should be established so that providers can learn if or when a diagnosis is changed. Patients can participate in the effort to reduce diagnostic errors. Providers should educate their patients about diagnostic probabilities and uncertainties. The patient-safety strategies focusing on the interface between diagnostic system and therapeutic

  18. Management of human error by design

    NASA Technical Reports Server (NTRS)

    Wiener, Earl

    1988-01-01

    Design-induced errors and error prevention as well as the concept of lines of defense against human error are discussed. The concept of human error prevention, whose main focus has been on hardware, is extended to other features of the human-machine interface vulnerable to design-induced errors. In particular, it is pointed out that human factors and human error prevention should be part of the process of transport certification. Also, the concept of error tolerant systems is considered as a last line of defense against error.

  19. Reducing medical errors and adverse events.

    PubMed

    Pham, Julius Cuong; Aswani, Monica S; Rosen, Michael; Lee, HeeWon; Huddle, Matthew; Weeks, Kristina; Pronovost, Peter J

    2012-01-01

    Medical errors account for ∼98,000 deaths per year in the United States. They increase disability and costs and decrease confidence in the health care system. We review several important types of medical errors and adverse events. We discuss medication errors, healthcare-acquired infections, falls, handoff errors, diagnostic errors, and surgical errors. We describe the impact of these errors, review causes and contributing factors, and provide an overview of strategies to reduce these events. We also discuss teamwork/safety culture, an important aspect in reducing medical errors.

  20. Errors in airborne flux measurements

    NASA Astrophysics Data System (ADS)

    Mann, Jakob; Lenschow, Donald H.

    1994-07-01

    We present a general approach for estimating systematic and random errors in eddy correlation fluxes and flux gradients measured by aircraft in the convective boundary layer as a function of the length of the flight leg, or of the cutoff wavelength of a highpass filter. The estimates are obtained from empirical expressions for various length scales in the convective boundary layer and they are experimentally verified using data from the First ISLSCP (International Satellite Land Surface Climatology Experiment) Field Experiment (FIFE), the Air Mass Transformation Experiment (AMTEX), and the Electra Radome Experiment (ELDOME). We show that the systematic flux and flux gradient errors can be important if fluxes are calculated from a set of several short flight legs or if the vertical velocity and scalar time series are high-pass filtered. While the systematic error of the flux is usually negative, that of the flux gradient can change sign. For example, for temperature flux divergence the systematic error changes from negative to positive about a quarter of the way up in the convective boundary layer.
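
    The systematic error mechanism is easy to reproduce: eddy-correlation fluxes are covariances, and high-pass filtering (or short flight legs) removes the large-eddy scales that carry part of the real flux. A toy demonstration with a shared low-frequency component (illustrative only, not the authors' error model):

      import numpy as np

      rng = np.random.default_rng(1)
      n = 20_000
      t = np.arange(n)
      large_eddy = np.sin(2 * np.pi * t / 5_000)         # shared low-frequency motion
      w = large_eddy + rng.normal(0, 1, n)               # vertical velocity
      s = large_eddy + rng.normal(0, 1, n)               # scalar, e.g. temperature

      def flux(a, b):                                    # eddy-correlation flux
          return np.mean((a - a.mean()) * (b - b.mean()))

      def highpass(x, window=500):                       # crude running-mean filter
          return x - np.convolve(x, np.ones(window) / window, mode="same")

      print(flux(w, s))                      # ~0.5: full covariance
      print(flux(highpass(w), highpass(s)))  # near zero: real flux removed, a negative bias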

  1. Sampling Errors of Variance Components.

    ERIC Educational Resources Information Center

    Sanders, Piet F.

    A study on sampling errors of variance components was conducted within the framework of generalizability theory by P. L. Smith (1978). The study used an intuitive approach for solving the problem of how to allocate the number of conditions to different facets in order to produce the most stable estimate of the universe score variance. Optimization…

  2. Measurement error in geometric morphometrics.

    PubMed

    Fruciano, Carmelo

    2016-06-01

    Geometric morphometrics, a set of methods for the statistical analysis of shape once hailed as a revolutionary advancement in the analysis of morphology, is now mature and routinely used in ecology and evolution. However, a factor often disregarded in empirical studies is the presence and the extent of measurement error. This is potentially a very serious issue because random measurement error can inflate the amount of variance and, since many statistical analyses are based on the amount of "explained" relative to "residual" variance, can result in loss of statistical power. On the other hand, systematic bias can affect statistical analyses by biasing the results (i.e. variation due to bias is incorporated in the analysis and treated as biologically meaningful variation). Here, I briefly review common sources of error in geometric morphometrics. I then review the most commonly used methods to measure and account for both random and non-random measurement error, providing a worked example using a real dataset.

  3. The Errors of Our Ways

    ERIC Educational Resources Information Center

    Kane, Michael

    2011-01-01

    Errors don't exist in our data, but they serve a vital function. Reality is complicated, but our models need to be simple in order to be manageable. We assume that attributes are invariant over some conditions of observation, and once we do that we need some way of accounting for the variability in observed scores over these conditions of…

  4. Typical errors of ESP users

    NASA Astrophysics Data System (ADS)

    Eremina, Svetlana V.; Korneva, Anna A.

    2004-07-01

    The paper presents an analysis of the errors made by ESP (English for specific purposes) users which can be considered typical. They occur as a result of misuse of the resources of English grammar and tend to resist correction. Their origin and places of occurrence are also discussed.

  5. Reduced discretization error in HZETRN

    NASA Astrophysics Data System (ADS)

    Slaba, Tony C.; Blattnig, Steve R.; Tweed, John

    2013-02-01

    The deterministic particle transport code HZETRN is an efficient analysis tool for studying the effects of space radiation on humans, electronics, and shielding materials. In a previous work, numerical methods in the code were reviewed, and new methods were developed that further improved efficiency and reduced overall discretization error. It was also shown that the remaining discretization error could be attributed to low energy light ions (A < 4) with residual ranges smaller than the physical step-size taken by the code. Accurately resolving the spectrum of low energy light particles is important in assessing risk associated with astronaut radiation exposure. In this work, modifications to the light particle transport formalism are presented that accurately resolve the spectrum of low energy light ion target fragments. The modified formalism is shown to significantly reduce overall discretization error and allows a physical approximation to be removed. For typical step-sizes and energy grids used in HZETRN, discretization errors for the revised light particle transport algorithms are shown to be less than 4% for aluminum and water shielding thicknesses as large as 100 g/cm2 exposed to both solar particle event and galactic cosmic ray environments.

  6. Amplify Errors to Minimize Them

    ERIC Educational Resources Information Center

    Stewart, Maria Shine

    2009-01-01

    In this article, the author offers her experience of modeling mistakes and writing spontaneously in the computer classroom to get students' attention and elicit their editorial response. She describes how she taught her class about major sentence errors--comma splices, run-ons, and fragments--through her Sentence Meditation exercise, a rendition…

  7. Theory of Test Translation Error

    ERIC Educational Resources Information Center

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel

    2009-01-01

    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  8. Error Patterns of Bilingual Readers.

    ERIC Educational Resources Information Center

    Gonzalez, Phillip C.; Elijah, David V.

    1979-01-01

    In a study of developmental reading behaviors, errors of 75 Spanish-English bilingual students (grades 2-9) on the McLeod GAP Comprehension Test were categorized in an attempt to ascertain a pattern of language difficulties. Contrary to previous research, bilingual readers minimally used native language cues in reading second language materials.…

  9. What Is a Reading Error?

    ERIC Educational Resources Information Center

    Labov, William; Baker, Bettina

    2010-01-01

    Early efforts to apply knowledge of dialect differences to reading stressed the importance of the distinction between differences in pronunciation and mistakes in reading. This study develops a method of estimating the probability that a given oral reading that deviates from the text is a true reading error by observing the semantic impact of the…

  10. Having Fun with Error Analysis

    ERIC Educational Resources Information Center

    Siegel, Peter

    2007-01-01

    We present a fun activity that can be used to introduce students to error analysis: the M&M game. Students are told to estimate the number of individual candies plus uncertainty in a bag of M&M's. The winner is the group whose estimate brackets the actual number with the smallest uncertainty. The exercise produces enthusiastic discussions and…

  11. Input/output error analyzer

    NASA Technical Reports Server (NTRS)

    Vaughan, E. T.

    1977-01-01

    Program aids in equipment assessment. Independent assembly-language utility program is designed to operate under level 27 or 31 of EXEC 8 Operating System. It scans user-selected portions of the system log file, whether located on tape or mass storage, and searches for and processes I/O error (type 6) entries.

  12. A brief history of error.

    PubMed

    Murray, Andrew W

    2011-10-01

    The spindle checkpoint monitors chromosome alignment on the mitotic and meiotic spindle. When the checkpoint detects errors, it arrests progress of the cell cycle while it attempts to correct the mistakes. This perspective will present a brief history summarizing what we know about the checkpoint, and a list of questions we must answer before we understand it. PMID:21968991

  13. Reduced discretization error in HZETRN

    SciTech Connect

    Slaba, Tony C.; Blattnig, Steve R.; Tweed, John

    2013-02-01

    The deterministic particle transport code HZETRN is an efficient analysis tool for studying the effects of space radiation on humans, electronics, and shielding materials. In a previous work, numerical methods in the code were reviewed, and new methods were developed that further improved efficiency and reduced overall discretization error. It was also shown that the remaining discretization error could be attributed to low energy light ions (A < 4) with residual ranges smaller than the physical step-size taken by the code. Accurately resolving the spectrum of low energy light particles is important in assessing risk associated with astronaut radiation exposure. In this work, modifications to the light particle transport formalism are presented that accurately resolve the spectrum of low energy light ion target fragments. The modified formalism is shown to significantly reduce overall discretization error and allows a physical approximation to be removed. For typical step-sizes and energy grids used in HZETRN, discretization errors for the revised light particle transport algorithms are shown to be less than 4% for aluminum and water shielding thicknesses as large as 100 g/cm2 exposed to both solar particle event and galactic cosmic ray environments.

  14. FRamework Assessing Notorious Contributing Influences for Error (FRANCIE): Perspective on Taxonomy Development to Support Error Reporting and Analysis

    SciTech Connect

    Lon N. Haney; David I. Gertman

    2003-04-01

    Beginning in the 1980s a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables often was lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid 90s Boeing, America West Airlines, NASA Ames Research Center and INEEL partnered in a NASA sponsored Advanced Concepts grant to: assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included the need for a method to identify and prioritize task and contextual characteristics affecting human reliability. Other needs identified included developing comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling is offered as a means to help direct useful data collection strategies.

  15. Acceptability of GM foods among Pakistani consumers.

    PubMed

    Ali, Akhter; Rahut, Dil Bahadur; Imtiaz, Muhammad

    2016-04-01

    In Pakistan, the majority of consumers do not have information about genetically modified (GM) foods, and few studies in developing countries, Pakistan in particular, have focused on consumer acceptance of GM foods. Using a comprehensive primary dataset collected from 320 consumers in Pakistan in 2013, this study analyzes the determinants of consumers' acceptance of GM foods. The data were analyzed using a bivariate probit model and censored least absolute deviation (CLAD) models. The empirical results indicated that urban consumers are more aware of GM foods than rural consumers. Acceptance of GM foods was higher among female consumers than among male consumers. In addition, older consumers were more willing to accept GM foods than young consumers. The acceptability of GM foods was also higher among wealthier households. Low price is the key factor leading to the acceptability of GM foods, which also reduces the perceived risks among Pakistani consumers. PMID:27494790

  17. [On the applied medicolegal significance of the notion of "medical error"].

    PubMed

    Iurasov, V V; Smakhtin, R E

    2014-01-01

    The current practice of expert assessment of the adequacy of the organization and provision of medical aid introduces a new aspect of the notion of "medical error", a term widely employed in the medical profession, among lawyers, patients, and their relatives, as well as in the mass media. No universally accepted meaning of this notion has thus far been proposed. The authors consider the medicolegal concept of "medical error" with a view to reconciling the contradictory opinions.

  18. Three-Dimensional Turbulent RANS Adjoint-Based Error Correction

    NASA Technical Reports Server (NTRS)

    Park, Michael A.

    2003-01-01

    Engineering problems commonly require functional outputs of computational fluid dynamics (CFD) simulations with specified accuracy. These simulations are performed with limited computational resources. Computable error estimates offer the possibility of quantifying accuracy on a given mesh and predicting a fine grid functional on a coarser mesh. Such an estimate can be computed by solving the flow equations and the associated adjoint problem for the functional of interest. An adjoint-based error correction procedure is demonstrated for transonic inviscid and subsonic laminar and turbulent flow. A mesh adaptation procedure is formulated to target uncertainty in the corrected functional and terminate when the error remaining in the calculation is less than a user-specified error tolerance. This adaptation scheme is shown to yield anisotropic meshes with corrected functionals that are more accurate for a given number of grid points than isotropically adapted and uniformly refined grids.

  19. Analysis of Measurement Error and Estimator Shape in Three-Point Hydraulic Gradient Estimators

    NASA Astrophysics Data System (ADS)

    McKenna, S. A.; Wahi, A. K.

    2003-12-01

    Three spatially separated measurements of head provide a means of estimating the magnitude and orientation of the hydraulic gradient. Previous work with three-point estimators has focused on the effect of the size (area) of the three-point estimator and measurement error on the final estimates of the gradient magnitude and orientation in laboratory and field studies (Mizell, 1980; Silliman and Frost, 1995; Silliman and Mantz, 2000; Ruskauff and Rumbaugh, 1996). However, a systematic analysis of the combined effects of measurement error, estimator shape and estimator orientation relative to the gradient orientation has not previously been conducted. Monte Carlo simulation with an underlying assumption of a homogeneous transmissivity field is used to examine the effects of uncorrelated measurement error on a series of eleven different three-point estimators having the same size but different shapes as a function of the orientation of the true gradient. Results show that the variance in the estimate of both the magnitude and the orientation increase linearly with the increase in measurement error in agreement with the results of stochastic theory for estimators that are small relative to the correlation length of transmissivity (Mizell, 1980). Three-point estimator shapes with base to height ratios between 0.5 and 5.0 provide accurate estimates of magnitude and orientation across all orientations of the true gradient. As an example, these results are applied to data collected from a monitoring network of 25 wells at the WIPP site during two different time periods. The simulation results are used to reduce the set of all possible combinations of three wells to those combinations with acceptable measurement errors relative to the amount of head drop across the estimator and base to height ratios between 0.5 and 5.0. These limitations reduce the set of all possible well combinations by 98 percent and show that size alone as defined by triangle area is not a valid
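
    The three-point estimator itself is a plane fit: given heads at three non-collinear well locations, solve h = a + b*x + c*y and read the gradient off the coefficients. A minimal sketch (illustrative, with a compass-style azimuth for the down-gradient flow direction, taking y as north):

      import numpy as np

      def three_point_gradient(pts):
          """pts: three (x, y, head) tuples; returns gradient magnitude and azimuth."""
          pts = np.asarray(pts, dtype=float)
          A = np.column_stack([np.ones(3), pts[:, 0], pts[:, 1]])
          a, b, c = np.linalg.solve(A, pts[:, 2])          # h = a + b*x + c*y
          magnitude = np.hypot(b, c)
          azimuth = np.degrees(np.arctan2(-b, -c)) % 360   # flow is down-gradient
          return magnitude, azimuth

    Head measurement errors enter through the right-hand side of this small solve, which is why small head drops across the estimator and badly shaped (very flat or very elongated) triangles amplify them.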

  20. The Impact of Medical Interpretation Method on Time and Errors

    PubMed Central

    Kapelusznik, Luciano; Prakash, Kavitha; Gonzalez, Javier; Orta, Lurmag Y.; Tseng, Chi-Hong; Changrani, Jyotsna

    2007-01-01

    Background Twenty-two million Americans have limited English proficiency. Interpreting for limited English proficient patients is intended to enhance communication and delivery of quality medical care. Objective Little is known about the impact of various interpreting methods on interpreting speed and errors. This investigation addresses this important gap. Design Four scripted clinical encounters were used to enable the comparison of equivalent clinical content. These scripts were run across four interpreting methods, including remote simultaneous, remote consecutive, proximate consecutive, and proximate ad hoc interpreting. The first 3 methods utilized professional, trained interpreters, whereas the ad hoc method utilized untrained staff. Measurements Audiotaped transcripts of the encounters were coded, using a prespecified algorithm to determine medical error and linguistic error, by coders blinded to the interpreting method. Encounters were also timed. Results Remote simultaneous medical interpreting (RSMI) encounters averaged 12.72 vs 18.24 minutes for the next fastest mode (proximate ad hoc) (p = 0.002). There were 12 times more medical errors of moderate or greater clinical significance among utterances in non-RSMI encounters compared to RSMI encounters (p = 0.0002). Conclusions Whereas limited by the small number of interpreters involved, our study found that RSMI resulted in fewer medical errors and was faster than non-RSMI methods of interpreting. PMID:17957418

  1. Acceptance in Romantic Relationships: The Frequency and Acceptability of Partner Behavior Inventory

    ERIC Educational Resources Information Center

    Doss, Brian D.; Christensen, Andrew

    2006-01-01

    Despite the recent emphasis on acceptance in romantic relationships, no validated measure of relationship acceptance presently exists. To fill this gap, the 20-item Frequency and Acceptability of Partner Behavior Inventory (FAPBI; A. Christensen & N. S. Jacobson, 1997) was created to assess separately the acceptability and frequency of both…

  2. Students' Formalising Process of the Limit Concept

    ERIC Educational Resources Information Center

    Kabael, Tangul

    2014-01-01

    The concept of limit is the foundation for many concepts such as the derivative and the integral in advanced mathematics. The limit concept has been a research topic in mathematics education for years and in the literature it is a broadly accepted fact that the limit is a difficult notion for most students. The study presented in this article is a…

  3. Latent human error analysis and efficient improvement strategies by fuzzy TOPSIS in aviation maintenance tasks.

    PubMed

    Chiu, Ming-Chuan; Hsieh, Min-Chih

    2016-05-01

    The purposes of this study were to develop a latent human error analysis process, to explore the factors of latent human error in aviation maintenance tasks, and to provide an efficient improvement strategy for addressing those errors. First, we used HFACS and RCA to define the error factors related to aviation maintenance tasks. Fuzzy TOPSIS with four criteria was applied to evaluate the error factors. Results show that 1) adverse physiological states, 2) physical/mental limitations, and 3) coordination, communication, and planning are the factors related to airline maintenance tasks that could be addressed easily and efficiently. This research establishes a new analytic process for investigating latent human error and provides a strategy for analyzing human error using fuzzy TOPSIS. Our analysis process complements shortages in existing methodologies by incorporating improvement efficiency, and it enhances the depth and broadness of human error analysis methodology. PMID:26851473

  5. GP-B error modeling and analysis

    NASA Technical Reports Server (NTRS)

    Hung, J. C.

    1982-01-01

    Individual source errors and their effects on the accuracy of the Gravity Probe B (GP-B) experiment were investigated. Emphasis was placed on: (1) the refinement of source error identification and classifications of error according to their physical nature; (2) error analysis for the GP-B data processing; and (3) measurement geometry for the experiment.

  6. Error estimation for ORION baseline vector determination

    NASA Technical Reports Server (NTRS)

    Wu, S. C.

    1980-01-01

    Effects of error sources on Operational Radio Interferometry Observing Network (ORION) baseline vector determination are studied. Partial derivatives of delay observations with respect to each error source are formulated. Covariance analysis is performed to estimate the contribution of each error source to baseline vector error. System design parameters such as antenna sizes, system temperatures and provision for dual frequency operation are discussed.

  7. A simple double error correcting BCH codes

    NASA Astrophysics Data System (ADS)

    Sinha, V.

    1983-07-01

    With the availability of various cost-effective digital hardware components, error correcting codes can be realized in hardware in a simpler fashion than was hitherto possible. Instead of computing error locations in BCH decoding by the Berlekamp algorithm, syndrome-to-error-location mapping using an EPROM for a double error correcting BCH code is described. The processing is parallel instead of serial. Possible applications are given.
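
    The described decoder is table lookup: the syndrome of the received word addresses a precomputed syndrome-to-error-pattern memory, which is what the EPROM stores; all error patterns of weight two or less have distinct syndromes because the code's minimum distance is 5. A software stand-in using the (15,7) double error correcting BCH code for concreteness (the abstract does not give the paper's exact code parameters):

      from itertools import combinations

      G, N, R = 0b111010001, 15, 8       # g(x) = x^8+x^7+x^6+x^4+1 for the (15,7) BCH code

      def poly_mod(v, g=G, r=R):         # remainder of v(x) mod g(x) over GF(2)
          for i in range(v.bit_length() - 1, r - 1, -1):
              if v >> i & 1:
                  v ^= g << (i - r)
          return v

      table = {0: 0}                     # syndrome -> error pattern (the EPROM contents)
      for k in (1, 2):
          for positions in combinations(range(N), k):
              e = sum(1 << p for p in positions)
              table[poly_mod(e)] = e

      def decode(received):              # correct up to two bit errors with one lookup
          return received ^ table.get(poly_mod(received), 0)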

  8. Discretization vs. Rounding Error in Euler's Method

    ERIC Educational Resources Information Center

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…

  9. Error Analysis in the Introductory Physics Laboratory.

    ERIC Educational Resources Information Center

    Deacon, Christopher G.

    1992-01-01

    Describes two simple methods of error analysis: (1) combining errors in the measured quantities; and (2) calculating the error or uncertainty in the slope of a straight-line graph. Discusses significance of the error in the comparison of experimental results with some known value. (MDH)
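
    Both methods fit in a few lines: for independent errors, relative uncertainties combine in quadrature, and the slope uncertainty falls out of the least-squares covariance matrix. A generic sketch of the two techniques the article describes (invented numbers, not the article's own examples):

      import numpy as np

      # (1) Combining errors: z = x * y with independent uncertainties dx, dy
      x, dx, y, dy = 10.0, 0.2, 5.0, 0.1
      z = x * y
      dz = z * np.hypot(dx / x, dy / y)               # relative errors add in quadrature

      # (2) Uncertainty in the slope of a straight-line graph
      xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
      ys = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
      (slope, intercept), cov = np.polyfit(xs, ys, 1, cov=True)
      print(f"slope = {slope:.3f} +/- {np.sqrt(cov[0, 0]):.3f}")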

  10. Medical Errors: Tips to Help Prevent Them

    MedlinePlus

    Medical Errors: Tips to Help Prevent Them. Medical errors are one of the nation's ... single most important way you can help to prevent errors is to be an active member of ...

  11. Error-Related Psychophysiology and Negative Affect

    ERIC Educational Resources Information Center

    Hajcak, G.; McDonald, N.; Simons, R.F.

    2004-01-01

    The error-related negativity (ERN/Ne) and error positivity (Pe) have been associated with error detection and response monitoring. More recently, heart rate (HR) and skin conductance (SC) have also been shown to be sensitive to the internal detection of errors. An enhanced ERN has consistently been observed in anxious subjects and there is some…

  12. Reducing collective quantum state rotation errors with reversible dephasing

    SciTech Connect

    Cox, Kevin C.; Norcia, Matthew A.; Weiner, Joshua M.; Bohnet, Justin G.; Thompson, James K.

    2014-12-29

    We demonstrate that reversible dephasing via inhomogeneous broadening can greatly reduce collective quantum state rotation errors, and observe the suppression of rotation errors by more than 21 dB in the context of collective population measurements of the spin states of an ensemble of 2.1×10⁵ laser cooled and trapped ⁸⁷Rb atoms. The large reduction in rotation noise enables direct resolution of spin state populations 13(1) dB below the fundamental quantum projection noise limit. Further, the spin state measurement projects the system into an entangled state with 9.5(5) dB of directly observed spectroscopic enhancement (squeezing) relative to the standard quantum limit, whereas no enhancement would have been obtained without the suppression of rotation errors.
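
    For readers less used to decibel bookkeeping, the quoted figures convert to linear noise-variance factors as 10^(dB/10) (plain arithmetic, not code from the experiment):

      for label, db in [("rotation-error suppression", 21.0),
                        ("below quantum projection noise", 13.0),
                        ("spectroscopic enhancement", 9.5)]:
          print(f"{label}: {db} dB = {10 ** (db / 10):.0f}x in variance")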

  13. EC: an efficient error correction algorithm for short reads

    PubMed Central

    2015-01-01

    Background In highly parallel next-generation sequencing (NGS) techniques millions to billions of short reads are produced from a genomic sequence in a single run. Due to the limitation of the NGS technologies, there could be errors in the reads. The error rate of the reads can be reduced with trimming and by correcting the erroneous bases of the reads. It helps to achieve high quality data and the computational complexity of many biological applications will be greatly reduced if the reads are first corrected. We have developed a novel error correction algorithm called EC and compared it with four other state-of-the-art algorithms using both real and simulated sequencing reads. Results We have done extensive and rigorous experiments that reveal that EC is indeed an effective, scalable, and efficient error correction tool. Real reads that we have employed in our performance evaluation are Illumina-generated short reads of various lengths. Six experimental datasets we have utilized are taken from the Sequence Read Archive (SRA) at NCBI. The simulated reads are obtained by picking substrings from random positions of reference genomes. To introduce errors, some of the bases of the simulated reads are changed to other bases with some probabilities. Conclusions Error correction is a vital problem in biology especially for NGS data. In this paper we present a novel algorithm, called Error Corrector (EC), for correcting substitution errors in biological sequencing reads. We plan to investigate the possibility of employing the techniques introduced in this research paper to handle insertion and deletion errors also. Software availability The implementation is freely available for non-commercial purposes. It can be downloaded from: http://engr.uconn.edu/~rajasek/EC.zip. PMID:26678663
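
    The abstract does not spell out EC's algorithm, but substitution correctors of this kind typically work from the k-mer spectrum: count all k-mers across the reads, call those above a coverage threshold "solid", and repair a base when a substitution makes every k-mer covering it solid. A generic sketch of that family of methods (not EC's published algorithm):

      from collections import Counter

      def kmer_counts(reads, k=15):
          counts = Counter()
          for r in reads:
              for i in range(len(r) - k + 1):
                  counts[r[i:i + k]] += 1
          return counts

      def correct_read(read, counts, k=15, solid=3):
          """Substitute a base if that makes all k-mers covering it 'solid'."""
          seq = list(read)
          for i in range(len(seq)):
              starts = range(max(0, i - k + 1), min(len(seq) - k, i) + 1)
              def weak():
                  return any(counts["".join(seq[j:j + k])] < solid for j in starts)
              if not weak():
                  continue
              original = seq[i]
              for base in "ACGT":
                  if base == original:
                      continue
                  seq[i] = base
                  if not weak():
                      break                  # this substitution fixed the weak k-mers
                  seq[i] = original          # otherwise revert and try the next base
          return "".join(seq)

    Trimming, also mentioned in the abstract, would additionally drop read ends whose k-mers remain weak after correction.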

  14. ERROR ANALYSIS OF COMPOSITE SHOCK INTERACTION PROBLEMS.

    SciTech Connect

    Lee, T.; Mu, Y.; Zhao, M.; Glimm, J.; Li, X.; Ye, K.

    2004-07-26

    We propose statistical models of uncertainty and error in numerical solutions. To represent errors efficiently in shock physics simulations we propose a composition law. The law allows us to estimate errors in the solutions of composite problems in terms of the errors from simpler ones as discussed in a previous paper. In this paper, we conduct a detailed analysis of the errors. One of our goals is to understand the relative magnitude of the input uncertainty vs. the errors created within the numerical solution. In more detail, we wish to understand the contribution of each wave interaction to the errors observed at the end of the simulation.

  15. Report on errors in pretransfusion testing from a tertiary care center: A step toward transfusion safety

    PubMed Central

    Sidhu, Meena; Meenia, Renu; Akhter, Naveen; Sawhney, Vijay; Irm, Yasmeen

    2016-01-01

    Introduction: Errors in the process of pretransfusion testing for blood transfusion can occur at any stage, from collection of the sample to administration of the blood component. The present study was conducted to analyze the errors that threaten patients’ transfusion safety and the actual harm/serious adverse events that occurred to patients due to these errors. Materials and Methods: The prospective study was conducted in the Department of Transfusion Medicine, Shri Maharaja Gulab Singh Hospital, Government Medical College, Jammu, India, from January 2014 to December 2014, a period of 1 year. Errors were defined as any deviation from established policies and standard operating procedures. A near-miss event was defined as an error that did not reach the patient. Location and time of occurrence of the events/errors were also noted. Results: A total of 32,672 requisitions for the transfusion of blood and blood components were received for typing and cross-matching. Out of these, 26,683 products were issued to the various clinical departments. A total of 2,229 errors were detected over the 1-year period. Near-miss events constituted 53% of the errors, and actual harmful events due to errors occurred in 0.26% of the patients. The most frequent errors in clinical services were sample labeling errors (2.4% of all requisitions received), inappropriate requests for blood components (2%), and information on requisition forms not matching that on the sample (1.5%). In transfusion services, the most common event was accepting a sample in error, with a frequency of 0.5% of all requisitions. ABO-incompatible hemolytic reactions were the most frequent harmful event, with a frequency of 2.2 per 10,000 transfusions. Conclusion: Sample labeling errors, inappropriate requests, and samples received in error were the most frequent high-risk errors. PMID:27011670

  16. Acoustic evidence for phonologically mismatched speech errors.

    PubMed

    Gormley, Andrea

    2015-04-01

    Speech errors are generally said to accommodate to their new phonological context. This accommodation has been validated by several transcription studies. The transcription methodology is not the best choice for detecting errors at this level, however, as this type of error can be difficult to perceive. This paper presents an acoustic analysis of speech errors that uncovers non-accommodated or mismatch errors. A mismatch error is a sub-phonemic error that results in an incorrect surface phonology. This type of error could arise during the processing of phonological rules or they could be made at the motor level of implementation. The results of this work have important implications for both experimental and theoretical research. For experimentalists, it validates the tools used for error induction and the acoustic determination of errors free of the perceptual bias. For theorists, this methodology can be used to test the nature of the processes proposed in language production.

  17. Robot learning and error correction

    NASA Technical Reports Server (NTRS)

    Friedman, L.

    1977-01-01

    A model of robot learning is described that associates previously unknown perceptions with the sensed known consequences of robot actions. For these actions, both the categories of outcomes and the corresponding sensory patterns are incorporated in a knowledge base by the system designer. Thus the robot is able to predict the outcome of an action and compare the expectation with the experience. New knowledge about what to expect in the world may then be incorporated by the robot in a pre-existing structure whether it detects accordance or discrepancy between a predicted consequence and experience. Errors committed during plan execution are detected by the same type of comparison process and learning may be applied to avoiding the errors.
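
    The predict-compare-update cycle described above can be sketched in a few lines. This is a minimal illustration of the idea, not the paper's system; the dictionary knowledge base and the action/outcome names are hypothetical.

```python
# Minimal sketch of a predict-compare-learn loop (hypothetical names).
knowledge = {"push_block": "block_moves"}  # designer-supplied expectation

def execute(action):
    # Stand-in for the real world: pushing a heavy block fails to move it.
    return "block_stuck" if action == "push_block" else "unknown"

def act_and_learn(action):
    predicted = knowledge.get(action)
    observed = execute(action)
    if predicted == observed:
        print(f"{action}: accordance, expectation confirmed")
    else:
        print(f"{action}: discrepancy ({predicted!r} vs {observed!r}), updating")
        knowledge[action] = observed  # incorporate the new knowledge

act_and_learn("push_block")
act_and_learn("push_block")  # the second attempt now predicts correctly
```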

  18. Negligence, genuine error, and litigation.

    PubMed

    Sohn, David H

    2013-01-01

    Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine, or due to system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and review current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems. The current political environment favors investigation of non-cap tort reform remedies; investment into more rational oversight systems, such as health courts or no-fault systems may reap both quantitative and qualitative benefits for a less costly and safer health system. PMID:23426783

  19. Human error in aviation operations

    NASA Technical Reports Server (NTRS)

    Billings, C. E.; Lauber, J. K.; Cooper, G. E.

    1974-01-01

    This report is a brief description of research being undertaken by the National Aeronautics and Space Administration. The project is designed to seek out factors in the aviation system which contribute to human error, and to search for ways of minimizing the potential threat posed by these factors. The philosophy and assumptions underlying the study are discussed, together with an outline of the research plan.

  20. Clinical review: Medication errors in critical care

    PubMed Central

    Moyen, Eric; Camiré, Eric; Stelfox, Henry Thomas

    2008-01-01

    Medication errors in critical care are frequent, serious, and predictable. Critically ill patients are prescribed twice as many medications as patients outside of the intensive care unit (ICU) and nearly all will suffer a potentially life-threatening error at some point during their stay. The aim of this article is to provide a basic review of medication errors in the ICU, identify risk factors for medication errors, and suggest strategies to prevent errors and manage their consequences. PMID:18373883

  1. Recovery at the edge of error: debunking the myth of the infallible expert.

    PubMed

    Patel, Vimla L; Cohen, Trevor; Murarka, Tripti; Olsen, Joanne; Kagita, Srujana; Myneni, Sahiti; Buchman, Timothy; Ghaemmaghami, Vafa

    2011-06-01

    The notion that human error should not be tolerated is prevalent in both the public and personal perception of the performance of clinicians. However, researchers in other safety-critical domains have long since abandoned the quest for zero defects as an impractical goal, choosing to focus instead on the development of strategies to enhance the ability to recover from error. This paper presents a cognitive framework for the study of error recovery, and the results of our empirical research into error detection and recovery in the critical care domain, using both laboratory-based and naturalistic approaches. Both attending physicians and residents were prone to commit, detect and recover from errors, but the nature of these errors was different. Experts corrected the errors as soon as they detected them and were better able to detect errors requiring integration of multiple elements in the case. Residents were more cautious in making decisions, showing a slower error recovery pattern, and the errors they detected were more procedural in nature, with specific patient outcomes. Error detection and correction are shown to be dependent on expertise, and on the nature of the everyday tasks of the clinicians concerned. Understanding the limits and failures of human decision-making is important if we are to build robust decision-support systems to manage the boundaries of risk of error in decision-making. Detection and correction of potential error is an integral part of cognitive work in the complex, critical care workplace. PMID:20869466

  3. Treatment acceptability among Mexican American parents.

    PubMed

    Borrego, Joaquin; Ibanez, Elizabeth S; Spendlove, Stuart J; Pemberton, Joy R

    2007-09-01

    There is a void in the literature with regard to Hispanic parents' views about common interventions for children with behavior problems. The purpose of this study was to examine the treatment acceptability of child management techniques in a Mexican American sample. Parents' acculturation was also examined to determine if it would account for differences in treatment acceptability. Mexican American parents found response cost, a punishment-based technique, more acceptable than positive reinforcement-based techniques (e.g., differential attention). Results suggest that Mexican American parents' acculturation has little impact on acceptability of child management interventions. No association was found between mothers' acculturation and treatment acceptability. However, more acculturated Mexican American fathers viewed token economy as more acceptable than less acculturated fathers. Results are discussed in the context of clinical work and research with Mexican Americans.

  4. Error control in the GCF: An information-theoretic model for error analysis and coding

    NASA Technical Reports Server (NTRS)

    Adeyemi, O.

    1974-01-01

    The structure of data-transmission errors within the Ground Communications Facility is analyzed in order to provide error control (both forward error correction and feedback retransmission) for improved communication. Emphasis is placed on constructing a theoretical model of errors and obtaining from it all the relevant statistics for error control. No specific coding strategy is analyzed, but references to the significance of certain error pattern distributions, as predicted by the model, to error correction are made.

  5. Rigorous Error Estimates for Reynolds' Lubrication Approximation

    NASA Astrophysics Data System (ADS)

    Wilkening, Jon

    2006-11-01

    Reynolds' lubrication equation is used extensively in engineering calculations to study flows between moving machine parts, e.g. in journal bearings or computer disk drives. It is also used extensively in micro- and bio-fluid mechanics to model creeping flows through narrow channels and in thin films. To date, the only rigorous justification of this equation (due to Bayada and Chambat in 1986 and to Nazarov in 1987) states that the solution of the Navier-Stokes equations converges to the solution of Reynolds' equation in the limit as the aspect ratio ɛ approaches zero. In this talk, I will show how the constants in these error bounds depend on the geometry. More specifically, I will show how to compute expansion solutions of the Stokes equations in a 2-d periodic geometry to arbitrary order and exhibit error estimates with constants which are either (1) given in the problem statement or easily computable from h(x), or (2) difficult to compute but universal (independent of h(x)). Studying the constants in the latter category, we find that the effective radius of convergence actually increases through 10th order, but then begins to decrease as the inverse of the order, indicating that the expansion solution is probably an asymptotic series rather than a convergent series.
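
    The convergent-versus-asymptotic distinction can be probed numerically with a ratio test on expansion coefficients. Below is a generic sketch (our own, not the talk's computation): for a series whose coefficients grow like n!/R^n, the effective radius |a_n/a_{n+1}| = R/(n+1) decays as the inverse of the order, the asymptotic-series signature noted above.

```python
import math

# Ratio-test diagnostic on expansion coefficients (illustrative values).
# For a_n = n!/R**n the ratio |a_n/a_{n+1}| = R/(n+1) shrinks like 1/n,
# as an asymptotic (rather than convergent) series would show.
R = 2.0
coeffs = [math.factorial(n) / R**n for n in range(16)]
for n in range(15):
    print(n, abs(coeffs[n] / coeffs[n + 1]))
```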

  6. An investigation of error correcting techniques for OMV data

    NASA Technical Reports Server (NTRS)

    Ingels, Frank; Fryer, John

    1992-01-01

    Papers on the following topics are presented: considerations of testing the Orbital Maneuvering Vehicle (OMV) system with CLASS; OMV CLASS test results (first go around); equivalent system gain available from R-S encoding versus a desire to lower the power amplifier from 25 watts to 20 watts for OMV; command word acceptance/rejection rates for OMV; a memo concerning energy-to-noise ratio for the Viterbi-BSC Channel and the impact of Manchester coding loss; and an investigation of error correcting techniques for OMV and Advanced X-ray Astrophysics Facility (AXAF).

  7. The nearest neighbor and the bayes error rates.

    PubMed

    Loizou, G; Maybank, S J

    1987-02-01

    The (k, l) nearest neighbor method of pattern classification is compared to the Bayes method. If the two acceptance rates are equal then the asymptotic error rates satisfy the inequalities E(k,l+1) ≤ E*(λ) ≤ E(k,l) ≤ dE*(λ), where d is a function of k, l, and the number of pattern classes, and λ is the reject threshold for the Bayes method. An explicit expression for d is given which is optimal in the sense that for some probability distributions E(k,l) and dE*(λ) are equal. PMID:21869395
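
    The (k, l) rule itself is compact: classify only when at least l of the k nearest neighbors agree, otherwise reject. The sketch below is our own illustration of the rule on synthetic data, not the paper's asymptotic analysis.

```python
import numpy as np

def knn_with_reject(X_train, y_train, x, k=5, l=4):
    """(k, l) nearest neighbor rule: return the majority label only if at
    least l of the k nearest neighbors share it; otherwise reject (None)."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    best = counts.argmax()
    return labels[best] if counts[best] >= l else None

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(knn_with_reject(X, y, np.array([0.0, 0.0])))  # likely class 0
print(knn_with_reject(X, y, np.array([1.0, 1.0])))  # may reject near boundary
```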

  8. Acceptability of blood and blood substitutes.

    PubMed

    Ferguson, E; Prowse, C; Townsend, E; Spence, A; Hilten, J A van; Lowe, K

    2008-03-01

    Alternatives to donor blood have been developed in part to meet increasing demand. However, new biotechnologies are often associated with increased perceptions of risk and low acceptance. This paper reviews the development of alternatives and presents data, from a field-based experiment in the UK and Holland, on the risks and acceptance of donor blood and alternatives (chemical, genetically modified and bovine). UK groups perceived all substitutes as riskier than the Dutch groups did. There is a negative association between perceived risk and acceptability. Solutions for increasing acceptance are discussed in terms of implicit attitudes, product naming and emotional responses.

  9. 7 CFR 932.32 - Acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Olive Administrative Committee § 932.32 Acceptance. Any person selected by the...

  10. 7 CFR 932.32 - Acceptance.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Olive Administrative Committee § 932.32 Acceptance. Any person selected by the...

  11. 7 CFR 932.32 - Acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Olive Administrative Committee § 932.32 Acceptance. Any person selected by the...

  12. 7 CFR 932.32 - Acceptance.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Olive Administrative Committee § 932.32 Acceptance. Any person selected by the...

  13. 7 CFR 932.32 - Acceptance.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Olive Administrative Committee § 932.32 Acceptance. Any person selected by the...

  14. 7 CFR 1207.323 - Acceptance.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POTATO RESEARCH AND PROMOTION PLAN Potato Research and Promotion Plan National Potato Promotion Board § 1207.323 Acceptance. Each...

  15. 7 CFR 1207.323 - Acceptance.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POTATO RESEARCH AND PROMOTION PLAN Potato Research and Promotion Plan National Potato Promotion Board § 1207.323 Acceptance. Each...

  16. 7 CFR 1207.323 - Acceptance.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POTATO RESEARCH AND PROMOTION PLAN Potato Research and Promotion Plan National Potato Promotion Board § 1207.323 Acceptance. Each...

  17. 7 CFR 1207.323 - Acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POTATO RESEARCH AND PROMOTION PLAN Potato Research and Promotion Plan National Potato Promotion Board § 1207.323 Acceptance. Each...

  18. 7 CFR 1207.323 - Acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POTATO RESEARCH AND PROMOTION PLAN Potato Research and Promotion Plan National Potato Promotion Board § 1207.323 Acceptance. Each...

  19. 2013 SYR Accepted Poster Abstracts.

    PubMed

    2013-01-01

    SYR 2013 Accepted Poster abstracts: 1. Benefits of Yoga as a Wellness Practice in a Veterans Affairs (VA) Health Care Setting: If You Build It, Will They Come? 2. Yoga-based Psychotherapy Group With Urban Youth Exposed to Trauma. 3. Embodied Health: The Effects of a Mind-Body Course for Medical Students. 4. Interoceptive Awareness and Vegetable Intake After a Yoga and Stress Management Intervention. 5. Yoga Reduces Performance Anxiety in Adolescent Musicians. 6. Designing and Implementing a Therapeutic Yoga Program for Older Women With Knee Osteoarthritis. 7. Yoga and Life Skills Eating Disorder Prevention Among 5th Grade Females: A Controlled Trial. 8. A Randomized, Controlled Trial Comparing the Impact of Yoga and Physical Education on the Emotional and Behavioral Functioning of Middle School Children. 9. Feasibility of a Multisite, Community-based Randomized Study of Yoga and Wellness Education for Women With Breast Cancer Undergoing Chemotherapy. 10. A Delphi Study for the Development of Protocol Guidelines for Yoga Interventions in Mental Health. 11. Impact Investigation of Breathwalk Daily Practice: Canada-India Collaborative Study. 12. Yoga Improves Distress, Fatigue, and Insomnia in Older Veteran Cancer Survivors: Results of a Pilot Study. 13. Assessment of Kundalini Mantra and Meditation as an Adjunctive Treatment With Mental Health Consumers. 14. Kundalini Yoga Therapy Versus Cognitive Behavior Therapy for Generalized Anxiety Disorder and Co-Occurring Mood Disorder. 15. Baseline Differences in Women Versus Men Initiating Yoga Programs to Aid Smoking Cessation: Quitting in Balance Versus QuitStrong. 16. Pranayam Practice: Impact on Focus and Everyday Life of Work and Relationships. 17. Participation in a Tailored Yoga Program is Associated With Improved Physical Health in Persons With Arthritis. 18. Effects of Yoga on Blood Pressure: Systematic Review and Meta-analysis. 19. A Quasi-experimental Trial of a Yoga-based Intervention to Reduce Stress and

  1. In acceptance we trust? Conceptualising acceptance as a viable approach to NGO security management.

    PubMed

    Fast, Larissa A; Freeman, C Faith; O'Neill, Michael; Rowley, Elizabeth

    2013-04-01

    This paper documents current understanding of acceptance as a security management approach and explores issues and challenges non-governmental organisations (NGOs) confront when implementing an acceptance approach to security management. It argues that the failure of organisations to systematise and clearly articulate acceptance as a distinct security management approach and a lack of organisational policies and procedures concerning acceptance hinder its efficacy as a security management approach. The paper identifies key and cross-cutting components of acceptance that are critical to its effective implementation in order to advance a comprehensive and systematic concept of acceptance. The key components of acceptance illustrate how organisational and staff functions affect positively or negatively an organisation's acceptance, and include: an organisation's principles and mission, communications, negotiation, programming, relationships and networks, stakeholder and context analysis, staffing, and image. The paper contends that acceptance is linked not only to good programming, but also to overall organisational management and structures. PMID:23278470

  2. Physiologically acceptable resistance of an air purifying respirator.

    PubMed

    Shykoff, Barbara E; Warkander, Dan E

    2011-12-01

    Physiologically acceptable limits of inspiratory impediment for air purifying respirators (APRs) were sought. Measurements on 30 subjects included pressure in, and flow through, an APR, and respiratory and cardiovascular variables. Exercise with and without APR included ladder climbing, load lift and transfer, incremental running and endurance running, with endurance at 85% peak oxygen uptake. Resistance that did not alter minute ventilation (VE) was judged acceptable long-term. Acceptable short-term impediments were deduced from end-exercise conditions. Proposed long-term limits are inspiratory work of breathing per tidal volume (WOBi/VT) ≤ 0.9 kPa and peak inspiratory pressure (Pi,peak) ≤ 1.2 kPa. Proposed short-term limits are: for VE ≤ 110 L min(-1), WOBi/VT ≤ 1.3 kPa and Pi,peak ≤ 1.8 kPa; and for VE > 130 L min(-1), WOBi/VT ≤ 1.6 kPa. A design relation among VE, pressure–flow coefficients of an APR, and WOBi/VT is proposed. STATEMENT OF RELEVANCE: This work generalises results from one APR by considering the altered physiological parameters related to factors inhibiting exercise. Simple expressions are proposed to connect bench-test parameters to the relation between ventilation and work of breathing. Population-based recommendations recognise that those who need more air flow can also generate higher pressures. PMID:22103726
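
    The proposed limits can be connected to bench-test pressure–flow data with a short calculation. The sketch below is our own construction, assuming a quadratic pressure–flow curve P(Q) = k1·Q + k2·Q² and a sinusoidal inspiratory flow profile; the coefficients and breathing pattern are assumed values, and the paper's actual design relation is not reproduced here.

```python
import numpy as np

# Assumed APR pressure-flow curve P(Q) = k1*Q + k2*Q**2 and sinusoidal
# inspiratory flow (all numbers illustrative, not the paper's).
k1, k2 = 0.05, 0.005        # kPa/(L/s) and kPa/(L/s)^2, assumed
VE, f = 110.0, 40.0         # minute ventilation (L/min), breaths/min, assumed
VT = VE / f                 # tidal volume, L
Ti = (60.0 / f) / 2.0       # inspiratory time, s (half the breath cycle)
Qpeak = np.pi * VT / (2.0 * Ti)   # peak flow so the flow integrates to VT

t = np.linspace(0.0, Ti, 1000)
Q = Qpeak * np.sin(np.pi * t / Ti)            # inspiratory flow, L/s
P = k1 * Q + k2 * Q**2                        # inspiratory pressure drop, kPa
WOBi = float(np.sum(P * Q) * (t[1] - t[0]))   # work of breathing, kPa*L
print(f"WOBi/VT = {WOBi / VT:.2f} kPa vs. proposed short-term limit 1.3 kPa")
print(f"peak pressure = {P.max():.2f} kPa vs. proposed limit 1.8 kPa")
```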

  3. Righting errors in writing errors: the Wing and Baddeley (1980) spelling error corpus revisited.

    PubMed

    Wing, Alan M; Baddeley, Alan D

    2009-03-01

    We present a new analysis of our previously published corpus of handwriting errors (slips) using the proportional allocation algorithm of Machtynger and Shallice (2009). As previously, the proportion of slips is greater in the middle of the word than at the ends; however, in contrast to before, the proportion is greater at the end than at the beginning of the word. The findings are consistent with the hypothesis of memory effects in a graphemic output buffer.

  4. Noise in neural populations accounts for errors in working memory.

    PubMed

    Bays, Paul M

    2014-03-01

    Errors in short-term memory increase with the quantity of information stored, limiting the complexity of cognition and behavior. In visual memory, attempts to account for errors in terms of allocation of a limited pool of working memory resources have met with some success, but the biological basis for this cognitive architecture is unclear. An alternative perspective attributes recall errors to noise in tuned populations of neurons that encode stimulus features in spiking activity. I show that errors associated with decreasing signal strength in probabilistically spiking neurons reproduce the pattern of failures in human recall under increasing memory load. In particular, deviations from the normal distribution that are characteristic of working memory errors and have been attributed previously to guesses or variability in precision are shown to arise as a natural consequence of decoding populations of tuned neurons. Observers possess fine control over memory representations and prioritize accurate storage of behaviorally relevant information, at a cost to lower priority stimuli. I show that changing the input drive to neurons encoding a prioritized stimulus biases population activity in a manner that reproduces this empirical tradeoff in memory precision. In a task in which predictive cues indicate stimuli most probable for test, human observers use the cues in an optimal manner to maximize performance, within the constraints imposed by neural noise. PMID:24599462
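
    The core claim, that lower signal strength in tuned, probabilistically spiking neurons yields broader and heavier-tailed recall errors, can be reproduced in a toy model. The tuning curve, population size and gains below are our assumptions, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)
prefs = np.linspace(-np.pi, np.pi, 64, endpoint=False)  # preferred features

def rates(theta, gain):
    # von Mises-like tuning curves (our choice, for illustration)
    return gain * np.exp(np.cos(theta - prefs) - 1.0)

def decode(theta_true, gain):
    # Maximum-likelihood decoding of Poisson spikes over a feature grid.
    spikes = rng.poisson(rates(theta_true, gain))
    grid = np.linspace(-np.pi, np.pi, 361)
    r = gain * np.exp(np.cos(grid[:, None] - prefs[None, :]) - 1.0)
    loglik = (spikes * np.log(r)).sum(axis=1) - r.sum(axis=1)
    return grid[np.argmax(loglik)]

for gain in (20.0, 2.0):  # high vs. low signal strength ("memory load")
    errs = np.array([decode(0.0, gain) for _ in range(2000)])
    print(f"gain {gain:5.1f}: error sd = {np.degrees(errs.std()):5.1f} deg, "
          f"kurtosis proxy = {np.mean(errs**4) / np.var(errs)**2:.1f}")
```

    The kurtosis proxy exceeds the Gaussian value of 3 at low gain, echoing the non-normal deviations the abstract describes.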

  5. Compiler-Assisted Detection of Transient Memory Errors

    SciTech Connect

    Tavarageri, Sanket; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy

    2014-06-09

    The probability of bit flips in hardware memory systems is projected to increase significantly as memory systems continue to scale in size and complexity. Effective hardware-based error detection and correction requires that the complete data path, involving all parts of the memory system, be protected with sufficient redundancy. First, this may be costly to employ on commodity computing platforms and second, even on high-end systems, protection against multi-bit errors may be lacking. Therefore, augmenting hardware error detection schemes with software techniques is of considerable interest. In this paper, we consider software-level mechanisms to comprehensively detect transient memory faults. We develop novel compile-time algorithms to instrument application programs with checksum computation codes so as to detect memory errors. Unlike prior approaches that employ checksums on computational and architectural state, our scheme verifies every data access and works by tracking variables as they are produced and consumed. Experimental evaluation demonstrates that the proposed comprehensive error detection solution is viable as a completely software-only scheme. We also demonstrate that with limited hardware support, overheads of error detection can be further reduced.
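
    The produce/consume checksum idea translates into a few lines, although the paper does this via compile-time instrumentation rather than by hand. A minimal hand-instrumented sketch, with hypothetical names:

```python
import zlib

# Record a checksum when data is produced, verify it when consumed
# (a manual sketch of the produce/consume tracking idea).
checksums = {}

def produce(name, data: bytes):
    checksums[name] = zlib.crc32(data)
    return data

def consume(name, data: bytes):
    if zlib.crc32(data) != checksums[name]:
        raise RuntimeError(f"transient memory error detected in {name!r}")
    return data

buf = bytearray(produce("buf", bytes(range(16))))
buf[3] ^= 0x04                     # simulate a bit flip in memory
try:
    consume("buf", bytes(buf))
except RuntimeError as e:
    print(e)                       # the flipped bit is caught on consumption
```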

  6. Development of an errorable car-following driver model

    NASA Astrophysics Data System (ADS)

    Yang, H.-H.; Peng, H.

    2010-06-01

    An errorable car-following driver model is presented in this paper. An errorable driver model is one that emulates a human driver's functions and can generate both nominal (error-free) and devious (with error) behaviours. This model was developed for the evaluation and design of active safety systems. The car-following data used for developing and validating the model were obtained from a large-scale naturalistic driving database. The stochastic car-following behaviour was first analysed and modelled as a random process. Three error-inducing behaviours were then introduced. First, human perceptual limitation was studied and implemented. Distraction due to non-driving tasks was then identified based on statistical analysis of the driving data. Finally, the time delay of human drivers was estimated through a recursive least-squares identification process. By including these three error-inducing behaviours, rear-end collisions with the lead vehicle could occur. The simulated crash rate was found to be similar to, but somewhat higher than, that reported in traffic statistics.
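
    The three error-inducing mechanisms can be sketched in a toy simulation loop. Everything below (controller gain, noise level, distraction probability, delay) is an assumed value for illustration, not the model identified from the naturalistic data.

```python
import numpy as np

rng = np.random.default_rng(2)

dt, T = 0.1, 60.0
delay = int(0.8 / dt)             # 0.8 s reaction delay (assumed)
v_lead, gap_desired = 25.0, 30.0  # lead speed (m/s), desired gap (m)
x_lead, x_f, v_f = 100.0, 0.0, 25.0
observations = [gap_desired] * delay   # buffer of delayed perceived gaps

for i in range(int(T / dt)):
    x_lead += v_lead * dt
    gap = x_lead - x_f
    if rng.random() < 0.05:                    # distraction (assumed rate)
        observations.append(observations[-1])  # no new observation
    else:
        observations.append(gap + rng.normal(0.0, 1.0))  # perceptual noise
    perceived = observations[-delay]           # act on the delayed percept
    accel = np.clip(0.5 * (perceived - gap_desired), -3.0, 2.0)
    v_f = max(0.0, v_f + accel * dt)
    x_f += v_f * dt
    if gap <= 0.0:
        print(f"rear-end collision at t = {i * dt:.1f} s")
        break
else:
    print(f"no collision; final gap = {x_lead - x_f:.1f} m")
```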

  7. Strategies for reducing medication errors in the emergency department

    PubMed Central

    Weant, Kyle A; Bailey, Abby M; Baker, Stephanie N

    2014-01-01

    Medication errors are an all-too-common occurrence in emergency departments across the nation. This is largely secondary to a multitude of factors that create an almost ideal environment for medication errors to thrive. To limit and mitigate these errors, it is necessary to have a thorough knowledge of the medication-use process in the emergency department and develop strategies targeted at each individual step. Some of these strategies include medication-error analysis, computerized provider-order entry systems, automated dispensing cabinets, bar-coding systems, medication reconciliation, standardizing medication-use processes, education, and emergency-medicine clinical pharmacists. Special consideration also needs to be given to the development of strategies for the pediatric population, as they can be at an elevated risk of harm. Regardless of the strategies implemented, the prevention of medication errors begins and ends with the development of a culture that promotes the reporting of medication errors, and a systematic, nonpunitive approach to their elimination. PMID:27147879

  8. Experimental investigation of observation error in anuran call surveys

    USGS Publications Warehouse

    McClintock, B.T.; Bailey, L.L.; Pollock, K.H.; Simons, T.R.

    2010-01-01

    Occupancy models that account for imperfect detection are often used to monitor anuran and songbird species occurrence. However, presence-absence data arising from auditory detections may be more prone to observation error (e.g., false-positive detections) than are sampling approaches utilizing physical captures or sightings of individuals. We conducted realistic, replicated field experiments using a remote broadcasting system to simulate simple anuran call surveys and to investigate potential factors affecting observation error in these studies. Distance, time, ambient noise, and observer abilities were the most important factors explaining false-negative detections. Distance and observer ability were the best overall predictors of false-positive errors, but ambient noise and competing species also affected error rates for some species. False-positive errors made up 5% of all positive detections, with individual observers exhibiting false-positive rates between 0.5% and 14%. Previous research suggests false-positive errors of these magnitudes would induce substantial positive biases in standard estimators of species occurrence, and we recommend practices to mitigate for false positives when developing occupancy monitoring protocols that rely on auditory detections. These recommendations include additional observer training, limiting the number of target species, and establishing distance and ambient noise thresholds during surveys. © 2010 The Wildlife Society.
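
    A small simulation shows why false positives of this size matter for naive occupancy estimates. The rates below are assumed values in the reported range; the estimator shown is the naive "detected at least once" proportion, not the authors' occupancy model.

```python
import numpy as np

rng = np.random.default_rng(3)

sites, surveys = 200, 3
psi, p_detect, p_false = 0.4, 0.5, 0.05  # occupancy, detection, false +

occupied = rng.random(sites) < psi
detections = np.where(
    occupied[:, None],
    rng.random((sites, surveys)) < p_detect,   # true detections
    rng.random((sites, surveys)) < p_false,    # false positives
)
naive = detections.any(axis=1).mean()  # "detected at least once"
print(f"true occupancy {psi:.2f}, naive estimate {naive:.2f}")  # biased high
```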

  9. Non-Gaussian error distribution of 7Li abundance measurements

    NASA Astrophysics Data System (ADS)

    Crandall, Sara; Houston, Stephen; Ratra, Bharat

    2015-07-01

    We construct the error distribution of 7Li abundance measurements for 66 observations (with error bars) used by Spite et al. (2012) that give A(Li) = 2.21 ± 0.065 (median and 1σ symmetrized error). This error distribution is somewhat non-Gaussian, with larger probability in the tails than is predicted by a Gaussian distribution. The 95.4% confidence limits are 3.0σ in terms of the quoted errors. We fit the data to four commonly used distributions: Gaussian, Cauchy, Student’s t and double exponential with the center of the distribution found with both weighted mean and median statistics. It is reasonably well described by a widened n = 8 Student’s t distribution. Assuming Gaussianity, the observed A(Li) is 6.5σ away from that expected from standard Big Bang Nucleosynthesis (BBN) given the Planck observations. Accounting for the non-Gaussianity of the observed A(Li) error distribution reduces the discrepancy to 4.9σ, which is still significant.
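
    Fitting alternative error distributions of this kind is routine with scipy. Below is a sketch on synthetic data (the real 66 measurements are not reproduced here); the sample is drawn from the n = 8 Student's t the abstract favors, purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Synthetic heavy-tailed "measurements" shaped like the quoted A(Li) values.
data = stats.t.rvs(df=8, loc=2.21, scale=0.065, size=66, random_state=rng)

mu, sigma = stats.norm.fit(data)
df, loc, scale = stats.t.fit(data)
print(f"normal fit:    mu={mu:.3f}, sigma={sigma:.3f}")
print(f"Student t fit: df={df:.1f}, loc={loc:.3f}, scale={scale:.3f}")

# Two-sided tail mass beyond 3 "sigma": the t fit concedes more probability
# in the tails, which is what shrinks the apparent BBN discrepancy.
print(2 * stats.norm.sf(3.0), 2 * stats.t.sf(3.0, df))
```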

  10. Search, Memory, and Choice Error: An Experiment

    PubMed Central

    Sanjurjo, Adam

    2015-01-01

    Multiple attribute search is a central feature of economic life: we consider much more than price when purchasing a home, and more than wage when choosing a job. An experiment is conducted in order to explore the effects of cognitive limitations on choice in these rich settings, in accordance with the predictions of a new model of search memory load. In each task, subjects are made to search the same information in one of two orders, which differ in predicted memory load. Despite standard models of choice treating such variations in order of acquisition as irrelevant, lower predicted memory load search orders are found to lead to substantially fewer choice errors. An implication of the result for search behavior, more generally, is that in order to reduce memory load (thus choice error) a limited memory searcher ought to deviate from the search path of an unlimited memory searcher in predictable ways, a mechanism that can explain the systematic deviations from optimal sequential search that have recently been discovered in people's behavior. Further, as cognitive load is induced endogenously (within the task), and found to affect choice behavior, this result contributes to the cognitive load literature (in which load is induced exogenously), as well as the cognitive ability literature (in which cognitive ability is measured in a separate task). In addition, while the information overload literature has focused on the detrimental effects of the quantity of information on choice, this result suggests that, holding quantity constant, the order in which information is observed is an essential determinant of choice failure. PMID:26121356
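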

  11. Quantum error correction of photon-scattering errors

    NASA Astrophysics Data System (ADS)

    Akerman, Nitzan; Glickman, Yinnon; Kotler, Shlomi; Ozeri, Roee

    2011-05-01

    Photon scattering by an atomic ground-state superposition is often considered as a source of decoherence. The same process also results in atom-photon entanglement, which has been directly observed in various experiments using a single atom, ion or diamond nitrogen-vacancy center. Here we combine these two aspects to implement a quantum error correction protocol. We encode a qubit in the two Zeeman-split ground states of a single trapped 88Sr+ ion. Photons are resonantly scattered on the S1/2 --> P1/2 transition. We study the process of single photon scattering, i.e., the excitation of the ion to the excited manifold followed by a spontaneous emission and decay. In the absence of any knowledge of the emitted photon, the ion-qubit coherence is lost. However, the joint ion-photon system still maintains coherence. We show that while scattering events where spin population is preserved (Rayleigh scattering) do not affect coherence, spin-changing (Raman) scattering events result in coherent amplitude exchange between the two qubit states. By applying a unitary spin rotation that is dependent on the detected photon polarization, we retrieve the ion-qubit initial state. We characterize this quantum error correction protocol by process tomography and demonstrate an ability to preserve ion-qubit coherence with high fidelity.

  12. Towards a Bayesian total error analysis of conceptual rainfall-runoff models: Characterising model error using storm-dependent parameters

    NASA Astrophysics Data System (ADS)

    Kuczera, George; Kavetski, Dmitri; Franks, Stewart; Thyer, Mark

    2006-11-01

    Calibration and prediction in conceptual rainfall-runoff (CRR) modelling is affected by the uncertainty in the observed forcing/response data and the structural error in the model. This study works towards the goal of developing a robust framework for dealing with these sources of error and focuses on model error. The characterisation of model error in CRR modelling has been thwarted by the convenient but indefensible treatment of CRR models as deterministic descriptions of catchment dynamics. This paper argues that the fluxes in CRR models should be treated as stochastic quantities because their estimation involves spatial and temporal averaging. Acceptance that CRR models are intrinsically stochastic paves the way for a more rational characterisation of model error. The hypothesis advanced in this paper is that CRR model error can be characterised by storm-dependent random variation of one or more CRR model parameters. A simple sensitivity analysis is used to identify the parameters most likely to behave stochastically, with variation in these parameters yielding the largest changes in model predictions as measured by the Nash-Sutcliffe criterion. A Bayesian hierarchical model is then formulated to explicitly differentiate between forcing, response and model error. It provides a very general framework for calibration and prediction, as well as for testing hypotheses regarding model structure and data uncertainty. A case study calibrating a six-parameter CRR model to daily data from the Abercrombie catchment (Australia) demonstrates the considerable potential of this approach. Allowing storm-dependent variation in just two model parameters (with one of the parameters characterising model error and the other reflecting input uncertainty) yields a substantially improved model fit, raising the Nash-Sutcliffe statistic from 0.74 to 0.94. Of particular significance is the use of posterior diagnostics to test the key assumptions about the data and model errors.
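
    The Nash-Sutcliffe criterion used above as the measure of fit is compact enough to state in code. This is the standard definition; the example values are made up.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([1.0, 3.0, 2.0, 5.0, 4.0])     # observed daily flows (made up)
sim = obs * 0.9 + 0.2                          # a near-perfect simulation
print(f"NSE = {nash_sutcliffe(obs, sim):.3f}")  # close to 1
```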

  13. Error Gravity: Perceptions of Native-Speaking and Non-Native Speaking Faculty in EFL.

    ERIC Educational Resources Information Center

    Kresovich, Brant M.

    1988-01-01

    A survey of teachers of composition in English as a Second Language in Japan addressed the perceptions of native-English-speaking and non-native-English-speaking teachers of the acceptability of specific error types within sentences. The native speakers of English were one British and 16 Americans. The non-native group was comprised of 26 Japanese…

  14. Meditation, mindfulness and executive control: the importance of emotional acceptance and brain-based performance monitoring.

    PubMed

    Teper, Rimma; Inzlicht, Michael

    2013-01-01

    Previous studies have documented the positive effects of mindfulness meditation on executive control. What has been lacking, however, is an understanding of the mechanism underlying this effect. Some theorists have described mindfulness as embodying two facets: present-moment awareness and emotional acceptance. Here, we examine how the effect of meditation practice on executive control manifests in the brain, suggesting that emotional acceptance and performance monitoring play important roles. We investigated the effect of meditation practice on executive control and measured the neural correlates of performance monitoring, specifically the error-related negativity (ERN), a neurophysiological response that occurs within 100 ms of error commission. Meditators and controls completed a Stroop task, during which we recorded ERN amplitudes with electroencephalography. Meditators showed greater executive control (i.e., fewer errors), a higher ERN and more emotional acceptance than controls. Finally, mediation pathway models further revealed that meditation practice relates to greater executive control and that this effect can be accounted for by heightened emotional acceptance, and to a lesser extent, increased brain-based performance monitoring.

  15. Why the distribution of medical errors matters.

    PubMed

    McLean, Thomas R

    2015-07-01

    During the last decade, interventions to reduce the number of medical errors have been largely ineffective. Although it is widely assumed that medical errors follow a Gaussian distribution, they may actually follow a Power Rule distribution. This article presents the evidence in favor of a Power Rule distribution for medical errors and then examines the consequences of such a distribution for medical errors. As the distribution of medical errors has real-world implications, further research is needed to determine whether medical errors follow a Gaussian or Power Rule distribution.
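
    The practical stakes of the distributional choice are easy to see numerically. A minimal sketch (our own illustration; the tail exponent is an assumed value, not one estimated from medical error data):

```python
from scipy import stats

# Probability that an error exceeds 10x the typical size under each model.
gaussian_tail = stats.norm.sf(10.0)         # ~7.6e-24: effectively never
pareto_tail = stats.pareto.sf(10.0, b=1.5)  # ~3.2%: rare but expected
print(f"Gaussian: {gaussian_tail:.1e}, power law: {pareto_tail:.1e}")
```

    Under the Gaussian model a tenfold-worse-than-typical error is effectively impossible, while under the power law it is expected to occur, which would shift prevention effort toward the tail.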

  16. Quantum error correction via robust probe modes

    SciTech Connect

    Yamaguchi, Fumiko; Nemoto, Kae; Munro, William J.

    2006-06-15

    We propose a scheme for quantum error correction using robust continuous variable probe modes, rather than fragile ancilla qubits, to detect errors without destroying data qubits. The use of such probe modes reduces the required number of expensive qubits in error correction and allows efficient encoding, error detection, and error correction. Moreover, the elimination of the need for direct qubit interactions significantly simplifies the construction of quantum circuits. We will illustrate how the approach implements three existing quantum error correcting codes: the three-qubit bit-flip (phase-flip) code, the Shor code, and an erasure code.
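
    Of the three codes named, the three-qubit bit-flip code is the simplest to sketch at its classical core. The repetition-and-parity sketch below is our own illustration of how two parity checks (which the paper's probe modes would read out without destroying the data qubits) locate a single flipped qubit; it does not model the probe modes themselves.

```python
import random

def encode(bit):
    return [bit, bit, bit]            # three-fold repetition

def noisy(code, p=0.1):
    return [b ^ (random.random() < p) for b in code]   # independent flips

def syndrome(c):
    return (c[0] ^ c[1], c[1] ^ c[2])  # parity checks, data left untouched

def correct(c):
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(c))
    if flip is not None:
        c[flip] ^= 1                   # undo the single located flip
    return c

random.seed(5)
trials = 10000
fails = sum(correct(noisy(encode(0)))[0] != 0 for _ in range(trials))
print(f"logical error rate: {fails / trials:.4f} (physical rate was 0.1)")
```

    A single flip is always located and undone; the residual logical error rate, roughly 3p², comes from two or more simultaneous flips.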

  17. 12 CFR 250.164 - Bankers' acceptances.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 3 2011-01-01 2011-01-01 false Bankers' acceptances. 250.164 Section 250.164 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM MISCELLANEOUS INTERPRETATIONS Interpretations § 250.164 Bankers' acceptances. (a) Section 207 of the Bank...

  18. 48 CFR 3011.103 - Market acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 7 2011-10-01 2011-10-01 false Market acceptance. 3011.103 Section 3011.103 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND SECURITY, HOMELAND... Developing Requirements Documents 3011.103 Market acceptance. (a) Contracting officers may act on behalf...

  19. 48 CFR 411.103 - Market acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 4 2011-10-01 2011-10-01 false Market acceptance. 411.103... ACQUISITION PLANNING DESCRIBING AGENCY NEEDS Selecting and Developing Requirements Documents 411.103 Market... accordance with FAR 11.103(a), the market acceptability of their items to be offered. (b) The...

  20. 48 CFR 3011.103 - Market acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Market acceptance. 3011.103 Section 3011.103 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND SECURITY, HOMELAND... Developing Requirements Documents 3011.103 Market acceptance. (a) Contracting officers may act on behalf...