Science.gov

Sample records for acceptable error limits

  1. Confidence limits and their errors

    SciTech Connect

    Rajendran Raja

    2002-03-22

    Confidence limits are commonplace in physics analysis. Great care must be taken in their calculation and use, especially in cases of limited statistics. We introduce the concept of statistical errors of confidence limits and argue that not only should limits be calculated but also their errors, in order to represent the results of the analysis to the fullest. We show that comparison of two different limits from two different experiments becomes easier when their errors are also quoted. Use of errors of confidence limits will lead to abatement of the debate on which method is best suited to calculate confidence limits.
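
    The record above treats a confidence limit as a statistic that itself fluctuates from experiment to experiment. The sketch below illustrates that idea for a simple counting measurement: it computes the standard (Garwood) Poisson upper limit and then the spread of that limit over pseudo-experiments. It is only an illustration of the concept, with a made-up observed count, and is not the construction proposed in the paper.

        # Sketch: a Poisson upper limit and a rough "error" on that limit,
        # taken as its spread over pseudo-experiments (illustrative only).
        import numpy as np
        from scipy.stats import chi2, poisson

        def poisson_upper_limit(n_obs, cl=0.90):
            """Classical (Garwood) upper limit on a Poisson mean given n_obs counts."""
            return 0.5 * chi2.ppf(cl, 2 * (n_obs + 1))

        n_obs = 5                                  # observed counts (hypothetical)
        limit = poisson_upper_limit(n_obs)

        # Pseudo-experiments drawn at the observed rate give a distribution of limits.
        rng = np.random.default_rng(0)
        pseudo_counts = poisson.rvs(mu=n_obs, size=100_000, random_state=rng)
        pseudo_limits = 0.5 * chi2.ppf(0.90, 2 * (pseudo_counts + 1))

        print(f"90% CL upper limit for n = {n_obs}: {limit:.2f}")
        print(f"spread (std) of the limit over pseudo-experiments: {pseudo_limits.std():.2f}")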

  2. What Are Acceptable Limits of Radiation?

    NASA Image and Video Library

    Brad Gersey, lead research scientist at the Center for Radiation Engineering and Science for Space Exploration, or CRESSE, at Prairie View A&M University, describes the legal and acceptable limits ...

  3. The Acceptability Limit in Food Shelf Life Studies.

    PubMed

    Manzocco, Lara

    2016-07-26

    Despite its apparently intuitive nature, the acceptability limit is probably the most difficult parameter to define when developing a shelf life test. Although it dramatically affects the final shelf life value, it is surprising that discussion of its nature has been largely neglected and that only rare indications about possible methodologies for its determination are available in the literature. This is because the definition of this parameter is a consumer- and market-oriented issue, requiring a rational evaluation of the potential negative consequences of food unacceptability in the actual market scenario. This paper critically analyzes the features of the acceptability limit and the role of the decision maker. The methodologies supporting the choice of the acceptability limit, as well as acceptability limit values proposed in the literature to calculate the shelf life of different foods, are reviewed.

  4. The limits of acceptable change process: modifications and clarifications

    Treesearch

    David N. Cole; Stephen F. McCool

    1997-01-01

    Limits of Acceptable Change (LAC) was originally formulated to deal with the issue of recreation carrying capacity in wilderness. Enthusiasm for the process has led to questions about its applicability to a broad range of natural resource issues—both within and outside of protected areas. This paper uses a generic version of the LAC process to identify situations where...

  5. Beyond wilderness: Broadening the applicability of limits of acceptable change

    Treesearch

    Mark W. Brunson

    1997-01-01

    The Limits of Acceptable Change (LAC) process helps managers preserve wilderness attributes along with recreation opportunities. Ecosystem management likewise requires managers to balance societal and ecosystem needs. Both are more likely to succeed through collaborative planning. Consequently, LAC can offer a conceptual framework for achieving sustainable solutions...

  6. Defining fire and wilderness objectives: Applying limits of acceptable change

    Treesearch

    David N. Cole

    1995-01-01

    The Limits of Acceptable Change (LAC) planning process was developed to help define objectives for recreation management in wilderness. This process can be applied to fire in wilderness if its conceptual foundation is broadened. LAC would lead decision makers to identify a compromise between the goal of allowing fire to play its natural role in wilderness and various...

  7. Error analysis of flux limiter schemes at extrema

    NASA Astrophysics Data System (ADS)

    Kriel, A. J.

    2017-01-01

    Total variation diminishing (TVD) schemes have been an invaluable tool for the solution of hyperbolic conservation laws. One of the major shortcomings of commonly used TVD methods is the loss of accuracy near extrema. Although large amounts of anti-diffusion usually benefit the resolution of discontinuities, a balanced limiter such as Van Leer's performs better at extrema. Reliable criteria, however, for the performance of a limiter near extrema are not readily apparent. This work provides theoretical quantitative estimates for the local truncation errors of flux limiter schemes at extrema for a uniform grid. Moreover, the component of the error attributed to the flux limiter was obtained. This component is independent of the problem and grid spacing, and may be considered a property of the limiter that reflects the performance at extrema. Numerical test problems validate the results.
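
    The accuracy loss at extrema can be seen directly from the limiter functions themselves: at a smooth extremum the ratio of consecutive slopes changes sign, and every TVD limiter then returns zero, locally reducing the scheme to first order. The sketch below only illustrates this clipping effect on a sine profile; it is not the truncation-error analysis of the paper.

        # Sketch: common flux limiters evaluated near a smooth extremum, where the
        # slope ratio r turns negative and all TVD limiters clip to zero.
        import numpy as np

        def minmod(r):
            return np.maximum(0.0, np.minimum(1.0, r))

        def van_leer(r):
            return (r + np.abs(r)) / (1.0 + np.abs(r))

        def superbee(r):
            return np.maximum(0.0, np.maximum(np.minimum(2.0 * r, 1.0), np.minimum(r, 2.0)))

        x = np.linspace(0.0, 1.0, 41)
        u = np.sin(2.0 * np.pi * x)            # smooth profile with interior extrema
        du = np.diff(u)                        # consecutive slopes
        r = du[:-1] / (du[1:] + 1e-300)        # slope ratios at interior cells

        i_ext = int(np.argmin(r))              # cell nearest an extremum (most negative r)
        print(f"slope ratio near the extremum: r = {r[i_ext]:.3f}")
        for name, phi in (("minmod", minmod), ("van Leer", van_leer), ("superbee", superbee)):
            print(f"{name:9s}: phi(r) = {phi(r[i_ext]):.3f}   (0 means locally first-order)")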

  8. An Error Score Model for Time-Limit Tests

    ERIC Educational Resources Information Center

    Ven, A. H. G. S. van der

    1976-01-01

    A more generalized error model for time-limit tests is developed. Model estimates are derived for right-attempted and wrong-attempted correlations both within the same test and between different tests. A comparison is made between observed correlations and their model counterparts and a fair agreement is found between observed and expected…

  9. An error limit for the evolution of language.

    PubMed

    Nowak, M A; Krakauer, D C; Dress, A

    1999-10-22

    On the evolutionary trajectory that led to human language there must have been a transition from a fairly limited to an essentially unlimited communication system. The structure of modern human languages reveals at least two steps that are required for such a transition: in all languages (i) a small number of phonemes are used to generate a large number of words; and (ii) a large number of words are used to produce an unlimited number of sentences. The first (and simpler) step is the topic of the current paper. We study the evolution of communication in the presence of errors and show that this limits the number of objects (or concepts) that can be described by a simple communication system. The evolutionary optimum is achieved by using only a small number of signals to describe a few valuable concepts. Adding more signals does not increase the fitness of a language. This represents an error limit for the evolution of communication. We show that this error limit can be overcome by combining signals (phonemes) into words. The transition from an analogue to a digital system was a necessary step toward the evolution of human language.
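
    The saturation effect described above can be reproduced with a toy similarity-based confusion model: signals are placed on a bounded perceptual interval, the chance of mishearing one signal as another decays with their distance, and the total communicative payoff stops growing once the signals crowd together. The sketch below is a simplified stand-in with arbitrary parameters, not the paper's exact formulation.

        # Toy sketch: the payoff of a signalling system saturates as more signals are
        # packed into a bounded perceptual space (an "error limit"). Parameters arbitrary.
        import numpy as np

        def payoff(n_signals, sigma=0.1):
            """Sum over signals of the probability that each is heard correctly."""
            x = np.linspace(0.0, 1.0, n_signals)           # signal positions in perceptual space
            similarity = np.exp(-np.abs(x[:, None] - x[None, :]) / sigma)
            confusion = similarity / similarity.sum(axis=1, keepdims=True)
            return confusion.diagonal().sum()              # expected number of correct transmissions

        for n in (2, 5, 10, 20, 50, 100):
            print(f"n = {n:3d} signals -> payoff = {payoff(n):.2f}")
        # The payoff levels off near 1/(2*sigma); adding further signals no longer helps.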

  10. An error limit for the evolution of language.

    PubMed Central

    Nowak, M A; Krakauer, D C; Dress, A

    1999-01-01

    On the evolutionary trajectory that led to human language there must have been a transition from a fairly limited to an essentially unlimited communication system. The structure of modern human languages reveals at least two steps that are required for such a transition: in all languages (i) a small number of phonemes are used to generate a large number of words; and (ii) a large number of words are used to produce an unlimited number of sentences. The first (and simpler) step is the topic of the current paper. We study the evolution of communication in the presence of errors and show that this limits the number of objects (or concepts) that can be described by a simple communication system. The evolutionary optimum is achieved by using only a small number of signals to describe a few valuable concepts. Adding more signals does not increase the fitness of a language. This represents an error limit for the evolution of communication. We show that this error limit can be overcome by combining signals (phonemes) into words. The transition from an analogue to a digital system was a necessary step toward the evolution of human language. PMID:10902547

  11. Customization of user interfaces to reduce errors and enhance user acceptance.

    PubMed

    Burkolter, Dina; Weyers, Benjamin; Kluge, Annette; Luther, Wolfram

    2014-03-01

    Customization is assumed to reduce error and increase user acceptance in the human-machine relation. Reconfiguration gives the operator the option to customize a user interface according to his or her own preferences. An experimental study with 72 computer science students using a simulated process control task was conducted. The reconfiguration group (RG) interactively reconfigured their user interfaces and used the reconfigured user interface in the subsequent test whereas the control group (CG) used a default user interface. Results showed significantly lower error rates and higher acceptance of the RG compared to the CG while there were no significant differences between the groups regarding situation awareness and mental workload. Reconfiguration seems to be promising and therefore warrants further exploration.

  12. WTO accepts rules limiting medicine exports to poor countries.

    PubMed

    James, John S

    2003-09-12

    In a controversial decision on August 30, 2003, the World Trade Organization agreed to complex rules limiting the export of medications to developing countries. Reaction to the decision so far has shown a complete disconnect between trade delegates and the WTO, both of which praise the new rules as a humanitarian advance, and those working in treatment access in poor countries, who believe that they will effectively block treatment from reaching many who need it. We have prepared a background paper that analyzes this decision and its implications and offers the opinions of key figures on both sides of the debate. It is clear that the rules were largely written for and probably by the proprietary pharmaceutical industry, and imposed on the countries in the WTO mainly by the United States. The basic conflict is that this industry does not want the development of international trade in low-cost generic copies of its patented medicines--not even for poor countries, where little or no market exists. Yet millions of people die each year without medication for treatable conditions such as AIDS, and drug pricing remains one of several major obstacles to controlling global epidemics.

  13. DWPF COAL CARBON WASTE ACCEPTANCE CRITERIA LIMIT EVALUATION

    SciTech Connect

    Lambert, D.; Choi, A.

    2010-06-21

    A paper study was completed to assess the impact on the Defense Waste Processing Facility (DWPF)'s Chemical Processing Cell (CPC) acid addition and melter off-gas flammability control strategy in processing Sludge Batch 10 (SB10) to SB13 with an added Fluidized Bed Steam Reformer (FBSR) stream and two Salt Waste Processing Facility (SWPF) products (Strip Effluent and Actinide Removal Stream). In all of the cases that were modeled, an acid mix using formic acid and nitric acid could be achieved that would produce a predicted Reducing/Oxidizing (REDOX) ratio of 0.20 Fe2+/ΣFe. There was sufficient formic acid in these combinations to reduce both the manganese and mercury present. Reduction of both manganese and mercury is necessary during Sludge Receipt and Adjustment Tank (SRAT) processing; however, other reducing agents such as coal and oxalate are not effective in this reduction. The next phase in this study will be experimental testing with SB10, FBSR, and both SWPF simulants to validate the assumptions in this paper study and determine whether there are any issues in processing these streams simultaneously. The paper study also evaluated a series of abnormal processing conditions to determine whether potential abnormal conditions in FBSR, SWPF or DWPF would produce melter feed that was too oxidizing or too reducing. In most of the cases that were modeled with one parameter at its extreme, an acid mix using formic acid and nitric acid could be achieved that would produce a predicted REDOX of 0.09-0.30 (target 0.20). However, when a run was completed with both high coal and oxalate, with minimum formic acid to reduce mercury and manganese, the final REDOX was predicted to be 0.49 with sludge and FBSR product, and 0.47 with sludge, FBSR product and both SWPF products, which exceeds the upper REDOX limit.

  14. 10 CFR 2.643 - Acceptance and docketing of application for limited work authorization.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... acceptable for processing, the Director of New Reactors or the Director of Nuclear Reactor Regulation will... 10 Energy 1 2013-01-01 2013-01-01 false Acceptance and docketing of application for limited work authorization. 2.643 Section 2.643 Energy NUCLEAR REGULATORY COMMISSION AGENCY RULES OF PRACTICE AND...

  15. Deconstructing the "reign of error": interpersonal warmth explains the self-fulfilling prophecy of anticipated acceptance.

    PubMed

    Stinson, Danu Anthony; Cameron, Jessica J; Wood, Joanne V; Gaucher, Danielle; Holmes, John G

    2009-09-01

    People's expectations of acceptance often come to create the acceptance or rejection they anticipate. The authors tested the hypothesis that interpersonal warmth is the behavioral key to this acceptance prophecy: If people expect acceptance, they will behave warmly, which in turn will lead other people to accept them; if they expect rejection, they will behave coldly, which will lead to less acceptance. A correlational study and an experiment supported this model. Study 1 confirmed that participants' warm and friendly behavior was a robust mediator of the acceptance prophecy compared to four plausible alternative explanations. Study 2 demonstrated that situational cues that reduced the risk of rejection also increased socially pessimistic participants' warmth and thus improved their social outcomes.

  16. Pain related catastrophizing on physical limitation in rheumatoid arthritis patients. Is acceptance important?

    PubMed

    Costa, Joana; Pinto-Gouveia, José; Marôco, João

    2014-01-01

    The experience of Rheumatoid Arthritis (RA) includes significant suffering and life disruption. This cross-sectional study examined the associations between pain, catastrophizing, acceptance and physical limitation in 55 individuals (11 males and 44 females; mean age = 54.37; SD = 18.346) from the Portuguese population with RA, 2 years after diagnosis; it also explored the role of acceptance as a mediator process between pain, catastrophizing and physical limitation. Results showed a positive correlation between pain and catastrophizing (r = .544; p ≤ .001), and also between pain and 2-year physical limitation (r = .531; p ≤ .001). Results also showed that acceptance was negatively correlated with physical limitation 2 years after diagnosis (r = -.476; p ≤ .001). Path analysis was performed to explore the direct effects of pain (β = -.393; SD = .044; Z = 3.180; p ≤ .001) and catastrophizing (n.s.) on physical limitation and also to explore the buffer effect of acceptance in this relationship (indirect effect β = -.080). Results showed that physical limitation is not necessarily a direct product of pain and catastrophizing; acceptance is also involved. Pain and catastrophizing are associated, but the influence of catastrophizing on physical limitation is promoted by low levels of acceptance. Results emphasize the relevance of acceptance as the emotional regulation process by which pain and catastrophizing influence physical functioning, and establish the basic mechanism by which pain and catastrophizing operate in a contextual-based perspective. The study results also offer a novel approach that may help behavioral health and medical providers prevent and treat these conditions.

  17. 10 CFR 2.643 - Acceptance and docketing of application for limited work authorization.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 1 2011-01-01 2011-01-01 false Acceptance and docketing of application for limited work authorization. 2.643 Section 2.643 Energy NUCLEAR REGULATORY COMMISSION RULES OF PRACTICE FOR DOMESTIC LICENSING... Construct Certain Utilization Facilities; and Advance Issuance of Limited Work Authorizations...

  18. 10 CFR 2.643 - Acceptance and docketing of application for limited work authorization.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Acceptance and docketing of application for limited work authorization. 2.643 Section 2.643 Energy NUCLEAR REGULATORY COMMISSION RULES OF PRACTICE FOR DOMESTIC LICENSING... Construct Certain Utilization Facilities; and Advance Issuance of Limited Work Authorizations...

  19. 75 FR 6371 - Jordan Hydroelectric Limited Partnership; Notice of Application Accepted for Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-09

    ... Energy Regulatory Commission Jordan Hydroelectric Limited Partnership; Notice of Application Accepted for... hydroelectric application has been filed with the Commission and is available for public inspection. a. Type of...: Jordan Hydroelectric Limited Partnership e. Name of Project: Flannagan Hydroelectric Project f. Location...

  20. Legitimization of regulatory norms: Waterfowl hunter acceptance of changing duck bag limits

    USGS Publications Warehouse

    Schroeder, Susan A.; Fulton, David C.; Lawrence, Jeffrey S.; Cordts, Steven D.

    2014-01-01

    Few studies have examined response to regulatory change over time, or addressed hunter attitudes about changes in hunting bag limits. This article explores Minnesota waterfowl hunters’ attitudes about duck bag limits, examining attitudes about two state duck bag limits that were initially more restrictive than the maximum set by the U.S. Fish and Wildlife Service (USFWS), but then increased to match federal limits. Results are from four mail surveys that examined attitudes about bag limits over time. Following two bag limit increases, a greater proportion of hunters rated the new bag limit “too high” and a smaller proportion rated it “too low.” Several years following the first bag limit increase, the proportion of hunters who indicated that the limit was “too high” had declined, suggesting hunter acceptance of the new regulation. Results suggest that waterfowl bag limits may represent legal norms that influence hunter attitudes and gain legitimacy over time.

  1. Statistical analysis of the limitation of half integer resonances on the available momentum acceptance of the High Energy Photon Source

    NASA Astrophysics Data System (ADS)

    Jiao, Yi; Duan, Zhe

    2017-01-01

    In a diffraction-limited storage ring, half integer resonances can have strong effects on the beam dynamics, associated with the large detuning terms from the strong focusing and strong sextupoles as required for an ultralow emittance. In this study, the limitation of half integer resonances on the available momentum acceptance (MA) was statistically analyzed based on one design of the High Energy Photon Source (HEPS). It was found that the probability of MA reduction due to crossing of half integer resonances is closely correlated with the level of beta beats at the nominal tunes, but independent of the error sources. The analysis indicated that for the presented HEPS lattice design, the rms amplitude of beta beats should be kept below 1.5% horizontally and 2.5% vertically to reach a small MA reduction probability of about 1%.

  2. A SECOND MOMENT EXPONENTIAL ERROR BOUND FOR PEAK LIMITED BINARY SYMMETRIC COHERENT CHANNELS AT LOW SNR.

    DTIC Science & Technology

    An exponential-type bound on error rate, Pe, for peak limited binary coherent channels operated at low SNR is presented. The bound depends exponentially only on the first and second moments of the channel output and serves to justify, in part, the use of SNR calculations for error rate performance.

  3. Institutional barriers and opportunities in application of the limits of acceptable change

    Treesearch

    George H. Stankey

    1997-01-01

    Although the Limits of Acceptable Change (LAC) process has been in use since the mid-1980’s and has contributed to improved wilderness management, significant barriers and challenges remain. Formal and informal institutional barriers are the principal constraint to more effective implementation. Although grounded in a traditional management-by-objectives model, the LAC...

  4. Proceedings - Limits of Acceptable Change and related planning processes: Progress and future directions

    Treesearch

    Stephen F. McCool; David N. Cole

    1997-01-01

    Experience with Limits of Acceptable Change (LAC) and related planning processes has accumulated since the mid-1980's. These processes were developed as a means of dealing with recreation carrying capacity issues in wilderness and National Parks. These processes clearly also have application outside of protected areas and to issues other than recreation...

  5. Experiencing limits of acceptable change: some thoughts after a decade of implementation

    Treesearch

    Stephen F. McCool; David N. Cole

    1997-01-01

    Wilderness managers and researchers have experienced implementation of the Limits of Acceptable Change planning system for over a decade. In a sense, implementation of LAC has been a broad scale experiment in planning, with the hypothesis being that LAC processes are more effective approaches to deal with questions of recreation management in protected areas than the...

  6. Limits of acceptable change planning in the Selway-Bitterroot Wilderness: 1985 to 1997 (FIDL)

    Treesearch

    Dan Ritter

    1997-01-01

    In 1985 the Forest Supervisors and staff of the Bitterroot, Clearwater, and Nez Perce National Forests met and agreed to an action plan for implementing a Limits of Acceptable Change (LAC) planning process for the Selway-Bitterroot Wilderness (SBW). The process, which was to include a citizens task force, was to produce a completed management plan in 2 years. Eight...

  7. Historical development of limits of acceptable change: conceptual clarifications and possible extensions

    Treesearch

    David N. Cole; George H. Stankey

    1997-01-01

    The Limits of Acceptable Change (LAC) process was developed to deal with the issue of recreational carrying capacity. For that purpose, the LAC process sought to explicitly define a compromise between resource/visitor experience protection and recreation use goals. The most critical and unique element of the process is the specification of LAC standards that define...

  8. Small Inertial Measurement Units - Sources of Error and Limitations on Accuracy

    NASA Technical Reports Server (NTRS)

    Hoenk, M. E.

    1994-01-01

    Limits on the precision of small accelerometers for inertial measurement units are enumerated and discussed. Scaling laws and errors which affect the precision are discussed in terms of tradeoffs between size, sensitivity, and cost.

  9. Acceptance Control Charts with Stipulated Error Probabilities Based on Poisson Count Data

    DTIC Science & Technology

    1973-01-01

    Scheaffer, Richard L.; Leavenworth, Richard S.; et al. — Department of Industrial and Systems Engineering, University of Florida, Gainesville. An acceptance control charting ...
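
    A generic construction for such limits from Poisson count data is sketched below: the acceptance control limit is placed so that a process running at an acceptable mean count is flagged with no more than the stipulated producer's risk, and the consumer's risk is then checked at a rejectable mean count. The process levels and risks used here are hypothetical, and the construction is the textbook one rather than necessarily the report's.

        # Sketch: upper acceptance control limit for Poisson counts with stipulated
        # error probabilities (textbook construction; all values hypothetical).
        from scipy.stats import poisson

        apl, rpl = 4.0, 14.0       # acceptable / rejectable process levels (mean counts per sample)
        alpha, beta = 0.05, 0.10   # stipulated producer's and consumer's risks

        # Smallest integer limit such that a process at the APL exceeds it with probability <= alpha.
        ucl = int(poisson.ppf(1.0 - alpha, apl))
        producer_risk = poisson.sf(ucl, apl)     # P(count > UCL | mean = APL)
        consumer_risk = poisson.cdf(ucl, rpl)    # P(count <= UCL | mean = RPL)

        print(f"upper acceptance control limit: {ucl}")
        print(f"producer's risk at APL = {apl}: {producer_risk:.3f} (target <= {alpha})")
        print(f"consumer's risk at RPL = {rpl}: {consumer_risk:.3f} (target <= {beta})")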

  10. Content uniformity acceptance limit for a validation batch--suppositories, transdermal systems, and inhalations.

    PubMed

    Senderak, Edith T

    2009-06-01

    The USP test for 'Uniformity of Dosage Units' specified by USP Chapter <905> is required of every drug product sold in the United States. Dosage-unit uniformity is determined either by weight variation or by assay of individual units. The USP acceptance criterion for content uniformity states that the relative standard deviation (RSD) of a sample of 30 units should not exceed 7.8%. This article provides a methodology for deriving an upper acceptance limit on the RSD of dosage units from a validation batch of suppositories, transdermal systems, or inhalations such that future batches will have a 95% chance of passing the USP content uniformity RSD acceptance criterion (the RSD of 30 dosage units does not exceed 7.8%).
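
    A simplified Monte Carlo sketch of the question the article addresses is given below: it searches for the largest underlying batch RSD at which a random 30-unit sample still meets the 7.8% criterion at least 95% of the time. It assumes normally distributed unit contents and ignores the sampling uncertainty of the validation batch itself, so it illustrates the logic rather than reproducing the article's derivation.

        # Sketch: largest true batch RSD such that a 30-unit sample passes
        # "RSD <= 7.8%" with >= 95% probability (Monte Carlo, normal data; illustrative).
        import numpy as np

        rng = np.random.default_rng(1)
        N_UNITS, RSD_CRIT, TARGET_PASS, N_SIM = 30, 7.8, 0.95, 20_000

        def pass_rate(true_rsd, mean=100.0):
            batches = rng.normal(mean, mean * true_rsd / 100.0, size=(N_SIM, N_UNITS))
            sample_rsd = 100.0 * batches.std(axis=1, ddof=1) / batches.mean(axis=1)
            return (sample_rsd <= RSD_CRIT).mean()

        candidates = np.arange(3.0, 8.0, 0.1)
        acceptable = [rsd for rsd in candidates if pass_rate(rsd) >= TARGET_PASS]
        print(f"largest acceptable true RSD: {max(acceptable):.1f}% "
              f"(a 30-unit sample then passes RSD <= {RSD_CRIT}% at least {TARGET_PASS:.0%} of the time)")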

  11. A Technological Innovation to Reduce Prescribing Errors Based on Implementation Intentions: The Acceptability and Feasibility of MyPrescribe.

    PubMed

    Keyworth, Chris; Hart, Jo; Thoong, Hong; Ferguson, Jane; Tully, Mary

    2017-08-01

    Although prescribing of medication in hospitals is rarely an error-free process, prescribers receive little feedback on their mistakes and ways to change future practices. Audit and feedback interventions may be an effective approach to modifying the clinical practice of health professionals, but these may pose logistical challenges when used in hospitals. Moreover, such interventions are often labor intensive. Consequently, there is a need to develop effective and innovative interventions to overcome these challenges and to improve the delivery of feedback on prescribing. Implementation intentions, which have been shown to be effective in changing behavior, link critical situations with an appropriate response; however, these have rarely been used in the context of improving prescribing practices. Semistructured qualitative interviews were conducted to evaluate the acceptability and feasibility of providing feedback on prescribing errors via MyPrescribe, a mobile-compatible website informed by implementation intentions. Data relating to 200 prescribing errors made by 52 junior doctors were collected by 11 hospital pharmacists. These errors were populated into MyPrescribe, where prescribers were able to construct their own personalized action plans. Qualitative interviews with a subsample of 15 junior doctors were used to explore issues regarding feasibility and acceptability of MyPrescribe and their experiences of using implementation intentions to construct prescribing action plans. Framework analysis was used to identify prominent themes, with findings mapped to the behavioral components of the COM-B model (capability, opportunity, motivation, and behavior) to inform the development of future interventions. MyPrescribe was perceived to be effective in providing opportunities for critical reflection on prescribing errors and to complement existing training (such as junior doctors' e-portfolio). The participants were able to provide examples of how they would use

  12. A Technological Innovation to Reduce Prescribing Errors Based on Implementation Intentions: The Acceptability and Feasibility of MyPrescribe

    PubMed Central

    Hart, Jo; Thoong, Hong; Ferguson, Jane; Tully, Mary

    2017-01-01

    Background Although prescribing of medication in hospitals is rarely an error-free process, prescribers receive little feedback on their mistakes and ways to change future practices. Audit and feedback interventions may be an effective approach to modifying the clinical practice of health professionals, but these may pose logistical challenges when used in hospitals. Moreover, such interventions are often labor intensive. Consequently, there is a need to develop effective and innovative interventions to overcome these challenges and to improve the delivery of feedback on prescribing. Implementation intentions, which have been shown to be effective in changing behavior, link critical situations with an appropriate response; however, these have rarely been used in the context of improving prescribing practices. Objective Semistructured qualitative interviews were conducted to evaluate the acceptability and feasibility of providing feedback on prescribing errors via MyPrescribe, a mobile-compatible website informed by implementation intentions. Methods Data relating to 200 prescribing errors made by 52 junior doctors were collected by 11 hospital pharmacists. These errors were populated into MyPrescribe, where prescribers were able to construct their own personalized action plans. Qualitative interviews with a subsample of 15 junior doctors were used to explore issues regarding feasibility and acceptability of MyPrescribe and their experiences of using implementation intentions to construct prescribing action plans. Framework analysis was used to identify prominent themes, with findings mapped to the behavioral components of the COM-B model (capability, opportunity, motivation, and behavior) to inform the development of future interventions. Results MyPrescribe was perceived to be effective in providing opportunities for critical reflection on prescribing errors and to complement existing training (such as junior doctors’ e-portfolio). The participants were able to

  13. Error Pattern Analysis of Elementary School-Aged Students with Limited English Proficiency

    ERIC Educational Resources Information Center

    Yang, Chin Wen; Sherman, Helene; Murdick, Nikki

    2011-01-01

    The purpose of this research study was to investigate and classify particular categories of mathematical errors made by students with Limited English Proficiency. Participants included 15 general education teachers, two English as Second Language teachers, and 91 Limited English Proficiency students. General education teachers provided mathematics…

  14. An efficient approach for limited-data chemical species tomography and its error bounds

    PubMed Central

    Polydorides, N.; Tsekenis, S.-A.; McCann, H.; Prat, V.-D. A.; Wright, P.

    2016-01-01

    We present a computationally efficient reconstruction method for the limited-data chemical species tomography problem that incorporates projection of the unknown gas concentration function onto a low-dimensional subspace, and regularization using prior information obtained from a simple flow model. In this context, the contribution of this work is on the analysis of the projection-induced data errors and the calculation of bounds for the overall image error incorporating the impact of projection and regularization errors as well as measurement noise. As an extension to this methodology, we present a variant algorithm that preserves the positivity of the concentration image. PMID:27118923
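
    The structure of the reconstruction described above (a low-dimensional subspace plus regularization toward a prior) can be sketched in a few lines; everything below, including the beam matrix, the smooth basis and the regularization weight, is a hypothetical stand-in rather than the authors' algorithm or error bounds.

        # Sketch: limited-data tomography by projecting onto a smooth low-dimensional
        # basis and solving a Tikhonov-regularized least-squares problem (schematic only).
        import numpy as np

        rng = np.random.default_rng(2)
        n_beams, n_pix, n_basis, lam = 32, 400, 25, 1e-2

        A = rng.random((n_beams, n_pix))               # beam path-length matrix (stand-in)
        centers = rng.random((n_basis, 2))             # centres of smooth basis functions
        grid = np.stack(np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20)), -1).reshape(-1, 2)
        B = np.exp(-((grid[:, None, :] - centers[None, :, :]) ** 2).sum(-1) / 0.02)   # (n_pix, n_basis)

        x_true = B @ rng.random(n_basis)               # synthetic concentration image
        y = A @ x_true + 0.01 * rng.standard_normal(n_beams)   # noisy beam measurements
        c_prior = np.zeros(n_basis)                    # prior coefficients (e.g. from a flow model)

        # Regularized normal equations in the reduced space: (M'M + lam*I) c = M'y + lam*c_prior
        M = A @ B
        c = np.linalg.solve(M.T @ M + lam * np.eye(n_basis), M.T @ y + lam * c_prior)
        x_rec = np.clip(B @ c, 0.0, None)              # keep concentrations non-negative

        print(f"relative image error: {np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true):.3f}")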

  15. Application of zoning and "limits of acceptable change" to manage snorkelling tourism.

    PubMed

    Roman, George S J; Dearden, Philip; Rollins, Rick

    2007-06-01

    Zoning and applying Limits of Acceptable Change (LAC) are two promising strategies for managing tourism in Marine Protected Areas (MPAs). Typically, these management strategies require the collection and integration of ecological and socioeconomic data. This problem is illustrated by a case study of Koh Chang National Marine Park, Thailand. Biophysical surveys assessed coral communities in the MPA to derive indices of reef diversity and vulnerability. Social surveys assessed visitor perceptions and satisfaction with conditions encountered on snorkelling tours. Notably, increased coral mortality caused a significant decrease in visitor satisfaction. The two studies were integrated to prescribe zoning and "Limits of Acceptable Change" (LAC). As a biophysical indicator, the data suggest a LAC value of 0.35 for the coral mortality index. As a social indicator, the data suggest that a significant fraction of visitors would find a LAC value of under 30 snorkellers per site as acceptable. The draft zoning plan prescribed four different types of zones: (I) a Conservation Zone with no access apart from monitoring or research; (II) Tourism Zones with high tourism intensities at less vulnerable reefs; (III) Ecotourism zones with a social LAC standard of <30 snorkellers per site, and (IV) General Use Zones to meet local artisanal fishery needs. This study illustrates how ecological and socioeconomic field studies in MPAs can be integrated to craft zoning plans addressing multiple objectives.

  16. Application of Zoning and “Limits of Acceptable Change” to Manage Snorkelling Tourism

    NASA Astrophysics Data System (ADS)

    Roman, George S. J.; Dearden, Philip; Rollins, Rick

    2007-06-01

    Zoning and applying Limits of Acceptable Change (LAC) are two promising strategies for managing tourism in Marine Protected Areas (MPAs). Typically, these management strategies require the collection and integration of ecological and socioeconomic data. This problem is illustrated by a case study of Koh Chang National Marine Park, Thailand. Biophysical surveys assessed coral communities in the MPA to derive indices of reef diversity and vulnerability. Social surveys assessed visitor perceptions and satisfaction with conditions encountered on snorkelling tours. Notably, increased coral mortality caused a significant decrease in visitor satisfaction. The two studies were integrated to prescribe zoning and “Limits of Acceptable Change” (LAC). As a biophysical indicator, the data suggest a LAC value of 0.35 for the coral mortality index. As a social indicator, the data suggest that a significant fraction of visitors would find a LAC value of under 30 snorkellers per site as acceptable. The draft zoning plan prescribed four different types of zones: (I) a Conservation Zone with no access apart from monitoring or research; (II) Tourism Zones with high tourism intensities at less vulnerable reefs; (III) Ecotourism zones with a social LAC standard of <30 snorkellers per site, and (IV) General Use Zones to meet local artisanal fishery needs. This study illustrates how ecological and socioeconomic field studies in MPAs can be integrated to craft zoning plans addressing multiple objectives.

  17. Identifying and preventing medical errors in patients with limited English proficiency: key findings and tools for the field.

    PubMed

    Wasserman, Melanie; Renfrew, Megan R; Green, Alexander R; Lopez, Lenny; Tan-McGrory, Aswita; Brach, Cindy; Betancourt, Joseph R

    2014-01-01

    Since the 1999 Institute of Medicine (IOM) report To Err is Human, progress has been made in patient safety, but few efforts have focused on safety in patients with limited English proficiency (LEP). This article describes the development, content, and testing of two new evidence-based Agency for Healthcare Research and Quality (AHRQ) tools for LEP patient safety. In the content development phase, a comprehensive mixed-methods approach was used to identify common causes of errors for LEP patients, high-risk scenarios, and evidence-based strategies to address them. Based on our findings, Improving Patient Safety Systems for Limited English Proficient Patients: A Guide for Hospitals contains recommendations to improve detection and prevention of medical errors across diverse populations, and TeamSTEPPS Enhancing Safety for Patients with Limited English Proficiency Module trains staff to improve safety through team communication and incorporating interpreters in the care process. The Hospital Guide was validated with leaders in quality and safety at diverse hospitals, and the TeamSTEPPS LEP module was field-tested in varied settings within three hospitals. Both tools were found to be implementable, acceptable to their audiences, and conducive to learning. Further research on the impact of the combined use of the guide and module would shed light on their value as a multifaceted intervention. © 2014 National Association for Healthcare Quality.

  18. Quantum Error-Correction-Enhanced Magnetometer Overcoming the Limit Imposed by Relaxation.

    PubMed

    Herrera-Martí, David A; Gefen, Tuvia; Aharonov, Dorit; Katz, Nadav; Retzker, Alex

    2015-11-13

    When incorporated in quantum sensing protocols, quantum error correction can be used to correct for high frequency noise, as the correction procedure does not depend on the actual shape of the noise spectrum. As such, it provides a powerful way to complement usual refocusing techniques. Relaxation imposes a fundamental limit on the sensitivity of state of the art quantum sensors which cannot be overcome by dynamical decoupling. The only way to overcome this is to utilize quantum error correcting codes. We present a superconducting magnetometry design that incorporates approximate quantum error correction, in which the signal is generated by a two qubit Hamiltonian term. This two-qubit term is provided by the dynamics of a tunable coupler between two transmon qubits. For fast enough correction, it is possible to lengthen the coherence time of the device beyond the relaxation limit.

  19. Quantum Error-Correction-Enhanced Magnetometer Overcoming the Limit Imposed by Relaxation

    NASA Astrophysics Data System (ADS)

    Herrera-Martí, David A.; Gefen, Tuvia; Aharonov, Dorit; Katz, Nadav; Retzker, Alex

    2015-11-01

    When incorporated in quantum sensing protocols, quantum error correction can be used to correct for high frequency noise, as the correction procedure does not depend on the actual shape of the noise spectrum. As such, it provides a powerful way to complement usual refocusing techniques. Relaxation imposes a fundamental limit on the sensitivity of state of the art quantum sensors which cannot be overcome by dynamical decoupling. The only way to overcome this is to utilize quantum error correcting codes. We present a superconducting magnetometry design that incorporates approximate quantum error correction, in which the signal is generated by a two qubit Hamiltonian term. This two-qubit term is provided by the dynamics of a tunable coupler between two transmon qubits. For fast enough correction, it is possible to lengthen the coherence time of the device beyond the relaxation limit.

  1. 20 CFR 410.671 - Revision for error or other reason; time limitation generally.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Revision for error or other reason; time limitation generally. 410.671 Section 410.671 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL COAL..., Other Determinations, Administrative Review, Finality of Decisions, and Representation of Parties §...

  2. Setting limits for acceptable change in sediment particle size composition following marine aggregate dredging.

    PubMed

    Cooper, Keith M

    2012-08-01

    In the UK, Government policy requires marine aggregate extraction companies to leave the seabed in a similar physical condition after the cessation of dredging. This measure is intended to promote recovery, and the return of a similar faunal community to that which existed before dredging. Whilst the policy is sensible, and in line with the principles of sustainable development, the use of the word 'similar' is open to interpretation. There is, therefore, a need to set quantifiable limits for acceptable change in sediment composition. Using a case study site, it is shown how such limits could be defined by the range of sediment particle size composition naturally found in association with the faunal assemblages in the wider region. Whilst the approach offers a number of advantages over the present system, further testing would be required before it could be recommended for use in the regulatory context. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.

  3. Developing acceptance limits for measured bearing wear of the Space Shuttle Main Engine high pressure oxidizer turbopump

    NASA Technical Reports Server (NTRS)

    Genge, Gary G.

    1991-01-01

    The probabilistic design approach currently receiving attention for structural failure modes has been adapted for obtaining measured bearing wear limits in the Space Shuttle Main Engine high-pressure oxidizer turbopump. With the development of the shaft microtravel measurements to determine bearing health, an acceptance limit was needed that protects against all known failure modes yet is not overly conservative. This acceptance limit has been successfully determined using probabilistic descriptions of preflight hardware geometry, empirical bearing wear data, mission requirements, and measurement tool precision as inputs to a Monte Carlo simulation. The result of the simulation is a frequency distribution of failures as a function of preflight acceptance limits. When the distribution is converted into a reliability curve, a conscious risk management decision is made concerning the acceptance limit.
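
    A toy version of the Monte Carlo logic described above is sketched below. Every distribution, unit and threshold in it is invented for illustration: pre-flight wear, in-flight wear and measurement error are drawn at random, units whose measured pre-flight wear falls below a candidate acceptance limit are "accepted", and the failure probability among accepted units is tabulated as the limit is swept.

        # Toy sketch: failure probability of accepted units versus the pre-flight
        # acceptance limit on measured bearing wear (all numbers hypothetical).
        import numpy as np

        rng = np.random.default_rng(3)
        N = 200_000
        ALLOWABLE_TOTAL_WEAR = 10.0   # wear at which the bearing is assumed to fail (hypothetical units)

        true_preflight = rng.gamma(shape=2.0, scale=1.5, size=N)     # wear present before flight
        measured = true_preflight + rng.normal(0.0, 0.4, size=N)     # microtravel measurement with tool error
        flight_wear = rng.gamma(shape=3.0, scale=1.2, size=N)        # additional wear during the mission

        fails = (true_preflight + flight_wear) > ALLOWABLE_TOTAL_WEAR

        print("accept limit   P(accept)   P(fail | accepted)")
        for limit in np.arange(2.0, 8.5, 0.5):
            accepted = measured <= limit
            p_fail = fails[accepted].mean() if accepted.any() else float("nan")
            print(f"{limit:12.1f}   {accepted.mean():9.3f}   {p_fail:18.4f}")
        # A reliability target then fixes the largest acceptance limit with tolerable risk.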

  4. Natural Conception May Be an Acceptable Option in HIV-Serodiscordant Couples in Resource Limited Settings.

    PubMed

    Sun, Lijun; Wang, Fang; Liu, An; Xin, Ruolei; Zhu, Yunxia; Li, Jianwei; Shao, Ying; Ye, Jiangzhu; Chen, Danqing; Li, Zaicun

    2015-01-01

    Many HIV serodiscordant couples have a strong desire to have their own biological children. Natural conception may be the only choice in some resource limited settings but data about natural conception is limited. Here, we reported our findings of natural conception in HIV serodiscordant couples. Between January 2008 and June 2014, we retrospectively collected data on 91 HIV serodiscordant couples presenting to Beijing Youan Hospital with childbearing desires. HIV counseling, effective ART on HIV infected partners, pre-exposure prophylaxis (PrEP) and post-exposure prophylaxis (PEP) in negative female partners and timed intercourse were used to maximally reduce the risk of HIV transmission. Of the 91 HIV serodiscordant couples, 43 were positive in male partners and 48 were positive in female partners. There were 196 unprotected vaginal intercourses, 100 natural conception and 97 newborns. There were no cases of HIV seroconversion in uninfected sexual partners. Natural conception may be an acceptable option in HIV-serodiscordant couples in resource limited settings if HIV-positive individuals have undetectable viremia on HAART, combined with HIV counseling, PrEP, PEP and timed intercourse.

  5. Setting limits for acceptable change in sediment particle size composition: testing a new approach to managing marine aggregate dredging.

    PubMed

    Cooper, Keith M

    2013-08-15

    A baseline dataset from 2005 was used to identify the spatial distribution of macrofaunal assemblages across the eastern English Channel. The range of sediment composition found in association with each assemblage was used to define limits for acceptable change at ten licensed marine aggregate extraction areas. Sediment data acquired in 2010, 4 years after the onset of dredging, were used to assess whether conditions remained within the acceptable limits. Despite the observed changes in sediment composition, the composition of sediments in and around nine extraction areas remained within pre-defined acceptable limits. At the tenth site, some of the observed changes within the licence area were judged to have gone beyond the acceptable limits. Implications of the changes are discussed, and appropriate management measures identified. The approach taken in this study offers a simple, objective and cost-effective method for assessing the significance of change, and could simplify the existing monitoring regime. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Multipoint Lods Provide Reliable Linkage Evidence Despite Unknown Limiting Distribution: Type I Error Probabilities Decrease with Sample Size for Multipoint Lods and Mods

    PubMed Central

    Hodge, Susan E.; Rodriguez-Murillo, Laura; Strug, Lisa J.; Greenberg, David A.

    2009-01-01

    We investigate the behavior of type I error rates in model-based multipoint (MP) linkage analysis, as a function of sample size (N). We consider both MP lods (i.e., MP linkage analysis that uses the correct genetic model) and MP mods (maximizing MP lods over 18 dominant and recessive models). Following Xing & Elston [2006], we first consider MP linkage analysis limited to a single position; then we enlarge the scope and maximize the lods and mods over a span of positions. In all situations we examined, type I error rates decrease with increasing sample size, apparently approaching zero. We show: (a) For MP lods analyzed only at a single position, well-known statistical theory predicts that type I error rates approach zero. (b) For MP lods and mods maximized over position, this result has a different explanation, related to the fact that one maximizes the scores over only a finite portion of the parameter range. The implications of these findings may be far-reaching: Although it is widely accepted that fixed nominal critical values for MP lods and mods are not known, this study shows that whatever the nominal error rates are, the actual error rates appear to decrease with increasing sample size. Moreover, the actual (observed) type I error rate may be quite small for any given study. We conclude that multipoint lod and mod scores provide reliable linkage evidence for complex diseases, despite the unknown limiting distributions of these multipoint scores. PMID:18613118

  7. Fetal tolerance in human pregnancy--a crucial balance between acceptance and limitation of trophoblast invasion.

    PubMed

    von Rango, Ulrike

    2008-01-15

    During human pregnancy the semi-allogeneic/allogeneic fetal graft is normally accepted by the mother's immune system. Initially the contact between maternal and fetal cells is restricted to the decidua but during the 2nd trimester it is extended to the entire body. Two contrary requirements influence the extent of invasion of extravillous fetal trophoblast cells (EVT) in the maternal decidua: anchorage of the placenta to ensure fetal nutrition and protection of the uterine wall against over-invasion. To establish the crucial balance between tolerance of the EVT and its limitation, recognition of the semi-allogeneic/allogeneic fetal cell by maternal leukocytes is prerequisite. A key mechanism to limit EVT invasion is induction of EVT apoptosis. Apoptotic bodies are phagocytosed by antigen-presenting cells (APC). Peptides from apoptotic cells are presented by APC cells and induce an antigen-specific tolerance against the foreign antigens on EVT cells. These pathways, including up-regulation of the expression of IDO, IFNgamma and CTLA-4 as well as the induction of T(regulatory) cells, are general immunological mechanisms which have developed to maintain peripheral tolerance to self-antigens. Together these data suggest that the mother extends her "definition of self" for 9 months on the foreign antigens of the fetus.

  8. IUDs as EC? Limited awareness and high reported acceptability: evidence from Argentina.

    PubMed

    Pichardo, Margaret; Arribas, Lia; Coccio, Elina; Heredia, Graciela; Jagroep, Sherani; Palermo, Tia

    2014-11-01

    We explored knowledge and attitudes regarding the copper intrauterine device (IUD) as emergency contraception (EC) among women in Buenos Aires, Argentina. We interviewed a convenience sample of women attending a family planning center at a public hospital. Participants were asked about knowledge and use of contraceptives, including EC (pre-script). Then they were given information about the IUD as EC and subsequently asked about acceptability of using the copper IUD as EC (post-script), the primary outcome in this analysis. We analyzed data on 273 women. While only 1.83% of participants knew the IUD served as EC at baseline, 79.85% said they would be willing to use the device as such if the need arose, after being given relevant information. Multivariate results from the pre-script revealed that women with low levels of education and those born outside of Argentina were less knowledgeable about EC pills. Only previous use of the IUD was associated with high levels of IUD knowledge. Post-script, results indicated that being Argentine [odds ratio (OR)=2.15, 95% confidence interval (CI) 1.21, 3.81] and previous IUD use (OR=2.12, 95% CI=1.07, 4.19) were positively associated with considering the IUD as EC. Nulliparity was negatively associated with willingness to use the IUD as EC (OR=0.44, 95% CI 0.22, 0.86). We examined acceptability of the copper IUD as EC in a Latin American setting and found that, while prior levels of knowledge were low, acceptability of the IUD as EC was high. Implications for programming and policy include outreach and education regarding this highly effective method and advocacy to change existing regulations in Argentina prohibiting the use of the IUD as EC. After being given information about the IUD as a method of EC, women interviewed said they would be willing to use the IUD as EC despite their limited prior knowledge of this method. With more widespread information and availability of the IUD as EC, more women may opt for this highly effective method, which

  9. Acceptable symbiont cell size differs among cnidarian species and may limit symbiont diversity.

    PubMed

    Biquand, Elise; Okubo, Nami; Aihara, Yusuke; Rolland, Vivien; Hayward, David C; Hatta, Masayuki; Minagawa, Jun; Maruyama, Tadashi; Takahashi, Shunichi

    2017-07-01

    Reef-building corals form symbiotic relationships with dinoflagellates of the genus Symbiodinium. Symbiodinium are genetically and physiologically diverse, and corals may be able to adapt to different environments by altering their dominant Symbiodinium phylotype. Notably, each coral species associates only with specific Symbiodinium phylotypes, and consequently the diversity of symbionts available to the host is limited by the species specificity. Currently, it is widely presumed that species specificity is determined by the combination of cell-surface molecules on the host and symbiont. Here we show experimental evidence supporting a new model to explain at least part of the specificity in coral-Symbiodinium symbiosis. Using the laboratory model Aiptasia-Symbiodinium system, we found that symbiont infectivity is related to cell size; larger Symbiodinium phylotypes are less likely to establish a symbiotic relationship with the host Aiptasia. This size dependency is further supported by experiments where symbionts were replaced by artificial fluorescent microspheres. Finally, experiments using two different coral species demonstrate that our size-dependent-infection model can be expanded to coral-Symbiodinium symbiosis, with the acceptability of large-sized Symbiodinium phylotypes differing between two coral species. Thus the selectivity of the host for symbiont cell size can affect the diversity of symbionts in corals.

  10. Validation of analytical methods involved in dissolution assays: acceptance limits and decision methodologies.

    PubMed

    Rozet, E; Ziemons, E; Marini, R D; Boulanger, B; Hubert, Ph

    2012-11-02

    Dissolution tests are key elements to ensure continuing product quality and performance. The ultimate goal of these tests is to assure consistent product quality within a defined set of specification criteria. Validation of an analytical method aimed at assessing the dissolution profile of products, or at verifying pharmacopoeial compliance, should demonstrate that the method is able to correctly declare two dissolution profiles as similar, or drug products as compliant with their specifications. It is essential to ensure that these analytical methods are fit for their purpose, and method validation is aimed at providing this guarantee. However, even the ICH Q2 guideline gives no information on how to decide whether the method under validation is valid for its final purpose. Are all the validation criteria needed to ensure that a Quality Control (QC) analytical method for dissolution testing is valid? What acceptance limits should be set on these criteria? How should the method's validity be decided? These are the questions this work aims to answer. The focus is on complying with the current implementation of Quality by Design (QbD) principles in the pharmaceutical industry, in order to correctly define the Analytical Target Profile (ATP) of analytical methods involved in dissolution tests. Analytical method validation then becomes the natural demonstration that the developed methods are fit for their intended purpose, rather than the unreflective checklist exercise still generally performed to complete the filing required to obtain product marketing authorization.
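
    One common way of turning validation data into such a decision, in the spirit of the total-error/accuracy-profile approach, is sketched below: a beta-expectation tolerance interval for future results is computed from the validation bias and precision and compared with pre-set acceptance limits. The recoveries, the +/-5% limits and the 90% proportion are hypothetical illustrations, not values prescribed by the article.

        # Sketch: accuracy-profile-style decision at one concentration level. A
        # beta-expectation tolerance interval for future relative errors is compared
        # with acceptance limits (data, limits and beta are hypothetical).
        import numpy as np
        from scipy import stats

        recoveries = np.array([99.1, 101.3, 98.7, 100.9, 99.6, 100.4, 98.9, 101.1])  # % of target (made up)
        acceptance_limits = (-5.0, 5.0)   # acceptable relative error, in %
        beta = 0.90                       # proportion of future results expected inside the interval

        rel_err = recoveries - 100.0
        n, bias, s = rel_err.size, rel_err.mean(), rel_err.std(ddof=1)

        # Two-sided beta-expectation tolerance interval (normal model): bias +/- t * s * sqrt(1 + 1/n)
        k = stats.t.ppf(1.0 - (1.0 - beta) / 2.0, df=n - 1) * np.sqrt(1.0 + 1.0 / n)
        low, high = bias - k * s, bias + k * s

        ok = acceptance_limits[0] <= low and high <= acceptance_limits[1]
        print(f"bias = {bias:+.2f}%, s = {s:.2f}%, {beta:.0%}-expectation interval = [{low:+.2f}%, {high:+.2f}%]")
        print("method accepted at this level" if ok else "method not accepted at this level")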

  11. A Hybrid Variational-Ensemble data assimilation scheme with systematic error correction for limited area ocean models

    NASA Astrophysics Data System (ADS)

    Oddo, Paolo; Storto, Andrea; Dobricic, Srdjan; Russo, Aniello; Lewis, Craig; Onken, Reiner; Coelho, Emanuel

    2017-04-01

    A hybrid variational-ensemble data assimilation scheme to estimate the vertical and horizontal components of the background-error covariance matrix for an ocean variational data assimilation system is presented and tested in a limited-area ocean model. The high-resolution limited-area model is implemented in the western Mediterranean Sea, where an extensive dataset was collected during the Recognized Environmental Picture Experiments (REP14-MED) conducted in June 2014 by the Centre for Maritime Research and Experimentation with several partners. Observational data are used for assimilation and validation purposes. The hybrid scheme is used both to correct the systematic error introduced in the system by the external forcing (initialization, lateral and surface open boundary conditions) and model parameterization, and to improve the representation of small-scale errors in the background-error covariance matrix. A 14-member ensemble, generated through perturbation of the assimilated observations, is run off-line for further use in the hybrid scheme. Results of four different experiments are compared. The reference experiment uses the classical stationary formulation of the background-error covariance matrix and has no systematic error correction. The other three experiments include, in different combinations, the systematic error correction and the hybrid background-error covariance matrix combining the static errors and the ensemble-derived errors of the day. Results show that the hybrid scheme, when used in conjunction with the systematic error correction, reduces the mean absolute error of the temperature and salinity misfits by 55% and 42%, respectively, compared with statistics arising from standard climatological covariances without systematic error correction.
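
    Stripped of the operational detail above, the core of a hybrid background-error covariance is a weighted blend of a static (climatological) covariance and a covariance estimated from the ensemble perturbations of the day. The sketch below shows that blend on a small 1-D grid; the grid size, correlation length and weight are illustrative, and only the 14-member ensemble size echoes the record above.

        # Schematic sketch: hybrid background-error covariance as a blend of a static
        # Gaussian-correlation matrix and a 14-member ensemble sample covariance.
        import numpy as np

        rng = np.random.default_rng(4)
        n_grid, n_members, length_scale, w_static = 50, 14, 5.0, 0.5

        # Static part: Gaussian correlations with a fixed length scale.
        idx = np.arange(n_grid)
        B_static = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / length_scale) ** 2)

        # Ensemble part: sample covariance of member perturbations about the ensemble mean.
        members = np.cumsum(rng.standard_normal((n_members, n_grid)), axis=1)  # stand-in ensemble states
        perts = members - members.mean(axis=0)
        B_ens = perts.T @ perts / (n_members - 1)

        # Hybrid covariance: static part plus "errors of the day" from the ensemble.
        B_hybrid = w_static * B_static + (1.0 - w_static) * B_ens

        print("mean static variance:  ", round(B_static.diagonal().mean(), 3))
        print("mean ensemble variance:", round(B_ens.diagonal().mean(), 3))
        print("mean hybrid variance:  ", round(B_hybrid.diagonal().mean(), 3))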

  12. [Possibilities, limitations and errors with ultrasound tomography in computer-assisted treatment planning (author's transl)].

    PubMed

    Quast, U; Glaeser, L; Heckemann, R

    1978-07-01

    Ultrasound tomography provides two-dimensional images, true in scale, of sonographic interfaces within nearly every sectional plane desired; it has become especially important, therefore, in irradiation planning. Complementary to results from a therapy simulator, an improvement in localization of tumor and target volume or of critical organs and of tissue inhomogeneities is possible. The hard-copy of gray-scale sonotomograms furnishes all essential geometric and anatomical input data needed for electronic systems used in irradiation planning. Technical, physical and diagnostic limitations of the method are stated. The possible systematic or technical-instrumental errors in sonographic treatment planning are discussed and the necessary calibration controls specified.

  13. 12 CFR 250.163 - Inapplicability of amount limitations to “ineligible acceptances.”

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... the acceptances are not the type described in section 13 of the Federal Reserve Act. (c) A review of... section 13 (of the Federal Reserve Act), inasmuch as the laws of many States confer broader acceptance... section 13 of the Federal Reserve Act. Yet, this appears to be a development that Congress did...

  14. 12 CFR 250.163 - Inapplicability of amount limitations to “ineligible acceptances.”

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... the acceptances are not the type described in section 13 of the Federal Reserve Act. (c) A review of... section 13 (of the Federal Reserve Act), inasmuch as the laws of many States confer broader acceptance... section 13 of the Federal Reserve Act. Yet, this appears to be a development that Congress did...

  15. Understanding the Factors Limiting the Acceptability of Online Courses and Degrees

    ERIC Educational Resources Information Center

    Adams, Jonathan

    2008-01-01

    This study examines prior research conducted on the acceptability of online degrees in hiring situations. In a national survey, a questionnaire was developed for assessing the importance of objections to accepting job candidates with online degrees and sent to university search committee chairs in institutions advertising open faculty positions…

  16. Improving transient performance of adaptive control architectures using frequency-limited system error dynamics

    NASA Astrophysics Data System (ADS)

    Yucelen, Tansel; De La Torre, Gerardo; Johnson, Eric N.

    2014-11-01

    Although adaptive control theory offers mathematical tools to achieve system performance without excessive reliance on dynamical system models, its applications to safety-critical systems can be limited due to poor transient performance and robustness. In this paper, we develop an adaptive control architecture to achieve stabilisation and command following of uncertain dynamical systems with improved transient performance. Our framework consists of a new reference system and an adaptive controller. The proposed reference system captures a desired closed-loop dynamical system behaviour modified by a mismatch term representing the high-frequency content between the uncertain dynamical system and this reference system, i.e., the system error. In particular, this mismatch term allows the frequency content of the system error dynamics to be limited, which is used to drive the adaptive controller. It is shown that this key feature of our framework yields fast adaptation without incurring high-frequency oscillations in the transient performance. We further show the effects of design parameters on the system performance, analyse closeness of the uncertain dynamical system to the unmodified (ideal) reference system, discuss robustness of the proposed approach with respect to time-varying uncertainties and disturbances, and make connections to gradient minimisation and classical control theory. A numerical example is provided to demonstrate the efficacy of the proposed architecture.

  17. Ptychographic overlap constraint errors and the limits of their numerical recovery using conjugate gradient descent methods.

    PubMed

    Tripathi, Ashish; McNulty, Ian; Shpyrko, Oleg G

    2014-01-27

    Ptychographic coherent x-ray diffractive imaging is a form of scanning microscopy that does not require optics to image a sample. A series of scanned coherent diffraction patterns recorded from multiple overlapping illuminated regions on the sample are inverted numerically to retrieve its image. The technique recovers the phase information lost in detecting the diffraction patterns by using experimentally known constraints, in this case the measured diffraction intensities and the assumed scan positions on the sample. The spatial resolution of the recovered image of the sample is limited by the angular extent over which the diffraction patterns are recorded and by how well these constraints are known. Here, we explore how reconstruction quality degrades with uncertainties in the scan positions. We show experimentally that large errors in the assumed scan positions on the sample can be numerically determined and corrected using conjugate gradient descent methods. We also explore in simulations the limits, based on the signal-to-noise ratio of the diffraction patterns and the amount of overlap between adjacent scan positions, of just how large these errors can be and still be rendered tractable by this method.

  18. A hybrid variational-ensemble data assimilation scheme with systematic error correction for limited-area ocean models

    NASA Astrophysics Data System (ADS)

    Oddo, Paolo; Storto, Andrea; Dobricic, Srdjan; Russo, Aniello; Lewis, Craig; Onken, Reiner; Coelho, Emanuel

    2016-10-01

    A hybrid variational-ensemble data assimilation scheme to estimate the vertical and horizontal parts of the background error covariance matrix for an ocean variational data assimilation system is presented and tested in a limited-area ocean model implemented in the western Mediterranean Sea. An extensive data set collected during the Recognized Environmental Picture Experiments conducted in June 2014 by the Centre for Maritime Research and Experimentation has been used for assimilation and validation. The hybrid scheme is used to both correct the systematic error introduced in the system from the external forcing (initialisation, lateral and surface open boundary conditions) and model parameterisation, and improve the representation of small-scale errors in the background error covariance matrix. An ensemble system is run offline for further use in the hybrid scheme, generated through perturbation of assimilated observations. Results of four different experiments have been compared. The reference experiment uses the classical stationary formulation of the background error covariance matrix and has no systematic error correction. The other three experiments account for, or not, systematic error correction and hybrid background error covariance matrix combining the static and the ensemble-derived errors of the day. Results show that the hybrid scheme when used in conjunction with the systematic error correction reduces the mean absolute error of temperature and salinity misfit by 55 and 42 % respectively, versus statistics arising from standard climatological covariances without systematic error correction.
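    As a rough illustration of the hybrid idea described in this abstract (a sketch, not the authors' implementation), a hybrid background-error covariance can be formed as a weighted blend of a static climatological matrix and a sample covariance built from ensemble perturbations; the weight beta, the toy dimensions and the random ensemble below are assumptions for illustration only.

```python
import numpy as np

def hybrid_covariance(B_static, ensemble, beta=0.5):
    """Blend a static background-error covariance with 'errors of the day'.

    B_static : (n, n) climatological covariance matrix
    ensemble : (m, n) array of m ensemble states (one state per row)
    beta     : weight given to the static component (between 0 and 1)
    """
    perturbations = ensemble - ensemble.mean(axis=0)               # remove ensemble mean
    B_ens = perturbations.T @ perturbations / (len(ensemble) - 1)  # sample covariance
    return beta * B_static + (1.0 - beta) * B_ens

# Toy example: 5 state variables, 14 ensemble members (the paper uses a 14-member ensemble)
rng = np.random.default_rng(0)
B_static = np.eye(5)
ensemble = rng.normal(size=(14, 5))
print(hybrid_covariance(B_static, ensemble, beta=0.5).shape)  # (5, 5)
```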

  19. Guidance on the establishment of acceptable daily exposure limits (ADE) to support Risk-Based Manufacture of Pharmaceutical Products.

    PubMed

    Sargent, Edward V; Faria, Ellen; Pfister, Thomas; Sussman, Robert G

    2013-03-01

    Health-based limits for active pharmaceutical ingredients (API), referred to as acceptable daily exposures (ADEs), are necessary to the pharmaceutical industry and are used to derive acceptance limits for cleaning validation and to evaluate cross-carryover. ADEs represent a dose of an API unlikely to cause adverse effects if an individual is exposed, by any route, at or below this dose every day over a lifetime. Derivations of ADEs need to be consistent with ICH Q9 as well as other scientific approaches for the derivation of health-based limits that help to manage risks to both product quality and operator safety during the manufacture of pharmaceutical products. Previous methods for the establishment of acceptance limits in cleaning validation programs are considered arbitrary and have largely ignored the clinical and toxicological data available for a drug substance. Since the ADE utilizes all available pharmaceutical data and applies scientifically acceptable risk assessment methodology, it is more holistic and consistent with other quantitative risk assessments, such as the derivation of occupational exposure limits. Processes for hazard identification, dose-response assessment, uncertainty factor analysis and documentation are reviewed.

  20. Analysis of operator splitting errors for near-limit flame simulations

    NASA Astrophysics Data System (ADS)

    Lu, Zhen; Zhou, Hua; Li, Shan; Ren, Zhuyin; Lu, Tianfeng; Law, Chung K.

    2017-04-01

    High-fidelity simulations of ignition, extinction and oscillatory combustion processes are of practical interest in a broad range of combustion applications. Splitting schemes, widely employed in reactive flow simulations, could fail for stiff reaction-diffusion systems exhibiting near-limit flame phenomena. The present work first employs a model perfectly stirred reactor (PSR) problem with an Arrhenius reaction term and a linear mixing term to study the effects of splitting errors on the near-limit combustion phenomena. Analysis shows that the errors induced by decoupling of the fractional steps may result in unphysical extinction or ignition. The analysis is then extended to the prediction of ignition, extinction and oscillatory combustion in unsteady PSRs of various fuel/air mixtures with a 9-species detailed mechanism for hydrogen oxidation and an 88-species skeletal mechanism for n-heptane oxidation, together with a Jacobian-based analysis for the time scales. The tested schemes include the Strang splitting, the balanced splitting, and a newly developed semi-implicit midpoint method. Results show that the semi-implicit midpoint method can accurately reproduce the dynamics of the near-limit flame phenomena and it is second-order accurate over a wide range of time step size. For the extinction and ignition processes, both the balanced splitting and midpoint method can yield accurate predictions, whereas the Strang splitting can lead to significant shifts on the ignition/extinction processes or even unphysical results. With an enriched H radical source in the inflow stream, a delay of the ignition process and the deviation on the equilibrium temperature are observed for the Strang splitting. On the contrary, the midpoint method that solves reaction and diffusion together matches the fully implicit accurate solution. The balanced splitting predicts the temperature rise correctly but with an over-predicted peak. For the sustainable and decaying oscillatory
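    The splitting structure at issue can be sketched on a toy reactor-like equation; the following compares a sequence of Strang-split steps against a fully coupled integration. The simple Arrhenius-style reaction term, the linear mixing term and all parameter values are assumptions chosen for illustration, not the mechanisms or conditions studied in the paper, and the explicit sub-stepping stands in for the implicit solvers a genuinely stiff problem would require.

```python
import numpy as np

# Toy perfectly-stirred-reactor-like ODE: dT/dt = reaction(T) + mixing(T)
T_in, tau_mix = 1.0, 1.0    # inflow temperature and mixing time scale (assumed)
A, Ea, q = 50.0, 10.0, 2.0  # Arrhenius prefactor, activation energy, heat release (assumed)

def reaction(T):
    return q * A * np.exp(-Ea / T) * max(3.0 - T, 0.0)  # crude fuel-depletion cutoff

def mixing(T):
    return (T_in - T) / tau_mix

def integrate(f, T0, dt, n_sub=200):
    """Explicit Euler with many substeps, used both as sub-solver and as reference."""
    T, h = T0, dt / n_sub
    for _ in range(n_sub):
        T += h * f(T)
    return T

def strang_step(T, dt):
    """Strang splitting: half-step of mixing, full step of reaction, half-step of mixing."""
    T = integrate(mixing, T, dt / 2)
    T = integrate(reaction, T, dt)
    T = integrate(mixing, T, dt / 2)
    return T

T_split = T_ref = 1.5
dt = 0.1
for _ in range(100):
    T_split = strang_step(T_split, dt)
    T_ref = integrate(lambda T: reaction(T) + mixing(T), T_ref, dt)  # coupled reference

print(f"coupled reference T = {T_ref:.3f}, Strang-split T = {T_split:.3f}")
```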

  1. Limitations of statistical measures of error in assessing the accuracy of continuous glucose sensors.

    PubMed

    Kollman, Craig; Wilson, Darrell M; Wysocki, Tim; Tamborlane, William V; Beck, Roy W

    2005-10-01

    Various statistical methods are commonly used to assess the accuracy of near-continuous glucose sensors. The performance and reliability of these methods have not been well described. We used computer simulation to describe the behavior of several statistical measures including error grid analysis, receiver operating characteristics, correlation, and repeated measures under varying conditions. Actual data from an inpatient accuracy study conducted by the Diabetes Research in Children Network (DirecNet) were also used to demonstrate these limitations. Sensors that were made artificially inaccurate by randomly shuffling the pairings to reference values still fell in Zone A or B 78% of the time for the Clarke grid and 79% of the time for the modified grid. Area under the curve values for these shuffled pairs averaged 64% for hypoglycemia and 68% for hyperglycemia. Continuous error grid analysis resulted in 75% of shuffled pairs designated as "Accurate Readings" or "Benign Errors." Correlation analysis gave inconsistent results for sensors simulated to have identical accuracies with values ranging from 0.50 to 0.96. Simplistic repeated-measures analyses accounting for subject effects, but ignoring temporal correlation patterns substantially inflated the probability of falsely obtaining a statistically significant result. In simulations where the null hypothesis was correct, 23% of observed P values were <0.05 and 12% of observed P values were <0.01. Commonly used statistical methods can give overly optimistic and/or inconsistent notions of sensor accuracy if results are not placed in proper context. Novel techniques are needed to assess the accuracy of near-continuous glucose sensors.
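    The shuffling device described above (randomly re-pairing sensor and reference values so that any real association is destroyed while each marginal distribution is preserved) is easy to reproduce in outline. The sketch below uses simulated glucose pairs and a crude "within 20% of reference" proportion as a stand-in for the Clarke grid zones, whose real boundaries are more detailed; the simulated error level and sample size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated paired reference and sensor glucose values in mg/dL (illustrative only)
reference = rng.uniform(60, 300, size=500)
sensor = reference + rng.normal(0, 15, size=500)   # a reasonably accurate sensor
shuffled = rng.permutation(sensor)                 # destroy the true pairing

def pct_within_20(ref, test):
    """Crude accuracy proxy: share of readings within 20% of the reference value."""
    return 100.0 * np.mean(np.abs(test - ref) / ref <= 0.20)

for label, values in [("paired", sensor), ("shuffled", shuffled)]:
    r = np.corrcoef(reference, values)[0, 1]
    print(f"{label:8s}  r = {r:5.2f}   within 20%: {pct_within_20(reference, values):5.1f}%")
```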

  2. First Year Wilkinson Microwave Anisotropy Probe(WMAP) Observations: Data Processing Methods and Systematic Errors Limits

    NASA Technical Reports Server (NTRS)

    Hinshaw, G.; Barnes, C.; Bennett, C. L.; Greason, M. R.; Halpern, M.; Hill, R. S.; Jarosik, N.; Kogut, A.; Limon, M.; Meyer, S. S.

    2003-01-01

    We describe the calibration and data processing methods used to generate full-sky maps of the cosmic microwave background (CMB) from the first year of Wilkinson Microwave Anisotropy Probe (WMAP) observations. Detailed limits on residual systematic errors are assigned based largely on analyses of the flight data supplemented, where necessary, with results from ground tests. The data are calibrated in flight using the dipole modulation of the CMB due to the observatory's motion around the Sun. This constitutes a full-beam calibration source. An iterative algorithm simultaneously fits the time-ordered data to obtain calibration parameters and pixelized sky map temperatures. The noise properties are determined by analyzing the time-ordered data with this sky signal estimate subtracted. Based on this, we apply a pre-whitening filter to the time-ordered data to remove a low level of 1/f noise. We infer and correct for a small (approx. 1%) transmission imbalance between the two sky inputs to each differential radiometer, and we subtract a small sidelobe correction from the 23 GHz (K band) map prior to further analysis. No other systematic error corrections are applied to the data. Calibration and baseline artifacts, including the response to environmental perturbations, are negligible. Systematic uncertainties are comparable to statistical uncertainties in the characterization of the beam response. Both are accounted for in the covariance matrix of the window function and are propagated to uncertainties in the final power spectrum. We characterize the combined upper limits to residual systematic uncertainties through the pixel covariance matrix.

  3. 12 CFR 250.163 - Inapplicability of amount limitations to “ineligible acceptances.”

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Reserve Act. (c) A review of the legislative history surrounding the enactment of the acceptance... the provisions of section 13 (of the Federal Reserve Act), inasmuch as the laws of many States confer... the type described in section 13 of the Federal Reserve Act. Yet, this appears to be a...

  4. 12 CFR 250.163 - Inapplicability of amount limitations to “ineligible acceptances.”

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Reserve Act. (c) A review of the legislative history surrounding the enactment of the acceptance... the provisions of section 13 (of the Federal Reserve Act), inasmuch as the laws of many States confer... the type described in section 13 of the Federal Reserve Act. Yet, this appears to be a...

  5. 12 CFR 250.163 - Inapplicability of amount limitations to “ineligible acceptances.”

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Reserve Act. (c) A review of the legislative history surrounding the enactment of the acceptance... the provisions of section 13 (of the Federal Reserve Act), inasmuch as the laws of many States confer... the type described in section 13 of the Federal Reserve Act. Yet, this appears to be a...

  6. La composition academique: les limites de l'acceptabilite (Composition for Academic Purposes: Criteria for Acceptability).

    ERIC Educational Resources Information Center

    Grenall, G. M.

    1981-01-01

    Examines the pedagogical approaches and problems attendant to the development of English writing programs for foreign students. Discusses the skills necessary to handle course work, such as essay tests, term papers and reports, theses and dissertations, and focuses particularly on diagnostic problems and acceptability criteria. Societe Nouvelle…

  8. The limits of parental authority to accept or refuse medical treatment.

    PubMed

    Miller, Geoffrey

    2011-06-01

    The legal and ethical right of parents to refuse medical treatment for their children differs from the authority possessed by competent adults with decisional capacity. Parents have a duty to act in the best interests of their children from the children's perspective and not to inflict harm. Best interests are determined by weighing benefits and burdens, which includes using evidence-based outcomes and value judgments. The result is placed along a risk/benefit spectrum. If the result is close to low risk/high benefit, the parents have a strong obligation to accept a health care team recommendation. Otherwise, parents may choose between reasonable medical options without threat or coercion.

  9. Basis set limit and systematic errors in local-orbital based all-electron DFT

    NASA Astrophysics Data System (ADS)

    Blum, Volker; Behler, Jörg; Gehrke, Ralf; Reuter, Karsten; Scheffler, Matthias

    2006-03-01

    With the advent of efficient integration schemes,^1,2 numeric atom-centered orbitals (NAO's) are an attractive basis choice in practical density functional theory (DFT) calculations of nanostructured systems (surfaces, clusters, molecules). Though all-electron, the efficiency of practical implementations promises to be on par with the best plane-wave pseudopotential codes, while having a noticeably higher accuracy if required: Minimal-sized effective tight-binding like calculations and chemically accurate all-electron calculations are both possible within the same framework; non-periodic and periodic systems can be treated on equal footing; and the localized nature of the basis allows in principle for O(N)-like scaling. However, converging an observable with respect to the basis set is less straightforward than with competing systematic basis choices (e.g., plane waves). We here investigate the basis set limit of optimized NAO basis sets in all-electron calculations, using as examples small molecules and clusters (N2, Cu2, Cu4, Cu10). meV-level total energy convergence is possible using <=50 basis functions per atom in all cases. We also find a clear correlation between the errors which arise from underconverged basis sets, and the system geometry (interatomic distance). ^1 B. Delley, J. Chem. Phys. 92, 508 (1990), ^2 J.M. Soler et al., J. Phys.: Condens. Matter 14, 2745 (2002).

  10. The sky is a limit: errors in prehospital diagnosis by flight physicians.

    PubMed

    Linn, S; Knoller, N; Giligan, C G; Dreifus, U

    1997-05-01

    The medical records and air evacuation reports of 186 trauma patients were examined to determine the type and characteristics of missed diagnoses. More than 35% of all cases of hypovolemic shock were not identified, nor were two cases of respiratory distress. Although unconsciousness was always identified correctly, almost 7% of all cases with partial unconsciousness were not recorded. Of 443 diagnoses, 337 were correctly recorded by the flight physician, slightly more than 76%. The flight physicians missed 10 critical diagnoses, all of which were feasible, 56 important diagnoses, 42 of which were feasible, and 40 relatively marginal diagnoses, 27 of which were feasible. Injuries to the head, face, and limbs were usually diagnosed correctly, and were missed only in a few cases. Of considerable clinical relevance was the observation that flight physicians missed a significant number of critical and important feasible diagnoses of five types: (1) more than half of all feasible diagnoses in the eyes; (2) a third of feasible diagnoses of cervical spine injuries; and a significant percentage of injuries to the (3) abdomen, (4) chest, and (5) pelvis. Diagnoses of blunt injuries were missed more often than those of penetrating injuries. Feasible diagnoses were missed in two of the four cases of paralysis, approximately one third of all crush injuries, and one quarter of all fractures. This study illuminates preventable errors of physicians during air evacuation and indicates particular types of serious, feasible diagnoses that flight physicians are prone to miss. Medicine in the sky may pose limits to our diagnostic abilities but the limits could be pushed further.

  11. Error Performance of Differentially Coherent Detection of Binary DPSK Data Transmission on the Hard-Limiting Satellite Channel.

    DTIC Science & Technology

    1979-08-01

    unequal power levels and noise correlations between the two adjacent time slot pulses. In practice, the power imbalance, or equivalently SNR imbalance...is a practical assumption since the noise is necessarily band limited in the system. Error probabilities are given as a function of uplink SNR with...different levels of SNR imbalances and different downlink SNR as parameters. It is discovered that, while SNR imbalance affects error performance, the

  12. Wireless smart meters and public acceptance: the environment, limited choices, and precautionary politics.

    PubMed

    Hess, David J; Coley, Jonathan S

    2014-08-01

    Wireless smart meters (WSMs) promise numerous environmental benefits, but they have been installed without full consideration of public acceptance issues. Although societal-implications research and regulatory policy have focused on privacy, security, and accuracy issues, our research indicates that health concerns have played an important role in the public policy debates that have emerged in California. Regulatory bodies do not recognize non-thermal health effects for non-ionizing electromagnetic radiation, but both homeowners and counter-experts have contested the official assurances that WSMs pose no health risks. Similarities and differences with the existing social science literature on mobile phone masts are discussed, as are the broader political implications of framing an alternative policy based on an opt-out choice. The research suggests conditions under which health-oriented precautionary politics can be particularly effective, namely, if there is a mandatory technology, a network of counter-experts, and a broader context of democratic contestation.

  13. Post-manufacturing, 17-times acceptable raw bit error rate enhancement, dynamic codeword transition ECC scheme for highly reliable solid-state drives, SSDs

    NASA Astrophysics Data System (ADS)

    Tanakamaru, Shuhei; Fukuda, Mayumi; Higuchi, Kazuhide; Esumi, Atsushi; Ito, Mitsuyoshi; Li, Kai; Takeuchi, Ken

    2011-04-01

    A dynamic codeword transition ECC scheme is proposed for highly reliable solid-state drives, SSDs. By monitoring the error number or the write/erase cycles, the ECC codeword dynamically increases from 512 Byte (+parity) to 1 KByte, 2 KByte, 4 KByte…32 KByte. The proposed ECC with a larger codeword decreases the failure rate after ECC. As a result, the acceptable raw bit error rate, BER, before ECC is enhanced. Assuming a NAND Flash memory which requires 8-bit correction in a 512 Byte codeword ECC, a 17-times higher acceptable raw BER than the conventional fixed 512 Byte codeword ECC is realized for the mobile phone application without interleaving. For the MP3 player, digital-still camera and high-speed memory card applications with dual-channel interleaving, a 15-times higher acceptable raw BER is achieved. Finally, for the SSD application with 8-channel interleaving, a 13-times higher acceptable raw BER is realized. Because the ratio of the user data to the parity bits is the same in each ECC codeword, no additional memory area is required. Note that the reliability of the SSD is improved after manufacturing without cost penalty. Compared with the conventional ECC with the fixed large 32 KByte codeword, the proposed scheme achieves a lower power consumption by introducing the "best-effort" type operation. In the proposed scheme, during most of the lifetime of the SSD, a weak ECC with a shorter codeword such as 512 Byte (+parity), 1 KByte and 2 KByte is used and a 98% lower power consumption is realized. At the end of the SSD's life, a strong ECC with a 32 KByte codeword is used and highly reliable operation is achieved. The random read performance is also discussed. The random read performance is estimated by the latency. The latency is below 1.5 ms for ECC codewords up to 32 KByte. This latency is below the 2 ms average latency of a 15,000 rpm HDD.
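    The trade-off sketched in this abstract, where longer codewords with proportionally more correctable bits tolerate a higher raw BER, can be illustrated with a simple binomial model of post-correction failure under independent bit errors. The correctable-bit counts and raw BER values below are assumptions for illustration, not the parameters of the proposed scheme.

```python
from math import lgamma, log, exp

def log_binom_pmf(k, n, p):
    """Log of the binomial probability mass function (avoids huge binomial coefficients)."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log(1 - p))

def codeword_failure_prob(raw_ber, n_bits, t_correctable):
    """P(more than t correctable bit errors occur in a codeword of n_bits)."""
    p_ok = sum(exp(log_binom_pmf(k, n_bits, raw_ber)) for k in range(t_correctable + 1))
    return max(0.0, 1.0 - p_ok)

# Same parity ratio, different codeword lengths: 8-bit correction per 512 Bytes
# versus 512-bit correction per 32 KBytes (both counts are assumptions).
small_n, small_t = 512 * 8, 8
large_n, large_t = 32 * 1024 * 8, 512

for raw_ber in (1.0e-3, 1.5e-3, 1.9e-3):
    print(f"raw BER {raw_ber:.1e}:  "
          f"512B codeword fails {codeword_failure_prob(raw_ber, small_n, small_t):.3f}, "
          f"32KB codeword fails {codeword_failure_prob(raw_ber, large_n, large_t):.3f}")
```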

  14. Pluribus - Exploring the Limits of Error Correction Using a Suffix Tree.

    PubMed

    Savel, Daniel; LaFramboise, Thomas; Grama, Ananth; Koyuturk, Mehmet

    2016-06-29

    Next generation sequencing technologies enable efficient and cost-effective genome sequencing. However, sequencing errors increase the complexity of the de novo assembly process, and reduce the quality of the assembled sequences. Many error correction techniques utilizing substring frequencies have been developed to mitigate this effect. In this paper, we present a novel and effective method called PLURIBUS, for correcting sequencing errors using a generalized suffix trie. PLURIBUS utilizes multiple manifestations of an error in the trie to accurately identify errors and suggest corrections. We show that PLURIBUS produces the least number of false positives across a diverse set of real sequencing datasets when compared to other methods. Furthermore, PLURIBUS can be used in conjunction with other contemporary error correction methods to achieve higher levels of accuracy than either tool alone. These increases in error correction accuracy are also realized in the quality of the contigs that are generated during assembly. We explore, in-depth, the behavior of PLURIBUS, to explain the observed improvement in accuracy and assembly performance. PLURIBUS is freely available at http://compbio.

  15. Estimation of measurement error in plasma HIV-1 RNA assays near their limit of quantification

    PubMed Central

    Wang, Lu; Brumme, Chanson; Wu, Lang; Montaner, Julio S. G.; Harrigan, P. Richard

    2017-01-01

    Background: Plasma HIV-1 RNA levels (pVLs), routinely used for clinical management, are influenced by measurement error (ME) due to physiologic and assay variation. Objective: To assess the ME of the COBAS HIV-1 Ampliprep AMPLICOR MONITOR ultrasensitive assay version 1.5 and the COBAS Ampliprep Taqman HIV-1 assay versions 1.0 and 2.0 close to their lower limit of detection; secondly, to examine whether there was any evidence that pVL measurements closest to the lower limit of quantification, where clinical decisions are made, were susceptible to a higher degree of random noise than the remaining range. Methods: We analysed longitudinal pVL of treatment-naïve patients from British Columbia, Canada, during their first six months on treatment, for time periods when each assay was uniquely available: Period 1 (Amplicor): 08/03/2000–01/02/2008; Period 2 (Taqman v1.0): 07/01/2010–07/03/2012; Period 3 (Taqman v2.0): 08/03/2012–30/06/2014. ME was estimated via generalized additive mixed effects models, adjusting for several clinical and demographic variables and follow-up time. Results: The ME associated with each assay was approximately 0.5 log10 copies/mL. The number of pVL measurements, at a given pVL value, was not randomly distributed; values ≤250 copies/mL were strongly systematically overrepresented in all assays, with the prevalence decreasing monotonically as the pVL increased. Model residuals for pVL ≤250 copies/mL were approximately three times higher than those for the higher range, and pVL measurements in this range could not be modelled effectively due to considerable random noise in the data. Conclusions: Although the ME was stable across assays, there is a substantial increase in random noise in measuring pVL close to the lower level of detection. These findings have important clinical significance, especially in the range where key clinical decisions are made. Thus, pVL values ≤250 copies/mL should not be taken as the "truth" and repeat p

  16. 241-SY-101 DACS High hydrogen abort limit reduction (SCR 473) acceptance test report

    SciTech Connect

    ERMI, A.M.

    1999-09-09

    The capability of the 241-SY-101 Data Acquisition and Control System (DACS) computer system to provide proper control and monitoring of the 241-SY-101 underground storage tank hydrogen monitoring system utilizing the reduced hydrogen abort limit of 0.69% was systematically evaluated by the performance of ATP HNF-4927. This document reports the results of the ATP.

  17. Effect and acceptance of bluegill length limits in Nebraska natural lakes

    USGS Publications Warehouse

    Paukert, C.P.; Willis, D.W.; Gabelhouse, D.W.

    2002-01-01

    Bluegill Lepomis macrochirus populations in 18 Nebraska Sandhill lakes were evaluated to determine if a 200-mm minimum length limit would increase population size structure. Bluegills were trap-netted in May and June 1998 and 1999, and a creel survey was conducted during winter 1998-2001 on one or two lakes where bluegills had been tagged to determine angler exploitation. Thirty-three percent of anglers on one creeled lake were trophy anglers (i.e., fishing for large [≥250 mm] bluegills), whereas 67% were there to harvest fish to eat. Exploitation was always less than 10% and the total annual mortality averaged 40% across all 18 lakes. The time to reach 200 mm ranged from 4.3 to 8.3 years. The relative stock density of preferred-length fish increased an average of 2.2 units in all 18 lakes with a 10% exploitation rate. However, yield declined 39% and the number harvested declined 62%. Bluegills would need to reach 200 mm in 4.2 years to ensure no reduction in yield at 10% exploitation. Both yield and size structure were higher with a 200-mm minimum length limit (relative to having no length limit) only in populations with the lowest natural mortality and at exploitation of 30% or more. Although 100% (N = 39) of anglers surveyed said they would favor a 200-mm minimum length limit to improve bluegill size structure, anglers would have to sacrifice harvest to achieve this goal. While a 200-mm minimum length limit did minimally increase size structure at current levels of exploitation across all 18 bluegill populations, the populations with the lowest natural mortality and fastest growth provided the highest increase in size structure with the lowest reduction in yield and number harvested.

  18. Sampling hazelnuts for aflatoxin: effect of sample size and accept/reject limit on reducing the risk of misclassifying lots.

    PubMed

    Ozay, Guner; Seyhan, Ferda; Yilmaz, Aysun; Whitaker, Thomas B; Slate, Andrew B; Giesbrecht, Francis G

    2007-01-01

    About 100 countries have established regulatory limits for aflatoxin in food and feeds. Because these limits vary widely among regulating countries, the Codex Committee on Food Additives and Contaminants began work in 2004 to harmonize aflatoxin limits and sampling plans for aflatoxin in almonds, pistachios, hazelnuts, and Brazil nuts. Studies were developed to measure the uncertainty and distribution among replicated sample aflatoxin test results taken from aflatoxin-contaminated treenut lots. The uncertainty and distribution information is used to develop a model that can evaluate the performance (risk of misclassifying lots) of aflatoxin sampling plan designs for treenuts. Once the performance of aflatoxin sampling plans can be predicted, they can be designed to reduce the risks of misclassifying lots traded in either the domestic or export markets. A method was developed to evaluate the performance of sampling plans designed to detect aflatoxin in hazelnuts lots. Twenty hazelnut lots with varying levels of contamination were sampled according to an experimental protocol where 16 test samples were taken from each lot. The observed aflatoxin distribution among the 16 aflatoxin sample test results was compared to lognormal, compound gamma, and negative binomial distributions. The negative binomial distribution was selected to model aflatoxin distribution among sample test results because it gave acceptable fits to observed distributions among sample test results taken from a wide range of lot concentrations. Using the negative binomial distribution, computer models were developed to calculate operating characteristic curves for specific aflatoxin sampling plan designs. The effect of sample size and accept/reject limits on the chances of rejecting good lots (sellers' risk) and accepting bad lots (buyers' risk) was demonstrated for various sampling plan designs.
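    The operating characteristic calculation described above can be outlined with a Monte Carlo version of the same idea: model a sample test result as a negative binomial draw whose mean equals the true lot concentration, then estimate the probability of accepting the lot as the probability that the result falls at or below the accept/reject limit. The dispersion value, accept limit and lot concentrations below are assumptions for illustration, not the fitted parameters from the hazelnut study.

```python
import numpy as np

rng = np.random.default_rng(1)

def prob_accept(lot_mean, accept_limit, dispersion=2.0, n_sim=20000):
    """Monte Carlo estimate of P(sample test result <= accept_limit) when results
    follow a negative binomial with the given mean and dispersion."""
    n = dispersion
    p = n / (n + lot_mean)                 # numpy parameterization: mean = n * (1 - p) / p
    samples = rng.negative_binomial(n, p, size=n_sim)
    return np.mean(samples <= accept_limit)

accept_limit = 10  # illustrative accept/reject limit (ng/g)
for lot_mean in (2, 5, 10, 20, 40):
    pa = prob_accept(lot_mean, accept_limit)
    # For a "good" lot, 1 - pa is the seller's risk; for a "bad" lot, pa is the buyer's risk.
    print(f"true lot mean {lot_mean:3d} ng/g -> P(accept) = {pa:.2f}")
```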

  19. Predicting tool operator capacity to react against torque within acceptable handle deflection limits in automotive assembly.

    PubMed

    Radwin, Robert G; Chourasia, Amrish; Fronczak, Frank J; Subedi, Yashpal; Howery, Robert; Yen, Thomas Y; Sesto, Mary E; Irwin, Curtis B

    2016-05-01

    The proportion of tool operators capable of maintaining published psychophysically derived threaded-fastener tool handle deflection limits was predicted using a biodynamic tool operator model interacting with the tool, task and workstation. Tool parameters, including geometry, speed and torque were obtained from the specifications for 35 tools used in an auto assembly plant. Tool mass moments of inertia were measured for these tools using a novel device that engages the tool in a rotating system of known inertia. Task parameters, including fastener target torque and joint properties (soft, medium or hard), were ascertained from the vehicle design specifications. Workstation parameters, including vertical and horizontal distances from the operator, were measured using a laser rangefinder for 69 tool installations in the plant. These parameters were entered into the model and tool handle deflection was predicted for each job. While handle deflection for most jobs did not exceed the capacity of 75% of females and 99% of males, six jobs exceeded the deflection criterion. Those tool installations were examined, and modifications in tool speed and operator position brought those jobs within the deflection limits, as predicted by the model. We conclude that biodynamic tool operator models may be useful for identifying stressful tool installations and interventions that bring them within the capacity of most operators.

  20. The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates

    NASA Technical Reports Server (NTRS)

    Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2008-01-01

    We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.

  1. A Complementary Note to 'A Lag-1 Smoother Approach to System-Error Estimation': The Intrinsic Limitations of Residual Diagnostics

    NASA Technical Reports Server (NTRS)

    Todling, Ricardo

    2015-01-01

    Recently, this author studied an approach to the estimation of system error based on combining observation residuals derived from a sequential filter and fixed lag-1 smoother. While extending the methodology to a variational formulation, experimenting with simple models and making sure consistency was found between the sequential and variational formulations, the limitations of the residual-based approach came clearly to the surface. This note uses the sequential assimilation application to simple nonlinear dynamics to highlight the issue. Only when some of the underlying error statistics are assumed known is it possible to estimate the unknown component. In general, when considerable uncertainties exist in the underlying statistics as a whole, attempts to obtain separate estimates of the various error covariances are bound to lead to misrepresentation of errors. The conclusions are particularly relevant to present-day attempts to estimate observation-error correlations from observation residual statistics. A brief illustration of the issue is also provided by comparing estimates of error correlations derived from a quasi-operational assimilation system and a corresponding Observing System Simulation Experiments framework.

  3. Accepting Error to Make Less Error.

    DTIC Science & Technology

    1985-04-01

  4. 5 CFR 1605.16 - Claims for correction of employing agency errors; time limitations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... months before it was discovered, the agency may exercise sound discretion in deciding whether to correct... a claim to correct any such error after that time, the agency may do so at its sound discretion. (c... employing agency provides the participant with good cause for requiring a longer period to decide the...

  5. 5 CFR 1605.16 - Claims for correction of employing agency errors; time limitations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... months before it was discovered, the agency may exercise sound discretion in deciding whether to correct... a claim to correct any such error after that time, the agency may do so at its sound discretion. (c... employing agency provides the participant with good cause for requiring a longer period to decide the...

  6. 5 CFR 1605.16 - Claims for correction of employing agency errors; time limitations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... months before it was discovered, the agency may exercise sound discretion in deciding whether to correct... a claim to correct any such error after that time, the agency may do so at its sound discretion. (c... employing agency provides the participant with good cause for requiring a longer period to decide the...

  7. Simultaneous inference for longitudinal data with detection limits and covariates measured with errors, with application to AIDS studies.

    PubMed

    Wu, Lang

    2004-06-15

    In AIDS studies such as HIV viral dynamics, statistical inference is often complicated because the viral load measurements may be subject to left censoring due to a detection limit and time-varying covariates such as CD4 counts may be measured with substantial errors. Mixed-effects models are often used to model the response and the covariate processes in these studies. We propose a unified approach which addresses the censoring and measurement errors simultaneously. We estimate the model parameters by a Monte-Carlo EM algorithm via the Gibbs sampler. A simulation study is conducted to compare the proposed method with the usual two-step method and a naive method. We find that the proposed method produces approximately unbiased estimates with more reliable standard errors. A real data set from an AIDS study is analysed using the proposed method.
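    The censoring half of the problem described above can be illustrated with a simple censored-normal likelihood: fully observed values contribute a density term, while values below the detection limit contribute only the probability of falling below that limit. All data and parameter values here are simulated assumptions, and this sketch is not the Monte-Carlo EM algorithm of the paper (and it omits the covariate measurement error and mixed-effects structure entirely).

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Simulated log10 viral loads with a detection limit; values below it are left-censored.
true_mu, true_sd, limit = 2.2, 0.8, 2.0
y = rng.normal(true_mu, true_sd, size=300)
censored = y < limit

def neg_loglik(params):
    mu, log_sd = params
    sd = np.exp(log_sd)
    ll_observed = norm.logpdf(y[~censored], mu, sd).sum()           # fully observed values
    ll_censored = censored.sum() * norm.logcdf((limit - mu) / sd)   # P(Y < limit) per censored value
    return -(ll_observed + ll_censored)

fit = minimize(neg_loglik, x0=np.array([1.0, 0.0]))
naive_mean = np.where(censored, limit, y).mean()   # naive: substitute the detection limit
print(f"naive mean = {naive_mean:.2f}, censored-likelihood estimate of mu = {fit.x[0]:.2f}")
```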

  8. Visual Impairment, Undercorrected Refractive Errors, and Activity Limitations in Older Adults: Findings From the Three-City Alienor Study.

    PubMed

    Naël, Virginie; Pérès, Karine; Carrière, Isabelle; Daien, Vincent; Scherlen, Anne-Catherine; Arleo, Angelo; Korobelnik, Jean-Francois; Delcourt, Cécile; Helmer, Catherine

    2017-04-01

    As vision is required in almost all activities of daily living, visual impairment (VI) may be one of the major treatable factors for preventing activity limitations. We aimed to evaluate the attributable risk of VI associated with activity limitations and the extent to which limitations are avoidable with optimal optical correction of undercorrected refractive errors. We analyzed 709 older adults from the Three-City-Alienor population-based study. VI was defined by presenting distance visual acuity in the better-seeing eye. Multivariate modified Poisson regressions were used to estimate the associations between vision, activity limitations, and social participation restrictions. Population attributable risk (PAR) and generalized impact fraction (GIF) were estimated. Bootstrapping was used to estimate 95% confidence intervals (CI). After adjustment for potential confounders, VI was associated with each domain of activity limitations, except basic activities of daily living (ADL) limitations. These associations were found for even minimal levels of VI. PAR was estimated at 10.1% (95% CI: 5.2-10.6) for mobility limitations, at 26.0% (95% CI: 13.5-41.2) for instrumental ADL (IADL) limitations, and at 24.9% (95% CI: 10.5-47.1) for social participation restrictions. GIF for improvement of undercorrected refractive errors was 6.1% (95% CI: 3.8-8.5) for mobility limitations, 15.8% (95% CI: 11.5-20.1) for IADL limitations and 21.4% (95% CI: 13.8-28.5) for social participation restrictions. About one-sixth of IADL limitations and one-fifth of social participation restrictions could be prevented by an optimal optical correction. These results underline the importance of eye examinations in older adults to prevent disability.
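    The population attributable risk quoted above has a standard closed form (Levin's formula) in terms of exposure prevalence and relative risk; a minimal sketch follows, using made-up prevalence and risk values rather than the study's estimates, which were derived from adjusted Poisson regression models.

```python
def population_attributable_risk(prevalence, relative_risk):
    """Levin's formula: fraction of the outcome attributable to the exposure."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Illustrative values only (not the Alienor study's estimates)
print(f"{population_attributable_risk(prevalence=0.15, relative_risk=2.0):.3f}")  # 0.130
```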

  9. Whole exome sequencing in inborn errors of immunity: use the power but mind the limits.

    PubMed

    Bucciol, Giorgia; Van Nieuwenhove, Erika; Moens, Leen; Itan, Yuval; Meyts, Isabelle

    2017-09-21

    Next-generation sequencing, especially whole exome sequencing (WES), has revolutionized the molecular diagnosis of inborn errors of immunity. This review summarizes the generation and analysis of next-generation sequencing data. The focus is on prioritizing strategies for unveiling the potential disease-causing variant. We also highlighted oversights and imperfections of WES and targeted panel sequencing, as well as the need for functional validation. The information is crucial for a judicious use of WES by researchers, but even more so by the clinical immunologist.

  10. 20 CFR 404.780 - Evidence of “good cause” for exceeding time limits on accepting proof of support or application...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... limits on accepting proof of support or application for a lump-sum death payment. 404.780 Section 404.780 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950... accepting proof of support or application for a lump-sum death payment. (a) When evidence of good cause is...

  11. Technique Errors and Limiting Factors in Laser Ranging to Geodetic Satellites

    NASA Astrophysics Data System (ADS)

    Appleby, G. M.; Luceri, V.; Mueller, H.; Noll, C. E.; Otsubo, T.; Wilkinson, M.

    2012-12-01

    The tracking stations of the International Laser Ranging Service (ILRS) global network provide to the Data Centres a steady stream of very precise laser range normal points to the primary geodetic spherical satellites LAGEOS (-1 and -2) and Etalon (-1 and -2). Analysis of these observations to determine instantaneous site coordinates and Earth orientation parameters provides a major contribution to ongoing international efforts to define a precise terrestrial reference frame, which itself supports research into geophysical processes at the few mm level of precision. For example, the latest realization of the reference frame, ITRF2008, used weekly laser range solutions from 1983 to 2009, the origin of the Frame being determined solely by the SLR technique. However, in the ITRF2008 publication, Altamimi et al (2011, Journal of Geodesy) point out that further improvement in the ITRF is partly dependent upon improving an understanding of sources of technique error. In this study we look at SLR station hardware configuration that has been subject to major improvements over the last four decades, at models that strive to provide accurate translations of the laser range observations to the centres of mass of the small geodetic satellites and at the considerable body of work that has been carried out via orbital analyses to determine range corrections for some of the tracking stations. Through this study, with specific examples, we start to put together an inventory of system-dependent technique errors that will be important information for SLR re-analysis towards the next realization of the ITRF.

  12. X-ray optics metrology limited by random noise, instrumental drifts, and systematic errors

    SciTech Connect

    Yashchuk, Valeriy V.; Anderson, Erik H.; Barber, Samuel K.; Cambie, Rossana; Celestre, Richard; Conley, Raymond; Goldberg, Kenneth A.; McKinney, Wayne R.; Morrison, Gregory; Takacs, Peter Z.; Voronov, Dmitriy L.; Yuan, Sheng; Padmore, Howard A.

    2010-07-09

    Continuous, large-scale efforts to improve and develop third- and fourth-generation synchrotron radiation light sources for unprecedented high-brightness, low-emittance, and coherent x-ray beams demand diffracting and reflecting x-ray optics suitable for micro- and nano-focusing, brightness preservation, and super-high resolution. One of the major impediments to the development of x-ray optics with the required beamline performance comes from the inadequate present level of optical and at-wavelength metrology and insufficient integration of the metrology into the fabrication process and into beamlines. Based on our experience at the ALS Optical Metrology Laboratory, we review the experimental methods and techniques that allow us to mitigate significant optical metrology problems related to random, systematic, and drift errors with super-high-quality x-ray optics. Measurement errors below 0.2 µrad have become routine. We present recent results from the ALS on temperature-stabilized nano-focusing optics and dedicated at-wavelength metrology. The international effort to develop a next-generation Optical Slope Measuring System (OSMS) to address these problems is also discussed. Finally, we analyze the remaining obstacles to further improvement of beamline x-ray optics and dedicated metrology, and highlight the ways we see to overcome the problems.

  13. Technical Errors May Affect Accuracy of Torque Limiter in Locking Plate Osteosynthesis.

    PubMed

    Savin, David D; Lee, Simon; Bohnenkamp, Frank C; Pastor, Andrew; Garapati, Rajeev; Goldberg, Benjamin A

    2016-01-01

    In locking plate osteosynthesis, proper surgical technique is crucial in reducing potential pitfalls, and use of a torque limiter makes it possible to control insertion torque. We conducted a study of the ways in which different techniques can alter the accuracy of torque limiters. We tested 22 torque limiters (1.5 Nm) for accuracy using hand and power tools under different rotational scenarios: hand power at low and high velocity and drill power at low and high velocity. We recorded the maximum torque reached after each torque-limiting event. Use of torque limiters under hand power at low velocity and high velocity resulted in significantly (P < .0001) different mean (SD) measurements: 1.49 (0.15) Nm and 3.73 (0.79) Nm. Use under drill power at controlled low velocity and at high velocity also resulted in significantly (P < .0001) different mean (SD) measurements: 1.47 (0.14) Nm and 5.37 (0.90) Nm. Maximum single measurement obtained was 9.0 Nm using drill power at high velocity. Locking screw insertion with improper technique may result in higher than expected torque and subsequent complications. For torque limiters, the most reliable technique involves hand power at slow velocity or drill power with careful control of insertion speed until 1 torque-limiting event occurs.

  14. Bit Error Rate Performance Limitations Due to Raman Amplifier Induced Crosstalk in a WDM Transmission System

    NASA Astrophysics Data System (ADS)

    Tithi, F. H.; Majumder, S. P.

    2017-03-01

    Analysis is carried out for a single-span wavelength division multiplexing (WDM) transmission system with distributed Raman amplification to find the effect of amplifier-induced crosstalk on the bit error rate (BER) for different system parameters. The results are evaluated in terms of the crosstalk power induced in a WDM channel due to Raman amplification, the optical signal to crosstalk ratio (OSCR) and the BER at any distance for different pump powers and numbers of WDM channels. The results show that the WDM system suffers a power penalty due to crosstalk, which is significant at higher pump power, higher channel separation and a larger number of WDM channels. It is noticed that, at a BER of 10^-9, the power penalty is 8.7 dB and 10.5 dB for a length of 180 km and N = 32 and 64 WDM channels respectively, when the pump power is 20 mW, and the penalty is higher at higher pump power. Analytical results are validated by simulation.

  15. Evaluation and Acceptability of a Simplified Test of Visual Function at Birth in a Limited-Resource Setting.

    PubMed

    Carrara, Verena I; Darakomon, Mue Chae; Thin, Nant War War; Paw, Naw Ta Kaw; Wah, Naw; Wah, Hser Gay; Helen, Naw; Keereecharoen, Suporn; Paw, Naw Ta Mlar; Jittamala, Podjanee; Nosten, François H; Ricci, Daniela; McGready, Rose

    2016-01-01

    Neurological examination, including visual fixation and tracking of a target, is routinely performed in the Shoklo Malaria Research Unit postnatal care units on the Thailand-Myanmar border. We aimed to evaluate a simple visual newborn test developed in Italy and performed by non-specialized personnel working in neonatal care units. An intensive training of local health staff in Thailand was conducted prior to performing assessments at 24, 48 and 72 hours of life in healthy, low-risk term singletons. The 48- and 72-hour results were then compared to values obtained in Italy. Parents and staff administering the test reported on acceptability. One hundred and seventy-nine newborns, between June 2011 and October 2012, participated in the study. The test was rapidly completed if the infant remained in an optimal behavioral stage (7 ± 2 minutes) but the test duration increased significantly (12 ± 4 minutes, p < 0.001) if its behavior changed. Infants were able to fix a target and to discriminate a colored face at 24 hours of life. Horizontal tracking of a target was achieved by 96% (152/159) of the infants at 48 hours. Circular tracking, stripe discrimination and attention to distance significantly improved between each 24-hour test period. The test was easily performed by non-specialized local staff and well accepted by the parents. Healthy term singletons in this limited-resource setting have a visual response similar to that obtained in gestational-age-matched newborns in Italy. It is possible to use these results as a reference set of values for the visual assessment of Karen and Burmese infants in the first 72 hours of life. The utility of the 24 hours test should be pursued.

  17. Errors in the determination of the limits of detection using JEOL's electron microprobe interface.

    NASA Astrophysics Data System (ADS)

    Tonkacheev, Dmitry

    2017-04-01

    The first commercially available electron microprobe was made in the middle of the XX century. Today, this technique for determining the chemical composition of matter has many applications in Geoscience, including trace element analysis. During our work in the field of spectroscopy of minerals, it was necessary to determine the EPMA limits of detection for trace elements in sulphides. We measured several samples of synthetic sulfides (sphalerite, covellite) with gold concentrations in the range from 15 to 5000 ppm using a JEOL JXA-8200 at IGEM RAS and a JEOL JXA-8230 at MSU, each equipped with an energy-dispersive and five wavelength-dispersive spectrometers, employing different crystals (PETH or LIFH), modes (integral or differential), acceleration voltages, counting times, and beam sizes. We calculated the real limit of detection using the equation from the EPMA JXA-8200 Manual and [Reed, 2000]. Our values did not correspond with those displayed on the screen after the analysis; the detection limits we calculated differed from the instrument-reported values by a factor of 8 to 13. We suggest that the discrepancy between the displayed and the real values may reflect the manufacturer's interest in presenting its instruments favourably. We firmly recommend checking these values when performing trace element analysis. References: JEOL JXA-8200 Manual; Reed, S.J.B. (2000) Quantitative trace analysis by wavelength-dispersive EPMA. Mikrochim. Acta 132, 145-151.

  18. Type I errors and power of the parametric bootstrap goodness-of-fit test: full and limited information.

    PubMed

    Tollenaar, Nikolaj; Mooijaart, Ab

    2003-11-01

    In sparse tables for categorical data, well-known goodness-of-fit statistics are not chi-square distributed. A consequence is that model selection becomes a problem. It has been suggested that a way out of this problem is the use of the parametric bootstrap. In this paper, the parametric bootstrap goodness-of-fit test is studied by means of an extensive simulation study; the Type I error rates and power of this test are studied under several conditions of sparseness. In the presence of sparseness, models were used that were likely to violate the regularity conditions. Besides bootstrapping the goodness-of-fit statistics usually used (full-information statistics), corrected versions of these statistics and a limited-information statistic are bootstrapped. These bootstrap tests were also compared to an asymptotic test using limited information. Results indicate that bootstrapping the usual statistics fails because these tests are too liberal, and that bootstrapping or asymptotically testing the limited-information statistic works better with respect to Type I error and outperforms the other statistics by far in terms of statistical power. The properties of all tests are illustrated using categorical Markov models.
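    The generic procedure being evaluated above is straightforward to outline: fit the model, compute a goodness-of-fit statistic, simulate many datasets from the fitted model, refit and recompute the statistic on each, and take the p-value as the fraction of simulated statistics at least as large as the observed one. The sketch below applies this to a full-information Pearson statistic for a row-column independence model on a small sparse table; the table counts and bootstrap settings are assumptions, and the limited-information statistics the paper favours are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(7)

def pearson_x2(observed, expected):
    mask = expected > 0
    return np.sum((observed[mask] - expected[mask]) ** 2 / expected[mask])

def fit_independence(table):
    """Expected counts under row-column independence (product of margins / total)."""
    return np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()

def parametric_bootstrap_p(table, n_boot=2000):
    expected = fit_independence(table)
    x2_obs = pearson_x2(table, expected)
    n = int(table.sum())
    probs = (expected / n).ravel()
    exceed = 0
    for _ in range(n_boot):
        sim = rng.multinomial(n, probs).reshape(table.shape).astype(float)
        if pearson_x2(sim, fit_independence(sim)) >= x2_obs:
            exceed += 1
    return exceed / n_boot

# A sparse 4x5 table with many zero cells (illustrative counts)
table = np.array([[3, 0, 1, 0, 2],
                  [0, 4, 0, 1, 0],
                  [2, 0, 0, 3, 1],
                  [0, 1, 2, 0, 0]], dtype=float)
print("parametric bootstrap p-value:", parametric_bootstrap_p(table))
```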

  19. Application of thresholds of potential concern and limits of acceptable change in the condition assessment of a significant wetland.

    PubMed

    Rogers, Kerrylee; Saintilan, Neil; Colloff, Matthew J; Wen, Li

    2013-10-01

    We propose a framework in which thresholds of potential concern (TPCs) and limits of acceptable change (LACs) are used in concert in the assessment of wetland condition and vulnerability and apply the framework in a case study. The lower Murrumbidgee River floodplain (the 'Lowbidgee') is one of the most ecologically important wetlands in Australia and the focus of intense management intervention by State and Federal government agencies. We used a targeted management stakeholder workshop to identify key values that contribute to the ecological significance of the Lowbidgee floodplain, and identified LACs that, if crossed, would signify the loss of significance. We then used conceptual models linking the condition of these values (wetland vegetation communities, waterbirds, fish species and the endangered southern bell frog) to measurable threat indicators, for which we defined a management goal and a TPC. We applied this framework to data collected across 70 wetland 'storages', or eco-hydrological units, at the peak of a prolonged drought (2008) and following extensive re-flooding (2010). At the suggestion of water and wetland managers, we neither aggregated nor integrated indices but reported them separately in a series of choropleth maps. The resulting assessment clearly identified the effect of rewetting in restoring indicators to within their TPCs in most cases, for most storages. The scale of assessment was useful in informing targeted and timely management intervention and provided a context for retaining and utilising monitoring information in an adaptive management context.
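
    As an illustration of how the two kinds of thresholds might be applied together, the sketch below classifies indicator values for each eco-hydrological unit against its TPC and LAC. The indicator names and threshold values are invented for illustration and are not those used in the study; indicators are assumed to be of the "higher is better" type.

      # Classify each indicator for each storage (eco-hydrological unit).
      def classify(value, tpc, lac):
          if value < lac:
              return "LAC crossed: ecological significance at risk"
          if value < tpc:
              return "TPC crossed: management intervention triggered"
          return "within management goal"

      storages = {"unit_07": {"waterbird_abundance": 120, "vegetation_cover": 0.35},
                  "unit_12": {"waterbird_abundance": 900, "vegetation_cover": 0.70}}
      thresholds = {"waterbird_abundance": (500, 100),   # (TPC, LAC), illustrative
                    "vegetation_cover": (0.50, 0.20)}

      for unit, indicators in storages.items():
          for name, value in indicators.items():
              tpc, lac = thresholds[name]
              print(unit, name, classify(value, tpc, lac))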

  20. [Allowable limits of analytical error which can guarantee the reliability of reference intervals for interpretation of clinical laboratory data].

    PubMed

    Hosogaya, Shigemi; Ozaki, Yukio

    2008-07-01

    The International Organization for Standardization (ISO) developed a guide to the expression of uncertainty in measurement (GUM). The purpose of such guidance is to provide a basis for the international comparison of measurement results. In this study, we propose a basic protocol to evaluate and express uncertainty in measurement for routine test results in the clinical laboratory. We also sought to investigate the effects of measurement errors on the evaluation of biological variations in healthy subjects. To this end, we analyzed the allowable limits of analytical error which guarantee the reliability of reference intervals for the interpretation of clinical laboratory data. In conclusion, we suggest that 1/2 or less of the biological intra-individual variation is an appropriate criterion for an allowable limit of uncertainty to be applied in health check-ups, and this value is in agreement with previous reports. If this criterion for intra-laboratory imprecision is met, a given institute should be able to evaluate time-series changes in the follow-up of individual data. If the reference interval of laboratory data for disease screening is shared by different institutes, a criterion of 1/4 or less of the combined biological inter- plus intra-individual variation is appropriate. This criterion appears to be the goal for analytical inter-laboratory variation.
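
    The two criteria stated above translate into simple formulas; the sketch below writes them out, with illustrative coefficient-of-variation values only.

      import math

      def allowable_cv_for_monitoring(cv_intra):
          """Allowable analytical imprecision for within-individual follow-up:
          one half of the biological intra-individual variation."""
          return 0.5 * cv_intra

      def allowable_cv_for_shared_reference_interval(cv_intra, cv_inter):
          """Allowable analytical imprecision when a reference interval is shared
          between institutes: one quarter of the combined inter- plus
          intra-individual biological variation."""
          return 0.25 * math.sqrt(cv_intra**2 + cv_inter**2)

      # Illustrative biological CVs (in %) for a hypothetical analyte:
      print(allowable_cv_for_monitoring(5.0))                       # 2.5 %
      print(allowable_cv_for_shared_reference_interval(5.0, 10.0))  # ~2.8 %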

  1. Analysis and mitigation of systematic errors in spectral shearing interferometry of pulses approaching the single-cycle limit [Invited]

    SciTech Connect

    Birge, Jonathan R.; Kaertner, Franz X.

    2008-06-15

    We derive an analytical approximation for the measured pulse width error in spectral shearing methods, such as spectral phase interferometry for direct electric-field reconstruction (SPIDER), caused by an anomalous delay between the two sheared pulse components. This analysis suggests that, as pulses approach the single-cycle limit, the resulting requirements on the calibration and stability of this delay become significant, requiring precision orders of magnitude higher than the scale of a wavelength. This is demonstrated by numerical simulations of SPIDER pulse reconstruction using actual data from a sub-two-cycle laser. We briefly propose methods to minimize the effects of this sensitivity in SPIDER and review variants of spectral shearing that attempt to avoid this difficulty.

  2. Water-balance uncertainty in Honduras: a limits-of-acceptability approach to model evaluation using a time-variant rating curve

    NASA Astrophysics Data System (ADS)

    Westerberg, I.; Guerrero, J.-L.; Beven, K.; Seibert, J.; Halldin, S.; Lundin, L.-C.; Xu, C.-Y.

    2009-04-01

    The climate of Central America is highly variable both spatially and temporally; extreme events like floods and droughts are recurrent phenomena posing great challenges to regional water-resources management. Scarce and low-quality hydro-meteorological data complicate hydrological modelling, and few previous studies have addressed the water balance in Honduras. In the alluvial Choluteca River, the river bed changes over time as fill and scour occur in the channel, leading to a fast-changing relation between stage and discharge and difficulties in deriving consistent rating curves. In this application of a four-parameter water-balance model, a limits-of-acceptability approach to model evaluation was used within the Generalized Likelihood Uncertainty Estimation (GLUE) framework. The limits of acceptability were determined for discharge alone for each time step, and ideally a simulated result should always be contained within the limits. A moving-window weighted fuzzy regression of the ratings, based on estimated uncertainties in the rating-curve data, was used to derive the limits. This provided an objective way to determine the limits of acceptability and handle the non-stationarity of the rating curves. The model was then applied within GLUE and evaluated using the derived limits. Preliminary results show that the best simulations are within the limits 75-80% of the time, indicating that precipitation data and other uncertainties like model structure also have a significant effect on predictability.
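
    A minimal sketch of the limits-of-acceptability idea (generic code, not the study's implementation): each candidate parameter set is retained as behavioural only if its simulated discharge falls inside the time-varying acceptability limits at a sufficiently high fraction of time steps.

      import numpy as np

      def within_limits_fraction(sim_q, lower, upper):
          """Fraction of time steps at which simulated discharge lies inside
          the (time-varying) limits of acceptability."""
          sim_q, lower, upper = np.asarray(sim_q), np.asarray(lower), np.asarray(upper)
          return np.mean((sim_q >= lower) & (sim_q <= upper))

      def glue_screen(simulations, lower, upper, required_fraction=1.0):
          """Return indices of behavioural parameter sets and their containment fractions.

          simulations: array (n_parameter_sets, n_time_steps) of simulated discharge.
          required_fraction = 1.0 demands containment at every time step; the study
          reports its best runs inside the limits 75-80% of the time.
          """
          fractions = np.array([within_limits_fraction(s, lower, upper) for s in simulations])
          return np.nonzero(fractions >= required_fraction)[0], fractions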

  3. Building sustainable communities using sense of place indicators in three Hudson River Valley, NY, tourism destinations: An application of the limits of acceptable change process

    Treesearch

    Laura E. Sullivan; Rudy M. Schuster; Diane M. Kuehn; Cheryl S. Doble; Duarte. Morais

    2010-01-01

    This study explores whether measures of residents' sense of place can act as indicators in the Limits of Acceptable Change (LAC) process to facilitate tourism planning and management. Data on community attributes valued by residents and the associated values and meanings were collected through focus groups with 27 residents in three Hudson River Valley, New York,...

  4. Accelerated failure time model for case-cohort design with longitudinal covariates subject to measurement error and detection limits.

    PubMed

    Dong, Xinxin; Kong, Lan; Wahed, Abdus S

    2016-04-15

    Biomarkers are often measured over time in epidemiological studies and clinical trials for better understanding of the mechanism of diseases. In large cohort studies, case-cohort sampling provides a cost-effective method to collect expensive biomarker data for revealing the relationship between biomarker trajectories and time to event. However, biomarker measurements are often limited by the sensitivity and precision of a given assay, resulting in data that are censored at detection limits and prone to measurement errors. Additionally, the occurrence of an event of interest may preclude biomarkers from being further evaluated. Inappropriate handling of these types of data can lead to biased conclusions. Under a classical case-cohort design, we propose a modified likelihood-based approach to accommodate these special features of longitudinal biomarker measurements in accelerated failure time models. The maximum likelihood estimators based on the full likelihood function are obtained by the Gaussian quadrature method. We evaluate the performance of our case-cohort estimator and compare its relative efficiency to the full cohort estimator through simulation studies. The proposed method is further illustrated using data from a biomarker study of sepsis among patients with community-acquired pneumonia.
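
    To illustrate one ingredient of such an approach (a generic sketch, not the authors' estimator): when a biomarker is assumed log-normal and an observation falls below the assay detection limit, its likelihood contribution is the probability mass below the limit rather than a density value.

      import numpy as np
      from scipy.stats import norm

      def log_likelihood_with_detection_limit(obs, detection_limit, mu, sigma):
          """Log-likelihood for log-normal biomarker values left-censored at a detection limit.

          obs: measured values; entries below detection_limit are treated as censored.
          mu, sigma: mean and standard deviation of the biomarker on the log scale.
          """
          obs = np.asarray(obs, dtype=float)
          censored = obs < detection_limit
          # Detected values contribute a log-normal density term.
          z = (np.log(obs[~censored]) - mu) / sigma
          ll = np.sum(norm.logpdf(z) - np.log(sigma) - np.log(obs[~censored]))
          # Censored values contribute the probability of falling below the limit.
          z_dl = (np.log(detection_limit) - mu) / sigma
          ll += censored.sum() * norm.logcdf(z_dl)
          return ll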

  5. 50 CFR 648.53 - Acceptable biological catch (ABC), annual catch limits (ACL), annual catch targets (ACT), DAS...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... divided as sub-ACLs between limited access vessels, limited access vessels that are fishing under a LAGC... adjustment. (i) The limited access fishery sub-ACLs for fishing years 2014 and 2015 are: (A) 2014: 18,885 mt...). (i) The ACLs for fishing years 2014 and 2015 for LAGC IFQ vessels without a limited access...

  6. 50 CFR 648.53 - Acceptable biological catch (ABC), annual catch limits (ACL), annual catch targets (ACT), DAS...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... divided as sub-ACLs between limited access vessels, limited access vessels that are fishing under a LAGC... adjustment. (i) The limited access fishery sub-ACLs for fishing years 2013 and 2014 are: (A) 2013: 19,093 mt... paragraph (a). (i) The ACLs for fishing years 2013 and 2014 for LAGC IFQ vessels without a limited...

  7. Setting acceptable exposure limits for toluene diisocyanate on the basis of different airway effects observed in animals.

    PubMed

    Borm, P J; Jorna, T H; Henderson, P T

    1990-08-01

    Little epidemiological data are available to enable the development of a dose-response relationship for the effects of isocyanates, powerful sensitizing agents in humans. Remarkably, most classes of effects have been reproduced in some animal models and parallels between animals and man are impressive. In this paper animal data concerning different effects of TDI on the respiratory system were used to calculate acceptable exposure levels for humans. Animal data on respiratory irritation, sensitization, airway hyperresponsiveness, and gradual loss of pulmonary function are discussed. Two different approaches for extrapolation to man were applied to these data. The two models used to extrapolate animal data to man gave similar results. The extrapolations lead to acceptable exposure varying from 6 to 46 ppb. Most international acceptable levels for occupational airborne TDI exposure are within this range. Interestingly, the lowest standard is obtained using the data on respiratory irritation. It is, however, concluded that there is no critical (adverse) effect to define acceptable toluene diisocyanate exposure since the data were obtained from different studies and the accuracy of the applied extrapolation approach might depend on the biological effect considered. We recommend prior testing of "alternative" diisocyanates in one of the animal models described and calibrated for TDI.

  8. Setting acceptable exposure limits for toluene diisocyanate on the basis of different airway effects observed in animals

    SciTech Connect

    Borm, P.J.; Jorna, T.H.; Henderson, P.T. )

    1990-08-01

    Little epidemiological data are available to enable the development of a dose-response relationship for the effects of isocyanates, powerful sensitizing agents in humans. Remarkably, most classes of effects have been reproduced in some animal models and parallels between animals and man are impressive. In this paper animal data concerning different effects of TDI on the respiratory system were used to calculate acceptable exposure levels for humans. Animal data on respiratory irritation, sensitization, airway hyperresponsiveness, and gradual loss of pulmonary function are discussed. Two different approaches for extrapolation to man were applied to these data. The two models used to extrapolate animal data to man gave similar results. The extrapolations lead to acceptable exposure varying from 6 to 46 ppb. Most international acceptable levels for occupational airborne TDI exposure are within this range. Interestingly, the lowest standard is obtained using the data on respiratory irritation. It is, however, concluded that there is no critical (adverse) effect to define acceptable toluene diisocyanate exposure since the data were obtained from different studies and the accuracy of the applied extrapolation approach might depend on the biological effect considered. We recommend prior testing of alternative diisocyanates in one of the animal models described and calibrated for TDI.

  9. Limits on m = 2, n = 1 error field induced locked mode instability in TPX with typical sources of poloidal field coil error field and a prototype correction coil, "C-coil"

    SciTech Connect

    La Haye, R.J.

    1992-12-01

    Irregularities in the winding or alignment of poloidal or toroidal magnetic field coils in tokamaks produce resonant low m, n = 1 static error fields. Otherwise stable discharges can become nonlinearly unstable, and locked modes can occur with subsequent disruption when subjected to modest m = 2, n = 1 external perturbations. Using both theory and the results of error field/locked mode experiments on DIII-D and other tokamaks, the critical m = 2, n = 1 applied error field for locked mode instability in TPX is calculated for discharges with ohmic, neutral beam, or rf heating. Ohmic discharges are predicted to be most sensitive, but even co-injected neutral beam discharges (at β_N = 3) in TPX will require keeping the relative 2,1 error field (B_r21/B_T) below 2 × 10^-4. The error fields resulting from "as-built" alignment irregularities of various poloidal field coils are computed. Even well-designed coils must be positioned to within 3 mm with respect to the toroidal field to keep the total 2,1 error field within limits. Failing this, a set of prototype correction coils is analyzed for use in bringing the 2,1 error field down to a tolerable level.

  10. Limits on m = 2, n = 1 error field induced locked mode instability in TPX with typical sources of poloidal field coil error field and a prototype correction coil, "C-coil"

    SciTech Connect

    La Haye, R.J.

    1992-12-01

    Irregularities in the winding or alignment of poloidal or toroidal magnetic field coils in tokamaks produce resonant low m, n = 1 static error fields. Otherwise stable discharges can become nonlinearly unstable, and locked modes can occur with subsequent disruption when subjected to modest m = 2, n = 1 external perturbations. Using both theory and the results of error field/locked mode experiments on DIII-D and other tokamaks, the critical m = 2, n = 1 applied error field for locked mode instability in TPX is calculated for discharges with ohmic, neutral beam, or rf heating. Ohmic discharges are predicted to be most sensitive, but even co-injected neutral beam discharges (at β_N = 3) in TPX will require keeping the relative 2,1 error field (B_r21/B_T) below 2 × 10^-4. The error fields resulting from "as-built" alignment irregularities of various poloidal field coils are computed. Even well-designed coils must be positioned to within 3 mm with respect to the toroidal field to keep the total 2,1 error field within limits. Failing this, a set of prototype correction coils is analyzed for use in bringing the 2,1 error field down to a tolerable level.

  11. 50 CFR 648.53 - Acceptable biological catch (ABC), annual catch limits (ACL), annual catch targets (ACT), DAS...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... divided as sub-ACLs between limited access vessels, limited access vessels that are fishing under a... limited access fishery sub-ACLs for fishing years 2011 through 2013 are: (A) 2011: 24,954 mt. (B) 2012: 26... catch, observer set-aside, and research set-aside, as specified in this paragraph (a). The LAGC ACLs...

  12. 50 CFR 648.53 - Acceptable biological catch (ABC), annual catch limits (ACL), annual catch targets (ACT), DAS...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... catch limits (ACL), annual catch targets (ACT), DAS allocations, and individual fishing quotas (IFQ... limits (ACL), annual catch targets (ACT), DAS allocations, and individual fishing quotas (IFQ). (a... limited access scallop fishery shall be allocated 94.5 percent of the ACL specified in paragraph (a)(1)...

  13. Achieving the Complete-Basis Limit in Large Molecular Clusters: Computationally Efficient Procedures to Eliminate Basis-Set Superposition Error

    NASA Astrophysics Data System (ADS)

    Richard, Ryan M.; Herbert, John M.

    2013-06-01

    Previous electronic structure studies that have relied on fragmentation have been primarily interested in those methods' ability to replicate the supersystem energy (or a related energy difference), without regard to the ability of those supersystem results to replicate experiment or high-accuracy benchmarks. Here we focus on replicating accurate ab initio benchmarks that are suitable for comparison to experimental data. In doing this it becomes imperative that we correct our methods for basis-set superposition errors (BSSE) in a computationally feasible way. This criterion leads us to develop a new method for BSSE correction, which we term the many-body counterpoise correction, or MBn for short. MBn is truncated at order n, in much the same manner as a normal many-body expansion, leading to a decrease in computational time. Furthermore, its formulation in terms of fragments makes it especially suitable for use with pre-existing fragment codes. A secondary focus of this study is directed at assessing fragment methods' ability to extrapolate to the complete basis set (CBS) limit as well as to compute approximate triples corrections. Ultimately, by analysis of (H_2O)_6 and (H_2O)_{10}F^- systems, it is concluded that with large enough basis sets (triple- or quadruple-zeta) fragment-based methods can replicate high-level benchmarks in a fraction of the time.

  14. Effect of model error on precipitation forecasts in the high-resolution limited area ensemble prediction system of the Korea Meteorological Administration

    NASA Astrophysics Data System (ADS)

    Kim, SeHyun; Kim, Hyun Mee

    2015-04-01

    In numerical weather prediction using convective-scale model resolution, forecast uncertainties are caused by initial condition error, boundary condition error, and model error. Because convective-scale forecasts are influenced by subgrid-scale processes which cannot be resolved easily, the model error becomes more important than the initial and boundary condition errors. To consider the model error, multi-model and multi-physics methods use several models and physics schemes, while stochastic physics methods use random numbers to create a noise term in the model equations (e.g. Stochastic Perturbed Parameterization Tendency (SPPT), Stochastic Kinetic Energy Backscatter (SKEB), Stochastic Convective Vorticity (SCV), and Random Parameters (RP)). In this study, the RP method was used to consider the model error in the high-resolution limited area ensemble prediction system (EPS) of the Korea Meteorological Administration (KMA). The EPS has 12 ensemble members with 3 km horizontal resolution and generates 48 h forecasts. The initial and boundary conditions were provided by the global EPS of the KMA. The RP method was applied to the microphysics and boundary layer schemes, and the ensemble forecasts using RP were compared with those without RP during July 2013. Both the root mean square error (RMSE) and the spread of 10 m wind, verified against surface Automatic Weather System (AWS) observations, decreased when using RP. However, for 1-hour accumulated precipitation, the spread increased with RP, and the Equitable Threat Score (ETS) showed different results for each rainfall event.
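
    For reference, a generic sketch (not the study's verification code) of the Equitable Threat Score computed from a 2x2 contingency table of forecast versus observed exceedances of a precipitation threshold:

      def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
          """ETS (Gilbert skill score) from a 2x2 contingency table.

          hits_random is the number of hits expected by chance; ETS ranges from
          -1/3 to 1, with 0 indicating no skill over a random forecast.
          """
          total = hits + misses + false_alarms + correct_negatives
          hits_random = (hits + misses) * (hits + false_alarms) / total
          return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

      # Illustrative counts for 1-h accumulated precipitation > 1 mm at AWS stations:
      print(equitable_threat_score(hits=40, misses=20, false_alarms=30,
                                   correct_negatives=910))   # ~0.42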

  15. Personal digital assistants to collect tuberculosis bacteriology data in Peru reduce delays, errors, and workload, and are acceptable to users: cluster randomized controlled trial

    PubMed Central

    Blaya, Joaquín A.; Cohen, Ted; Rodríguez, Pablo; Kim, Jihoon; Fraser, Hamish S.F.

    2009-01-01

    Summary Objectives To evaluate the effectiveness of a personal digital assistant (PDA)-based system for collecting tuberculosis test results and to compare this new system to the previous paper-based system. The PDA- and paper-based systems were evaluated based on processing times, frequency of errors, and number of work-hours expended by data collectors. Methods We conducted a cluster randomized controlled trial in 93 health establishments in Peru. Baseline data were collected for 19 months. Districts (n = 4) were then randomly assigned to intervention (PDA) or control (paper) groups, and further data were collected for 6 months. Comparisons were made between intervention and control districts and within-districts before and after the introduction of the intervention. Results The PDA-based system had a significant effect on processing times (p < 0.001) and errors (p = 0.005). In the between-districts comparison, the median processing time for cultures was reduced from 23 to 8 days and for smears was reduced from 25 to 12 days. In that comparison, the proportion of cultures with delays >90 days was reduced from 9.2% to 0.1% and the number of errors was decreased by 57.1%. The intervention reduced the work-hours necessary to process results by 70% and was preferred by all users. Conclusions A well-designed PDA-based system to collect data from institutions over a large, resource-poor area can significantly reduce delays, errors, and person-hours spent processing data. PMID:19097925

  16. In vivo erythrocyte micronucleus assay III. Validation and regulatory acceptance of automated scoring and the use of rat peripheral blood reticulocytes, with discussion of non-hematopoietic target cells and a single dose-level limit test.

    PubMed

    Hayashi, Makoto; MacGregor, James T; Gatehouse, David G; Blakey, David H; Dertinger, Stephen D; Abramsson-Zetterberg, Lilianne; Krishna, Gopala; Morita, Takeshi; Russo, Antonella; Asano, Norihide; Suzuki, Hiroshi; Ohyama, Wakako; Gibson, Dave

    2007-02-03

    The in vivo micronucleus assay working group of the International Workshop on Genotoxicity Testing (IWGT) discussed new aspects in the in vivo micronucleus (MN) test, including the regulatory acceptance of data derived from automated scoring, especially with regard to the use of flow cytometry, the suitability of rat peripheral blood reticulocytes to serve as the principal cell population for analysis, the establishment of in vivo MN assays in tissues other than bone marrow and blood (for example liver, skin, colon, germ cells), and the biological relevance of the single-dose-level test. Our group members agreed that flow cytometric systems to detect induction of micronucleated immature erythrocytes have advantages based on the presented data, e.g., they give good reproducibility compared to manual scoring, are rapid, and require only small quantities of peripheral blood. Flow cytometric analysis of peripheral blood reticulocytes has the potential to allow monitoring of chromosome damage in rodents and also other species as part of routine toxicology studies. It appears that it will be applicable to humans as well, although in this case the possible confounding effects of splenic activity will need to be considered closely. Also, the consensus of the group was that any system that meets the validation criteria recommended by the IWGT (2000) should be acceptable. A number of different flow cytometric-based micronucleus assays have been developed, but at the present time the validation data are most extensive for the flow cytometric method using anti-CD71 fluorescent staining especially in terms of inter-laboratory collaborative data. Whichever method is chosen, it is desirable that each laboratory should determine the minimum sample size required to ensure that scoring error is maintained below the level of animal-to-animal variation. In the second IWGT, the potential to use rat peripheral blood reticulocytes as target cells for the micronucleus assay was discussed

  17. Error Analysis

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Input data as well as the results of elementary operations have to be represented by machine numbers, the subset of real numbers which is used by the arithmetic unit of today's computers. Generally this generates rounding errors. This kind of numerical error can be avoided in principle by using arbitrary-precision arithmetic or symbolic algebra programs, but this is impractical in many cases due to the increase in computing time and memory requirements. Results from more complex operations like square roots or trigonometric functions can have even larger errors, since series expansions have to be truncated and iterations accumulate the errors of the individual steps. In addition, the precision of input data from an experiment is limited. In this chapter we study the influence of numerical errors on the uncertainties of the calculated results and the stability of simple algorithms.
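
    A small illustration of the rounding-error behaviour described here (standard IEEE-754 double precision; the examples are generic, not taken from the chapter):

      import math

      # 0.1 has no exact binary representation, so repeated addition accumulates rounding error.
      total = sum(0.1 for _ in range(10))
      print(total == 1.0)            # False
      print(total - 1.0)             # about -1.1e-16 for IEEE-754 doubles

      # Catastrophic cancellation: subtracting nearly equal numbers loses significant digits.
      x = 1e-8
      naive = 1.0 - math.cos(x)              # suffers cancellation, returns 0.0
      stable = 2.0 * math.sin(x / 2.0)**2    # mathematically identical, numerically accurate
      print(naive, stable)                   # 0.0 versus ~5e-17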

  18. DWPF COAL-CARBON WASTE ACCEPTANCE CRITERIA LIMIT EVALUATION BASED ON EXPERIMENTAL WORK (TANK 48 IMPACT STUDY)

    SciTech Connect

    Lambert, D.; Choi, A.

    2010-10-15

    This report summarizes the results of both experimental and modeling studies performed using Sludge Batch 10 (SB10) simulants and FBSR product from Tank 48 simulant testing in order to develop higher levels of coal-carbon that can be managed by DWPF. Once the Fluidized Bed Steam Reforming (FBSR) process starts up for treatment of Tank 48 legacy waste, the FBSR product stream will contribute higher levels of coal-carbon in the sludge batch for processing at DWPF. Coal-carbon is added into the FBSR process as a reductant and some of it will be present in the FBSR product as unreacted coal. The FBSR product will be slurried in water, transferred to Tank Farm and will be combined with sludge and washed to produce the sludge batch that DWPF will process. The FBSR product is high in both water soluble sodium carbonate and unreacted coal-carbon. Most of the sodium carbonate is removed during washing but all of the coal-carbon will remain and become part of the DWPF sludge batch. A paper study was performed earlier to assess the impact of FBSR coal-carbon on the DWPF Chemical Processing Cell (CPC) operation and melter off-gas flammability by combining it with SB10-SB13. The results of the paper study are documented in Ref. 7 and the key findings included that SB10 would be the most difficult batch to process with the FBSR coal present and up to 5,000 mg/kg of coal-carbon could be fed to the melter without exceeding the off-gas flammability safety basis limits. In the present study, a bench-scale demonstration of the DWPF CPC processing was performed using SB10 simulants spiked with varying amounts of coal, and the resulting seven CPC products were fed to the DWPF melter cold cap and off-gas dynamics models to determine the maximum coal that can be processed through the melter without exceeding the off-gas flammability safety basis limits. Based on the results of these experimental and modeling studies, the presence of coal-carbon in the sludge feed to DWPF is found to have

  19. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, Vicki

    1997-01-01

    A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research of flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  1. Improvement of synchrotron radiation mirrors below the 0.1-arcsec rms slope error limit with the help of a long trace profiler

    NASA Astrophysics Data System (ADS)

    Lammert, Heiner; Senf, Friedmar; Berger, Marion

    1997-11-01

    Traditional optical manufacturing methods, employing both conventional and modern interferometric techniques, enable one to measure surface deviations to high accuracy, e.g. up to λ/100 for flats (6 nm P-V). In synchrotron radiation applications the slope error is an important criterion for the quality of optical surfaces. In order to predict the performance of a synchrotron radiation mirror, the slope errors of the surface must be known. Up to now, the highest achievable accuracy in the production of synchrotron radiation mirrors and in the measuring methods did not fall significantly below the 0.1 arcsec rms limit (spherical and flat surfaces). A long trace profiler (LTP) is ideally suited for this task since it directly measures slope deviations with high precision. On the other hand, an LTP becomes very sensitive to random and systematic errors at the limit of 0.1 arcsec. The main influence is the variation of the surrounding temperature, which creates temporal and local temperature gradients at the instrument. At BESSY both temperature and vibrations are monitored at the most sensitive points of the LTP. In 1996 BESSY started a collaboration with a neighboring optical workshop, combining traditional manufacturing technology with quasi-in-process high-precision LTP measurements. As a result of this mutual polishing and LTP measuring process, flat surfaces have been repeatedly produced with slope errors of 0.05 arcsec rms, e.g. 1 nm rms and 3 nm P-V (approximately λ/200).

  2. Limiter

    DOEpatents

    Cohen, S.A.; Hosea, J.C.; Timberlake, J.R.

    1984-10-19

    A limiter with a specially contoured front face is provided. The front face of the limiter (the plasma-side face) is flat with a central indentation. In addition, the limiter shape is cylindrically symmetric so that the limiter can be rotated for greater heat distribution. This limiter shape accommodates the various power scrape-off distances λ_p, which depend on the parallel velocity, V_∥, of the impacting particles.

  3. 13 CFR 124.504 - What circumstances limit SBA's ability to accept a procurement for award as an 8(a) contract?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... offer and acceptance. The procuring activity competed a requirement among Participants prior to offering... is offered to the 8(a) BD program. (2) In determining whether the acceptance of a requirement would... requests in writing that SBA decline to accept the offer prior to SBA's acceptance of the requirement...

  4. Limiter

    DOEpatents

    Cohen, Samuel A.; Hosea, Joel C.; Timberlake, John R.

    1986-01-01

    A limiter with a specially contoured front face accommodates the various power scrape-off distances λ_p, which depend on the parallel velocity, V_∥, of the impacting particles. The front face of the limiter (the plasma-side face) is flat with a central indentation. In addition, the limiter shape is cylindrically symmetric so that the limiter can be rotated for greater heat distribution.

  5. Adaptive tracking control for double-pendulum overhead cranes subject to tracking error limitation, parametric uncertainties and external disturbances

    NASA Astrophysics Data System (ADS)

    Zhang, Menghua; Ma, Xin; Rong, Xuewen; Tian, Xincheng; Li, Yibin

    2016-08-01

    In practical applications, overhead cranes are usually subject to system parameter uncertainties, such as uncertain payload masses, cable lengths, and friction, and to external disturbances, such as air resistance. Most existing crane control methods treat the payload swing as that of a single pendulum. However, certain types of payloads and hoisting mechanisms result in double-pendulum dynamics, and these double-pendulum effects make most existing crane control methods fail to work normally. Therefore, an adaptive tracking controller for double-pendulum overhead cranes subject to parametric uncertainties and external disturbances is developed in this paper. The proposed adaptive tracking control method guarantees that the trolley tracking error always remains within a pre-specified bound and converges to zero rapidly. The asymptotic stability of the closed-loop system's equilibrium point is assured by Lyapunov techniques and Barbalat's lemma. Simulation results show that the proposed adaptive tracking control method is robust with respect to system parametric uncertainties and external disturbances.

  6. Pitfalls in Inversion and Interpretation of Continuous Resistivity Profiling Data: Effects of Resolution Limitations and Measurement Error

    NASA Astrophysics Data System (ADS)

    Lane, J. W.; Day-Lewis, F. D.; Loke, M. H.; White, E. A.

    2005-12-01

    Water-borne continuous resistivity profiling (CRP), also called marine or streaming resistivity, increasingly is used to support hydrogeophysical studies in freshwater and saltwater environments. CRP can provide resistivity tomograms for delineation of focused ground-water discharge, identification of sediment types, and mapping the near-shore freshwater/saltwater interface. Data collection, performed with a boat-towed electrode streamer, is commonly fast and relatively straightforward. In contrast, data processing and interpretation are potentially time consuming and subject to pitfalls. Data analysis is difficult due to the underdetermined nature of the tomographic inverse problem and the poorly understood resolution of tomograms, which is a function of the measurement physics, survey geometry, measurement error, and inverse problem parameterization and regularization. CRP data analysis in particular is complicated by noise in the data, sources of which include water leaking into the electrode cable, inefficient data collection geometry, and electrode obstruction by vegetation in the water column. Preliminary modeling has shown that, as in other types of geotomography, inversions of CRP data tend to overpredict the extent of and underpredict the magnitude of resistivity anomalies. Previous work also has shown that the water layer has a strong effect on the measured apparent resistivity values as it commonly has a much lower resistivity than the subsurface. Here we use synthetic examples and inverted field data sets to (1) assess the ability of CRP to resolve hydrogeophysical targets of interest for a range of water depths and salinities; and (2) examine the effects of CRP streamer noise on inverted resistivity sections. Our results show that inversion and interpretation of CRP data should be guided by hydrologic insight, available data for bathymetry and water layer resistivity, and a reliable model of measurement errors.

  7. A closed-form representation of an upper limit error function and its interpretation on measurements with noise

    NASA Astrophysics Data System (ADS)

    Geise, Robert

    2017-07-01

    Any measurement of an electrical quantity, e.g. in network or spectrum analysis, is influenced by noise, which induces a measurement uncertainty whose statistical quantification is rarely discussed in the literature. A measurement uncertainty in such a context means a measurement error that is associated with a given probability, e.g. one standard deviation. The measurement uncertainty mainly depends on the signal-to-noise ratio (SNR), but can additionally be influenced by the acquisition stage of the measurement setup. The analytical treatment of noise is hardly feasible, as the physical nature of a noise vector requires accounting for both magnitude and phase in a combined probability function. However, in a previous work a closed-form analytical solution for the uncertainties of amplitude and phase measurements depending on the SNR was derived and validated. The derived formula turned out to be a good representation of the measured reality, though several approximations had to be made for the sake of an analytical expression. This contribution gives a physical interpretation of the approximations made and discusses the results in the context of the acquisition of measurement data.
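
    The closed-form expression itself is in the cited work; for orientation only, the sketch below gives the standard small-noise, first-order approximation for amplitude and phase uncertainty under additive Gaussian noise, with the SNR convention stated explicitly in the comments (an assumption, not the paper's formula).

      import math

      def first_order_uncertainties(snr_db):
          """First-order (one standard deviation) amplitude and phase uncertainties
          of a phasor in additive Gaussian noise, small-noise approximation.

          snr_db is the power signal-to-noise ratio in dB. With per-quadrature noise
          standard deviation sigma and signal amplitude A, SNR = A**2 / (2 * sigma**2),
          so sigma / A = 1 / sqrt(2 * SNR); to first order this is both the relative
          amplitude error and the phase error in radians.
          """
          snr_linear = 10.0 ** (snr_db / 10.0)
          sigma_over_a = 1.0 / math.sqrt(2.0 * snr_linear)
          amplitude_error_db = 20.0 * math.log10(1.0 + sigma_over_a)
          phase_error_deg = math.degrees(sigma_over_a)
          return amplitude_error_db, phase_error_deg

      print(first_order_uncertainties(40.0))   # roughly 0.06 dB and 0.4 degrees at 40 dB SNR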

  8. Action errors, error management, and learning in organizations.

    PubMed

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  9. Spatial frequency domain error budget

    SciTech Connect

    Hauschildt, H; Krulewich, D

    1998-08-27

    The aim of this paper is to describe a methodology for designing and characterizing machines used to manufacture or inspect parts with spatial-frequency-based specifications. At Lawrence Livermore National Laboratory, one of our responsibilities is to design or select the appropriate machine tools to produce advanced optical and weapons systems. Recently, many of the component tolerances for these systems have been specified in terms of the spatial frequency content of residual errors on the surface. We typically use an error budget as a sensitivity analysis tool to ensure that the parts manufactured by a machine will meet the specified component tolerances. Error budgets provide the formalism whereby we account for all sources of uncertainty in a process, and sum them to arrive at a net prediction of how "precisely" a manufactured component can meet a target specification. Using the error budget, we are able to minimize risk during initial stages by ensuring that the machine will produce components that meet specifications before the machine is actually built or purchased. However, the current error budgeting procedure provides no formal mechanism for designing machines that can produce parts with spatial-frequency-based specifications. The output from the current error budgeting procedure is a single number estimating the net worst case or RMS error on the work piece. This procedure has limited ability to differentiate between low spatial frequency form errors versus high frequency surface finish errors. Therefore the current error budgeting procedure can lead us to reject a machine that is adequate or accept a machine that is inadequate. This paper will describe a new error budgeting methodology to aid in the design and characterization of machines used to manufacture or inspect parts with spatial-frequency-based specifications. The output from this new procedure is the continuous spatial frequency content of errors that result on a machined part. If the machine
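
    A minimal sketch of the idea (illustrative only, not the procedure described in the paper): independent error sources are described by power spectral densities, summed in power, and integrated over spatial-frequency bands to obtain band-limited rms errors instead of a single net number.

      import numpy as np

      def band_rms(frequencies, psd, f_lo, f_hi):
          """RMS error within a spatial-frequency band, from a one-sided PSD.

          frequencies in 1/mm, psd in nm^2*mm; the returned rms is in nm.
          """
          band = (frequencies >= f_lo) & (frequencies <= f_hi)
          return np.sqrt(np.trapz(psd[band], frequencies[band]))

      # Illustrative PSDs for two independent error sources (e.g. spindle error, tool marks):
      f = np.logspace(-2, 1, 500)                    # spatial frequencies, 1/mm
      psd_form = 1.0e2 / (1.0 + (f / 0.05) ** 2)     # low-frequency figure error
      psd_finish = 5.0e-2 * np.ones_like(f)          # broadband finish error
      psd_total = psd_form + psd_finish              # independent sources add in power

      print("form band rms (0.01-0.1 /mm):", band_rms(f, psd_total, 0.01, 0.1), "nm")
      print("finish band rms (1-10 /mm):  ", band_rms(f, psd_total, 1.0, 10.0), "nm")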

  10. Quantification and correction of the error due to limited PIV resolution on the accuracy of non-intrusive spatial pressure measurement using a DNS channel flow database

    NASA Astrophysics Data System (ADS)

    Liu, Xiaofeng; Siddle-Mitchell, Seth

    2016-11-01

    The effect of the subgrid-scale (SGS) stress due to limited PIV resolution on pressure measurement accuracy is quantified using data from a direct numerical simulation database of turbulent channel flow (JHTDB). A series of 2000 consecutive realizations of sample block data with 512x512x49 grid nodal points were selected and spatially filtered with a coarse 17x17x17 and a fine 5x5x5 box averaging, respectively, giving rise to corresponding PIV resolutions of roughly 62.6 and 18.4 times the viscous length scale. Comparison of the reconstructed pressure at different levels of pressure gradient approximation with the filtered pressure shows that the neglect of the viscous term leads to a small but noticeable change in the reconstructed pressure, especially in regions near the channel walls. In contrast, the neglect of the SGS stress results in a more significant increase in both the bias and the random errors, indicating that the SGS term must be accounted for in PIV pressure measurement. Correction using similarity SGS modeling reduces the random error due to the omission of SGS stress from 114.5% of the filtered pressure r.m.s. fluctuation to 89.1% for the coarse PIV resolution, and from 66.5% to 35.9% for the fine PIV resolution, respectively, confirming the benefit of the error compensation method and the positive influence of increasing PIV resolution on pressure measurement accuracy improvement.

  11. Reverse-polynomial dilution calibration methodology extends lower limit of quantification and reduces relative residual error in targeted peptide measurements in blood plasma.

    PubMed

    Yau, Yunki Y; Duo, Xizi; Leong, Rupert W L; Wasinger, Valerie C

    2015-02-01

    Reverse-polynomial dilution techniques extend the Lower Limit of Quantification and reduce error (p = 0.005) in low-concentration plasma peptide assays, and are broadly applicable for verification-phase Tier 2 multiplexed multiple reaction monitoring assay development within the FDA-National Cancer Institute (NCI) biomarker development pipeline.

  12. Feasibility of establishing a biosafety level 3 tuberculosis culture laboratory of acceptable quality standards in a resource-limited setting: an experience from Uganda.

    PubMed

    Ssengooba, Willy; Gelderbloem, Sebastian J; Mboowa, Gerald; Wajja, Anne; Namaganda, Carolyn; Musoke, Philippa; Mayanja-Kizza, Harriet; Joloba, Moses Lutaakome

    2015-01-15

    Despite the recent innovations in tuberculosis (TB) and multi-drug resistant TB (MDR-TB) diagnosis, culture remains vital for difficult-to-diagnose patients, baseline and end-point determination for novel vaccines and drug trials. Herein, we share our experience of establishing a BSL-3 culture facility in Uganda as well as 3-years performance indicators and post-TB vaccine trials (pioneer) and funding experience of sustaining such a facility. Between September 2008 and April 2009, the laboratory was set-up with financial support from external partners. After an initial procedure validation phase in parallel with the National TB Reference Laboratory (NTRL) and legal approvals, the laboratory registered for external quality assessment (EQA) from the NTRL, WHO, National Health Laboratories Services (NHLS), and the College of American Pathologists (CAP). The laboratory also instituted a functional quality management system (QMS). Pioneer funding ended in 2012 and the laboratory remained in self-sustainability mode. The laboratory achieved internationally acceptable standards in both structural and biosafety requirements. Of the 14 patient samples analyzed in the procedural validation phase, agreement for all tests with NTRL was 90% (P <0.01). It started full operations in October 2009 performing smear microscopy, culture, identification, and drug susceptibility testing (DST). The annual culture workload was 7,636, 10,242, and 2,712 inoculations for the years 2010, 2011, and 2012, respectively. Other performance indicators of TB culture laboratories were also monitored. Scores from EQA panels included smear microscopy >80% in all years from NTRL, CAP, and NHLS, and culture was 100% for CAP panels and above regional average scores for all years with NHLS. Quarterly DST scores from WHO-EQA ranged from 78% to 100% in 2010, 80% to 100% in 2011, and 90 to 100% in 2012. From our experience, it is feasible to set-up a BSL-3 TB culture laboratory with acceptable quality

  13. Dose error analysis for a scanned proton beam delivery system

    NASA Astrophysics Data System (ADS)

    Coutrakon, G.; Wang, N.; Miller, D. W.; Yang, Y.

    2010-12-01

    All particle beam scanning systems are subject to dose delivery errors due to errors in position, energy and intensity of the delivered beam. In addition, finite scan speeds, beam spill non-uniformities, and delays in detector, detector electronics and magnet responses will all contribute errors in delivery. In this paper, we present dose errors for an 8 × 10 × 8 cm3 target of uniform water equivalent density with 8 cm spread out Bragg peak and a prescribed dose of 2 Gy. Lower doses are also analyzed and presented later in the paper. Beam energy errors and errors due to limitations of scanning system hardware have been included in the analysis. By using Gaussian shaped pencil beams derived from measurements in the research room of the James M Slater Proton Treatment and Research Center at Loma Linda, CA and executing treatment simulations multiple times, statistical dose errors have been calculated in each 2.5 mm cubic voxel in the target. These errors were calculated by delivering multiple treatments to the same volume and calculating the rms variation in delivered dose at each voxel in the target. The variations in dose were the result of random beam delivery errors such as proton energy, spot position and intensity fluctuations. The results show that with reasonable assumptions of random beam delivery errors, the spot scanning technique yielded an rms dose error in each voxel less than 2% or 3% of the 2 Gy prescribed dose. These calculated errors are within acceptable clinical limits for radiation therapy.
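
    A toy illustration of the error-estimation procedure described above (simplified 1D geometry with made-up error magnitudes, not the paper's beam model): the delivery simulation is repeated with random spot-position and intensity errors, and the rms dose deviation is computed per voxel.

      import numpy as np

      rng = np.random.default_rng(0)
      voxels = np.linspace(-4.0, 4.0, 33)     # 1D voxel centres, cm
      spots = np.linspace(-4.0, 4.0, 17)      # planned spot positions, cm
      sigma_spot = 0.5                        # pencil-beam width, cm
      prescribed = 2.0                        # Gy

      def deliver(position_error_mm=0.0, intensity_error_pct=0.0):
          """Dose profile from all spots, with random position (mm) and intensity (%) errors."""
          dose = np.zeros_like(voxels)
          for x0 in spots:
              x = x0 + rng.normal(0.0, position_error_mm / 10.0)       # mm -> cm
              w = 1.0 + rng.normal(0.0, intensity_error_pct / 100.0)
              dose += w * np.exp(-0.5 * ((voxels - x) / sigma_spot) ** 2)
          return dose

      nominal = deliver()
      scale = prescribed / nominal[len(voxels) // 2]      # calibrate the centre to 2 Gy
      deliveries = np.array([deliver(1.0, 2.0) for _ in range(200)]) * scale
      rms_error = np.sqrt(np.mean((deliveries - nominal * scale) ** 2, axis=0))
      print("max per-voxel rms error, % of prescription:",
            100.0 * rms_error.max() / prescribed)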

  14. Programming Errors in APL.

    ERIC Educational Resources Information Center

    Kearsley, Greg P.

    This paper discusses and provides some preliminary data on errors in APL programming. Data were obtained by analyzing listings of 148 complete and partial APL sessions collected from student terminal rooms at the University of Alberta. Frequencies of errors for the various error messages are tabulated. The data, however, are limited because they…

  15. Estimating errors in cloud amount and cloud optical thickness due to limited spatial sampling using a satellite imager as a proxy for nadir-view sensors

    NASA Astrophysics Data System (ADS)

    Liu, Yinghui

    2015-07-01

    Cloud climatologies from space-based active sensors have been used in climate and other studies without their uncertainties specified. This study quantifies the errors in monthly mean cloud amount and optical thickness due to the limited spatial sampling of space-based active sensors. Nadir-view observations from a satellite imager, the Moderate Resolution Imaging Spectroradiometer (MODIS), serve as a proxy for those active sensors, and observations within 10° of the sensor's nadir view serve as truth, for data from 2003 to 2013 in the Arctic. June-July monthly mean cloud amount and liquid water and ice cloud optical thickness from MODIS for both observation sets are calculated and compared. Results show that errors in monthly mean cloud amount and cloud optical thickness increase with decreasing sample numbers. The root-mean-square error of monthly mean cloud amount from nadir-view observations increases towards lower latitudes, with 0.7% (1.4%) at 80°N and 4.2% (11.2%) at 60°N using data from 2003 to 2013 (from 2012). For a 100 km resolution Equal-Area Scalable Earth Grid (EASE-Grid) cell with 1000 samples, the absolute differences in these two monthly mean cloud amounts are less than 6.5% (9.0%, 11.5%) with an 80 (90, 95)% chance; such differences decrease to 4.0% (5.0%, 6.5%) with 5000 samples. For a 100 km resolution EASE-Grid cell with 1000 samples, the absolute differences in these two monthly mean cloud optical thicknesses are less than 2.7 (3.8) with a 90% chance for liquid water cloud (ice cloud); such differences decrease to 1.3 (1.0) for 5000 samples. The uncertainties in monthly mean cloud amount and optical thickness estimated in this study may provide useful information for applying cloud climatologies from active sensors in climate studies and suggest the need for future spaceborne active sensors with a wide swath.

  16. On the validity of the basis set superposition error and complete basis set limit extrapolations for the binding energy of the formic acid dimer

    SciTech Connect

    Miliordos, Evangelos; Xantheas, Sotiris S.

    2015-03-07

    We report the variation of the binding energy of the Formic Acid Dimer with the size of the basis set at the Coupled Cluster with iterative Singles, Doubles and perturbatively connected Triple replacements [CCSD(T)] level of theory, estimate the Complete Basis Set (CBS) limit, and examine the validity of the Basis Set Superposition Error (BSSE) correction for this quantity, which was previously challenged by Kalescky, Kraka, and Cremer (KKC) [J. Chem. Phys. 140, 084315 (2014)]. Our results indicate that the BSSE correction, including terms that account for the substantial geometry change of the monomers due to the formation of two strong hydrogen bonds in the dimer, is indeed valid for obtaining accurate estimates of the binding energy of this system, as it exhibits the expected decrease with increasing basis set size. We attribute the discrepancy between our current results and those of KKC to their use of a valence basis set in conjunction with the correlation of all electrons (i.e., including the 1s of C and O). We further show that the use of a core-valence set in conjunction with all-electron correlation converges faster to the CBS limit, as the BSSE correction is less than half that of the valence-electron/valence-basis-set case. The uncorrected and BSSE-corrected binding energies were found to produce the same (within 0.1 kcal/mol) CBS limits. We obtain CCSD(T)/CBS best estimates of D_e = -16.1 ± 0.1 kcal/mol and D_0 = -14.3 ± 0.1 kcal/mol, the latter in excellent agreement with the experimental value of -14.22 ± 0.12 kcal/mol.
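
    For orientation, the counterpoise-corrected binding energy of a dimer AB, including the monomer deformation (geometry change) terms referred to above, can be written in the standard textbook form below; the subscript denotes the fragment, the superscript the basis set, and the argument the geometry at which the fragment is taken. This is the generic definition, not a result specific to this paper.

      \Delta E^{\mathrm{CP}}_{\mathrm{bind}} =
          E^{AB}_{AB}(AB) - E^{AB}_{A}(AB) - E^{AB}_{B}(AB)
          + \left[ E^{A}_{A}(AB) - E^{A}_{A}(A) \right]
          + \left[ E^{B}_{B}(AB) - E^{B}_{B}(B) \right]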

  17. Reduction of Maintenance Error Through Focused Interventions

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Rosekind, Mark R. (Technical Monitor)

    1997-01-01

    It is well known that a significant proportion of aviation accidents and incidents are tied to human error. In flight operations, research of operational errors has shown that so-called "pilot error" often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: to develop human factors interventions which are directly supported by reliable human error data, and to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  19. Futility interim monitoring with control of type I and II error probabilities using the interim Z-value or confidence limit.

    PubMed

    Lachin, John M

    2009-12-01

    It is highly desirable to terminate a clinical trial early if the emerging data suggests that the experimental treatment is ineffective, or substantially less effective than the level the study was designed to detect. Many studies have used a conditional power calculation as the basis for termination for futility. However, in order to compute conditional power one must posit an assumption about the distribution of the future data yet to be observed, such as that the original design assumptions will apply, or that the future data will have the same treatment effect as that estimated from the current 'trend' in the data. Each such assumption will yield a different conditional power value. The assessment of futility is described in terms of the observed quantities alone, specifically the interim Z-value or the interim confidence limit on the magnitude of the treatment effect, such that specified type I and II error probabilities are achieved. No assumption is required regarding the distribution of the future data yet to be observed. Lachin [1] presents a review of futility stopping based on assessment of conditional power and evaluates the statistical properties of a futility stopping rule. These methods are adapted to futility stopping using only the observed data without any assumption about the future data yet to be observed. The statistical properties of the futility monitoring plan depend specifically on the corresponding boundary value for the interim Z-value. These include the probability of interim stopping under the null or under a specific alternative hypothesis, and the resulting type I and II error probabilities. Thus, the stopping rule can be uniquely specified in terms of a boundary for the interim Z-value. Alternately, the stopping rule can be specified in terms of a boundary on the upper confidence limit for the treatment group effect (favoring treatment). Herein it is shown that this approach is equivalent to a boundary on the test Z-value, from which
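
    As a hedged illustration only (not code from the paper): under the canonical group-sequential model the interim and final Z-statistics are bivariate normal with correlation equal to the square root of the information fraction, so a futility boundary placed directly on the interim Z-value fixes the stopping probabilities and the resulting type I and II error probabilities without any assumption about the future data. The function name, the chosen boundary, and the example drift value below are illustrative assumptions.

        # Illustrative sketch: operating characteristics of the rule
        # "stop for futility if the interim Z-value falls below b" at
        # information fraction t, assuming (Z_t, Z_1) bivariate normal with
        # Corr(Z_t, Z_1) = sqrt(t), E[Z_t] = delta*sqrt(t), E[Z_1] = delta.
        from scipy.stats import norm, multivariate_normal

        def futility_operating_characteristics(b, t, delta, z_alpha=1.96):
            """b: futility boundary on the interim Z; t: information fraction;
            delta: expected final Z (0 under H0, >0 under the design alternative)."""
            mean = [delta * t ** 0.5, delta]
            cov = [[1.0, t ** 0.5], [t ** 0.5, 1.0]]
            joint = multivariate_normal(mean=mean, cov=cov)
            p_stop = norm.cdf(b - delta * t ** 0.5)            # P(futility stop)
            # P(continue past the interim look and reject H0 at the end):
            # P(Z_t >= b, Z_1 >= z_alpha) by inclusion-exclusion.
            p_reject = (1.0 - norm.cdf(b - delta * t ** 0.5)
                        - norm.cdf(z_alpha - delta)
                        + joint.cdf([b, z_alpha]))
            return {"P(stop early)": p_stop, "P(reject H0)": p_reject}

        # Under H0, "P(reject H0)" is the (reduced) type I error; under the design
        # alternative, 1 - "P(reject H0)" is the total type II error of the plan.
        print(futility_operating_characteristics(b=0.0, t=0.5, delta=0.0))
        print(futility_operating_characteristics(b=0.0, t=0.5, delta=2.80))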

  20. Randomized Trial of a Computerized Touch Screen Decision Aid to Increase Acceptance of Colonoscopy Screening in an African American Population with Limited Literacy.

    PubMed

    Ruzek, Sheryl B; Bass, Sarah Bauerle; Greener, Judith; Wolak, Caitlin; Gordon, Thomas F

    2016-10-01

    The goal of this study was to assess the effectiveness of a touch screen decision aid to increase acceptance of colonoscopy screening among African American patients with low literacy, developed and tailored using perceptual mapping methods grounded in Illness Self-Regulation and Information-Communication Theories. The pilot randomized controlled trial investigated the effects of a theory-based intervention on patients' acceptance of screening, including their perceptions of educational value, feelings about colonoscopy, likelihood to undergo screening, and decisional conflict about colonoscopy screening. Sixty-one African American patients with low literacy, aged 50-70 years, with no history of colonoscopy, were randomly assigned to receive a computerized touch screen decision aid (CDA; n = 33) or a literacy-appropriate print tool (PT; n = 28) immediately before a primary care appointment in an urban, university-affiliated general internal medicine clinic. Patients rated the CDA significantly higher than the PT on all indicators of acceptance, including the helpfulness of the information for making a screening decision, and reported positive feelings about colonoscopy, greater likelihood to be screened, and lower decisional conflict. Results showed that a touch screen decision tool is acceptable to African American patients with low literacy and, by increasing intent to screen, may increase rates of colonoscopy screening.

  1. 13 CFR 124.504 - What circumstances limit SBA's ability to accept a procurement for award as an 8(a) contract?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... impact on an individual small business, SBA will consider all relevant factors. (i) In connection with a... impact on other small business programs, SBA will consider all relevant factors, including but not...) procedures. (c) Adverse impact. SBA has made a written determination that acceptance of the procurement for...

  2. Individual Bayesian Information Matrix for Predicting Estimation Error and Shrinkage of Individual Parameters Accounting for Data Below the Limit of Quantification.

    PubMed

    Nguyen, Thi Huyen Tram; Nguyen, Thu Thuy; Mentré, France

    2017-06-28

    In mixed models, the relative standard errors (RSE) and shrinkage of individual parameters can be predicted from the individual Bayesian information matrix (M_BF). We proposed an approach accounting for data below the limit of quantification (LOQ) in M_BF. M_BF is the sum of the expectation of the individual Fisher information (M_IF), which can be evaluated by first-order linearization, and the inverse of the random-effect variance. We expressed the individual information as a weighted sum of the predicted M_IF over every possible design composed of measurements above and/or below the LOQ. When evaluating M_IF, we derived the likelihood as the product of the likelihood of the observed data and the probability for data to be below the LOQ. The relevance of the RSE and shrinkage predicted by M_BF in the absence or presence of data below the LOQ was evaluated by simulations, using a pharmacokinetic/viral kinetic model defined by differential equations. Simulations showed good agreement between predicted and observed RSE and shrinkage in the absence or presence of data below the LOQ. We found that RSE and shrinkage increased with sparser designs and with data below the LOQ. The proposed method based on M_BF adequately predicted individual RSE and shrinkage, allowing evaluation of a large number of scenarios without extensive simulations.
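
    A minimal notational sketch of the relations described above (our transcription, not the authors' exact notation; the shrinkage expression assumes a diagonal random-effect covariance and is one common prediction rather than a formula quoted from the paper):

        \begin{align*}
          M_{BF} &= \mathbb{E}\left[M_{IF}\right] + \Omega^{-1}, &
          \mathbb{E}\left[M_{IF}\right] &= \sum_{d} \Pr(d)\, M_{IF}(d),\\
          \mathrm{RSE}(\psi_{k}) &\approx \frac{\sqrt{\left[M_{BF}^{-1}\right]_{kk}}}{\psi_{k}}, &
          \mathrm{Sh}_{k} &\approx \frac{\left[M_{BF}^{-1}\right]_{kk}}{\omega_{k}^{2}},
        \end{align*}

    where the sum runs over the possible designs $d$ of observations above and/or below the LOQ, each weighted by its probability, and the likelihood entering $M_{IF}(d)$ factors into the likelihood of the observed data times the probability of being below the LOQ.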

  3. Uncertainty quantification and error analysis

    SciTech Connect

    Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  4. [To consider negative viral loads below the limit of quantification can lead to errors in the diagnosis and treatment of hepatitis C virus infection].

    PubMed

    Acero Fernández, Doroteo; Ferri Iglesias, María José; López Nuñez, Carme; Louvrie Freire, René; Aldeguer Manté, Xavier

    2013-01-01

    For years many clinical laboratories have routinely classified undetectable and unquantifiable levels of hepatitis C virus RNA (HCV-RNA) determined by RT-PCR as below the limit of quantification (BLOQ). This practice might result in erroneous clinical decisions. Our aim was to assess the frequency and clinical relevance of assuming that samples that are BLOQ are negative. We performed a retrospective analysis of RNA determinations performed between 2009 and 2011 (Cobas/Taqman, lower LOQ: 15 IU/ml). We distinguished between samples classified as «undetectable» and those classified as «<1.50E+01IU/mL» (BLOQ). We analyzed 2,432 HCV-RNA measurements in 1,371 patients. RNA was BLOQ in 26 samples (1.07%) from 23 patients (1.68%). BLOQ results were highly prevalent among patients receiving Peg-Riba: 23 of 216 samples (10.6%) from 20 of 88 patients receiving treatment (22.7%). The clinical impact of BLOQ RNA samples was as follows: a) 2 patients initially considered to have negative results subsequently showed quantifiable RNA; b) 8 of 9 patients (88.9%) with BLOQ RNA at week 4 of treatment later showed sustained viral response; c) 3 patients with BLOQ RNA at weeks 12 and 48 of treatment relapsed; d) 4 patients with BLOQ RNA at week 24 and/or later had partial or breakthrough treatment responses, and e) in 5 patients the impact was null or could not be ascertained. This study suggests that BLOQ HCV-RNA indicates viremia and that equating a BLOQ result with a negative result can lead to treatment errors. BLOQ results are highly prevalent in on-treatment patients. The results of HCV-RNA quantification should be classified clearly, distinguishing between undetectable levels and levels that are BLOQ. Copyright © 2013 Elsevier España, S.L. and AEEH y AEG. All rights reserved.

  5. The Werner syndrome protein limits the error-prone 8-oxo-dG lesion bypass activity of human DNA polymerase kappa.

    PubMed

    Maddukuri, Leena; Ketkar, Amit; Eddy, Sarah; Zafar, Maroof K; Eoff, Robert L

    2014-10-29

    Human DNA polymerase kappa (hpol κ) is the only Y-family member to preferentially insert dAMP opposite 7,8-dihydro-8-oxo-2'-deoxyguanosine (8-oxo-dG) during translesion DNA synthesis. We have studied the mechanism of action by which hpol κ activity is modulated by the Werner syndrome protein (WRN), a RecQ helicase known to influence repair of 8-oxo-dG. Here we show that WRN stimulates the 8-oxo-dG bypass activity of hpol κ in vitro by enhancing the correct base insertion opposite the lesion, as well as extension from dC:8-oxo-dG base pairs. Steady-state kinetic analysis reveals that WRN improves hpol κ-catalyzed dCMP insertion opposite 8-oxo-dG ∼10-fold and extension from dC:8-oxo-dG by 2.4-fold. Stimulation is primarily due to an increase in the rate constant for polymerization (kpol), as assessed by pre-steady-state kinetics, and it requires the RecQ C-terminal (RQC) domain. In support of the functional data, recombinant WRN and hpol κ were found to physically interact through the exo and RQC domains of WRN, and co-localization of WRN and hpol κ was observed in human cells treated with hydrogen peroxide. Thus, WRN limits the error-prone bypass of 8-oxo-dG by hpol κ, which could influence the sensitivity to oxidative damage that has previously been observed for Werner's syndrome cells.

  6. The Werner syndrome protein limits the error-prone 8-oxo-dG lesion bypass activity of human DNA polymerase kappa

    PubMed Central

    Maddukuri, Leena; Ketkar, Amit; Eddy, Sarah; Zafar, Maroof K.; Eoff, Robert L.

    2014-01-01

    Human DNA polymerase kappa (hpol κ) is the only Y-family member to preferentially insert dAMP opposite 7,8-dihydro-8-oxo-2′-deoxyguanosine (8-oxo-dG) during translesion DNA synthesis. We have studied the mechanism of action by which hpol κ activity is modulated by the Werner syndrome protein (WRN), a RecQ helicase known to influence repair of 8-oxo-dG. Here we show that WRN stimulates the 8-oxo-dG bypass activity of hpol κ in vitro by enhancing the correct base insertion opposite the lesion, as well as extension from dC:8-oxo-dG base pairs. Steady-state kinetic analysis reveals that WRN improves hpol κ-catalyzed dCMP insertion opposite 8-oxo-dG ∼10-fold and extension from dC:8-oxo-dG by 2.4-fold. Stimulation is primarily due to an increase in the rate constant for polymerization (kpol), as assessed by pre-steady-state kinetics, and it requires the RecQ C-terminal (RQC) domain. In support of the functional data, recombinant WRN and hpol κ were found to physically interact through the exo and RQC domains of WRN, and co-localization of WRN and hpol κ was observed in human cells treated with hydrogen peroxide. Thus, WRN limits the error-prone bypass of 8-oxo-dG by hpol κ, which could influence the sensitivity to oxidative damage that has previously been observed for Werner's syndrome cells. PMID:25294835

  7. Limits of Expertise: Rethinking Pilot Error and the Causes of Airline Accidents. CRM/HF Conference, Held in Denver, Colorado on April 16-17, 2006

    NASA Technical Reports Server (NTRS)

    Dismukes, Key; Berman, Ben; Loukopoulos, Loukisa

    2007-01-01

    Reviewed NTSB reports of the 19 U.S. airline accidents between 1991 and 2000 attributed primarily to crew error. Asked: why might any airline crew in the situation of the accident crew--knowing only what they knew--have been vulnerable? We can never know with certainty why an accident crew made specific errors, but we can determine why the population of pilots is vulnerable. Considers the variability of expert performance as a function of the interplay of multiple factors.

  8. Limits of Expertise: Rethinking Pilot Error and the Causes of Airline Accidents. CRM/HF Conference, Held in Denver, Colorado on April 16-17, 2006

    NASA Technical Reports Server (NTRS)

    Dismukes, Key; Berman, Ben; Loukopoulos, Loukisa

    2007-01-01

    Reviewed NTSB reports of the 19 U.S. airline accidents between 1991 and 2000 attributed primarily to crew error. Asked: why might any airline crew in the situation of the accident crew--knowing only what they knew--have been vulnerable? We can never know with certainty why an accident crew made specific errors, but we can determine why the population of pilots is vulnerable. Considers the variability of expert performance as a function of the interplay of multiple factors.

  9. Refractive Errors

    MedlinePlus

    ... and lens of your eye helps you focus. Refractive errors are vision problems that happen when the shape ... cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close up ...

  10. ALTIMETER ERRORS,

    DTIC Science & Technology

    CIVIL AVIATION, *ALTIMETERS, FLIGHT INSTRUMENTS, RELIABILITY, ERRORS, PERFORMANCE(ENGINEERING), BAROMETERS, BAROMETRIC PRESSURE, ATMOSPHERIC TEMPERATURE, ALTITUDE, CORRECTIONS, AVIATION SAFETY, USSR.

  11. UGV acceptance testing

    NASA Astrophysics Data System (ADS)

    Kramer, Jeffrey A.; Murphy, Robin R.

    2006-05-01

    With over 100 models of unmanned vehicles now available for military and civilian safety, security or rescue applications, it is important for agencies to establish acceptance testing. However, there appear to be no general guidelines for what constitutes a reasonable acceptance test. This paper describes i) a preliminary method for acceptance testing by a customer of the mechanical and electrical components of an unmanned ground vehicle system, ii) how it has been applied to a man-packable micro-robot, and iii) the value of testing both to ensure that the customer has a workable system and to improve design. The test method automated the operation of the robot to repeatedly exercise all aspects and combinations of components on the robot for 6 hours. The acceptance testing process uncovered many failures consistent with those shown to occur in the field, showing that testing by the user does predict failures. The process also demonstrated that testing by the manufacturer can provide important design data that can be used to identify, diagnose, and prevent long-term problems. Also, the structured testing environment showed that sensor systems can be used to predict errors and changes in performance, as well as to uncover unmodeled behavior in subsystems.

  12. Medication Errors

    MedlinePlus

    ... address broader product safety issues. FDA Drug Safety Communications for Drug Products Associated with Medication Errors FDA Drug Safety Communication: FDA approves brand name change for antidepressant drug ...

  13. Errors in general practice: development of an error classification and pilot study of a method for detecting errors

    PubMed Central

    Rubin, G; George, A; Chinn, D; Richardson, C

    2003-01-01

    Objective: To describe a classification of errors and to assess the feasibility and acceptability of a method for recording staff reported errors in general practice. Design: An iterative process in a pilot practice was used to develop a classification of errors. This was incorporated in an anonymous self-report form which was then used to collect information on errors during June 2002. The acceptability of the reporting process was assessed using a self-completion questionnaire. Setting: UK general practice. Participants: Ten general practices in the North East of England. Main outcome measures: Classification of errors, frequency of errors, error rates per 1000 appointments, acceptability of the process to participants. Results: 101 events were used to create an initial error classification. This contained six categories: prescriptions, communication, appointments, equipment, clinical care, and "other" errors. Subsequently, 940 errors were recorded in a single 2 week period from 10 practices, providing additional information. 42% (397/940) were related to prescriptions, although only 6% (22/397) of these were medication errors. Communication errors accounted for 30% (282/940) of errors and clinical errors 3% (24/940). The overall error rate was 75.6/1000 appointments (95% CI 71 to 80). The method of error reporting was found to be acceptable by 68% (36/53) of respondents with only 8% (4/53) finding the process threatening. Conclusion: We have developed a classification of errors and described a practical and acceptable method for reporting them that can be used as part of the process of risk management. Errors are common and, although all have the potential to lead to an adverse event, most are administrative. PMID:14645760
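
    A back-of-envelope check of the reported error rate (a sketch with an assumed denominator: the number of appointments is not given in the abstract, so roughly 12,430 appointments is inferred from 940 errors at 75.6 per 1000):

        # Error rate per 1000 appointments and an approximate 95% CI
        # (Poisson-normal approximation; denominator assumed, see note above).
        import math

        errors, appointments = 940, 12430
        rate = 1000 * errors / appointments
        half_width = 1.96 * 1000 * math.sqrt(errors) / appointments
        print(f"rate = {rate:.1f}/1000, 95% CI {rate - half_width:.0f} to {rate + half_width:.0f}")

    With these assumed inputs the calculation reproduces the quoted 75.6/1000 (95% CI 71 to 80).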

  14. Freeform solar concentrator with a highly asymmetric acceptance cone

    NASA Astrophysics Data System (ADS)

    Wheelwright, Brian; Angel, J. Roger P.; Coughenour, Blake; Hammer, Kimberly

    2014-10-01

    A solar concentrator with a highly asymmetric acceptance cone is investigated. Concentrating photovoltaic systems require dual-axis sun tracking to maintain nominal concentration throughout the day. In addition to collecting direct rays from the solar disk, which subtends ~0.53 degrees, concentrating optics must allow for in-field tracking errors due to mechanical misalignment of the module, wind loading, and control loop biases. The angular range over which the concentrator maintains <90% of on-axis throughput is defined as the optical acceptance angle. Concentrators with substantial rotational symmetry likewise exhibit rotationally symmetric acceptance angles. In the field, this is sometimes a poor match with azimuth-elevation trackers, which have inherently asymmetric tracking performance. Pedestal-mounted trackers with low torsional stiffness about the vertical axis have better elevation tracking than azimuthal tracking. Conversely, trackers which rotate on large-footprint circular tracks are often limited by elevation tracking performance. We show that a line-focus concentrator, composed of a parabolic trough primary reflector and freeform refractive secondary, can be tailored to have a highly asymmetric acceptance angle. The design is suitable for a tracker with excellent tracking accuracy in the elevation direction, and poor accuracy in the azimuthal direction. In the 1000X design given, when trough optical errors (2mrad rms slope deviation) are accounted for, the azimuthal acceptance angle is +/- 1.65°, while the elevation acceptance angle is only +/-0.29°. This acceptance angle does not include the angular width of the sun, which consumes nearly all of the elevation tolerance at this concentration level. By decreasing the average concentration, the elevation acceptance angle can be increased. This is well-suited for a pedestal alt-azimuth tracker with a low cost slew bearing (without anti-backlash features).

  15. Acceptance speech.

    PubMed

    Yusuf, C K

    1994-01-01

    I am proud and honored to accept this award on behalf of the Government of Bangladesh, and the millions of Bangladeshi children saved by oral rehydration solution. The Government of Bangladesh is grateful for this recognition of its commitment to international health and population research and cost-effective health care for all. The Government of Bangladesh has already made remarkable strides forward in the health and population sector, and this was recognized in UNICEF's 1993 "State of the World's Children". The national contraceptive prevalence rate, at 40%, is higher than that of many developed countries. It is appropriate that Bangladesh, where ORS was discovered, has the largest ORS production capacity in the world. It was remarkable that after the devastating cyclone in 1991, the country was able to produce enough ORS to meet the needs and remain self-sufficient. Similarly, Bangladesh has one of the most effective, flexible and efficient control of diarrheal disease and epidemic response program in the world. Through the country, doctors have been trained in diarrheal disease management, and stores of ORS are maintained ready for any outbreak. Despite grim predictions after the 1991 cyclone and the 1993 floods, relatively few people died from diarrheal disease. This is indicative of the strength of the national program. I want to take this opportunity to acknowledge the contribution of ICDDR, B and the important role it plays in supporting the Government's efforts in the health and population sector. The partnership between the Government of Bangladesh and ICDDR, B has already borne great fruit, and I hope and believe that it will continue to do so for many years in the future. Thank you.

  16. Acceptance of tinnitus: validation of the tinnitus acceptance questionnaire.

    PubMed

    Weise, Cornelia; Kleinstäuber, Maria; Hesser, Hugo; Westin, Vendela Zetterqvist; Andersson, Gerhard

    2013-01-01

    The concept of acceptance has recently received growing attention within tinnitus research due to the fact that tinnitus acceptance is one of the major targets of psychotherapeutic treatments. Accordingly, acceptance-based treatments will most likely be increasingly offered to tinnitus patients and assessments of acceptance-related behaviours will thus be needed. The current study investigated the factorial structure of the Tinnitus Acceptance Questionnaire (TAQ) and the role of tinnitus acceptance as mediating link between sound perception (i.e. subjective loudness of tinnitus) and tinnitus distress. In total, 424 patients with chronic tinnitus completed the TAQ and validated measures of tinnitus distress, anxiety, and depression online. Confirmatory factor analysis provided support to a good fit of the data to the hypothesised bifactor model (root-mean-square-error of approximation = .065; Comparative Fit Index = .974; Tucker-Lewis Index = .958; standardised root mean square residual = .032). In addition, mediation analysis, using a non-parametric joint coefficient approach, revealed that tinnitus-specific acceptance partially mediated the relation between subjective tinnitus loudness and tinnitus distress (path ab = 5.96; 95% CI: 4.49, 7.69). In a multiple mediator model, tinnitus acceptance had a significantly stronger indirect effect than anxiety. The results confirm the factorial structure of the TAQ and suggest the importance of a general acceptance factor that contributes important unique variance beyond that of the first-order factors activity engagement and tinnitus suppression. Tinnitus acceptance as measured with the TAQ is proposed to be a key construct in tinnitus research and should be further implemented into treatment concepts to reduce tinnitus distress.

  17. Development of an iterative reconstruction method to overcome 2D detector low resolution limitations in MLC leaf position error detection for 3D dose verification in IMRT.

    PubMed

    Visser, R; Godart, J; Wauben, D J L; Langendijk, J A; Van't Veld, A A; Korevaar, E W

    2016-05-21

    The objective of this study was to introduce a new iterative method to reconstruct multi leaf collimator (MLC) positions based on low resolution ionization detector array measurements and to evaluate its error detection performance. The iterative reconstruction method consists of a fluence model, a detector model and an optimizer. Expected detector response was calculated using a radiotherapy treatment plan in combination with the fluence model and detector model. MLC leaf positions were reconstructed by minimizing differences between expected and measured detector response. The iterative reconstruction method was evaluated for an Elekta SLi with 10.0 mm MLC leafs in combination with the COMPASS system and the MatriXX Evolution (IBA Dosimetry) detector with a spacing of 7.62 mm. The detector was positioned in such a way that each leaf pair of the MLC was aligned with one row of ionization chambers. Known leaf displacements were introduced in various field geometries ranging from  -10.0 mm to 10.0 mm. Error detection performance was tested for MLC leaf position dependency relative to the detector position, gantry angle dependency, monitor unit dependency, and for ten clinical intensity modulated radiotherapy (IMRT) treatment beams. For one clinical head and neck IMRT treatment beam, influence of the iterative reconstruction method on existing 3D dose reconstruction artifacts was evaluated. The described iterative reconstruction method was capable of individual MLC leaf position reconstruction with millimeter accuracy, independent of the relative detector position within the range of clinically applied MU's for IMRT. Dose reconstruction artifacts in a clinical IMRT treatment beam were considerably reduced as compared to the current dose verification procedure. The iterative reconstruction method allows high accuracy 3D dose verification by including actual MLC leaf positions reconstructed from low resolution 2D measurements.

  18. Development of an iterative reconstruction method to overcome 2D detector low resolution limitations in MLC leaf position error detection for 3D dose verification in IMRT

    NASA Astrophysics Data System (ADS)

    Visser, R.; Godart, J.; Wauben, D. J. L.; Langendijk, J. A.; van't Veld, A. A.; Korevaar, E. W.

    2016-05-01

    The objective of this study was to introduce a new iterative method to reconstruct multi leaf collimator (MLC) positions based on low resolution ionization detector array measurements and to evaluate its error detection performance. The iterative reconstruction method consists of a fluence model, a detector model and an optimizer. Expected detector response was calculated using a radiotherapy treatment plan in combination with the fluence model and detector model. MLC leaf positions were reconstructed by minimizing differences between expected and measured detector response. The iterative reconstruction method was evaluated for an Elekta SLi with 10.0 mm MLC leafs in combination with the COMPASS system and the MatriXX Evolution (IBA Dosimetry) detector with a spacing of 7.62 mm. The detector was positioned in such a way that each leaf pair of the MLC was aligned with one row of ionization chambers. Known leaf displacements were introduced in various field geometries ranging from  -10.0 mm to 10.0 mm. Error detection performance was tested for MLC leaf position dependency relative to the detector position, gantry angle dependency, monitor unit dependency, and for ten clinical intensity modulated radiotherapy (IMRT) treatment beams. For one clinical head and neck IMRT treatment beam, influence of the iterative reconstruction method on existing 3D dose reconstruction artifacts was evaluated. The described iterative reconstruction method was capable of individual MLC leaf position reconstruction with millimeter accuracy, independent of the relative detector position within the range of clinically applied MU’s for IMRT. Dose reconstruction artifacts in a clinical IMRT treatment beam were considerably reduced as compared to the current dose verification procedure. The iterative reconstruction method allows high accuracy 3D dose verification by including actual MLC leaf positions reconstructed from low resolution 2D measurements.

  19. TU-C-BRE-08: IMRT QA: Selecting Meaningful Gamma Criteria Based On Error Detection Sensitivity

    SciTech Connect

    Steers, J; Fraass, B

    2014-06-15

    Purpose: To develop a strategy for defining meaningful tolerance limits and studying the sensitivity of IMRT QA gamma criteria by inducing known errors in QA plans. Methods: IMRT QA measurements (ArcCHECK, Sun Nuclear) were compared to QA plan calculations with induced errors. Many (>24) gamma comparisons between data and calculations were performed for each of several kinds of cases and classes of induced error types with varying magnitudes (e.g. MU errors ranging from -10% to +10%), resulting in over 3,000 comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using various gamma criteria. Results: This study demonstrates that random, case-specific, and systematic errors can be detected by the error curve analysis. Depending on location of the peak of the error curve (e.g., not centered about zero), 3%/3mm threshold=10% criteria may miss MU errors of up to 10% and random MLC errors of up to 5 mm. Additionally, using larger dose thresholds for specific devices may increase error sensitivity (for the same X%/Ymm criteria) by up to a factor of two. This analysis will allow clinics to select more meaningful gamma criteria based on QA device, treatment techniques, and acceptable error tolerances. Conclusion: We propose a strategy for selecting gamma parameters based on the sensitivity of gamma criteria and individual QA devices to induced calculation errors in QA plans. Our data suggest large errors may be missed using conventional gamma criteria and that using stricter criteria with an increased dose threshold may reduce the range of missed errors. This approach allows quantification of gamma criteria sensitivity and is straightforward to apply to other combinations of devices and treatment techniques.
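
    A minimal sketch of the error-curve readout described above, using hypothetical numbers: gamma passing rates measured for plans with known induced MU errors are compared against the clinic's passing-rate tolerance, and any induced error whose passing rate stays above the tolerance would go undetected by that criterion.

        # Hypothetical error curve: gamma passing rate vs. induced MU error.
        import numpy as np

        error_magnitude = np.array([-10, -7.5, -5, -2.5, 0, 2.5, 5, 7.5, 10])  # % MU error
        passing_rate = np.array([62, 78, 90, 97, 99, 98, 93, 84, 70])          # % gamma pass

        tolerance = 90.0  # action threshold on the passing rate

        # Induced errors whose passing rate is still >= tolerance would not be
        # flagged by the QA criterion, i.e. they lie in the "missed error" range.
        undetected = error_magnitude[passing_rate >= tolerance]
        print(f"MU errors missed by a {tolerance:.0f}% threshold: "
              f"{undetected.min():+.1f}% to {undetected.max():+.1f}%")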

  20. Ultimate limits to error probabilities for ionospheric models based on solar geophysical indices and how these compare with the state of the art

    NASA Technical Reports Server (NTRS)

    Nisbet, J. S.; Stehle, C. G.

    1981-01-01

    An ideal model based on a given set of geophysical indices is defined as a model that provides a least squares fit to the data set as a function of the indices considered. Satellite measurements of electron content for three stations at different magnetic latitudes were used to provide such data sets which were each fitted to the geophysical indices. The magnitude of the difference between the measured value and the derived equation for the data set was used to estimate the probability of making an error greater than a given magnitude for such an ideal model. Atmospheric Explorer C data is used to examine the causes of the fluctuations and suggestions are made about how real improvements can be made in ionospheric forecasting ability. Joule heating inputs in the auroral electrojets are related to the AL and AU magnetic indices. Magnetic indices based on the time integral of the energy deposited in the electrojets are proposed for modeling processes affected by auroral zone heating.

  1. Error detection in anatomic pathology.

    PubMed

    Zarbo, Richard J; Meier, Frederick A; Raab, Stephen S

    2005-10-01

    To define the magnitude of error occurring in anatomic pathology, to propose a scheme to classify such errors so their influence on clinical outcomes can be evaluated, and to identify quality assurance procedures able to reduce the frequency of errors. (a) Peer-reviewed literature search via PubMed for studies from single institutions and multi-institutional College of American Pathologists Q-Probes studies of anatomic pathology error detection and prevention practices; (b) structured evaluation of defects in surgical pathology reports uncovered in the Department of Pathology and Laboratory Medicine of the Henry Ford Health System in 2001-2003, using a newly validated error taxonomy scheme; and (c) comparative review of anatomic pathology quality assurance procedures proposed to reduce error. Marked differences in both definitions of error and pathology practice make comparison of error detection and prevention procedures among publications from individual institutions impossible. Q-Probes studies further suggest that observer redundancy reduces diagnostic variation and interpretive error, which ranges from 1.2 to 50 errors per 1000 cases; however, it is unclear which forms of such redundancy are the most efficient in uncovering diagnostic error. The proposed error taxonomy tested has shown a very good interobserver agreement of 91.4% (kappa = 0.8780; 95% confidence limit, 0.8416-0.9144), when applied to amended reports, and suggests a distribution of errors among identification, specimen, interpretation, and reporting variables. Presently, there are no standardized tools for defining error in anatomic pathology, so it cannot be reliably measured nor can its clinical impact be assessed. The authors propose a standardized error classification that would permit measurement of error frequencies, clinical impact of errors, and the effect of error reduction and prevention efforts. In particular, the value of double-reading, case conferences, and consultations (the

  2. Thermodynamics of Error Correction

    NASA Astrophysics Data System (ADS)

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  3. Uncorrected refractive errors

    PubMed Central

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship. PMID:22944755

  4. Uncorrected refractive errors.

    PubMed

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  5. IWGT report on quantitative approaches to genotoxicity risk assessment II. Use of point-of-departure (PoD) metrics in defining acceptable exposure limits and assessing human risk.

    PubMed

    MacGregor, James T; Frötschl, Roland; White, Paul A; Crump, Kenny S; Eastmond, David A; Fukushima, Shoji; Guérard, Melanie; Hayashi, Makoto; Soeteman-Hernández, Lya G; Johnson, George E; Kasamatsu, Toshio; Levy, Dan D; Morita, Takeshi; Müller, Lutz; Schoeny, Rita; Schuler, Maik J; Thybaud, Véronique

    2015-05-01

    This is the second of two reports from the International Workshops on Genotoxicity Testing (IWGT) Working Group on Quantitative Approaches to Genetic Toxicology Risk Assessment (the QWG). The first report summarized the discussions and recommendations of the QWG related to the need for quantitative dose-response analysis of genetic toxicology data, the existence and appropriate evaluation of threshold responses, and methods to analyze exposure-response relationships and derive points of departure (PoDs) from which acceptable exposure levels could be determined. This report summarizes the QWG discussions and recommendations regarding appropriate approaches to evaluate exposure-related risks of genotoxic damage, including extrapolation below identified PoDs and across test systems and species. Recommendations include the selection of appropriate genetic endpoints and target tissues, uncertainty factors and extrapolation methods to be considered, the importance and use of information on mode of action, toxicokinetics, metabolism, and exposure biomarkers when using quantitative exposure-response data to determine acceptable exposure levels in human populations or to assess the risk associated with known or anticipated exposures. The empirical relationship between genetic damage (mutation and chromosomal aberration) and cancer in animal models was also examined. It was concluded that there is a general correlation between cancer induction and mutagenic and/or clastogenic damage for agents thought to act via a genotoxic mechanism, but that the correlation is limited due to an inadequate number of cases in which mutation and cancer can be compared at a sufficient number of doses in the same target tissues of the same species and strain exposed under directly comparable routes and experimental protocols. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  6. Smoothing error pitfalls

    NASA Astrophysics Data System (ADS)

    von Clarmann, T.

    2014-04-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by the so-called smoothing error. In this paper it is shown that the concept of the smoothing error is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state. The idea of a sufficiently fine sampling of this reference atmospheric state is untenable because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully talk about temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the involved a priori covariance matrix has been evaluated on the comparison grid rather than resulting from interpolation. This is because the undefined component of the smoothing error, which is the effect of smoothing implied by the finite grid on which the measurements are compared, cancels out when the difference is calculated.
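
    For reference, the conventional definition being critiqued here, in standard optimal-estimation notation with averaging kernel matrix $\mathbf{A}$ and a priori covariance $\mathbf{S}_a$ (our restatement, not a formula quoted from the paper), writes the smoothing error covariance as

        \[
          \mathbf{S}_{s} = (\mathbf{A}-\mathbf{I})\,\mathbf{S}_{a}\,(\mathbf{A}-\mathbf{I})^{T},
        \]

    and the paper's point is that $\mathbf{S}_a$, and hence $\mathbf{S}_s$, characterizes the state sampled on the retrieval grid rather than the true continuous atmospheric state.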

  7. Grazing function g and collimation angular acceptance

    SciTech Connect

    Peggs, S.G.; Previtali, V.

    2009-11-02

    The grazing function g is introduced - a synchrobetatron optical quantity that is analogous (and closely connected) to the Twiss and dispersion functions {beta}, {alpha}, {eta}, and {eta}'. It parametrizes the rate of change of total angle with respect to synchrotron amplitude for grazing particles, which just touch the surface of an aperture when their synchrotron and betatron oscillations are simultaneously (in time) at their extreme displacements. The grazing function can be important at collimators with limited acceptance angles. For example, it is important in both modes of crystal collimation operation - in channeling and in volume reflection. The grazing function is independent of the collimator type - crystal or amorphous - but can depend strongly on its azimuthal location. The rigorous synchrobetatron condition g = 0 is solved, by invoking the close connection between the grazing function and the slope of the normalized dispersion. Propagation of the grazing function is described, through drifts, dipoles, and quadrupoles. Analytic expressions are developed for g in perfectly matched periodic FODO cells, and in the presence of {beta} or {eta} error waves. These analytic approximations are shown to be, in general, in good agreement with realistic numerical examples. The grazing function is shown to scale linearly with FODO cell bend angle, but to be independent of FODO cell length. The ideal value is g = 0 at the collimator, but finite nonzero values are acceptable. Practically achievable grazing functions are described and evaluated, for both amorphous and crystal primary collimators, at RHIC, the SPS (UA9), the Tevatron (T-980), and the LHC.

  8. Error and Error Mitigation in Low-Coverage Genome Assemblies

    PubMed Central

    Hubisz, Melissa J.; Lin, Michael F.; Kellis, Manolis; Siepel, Adam

    2011-01-01

    The recent release of twenty-two new genome sequences has dramatically increased the data available for mammalian comparative genomics, but twenty of these new sequences are currently limited to ∼2× coverage. Here we examine the extent of sequencing error in these 2× assemblies, and its potential impact in downstream analyses. By comparing 2× assemblies with high-quality sequences from the ENCODE regions, we estimate the rate of sequencing error to be 1–4 errors per kilobase. While this error rate is fairly modest, sequencing error can still have surprising effects. For example, an apparent lineage-specific insertion in a coding region is more likely to reflect sequencing error than a true biological event, and the length distribution of coding indels is strongly distorted by error. We find that most errors are contributed by a small fraction of bases with low quality scores, in particular, by the ends of reads in regions of single-read coverage in the assembly. We explore several approaches for automatic sequencing error mitigation (SEM), making use of the localized nature of sequencing error, the fact that it is well predicted by quality scores, and information about errors that comes from comparisons across species. Our automatic methods for error mitigation cannot replace the need for additional sequencing, but they do allow substantial fractions of errors to be masked or eliminated at the cost of modest amounts of over-correction, and they can reduce the impact of error in downstream phylogenomic analyses. Our error-mitigated alignments are available for download. PMID:21340033

  9. Dialogues on prediction errors.

    PubMed

    Niv, Yael; Schoenbaum, Geoffrey

    2008-07-01

    The recognition that computational ideas from reinforcement learning are relevant to the study of neural circuits has taken the cognitive neuroscience community by storm. A central tenet of these models is that discrepancies between actual and expected outcomes can be used for learning. Neural correlates of such prediction-error signals have been observed now in midbrain dopaminergic neurons, striatum, amygdala and even prefrontal cortex, and models incorporating prediction errors have been invoked to explain complex phenomena such as the transition from goal-directed to habitual behavior. Yet, like any revolution, the fast-paced progress has left an uneven understanding in its wake. Here, we provide answers to ten simple questions about prediction errors, with the aim of exposing both the strengths and the limitations of this active area of neuroscience research.

  10. Definition of the limit of quantification in the presence of instrumental and non-instrumental errors. Comparison among various definitions applied to the calibration of zinc by inductively coupled plasma-mass spectrometry

    NASA Astrophysics Data System (ADS)

    Badocco, Denis; Lavagnini, Irma; Mondin, Andrea; Favaro, Gabriella; Pastore, Paolo

    2015-12-01

    A limit of quantification (LOQ) valid in the presence of instrumental and non-instrumental errors was proposed. It was defined theoretically by combining the two-component variance regression and LOQ schemas already present in the literature, and it was applied to the calibration of zinc by the ICP-MS technique. At low concentration levels, the two-component variance LOQ definition should always be used, above all when a clean room is not available. Three LOQ definitions were considered: one in the concentration domain and two in the signal domain. The LOQ computed in the concentration domain, proposed by Currie, was completed by adding the third-order terms in the Taylor expansion, because they are of the same order of magnitude as the second-order terms and so cannot be neglected. In this context, the error propagation was simplified by eliminating the correlation contributions through the use of independent random variables. Among the signal-domain definitions, particular attention was devoted to the recently proposed approach based on requiring at least one significant digit in the measurement; the corresponding relative LOQ values turned out to be very large, preventing quantitative analysis. It was found that the Currie schemas in the signal and concentration domains gave similar LOQ values, but the former formulation is to be preferred as more easily computable.
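
    As a hedged sketch of the two ingredients being combined (generic textbook forms in our own notation, not necessarily the authors' formulation): a two-component variance function and a Currie-type quantification criterion, e.g.

        \[
          \sigma^{2}(x) = \sigma_{0}^{2} + (k\,x)^{2},
          \qquad
          x_{Q} : \; \frac{\sigma(x_{Q})}{b\,x_{Q}} = \frac{1}{k_{Q}} \quad (k_{Q} = 10),
        \]

    where $\sigma_{0}$ is the additive (instrumental/blank) standard deviation, $k$ the proportional (non-instrumental) component, $b$ the calibration slope, and $x_{Q}$ the quantification limit, which is defined implicitly because the variance itself depends on concentration.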

  11. Correction of subtle refractive error in aviators.

    PubMed

    Rabin, J

    1996-02-01

    Optimal visual acuity is a requirement for piloting aircraft in military and civilian settings. While acuity can be corrected with glasses, spectacle wear can limit or even prohibit use of certain devices such as night vision goggles, helmet mounted displays, and/or chemical protective masks. Although current Army policy is directed toward selection of pilots who do not require spectacle correction for acceptable vision, refractive error can become manifest over time, making optical correction necessary. In such cases, contact lenses have been used quite successfully. Another approach is to neglect small amounts of refractive error, provided that vision is at least 20/20 without correction. This report describes visual findings in an aviator who was fitted with a contact lens to correct moderate astigmatism in one eye, while the other eye, with lesser refractive error, was left uncorrected. Advanced methods of testing visual resolution, including high and low contrast visual acuity and small letter contrast sensitivity, were used to compare vision achieved with full spectacle correction to that attained with the habitual, contact lens correction. Although the patient was pleased with his habitual correction, vision was significantly better with full spectacle correction, particularly on the small letter contrast test. Implications of these findings are considered.

  12. Errors in laboratory medicine: practical lessons to improve patient safety.

    PubMed

    Howanitz, Peter J

    2005-10-01

    Patient safety is influenced by the frequency and seriousness of errors that occur in the health care system. Error rates in laboratory practices are collected routinely for a variety of performance measures in all clinical pathology laboratories in the United States, but a list of critical performance measures has not yet been recommended. The most extensive databases describing error rates in pathology were developed and are maintained by the College of American Pathologists (CAP). These databases include the CAP's Q-Probes and Q-Tracks programs, which provide information on error rates from more than 130 interlaboratory studies. To define critical performance measures in laboratory medicine, describe error rates of these measures, and provide suggestions to decrease these errors, thereby ultimately improving patient safety. A review of experiences from Q-Probes and Q-Tracks studies supplemented with other studies cited in the literature. Q-Probes studies are carried out as time-limited studies lasting 1 to 4 months and have been conducted since 1989. In contrast, Q-Tracks investigations are ongoing studies performed on a yearly basis and have been conducted only since 1998. Participants from institutions throughout the world simultaneously conducted these studies according to specified scientific designs. The CAP has collected and summarized data for participants about these performance measures, including the significance of errors, the magnitude of error rates, tactics for error reduction, and willingness to implement each of these performance measures. A list of recommended performance measures, the frequency of errors when these performance measures were studied, and suggestions to improve patient safety by reducing these errors. Error rates for preanalytic and postanalytic performance measures were higher than for analytic measures. Eight performance measures were identified, including customer satisfaction, test turnaround times, patient identification

  13. Improved Error Thresholds for Measurement-Free Error Correction

    NASA Astrophysics Data System (ADS)

    Crow, Daniel; Joynt, Robert; Saffman, M.

    2016-09-01

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^{-3} to 10^{-4}—comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.

  14. Improved Error Thresholds for Measurement-Free Error Correction.

    PubMed

    Crow, Daniel; Joynt, Robert; Saffman, M

    2016-09-23

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^{-3} to 10^{-4}-comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.

  15. Acceptance criteria for urban dispersion model evaluation

    NASA Astrophysics Data System (ADS)

    Hanna, Steven; Chang, Joseph

    2012-05-01

    The authors suggested acceptance criteria for rural dispersion models' performance measures in this journal in 2004. The current paper suggests modified values of acceptance criteria for urban applications and tests them with tracer data from four urban field experiments. For the arc-maximum concentrations, the fractional bias should have a magnitude <0.67 (i.e., the relative mean bias is less than a factor of 2); the normalized mean-square error should be <6 (i.e., the random scatter is less than about 2.4 times the mean); and the fraction of predictions that are within a factor of two of the observations (FAC2) should be >0.3. For all data paired in space, for which a threshold concentration must always be defined, the normalized absolute difference should be <0.50, when the threshold is three times the instrument's limit of quantification (LOQ). An overall criterion is then applied that the total set of acceptance criteria should be satisfied in at least half of the field experiments. These acceptance criteria are applied to evaluations of the US Department of Defense's Joint Effects Model (JEM) with tracer data from US urban field experiments in Salt Lake City (U2000), Oklahoma City (JU2003), and Manhattan (MSG05 and MID05). JEM includes the SCIPUFF dispersion model with the urban canopy option and the urban dispersion model (UDM) option. In each set of evaluations, three or four likely options are tested for meteorological inputs (e.g., a local building top wind speed, the closest National Weather Service airport observations, or outputs from numerical weather prediction models). It is found that, due to large natural variability in the urban data, there is not a large difference between the performance measures for the two model options and the three or four meteorological input options. The more detailed UDM and the state-of-the-art numerical weather models do provide a slight improvement over the other options. The proposed urban dispersion model acceptance
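
    A short sketch of the performance measures named above, with the urban acceptance thresholds quoted in the abstract (|FB| < 0.67, NMSE < 6 and FAC2 > 0.3 for arc-maximum concentrations, NAD < 0.50 for data paired in space); the function name and packaging are ours:

        # Standard dispersion-model performance measures and the urban thresholds above.
        import numpy as np

        def urban_acceptance(obs, pred):
            obs, pred = np.asarray(obs, float), np.asarray(pred, float)
            fb = 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())
            nmse = ((obs - pred) ** 2).mean() / (obs.mean() * pred.mean())
            fac2 = np.mean((pred >= 0.5 * obs) & (pred <= 2.0 * obs))
            nad = np.abs(obs - pred).mean() / (obs.mean() + pred.mean())
            return {"FB": (fb, abs(fb) < 0.67),
                    "NMSE": (nmse, nmse < 6.0),
                    "FAC2": (fac2, fac2 > 0.3),
                    "NAD": (nad, nad < 0.50)}

        # Example with made-up observed/predicted arc-maximum concentrations:
        print(urban_acceptance(obs=[1.2, 0.8, 2.5, 0.4], pred=[0.9, 1.1, 1.8, 0.7]))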

  16. Error and its meaning in forensic science.

    PubMed

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes.

  17. Smoothing error pitfalls

    NASA Astrophysics Data System (ADS)

    von Clarmann, T.

    2014-09-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the

  18. The diffraction limit of an optical spectrum analyzer

    NASA Astrophysics Data System (ADS)

    Kolobrodov, V. G.; Tymchik, G. S.; Kolobrodov, M. S.

    2015-11-01

    This article examines a systematic error that occurs in optical spectrum analyzers and is caused by the Fresnel approximation. The aim of the article is to determine acceptable errors of spatial frequency measurement in the signal spectrum. The systematic error of spatial frequency measurement has been investigated on the basis of a physical and mathematical model of a coherent spectrum analyzer; it arises in the transition from light propagation in free space to Fresnel diffraction. Equations for calculating the absolute and relative measurement errors as functions of the diffraction angle have been obtained. These allow the limits of the spectral range to be determined for a given relative error of the spatial frequency measurement.
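
    As a rough numerical illustration (not the equations derived in the article), one can assume the paraxial mapping of spatial frequency, nu ≈ theta/lambda, and compare it with the exact relation nu = sin(theta)/lambda; the relative measurement error then grows roughly as theta^2/6 with diffraction angle. The wavelength is an arbitrary choice.

        import numpy as np

        wavelength = 632.8e-9                     # assumed He-Ne wavelength, m
        theta = np.radians(np.arange(1, 31))      # diffraction angles, 1..30 degrees
        nu_exact = np.sin(theta) / wavelength     # exact spatial frequency mapping
        nu_paraxial = theta / wavelength          # Fresnel (small-angle) approximation
        rel_error = (nu_paraxial - nu_exact) / nu_exact
        for deg, err in zip(np.degrees(theta)[::5], rel_error[::5]):
            print(f"theta = {deg:4.1f} deg   relative error = {err:.3%}")
        # A 1% relative-error budget limits the usable angular (hence spectral) range
        # to roughly 14 degrees under this assumed mapping.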

  19. Offer/Acceptance Ratio.

    ERIC Educational Resources Information Center

    Collins, Mimi

    1997-01-01

    Explores how human resource professionals, with above average offer/acceptance ratios, streamline their recruitment efforts. Profiles company strategies with internships, internal promotion, cooperative education programs, and how to get candidates to accept offers. Also discusses how to use the offer/acceptance ratio as a measure of program…

  20. Factors influencing alert acceptance: a novel approach for predicting the success of clinical decision support

    PubMed Central

    Seidling, Hanna M; Phansalkar, Shobha; Seger, Diane L; Paterno, Marilyn D; Shaykevich, Shimon; Haefeli, Walter E

    2011-01-01

    Background Clinical decision support systems can prevent knowledge-based prescription errors and improve patient outcomes. The clinical effectiveness of these systems, however, is substantially limited by poor user acceptance of presented warnings. To enhance alert acceptance it may be useful to quantify the impact of potential modulators of acceptance. Methods We built a logistic regression model to predict alert acceptance of drug–drug interaction (DDI) alerts in three different settings. Ten variables from the clinical and human factors literature were evaluated as potential modulators of provider alert acceptance. ORs were calculated for the impact of knowledge quality, alert display, textual information, prioritization, setting, patient age, dose-dependent toxicity, alert frequency, alert level, and required acknowledgment on acceptance of the DDI alert. Results 50 788 DDI alerts were analyzed. Providers accepted only 1.4% of non-interruptive alerts. For interruptive alerts, user acceptance positively correlated with frequency of the alert (OR 1.30, 95% CI 1.23 to 1.38), quality of display (4.75, 3.87 to 5.84), and alert level (1.74, 1.63 to 1.86). Alert acceptance was higher in inpatients (2.63, 2.32 to 2.97) and for drugs with dose-dependent toxicity (1.13, 1.07 to 1.21). The textual information influenced the mode of reaction and providers were more likely to modify the prescription if the message contained detailed advice on how to manage the DDI. Conclusion We evaluated potential modulators of alert acceptance by assessing content and human factors issues, and quantified the impact of a number of specific factors which influence alert acceptance. This information may help improve clinical decision support systems design. PMID:21571746
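
    A minimal sketch of the kind of logistic-regression model described, built with scikit-learn on a made-up alert log; the predictor names and values are illustrative stand-ins for the study's variables, and the exponentiated coefficients play the role of the reported odds ratios.

        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression

        # Hypothetical alert log: one row per DDI alert, accepted = 1 if the provider complied.
        alerts = pd.DataFrame({
            "interruptive":   [1, 1, 0, 1, 0, 1, 1, 0],
            "alert_level":    [3, 2, 1, 3, 1, 2, 3, 1],   # severity tier (illustrative)
            "inpatient":      [1, 0, 0, 1, 1, 0, 1, 0],
            "dose_dependent": [1, 1, 0, 0, 1, 0, 1, 0],
            "accepted":       [1, 0, 0, 1, 0, 0, 1, 0],
        })
        X, y = alerts.drop(columns="accepted"), alerts["accepted"]
        model = LogisticRegression().fit(X, y)

        # Odds ratios per predictor, analogous to the ORs reported in the study.
        print(pd.Series(np.exp(model.coef_[0]), index=X.columns))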

  1. Defining acceptable conditions in wilderness

    NASA Astrophysics Data System (ADS)

    Roggenbuck, J. W.; Williams, D. R.; Watson, A. E.

    1993-03-01

    The limits of acceptable change (LAC) planning framework recognizes that forest managers must decide what indicators of wilderness conditions best represent resource naturalness and high-quality visitor experiences and how much change from the pristine is acceptable for each indicator. Visitor opinions on the aspects of the wilderness that have great impact on their experience can provide valuable input to selection of indicators. Cohutta, Georgia; Caney Creek, Arkansas; Upland Island, Texas; and Rattlesnake, Montana, wilderness visitors have high shared agreement that littering and damage to trees in campsites, noise, and seeing wildlife are very important influences on wilderness experiences. Camping within sight or sound of other people influences experience quality more than do encounters on the trails. Visitors’ standards of acceptable conditions within wilderness vary considerably, suggesting a potential need to manage different zones within wilderness for different clientele groups and experiences. Standards across wildernesses, however, are remarkably similar.

  2. Manson's triple error.

    PubMed

    Delaporte, F

    2008-09-01

    The author discusses the significance, implications and limitations of Manson's work. How did Patrick Manson resolve some of the major problems raised by the filarial worm life cycle? The Amoy physician showed that circulating embryos could only leave the blood via the percutaneous route, thereby requiring a bloodsucking insect. The discovery of a new autonomous, airborne, active host undoubtedly had a considerable impact on the history of parasitology, but the way in which Manson formulated and solved the problem of the transfer of filarial worms from the body of the mosquito to man resulted in failure. This article shows how the epistemological transformation operated by Manson was indissociably related to a series of errors and how a major breakthrough can be the result of a series of false proposals and, consequently, that the history of truth often involves a history of error.

  3. Discretization errors in particle tracking

    NASA Astrophysics Data System (ADS)

    Carmon, G.; Mamman, N.; Feingold, M.

    2007-03-01

    High precision video tracking of microscopic particles is limited by systematic and random errors. Systematic errors are partly due to the discretization process both in position and in intensity. We study the behavior of such errors in a simple tracking algorithm designed for the case of symmetric particles. This symmetry algorithm uses interpolation to estimate the value of the intensity at arbitrary points in the image plane. We show that the discretization error is composed of two parts: (1) the error due to the discretization of the intensity, b_D, and (2) that due to interpolation, b_I. While b_D behaves asymptotically like N^{-1}, where N is the number of intensity gray levels, b_I is small when using cubic spline interpolation.
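
    A small numerical sketch (not the authors' symmetry algorithm) of the intensity-discretization part of the error: a noise-free Gaussian profile is quantized to N gray levels and the shift of a simple centroid position estimate is measured, which shrinks roughly like N^{-1} as the number of gray levels grows.

        import numpy as np

        def centroid_error(n_levels, true_center=10.37, width=2.0, size=32):
            # 1-D illustration: quantize the intensity to n_levels and locate the peak by centroid.
            x = np.arange(size)
            profile = np.exp(-0.5 * ((x - true_center) / width) ** 2)
            quantized = np.round(profile * (n_levels - 1)) / (n_levels - 1)
            return abs((x * quantized).sum() / quantized.sum() - true_center)

        for n in (16, 64, 256, 1024):
            print(f"N = {n:5d}   position error = {centroid_error(n):.2e} pixels")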

  4. Impulsive stabilization of a class of nonlinear system with bounded gain error

    NASA Astrophysics Data System (ADS)

    Ma, Tie-Dong; Zhao, Fei-Ya

    2014-12-01

    Considering mechanical limitation or device restriction in practical application, this paper investigates impulsive stabilization of nonlinear systems with impulsive gain error. Compared with the existing impulsive analytical approaches, the proposed impulsive control method is more practically applicable, which includes control gain error with an acceptable boundary. A sufficient criterion for global exponential stability of an impulsive control system is derived, which relaxes the condition for precise impulsive gain efficiently. The effectiveness of the proposed method is confirmed by theoretical analysis and numerical simulation based on Chua's circuit.

  5. Calculation of magnetic error fields in hybrid insertion devices

    NASA Astrophysics Data System (ADS)

    Savoy, R.; Halbach, K.; Hassenzahl, W.; Hoyer, E.; Humphries, D.; Kincaid, B.

    1990-05-01

    The Advanced Light Source (ALS) at the Lawrence Berkeley Laboratory requires insertion devices with fields sufficiently accurate to take advantage of the small emittance of the ALS electron beam. To maintain the spectral performance of the synchrotron radiation and to limit steering effects on the electron beam these errors must be smaller than 0.25%. This paper develops a procedure for calculating the steering error due to misalignment of the easy axis of the permanent-magnet material. The procedure is based on a three-dimensional theory of the design of hybrid insertion devices developed by one of us. The acceptable tolerance for easy axis misalignment is found for a 5-cm-period undulator proposed for the ALS.

  6. LIMS user acceptance testing.

    PubMed

    Klein, Corbett S

    2003-01-01

    Laboratory Information Management Systems (LIMS) play a key role in the pharmaceutical industry. Thorough and accurate validation of such systems is critical and is a regulatory requirement. LIMS user acceptance testing is one aspect of this testing and enables the user to make a decision to accept or reject implementation of the system. This paper discusses key elements in facilitating the development and execution of a LIMS User Acceptance Test Plan (UATP).

  7. Sun compass error model

    NASA Technical Reports Server (NTRS)

    Blucker, T. J.; Ferry, W. W.

    1971-01-01

    An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.

  8. Numerical Error Estimation with UQ

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Korn, Peter; Marotzke, Jochem

    2014-05-01

    Ocean models are still in need of means to quantify model errors, which are inevitably made when running numerical experiments. The total model error can formally be decomposed into two parts, the formulation error and the discretization error. The formulation error arises from the continuous formulation of the model not fully describing the studied physical process. The discretization error arises from having to solve a discretized model instead of the continuously formulated model. Our work on error estimation is concerned with the discretization error. Given a solution of a discretized model, our general problem statement is to find a way to quantify the uncertainties due to discretization in physical quantities of interest (diagnostics), which are frequently used in Geophysical Fluid Dynamics. The approach we use to tackle this problem is called the "Goal Error Ensemble method". The basic idea of the Goal Error Ensemble method is that errors in diagnostics can be translated into a weighted sum of local model errors, which makes it conceptually based on the Dual Weighted Residual method from Computational Fluid Dynamics. In contrast to the Dual Weighted Residual method these local model errors are not considered deterministically but interpreted as local model uncertainty and described stochastically by a random process. The parameters for the random process are tuned with high-resolution near-initial model information. However, the original Goal Error Ensemble method, introduced in [1], was successfully evaluated only in the case of inviscid flows without lateral boundaries in a shallow-water framework and is hence only of limited use in a numerical ocean model. Our work consists in extending the method to bounded, viscous flows in a shallow-water framework. As our numerical model, we use the ICON-Shallow-Water model. In viscous flows our high-resolution information is dependent on the viscosity parameter, making our uncertainty measures viscosity-dependent. We

  9. Irregular analytical errors in diagnostic testing - a novel concept.

    PubMed

    Vogeser, Michael; Seger, Christoph

    2017-09-13

    In laboratory medicine, routine periodic analyses for internal and external quality control measurements interpreted by statistical methods are mandatory for batch clearance. Data analysis of these process-oriented measurements allows for insight into random analytical variation and systematic calibration bias over time. However, in such a setting, any individual sample is not under individual quality control. The quality control measurements act only at the batch level. Quantitative or qualitative data derived for many effects and interferences associated with an individual diagnostic sample can compromise any analyte. It is obvious that a process for a quality-control-sample-based approach of quality assurance is not sensitive to such errors. To address the potential causes and nature of such analytical interference in individual samples more systematically, we suggest the introduction of a new term called the irregular (individual) analytical error. Practically, this term can be applied in any analytical assay that is traceable to a reference measurement system. For an individual sample an irregular analytical error is defined as an inaccuracy (which is the deviation from a reference measurement procedure result) of a test result that is so high it cannot be explained by measurement uncertainty of the utilized routine assay operating within the accepted limitations of the associated process quality control measurements. The deviation can be defined as the linear combination of the process measurement uncertainty and the method bias for the reference measurement system. Such errors should be coined irregular analytical errors of the individual sample. The measurement result is compromised either by an irregular effect associated with the individual composition (matrix) of the sample or an individual single sample associated processing error in the analytical process. Currently, the availability of reference measurement procedures is still highly limited, but LC
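
    A minimal sketch of how the definition could be turned into a flagging rule, assuming the acceptable deviation is the linear combination of the routine assay's expanded measurement uncertainty and its known bias against the reference measurement system; the coverage factor k is an assumption for illustration, not part of the proposal.

        def is_irregular_error(result, reference_value, u_routine, method_bias, k=2.0):
            # Flag an individual-sample result whose deviation from the reference
            # measurement procedure result exceeds what routine measurement uncertainty
            # (standard uncertainty u_routine, coverage factor k) plus the known method
            # bias can explain.
            allowed = k * u_routine + abs(method_bias)
            return abs(result - reference_value) > allowed

        # Example: reference 100 units, u = 3, bias = 2 -> deviations beyond 8 are irregular.
        print(is_irregular_error(112.0, 100.0, u_routine=3.0, method_bias=2.0))  # True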

  10. On Maximum FODO Acceptance

    SciTech Connect

    Batygin, Yuri Konstantinovich

    2014-12-24

    This note illustrates the maximum acceptance of a FODO quadrupole focusing channel. Acceptance is the largest Floquet ellipse of a matched beam: A = a^2/β_max, where a is the aperture of the channel and β_max is the largest value of the beta-function in the channel. If the aperture of the channel is restricted by a circle of radius a, this acceptance is available for particles oscillating in the median plane, y = 0. Particles outside the median plane will occupy a smaller phase space area. In the x-y plane, the cross section of the accepted beam has the shape of an ellipse with truncated boundaries.
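
    A one-line computation of the quoted acceptance formula, with illustrative numbers for the aperture and the maximum beta-function:

        def fodo_acceptance(aperture_radius, beta_max):
            # Largest matched Floquet ellipse: A = a^2 / beta_max
            return aperture_radius ** 2 / beta_max

        a, beta_max = 0.010, 2.5          # 10 mm aperture, beta_max = 2.5 m (assumed values)
        print(f"{fodo_acceptance(a, beta_max) * 1e6:.1f} mm*mrad")   # 40.0 mm*mrad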

  11. Wavefront error sensing

    NASA Technical Reports Server (NTRS)

    Tubbs, Eldred F.

    1986-01-01

    A two-step approach to wavefront sensing for the Large Deployable Reflector (LDR) was examined as part of an effort to define wavefront-sensing requirements and to determine particular areas for more detailed study. A Hartmann test for coarse alignment, particularly segment tilt, seems feasible if LDR can operate at 5 microns or less. The direct measurement of the point spread function in the diffraction limited region may be a way to determine piston error, but this can only be answered by a detailed software model of the optical system. The question of suitable astronomical sources for either test must also be addressed.

  12. Unforced errors and error reduction in tennis

    PubMed Central

    Brody, H

    2006-01-01

    Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors. PMID:16632568

  13. Unforced errors and error reduction in tennis.

    PubMed

    Brody, H

    2006-05-01

    Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors.

  14. [Statistical Process Control (SPC) can help prevent treatment errors without increasing costs in radiotherapy].

    PubMed

    Govindarajan, R; Llueguera, E; Melero, A; Molero, J; Soler, N; Rueda, C; Paradinas, C

    2010-01-01

    Statistical Process Control (SPC) was applied to monitor patient set-up in radiotherapy and, when the measured set-up error values indicated a loss of process stability, its root cause was identified and eliminated to prevent set-up errors. Set-up errors were measured for the medial-lateral (ml), cranial-caudal (cc) and anterior-posterior (ap) dimensions and then the upper control limits were calculated. Once the control limits were known and the range variability was acceptable, treatment set-up errors were monitored using sub-groups of 3 patients, three times each shift. These values were plotted on a control chart in real time. Control limit values showed that the existing variation was acceptable. Set-up errors, measured and plotted on an X chart, helped monitor the set-up process stability and, if and when the stability was lost, treatment was interrupted, the particular cause responsible for the non-random pattern was identified and corrective action was taken before proceeding with the treatment. The SPC protocol focuses on controlling the variability due to assignable causes instead of focusing on patient-to-patient variability, which normally does not exist. Compared to weekly sampling of set-up error in each and every patient, which may only ensure that just those sampled sessions were set up correctly, the SPC method enables set-up error prevention in all treatment sessions for all patients and, at the same time, reduces the control costs. Copyright © 2009 SECA. Published by Elsevier Espana. All rights reserved.
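
    A minimal sketch of the control-chart arithmetic for sub-groups of three patients, using the standard Shewhart X-bar constant for subgroup size 3; the set-up error values are made up, and the clinic's actual protocol may differ in detail.

        import numpy as np

        A2 = 1.023   # X-bar chart constant for subgroups of size 3

        def xbar_limits(subgroups):
            # subgroups: one row per subgroup of 3 measured set-up errors (mm)
            subgroups = np.asarray(subgroups, float)
            center = subgroups.mean(axis=1).mean()     # grand mean of subgroup means
            rbar = np.ptp(subgroups, axis=1).mean()    # average subgroup range
            return center - A2 * rbar, center, center + A2 * rbar

        data = [[1.2, -0.4, 0.8], [0.3, 1.1, -0.2], [0.9, 0.0, 0.5], [-0.6, 0.7, 1.0]]
        lcl, cl, ucl = xbar_limits(data)
        print(f"LCL = {lcl:.2f} mm   CL = {cl:.2f} mm   UCL = {ucl:.2f} mm")
        # A new subgroup mean outside [LCL, UCL] signals an assignable cause to remove
        # before continuing treatment.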

  15. Measurement accuracies in band-limited extrapolation

    NASA Technical Reports Server (NTRS)

    Kritikos, H. N.

    1982-01-01

    The problem of numerical instability associated with extrapolation algorithms is addressed. An attempt is made to estimate the bounds for the acceptable errors and to place a ceiling on the measurement accuracy and computational accuracy needed for the extrapolation. It is shown that in band-limited (or visible-angle-limited) extrapolation the larger effective aperture L' that can be realized from a finite aperture L by oversampling is a function of the accuracy of measurements. It is shown that for sampling in the interval L/b ≤ |x| ≤ L, b > 1, the signal must be known within an error ε_N given by ε_N^2 ≈ (1/4)(2kL')^3 [(e/8b)(L/L')]^(2kL'), where L is the physical aperture, L' is the extrapolated aperture, and k = 2π/λ.

  16. Accepting Individual Differences: Overview.

    ERIC Educational Resources Information Center

    Cohen, Shirley; And Others

    The overview, the first in a series of five manuals, describes the goals of the AID (Accepting Individual Differences) curriculum of fostering acceptance and respect for differences, as exemplified by disabilities. Briefly discussed in the guide's section on the curriculum's rationale are need, assumptions (such as that handicapped individuals…

  17. Remediating Common Math Errors.

    ERIC Educational Resources Information Center

    Wagner, Rudolph F.

    1981-01-01

    Explanations and remediation suggestions for five types of mathematics errors due either to perceptual or cognitive difficulties are given. Error types include directionality problems, mirror writing, visually misperceived signs, diagnosed directionality problems, and mixed process errors. (CL)

  18. Current limiters

    SciTech Connect

    Loescher, D.H.; Noren, K.

    1996-09-01

    The current that flows between the electrical test equipment and the nuclear explosive must be limited to safe levels during electrical tests conducted on nuclear explosives at the DOE Pantex facility. The safest way to limit the current is to use batteries that can provide only acceptably low current into a short circuit; unfortunately this is not always possible. When it is not possible, current limiters, along with other design features, are used to limit the current. Three types of current limiters, the fuse blower, the resistor limiter, and the MOSFET-pass-transistor limiters, are used extensively in Pantex test equipment. Detailed failure mode and effects analyses were conducted on these limiters. Two other types of limiters were also analyzed. It was found that there is no best type of limiter that should be used in all applications. The fuse blower has advantages when many circuits must be monitored, a low insertion voltage drop is important, and size and weight must be kept low. However, this limiter has many failure modes that can lead to the loss of over current protection. The resistor limiter is simple and inexpensive, but is normally usable only on circuits for which the nominal current is less than a few tens of milliamperes. The MOSFET limiter can be used on high current circuits, but it has a number of single point failure modes that can lead to a loss of protective action. Because bad component placement or poor wire routing can defeat any limiter, placement and routing must be designed carefully and documented thoroughly.

  19. Robust characterization of leakage errors

    NASA Astrophysics Data System (ADS)

    Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph

    2016-04-01

    Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational ‘qubit’, the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor to achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.

  20. The Cline of Errors in the Writing of Japanese University Students

    ERIC Educational Resources Information Center

    French, Gary

    2005-01-01

    In this study, errors in the English writing of students in the College of World Englishes at Chukyo University, Japan are examined to determine if there is a level of acceptance among teachers. If there is, are these errors becoming part of an accepted, standardized Japanese English? Results show there is little acceptance of third person…

  1. Inflation of the type I error: investigations on regulatory recommendations for bioequivalence of highly variable drugs.

    PubMed

    Wonnemann, Meinolf; Frömke, Cornelia; Koch, Armin

    2015-01-01

    We investigated different evaluation strategies for bioequivalence trials with highly variable drugs on their resulting empirical type I error and empirical power. The classical 'unscaled' crossover design with average bioequivalence evaluation, the Add-on concept of the Japanese guideline, and the current 'scaling' approach of EMA were compared. Simulation studies were performed based on the assumption of a single dose drug administration while changing the underlying intra-individual variability. Inclusion of Add-on subjects following the Japanese concept led to slight increases of the empirical α-error (≈7.5%). For the approach of EMA we noted an unexpected tremendous increase of the rejection rate at a geometric mean ratio of 1.25. Moreover, we detected error rates slightly above the pre-set limit of 5% even at the proposed 'scaled' bioequivalence limits. With the classical 'unscaled' approach and the Japanese guideline concept the goal of reduced subject numbers in bioequivalence trials of HVDs cannot be achieved. On the other hand, widening the acceptance range comes at the price that quite a number of products will be accepted bioequivalent that had not been accepted in the past. A two-stage design with control of the global α therefore seems the better alternative.
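
    A minimal simulation sketch of how an empirical type I error can be estimated for the classical unscaled average-bioequivalence evaluation, with the true geometric mean ratio placed at the 1.25 acceptance boundary; the 2x2 crossover is reduced to per-subject log-scale differences and all parameter values are illustrative.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        def empirical_type1(n_subj=24, cv_intra=0.30, gmr=1.25, n_trials=5000, limits=(0.80, 1.25)):
            # Accept bioequivalence if the 90% CI of the geometric mean ratio lies entirely
            # within the acceptance limits; with the true ratio on the boundary, the
            # acceptance rate is the empirical type I error.
            sigma_w = np.sqrt(np.log(cv_intra ** 2 + 1.0))   # within-subject SD on the log scale
            t_crit = stats.t.ppf(0.95, n_subj - 2)
            accepted = 0
            for _ in range(n_trials):
                d = rng.normal(np.log(gmr), np.sqrt(2.0) * sigma_w, n_subj)   # log(T) - log(R) per subject
                mean, se = d.mean(), d.std(ddof=1) / np.sqrt(n_subj)
                lo, hi = np.exp(mean - t_crit * se), np.exp(mean + t_crit * se)
                accepted += (lo >= limits[0]) and (hi <= limits[1])
            return accepted / n_trials

        print(empirical_type1())   # stays at or below the nominal 5% for the unscaled approach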

  2. Caldecott Medal Acceptance.

    ERIC Educational Resources Information Center

    Provensen, Alice; Provensen, Martin

    1984-01-01

    Reprints the text of the Provensens' Caldecott medal acceptance speech in which they describe their early interest in libraries and literature, the collaborative aspect of their work, and their current interest in aviation. (CRH)

  3. Newbery Medal Acceptance.

    ERIC Educational Resources Information Center

    Freedman, Russell

    1988-01-01

    Presents the Newbery Medal acceptance speech of Russell Freedman, writer of children's nonfiction. Discusses the place of nonfiction in the world of children's literature, the evolution of children's biographies, and the author's work on "Lincoln." (ARH)

  4. Faster magnet sorting with a threshold acceptance algorithm

    NASA Astrophysics Data System (ADS)

    Lidia, Steve; Carr, Roger

    1995-02-01

    We introduce here a new technique for sorting magnets to minimize the field errors in permanent magnet insertion devices. Simulated annealing has been used in this role, but we find the technique of threshold acceptance produces results of equal quality in less computer time. Threshold accepting would be of special value in designing very long insertion devices, such as long free electron lasers (FELs). Our application of threshold acceptance to magnet sorting showed that it converged to equivalently low values of the cost function, but that it converged significantly faster. We present typical cases showing time to convergence for various error tolerances, magnet numbers, and temperature schedules.
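
    A minimal sketch of the threshold accepting heuristic applied to an ordering problem; the magnet model and the cost function (the peak of the running sum of signed strength errors) are toy stand-ins for the real field-error figure of merit.

        import random
        from itertools import accumulate

        def threshold_accept(order, cost, n_steps=20000, thr0=0.5, seed=1):
            # Like simulated annealing, but a trial swap is accepted whenever the cost
            # increase stays below the current (steadily shrinking) threshold.
            rng = random.Random(seed)
            order = list(order)
            current = cost(order)
            for step in range(n_steps):
                threshold = thr0 * (1 - step / n_steps)
                i, j = rng.sample(range(len(order)), 2)
                order[i], order[j] = order[j], order[i]      # trial swap
                new = cost(order)
                if new - current < threshold:
                    current = new                            # accept
                else:
                    order[i], order[j] = order[j], order[i]  # undo
            return order, current

        rng = random.Random(7)
        strength_error = [rng.uniform(-1, 1) for _ in range(40)]   # toy magnet errors
        cost = lambda o: max(abs(s) for s in accumulate(strength_error[k] for k in o))
        best_order, best_cost = threshold_accept(range(40), cost)
        print(f"final cost {best_cost:.3f}")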

  5. Faster magnet sorting with a threshold acceptance algorithm

    NASA Astrophysics Data System (ADS)

    Lidia, S.; Carr, R.

    1994-08-01

    The authors introduce here a new technique for sorting magnets to minimize the field errors in permanent magnet insertion devices. Simulated annealing has been used in this role, but they find the technique of threshold acceptance produces results of equal quality in less computer time. Threshold accepting would be of special value in designing very long insertion devices, such as long FEL's. Their application of threshold acceptance to magnet sorting showed that it converged to equivalently low values of the cost function, but that it converged significantly faster. They present typical cases showing time to convergence for various error tolerances, magnet numbers, and temperature schedules.

  6. Estimation of flood warning runoff thresholds in ungauged basins with asymmetric error functions

    NASA Astrophysics Data System (ADS)

    Toth, Elena

    2016-06-01

    In many real-world flood forecasting systems, the runoff thresholds for activating warnings or mitigation measures correspond to the flow peaks with a given return period (often 2 years, which may be associated with the bankfull discharge). At locations where the historical streamflow records are absent or very limited, the threshold can be estimated with regionally derived empirical relationships between catchment descriptors and the desired flood quantile. Whatever the function form, such models are generally parameterised by minimising the mean square error, which assigns equal importance to overprediction or underprediction errors. Considering that the consequences of an overestimated warning threshold (leading to the risk of missing alarms) generally have a much lower level of acceptance than those of an underestimated threshold (leading to the issuance of false alarms), the present work proposes to parameterise the regression model through an asymmetric error function, which penalises the overpredictions more. The estimates by models (feedforward neural networks) with increasing degree of asymmetry are compared with those of a traditional, symmetrically trained network, in a rigorous cross-validation experiment referred to a database of catchments covering the country of Italy. The analysis shows that the use of the asymmetric error function can substantially reduce the number and extent of overestimation errors, if compared to the use of the traditional square errors. Of course such reduction is at the expense of increasing underestimation errors, but the overall accurateness is still acceptable and the results illustrate the potential value of choosing an asymmetric error function when the consequences of missed alarms are more severe than those of false alarms.
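
    A minimal sketch of an asymmetric squared-error loss of the kind described, with overpredictions weighted more heavily than underpredictions; a plain linear model fitted by gradient descent stands in for the feedforward neural networks, and the weight of 4 is an arbitrary choice.

        import numpy as np

        def asymmetric_sq_error(y_true, y_pred, w_over=4.0):
            # Squared error weighted w_over times more when the model overpredicts the
            # threshold (overestimation -> risk of missed alarms).
            err = y_pred - y_true
            return np.mean(np.where(err > 0, w_over, 1.0) * err ** 2)

        # Tiny illustration: fit y = a*x + b by gradient descent under the same weighting;
        # the fit is pushed below the symmetric least-squares line.
        rng = np.random.default_rng(0)
        x = rng.uniform(0, 1, 200)
        y = 2.0 * x + 0.5 + rng.normal(0, 0.2, 200)      # synthetic "threshold" data
        a, b = 0.0, 0.0
        for _ in range(5000):
            err = (a * x + b) - y
            w = np.where(err > 0, 4.0, 1.0)
            a -= 0.05 * np.mean(2 * w * err * x)
            b -= 0.05 * np.mean(2 * w * err)
        print(f"asymmetric fit: a = {a:.2f}, b = {b:.2f} (plain least squares would give about 2.0 and 0.5)")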

  7. Estimation of flood warning runoff thresholds in ungauged basins with asymmetric error functions

    NASA Astrophysics Data System (ADS)

    Toth, E.

    2015-06-01

    In many real-world flood forecasting systems, the runoff thresholds for activating warnings or mitigation measures correspond to the flow peaks with a given return period (often the 2-year one, which may be associated with the bankfull discharge). At locations where the historical streamflow records are absent or very limited, the threshold can be estimated with regionally derived empirical relationships between catchment descriptors and the desired flood quantile. Whatever the function form, such models are generally parameterised by minimising the mean square error, which assigns equal importance to overprediction or underprediction errors. Considering that the consequences of an overestimated warning threshold (leading to the risk of missing alarms) generally have a much lower level of acceptance than those of an underestimated threshold (leading to the issuance of false alarms), the present work proposes to parameterise the regression model through an asymmetric error function, which penalises overpredictions more. The estimates by models (feedforward neural networks) with increasing degree of asymmetry are compared with those of a traditional, symmetrically trained network, in a rigorous cross-validation experiment referred to a database of catchments covering the country of Italy. The analysis shows that the use of the asymmetric error function can substantially reduce the number and extent of overestimation errors, if compared to the use of the traditional square errors. Of course such reduction is at the expense of increasing underestimation errors, but the overall accurateness is still acceptable and the results illustrate the potential value of choosing an asymmetric error function when the consequences of missed alarms are more severe than those of false alarms.

  8. Antenna motion errors in bistatic SAR imagery

    NASA Astrophysics Data System (ADS)

    Wang, Ling; Yazıcı, Birsen; Cagri Yanik, H.

    2015-06-01

    Antenna trajectory or motion errors are pervasive in synthetic aperture radar (SAR) imaging. Motion errors typically result in smearing and positioning errors in SAR images. Understanding the relationship between the trajectory errors and position errors in reconstructed images is essential in forming focused SAR images. Existing studies on the effect of antenna motion errors are limited to certain geometries, trajectory error models or monostatic SAR configuration. In this paper, we present an analysis of position errors in bistatic SAR imagery due to antenna motion errors. Bistatic SAR imagery is becoming increasingly important in the context of passive imaging and multi-sensor imaging. Our analysis provides an explicit quantitative relationship between the trajectory errors and the positioning errors in bistatic SAR images. The analysis is applicable to arbitrary trajectory errors and arbitrary imaging geometries including wide apertures and large scenes. We present extensive numerical simulations to validate the analysis and to illustrate the results in commonly used bistatic configurations and certain trajectory error models.

  9. Medication errors: prescribing faults and prescription errors

    PubMed Central

    Velo, Giampaolo P; Minuz, Pietro

    2009-01-01

    Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically. PMID:19594530

  10. Quantum error correction for continuously detected errors

    NASA Astrophysics Data System (ADS)

    Ahn, Charlene; Wiseman, H. M.; Milburn, G. J.

    2003-05-01

    We show that quantum feedback control can be used as a quantum-error-correction process for errors induced by a weak continuous measurement. In particular, when the error model is restricted to one, perfectly measured, error channel per physical qubit, quantum feedback can act to perfectly protect a stabilizer codespace. Using the stabilizer formalism we derive an explicit scheme, involving feedback and an additional constant Hamiltonian, to protect an (n-1)-qubit logical state encoded in n physical qubits. This works for both Poisson (jump) and white-noise (diffusion) measurement processes. Universal quantum computation is also possible in this scheme. As an example, we show that detected-spontaneous emission error correction with a driving Hamiltonian can greatly reduce the amount of redundancy required to protect a state from that which has been previously postulated [e.g., Alber et al., Phys. Rev. Lett. 86, 4402 (2001)].

  11. [Medication errors in anesthesia: unacceptable or unavoidable?]

    PubMed

    Dhawan, Ira; Tewari, Anurag; Sehgal, Sankalp; Sinha, Ashish Chandra

    Medication errors are common causes of patient morbidity and mortality, and they add a financial burden to the institution as well. Though the impact varies from no harm to serious adverse effects including death, the issue needs attention on a priority basis since medication errors are preventable. In today's world, where people are aware and medical claims are on the rise, it is of utmost priority that we curb this issue. Individual effort to decrease medication errors alone might not be successful until a change in the existing protocols and systems is incorporated. Often drug errors that occur cannot be reversed. The best way to 'treat' drug errors is to prevent them. Wrong medication (due to syringe swap), overdose (due to misunderstanding or preconception of the dose, pump misuse and dilution error), incorrect administration route, underdosing and omission are common causes of medication errors that occur perioperatively. Drug omission and calculation mistakes occur commonly in the ICU. Medication errors can occur perioperatively during preparation, administration or record keeping. Numerous human and system errors can be blamed for the occurrence of medication errors. The need of the hour is to stop the blame game, accept mistakes and develop a safe and 'just' culture in order to prevent medication errors. Newly devised systems like VEINROM, a fluid delivery system, are a novel approach to preventing drug errors due to the most commonly used medications in anesthesia. Similar developments, along with vigilant doctors, a safe workplace culture and organizational support, can all together help prevent these errors. Copyright © 2016. Publicado por Elsevier Editora Ltda.

  12. Medication errors in anesthesia: unacceptable or unavoidable?

    PubMed

    Dhawan, Ira; Tewari, Anurag; Sehgal, Sankalp; Sinha, Ashish Chandra

    Medication errors are common causes of patient morbidity and mortality, and they add a financial burden to the institution as well. Though the impact varies from no harm to serious adverse effects including death, the issue needs attention on a priority basis since medication errors are preventable. In today's world, where people are aware and medical claims are on the rise, it is of utmost priority that we curb this issue. Individual effort to decrease medication errors alone might not be successful until a change in the existing protocols and systems is incorporated. Often drug errors that occur cannot be reversed. The best way to 'treat' drug errors is to prevent them. Wrong medication (due to syringe swap), overdose (due to misunderstanding or preconception of the dose, pump misuse and dilution error), incorrect administration route, underdosing and omission are common causes of medication errors that occur perioperatively. Drug omission and calculation mistakes occur commonly in the ICU. Medication errors can occur perioperatively during preparation, administration or record keeping. Numerous human and system errors can be blamed for the occurrence of medication errors. The need of the hour is to stop the blame game, accept mistakes and develop a safe and 'just' culture in order to prevent medication errors. Newly devised systems like VEINROM, a fluid delivery system, are a novel approach to preventing drug errors due to the most commonly used medications in anesthesia. Similar developments, along with vigilant doctors, a safe workplace culture and organizational support, can all together help prevent these errors. Copyright © 2016. Published by Elsevier Editora Ltda.

  13. Learner Error, Affectual Stimulation, and Conceptual Change

    ERIC Educational Resources Information Center

    Allen, Michael

    2010-01-01

    Pupils' expectation-related errors oppose the development of an appropriate scientific attitude towards empirical evidence and the learning of accepted science content, representing a hitherto neglected area of research in science education. In spite of these apparent drawbacks, a pedagogy is described that "encourages" pupils to allow their…

  14. Minimizing Experimental Error in Thinning Research

    Treesearch

    C. B. Briscoe

    1964-01-01

    Many diverse approaches have been made to prescribing and evaluating thinnings on an objective basis. None of the techniques proposed has been widely accepted. Indeed, none has been proven superior to the others, nor even widely applicable. There are at least two possible reasons for this: none of the techniques suggested is of any general utility, and/or experimental error...

  15. Error propagation in calculated ratios.

    PubMed

    Holmes, Daniel T; Buhr, Kevin A

    2007-06-01

    Calculated quantities that combine results of multiple laboratory tests have become popular for screening, risk evaluation, and ongoing care in medicine. Many of these are ratios. In this paper, we address the specific issue of propagated random analytical error in calculated ratios. Standard error propagation theory is applied to develop an approximate formula for the mean, standard deviation (SD), and coefficient of variation (CV) of the ratio of two independent, normally distributed random variables. A method of mathematically modeling the problem by random simulations to validate these formulas is proposed and applied. Comparisons are made with the commonly quoted formula for the CV of a ratio. The approximation formula for the CV of a ratio R=X/Y of independent Gaussian random variables developed herein has an absolute percentage error less than 4% for CVs of less than 20% in Y. In contrast the commonly quoted formula has a percentage error of up to 16% for CVs of less than 20% in Y. The usual formula for the CV of a ratio functions well when the CV of the denominator is less than 10% but for larger CVs, the formula proposed here is more accurate. Random analytical error in calculated ratios may be larger than clinicians and laboratorians are aware. The magnitude of the propagated error needs to be considered when interpreting calculated ratios in the clinical laboratory, especially near medical decision limits where its effect may lead to erroneous conclusions.
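
    A minimal sketch of the validation-by-simulation approach: Monte Carlo estimates of the CV of a ratio of independent Gaussians are compared against the commonly quoted approximation sqrt(CV_X^2 + CV_Y^2); the paper's refined formula is not reproduced here and the parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(42)

        def simulated_ratio_cv(mu_x, cv_x, mu_y, cv_y, n=200_000):
            # Monte Carlo CV of R = X/Y for independent normal X and Y.
            x = rng.normal(mu_x, cv_x * mu_x, n)
            y = rng.normal(mu_y, cv_y * mu_y, n)
            r = x / y
            return r.std() / r.mean()

        for cv_y in (0.05, 0.10, 0.15):
            quoted = np.sqrt(0.05 ** 2 + cv_y ** 2)      # commonly quoted approximation
            sim = simulated_ratio_cv(10.0, 0.05, 5.0, cv_y)
            print(f"CV_Y = {cv_y:.2f}   quoted = {quoted:.3f}   simulated = {sim:.3f}")
        # The gap widens as the denominator CV grows, which is the regime where a
        # refined approximation is needed.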

  16. Quantifying errors without random sampling.

    PubMed

    Phillips, Carl V; LaPole, Luwanna M

    2003-06-12

    All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.
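
    A minimal sketch of the kind of Monte Carlo propagation advocated, applied to a made-up incidence calculation whose inputs carry judged (non-sampling) uncertainties encoded as distributions; all numbers are purely illustrative and unrelated to the foodborne-illness estimate.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000

        # Made-up calculation: annual cases = population * reported rate * under-reporting multiplier.
        population      = rng.normal(3.0e8, 0.05e8, n)                 # roughly +/- a few percent
        reported_rate   = rng.triangular(1.5e-4, 2.0e-4, 3.0e-4, n)    # per person per year
        under_reporting = rng.lognormal(np.log(10.0), 0.3, n)          # skewed multiplier

        cases = population * reported_rate * under_reporting
        lo, med, hi = np.percentile(cases, [2.5, 50, 97.5])
        print(f"annual cases: median {med:,.0f}  (95% uncertainty interval {lo:,.0f} to {hi:,.0f})")
        # Reporting the full interval avoids the false precision of a single point estimate.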

  17. Accepting space radiation risks.

    PubMed

    Schimmerling, Walter

    2010-08-01

    The human exploration of space inevitably involves exposure to radiation. Associated with this exposure are multiple risks, i.e., probabilities that certain aspects of an astronaut's health or performance will be degraded. The management of these risks requires that such probabilities be accurately predicted, that the actual exposures be verified, and that comprehensive records be maintained. Implicit in these actions is the fact that, at some point, a decision has been made to accept a certain level of risk. This paper examines ethical and practical considerations involved in arriving at a determination that risks are acceptable, roles that the parties involved may play, and obligations arising out of reliance on the informed consent paradigm seen as the basis for ethical radiation risk acceptance in space.

  18. Applying the intention-to-treat principle in practice: Guidance on handling randomisation errors.

    PubMed

    Yelland, Lisa N; Sullivan, Thomas R; Voysey, Merryn; Lee, Katherine J; Cook, Jonathan A; Forbes, Andrew B

    2015-08-01

    The intention-to-treat principle states that all randomised participants should be analysed in their randomised group. The implications of this principle are widely discussed in relation to the analysis, but have received limited attention in the context of handling errors that occur during the randomisation process. The aims of this article are to (1) demonstrate the potential pitfalls of attempting to correct randomisation errors and (2) provide guidance on handling common randomisation errors when they are discovered that maintains the goals of the intention-to-treat principle. The potential pitfalls of attempting to correct randomisation errors are demonstrated and guidance on handling common errors is provided, using examples from our own experiences. We illustrate the problems that can occur when attempts are made to correct randomisation errors and argue that documenting, rather than correcting these errors, is most consistent with the intention-to-treat principle. When a participant is randomised using incorrect baseline information, we recommend accepting the randomisation but recording the correct baseline data. If ineligible participants are inadvertently randomised, we advocate keeping them in the trial and collecting all relevant data but seeking clinical input to determine their appropriate course of management, unless they can be excluded in an objective and unbiased manner. When multiple randomisations are performed in error for the same participant, we suggest retaining the initial randomisation and either disregarding the second randomisation if only one set of data will be obtained for the participant, or retaining the second randomisation otherwise. When participants are issued the incorrect treatment at the time of randomisation, we propose documenting the treatment received and seeking clinical input regarding the ongoing treatment of the participant. Randomisation errors are almost inevitable and should be reported in trial publications. The

  19. A Fourier analysis on the maximum acceptable grid size for discrete proton beam dose calculation.

    PubMed

    Li, Haisen S; Romeijn, H Edwin; Dempsey, James F

    2006-09-01

    We developed an analytical method for determining the maximum acceptable grid size for discrete dose calculation in proton therapy treatment plan optimization, so that the accuracy of the optimized dose distribution is guaranteed in the phase of dose sampling and the superfluous computational work is avoided. The accuracy of dose sampling was judged by the criterion that the continuous dose distribution could be reconstructed from the discrete dose within a 2% error limit. To keep the error caused by the discrete dose sampling under a 2% limit, the dose grid size cannot exceed a maximum acceptable value. The method was based on Fourier analysis and the Shannon-Nyquist sampling theorem as an extension of our previous analysis for photon beam intensity modulated radiation therapy [J. F. Dempsey, H. E. Romeijn, J. G. Li, D. A. Low, and J. R. Palta, Med. Phys. 32, 380-388 (2005)]. The proton beam model used for the analysis was a near monoenergetic (of width about 1% the incident energy) and monodirectional infinitesimal (nonintegrated) pencil beam in water medium. By monodirection, we mean that the proton particles are in the same direction before entering the water medium and the various scattering prior to entrance to water is not taken into account. In intensity modulated proton therapy, the elementary intensity modulation entity for proton therapy is either an infinitesimal or finite sized beamlet. Since a finite sized beamlet is the superposition of infinitesimal pencil beams, the result of the maximum acceptable grid size obtained with infinitesimal pencil beam also applies to finite sized beamlet. The analytic Bragg curve function proposed by Bortfeld [T. Bortfeld, Med. Phys. 24, 2024-2033 (1997)] was employed. The lateral profile was approximated by a depth dependent Gaussian distribution. The model included the spreads of the Bragg peak and the lateral profiles due to multiple Coulomb scattering. The dependence of the maximum acceptable dose grid size on the
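
    A minimal sketch in the spirit of the analysis, assuming a Gaussian lateral dose profile of standard deviation sigma (so its spectrum is also Gaussian) and choosing the grid spacing so that the spectral amplitude at the Nyquist frequency falls below a small fraction; the 2% cut-off rule is an illustrative stand-in for the paper's error criterion.

        import numpy as np

        def max_grid_size(sigma_mm, spectral_fraction=0.02):
            # The Fourier transform of exp(-x^2/(2 sigma^2)) is proportional to
            # exp(-2 pi^2 sigma^2 f^2); find the frequency where it drops to the given
            # fraction and set the Nyquist limit there: grid spacing = 1/(2 f_cut).
            f_cut = np.sqrt(-np.log(spectral_fraction) / 2.0) / (np.pi * sigma_mm)
            return 1.0 / (2.0 * f_cut)

        for sigma in (2.0, 4.0, 8.0):          # lateral spread grows with depth
            print(f"sigma = {sigma:.0f} mm  ->  maximum grid spacing ~ {max_grid_size(sigma):.1f} mm")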

  20. Detection and avoidance of errors in computer software

    NASA Technical Reports Server (NTRS)

    Kinsler, Les

    1989-01-01

    The acceptance test errors of a computer software project were analyzed to determine whether the errors could have been detected or avoided in earlier phases of development. GROAGSS (Gamma Ray Observatory Attitude Ground Support System) was selected as the software project to be examined. The development of the software followed the standard Flight Dynamics Software Development methods. GROAGSS was developed between August 1985 and April 1989. The project is approximately 250,000 lines of code, of which approximately 43,000 lines are reused from previous projects. GROAGSS had a total of 1715 Change Report Forms (CRFs) submitted during the entire development and testing. These changes contained 936 errors. Of these 936 errors, 374 were found during the acceptance testing. These acceptance test errors were first categorized into methods of avoidance, including: more clearly written requirements; detailed review; code reading; structural unit testing; and functional system integration testing. The errors were later broken down in terms of effort to detect and correct, class of error, and probability that the prescribed detection method would be successful. These determinations were based on Software Engineering Laboratory (SEL) documents and interviews with the project programmers. A summary of the results of the categorizations is presented. The number of programming errors at the beginning of acceptance testing can be significantly reduced. The results of the existing development methodology are examined for ways of improvement. A basis is provided for the definition of a new development/testing paradigm. Monitoring of the new scheme will objectively determine its effectiveness in avoiding and detecting errors.

  1. Field error lottery

    SciTech Connect

    Elliott, C.J.; McVey, B. ); Quimby, D.C. )

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  2. Approaches to acceptable risk

    SciTech Connect

    Whipple, C.

    1997-04-30

    Several alternative approaches to address the question "How safe is safe enough?" are reviewed and an attempt is made to apply the reasoning behind these approaches to the issue of acceptability of radiation exposures received in space. The approaches to the issue of the acceptability of technological risk described here are primarily analytical, and are drawn from examples in the management of environmental health risks. These include risk-based approaches, in which specific quantitative risk targets determine the acceptability of an activity, and cost-benefit and decision analysis, which generally focus on the estimation and evaluation of risks, benefits and costs, in a framework that balances these factors against each other. These analytical methods tend by their quantitative nature to emphasize the magnitude of risks, costs and alternatives, and to downplay other factors, especially those that are not easily expressed in quantitative terms, that affect acceptance or rejection of risk. Such other factors include the issues of risk perceptions and how and by whom risk decisions are made.

  3. Why was Relativity Accepted?

    NASA Astrophysics Data System (ADS)

    Brush, S. G.

    Historians of science have published many studies of the reception of Einstein's special and general theories of relativity. Based on a review of these studies, and my own research on the role of the light-bending prediction in the reception of general relativity, I discuss the role of three kinds of reasons for accepting relativity: (1) empirical predictions and explanations; (2) social-psychological factors; and (3) aesthetic-mathematical factors. According to the historical studies, acceptance was a three-stage process. First, a few leading scientists adopted the special theory for aesthetic-mathematical reasons. In the second stage, their enthusiastic advocacy persuaded other scientists to work on the theory and apply it to problems currently of interest in atomic physics. The special theory was accepted by many German physicists by 1910 and had begun to attract some interest in other countries. In the third stage, the confirmation of Einstein's light-bending prediction attracted much public attention and forced all physicists to take the general theory of relativity seriously. In addition to light-bending, the explanation of the advance of Mercury's perihelion was considered strong evidence by theoretical physicists. The American astronomers who conducted successful tests of general relativity became defenders of the theory. There is little evidence that relativity was `socially constructed' but its initial acceptance was facilitated by the prestige and resources of its advocates.

  4. SU-D-BRD-07: Evaluation of the Effectiveness of Statistical Process Control Methods to Detect Systematic Errors For Routine Electron Energy Verification

    SciTech Connect

    Parker, S

    2015-06-15

    Purpose: To evaluate the ability of statistical process control methods to detect systematic errors when using a two dimensional (2D) detector array for routine electron beam energy verification. Methods: Electron beam energy constancy was measured using an aluminum wedge and a 2D diode array on four linear accelerators. Process control limits were established. Measurements were recorded in control charts and compared with both calculated process control limits and TG-142 recommended specification limits. The data was tested for normality, process capability and process acceptability. Additional measurements were recorded while systematic errors were intentionally introduced. Systematic errors included shifts in the alignment of the wedge, incorrect orientation of the wedge, and incorrect array calibration. Results: Control limits calculated for each beam were smaller than the recommended specification limits. Process capability and process acceptability ratios were greater than one in all cases. All data was normally distributed. Shifts in the alignment of the wedge were most apparent for low energies. The smallest shift (0.5 mm) was detectable using process control limits in some cases, while the largest shift (2 mm) was detectable using specification limits in only one case. The wedge orientation tested did not affect the measurements as this did not affect the thickness of aluminum over the detectors of interest. Array calibration dependence varied with energy and selected array calibration. 6 MeV was the least sensitive to array calibration selection while 16 MeV was the most sensitive. Conclusion: Statistical process control methods demonstrated that the data distribution was normally distributed, the process was capable of meeting specifications, and that the process was centered within the specification limits. Though not all systematic errors were distinguishable from random errors, process control limits increased the ability to detect systematic errors
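
    As a rough illustration of the statistical process control machinery described above, the sketch below computes individuals-chart control limits and capability ratios from a baseline run of energy-constancy readings. The readings, tolerance values, and function names are hypothetical, and the constant d2 = 1.128 is the standard individuals/moving-range chart factor rather than anything taken from this abstract.

```python
import numpy as np

def individuals_control_limits(x):
    """Individuals (X) chart limits from a baseline run.
    Short-term sigma is estimated from the average moving range (d2 = 1.128)."""
    x = np.asarray(x, dtype=float)
    moving_range = np.abs(np.diff(x))
    sigma = moving_range.mean() / 1.128
    center = x.mean()
    return center - 3 * sigma, center, center + 3 * sigma, sigma

def capability(x, lsl, usl):
    """Process capability (Cp) and acceptability (Cpk) against specification limits."""
    _, center, _, sigma = individuals_control_limits(x)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - center, center - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical baseline of daily energy-constancy deviations (arbitrary units).
baseline = [0.02, -0.01, 0.00, 0.03, -0.02, 0.01, 0.00, 0.02, -0.01, 0.01]
lcl, center, ucl, _ = individuals_control_limits(baseline)
cp, cpk = capability(baseline, lsl=-0.10, usl=0.10)
print(f"control limits [{lcl:.3f}, {ucl:.3f}], Cp = {cp:.2f}, Cpk = {cpk:.2f}")
# A new reading outside [lcl, ucl] flags a possible systematic error even while
# it still sits inside the wider specification (tolerance) limits.
```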

  5. Inborn errors of metabolism

    MedlinePlus

    Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine . 25th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 205. Rezvani I, Rezvani G. An ...

  6. Accounting for Interlanguage Errors

    ERIC Educational Resources Information Center

    Benetti, Jean N.

    1978-01-01

    A study was conducted to test various explanations of the error of unmarked noun plurals made by first generation Italian immigrants. The error appeared to be "fossilized" or not eradicated over a period of time. (SW)

  7. Drug Errors in Anaesthesiology

    PubMed Central

    Jain, Rajnish Kumar; Katiyar, Sarika

    2009-01-01

    Summary Medication errors are a leading cause of morbidity and mortality in hospitalized patients. The incidence of these drug errors during anaesthesia is not certain. They impose a considerable financial burden to health care systems apart from the patient losses. Common causes of these errors and their prevention is discussed. PMID:20640103

  8. Minimizing actuator-induced errors in active space telescope mirrors

    NASA Astrophysics Data System (ADS)

    Smith, Matthew W.; Miller, David W.

    2010-07-01

    The trend in future space telescopes points toward increased primary mirror diameter, which improves resolution and sensitivity. However, given the constraints on mass and volume deliverable to orbit by current launch vehicles, creative design solutions are needed to enable increased mirror size while keeping mass and volume within acceptable limits. Lightweight, segmented, rib-stiffened, actively controlled primary mirrors have emerged as a potential solution. Embedded surface-parallel actuators can be used to change the mirror prescription on orbit, lowering mirror mass overall by enabling lighter substrate materials such as silicon carbide (SiC) and relaxing manufacturing constraints. However, the discrete nature of the actuators causes high spatial frequency residual errors when commanding low-order prescription changes. A parameterized finite element model is used to simulate actuator-induced residual error and investigate design solutions that mitigate this error source. Judicious specification of mirror substrate geometry and actuator length is shown to reduce actuator-induced residual while keeping areal density constant. Specifically, a sinusoidally-varying rib shaping function is found to increase actuator influence functions and decrease residual. Likewise, longer actuators are found to offer reduced residual. Other options for geometric shaping are discussed, such as rib-to-facesheet blending and the use of two-dimensional patch actuators.

  9. Improved modeling of multivariate measurement errors based on the Wishart distribution.

    PubMed

    Wentzell, Peter D; Cleary, Cody S; Kompany-Zareh, M

    2017-03-22

    The error covariance matrix (ECM) is an important tool for characterizing the errors from multivariate measurements, representing both the variance and covariance in the errors across multiple channels. Such information is useful in understanding and minimizing sources of experimental error and in the selection of optimal data analysis procedures. Experimental ECMs, normally obtained through replication, are inherently noisy, inconvenient to obtain, and offer limited interpretability. Significant advantages can be realized by building a model for the ECM based on established error types. Such models are less noisy, reduce the need for replication, mitigate mathematical complications such as matrix singularity, and provide greater insights. While the fitting of ECM models using least squares has been previously proposed, the present work establishes that fitting based on the Wishart distribution offers a much better approach. Simulation studies show that the Wishart method results in parameter estimates with a smaller variance and also facilitates the statistical testing of alternative models using a parameterized bootstrap method. The new approach is applied to fluorescence emission data to establish the acceptability of various models containing error terms related to offset, multiplicative offset, shot noise and uniform independent noise. The implications of the number of replicates, as well as single vs. multiple replicate sets are also described.
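
    A minimal sketch of the kind of fitting the abstract describes: a parameterized error covariance model is fitted to replicate measurements by maximizing the Wishart likelihood of the sample scatter matrix. The two-term model (independent noise plus a fully correlated offset term), the synthetic data, and all names are illustrative assumptions, not the authors' actual model or code.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import wishart

def fit_ecm_wishart(replicates, model, theta0):
    """Fit Sigma(theta) to an (n, p) block of replicate measurements by
    maximizing the Wishart likelihood of the scatter matrix (n - 1) * S."""
    n, _ = replicates.shape
    scatter = (n - 1) * np.cov(replicates, rowvar=False)

    def neg_loglik(theta):
        try:
            return -wishart.logpdf(scatter, df=n - 1, scale=model(theta))
        except (np.linalg.LinAlgError, ValueError):
            return np.inf  # penalize non-positive-definite trial covariances

    return minimize(neg_loglik, x0=theta0, method="Nelder-Mead").x

# Illustrative two-term error model: iid noise plus a constant offset error term.
def offset_plus_iid(theta, p=8):
    a, b = np.abs(theta)
    return a * np.eye(p) + b * np.ones((p, p))

rng = np.random.default_rng(0)
true_sigma = offset_plus_iid([0.5, 0.2])
errors = rng.multivariate_normal(np.zeros(8), true_sigma, size=40)
print(fit_ecm_wishart(errors, offset_plus_iid, theta0=[1.0, 1.0]))  # should land near [0.5, 0.2]
```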

  10. [Errors in laboratory daily practice].

    PubMed

    Larrose, C; Le Carrer, D

    2007-01-01

    Legislation set by GBEA (Guide de bonne exécution des analyses) requires that, before performing analysis, the laboratory directors check both the nature of the samples and the patients' identity. The data processing of requisition forms, which identifies key errors, was established in 2000 and in 2002 by the specialized biochemistry laboratory, also with the contribution of the reception centre for biological samples. The laboratories follow strict acceptability criteria as a starting point at reception, and then check requisition forms and biological samples. All errors are logged into the laboratory database and analysis reports are sent to the care unit specifying the problems and the consequences they have on the analysis. The data are then assessed by the laboratory directors to produce monthly or annual statistical reports. This indicates the number of errors, which are then indexed to patient files to reveal the specific problem areas, therefore allowing the laboratory directors to train the nurses and enable corrective action.

  11. RANDOM AND SYSTEMATIC FIELD ERRORS IN THE SNS RING: A STUDY OF THEIR EFFECTS AND COMPENSATION

    SciTech Connect

    GARDNER,C.J.; LEE,Y.Y.; WENG,W.T.

    1998-06-22

    The Accumulator Ring for the proposed Spallation Neutron Source (SNS) [1] is to accept a 1 ms beam pulse from a 1 GeV Proton Linac at a repetition rate of 60 Hz. For each beam pulse, 10¹⁴ protons (some 1,000 turns) are to be accumulated via charge-exchange injection and then promptly extracted to an external target for the production of neutrons by spallation. At this very high intensity, stringent limits (less than two parts in 10,000 per pulse) on beam loss during accumulation must be imposed in order to keep activation of ring components at an acceptable level. To stay within the desired limit, the effects of random and systematic field errors in the ring require careful attention. This paper describes the authors' studies of these effects and the magnetic corrector schemes for their compensation.

  12. Acceptability of human risk.

    PubMed

    Kasperson, R E

    1983-10-01

    This paper has three objectives: to explore the nature of the problem implicit in the term "risk acceptability," to examine the possible contributions of scientific information to risk standard-setting, and to argue that societal response is best guided by considerations of process rather than formal methods of analysis. Most technological risks are not accepted but are imposed. There is also little reason to expect consensus among individuals on their tolerance of risk. Moreover, debates about risk levels are often at base debates over the adequacy of the institutions which manage the risks. Scientific information can contribute three broad types of analyses to risk-setting deliberations: contextual analysis, equity assessment, and public preference analysis. More effective risk-setting decisions will involve attention to the process used, particularly in regard to the requirements of procedural justice and democratic responsibility.

  13. Acceptability of human risk.

    PubMed Central

    Kasperson, R E

    1983-01-01

    This paper has three objectives: to explore the nature of the problem implicit in the term "risk acceptability," to examine the possible contributions of scientific information to risk standard-setting, and to argue that societal response is best guided by considerations of process rather than formal methods of analysis. Most technological risks are not accepted but are imposed. There is also little reason to expect consensus among individuals on their tolerance of risk. Moreover, debates about risk levels are often at base debates over the adequacy of the institutions which manage the risks. Scientific information can contribute three broad types of analyses to risk-setting deliberations: contextual analysis, equity assessment, and public preference analysis. More effective risk-setting decisions will involve attention to the process used, particularly in regard to the requirements of procedural justice and democratic responsibility. PMID:6418541

  14. Acceptance Test Plan.

    DTIC Science & Technology

    2014-09-26

    Acceptance test plan for special reliability tests of a broadband microwave amplifier panel, prepared by David C. Kraus, Reliability Engineer, Westinghouse Defense and Electronics Center (Development and Operations Division), Baltimore, MD; monitoring organization: Naval Research Laboratory.

  15. Empathy and error processing.

    PubMed

    Larson, Michael J; Fair, Joseph E; Good, Daniel A; Baldwin, Scott A

    2010-05-01

    Recent research suggests a relationship between empathy and error processing. Error processing is an evaluative control function that can be measured using post-error response time slowing and the error-related negativity (ERN) and post-error positivity (Pe) components of the event-related potential (ERP). Thirty healthy participants completed two measures of empathy, the Interpersonal Reactivity Index (IRI) and the Empathy Quotient (EQ), and a modified Stroop task. Post-error slowing was associated with increased empathic personal distress on the IRI. ERN amplitude was related to overall empathy score on the EQ and the fantasy subscale of the IRI. The Pe and measures of empathy were not related. Results remained consistent when negative affect was controlled via partial correlation, with an additional relationship between ERN amplitude and empathic concern on the IRI. Findings support a connection between empathy and error processing mechanisms.

  16. 12 CFR 226.13 - Billing error resolution.27

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) Definition of billing error. For purposes of this section, the term billing error means: (1) A reflection on... credit plan. (2) A reflection on or with a periodic statement of an extension of credit that is not... reflection on or with a periodic statement of an extension of credit for property or services not accepted by...

  17. Age and Acceptance of Euthanasia.

    ERIC Educational Resources Information Center

    Ward, Russell A.

    1980-01-01

    Study explores relationship between age (and sex and race) and acceptance of euthanasia. Women and non-Whites were less accepting because of religiosity. Among older people less acceptance was attributable to their lesser education and greater religiosity. Results suggest that quality of life in old age affects acceptability of euthanasia. (Author)

  18. [Acceptance and commitment therapy]

    PubMed

    Ducasse, D; Fond, G

    2015-02-01

    Acceptance and commitment therapy (ACT) is a third-generation cognitive-behavioral therapy. The point is to help patients improve their psychological flexibility in order to accept unavoidable private events. Thus, they have the opportunity to invest energy in committed actions rather than struggle against their psychological events. (i) To present the basic concepts of ACT and (ii) to propose a systematic review of the literature on the effectiveness of this kind of psychotherapy. (i) The core concepts of ACT come from Monestès (2011), Schoendorff (2011), and Harris (2012); (ii) we conducted a systematic review of the literature using the PRISMA criteria. The search string was "acceptance and commitment therapy AND randomized controlled trial". The MEDLINE, Cochrane and Web of Science databases were checked. Overall, 61 articles were found, of which, after reading the abstracts, 40 corresponded to the subject of our study. (i) Psychological flexibility is established through six core ACT processes (cognitive defusion, acceptance, being present, values, committed action, self as context), while the therapist emphasizes an experiential approach. (ii) Emerging research shows that ACT is efficacious in the psychological treatment of a wide range of psychiatric problems, including psychosis, depression, obsessive-compulsive disorder, trichotillomania, generalized anxiety disorder, post-traumatic stress disorder, borderline personality disorder and eating disorders. ACT has also shown utility in other areas of medicine: the management of chronic pain, drug dependence, smoking cessation, the management of epilepsy, diabetic self-management, work stress, tinnitus, and multiple sclerosis. Meta-analysis of controlled outcome studies reported an average effect size (Cohen's d) of 0.66 at post-treatment (n=704) and 0.65 (n=580) at follow-up (on average 19.2 weeks later). In studies involving

  19. Aircraft system modeling error and control error

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.

  1. Burst error correction extensions for large Reed Solomon codes

    NASA Technical Reports Server (NTRS)

    Owsley, P.

    1990-01-01

    Reed Solomon codes are powerful error correcting codes that include some of the best random and burst correcting codes currently known. It is well known that an (n,k) Reed Solomon code can correct up to (n - k)/2 errors. Many applications utilizing Reed Solomon codes require correction of errors consisting primarily of bursts. In this paper, it is shown that the burst correcting ability of Reed Solomon codes can be increased beyond (n - k)/2 with an acceptable probability of miscorrection.
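
    A small worked example of the (n - k)/2 bound quoted above, using the common (255, 223) Reed-Solomon parameters purely as an illustration; the paper's own codes and burst-extension scheme are not reproduced here.

```python
def rs_random_error_capacity(n: int, k: int) -> int:
    """Guaranteed random symbol-error correction of an (n, k) Reed-Solomon code."""
    return (n - k) // 2

n, k = 255, 223                      # illustrative parameters, 8-bit symbols
t = rs_random_error_capacity(n, k)
print(f"({n},{k}) RS code: {n - k} parity symbols, corrects up to {t} symbol errors")
# Burst-oriented decoding, as discussed in the abstract, trades part of this
# guaranteed random-error margin for longer correctable bursts, at the cost of
# a small (but nonzero) probability of miscorrection.
```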

  2. ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES

    SciTech Connect

    Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.; Xu, Jin; Connors, Alanna; Freeman, Peter E.; Zezas, Andreas

    2010-08-10

    A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits.
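
    The procedure sketched in the abstract (pick a detection threshold from the Type I error, then find the intensity whose detection probability meets the required power) can be written down compactly for a simple Poisson counting observation. The background level, alpha, and beta used below are arbitrary illustrative values, not the paper's.

```python
from scipy.stats import poisson

def detection_threshold(background, alpha=0.0013):
    """Smallest count n* with P(N >= n* | background) <= alpha (Type I error)."""
    n = 0
    while poisson.sf(n - 1, background) > alpha:   # sf(n - 1) = P(N >= n)
        n += 1
    return n

def upper_limit(background, alpha=0.0013, beta=0.5, step=0.01):
    """Smallest source intensity whose detection probability at the alpha-level
    threshold reaches 1 - beta (i.e., Type II error no larger than beta)."""
    n_star = detection_threshold(background, alpha)
    s = 0.0
    while poisson.sf(n_star - 1, background + s) < 1.0 - beta:
        s += step
    return n_star, s

n_star, s_lim = upper_limit(background=3.0, alpha=0.0013, beta=0.5)
print(f"threshold = {n_star} counts, upper limit ≈ {s_lim:.2f} expected source counts")
```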

  3. Chemotherapy medication errors in a pediatric cancer treatment center: prospective characterization of error types and frequency and development of a quality improvement initiative to lower the error rate.

    PubMed

    Watts, Raymond G; Parsons, Kerry

    2013-08-01

    Chemotherapy medication errors occur in all cancer treatment programs. Such errors have potential severe consequences: either enhanced toxicity or impaired disease control. Understanding and limiting chemotherapy errors are imperative. A multi-disciplinary team developed and implemented a prospective pharmacy surveillance system of chemotherapy prescribing and administration errors from 2008 to 2011 at a Children's Oncology Group-affiliated, pediatric cancer treatment program. Every chemotherapy order was prospectively reviewed for errors at the time of order submission. All chemotherapy errors were graded using standard error severity codes. Error rates were calculated by number of patient encounters and chemotherapy doses dispensed. Process improvement was utilized to develop techniques to minimize errors with a goal of zero errors reaching the patient. Over the duration of the study, more than 20,000 chemotherapy orders were reviewed. Error rates were low (6/1,000 patient encounters and 3.9/1,000 medications dispensed) at the start of the project and reduced by 50% to 3/1,000 patient encounters and 1.8/1,000 medications dispensed during the initiative. Error types included chemotherapy dosing or prescribing errors (42% of errors), treatment roadmap errors (26%), supportive care errors (15%), timing errors (12%), and pharmacy dispensing errors (4%). Ninety-two percent of errors were intercepted before reaching the patient. No error caused identified patient harm. Efforts to lower rates were successful but have not succeeded in preventing all errors. Chemotherapy medication errors are possibly unavoidable, but can be minimized by thoughtful, multispecialty review of current policies and procedures. Pediatr Blood Cancer 2013;60:1320-1324. © 2013 Wiley Periodicals, Inc.

  4. Baby-Crying Acceptance

    NASA Astrophysics Data System (ADS)

    Martins, Tiago; de Magalhães, Sérgio Tenreiro

    A baby's crying is its most important means of communication. Monitoring of crying by the devices developed so far does not, by itself, ensure the child's safety: these technological resources need to be combined with a means of communicating the results to the caregiver, which would involve digital processing of the information available in the cry. The survey carried out made it possible to gauge the level of adoption, in continental Portugal, of a technology able to perform such digital processing. The Technology Acceptance Model (TAM) was used as the theoretical framework. The statistical analysis showed that there is a good probability of acceptance of such a system.

  5. High acceptance recoil polarimeter

    SciTech Connect

    The HARP Collaboration

    1992-12-05

    In order to detect neutrons and protons in the 50 to 600 MeV energy range and measure their polarization, an efficient, low-noise, self-calibrating device is being designed. This detector, known as the High Acceptance Recoil Polarimeter (HARP), is based on the recoil principle of proton detection from np → n′p′ or pp → p′p′ scattering (detected particles are underlined) which intrinsically yields polarization information on the incoming particle. HARP will be commissioned to carry out experiments in 1994.

  6. Quantum error correction for metrology.

    PubMed

    Kessler, E M; Lovchinsky, I; Sushkov, A O; Lukin, M D

    2014-04-18

    We propose and analyze a new approach based on quantum error correction (QEC) to improve quantum metrology in the presence of noise. We identify the conditions under which QEC allows one to improve the signal-to-noise ratio in quantum-limited measurements, and we demonstrate that it enables, in certain situations, Heisenberg-limited sensitivity. We discuss specific applications to nanoscale sensing using nitrogen-vacancy centers in diamond in which QEC can significantly improve the measurement sensitivity and bandwidth under realistic experimental conditions.

  7. GCF HSD error control

    NASA Technical Reports Server (NTRS)

    Hung, C. K.

    1978-01-01

    A selective repeat automatic repeat request (ARQ) system was implemented under software control in the Ground Communications Facility error detection and correction (EDC) assembly at JPL and the comm monitor and formatter (CMF) assembly at the DSSs. The CMF and EDC significantly improved real time data quality and significantly reduced the post-pass time required for replay of blocks originally received in error. Since the remote mission operation centers (RMOCs) do not provide compatible error correction equipment, error correction will not be used on the RMOC-JPL high speed data (HSD) circuits. The real time error correction capability will correct error burst or outage of two loop-times or less for each DSS-JPL HSD circuit.

  8. Software error detection

    NASA Technical Reports Server (NTRS)

    Buechler, W.; Tucker, A. G.

    1981-01-01

    Several methods were employed to detect both the occurrence and source of errors in the operational software of the AN/SLQ-32, a large embedded real-time electronic warfare command and control system running on the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field size errors. Finally, data are saved to provide information about the status of the system when an error is detected. This information includes I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging problems, and segment simulation on a nontarget computer are discussed. These error detection techniques were a major factor in the success of finding the primary cause of error in 98% of over 500 system dumps.

  9. Error detection method

    DOEpatents

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
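
    A toy version of the idea in this patent abstract: run a deterministic, CPU-intensive kernel, then compare its output against a digest computed on trusted hardware; any mismatch during later runs points to a hardware-induced error. The kernel, digest scheme, and run counts are stand-ins chosen for illustration, not the patented algorithm.

```python
import hashlib

def stress_kernel(iterations: int = 200_000) -> int:
    """Deterministic, compute-heavy kernel meant to exercise (and heat) the CPU."""
    acc = 0
    for i in range(1, iterations):
        acc = (acc * 6364136223846793005 + i) & 0xFFFFFFFFFFFFFFFF
    return acc

def digest() -> str:
    """Digest of the kernel output, used as a compact comparison value."""
    return hashlib.sha256(str(stress_kernel()).encode()).hexdigest()

def count_mismatches(reference: str, runs: int = 5) -> int:
    """Re-run the kernel; any digest that differs from the trusted reference
    indicates a hardware error occurred somewhere during that run."""
    return sum(1 for _ in range(runs) if digest() != reference)

reference = digest()                 # computed once on known-good hardware
print("mismatches:", count_mismatches(reference, runs=3))
```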

  10. Medication errors: definitions and classification

    PubMed Central

    Aronson, Jeffrey K

    2009-01-01

    To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey–Lewis method (based on an understanding of theory and practice). A medication error is ‘a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient’. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is ‘a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient’. The converse of this, ‘balanced prescribing’ is ‘the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm’. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. A prescription error is ‘a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription’. The ‘normal features’ include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies. PMID:19594526

  11. Medication errors: definitions and classification.

    PubMed

    Aronson, Jeffrey K

    2009-06-01

    1. To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. 2. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey-Lewis method (based on an understanding of theory and practice). 3. A medication error is 'a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient'. 4. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is 'a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient'. The converse of this, 'balanced prescribing' is 'the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm'. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. 5. A prescription error is 'a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription'. The 'normal features' include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. 6. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies.

  12. [CIRRNET® - learning from errors, a success story].

    PubMed

    Frank, O; Hochreutener, M; Wiederkehr, P; Staender, S

    2012-06-01

    CIRRNET® is the network of local error-reporting systems of the Swiss Patient Safety Foundation. The network has been running since 2006 together with the Swiss Society for Anaesthesiology and Resuscitation (SGAR), and network participants currently include 39 healthcare institutions from all four different language regions of Switzerland. Further institutions can join at any time. Local error reports in CIRRNET® are bundled at a supraregional level, categorised in accordance with the WHO classification, and analysed by medical experts. The CIRRNET® database offers a solid pool of data with error reports from a wide range of medical specialist's areas and provides the basis for identifying relevant problem areas in patient safety. These problem areas are then processed in cooperation with specialists with extremely varied areas of expertise, and recommendations for avoiding these errors are developed by changing care processes (Quick-Alerts®). Having been approved by medical associations and professional medical societies, Quick-Alerts® are widely supported and well accepted in professional circles. The CIRRNET® database also enables any affiliated CIRRNET® participant to access all error reports in the 'closed user area' of the CIRRNET® homepage and to use these error reports for in-house training. A healthcare institution does not have to make every mistake itself - it can learn from the errors of others, compare notes with other healthcare institutions, and use existing knowledge to advance its own patient safety.

  13. Estimating Acceptability of Financial Health Incentives

    ERIC Educational Resources Information Center

    Bigsby, Elisabeth; Seitz, Holli H.; Halpern, Scott D.; Volpp, Kevin; Cappella, Joseph N.

    2017-01-01

    A growing body of evidence suggests that financial incentives can influence health behavior change, but research on the public acceptability of these programs and the factors that predict public support has been limited. A representative sample of U.S. adults (N = 526) were randomly assigned to receive an incentive program description in which the…

  14. Acceptability of Emission Offsets

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  15. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

    An error budget is a commonly used tool in design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high performance space systems.
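
    For readers unfamiliar with the mechanics of flowing requirements down, the usual arithmetic is a root-sum-square roll-up of (assumed independent) contributor allocations, checked against the top-level allowable error. The contributor names and numbers below are invented for illustration; they are not taken from the paper.

```python
import math

def rss_rollup(allocations):
    """Root-sum-square combination of independent error allocations."""
    return math.sqrt(sum(a ** 2 for a in allocations))

# Hypothetical wavefront-error budget flowed down to subassemblies (nm RMS).
budget = {"thermal drift": 15.0, "actuator quantization": 8.0,
          "sensor noise": 10.0, "model uncertainty": 12.0}
requirement = 25.0
total = rss_rollup(budget.values())
verdict = "meets" if total <= requirement else "exceeds"
print(f"rolled-up error {total:.1f} nm {verdict} the {requirement} nm allocation")
```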

  16. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal to noise needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
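
    As one concrete piece of the coding chain mentioned above, the sketch below implements a bitwise 16-bit CRC using the CCITT polynomial (x^16 + x^12 + x^5 + 1) with an all-ones initial value, a common convention for the CCSDS 16-bit CRC; the exact parameters for a given mission should be checked against the applicable CCSDS recommendation.

```python
def crc16(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bitwise CRC-16 over the CCITT polynomial, processed MSB first."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) if (crc & 0x8000) else (crc << 1)
            crc &= 0xFFFF
    return crc

frame = b"CCSDS test frame"
check = crc16(frame)
print(hex(check))
# The receiver recomputes the CRC over the received frame; a mismatch marks the
# frame as corrupted so it can be discarded or flagged for retransmission.
print(crc16(frame) == check, crc16(frame + b"!") == check)
```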

  17. Post-Error Adjustments

    PubMed Central

    Danielmeier, Claudia; Ullsperger, Markus

    2011-01-01

    When our brain detects an error, this process changes how we react on ensuing trials. People show post-error adaptations, potentially to improve their performance in the near future. At least three types of behavioral post-error adjustments have been observed. These are post-error slowing (PES), post-error reduction of interference, and post-error improvement in accuracy (PIA). Apart from these behavioral changes, post-error adaptations have also been observed on a neuronal level with functional magnetic resonance imaging and electroencephalography. Neuronal post-error adaptations comprise activity increase in task-relevant brain areas, activity decrease in distracter-encoding brain areas, activity modulations in the motor system, and mid-frontal theta power increases. Here, we review the current literature with respect to these post-error adjustments, discuss under which circumstances these adjustments can be observed, and whether the different types of adjustments are linked to each other. We also evaluate different approaches for explaining the functional role of PES. In addition, we report reanalyzed and follow-up data from a flanker task and a moving dots interference task showing (1) that PES and PIA are not necessarily correlated, (2) that PES depends on the response–stimulus interval, and (3) that PES is reliable on a within-subject level over periods as long as several months. PMID:21954390
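
    The traditional post-error slowing measure mentioned above is simply the mean reaction time on trials that follow an error minus the mean on trials that follow a correct response (more robust variants exist). The reaction times and accuracy flags below are fabricated for illustration.

```python
import numpy as np

def post_error_slowing(rt_ms, correct):
    """Mean RT after errors minus mean RT after correct responses."""
    rt = np.asarray(rt_ms, dtype=float)
    ok = np.asarray(correct, dtype=bool)
    post_error = rt[1:][~ok[:-1]]     # trials immediately following an error
    post_correct = rt[1:][ok[:-1]]    # trials immediately following a correct response
    return post_error.mean() - post_correct.mean()

rt_ms   = [412, 398, 455, 530, 420, 415, 498, 560, 430, 425]   # per-trial RT (ms)
correct = [  1,   1,   0,   1,   1,   1,   0,   1,   1,   1]   # 1 = correct
print(f"post-error slowing = {post_error_slowing(rt_ms, correct):.1f} ms")
```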

  18. Acceptance threshold theory can explain occurrence of homosexual behaviour

    PubMed Central

    Engel, Katharina C.; Männer, Lisa; Ayasse, Manfred; Steiger, Sandra

    2015-01-01

    Same-sex sexual behaviour (SSB) has been documented in a wide range of animals, but its evolutionary causes are not well understood. Here, we investigated SSB in the light of Reeve's acceptance threshold theory. When recognition is not error-proof, the acceptance threshold used by males to recognize potential mating partners should be flexibly adjusted to maximize the fitness pay-off between the costs of erroneously accepting males and the benefits of accepting females. By manipulating male burying beetles' search time for females and their reproductive potential, we influenced their perceived costs of making an acceptance or rejection error. As predicted, when the costs of rejecting females increased, males exhibited more permissive discrimination decisions and showed high levels of SSB; when the costs of accepting males increased, males were more restrictive and showed low levels of SSB. Our results support the idea that in animal species, in which the recognition cues of females and males overlap to a certain degree, SSB is a consequence of an adaptive discrimination strategy to avoid the costs of making rejection errors. PMID:25631226
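
    Reeve-style acceptance thresholds can be illustrated with a toy signal-detection calculation: if the recognition cue is normally distributed for females and males and the two error costs are specified, the threshold that minimizes expected cost shifts downward (more permissive, more same-sex matings) when rejecting females is costly, and upward when accepting males is costly. All distributions and costs below are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

def optimal_threshold(cost_reject_female, cost_accept_male,
                      mu_female=1.0, mu_male=0.0, sd=0.5, p_female=0.5):
    """Acceptance threshold on a recognition cue minimizing expected cost
    when female and male cue distributions overlap."""
    t = np.linspace(-2.0, 3.0, 2001)
    expected_cost = (cost_reject_female * p_female * norm.cdf(t, mu_female, sd)
                     + cost_accept_male * (1 - p_female) * norm.sf(t, mu_male, sd))
    return t[np.argmin(expected_cost)]

# Rejection errors costly (e.g., little time left to find a mate): permissive threshold.
print("permissive threshold:", round(optimal_threshold(5.0, 1.0), 2))
# Acceptance errors costly: restrictive (higher) threshold, fewer same-sex matings.
print("restrictive threshold:", round(optimal_threshold(1.0, 5.0), 2))
```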

  19. Acceptance threshold theory can explain occurrence of homosexual behaviour.

    PubMed

    Engel, Katharina C; Männer, Lisa; Ayasse, Manfred; Steiger, Sandra

    2015-01-01

    Same-sex sexual behaviour (SSB) has been documented in a wide range of animals, but its evolutionary causes are not well understood. Here, we investigated SSB in the light of Reeve's acceptance threshold theory. When recognition is not error-proof, the acceptance threshold used by males to recognize potential mating partners should be flexibly adjusted to maximize the fitness pay-off between the costs of erroneously accepting males and the benefits of accepting females. By manipulating male burying beetles' search time for females and their reproductive potential, we influenced their perceived costs of making an acceptance or rejection error. As predicted, when the costs of rejecting females increased, males exhibited more permissive discrimination decisions and showed high levels of SSB; when the costs of accepting males increased, males were more restrictive and showed low levels of SSB. Our results support the idea that in animal species, in which the recognition cues of females and males overlap to a certain degree, SSB is a consequence of an adaptive discrimination strategy to avoid the costs of making rejection errors.

  1. [Analysis of variance of bacterial counts in milk. 1. Characterization of total variance and the components of variance: random sampling error, methodologic error and variation between parallel samples during storage].

    PubMed

    Böhmer, L; Hildebrandt, G

    1998-01-01

    In contrast to the prevailing automatized chemical analytical methods, classical microbiological techniques are linked with considerable material- and human-dependent sources of errors. These effects must be objectively considered for assessing the reliability and representativeness of a test result. As an example for error analysis, the deviation of bacterial counts and the influence of the time of testing, bacterial species involved (total bacterial count, coliform count) and the detection method used (pour-/spread-plate) were determined in a repeated testing of parallel samples of pasteurized (stored for 8 days at 10 degrees C) and raw (stored for 3 days at 6 degrees C) milk. Separate characterization of deviation components, namely, unavoidable random sampling error as well as methodical error and variation between parallel samples, was made possible by means of a test design where variance analysis was applied. Based on the results of the study, the following conclusions can be drawn: 1. Immediately after filling, the total count deviation in milk mainly followed the POISSON-distribution model and allowed a reliable hygiene evaluation of lots even with few samples. Subsequently, regardless of the examination procedure used, the setting up of parallel dilution series can be disregarded. 2. With increasing storage period, bacterial multiplication especially of psychrotrophs leads to unpredictable changes in the bacterial profile and density. With the increase in errors between samples, it is common to find packages which have acceptable microbiological quality but are already spoiled by the time of the expiry date labeled. As a consequence, a uniform acceptance or rejection of the batch is seldom possible. 3. Because the contamination level of coliforms in certified raw milk mostly lies near the detection limit, coliform counts with high relative deviation are expected to be found in milk directly after filling. Since no bacterial multiplication takes place
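
    The Poisson assumption discussed in the abstract can be checked with a standard index-of-dispersion test on parallel plate counts: under pure random sampling error, (n - 1) * s^2 / mean is approximately chi-square distributed with n - 1 degrees of freedom. The plate counts below are invented for illustration.

```python
import numpy as np
from scipy.stats import chi2

def dispersion_test(counts):
    """Index-of-dispersion test against the Poisson (random sampling error) model."""
    c = np.asarray(counts, dtype=float)
    n = c.size
    d = (n - 1) * c.var(ddof=1) / c.mean()
    return d, chi2.sf(d, df=n - 1)

plates = [112, 98, 105, 121, 95, 108, 117, 101]   # parallel plates, same dilution
d, p = dispersion_test(plates)
print(f"dispersion = {d:.1f}, p = {p:.2f}")
# A large p-value means the spread is explainable by Poisson sampling error alone;
# a small p-value points to extra variation between parallels (e.g., after storage).
```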

  2. Empirical Tests of Acceptance Sampling Plans

    NASA Technical Reports Server (NTRS)

    White, K. Preston, Jr.; Johnson, Kenneth L.

    2012-01-01

    Acceptance sampling is a quality control procedure applied as an alternative to 100% inspection. A random sample of items is drawn from a lot to determine the fraction of items which have a required quality characteristic. Both the number of items to be inspected and the criterion for determining conformance of the lot to the requirement are given by an appropriate sampling plan with specified risks of Type I and Type II sampling errors. In this paper, we present the results of empirical tests of the accuracy of selected sampling plans reported in the literature. These plans are for measurable quality characteristics which are known to have either binomial, exponential, normal, gamma, Weibull, inverse Gaussian, or Poisson distributions. In the main, results support the accepted wisdom that variables acceptance plans are superior to attributes (binomial) acceptance plans, in the sense that these provide comparable protection against risks at reduced sampling cost. For the Gaussian and Weibull plans, however, there are ranges of the shape parameters for which the required sample sizes are in fact larger than the corresponding attributes plans, dramatically so for instances of large skew. Tests further confirm that the published inverse-Gaussian (IG) plan is flawed, as reported by White and Johnson (2011).
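
    The Type I / Type II trade-off for a single attributes (binomial) plan can be read off its operating-characteristic curve. The plan and quality levels below are arbitrary illustrative values, not those tested in the paper.

```python
from scipy.stats import binom

def prob_accept(n, c, p):
    """Probability of accepting a lot with defect fraction p:
    inspect n items and accept if at most c are nonconforming."""
    return binom.cdf(c, n, p)

n, c = 80, 2                  # hypothetical single sampling plan
good, bad = 0.01, 0.08        # illustrative 'acceptable' and 'rejectable' quality
producers_risk = 1 - prob_accept(n, c, good)   # Type I: rejecting a good lot
consumers_risk = prob_accept(n, c, bad)        # Type II: accepting a bad lot
print(f"P(reject | p = {good}) = {producers_risk:.3f}")
print(f"P(accept | p = {bad}) = {consumers_risk:.3f}")
```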

  3. Spaceborne scanner imaging system errors

    NASA Technical Reports Server (NTRS)

    Prakash, A.

    1982-01-01

    The individual sensor system design elements which are the a priori components in the registration and rectification process, and the potential impact of error budgets on multitemporal registration and side-lap registration are analyzed. The properties of scanner, MLA, and SAR imaging systems are reviewed. Each sensor displays internal distortion properties which to varying degrees make it difficult to generate an orthophoto projection of the data acceptable for multiple-pass registration or meeting national map accuracy standards and is also affected to varying degrees by relief displacements in moderate to hilly terrain. Nonsensor related distortions, associated with the accuracy of ephemeris determination and platform stability, have a major impact on local geometric distortions. Platform stability improvements expected from the new multi-mission spacecraft series and improved ephemeris and ground control point determination from the NAVSTAR/global positioning satellite systems are reviewed.

  4. Twenty Questions about Student Errors.

    ERIC Educational Resources Information Center

    Fisher, Kathleen M.; Lipson, Joseph Isaac

    1986-01-01

    Discusses the value of studying errors made by students in the process of learning science. Addresses 20 research questions dealing with student learning errors. Attempts to characterize errors made by students and clarify some terms used in error research. (TW)

  5. Everyday Scale Errors

    ERIC Educational Resources Information Center

    Ware, Elizabeth A.; Uttal, David H.; DeLoache, Judy S.

    2010-01-01

    Young children occasionally make "scale errors"--they attempt to fit their bodies into extremely small objects or attempt to fit a larger object into another, tiny, object. For example, a child might try to sit in a dollhouse-sized chair or try to stuff a large doll into it. Scale error research was originally motivated by parents' and…

  6. Refractive error blindness.

    PubMed Central

    Dandona, R.; Dandona, L.

    2001-01-01

    Recent data suggest that a large number of people are blind in different parts of the world due to high refractive error because they are not using appropriate refractive correction. Refractive error as a cause of blindness has been recognized only recently with the increasing use of presenting visual acuity for defining blindness. In addition to blindness due to naturally occurring high refractive error, inadequate refractive correction of aphakia after cataract surgery is also a significant cause of blindness in developing countries. Blindness due to refractive error in any population suggests that eye care services in general in that population are inadequate since treatment of refractive error is perhaps the simplest and most effective form of eye care. Strategies such as vision screening programmes need to be implemented on a large scale to detect individuals suffering from refractive error blindness. Sufficient numbers of personnel to perform reasonable quality refraction need to be trained in developing countries. Also adequate infrastructure has to be developed in underserved areas of the world to facilitate the logistics of providing affordable reasonable-quality spectacles to individuals suffering from refractive error blindness. Long-term success in reducing refractive error blindness worldwide will require attention to these issues within the context of comprehensive approaches to reduce all causes of avoidable blindness. PMID:11285669

  7. Teacher-Induced Errors.

    ERIC Educational Resources Information Center

    Richmond, Kent C.

    Students of English as a second language (ESL) often come to the classroom with little or no experience in writing in any language and with inaccurate assumptions about writing. Rather than correct these assumptions, teachers often seem to unwittingly reinforce them, actually inducing errors into their students' work. Teacher-induced errors occur…

  8. Learning from Errors

    ERIC Educational Resources Information Center

    Metcalfe, Janet

    2017-01-01

    Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…

  9. Definition of an Acceptable Glass composition Region (AGCR) via an Index System and a Partitioning Function

    SciTech Connect

    Peeler, D. K.; Taylor, A. S.; Edwards, T.B.

    2005-06-26

    error only reflects that the particular constraint system being used is overly conservative (i.e., its application restricts access to glasses that have an acceptable measured durability response). A Type II error results in a more serious misclassification that could result in allowing the transfer of a Slurry Mix Evaporator (SME) batch to the melter, which is predicted to produce a durable product based on the specific system applied but in reality does not meet the defined "acceptability" criteria. More specifically, a nondurable product could be produced in DWPF. Given the presence of Type II errors, the Index System approach was deemed inadequate for further implementation consideration at the DWPF. The second approach (the JMP partitioning process) was purely data driven and empirically derived--glass science was not a factor. In this approach, the collection of composition-durability data in ComPro was sequentially partitioned or split based on the best available specific criteria and variables. More specifically, the JMP software chose the oxide (Al₂O₃ for this dataset) that most effectively partitions the PCT responses (NL [B]'s)--perhaps not 100% effective based on a single oxide. Based on this initial split, a second request was made to split a particular set of the "Y" values (good or bad PCTs based on the 10 g/L limit) based on the next most critical "X" variable. This "splitting" or "partitioning" process was repeated until an AGCR was defined based on the use of only 3 oxides (Al₂O₃, CaO, and MgO) and critical values of > 3.75 wt% Al₂O₃, ≥ 0.616 wt% CaO, and < 3.521 wt% MgO. Using this set of criteria, the ComPro database was partitioned in which no Type II errors were committed. The automated partitioning function screened or removed 978 of the 2406 ComPro glasses which did cause some initial concerns regarding excessive conservatism regardless of its ability to identify an AGCR. However, a preliminary
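
    The recursive partitioning described above can be emulated with any standard classification-tree routine: each split picks the oxide and cut point that best separates acceptable from unacceptable durability responses. The sketch below uses scikit-learn on synthetic stand-in data (the real ComPro compositions and PCT responses are not reproduced here), so the recovered cut points only echo the thresholds built into the fake labels.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
# Synthetic stand-in for composition data: Al2O3, CaO, MgO concentrations (wt%).
X = rng.uniform(low=[0.0, 0.0, 0.0], high=[12.0, 2.0, 6.0], size=(500, 3))
# Fake durability label (1 = acceptable PCT response) built from simple thresholds.
y = ((X[:, 0] > 3.75) & (X[:, 1] >= 0.6) & (X[:, 2] < 3.5)).astype(int)

# Recursive partitioning: each split chooses the variable and cut point that
# best separates acceptable from unacceptable glasses.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["Al2O3", "CaO", "MgO"]))
```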

  10. Psychometric Evaluation of a Treatment Acceptance Measure for Use in Patients Receiving Treatment via Subcutaneous Injection.

    PubMed

    Tatlock, Sophi; Arbuckle, Rob; Sanchez, Robert; Grant, Laura; Khan, Irfan; Manvelian, Garen; Spertus, John A

    2017-03-01

    Alirocumab, a proprotein convertase subtilisin/kexin type 9 inhibitor, significantly reduces low-density lipoprotein cholesterol, but requires subcutaneous injections rather than oral pills. To measure patients' acceptance of this treatment modality, a new patient-reported outcome, the Injection-Treatment Acceptance Questionnaire (I-TAQ), was developed. To psychometrically evaluate the I-TAQ with patients at high risk of cardiovascular events receiving alirocumab. The 22-item, 5-domain I-TAQ was administered cross-sectionally to 151 patients enrolled in alirocumab clinical trials. Item response distributions, factor and multitrait analyses, interitem correlations, correlations with an existing measure of acceptance (convergent validity), and comparison of known-groups were performed to assess the I-TAQ's psychometric properties. Completion rates were high, with no patients missing more than two items and 91.4% missing no data. All items displayed high ceiling effects (>30%) because of high treatment acceptance. Factor analysis supported the a priori hypothesized item-domain structure with good fit indices (root mean square error approximation = 0.070; comparative fit index = 0.988) and high factor loadings. All items demonstrated item convergent validity (item-scale correlation ≥0.40), except for the side effects domain, which was limited by small numbers (n = 46). Almost all items correlated most highly with the domain to which they were assigned (item discriminant validity). Internal reliability was acceptable for all domains (Cronbach α range 0.72-0.88) and convergent validity was supported by a logical pattern of correlations with the Chronic Treatment Acceptance Questionnaire. These findings provide initial evidence of validity and reliability for the I-TAQ in patients treated with subcutaneous alirocumab. The I-TAQ could prove to be a valuable patient-reported outcome for therapies requiring subcutaneous injection. Copyright © 2017 International Society
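
    One of the reliability statistics quoted above, Cronbach's alpha, is straightforward to compute from an item-score matrix; the response data below are fabricated and the function is a generic textbook formula rather than anything specific to the I-TAQ analysis.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    s = np.asarray(scores, dtype=float)
    k = s.shape[1]
    item_variances = s.var(axis=0, ddof=1).sum()
    total_variance = s.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Hypothetical 1-5 responses from six respondents to a four-item domain.
scores = [[4, 5, 4, 4],
          [3, 3, 4, 3],
          [5, 5, 5, 4],
          [2, 3, 2, 3],
          [4, 4, 5, 5],
          [3, 4, 3, 3]]
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```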

  11. The incidence of diagnostic error in medicine

    PubMed Central

    Graber, Mark L

    2013-01-01

    A wide variety of research studies suggest that breakdowns in the diagnostic process result in a staggering toll of harm and patient deaths. These include autopsy studies, case reviews, surveys of patient and physicians, voluntary reporting systems, using standardised patients, second reviews, diagnostic testing audits and closed claims reviews. Although these different approaches provide important information and unique insights regarding diagnostic errors, each has limitations and none is well suited to establishing the incidence of diagnostic error in actual practice, or the aggregate rate of error and harm. We argue that being able to measure the incidence of diagnostic error is essential to enable research studies on diagnostic error, and to initiate quality improvement projects aimed at reducing the risk of error and harm. Three approaches appear most promising in this regard: (1) using ‘trigger tools’ to identify from electronic health records cases at high risk for diagnostic error; (2) using standardised patients (secret shoppers) to study the rate of error in practice; (3) encouraging both patients and physicians to voluntarily report errors they encounter, and facilitating this process. PMID:23771902

  12. Evaluating mixed samples as a source of error in non-invasive genetic studies using microsatellites

    USGS Publications Warehouse

    Roon, David A.; Thomas, M.E.; Kendall, K.C.; Waits, L.P.

    2005-01-01

    The use of noninvasive genetic sampling (NGS) for surveying wild populations is increasing rapidly. Currently, only a limited number of studies have evaluated potential biases associated with NGS. This paper evaluates the potential errors associated with analysing mixed samples drawn from multiple animals. Most NGS studies assume that mixed samples will be identified and removed during the genotyping process. We evaluated this assumption by creating 128 mixed samples of extracted DNA from brown bear (Ursus arctos) hair samples. These mixed samples were genotyped and screened for errors at six microsatellite loci according to protocols consistent with those used in other NGS studies. Five mixed samples produced acceptable genotypes after the first screening. However, all mixed samples produced multiple alleles at one or more loci, amplified as only one of the source samples, or yielded inconsistent electropherograms by the final stage of the error-checking process. These processes could potentially reduce the number of individuals observed in NGS studies, but errors should be conservative within demographic estimates. Researchers should be aware of the potential for mixed samples and carefully design gel analysis criteria and error checking protocols to detect mixed samples.

  13. The concept of error and malpractice in radiology.

    PubMed

    Pinto, Antonio; Brunese, Luca; Pinto, Fabio; Reali, Riccardo; Daniele, Stefania; Romano, Luigia

    2012-08-01

    Since the early 1970s, physicians have been subjected to an increasing number of medical malpractice claims. Radiology is one of the specialties most liable to claims of medical negligence. The etiology of radiological error is multifactorial. Errors fall into recurrent patterns. Errors arise from poor technique, failures of perception, lack of knowledge, and misjudgments. Every radiologist should understand the sources of error in diagnostic radiology as well as the elements of negligence that form the basis of malpractice litigation. Errors are an inevitable part of human life, and every health professional has made mistakes. To improve patient safety and reduce the risk from harm, we must accept that some errors are inevitable during the delivery of health care. We must pursue a cultural change in medicine, wherein errors are actively sought, openly discussed, and aggressively addressed. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. Entropic error-disturbance relations

    NASA Astrophysics Data System (ADS)

    Coles, Patrick; Furrer, Fabian

    2014-03-01

    We derive an entropic error-disturbance relation for a sequential measurement scenario as originally considered by Heisenberg, and we discuss how our relation could be tested using existing experimental setups. Our relation is valid for discrete observables, such as spin, as well as continuous observables, such as position and momentum. The novel aspect of our relation compared to earlier versions is its clear operational interpretation and the quantification of error and disturbance using entropic quantities. This directly relates the measurement uncertainty, a fundamental property of quantum mechanics, to information-theoretical limitations and offers potential applications in, for instance, quantum cryptography. PC is funded by National Research Foundation Singapore and Ministry of Education Tier 3 Grant ``Random numbers from quantum processes'' (MOE2012-T3-1-009). FF is funded by Japan Society for the Promotion of Science, KAKENHI grant No. 24-02793.

  15. System contributions to error.

    PubMed

    Adams, J G; Bohan, J S

    2000-11-01

    An unacceptably high rate of medical error occurs in the emergency department (ED). Professional accountability requires that EDs be managed to systematically eliminate error. This requires advocacy and leadership at every level of the specialty and at each institution in order to be effective and sustainable. At the same time, the significant operational challenges that face the ED, such as excessive patient care requirements, should be recognized if error reduction efforts are to remain credible. Proper staffing levels, for example, are an important prerequisite if medical error is to be minimized. Even at times of low volume, however, medical error is probably common. Engineering human factors and operational procedures, promoting team coordination, and standardizing care processes can decrease error and are strongly promoted. Such efforts should be coupled to systematic analysis of errors that occur. Reliable reporting is likely only if the system is based within the specialty to help ensure proper analysis and decrease threat. Ultimate success will require dedicated effort, continued advocacy, and promotion of research.

  16. Refractive errors and schizophrenia.

    PubMed

    Caspi, Asaf; Vishne, Tali; Reichenberg, Abraham; Weiser, Mark; Dishon, Ayelet; Lubin, Gadi; Shmushkevitz, Motti; Mandel, Yossi; Noy, Shlomo; Davidson, Michael

    2009-02-01

    Refractive errors (myopia, hyperopia and amblyopia), like schizophrenia, have a strong genetic cause, and dopamine has been proposed as a potential mediator in their pathophysiology. The present study explored the association between refractive errors in adolescence and schizophrenia, and the potential familiality of this association. The Israeli Draft Board carries a mandatory standardized visual accuracy assessment. 678,674 males consecutively assessed by the Draft Board and found to be psychiatrically healthy at age 17 were followed for psychiatric hospitalization with schizophrenia using the Israeli National Psychiatric Hospitalization Case Registry. Sib-ships were also identified within the cohort. There was a negative association between refractive errors and later hospitalization for schizophrenia. Future male schizophrenia patients were two times less likely to have refractive errors compared with never-hospitalized individuals, controlling for intelligence, years of education and socioeconomic status [adjusted Hazard Ratio=.55; 95% confidence interval .35-.85]. The non-schizophrenic male siblings of schizophrenia patients also had lower prevalence of refractive errors compared to never-hospitalized individuals. Presence of refractive errors in adolescence is related to lower risk for schizophrenia. The familiality of this association suggests that refractive errors may be associated with the genetic liability to schizophrenia.

  17. The Errors of Our Ways: Understanding Error Representations in Cerebellar-Dependent Motor Learning

    PubMed Central

    Popa, Laurentiu S.; Streng, Martha L.; Hewitt, Angela L.; Ebner, Timothy J.

    2015-01-01

    The cerebellum is essential for error-driven motor learning and is strongly implicated in detecting and correcting for motor errors. Therefore, elucidating how motor errors are represented in the cerebellum is essential in understanding cerebellar function, in general, and its role in motor learning, in particular. This review examines how motor errors are encoded in the cerebellar cortex in the context of a forward internal model that generates predictions about the upcoming movement and drives learning and adaptation. In this framework, sensory prediction errors, defined as the discrepancy between the predicted consequences of motor commands and the sensory feedback, are crucial for both on-line movement control and motor learning. While many studies support the dominant view that motor errors are encoded in the complex spike discharge of Purkinje cells, others have failed to relate complex spike activity with errors. Given these limitations, we review recent findings in the monkey showing that complex spike modulation is not necessarily required for motor learning or for simple spike adaptation. Also, new results demonstrate that the simple spike discharge provides continuous error signals that both lead and lag the actual movements in time, suggesting errors are encoded as both an internal prediction of motor commands and the actual sensory feedback. These dual error representations have opposing effects on simple spike discharge, consistent with the signals needed to generate sensory prediction errors used to update a forward internal model. PMID:26112422

  18. Understanding and Confronting Our Mistakes: The Epidemiology of Error in Radiology and Strategies for Error Reduction.

    PubMed

    Bruno, Michael A; Walker, Eric A; Abujudeh, Hani H

    2015-10-01

    Arriving at a medical diagnosis is a highly complex process that is extremely error prone. Missed or delayed diagnoses often lead to patient harm and missed opportunities for treatment. Since medical imaging is a major contributor to the overall diagnostic process, it is also a major potential source of diagnostic error. Although some diagnoses may be missed because of the technical or physical limitations of the imaging modality, including image resolution, intrinsic or extrinsic contrast, and signal-to-noise ratio, most missed radiologic diagnoses are attributable to image interpretation errors by radiologists. Radiologic interpretation cannot be mechanized or automated; it is a human enterprise based on complex psychophysiologic and cognitive processes and is itself subject to a wide variety of error types, including perceptual errors (those in which an important abnormality is simply not seen on the images) and cognitive errors (those in which the abnormality is visually detected but the meaning or importance of the finding is not correctly understood or appreciated). The overall prevalence of radiologists' errors in practice does not appear to have changed since it was first estimated in the 1960s. The authors review the epidemiology of errors in diagnostic radiology, including a recently proposed taxonomy of radiologists' errors, as well as research findings, in an attempt to elucidate possible underlying causes of these errors. The authors also propose strategies for error reduction in radiology. On the basis of current understanding, specific suggestions are offered as to how radiologists can improve their performance in practice.

  19. The Errors of Our Ways: Understanding Error Representations in Cerebellar-Dependent Motor Learning.

    PubMed

    Popa, Laurentiu S; Streng, Martha L; Hewitt, Angela L; Ebner, Timothy J

    2016-04-01

    The cerebellum is essential for error-driven motor learning and is strongly implicated in detecting and correcting for motor errors. Therefore, elucidating how motor errors are represented in the cerebellum is essential in understanding cerebellar function, in general, and its role in motor learning, in particular. This review examines how motor errors are encoded in the cerebellar cortex in the context of a forward internal model that generates predictions about the upcoming movement and drives learning and adaptation. In this framework, sensory prediction errors, defined as the discrepancy between the predicted consequences of motor commands and the sensory feedback, are crucial for both on-line movement control and motor learning. While many studies support the dominant view that motor errors are encoded in the complex spike discharge of Purkinje cells, others have failed to relate complex spike activity with errors. Given these limitations, we review recent findings in the monkey showing that complex spike modulation is not necessarily required for motor learning or for simple spike adaptation. Also, new results demonstrate that the simple spike discharge provides continuous error signals that both lead and lag the actual movements in time, suggesting errors are encoded as both an internal prediction of motor commands and the actual sensory feedback. These dual error representations have opposing effects on simple spike discharge, consistent with the signals needed to generate sensory prediction errors used to update a forward internal model.

  20. Quantum Error Correction for Metrology

    NASA Astrophysics Data System (ADS)

    Sushkov, Alex; Kessler, Eric; Lovchinsky, Igor; Lukin, Mikhail

    2014-05-01

    The question of the best achievable sensitivity in a quantum measurement is of great experimental relevance, and has seen a lot of attention in recent years. Recent studies [e.g., Nat. Phys. 7, 406 (2011), Nat. Comms. 3, 1063 (2012)] suggest that in most generic scenarios any potential quantum gain (e.g. through the use of entangled states) vanishes in the presence of environmental noise. To overcome these limitations, we propose and analyze a new approach to improve quantum metrology based on quantum error correction (QEC). We identify the conditions under which QEC allows one to improve the signal-to-noise ratio in quantum-limited measurements, and we demonstrate that it enables, in certain situations, Heisenberg-limited sensitivity. We discuss specific applications to nanoscale sensing using nitrogen-vacancy centers in diamond in which QEC can significantly improve the measurement sensitivity and bandwidth under realistic experimental conditions.

  1. Error Prevention Aid

    NASA Technical Reports Server (NTRS)

    1987-01-01

    In a complex computer environment there is ample opportunity for error: a mistake by a programmer, or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect customer service or profitability.
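
    The record does not describe how EQNINT performs its cross-check, so the following is only a toy illustration of the general idea of comparing coded formulas against reference formulas: two formula strings are judged consistent when their symbolic difference simplifies to zero. The formula strings, variable names and function name are hypothetical.

    ```python
    from sympy import simplify, sympify

    def formulas_match(coded: str, reference: str) -> bool:
        """Toy cross-check: formulas agree if their symbolic difference simplifies to zero."""
        return simplify(sympify(coded) - sympify(reference)) == 0

    # Hypothetical premium formula: version coded in a program vs. the production-system reference
    coded     = "base*(1 + rate)**years - loading"
    reference = "base*(rate + 1)**years - loading"
    wrong     = "base*(1 + rate*years) - loading"
    print(formulas_match(coded, reference))  # True  -> algebraically identical
    print(formulas_match(wrong, reference))  # False -> would be flagged before release
    ```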

  2. Cognitive illusions of authorship reveal hierarchical error detection in skilled typists.

    PubMed

    Logan, Gordon D; Crump, Matthew J C

    2010-10-29

    The ability to detect errors is an essential component of cognitive control. Studies of error detection in humans typically use simple tasks and propose single-process theories of detection. We examined error detection by skilled typists and found illusions of authorship that provide evidence for two error-detection processes. We corrected errors that typists made and inserted errors in correct responses. When asked to report errors, typists took credit for corrected errors and accepted blame for inserted errors, claiming authorship for the appearance of the screen. However, their typing rate showed no evidence of these illusions, slowing down after corrected errors but not after inserted errors. This dissociation suggests two error-detection processes: one sensitive to the appearance of the screen and the other sensitive to keystrokes.

  3. Quantum error correction via less noisy qubits.

    PubMed

    Fujiwara, Yuichiro

    2013-04-26

    Known quantum error correction schemes are typically able to take advantage of only a limited class of classical error-correcting codes. Entanglement-assisted quantum error correction is a partial solution which made it possible to exploit any classical linear codes over the binary or quaternary finite field. However, the known entanglement-assisted scheme requires noiseless qubits that help correct quantum errors on noisy qubits, which can be too severe an assumption. We prove that a more relaxed and realistic assumption is sufficient by presenting encoding and decoding operations assisted by qubits on which quantum errors of one particular kind may occur. As in entanglement assistance, our scheme can import any binary or quaternary linear codes. If the auxiliary qubits are noiseless, our codes become entanglement-assisted codes, and saturate the quantum Singleton bound when the underlying classical codes are maximum distance separable.
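
    For reference, the quantum Singleton bound mentioned above states that any [[n, k, d]] quantum code must satisfy

    ```latex
    k \le n - 2(d - 1),
    ```

    and a code that meets this bound with equality is quantum maximum distance separable, which is the saturation case referred to in the abstract.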

  4. Identifying subset errors in multiple sequence alignments.

    PubMed

    Roy, Aparna; Taddese, Bruck; Vohra, Shabana; Thimmaraju, Phani K; Illingworth, Christopher J R; Simpson, Lisa M; Mukherjee, Keya; Reynolds, Christopher A; Chintapalli, Sree V

    2014-01-01

    Multiple sequence alignment (MSA) accuracy is important, but there is no widely accepted method of judging the accuracy that different alignment algorithms give. We present a simple approach to detecting two types of error, namely block shifts and the misplacement of residues within a gap. Given an MSA, subsets of very similar sequences are generated through the use of a redundancy filter, typically using a 70-90% sequence identity cut-off. Subsets thus produced are typically small and degenerate, and errors can be easily detected even by manual examination. The errors, albeit minor, are inevitably associated with gaps in the alignment, and so the procedure is particularly relevant to homology modelling of protein loop regions. The usefulness of the approach is illustrated in the context of the universal but little known [K/R]KLH motif that occurs in intracellular loop 1 of G protein-coupled receptors (GPCRs); other issues relevant to GPCR modelling are also discussed.
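
    A minimal sketch of the redundancy-filter step described above: compute pairwise percent identity over aligned columns and group sequences whose identity to a subset's seed exceeds the cut-off. The greedy grouping, the toy alignment and the 80% cut-off are illustrative assumptions, not necessarily the exact procedure used by the authors.

    ```python
    def percent_identity(a, b):
        """Percent identity between two aligned sequences (gap '-' positions never count as matches)."""
        matches = sum(1 for x, y in zip(a, b) if x == y and x != "-")
        return 100.0 * matches / len(a)

    def redundant_subsets(msa, cutoff=80.0):
        """Greedily group sequences whose identity to a subset's first member exceeds the cut-off."""
        subsets = []
        for name, seq in msa.items():
            for subset in subsets:
                if percent_identity(seq, msa[subset[0]]) >= cutoff:
                    subset.append(name)
                    break
            else:
                subsets.append([name])
        return subsets

    # Toy gapped alignment (all rows the same length)
    msa = {
        "seq1": "MKT-LLVAAG",
        "seq2": "MKT-LLVASG",
        "seq3": "MRSPLIVA-G",
        "seq4": "MKT-LLVAAG",
    }
    print(redundant_subsets(msa, cutoff=80.0))  # [['seq1', 'seq2', 'seq4'], ['seq3']]
    ```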

  5. A survey of physicians' acceptance of telemedicine.

    PubMed

    Sheng, O R; Hu, P J; Chau, P Y; Hjelm, N M; Tam, K Y; Wei, C P; Tse, J

    1998-01-01

    Physicians' acceptance of telemedicine is an important managerial issue facing health-care organizations that have adopted, or are about to adopt, telemedicine. Most previous investigations of the acceptance of telemedicine have lacked theoretical foundation and been of limited scope. We examined technology acceptance and usage among physicians and specialists from 49 clinical departments at eight public tertiary hospitals in Hong Kong. Out of the 1021 questionnaires distributed, 310 were completed and returned, a 30% response rate. The preliminary findings suggested that use of telemedicine among clinicians in Hong Kong was moderate. While 18% of the respondents were using some form of telemedicine for patient care and management, it accounted for only 6.3% of the services provided. The intensity of their technology usage was also low, accounting for only 6.8% of a typical telemedicine-assisted service. These preliminary findings have managerial implications.

  6. Sonic boom acceptability studies

    NASA Technical Reports Server (NTRS)

    Shepherd, Kevin P.; Sullivan, Brenda M.; Leatherwood, Jack D.; Mccurdy, David A.

    1992-01-01

    The determination of the magnitude of sonic boom exposure which would be acceptable to the general population requires, as a starting point, a method to assess and compare individual sonic booms. There is no consensus within the scientific and regulatory communities regarding an appropriate sonic boom assessment metric. Loudness, being a fundamental and well-understood attribute of human hearing, was chosen as a means of comparing sonic booms of differing shapes and amplitudes. The figure illustrates the basic steps which yield a calculated value of loudness. Based upon the aircraft configuration and its operating conditions, the sonic boom pressure signature which reaches the ground is calculated. This pressure-time history is transformed to the frequency domain and converted into a one-third octave band spectrum. The essence of the loudness method is to account for the frequency response and integration characteristics of the auditory system. The result of the calculation procedure is a numerical description (perceived level, dB) which represents the loudness of the sonic boom waveform.
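
    A minimal numerical sketch of the pipeline described above: pressure-time history, spectrum, one-third-octave bands, then a single weighted level. The band weighting here is a flat placeholder rather than an actual loudness model (e.g., a perceived-level calculation), and the N-wave parameters and function names are assumptions, so the printed value is illustrative only.

    ```python
    import numpy as np

    def third_octave_levels(p, fs):
        """Sketch: one-third-octave band levels (dB, arbitrary reference) of a pressure signature p sampled at fs Hz."""
        f_centers = 1000.0 * (2.0 ** (np.arange(-24, 14) / 3.0))   # nominal centers, ~4 Hz to ~20 kHz
        f_centers = f_centers[f_centers < fs / 2]                  # keep bands below Nyquist
        spec = np.fft.rfft(p * np.hanning(len(p)))
        freqs = np.fft.rfftfreq(len(p), d=1.0 / fs)
        power = np.abs(spec) ** 2
        levels = []
        for fc in f_centers:
            lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)          # band edges
            levels.append(10 * np.log10(power[(freqs >= lo) & (freqs < hi)].sum() + 1e-30))
        return f_centers, np.array(levels)

    def perceived_level_placeholder(band_levels_db, weights_db=None):
        """Combine band levels into one number; the flat weighting is a stand-in for a real loudness model."""
        weights_db = np.zeros_like(band_levels_db) if weights_db is None else weights_db
        return 10 * np.log10(np.sum(10 ** ((band_levels_db + weights_db) / 10)))

    # Idealized N-wave boom signature with assumed peak overpressure and duration
    fs, dur, peak = 8000, 0.4, 50.0                 # sample rate (Hz), record length (s), peak (Pa)
    t = np.arange(int(fs * dur)) / fs
    nwave = np.where(t < 0.3, peak * (1 - 2 * t / 0.3), 0.0)
    _, levels = third_octave_levels(nwave, fs)
    print(round(perceived_level_placeholder(levels), 1), "dB (illustrative only)")
    ```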

  7. Effects of Error Experience When Learning to Simulate Hypernasality

    ERIC Educational Resources Information Center

    Wong, Andus W.-K.; Tse, Andy C.-Y.; Ma, Estella P.-M.; Whitehill, Tara L.; Masters, Rich S. W.

    2013-01-01

    Purpose: The purpose of this study was to evaluate the effects of error experience on the acquisition of hypernasal speech. Method: Twenty-eight healthy participants were asked to simulate hypernasality in either an "errorless learning" condition (in which the possibility for errors was limited) or an "errorful learning"…

  8. Increasing sensing resolution with error correction.

    PubMed

    Arrad, G; Vinkler, Y; Aharonov, D; Retzker, A

    2014-04-18

    The signal to noise ratio of quantum sensing protocols scales with the square root of the coherence time. Thus, increasing this time is a key goal in the field. By utilizing quantum error correction, we present a novel way of prolonging such coherence times beyond the fundamental limits of current techniques. We develop an implementable sensing protocol that incorporates error correction, and discuss the characteristics of these protocols in different noise and measurement scenarios. We examine the use of entangled versus unentangled states, and whether error correction can reach the Heisenberg limit. The effects of error correction on coherence times are calculated and we show that measurement precision can be enhanced for both one-directional and general noise.

  9. Meanings and implications of acceptability judgements for wilderness use impacts

    Treesearch

    Amy F. Hoss; Mark W. Brunson

    2000-01-01

    While the concept of “acceptability” is central to the Limits of Acceptable Change (LAC) framework, there is inadequate understanding of how “acceptability” is judged and how unacceptable conditions affect visitor experiences. To address this knowledge gap, visitors to nine wilderness areas were interviewed. Judgments of social and environmental conditions fell into...

  10. Estimating Bias Error Distributions

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Finley, Tom D.

    2001-01-01

    This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are virtually applicable to any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

  11. EMS -- Error Message Service

    NASA Astrophysics Data System (ADS)

    Rees, P. C. T.; Chipperfield, A. J.; Draper, P. W.

    This document describes the Error Message Service, EMS, and its use in system software. The purpose of EMS is to provide facilities for constructing and storing error messages for future delivery to the user -- usually via the Starlink Error Reporting System, ERR (see SUN/104). EMS can be regarded as a simplified version of ERR without the binding to any software environment (e.g., for message output or access to the parameter and data systems). The routines in this library conform to the error reporting conventions described in SUN/104. A knowledge of these conventions, and of the ADAM system (see SG/4), is assumed in what follows. This document is intended for Starlink systems programmers and can safely be ignored by applications programmers and users.

  12. The surveillance error grid.

    PubMed

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  13. Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Brun, Todd A.

    2013-09-01

    Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and

  14. Intertester agreement in refractive error measurements.

    PubMed

    Huang, Jiayan; Maguire, Maureen G; Ciner, Elise; Kulp, Marjean T; Quinn, Graham E; Orel-Bixler, Deborah; Cyert, Lynn A; Moore, Bruce; Ying, Gui-Shuang

    2013-10-01

    To determine the intertester agreement of refractive error measurements between lay and nurse screeners using the Retinomax Autorefractor and the SureSight Vision Screener. Trained lay and nurse screeners measured refractive error in 1452 preschoolers (3 to 5 years old) using the Retinomax and the SureSight in a random order for screeners and instruments. Intertester agreement between lay and nurse screeners was assessed for sphere, cylinder, and spherical equivalent (SE) using the mean difference and the 95% limits of agreement. The mean intertester difference (lay minus nurse) was compared between groups defined based on the child's age, cycloplegic refractive error, and the reading's confidence number using analysis of variance. The limits of agreement were compared between groups using the Brown-Forsythe test. Intereye correlation was accounted for in all analyses. The mean intertester differences (95% limits of agreement) were -0.04 (-1.63, 1.54) diopter (D) sphere, 0.00 (-0.52, 0.51) D cylinder, and -0.04 (-1.65, 1.56) D SE for the Retinomax and 0.05 (-1.48, 1.58) D sphere, 0.01 (-0.58, 0.60) D cylinder, and 0.06 (-1.45, 1.57) D SE for the SureSight. For either instrument, the mean intertester differences in sphere and SE did not differ by the child's age, cycloplegic refractive error, or the reading's confidence number. However, for both instruments, the limits of agreement were wider when eyes had significant refractive error or the reading's confidence number was below the manufacturer's recommended value. Among Head Start preschool children, trained lay and nurse screeners agree well in measuring refractive error using the Retinomax or the SureSight. Both instruments had similar intertester agreement in refractive error measurements independent of the child's age. Significant refractive error and a reading with low confidence number were associated with worse intertester agreement.
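
    The agreement statistics quoted above follow the standard mean-difference and 95% limits-of-agreement form (mean +/- 1.96 SD of the paired differences). A minimal sketch follows; the readings are made up, and the sketch ignores the intereye correlation that the study accounted for.

    ```python
    import numpy as np

    def limits_of_agreement(x, y):
        """Mean intertester difference and 95% limits of agreement (mean +/- 1.96 SD of paired differences)."""
        d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
        mean_d, sd_d = d.mean(), d.std(ddof=1)
        return mean_d, (mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d)

    # Hypothetical sphere readings (diopters) from a lay screener and a nurse screener
    lay   = [1.25, -0.50, 0.75, 2.00, -1.25, 0.25]
    nurse = [1.00, -0.75, 1.00, 2.25, -1.00, 0.50]
    mean_diff, (lo, hi) = limits_of_agreement(lay, nurse)
    print(f"mean difference {mean_diff:+.2f} D, 95% limits of agreement ({lo:+.2f}, {hi:+.2f}) D")
    ```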

  15. Phasing piston error in segmented telescopes.

    PubMed

    Jiang, Junlun; Zhao, Weirui

    2016-08-22

    To achieve diffraction-limited imaging, the piston errors between the segments of the segmented primary mirror telescope should be reduced to λ/40 RMS. We propose a method to detect the piston error by analyzing the intensity distribution on the image plane according to the Fourier optics principle, which can capture segments with piston errors as large as the coherence length of the input light and reduce these to 0.026λ RMS (λ = 633 nm). This method is adaptable to any segmented and deployable primary mirror telescope. Experiments have been carried out to validate the feasibility of the method.

  16. Automatically generated acceptance test: A software reliability experiment

    NASA Technical Reports Server (NTRS)

    Protzel, Peter W.

    1988-01-01

    This study presents results of a software reliability experiment investigating the feasibility of a new error detection method. The method can be used as an acceptance test and is solely based on empirical data about the behavior of internal states of a program. The experimental design uses the existing environment of a multi-version experiment previously conducted at the NASA Langley Research Center, in which the launch interceptor problem is used as a model. This allows the controlled experimental investigation of versions with well-known single and multiple faults, and the availability of an oracle permits the determination of the error detection performance of the test. Fault interaction phenomena are observed that have an amplifying effect on the number of error occurrences. Preliminary results indicate that all faults examined so far are detected by the acceptance test. This shows promise for further investigations, and for the employment of this test method on other applications.

  17. Scanner qualification with IntenCD based reticle error correction

    NASA Astrophysics Data System (ADS)

    Elblinger, Yair; Finders, Jo; Demarteau, Marcel; Wismans, Onno; Minnaert Janssen, Ingrid; Duray, Frank; Ben Yishai, Michael; Mangan, Shmoolik; Cohen, Yaron; Parizat, Ziv; Attal, Shay; Polonsky, Netanel; Englard, Ilan

    2010-03-01

    Scanner introduction into the fab production environment is a challenging task. Efficient evaluation of scanner performance metrics during the factory acceptance test (FAT) and later during the site acceptance test (SAT) is crucial for minimizing the cycle time of pre- and post-production-start activities. If done effectively, the baseline performance metrics established during the SAT are used as a reference for scanner performance and fleet-matching monitoring and maintenance in the fab environment. Key elements which can influence the cycle time of the SAT, FAT and maintenance cycles are the imaging, process and mask characterizations involved in those cycles. Discrete mask measurement techniques are currently in use to create across-mask CDU maps. By subtracting these maps from their final wafer measurement CDU map counterparts, it is possible to assess the real scanner-induced printed errors, within certain limitations. The current discrete measurement methods are time consuming, and some techniques also overlook mask-based effects other than line width variations, such as transmission and phase variations, all of which influence the final printed CD variability. The Applied Materials Aera2™ mask inspection tool with IntenCD™ technology can scan the mask at high speed and offer full mask coverage and accurate assessment of all mask-induced sources of error simultaneously, making it beneficial for scanner qualification and performance monitoring. In this paper we report on a study that was done to improve a scanner introduction and qualification process using the IntenCD application to map mask-induced CD non-uniformity. We present the results from six scanners in production and discuss the benefits of the new method.

  18. Error reduction in EMG signal decomposition.

    PubMed

    Kline, Joshua C; De Luca, Carlo J

    2014-12-01

    Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061 motor-unit action-potential trains (MUAPTs), obtained by decomposing surface EMG (sEMG) signals recorded during human voluntary contractions. Decomposition errors were classified into two general categories: location errors representing variability in the temporal localization of each motor-unit firing instance and identification errors consisting of falsely detected or missed firing instances. To mitigate these errors, we developed an error-reduction algorithm that combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances with fewer errors. The performance of the algorithm is governed by a trade-off between the yield of MUAPTs obtained above a given accuracy level and the time required to perform the decomposition. When applied to a set of sEMG signals synthesized from real MUAPTs, the identification error was reduced by an average of 1.78%, improving the accuracy to 97.0%, and the location error was reduced by an average of 1.66 ms. The error-reduction algorithm in this study is not limited to any specific decomposition strategy. Rather, we propose it be used for other decomposition methods, especially when analyzing precise motor-unit firing instances, as occurs when measuring synchronization.
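
    The record does not spell out the error-reduction algorithm, so the following is only a minimal sketch of the general idea of combining several decomposition estimates into a more probable firing train: keep a firing only when a majority of estimates place a firing within a small tolerance of it. The function name, tolerance and firing times are hypothetical.

    ```python
    import numpy as np

    def consensus_firings(estimates, tol=0.002, min_votes=None):
        """Sketch: merge several estimates of one motor unit's firing times (seconds) by majority agreement."""
        if min_votes is None:
            min_votes = len(estimates) // 2 + 1
        all_times = np.sort(np.concatenate(estimates))
        consensus, used = [], np.zeros(len(all_times), dtype=bool)
        for i, t in enumerate(all_times):
            if used[i]:
                continue
            cluster = np.abs(all_times - t) <= tol
            votes = sum(np.any(np.abs(np.asarray(e) - t) <= tol) for e in estimates)
            if votes >= min_votes:
                consensus.append(float(np.mean(all_times[cluster])))
            used |= cluster
        return consensus

    # Three hypothetical decompositions of the same train (one false detection, one missed firing)
    e1 = [0.100, 0.212, 0.335, 0.448]
    e2 = [0.101, 0.210, 0.334, 0.447, 0.520]   # false detection at 0.520 s
    e3 = [0.099, 0.336, 0.449]                 # missed firing near 0.211 s
    print(consensus_firings([e1, e2, e3]))     # four consensus firing times, spurious one rejected
    ```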

  19. Error reduction in EMG signal decomposition

    PubMed Central

    Kline, Joshua C.

    2014-01-01

    Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061 motor-unit action-potential trains (MUAPTs), obtained by decomposing surface EMG (sEMG) signals recorded during human voluntary contractions. Decomposition errors were classified into two general categories: location errors representing variability in the temporal localization of each motor-unit firing instance and identification errors consisting of falsely detected or missed firing instances. To mitigate these errors, we developed an error-reduction algorithm that combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances with fewer errors. The performance of the algorithm is governed by a trade-off between the yield of MUAPTs obtained above a given accuracy level and the time required to perform the decomposition. When applied to a set of sEMG signals synthesized from real MUAPTs, the identification error was reduced by an average of 1.78%, improving the accuracy to 97.0%, and the location error was reduced by an average of 1.66 ms. The error-reduction algorithm in this study is not limited to any specific decomposition strategy. Rather, we propose it be used for other decomposition methods, especially when analyzing precise motor-unit firing instances, as occurs when measuring synchronization. PMID:25210159

  20. Recognition errors by honey bee (Apis mellifera) guards demonstrate overlapping cues in conspecific recognition

    PubMed Central

    Couvillon, Margaret J; Roy, Gabrielle G F; Ratnieks, Francis L W

    2015-01-01

    Honey bee (Apis mellifera) entrance guards discriminate nestmates from intruders. We tested the hypothesis that the recognition cues of nestmate bees and intruder bees overlap by comparing the acceptance of these bees by entrance guards with the acceptance of worker common wasps, Vespula vulgaris. If recognition cues of nestmate and non-nestmate bees overlap, we would expect recognition errors. Conversely, we hypothesised that guards would not make errors in recognizing wasps because wasps and bees should have distinct, non-overlapping cues. We found both to be true. There was a negative correlation between errors in recognizing nestmate (error: reject nestmate) and non-nestmate (error: accept non-nestmate) bees, such that when guards were likely to reject nestmates, they were less likely to accept a non-nestmate; conversely, when guards were likely to accept a non-nestmate, they were less likely to reject a nestmate. There was, however, no correlation between errors in the recognition of nestmate bees (error: reject nestmate) and wasps (error: accept wasp), demonstrating that guards were able to reject wasps categorically. Our results strongly support that overlapping cue distributions occur, resulting in errors and leading to adaptive shifts in guard acceptance thresholds. PMID:26005220

  1. The implicit benefit of learning without errors.

    PubMed

    Maxwell, J P; Masters, R S; Kerr, E; Weedon, E

    2001-11-01

    Two studies examined whether the number of errors made in learning a motor skill, golf putting, differentially influences the adoption of a selective (explicit) or unselective (implicit) learning mode. Errorful learners were expected to adopt an explicit, hypothesis-testing strategy to correct errors during learning, thereby accruing a pool of verbalizable rules and exhibiting performance breakdown under dual-task conditions, characteristic of a selective mode of learning. Reducing errors during learning was predicted to minimize the involvement of explicit hypothesis testing leading to the adoption of an unselective mode of learning, distinguished by few verbalizable rules and robust performance under secondary task loading. Both studies supported these predictions. The golf putting performance of errorless learners in both studies was unaffected by the imposition of a secondary task load, whereas the performance of errorful learners deteriorated. Reducing errors during learning limited the number of error-correcting hypotheses tested by the learner, thereby reducing the contribution of explicit processing to skill acquisition. It was concluded that the reduction of errors during learning encourages the use of implicit, unselective learning processes, which confer insusceptibility to performance breakdown under distraction.

  2. Nurses' acceptance of Smart IV pump technology.

    PubMed

    Carayon, Pascale; Hundt, Ann Schoofs; Wetterneck, Tosha B

    2010-06-01

    "Smart" intravenous infusion pumps (Smart IV pumps) are increasingly being implemented in hospitals to reduce medication administration errors. This study examines nurses' experience with the implementation and use of a Smart IV pump in an academic hospital. Data were collected in three longitudinal surveys: (a) a pre-implementation survey, (b) a 6-week-post-implementation survey, and (c) a 1-year-post-implementation survey. We examined: (a) the technology implementation process, (b) technical performance of the pump, (c) usability of the pump, and (d) user acceptance of the pump. Initially, nurses had a somewhat positive acceptance of the Smart IV pump technology that significantly increased one year after implementation. User experiences associated with the pump in general improved over time, especially perceptions of pump efficiency. However, user experience with the pump implementation process and pump technical performance did not consistently improve from the pre-implementation survey to the post-implementation survey. Several characteristics of pump technical performance and usability influenced user acceptance at the one-year post-implementation survey. These data may be useful for other institutions to guide implementation and post-implementation follow-up of IV pump use; other institutions could use the survey instrument from this study to evaluate nurses' perceptions of the technology. Our study identified several characteristics of the implementation process that other institutions may need to pay attention to (e.g., sharing information about the implementation process with nurses). Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  3. Nurses’ Acceptance of Smart IV Pump Technology

    PubMed Central

    Carayon, Pascale; Hundt, Ann Schoofs; Wetterneck, Tosha B.

    2010-01-01

    Background “Smart” intravenous infusion pumps (Smart IV pumps) are increasingly being implemented in hospitals to reduce medication administration errors. Objectives This study examines nurses’ experience with the implementation and use of a Smart IV pump in an academic hospital. Method Data were collected in three longitudinal surveys: (a) a pre-implementation survey, (b) a 6-week-post-implementation survey, and (c) a 1-year-post-implementation survey. We examined: (a) the technology implementation process, (b) technical performance of the pump, (c) usability of the pump, and (d) user acceptance of the pump. Results Initially, nurses had a somewhat positive acceptance of the Smart IV pump technology that significantly increased one year after implementation. User experiences associated with the pump in general improved over time, especially perceptions of pump efficiency. However, user experience with the pump implementation process and pump technical performance did not consistently improve from the pre-implementation survey to the post-implementation survey. Several characteristics of pump technical performance and usability influenced user acceptance at the one-year post-implementation survey. Discussion These data may be useful for other institutions to guide implementation and post-implementation follow-up of IV pump use; other institutions could use the survey instrument from this study to evaluate nurses’ perceptions of the technology. Our study identified several characteristics of the implementation process that other institutions may need to pay attention to (e.g., sharing information about the implementation process with nurses). PMID:20219423

  4. Increasing Our Acceptance as Parents of Children with Special Needs

    ERIC Educational Resources Information Center

    Loewenstein, David

    2007-01-01

    Accepting the limitations of a child whose life was supposed to be imbued with endless possibilities requires parents to come to terms with expectations of themselves and the world around them. In this article, the author offers some helpful strategies for fostering acceptance and strengthening family relationships: (1) Remember that parenting is…

  5. Treatment Acceptability of Healthcare Services for Children with Cerebral Palsy

    ERIC Educational Resources Information Center

    Dahl, Norm; Tervo, Raymond; Symons, Frank J.

    2007-01-01

    Background: Although treatment acceptability scales in intellectual and developmental disabilities research have been used in large- and small-scale applications, large-scale application has been limited to analogue (i.e. contrived) investigations. This study extended the application of treatment acceptability by assessing a large sample of care…

  7. Human Factors Process Task Analysis: Liquid Oxygen Pump Acceptance Test Procedure at the Advanced Technology Development Center

    NASA Technical Reports Server (NTRS)

    Diorio, Kimberly A.; Voska, Ned (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides information on Human Factors Process Failure Modes and Effects Analysis (HF PFMEA). HF PFMEA includes the following 10 steps: Describe mission; Define System; Identify human-machine; List human actions; Identify potential errors; Identify factors that affect error; Determine likelihood of error; Determine potential effects of errors; Evaluate risk; Generate solutions (manage error). The presentation also describes how this analysis was applied to a liquid oxygen pump acceptance test.
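
    A common way to operationalize the "determine likelihood", "determine potential effects" and "evaluate risk" steps is a likelihood-times-severity score per potential human error. The scales, threshold, data structure and example error modes below are illustrative placeholders, not values taken from the NASA analysis.

    ```python
    from dataclasses import dataclass

    @dataclass
    class HumanErrorMode:
        action: str       # human action from the task analysis
        error: str        # potential error
        likelihood: int   # 1 (remote) .. 5 (frequent) -- illustrative scale
        severity: int     # 1 (negligible) .. 5 (catastrophic) -- illustrative scale

        @property
        def risk(self) -> int:
            return self.likelihood * self.severity

    modes = [
        HumanErrorMode("Open LOX fill valve", "valve opened out of sequence", 2, 5),
        HumanErrorMode("Record pump pressure", "value transcribed incorrectly", 4, 2),
        HumanErrorMode("Verify line purge", "verification step skipped", 3, 4),
    ]

    RISK_THRESHOLD = 10  # illustrative cut-off for mandatory mitigation
    for m in sorted(modes, key=lambda m: m.risk, reverse=True):
        flag = "MITIGATE" if m.risk >= RISK_THRESHOLD else "accept/monitor"
        print(f"{m.risk:>2}  {flag:<14} {m.action}: {m.error}")
    ```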

  8. Human error in aviation operations

    NASA Technical Reports Server (NTRS)

    Nagel, David C.

    1988-01-01

    The role of human error in commercial and general aviation accidents and the techniques used to evaluate it are reviewed from a human-factors perspective. Topics addressed include the general decline in accidents per million departures since the 1960s, the increase in the proportion of accidents due to human error, methods for studying error, theoretical error models, and the design of error-resistant systems. Consideration is given to information acquisition and processing errors, visually guided flight, disorientation, instrument-assisted guidance, communication errors, decision errors, debiasing, and action errors.

  9. Error monitoring in musicians

    PubMed Central

    Maidhof, Clemens

    2013-01-01

    To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e., the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. Electroencephalography (EEG) studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e., attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions for the processing of auditory information. Furthermore, recent methodological advances like the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types, such as proprioceptive and auditory feedback, and, in general, to arrive at a better understanding of the complex interactions between the motor and auditory domain during error monitoring. Finally, outstanding questions and future directions in this context will be discussed. PMID:23898255

  10. Errata: Papers in Error Analysis.

    ERIC Educational Resources Information Center

    Svartvik, Jan, Ed.

    Papers presented at the symposium of error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…

  11. The Michelson Stellar Interferometer Error Budget for Triple Triple-Satellite Configuration

    NASA Technical Reports Server (NTRS)

    Marathay, Arvind S.; Shiefman, Joe

    1996-01-01

    This report presents the results of a study of the instrumentation tolerances for a conventional style Michelson stellar interferometer (MSI). The method used to determine the tolerances was to determine the change, due to the instrument errors, in the measured fringe visibility and phase relative to the ideal values. The ideal values are those values of fringe visibility and phase that would be measured by a perfect MSI and are attributable solely to the object being detected. Once the functional relationship for changes in visibility and phase as a function of various instrument errors is understood, it is then possible to set limits on the instrument errors in order to ensure that the measured visibility and phase are different from the ideal values by no more than some specified amount. This was done as part of this study. The limits we obtained are based on a visibility error of no more than 1% and a phase error of no more than 0.063 radians (this comes from 1% of 2π radians). The choice of these 1% limits is supported in the literature. The approach employed in the study involved the use of ASAP (Advanced System Analysis Program) software provided by Breault Research Organization, Inc., in conjunction with parallel analytical calculations. The interferometer accepts object radiation into two separate arms, each consisting of an outer mirror, an inner mirror, a delay line (made up of two moveable mirrors and two static mirrors), and a 10:1 afocal reduction telescope. The radiation coming out of both arms is incident on a slit plane which is opaque with two openings (slits). One of the two slits is centered directly under one of the two arms of the interferometer and the other slit is centered directly under the other arm. The slit plane is followed immediately by an ideal combining lens which images the radiation in the fringe plane (also referred to subsequently as the detector plane).
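
    In equation form, the tolerance logic described above can be written as follows, with V the usual fringe visibility and the 1% visibility tolerance read as an absolute difference in V (an assumption; the report's exact definition is not reproduced here):

    ```latex
    V = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}}, \qquad
    \lvert V_{\mathrm{meas}} - V_{\mathrm{ideal}} \rvert \le 0.01, \qquad
    \lvert \phi_{\mathrm{meas}} - \phi_{\mathrm{ideal}} \rvert \le 0.01 \cdot 2\pi \approx 0.063\ \mathrm{rad}.
    ```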

  12. Technical note: The effect of midshaft location on the error ranges of femoral and tibial cross-sectional parameters.

    PubMed

    Sládek, Vladimír; Berner, Margit; Galeta, Patrik; Friedl, Lukás; Kudrnová, Sárka

    2010-02-01

    In comparing long-bone cross-sectional geometric properties between individuals, percentages of bone length are often used to identify equivalent locations along the diaphysis. In fragmentary specimens where bone lengths cannot be measured, however, these locations must be estimated more indirectly. In this study, we examine the effect of inaccurately located femoral and tibial midshafts on estimation of geometric properties. The error ranges were compared on 30 femora and tibiae from the Eneolithic and Bronze Age. Cross-sections were obtained at each 1% interval from 60 to 40% of length using CT scans. Five percent of deviation from midshaft properties was used as the maximum acceptable error. Reliability was expressed by mean percentage differences, standard deviation of percentage differences, mean percentage absolute differences, limits of agreement, and mean accuracy range (MAR) (range within which mean deviation from true midshaft values was less than 5%). On average, tibial cortical area and femoral second moments of area are the least sensitive to positioning error, with mean accuracy ranges wide enough for practical application in fragmentary specimens (MAR = 40-130 mm). In contrast, tibial second moments of area are the most sensitive to error in midshaft location (MAR = 14-20 mm). Individuals present significant variation in morphology and thus in error ranges for different properties. For highly damaged fossil femora and tibiae we recommend carrying out additional tests to better establish specific errors associated with uncertain length estimates.
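
    A per-specimen sketch of the "mean accuracy range" idea described above: given a cross-sectional property sampled at 1% intervals of bone length, find the span of locations around the true midshaft within which the property stays within 5% of its midshaft value. The sample values and function name are hypothetical, and the study's averaging of deviations across specimens is not reproduced here.

    ```python
    import numpy as np

    def accuracy_range(locations_pct, values, midshaft_pct=50.0, max_dev=0.05):
        """Contiguous span of locations (% of bone length) around the true midshaft where the
        property deviates from its midshaft value by less than max_dev (relative)."""
        locations_pct = np.asarray(locations_pct, dtype=float)
        values = np.asarray(values, dtype=float)
        mid = int(np.argmin(np.abs(locations_pct - midshaft_pct)))
        ok = np.abs(values - values[mid]) / abs(values[mid]) < max_dev
        lo = hi = mid
        while lo - 1 >= 0 and ok[lo - 1]:        # walk toward one end of the sampled span
            lo -= 1
        while hi + 1 < len(ok) and ok[hi + 1]:   # walk toward the other end
            hi += 1
        return locations_pct[lo], locations_pct[hi]

    # Hypothetical cortical-area values sampled at 1% intervals from 60% to 40% of femur length
    locs = np.arange(60, 39, -1)                 # 60, 59, ..., 40 (% of length)
    vals = 380 + 4.0 * (locs - 50) ** 2 / 10     # made-up gentle variation around the midshaft
    print(accuracy_range(locs, vals))            # e.g. (56.0, 44.0): a 12%-of-length window
    ```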

  13. Good people who try their best can have problems: recognition of human factors and how to minimise error.

    PubMed

    Brennan, Peter A; Mitchell, David A; Holmes, Simon; Plint, Simon; Parry, David

    2016-01-01

    Human error is as old as humanity itself and is an appreciable cause of mistakes by both organisations and people. Much of the work related to human factors in causing error has originated from aviation, where mistakes can be catastrophic not only for those who contribute to the error, but for passengers as well. The role of human error in medical and surgical incidents, which are often multifactorial, is becoming better understood, and includes both organisational issues (by the employer) and potential human factors (at a personal level). Mistakes that result from the human factors of individuals and of surgical teams should be better recognised and emphasised. Attitudes towards, and acceptance of, preoperative briefing have improved since the introduction of the World Health Organization (WHO) surgical checklist. However, this does not address limitations or other safety concerns that are related to performance, such as stress and fatigue, emotional state, hunger, awareness of what is going on (situational awareness), and other factors that could potentially lead to error. Here we attempt to raise awareness of these human factors, highlight how they can lead to error, and show how they can be minimised in our day-to-day practice. Can hospitals move from being "high risk industries" to "high reliability organisations"? Copyright © 2015 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  14. Detecting imipenem resistance in Acinetobacter baumannii by automated systems (BD Phoenix, Microscan WalkAway, Vitek 2); high error rates with Microscan WalkAway

    PubMed Central

    2009-01-01

    Background Increasing reports of carbapenem resistant Acinetobacter baumannii infections are of serious concern. Reliable susceptibility testing results remain a critical issue for clinical outcomes. Automated systems are increasingly used for species identification and susceptibility testing. This study was organized to evaluate the accuracies of three widely used automated susceptibility testing methods for testing the imipenem susceptibilities of A. baumannii isolates, by comparing to the validated test methods. Methods A total of 112 selected clinical isolates of A. baumannii collected between January 2003 and May 2006 were tested to confirm imipenem susceptibility results. Strains were tested against imipenem by the reference broth microdilution (BMD), disk diffusion (DD), Etest, BD Phoenix, MicroScan WalkAway and Vitek 2 automated systems. Data were analysed by comparing the results from each test method to those produced by the reference BMD test. Results MicroScan performed true identification of all A. baumannii strains, while Vitek 2 failed to identify one strain and Phoenix failed to identify two strains and misidentified two strains. Eighty-seven of the strains (78%) were resistant to imipenem by BMD. Etest, Vitek 2 and BD Phoenix produced acceptable error rates when tested against imipenem. Etest showed the best performance with only two minor errors (1.8%). Vitek 2 produced eight minor errors (7.2%). BD Phoenix produced three major errors (2.8%). DD produced two very major errors (1.8%) (slightly higher (0.3%) than the acceptable limit) and three major errors (2.7%). MicroScan showed the worst performance in susceptibility testing with unacceptable error rates; 28 very major (25%) and 50 minor errors (44.6%). Conclusion Reporting errors for A. baumannii against imipenem do exist in susceptibility testing systems. We suggest clinical laboratories using MicroScan system for routine use should consider using a second, independent antimicrobial susceptibility testing method to
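
    The error categories cited above follow the usual convention for comparing a test method against the reference: a very major error is a false-susceptible call on a reference-resistant isolate, a major error is a false-resistant call on a reference-susceptible isolate, and minor errors involve an intermediate result on one side. A minimal tallying sketch follows; the S/I/R calls are hypothetical, and rates are expressed over all isolates, matching how the abstract reports MicroScan's 25% very major error rate out of 112 strains.

    ```python
    from collections import Counter

    def error_rates(reference, test):
        """Very major / major / minor error rates (percent of all isolates) versus the reference method."""
        counts = Counter()
        for ref, t in zip(reference, test):
            if ref == "R" and t == "S":
                counts["very_major"] += 1
            elif ref == "S" and t == "R":
                counts["major"] += 1
            elif ref != t:          # any remaining disagreement involves an intermediate ('I') call
                counts["minor"] += 1
        n = len(reference)
        return {k: counts[k] / n * 100 for k in ("very_major", "major", "minor")}

    # Hypothetical imipenem calls for 10 isolates
    bmd      = ["R", "R", "R", "S", "S", "R", "R", "S", "R", "R"]   # reference broth microdilution
    automated = ["S", "R", "I", "S", "R", "S", "R", "I", "R", "R"]  # automated system under test
    print(error_rates(bmd, automated))  # {'very_major': 20.0, 'major': 10.0, 'minor': 20.0}
    ```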

  15. Pediatric antidepressant medication errors in a national error reporting database.

    PubMed

    Rinke, Michael L; Bundy, David G; Shore, Andrew D; Colantuoni, Elizabeth; Morlock, Laura L; Miller, Marlene R

    2010-01-01

    To describe inpatient and outpatient pediatric antidepressant medication errors. We analyzed all error reports from the United States Pharmacopeia MEDMARX database, from 2003 to 2006, involving antidepressant medications and patients younger than 18 years. Of the 451 error reports identified, 95% reached the patient, 6.4% reached the patient and necessitated increased monitoring and/or treatment, and 77% involved medications being used off label. Thirty-three percent of errors cited administering as the macrolevel cause of the error, 30% cited dispensing, 28% cited transcribing, and 7.9% cited prescribing. The most commonly cited medications were sertraline (20%), bupropion (19%), fluoxetine (15%), and trazodone (11%). We found no statistically significant association between medication and reported patient harm; harmful errors involved significantly more administering errors (59% vs 32%, p = .023), errors occurring in inpatient care (93% vs 68%, p = .012) and extra doses of medication (31% vs 10%, p = .025) compared with nonharmful errors. Outpatient errors involved significantly more dispensing errors (p < .001) and more errors due to inaccurate or omitted transcription (p < .001), compared with inpatient errors. Family notification of medication errors was reported in only 12% of errors. Pediatric antidepressant errors often reach patients, frequently involve off-label use of medications, and occur with varying severity and type depending on location and type of medication prescribed. Education and research should be directed toward prompt medication error disclosure and targeted error reduction strategies for specific medication types and settings.

  16. Compact disk error measurements

    NASA Technical Reports Server (NTRS)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.

  17. Reducing prospective memory error and costs in simulated air traffic control: External aids, extending practice, and removing perceived memory requirements.

    PubMed

    Loft, Shayne; Chapman, Melissa; Smith, Rebekah E

    2016-09-01

    In air traffic control (ATC), forgetting to perform deferred actions-prospective memory (PM) errors-can have severe consequences. PM demands can also interfere with ongoing tasks (costs). We examined the extent to which PM errors and costs were reduced in simulated ATC by providing extended practice, or by providing external aids combined with extended practice, or by providing external aids combined with instructions that removed perceived memory requirements. Participants accepted/handed-off aircraft and detected conflicts. For the PM task, participants were required to substitute alternative actions for routine actions when accepting aircraft. In Experiment 1, when no aids were provided, PM errors and costs were not reduced by practice. When aids were provided, costs observed early in practice were eliminated with practice, but residual PM errors remained. Experiment 2 provided more limited practice with aids, but instructions that did not frame the PM task as a "memory" task led to high PM accuracy without costs. Attention-allocation policies that participants set based on expected PM demands were modified as individuals were increasingly exposed to reliable aids, or were given instructions that removed perceived memory requirements. These findings have implications for the design of aids for individuals who monitor multi-item dynamic displays. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  18. Experimental Quantum Error Detection

    PubMed Central

    Jin, Xian-Min; Yi, Zhen-Huan; Yang, Bin; Zhou, Fei; Yang, Tao; Peng, Cheng-Zhi

    2012-01-01

    Faithful transmission of quantum information is a crucial ingredient in quantum communication networks. To overcome the unavoidable decoherence in a noisy channel, to date, many efforts have been made to transmit one state by consuming large numbers of time-synchronized ancilla states. However, such huge demands of quantum resources are hard to meet with current technology and this restricts practical applications. Here we experimentally demonstrate quantum error detection, an economical approach to reliably protecting a qubit against bit-flip errors. Arbitrary unknown polarization states of single photons and entangled photons are converted into time bins deterministically via a modified Franson interferometer. Noise arising in both 10 m and 0.8 km fiber, which induces associated errors on the reference frame of time bins, is filtered when photons are detected. The demonstrated resource efficiency and state independence make this protocol a promising candidate for implementing a real-world quantum communication network. PMID:22953047

  19. The influence of the IMRT QA set-up error on the 2D and 3D gamma evaluation method as obtained by using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Kim, Kyeong-Hyeon; Kim, Dong-Su; Kim, Tae-Ho; Kang, Seong-Hee; Cho, Min-Seok; Suh, Tae Suk

    2015-11-01

    The phantom-alignment error is one of the factors affecting delivery quality assurance (QA) accuracy in intensity-modulated radiation therapy (IMRT). Accordingly, spatial information may be used inadequately in gamma evaluation for patient-specific IMRT QA. The influence of the phantom-alignment error on gamma evaluation can be demonstrated experimentally by using the gamma passing rate and the gamma value. However, such experimental methods have a limitation regarding the intrinsic verification of the influence of the phantom set-up error because experimentally measuring the phantom-alignment error accurately is impossible. To overcome this limitation, we aimed to verify the effect of the phantom set-up error within the gamma evaluation formula by using a Monte Carlo simulation. Artificial phantom set-up errors were simulated, and the concept of the true point (TP) was used to represent the actual coordinates of the measurement point for the mathematical modeling of these effects on the gamma. Using dose distributions acquired from the Monte Carlo simulation, we performed gamma evaluations in 2D and 3D. The results of the gamma evaluations and the dose differences at the TP were classified to assess how well each evaluation reflected the dose at the TP. The 2D and the 3D gamma errors were defined by comparing gamma values between the case of the imposed phantom set-up error and the TP in order to investigate the effect of the set-up error on the gamma value. According to the results for gamma errors, the 3D gamma evaluation reflected the dose at the TP better than the 2D one. Moreover, the gamma passing rates were higher for 3D than for 2D, as is widely known. Thus, the 3D gamma evaluation can increase the precision of patient-specific IMRT QA by applying stringent acceptance criteria and setting a reasonable action level for the 3D gamma passing rate.
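    As background for the gamma concept itself, a minimal 1D gamma-index evaluation can be sketched as follows. This is a generic illustration with standard 3%/3 mm criteria and an invented test profile, not the authors' Monte Carlo workflow.

      # Minimal sketch of a global gamma-index evaluation on a 1-D dose profile.
      # Generic illustration of the gamma concept (dose difference combined with
      # distance-to-agreement), not the study's Monte Carlo implementation.
      import numpy as np

      def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose,
                   dose_tol=0.03, dist_tol=3.0):
          """Return the gamma value at each evaluated point (3%/3 mm criteria)."""
          norm = dose_tol * ref_dose.max()           # global dose normalization
          gammas = []
          for x_e, d_e in zip(eval_pos, eval_dose):
              dd = (d_e - ref_dose) / norm           # dose-difference term
              dr = (x_e - ref_pos) / dist_tol        # distance-to-agreement term
              gammas.append(np.sqrt(dd**2 + dr**2).min())
          return np.array(gammas)

      ref_pos = np.linspace(0.0, 50.0, 101)               # mm
      ref_dose = np.exp(-((ref_pos - 25.0) / 10.0) ** 2)  # arbitrary profile
      eval_dose = ref_dose * 1.02                          # 2% scaled "measurement"
      gamma = gamma_1d(ref_pos, ref_dose, ref_pos, eval_dose)
      print("passing rate: %.1f%%" % (100.0 * np.mean(gamma <= 1.0)))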

  20. NLO error propagation exercise: statistical results

    SciTech Connect

    Pack, D.J.; Downing, D.J.

    1985-09-01

    Error propagation is the extrapolation and cumulation of uncertainty (variance) about total amounts of special nuclear material, for example, uranium or 235U, that are present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, 235U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of the utilization of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio from April 1 to July 1, 1983 in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor series expansion; variance cumulation by uncorrelated primary error sources as suggested by Jaech; random effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and 235U inventory differences. Further, one can see that error propagation actually provides guidance for reducing inventory differences and LEIDs in future time periods.
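    A minimal sketch of the Taylor-series step, using made-up numbers rather than NLO data, is shown below: the variance of a single line item computed as weight x concentration x enrichment is approximated from the partial derivatives and the individual measurement variances; in practice such item variances are then summed across uncorrelated primary error sources.

      # Minimal sketch of first-order (Taylor series) variance propagation for one
      # line item: kg of 235U = net weight * U concentration * enrichment.
      # Numbers are illustrative, not data from the NLO exercise.
      import math

      def propagate(weight, conc, enrich, sd_weight, sd_conc, sd_enrich):
          """Return (value, standard deviation) of w * c * e with independent errors."""
          value = weight * conc * enrich
          # Partial derivatives of f = w * c * e
          df_dw, df_dc, df_de = conc * enrich, weight * enrich, weight * conc
          variance = (df_dw * sd_weight) ** 2 + (df_dc * sd_conc) ** 2 \
                   + (df_de * sd_enrich) ** 2
          return value, math.sqrt(variance)

      # 1000 kg drum, 85% uranium by weight, 0.9% enrichment, with measurement SDs:
      val, sd = propagate(1000.0, 0.85, 0.009, 0.5, 0.002, 0.0001)
      print("235U = %.3f kg +/- %.3f kg" % (val, sd))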

  1. Enteral feeding pumps: efficacy, safety, and patient acceptability

    PubMed Central

    White, Helen; King, Linsey

    2014-01-01

    Enteral feeding is a long established practice across pediatric and adult populations, to enhance nutritional intake and prevent malnutrition. Despite recognition of the importance of nutrition within the modern health agenda, evaluation of the efficacy of how such feeds are delivered is more limited. The accuracy, safety, and consistency with which enteral feed pump systems dispense nutritional formulae are important determinants of their use and acceptability. Enteral feed pump safety has received increased interest in recent years as enteral pumps are used across hospital and home settings. Four areas of enteral feed pump safety have emerged: the consistent and accurate delivery of formula; the minimization of errors associated with tube misconnection; the impact of continuous feed delivery itself (via an enteral feed pump); and the chemical composition of the casing used in enteral feed pump manufacture. The daily use of pumps in delivery of enteral feeds in a home setting predominantly falls to the hands of parents and caregivers. Their understanding of the use and function of their pump is necessary to ensure appropriate, safe, and accurate delivery of enteral nutrition; their experience with this is important in informing clinicians and manufacturers of the emerging needs and requirements of this diverse patient population. The review highlights current practice and areas of concern and establishes our current knowledge in this field. PMID:25170284

  2. Enteral feeding pumps: efficacy, safety, and patient acceptability.

    PubMed

    White, Helen; King, Linsey

    2014-01-01

    Enteral feeding is a long established practice across pediatric and adult populations, to enhance nutritional intake and prevent malnutrition. Despite recognition of the importance of nutrition within the modern health agenda, evaluation of the efficacy of how such feeds are delivered is more limited. The accuracy, safety, and consistency with which enteral feed pump systems dispense nutritional formulae are important determinants of their use and acceptability. Enteral feed pump safety has received increased interest in recent years as enteral pumps are used across hospital and home settings. Four areas of enteral feed pump safety have emerged: the consistent and accurate delivery of formula; the minimization of errors associated with tube misconnection; the impact of continuous feed delivery itself (via an enteral feed pump); and the chemical composition of the casing used in enteral feed pump manufacture. The daily use of pumps in delivery of enteral feeds in a home setting predominantly falls to the hands of parents and caregivers. Their understanding of the use and function of their pump is necessary to ensure appropriate, safe, and accurate delivery of enteral nutrition; their experience with this is important in informing clinicians and manufacturers of the emerging needs and requirements of this diverse patient population. The review highlights current practice and areas of concern and establishes our current knowledge in this field.

  3. Error Free Software

    NASA Technical Reports Server (NTRS)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software that is logically error-free and that, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  4. Securing Teacher Acceptance of Technology.

    ERIC Educational Resources Information Center

    Lidtke, Doris K.

    This paper offers an historical perspective as to why teachers may be reluctant to accept new technologies, and what might persuade them to use computers in their classrooms. The analysis also suggests some methods for minimizing the inhibiting factors and maximizing the acceptance of computers in elementary and secondary school settings.…

  5. Accepters and Rejecters of Counseling.

    ERIC Educational Resources Information Center

    Rose, Harriett A.; Elton, Charles F.

    Personality differences between students who accept or reject proffered counseling assistance were investigated by comparing personality traits of 116 male students at the University of Kentucky who accepted or rejected letters of invitation to group counseling. Factor analysis of Omnibus Personality Inventory (OPI) scores to two groups of 60 and…

  6. Cone penetrometer acceptance test report

    SciTech Connect

    Boechler, G.N.

    1996-09-19

    This Acceptance Test Report (ATR) documents the results of acceptance test procedure WHC-SD-WM-ATR-151. Included in this report is a summary of the tests, the results and issues, the signature and sign-off ATP pages, and a summarized table of the specification vs. ATP section that satisfied the specification.

  7. [Accuracy limits in IOL calculation: current status].

    PubMed

    Preussner, P-R

    2007-12-01

    An overview of the individual independent error contributions to IOL power calculation and their currently achievable lower limits. Analysis of the causes of avoidable and unavoidable single errors which contribute to the overall error: measurement errors of axial length and corneal radii; errors due to neglect of relevant influences such as pupil width, asphericity of cornea and IOL, and IOL geometry; calculation errors from inadequate calculation methods; estimation errors of postoperative IOL position; IOL manufacturing errors. These error contributions are to be compared with the reproducibility error of refraction. All calculations use numerical raytracing based on the geometric-optical IOL manufacturing data. Axial eye length, with an error of approximately 0.2 D, is no longer the dominating error if the measurements are performed by interferometry; the same is true for corneal radii in normal eyes. The latter, however, cause the dominant error in eyes after corneal refractive surgery (approximately 1.5 D) if measured only by keratometry. This error can be avoided if a topographic measurement is included in the raytracing, and in some cases the measurement of the posterior corneal surface also has to be included. Currently the dominant unavoidable error contribution results from the uncertainty of postoperative IOL position (approximately 0.35 D). Some errors of classical IOL formulae can be avoided by raytracing. But if the total error is to remain below the reproducibility error of refraction, the prediction accuracy of postoperative IOL position must be improved.
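    If the quoted contributions are treated as statistically independent (an assumption of this illustration, not a claim made in the abstract), they combine in quadrature, which makes explicit why the IOL-position term dominates once axial length is measured interferometrically:

      \sigma_{\mathrm{total}} \approx \sqrt{\sigma_{\mathrm{axial}}^{2} + \sigma_{\mathrm{IOL\,pos}}^{2} + \cdots}
                              \approx \sqrt{0.2^{2} + 0.35^{2}}\ \mathrm{D} \approx 0.40\ \mathrm{D}

    (illustrative values taken from the abstract; corneal and other terms neglected for a normal eye measured by interferometry).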

  8. Variation transmission model for setting acceptance criteria in a multi-staged pharmaceutical manufacturing process.

    PubMed

    Montes, Richard O

    2012-03-01

    Pharmaceutical manufacturing processes consist of a series of stages (e.g., reaction, workup, isolation) to generate the active pharmaceutical ingredient (API). Outputs at intermediate stages (in-process control) and API need to be controlled within acceptance criteria to assure final drug product quality. In this paper, two methods based on tolerance interval to derive such acceptance criteria will be evaluated. The first method is serial worst case (SWC), an industry risk minimization strategy, wherein input materials and process parameters of a stage are fixed at their worst-case settings to calculate the maximum level expected from the stage. This maximum output then becomes input to the next stage wherein process parameters are again fixed at worst-case setting. The procedure is serially repeated throughout the process until the final stage. The calculated limits using SWC can be artificially high and may not reflect the actual process performance. The second method is the variation transmission (VT) using autoregressive model, wherein variation transmitted up to a stage is estimated by accounting for the recursive structure of the errors at each stage. Computer simulations at varying extent of variation transmission and process stage variability are performed. For the scenarios tested, VT method is demonstrated to better maintain the simulated confidence level and more precisely estimate the true proportion parameter than SWC. Real data examples are also presented that corroborate the findings from the simulation. Overall, VT is recommended for setting acceptance criteria in a multi-staged pharmaceutical manufacturing process.
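    The contrast between the two approaches can be sketched with a toy three-stage simulation. The autoregressive form Y_t = mu_t + rho_t (Y_{t-1} - mu_{t-1}) + e_t, the stage parameters, and the worst-case multiplier below are illustrative assumptions, not the paper's actual model or data:

      # Minimal sketch contrasting serial worst case (SWC) limits with a variation
      # transmission (VT) simulation for a 3-stage process. All numbers and the
      # carry-over coefficients are invented for illustration only.
      import numpy as np

      mu = [10.0, 8.0, 5.0]        # nominal impurity level at each stage
      sd = [1.0, 0.8, 0.5]         # stage-specific random error (SD)
      rho = [0.0, 0.6, 0.4]        # fraction of upstream deviation transmitted
      k = 3.0                      # "worst case" multiplier (roughly 3 SD)

      # Serial worst case: propagate the upper limit itself through every stage.
      swc_limit, prev_mu = 0.0, 0.0
      for m, s, r in zip(mu, sd, rho):
          swc_limit = m + r * (swc_limit - prev_mu) + k * s
          prev_mu = m
      print("SWC limit at final stage:", round(swc_limit, 2))

      # Variation transmission: simulate the recursive error structure instead.
      rng = np.random.default_rng(0)
      n = 200_000
      y = np.full(n, mu[0]) + rng.normal(0.0, sd[0], n)
      for m_prev, m, s, r in zip(mu[:-1], mu[1:], sd[1:], rho[1:]):
          y = m + r * (y - m_prev) + rng.normal(0.0, s, n)
      vt_limit = np.quantile(y, 0.999)   # ~99.9th percentile of simulated output
      print("VT 99.9th percentile at final stage:", round(vt_limit, 2))

    In this toy setting the SWC limit lands well above the simulated 99.9th percentile, illustrating the abstract's point that serially stacked worst cases can be artificially high relative to actual process performance.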

  9. Sources of error in emergency ultrasonography.

    PubMed

    Pinto, Antonio; Pinto, Fabio; Faggian, Angela; Rubini, Giuseppe; Caranci, Ferdinando; Macarini, Luca; Genovese, Eugenio Annibale; Brunese, Luca

    2013-07-15

    To evaluate the common sources of diagnostic errors in emergency ultrasonography. The authors performed a Medline search using PubMed (National Library of Medicine, Bethesda, Maryland) for original research and review publications examining the common sources of errors in diagnosis with specific reference to emergency ultrasonography. The search design utilized different associations of the following terms: (1) emergency ultrasonography, (2) error, (3) malpractice and (4) medical negligence. This review was restricted to human studies and to English-language literature. Four authors reviewed all the titles and subsequently the abstracts of the 171 articles that appeared appropriate. Other articles were identified by reviewing the reference lists of significant papers. Finally, the full text of 48 selected articles was reviewed. Several studies indicate that the etiology of error in emergency ultrasonography is multi-factorial. Common sources of error in emergency ultrasonography are: lack of attention to the clinical history and examination, lack of communication with the patient, lack of knowledge of the technical equipment, use of inappropriate probes, inadequate optimization of the images, failure of perception, lack of knowledge of the possible differential diagnoses, over-estimation of one's own skill, and failure to suggest further ultrasound examinations or other imaging techniques. To reduce errors in interpretation of ultrasonographic findings, the sonographer needs to be aware of the limitations of ultrasonography in the emergency setting, and the similarities in the appearances of various physiological and pathological processes. Adequate clinical information is essential. Diagnostic errors should be considered not as signs of failure, but as learning opportunities.

  10. Prevention of medication errors: detection and audit.

    PubMed

    Montesi, Germana; Lechi, Alessandro

    2009-06-01

    1. Medication errors have important implications for patient safety, and their identification is a main target in improving clinical practice, in order to prevent adverse events. 2. Error detection is the first crucial step. Approaches to this are likely to be different in research and routine care, and the most suitable must be chosen according to the setting. 3. The major methods for detecting medication errors and associated adverse drug-related events are chart review, computerized monitoring, administrative databases, and claims data, together with direct observation, incident reporting, and patient monitoring. All of these methods have both advantages and limitations. 4. Reporting discloses medication errors, can trigger warnings, and encourages the diffusion of a culture of safe practice. Combining and comparing data from various sources increases the reliability of the system. 5. Error prevention can be planned by means of retroactive and proactive tools, such as audit and Failure Mode, Effect, and Criticality Analysis (FMECA). Audit is also an educational activity, which promotes high-quality care; it should be carried out regularly. In an audit cycle we can compare what is actually done against reference standards and put in place corrective actions to improve the performances of individuals and systems. 6. Patient safety must be the first aim in every setting, in order to build safer systems, learning from errors and reducing the human and fiscal costs.

  11. Orwell's Instructive Errors

    ERIC Educational Resources Information Center

    Julian, Liam

    2009-01-01

    In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…

  12. Satellite Photometric Error Determination

    DTIC Science & Technology

    2015-10-18

    the errors associated with optical photometry used in non-resolved object characterization for the Space Situational Awareness (SSA) community. We ... begin with an overview of the standard astronomical techniques used to measure the brightness of spatially unresolved objects (point source photometry) ... in deep space. After discussing the standard astronomical techniques, we present the application of astronomical photometry for the purposes of

  13. Backtracking and error correction in DNA transcription

    NASA Astrophysics Data System (ADS)

    Voliotis, Margaritis; Cohen, Netta; Molina-Paris, Carmen; Liverpool, Tanniemola

    2008-03-01

    Genetic information is encoded in the nucleotide sequence of the DNA. This sequence contains the instruction code of the cell - determining protein structure and function, and hence cell function and fate. The viability and endurance of organisms crucially depend on the fidelity with which genetic information is transcribed/translated (during mRNA and protein production) and replicated (during DNA replication). However, thermodynamics introduces significant fluctuations which would incur massive error rates if efficient proofreading mechanisms were not in place. Here, we examine a putative mechanism for error correction during DNA transcription, which relies on backtracking of the RNA polymerase (RNAP). We develop an error correction model that incorporates RNAP translocation, backtracking pauses and mRNA cleavage. We calculate the error rate as a function of the relevant rates (translocation, cleavage, backtracking and polymerization) and show that its theoretical limit is equivalent to that accomplished by a multiple-step kinetic proofreading mechanism.
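    For reference, the multiple-step kinetic proofreading bound alluded to at the end of the abstract is usually written as follows (quoted from the general proofreading literature, not from this abstract):

      \eta_{\min} \;\approx\; \eta_{0}^{\,n+1}, \qquad \eta_{0} = e^{-\Delta\Delta G / k_{B} T},

    where \eta_0 is the single-step discrimination error set by the free-energy difference \Delta\Delta G between correct and incorrect substrates and n is the number of proofreading steps.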

  14. A statistical model for point-based target registration error with anisotropic fiducial localizer error.

    PubMed

    Wiles, Andrew D; Likholyot, Alexander; Frantz, Donald D; Peters, Terry M

    2008-03-01

    Error models associated with point-based medical image registration problems were first introduced in the late 1990s. The concepts of fiducial localizer error, fiducial registration error, and target registration error are commonly used in the literature. The model for estimating the target registration error at a position r in a coordinate frame defined by a set of fiducial markers rigidly fixed relative to one another is ubiquitous in the medical imaging literature. The model has also been extended to simulate the target registration error at the point of interest in optically tracked tools. However, the model is limited to describing the error in situations where the fiducial localizer error is assumed to have an isotropic normal distribution in R^3. In this work, the model is generalized to include a fiducial localizer error that has an anisotropic normal distribution. Similar to the previous models, the root-mean-square statistic RMS(TRE) is provided along with an extension that provides the covariance Sigma(TRE). The new model is verified using a Monte Carlo simulation and a set of statistical hypothesis tests. Finally, the differences between the two assumptions, isotropic and anisotropic, are discussed within the context of their use in 1) optical tool tracking simulation and 2) image registration.
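    For context, the widely cited isotropic special case that this work generalizes (reproduced here from the general registration literature, not from the abstract itself) expresses the expected squared TRE at a target position r as

      \langle \mathrm{TRE}^2(\mathbf{r})\rangle \;\approx\; \frac{\langle \mathrm{FLE}^2\rangle}{N}\left(1 + \frac{1}{3}\sum_{k=1}^{3}\frac{d_k^2}{f_k^2}\right),

    where N is the number of fiducials, d_k is the distance of the target from principal axis k of the fiducial configuration, and f_k is the RMS distance of the fiducials from that axis.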

  15. Functional Error Models to Accelerate Nested Sampling

    NASA Astrophysics Data System (ADS)

    Josset, L.; Elsheikh, A. H.; Demyanov, V.; Lunati, I.

    2014-12-01

    Within the nested sampling algorithm, each proposed geostatistical realization is first evaluated through the approximate model to decide whether it is useful or not to perform a full physics simulation. This improves the acceptance rate of full physics simulations and opens the door to iteratively testing the performance and improving the quality of the error model.
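    The screening idea can be sketched generically as a two-stage test in which a cheap error-modelled evaluation filters proposals before the expensive full-physics run. The score functions and threshold below are invented placeholders, not the authors' error model:

      # Generic two-stage screening sketch: a cheap approximate model filters
      # proposals before the expensive full-physics evaluation.
      import random

      def approximate_misfit(proposal):
          # stand-in for a fast proxy/error-model evaluation
          return (proposal - 0.3) ** 2 + random.gauss(0.0, 0.05)

      def full_physics_misfit(proposal):
          # stand-in for the expensive flow simulation
          return (proposal - 0.3) ** 2

      def screened_evaluation(proposals, threshold=0.1):
          accepted, full_runs = [], 0
          for p in proposals:
              if approximate_misfit(p) > threshold:
                  continue                      # rejected cheaply
              full_runs += 1
              if full_physics_misfit(p) <= threshold:
                  accepted.append(p)
          return accepted, full_runs

      random.seed(1)
      proposals = [random.random() for _ in range(1000)]
      kept, runs = screened_evaluation(proposals)
      print(f"{len(kept)} accepted using only {runs} full simulations")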

  16. (Errors in statistical tests)3.

    PubMed

    Phillips, Carl V; MacLehose, Richard F; Kaufman, Jay S

    2008-07-14

    In 2004, Garcia-Berthou and Alcaraz published "Incongruence between test statistics and P values in medical papers," a critique of statistical errors that received a tremendous amount of attention. One of their observations was that the final reported digit of p-values in articles published in the journal Nature departed substantially from the uniform distribution that they suggested should be expected. In 2006, Jeng critiqued that critique, observing that the statistical analysis of those terminal digits had been based on comparing the actual distribution to a uniform continuous distribution, when digits obviously are discretely distributed. Jeng corrected the calculation and reported statistics that did not so clearly support the claim of a digit preference. However delightful it may be to read a critique of statistical errors in a critique of statistical errors, we nevertheless found several aspects of the whole exchange to be quite troubling, prompting our own meta-critique of the analysis. The previous discussion emphasized statistical significance testing. But there are various reasons to expect departure from the uniform distribution in terminal digits of p-values, so that simply rejecting the null hypothesis is not terribly informative. Much more importantly, Jeng found that the original p-value of 0.043 should have been 0.086, and suggested this represented an important difference because it was on the other side of 0.05. Among the most widely reiterated (though often ignored) tenets of modern quantitative research methods is that we should not treat statistical significance as a bright line test of whether we have observed a phenomenon. Moreover, it sends the wrong message about the role of statistics to suggest that a result should be dismissed because of limited statistical precision when it is so easy to gather more data. In response to these limitations, we gathered more data to improve the statistical precision, and analyzed the actual pattern of the

  17. Error thresholds for RNA replication in the presence of both point mutations and premature termination errors.

    PubMed

    Tupper, Andrew S; Higgs, Paul G

    2017-09-07

    We consider a spatial model of replication in the RNA World in which polymerase ribozymes use neighbouring strands as templates. Point mutation errors create parasites that have the same replication rate as the polymerase. We have shown previously that spatial clustering allows survival of the polymerases as long as the error rate is below a critical error threshold. Here, we additionally consider errors where a polymerase prematurely terminates replication before reaching the end of the template, creating shorter parasites that are replicated faster than the functional polymerase. In well-known experiments where Qβ RNA is replicated by an RNA polymerase protein, the virus RNA is rapidly replaced by very short non-functional sequences. If the same thing were to occur when the polymerase is a ribozyme, this would mean that termination errors could potentially destroy the RNA World. In this paper, we show that this is not the case in the RNA replication model studied here. When there is continued generation of parasites of all lengths by termination errors, the system can survive up to a finite error threshold, due to the formation of travelling wave patterns; hence termination errors are important, but they do not lead to the inevitable destruction of the RNA World by short parasites. The simplest assumption is that parasite replication rate is inversely proportional to the strand length. In this worst-case scenario, the error threshold for termination errors is much lower than for point mutations. We also consider a more realistic model in which the time for replication of a strand is the sum of a time for binding of the polymerase, and a time for polymerization. When the binding step is considered, termination errors are less serious than in the worst case. In the limit where the binding time is dominant, replication rates are equal for all lengths, and the error threshold for termination is the same as for point mutations.

  18. Report of the Subpanel on Error Characterization and Error Budgets

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The state of knowledge of both user positioning requirements and error models of current and proposed satellite systems is reviewed. In particular the error analysis models for LANDSAT D are described. Recommendations are given concerning the geometric error model for the thematic mapper; interactive user involvement in system error budgeting and modeling and verification on real data sets; and the identification of a strawman mission for modeling key error sources.

  19. Medical error and human factors engineering: where are we now?

    PubMed

    Gawron, Valerie J; Drury, Colin G; Fairbanks, Rollin J; Berger, Roseanne C

    2006-01-01

    The goal of human factors engineering is to optimize the relationship between humans and systems by studying human behavior, abilities, and limitations and using this knowledge to design systems for safe and effective human use. With the assumption that the human component of any system will inevitably produce errors, human factors engineers design systems and human/machine interfaces that are robust enough to reduce error rates and the effect of the inevitable error within the system. In this article, we review the extent and nature of medical error and then discuss human factors engineering tools that have potential applicability. These tools include taxonomies of human and system error and error data collection and analysis methods. Finally, we describe studies that have examined medical error, and on the basis of these studies, present conclusions about how human factors engineering can significantly reduce medical errors and their effects.

  20. Rational error in internal medicine.

    PubMed

    Federspil, Giovanni; Vettor, Roberto

    2008-03-01

    Epistemologists have selected two basic categories: that of errors committed in scientific research, when a researcher devises or accepts an unfounded hypothesis, and that of mistakes committed in the application of scientific knowledge, whereby doctors rely on knowledge held to be true at the time in order to understand an individual patient's signs and symptoms. The paper will deal exclusively with the latter, that is to say the mistakes which physicians make while carrying out their day-to-day medical duties. The paper deals with the mistakes committed in medicine and also tries to offer a classification. It takes into account examples of mistakes in Bayesian reasoning and mistakes committed by clinicians in inductive reasoning. Moreover, many other mistakes are due to fallacies of deductive logic, the logic which clinicians use on a day-to-day basis while examining patients in order to envisage the consequences of the various diagnostic or physiopathologic hypotheses. The existence of a further type of mistake, belonging to the psychology of thought, is also pointed out. We conclude that internists often make mistakes because, unknowingly, they fail to reason correctly. These mistakes can occur in two ways: either because the physician does not observe the laws of formal logic, or because his practical rationality does not match theoretical rationality and so his reasoning becomes influenced by the circumstances in which he finds himself.

  1. Extending the Technology Acceptance Model: Policy Acceptance Model (PAM)

    NASA Astrophysics Data System (ADS)

    Pierce, Tamra

    There has been extensive research on how new ideas and technologies are accepted in society. This has resulted in the creation of many models that are used to discover and assess the contributing factors. The Technology Acceptance Model (TAM) is one widely accepted model. This model examines people's acceptance of new technologies based on variables that directly correlate to how the end user views the product. This paper introduces the Policy Acceptance Model (PAM), an expansion of TAM, which is designed for the analysis and evaluation of acceptance of new policy implementation. PAM includes the traditional constructs of TAM and adds the variables of age, ethnicity, and family. The model is demonstrated using a survey of people's attitudes toward the upcoming healthcare reform in the United States (US) from 72 survey respondents. The aim is that the theory behind this model can be used as a framework that will be applicable to studies looking at the introduction of any new or modified policies.

  2. Imagery of Errors in Typing

    ERIC Educational Resources Information Center

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  3. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  4. Speech Errors across the Lifespan

    ERIC Educational Resources Information Center

    Vousden, Janet I.; Maylor, Elizabeth A.

    2006-01-01

    Dell, Burger, and Svec (1997) proposed that the proportion of speech errors classified as anticipations (e.g., "moot and mouth") can be predicted solely from the overall error rate, such that the greater the error rate, the lower the anticipatory proportion (AP) of errors. We report a study examining whether this effect applies to changes in error…

  5. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
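    A toy interval type is enough to show the idea of carrying guaranteed lower and upper bounds through a formula. The class below is a Python stand-in written for this listing (INTLAB, cited in the abstract, is a MATLAB toolbox), and the example quantities are invented:

      # Toy interval arithmetic to illustrate automatic error bounds.
      class Interval:
          def __init__(self, lo, hi):
              self.lo, self.hi = lo, hi

          def __add__(self, other):
              return Interval(self.lo + other.lo, self.hi + other.hi)

          def __mul__(self, other):
              products = [self.lo * other.lo, self.lo * other.hi,
                          self.hi * other.lo, self.hi * other.hi]
              return Interval(min(products), max(products))

          def __repr__(self):
              return f"[{self.lo:.6g}, {self.hi:.6g}]"

      # A measured resistance and current, each with its measurement uncertainty:
      R = Interval(99.5, 100.5)      # ohms
      I = Interval(1.98, 2.02)       # amperes
      print("P = I^2 * R =", I * I * R)   # guaranteed enclosure of the true power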

  6. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…

  7. Speech Errors across the Lifespan

    ERIC Educational Resources Information Center

    Vousden, Janet I.; Maylor, Elizabeth A.

    2006-01-01

    Dell, Burger, and Svec (1997) proposed that the proportion of speech errors classified as anticipations (e.g., "moot and mouth") can be predicted solely from the overall error rate, such that the greater the error rate, the lower the anticipatory proportion (AP) of errors. We report a study examining whether this effect applies to changes in error…

  8. The physiological basis for spacecraft environmental limits

    NASA Technical Reports Server (NTRS)

    Waligora, J. M. (Compiler)

    1979-01-01

    Limits for operational environments are discussed in terms of acceptable physiological changes. The environmental factors considered are pressure, contaminants, temperature, acceleration, noise, rf radiation, and weightlessness.

  9. Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments

    SciTech Connect

    Pevey, Ronald E.

    2005-09-15

    Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes--the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.
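    The acceptance decision the abstract refers to can be sketched as a one-line rule. The numbers, the bias value, and n_sigma below are placeholders for illustration, not a validated upper subcritical limit:

      # Illustrative subcriticality decision with a Monte Carlo uncertainty margin.
      def is_acceptably_subcritical(k_calc, sigma_calc, bias=0.005,
                                    usl=0.95, n_sigma=2.0):
          """Accept only if the biased, uncertainty-padded k-effective stays
          at or below the upper subcritical limit (USL)."""
          return k_calc + bias + n_sigma * sigma_calc <= usl

      # The same configuration can flip from acceptable to unacceptable purely
      # because of the statistical uncertainty of the Monte Carlo estimate:
      print(is_acceptably_subcritical(0.930, 0.002))   # True  (0.939 <= 0.95)
      print(is_acceptably_subcritical(0.930, 0.009))   # False (0.953 >  0.95)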

  10. A review of major factors contributing to errors in human hair association by microscopy.

    PubMed

    Smith, S L; Linch, C A

    1999-09-01

    Forensic hair examiners using traditional microscopic comparison techniques cannot state with certainty, except in extremely rare cases, that a found hair originated from a particular individual. They also cannot provide a statistical likelihood that a hair came from a certain individual and not another. There is no data available regarding the frequency of a specific microscopic hair characteristic (i.e., microtype) or trait in a particular population. Microtype is a term we use to describe certain internal characteristics and features expressed when observing hairs with unpolarized transmitted light. Courts seem to be sympathetic to lawyers' concerns that there are no accepted probability standards for human hair identification. Under Daubert, microscopic hair analysis testimony (or other scientific testimony) is allowed if the technique can be shown to have testability, peer review, general acceptance, and a known error rate. As with other forensic disciplines, laboratory error rate determination for a specific hair comparison case is not possible. Polymerase chain reaction (PCR)-based typing of hair roots offers hair examiners an opportunity to begin cataloging data with regard to microscopic hair association error rates. This is certainly a realistic manner in which to ascertain which hair microtypes and case circumstances repeatedly cause difficulty in association. Two cases are presented in which PCR typing revealed an incorrect inclusion in one and an incorrect exclusion in the other. This paper does not suggest that such limited observations define a rate of occurrence. These cases illustrate evidentiary conditions or case circumstances which may potentially contribute to microscopic hair association errors. Issues discussed in this review paper address the potential questions an expert witness may expect in a Daubert hair analysis admissibility hearing.

  11. Defining error in anatomic pathology.

    PubMed

    Sirota, Ronald L

    2006-05-01

    Although much has been said and written about medical error and about error in pathology since the publication of the Institute of Medicine's report on medical error in 1999, precise definitions of what constitutes error in anatomic pathology do not exist for the specialty. Without better definitions, it is impossible to accurately judge errors in pathology. The lack of standardized definitions has implications for patient care and for the legal judgment of malpractice. To review the goals of anatomic pathology, to discuss the problems inherent in applying these goals to the judgment of error in pathology, to offer definitions of major and minor errors in pathology, and to discuss error in anatomic pathology in relation to the classic laboratory test cycle. Existing literature. Definitions for major and minor error in anatomic pathology are proffered, and anatomic pathology error is characterized in the classic test cycle.

  12. The Relative Frequency of Spanish Pronunciation Errors.

    ERIC Educational Resources Information Center

    Hammerly, Hector

    Types of hierarchies of pronunciation difficulty are discussed, and a hierarchy based on contrastive analysis plus informal observation is proposed. This hierarchy is less one of initial difficulty than of error persistence. One feature of this hierarchy is that, because of lesser learner awareness and very limited functional load, errors…

  13. Roundoff error effects on spatial lattice algorithm

    NASA Technical Reports Server (NTRS)

    An, S. H.; Yao, K.

    1986-01-01

    The floating-point roundoff error effect under finite word length limitations is analyzed for the time updates of reflection coefficients in the spatial lattice algorithm. It is shown that recursive computation is superior to direct computation under finite word length limitations. Moreover, the forgetting factor, which is conventionally used to smooth the time variations of the inputs, is also a crucial parameter in the consideration of the system stability and adaptability under finite word length constraints.

  14. Modular error embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
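    As a rough, generic illustration of modulus-based embedding in low-order bits (a sketch of the general idea only, not the patented algorithm, and omitting the keyed permutation step), choosing the nearest carrier value whose residue equals the payload digit bounds the per-sample distortion by half the modulus, compared with up to modulus-1 for straight bit replacement:

      # Generic sketch of modulus-based low-order-bit embedding: pick the carrier
      # value closest to the original whose residue mod M equals the payload digit.
      # Not the patented method; boundary clipping is omitted for brevity.
      def embed(sample, digit, modulus=16):
          base = sample - (sample % modulus) + digit
          # choose whichever of base - M, base, base + M is nearest the original
          candidates = [base - modulus, base, base + modulus]
          return min(candidates, key=lambda v: abs(v - sample))

      def extract(sample, modulus=16):
          return sample % modulus

      pixel = 137
      stego = embed(pixel, 3)            # hide the digit 3
      print(stego, extract(stego))        # 131 3 -> distortion of 6, never more than 8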

  15. Error-Free Software

    NASA Technical Reports Server (NTRS)

    1989-01-01

    001 is an integrated tool suite for automatically developing ultra-reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production-quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer-aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  16. Hyponatremia: management errors.

    PubMed

    Seo, Jang Won; Park, Tae Jin

    2006-11-01

    Rapid correction of hyponatremia is frequently associated with increased morbidity and mortality. Therefore, it is important to estimate the proper volume and type of infusate required to increase the serum sodium concentration predictably. The major common management errors during the treatment of hyponatremia are inadequate investigation, treatment with fluid restriction for diuretic-induced hyponatremia and treatment with fluid restriction plus intravenous isotonic saline simultaneously. We present two cases of management errors. One is about the problem of rapid correction of hyponatremia in a patient with sepsis and acute renal failure during continuous renal replacement therapy in the intensive care unit. The other is the case of hypothyroidism in which hyponatremia was aggravated by intravenous infusion of dextrose water and isotonic saline infusion was erroneously used to increase serum sodium concentration.

  17. Type I Error Control for Tree Classification

    PubMed Central

    Jung, Sin-Ho; Chen, Yong; Ahn, Hongshik

    2014-01-01

    Binary tree classification has been useful for classifying the whole population based on the levels of an outcome variable that is associated with chosen predictors. Often we start a classification with a large number of candidate predictors, and each predictor takes a number of different cutoff values. Because of these types of multiplicity, the binary tree classification method is subject to severe type I error probability. Nonetheless, there have not been many publications addressing this issue. In this paper, we propose a binary tree classification method that controls the probability of accepting a predictor below a certain level, say 5%. PMID:25452689

  18. Staff Acceptance of Tele-ICU Coverage

    PubMed Central

    Chan, Paul S.; Cram, Peter

    2011-01-01

    Background: Remote coverage of ICUs is increasing, but staff acceptance of this new technology is incompletely characterized. We conducted a systematic review to summarize existing research on acceptance of tele-ICU coverage among ICU staff. Methods: We searched for published articles pertaining to critical care telemedicine systems (aka, tele-ICU) between January 1950 and March 2010 using PubMed, Cumulative Index to Nursing and Allied Health Literature, Global Health, Web of Science, and the Cochrane Library and abstracts and presentations delivered at national conferences. Studies were included if they provided original qualitative or quantitative data on staff perceptions of tele-ICU coverage. Studies were imported into content analysis software and coded by tele-ICU configuration, methodology, participants, and findings (eg, positive and negative staff evaluations). Results: Review of 3,086 citations yielded 23 eligible studies. Findings were grouped into four categories of staff evaluation: overall acceptance level of tele-ICU coverage (measured in 70% of studies), impact on patient care (measured in 96%), impact on staff (measured in 100%), and organizational impact (measured in 48%). Overall acceptance was high, despite initial ambivalence. Favorable impact on patient care was perceived by > 82% of participants. Staff impact referenced enhanced collaboration, autonomy, and training, although scrutiny, malfunctions, and contradictory advice were cited as potential barriers. Staff perceived the organizational impact to vary. An important limitation of available studies was a lack of rigorous methodology and validated survey instruments in many studies. Conclusions: Initial reports suggest high levels of staff acceptance of tele-ICU coverage, but more rigorous methodologic study is required. PMID:21051386

  19. Human Error In Complex Systems

    NASA Technical Reports Server (NTRS)

    Morris, Nancy M.; Rouse, William B.

    1991-01-01

    Report presents results of research aimed at understanding causes of human error in such complex systems as aircraft, nuclear powerplants, and chemical processing plants. Research considered both slips (errors of action) and mistakes (errors of intention), and the influence of workload on them. Results indicated that humans respond to conditions in which errors are expected by attempting to reduce the incidence of errors, and that adaptation to such conditions is a potent influence on human behavior in discretionary situations.

  20. Reducing Spreadsheet Errors

    DTIC Science & Technology

    2009-09-01

    Visual Basic for Applications (VBA) to improve spreadsheets. Programming and coding portions of a spreadsheet in VBA (especially iteration) can reduce ... effort as well as errors. Users unfamiliar with VBA may begin learning by "recording macros" in Excel. Microsoft's online tutorials ... (www.office.microsoft.com/en-us/excel) provide overviews of this and other VBA capabilities. 5) Thorough documentation of spreadsheet development and application is

  1. Error-correction coding

    NASA Technical Reports Server (NTRS)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  2. Surface temperature measurement errors

    SciTech Connect

    Keltner, N.R.; Beck, J.V.

    1983-05-01

    Mathematical models are developed for the response of surface mounted thermocouples on a thick wall. These models account for the significant causes of errors in both the transient and steady-state response to changes in the wall temperature. In many cases, closed form analytical expressions are given for the response. The cases for which analytical expressions are not obtained can be easily evaluated on a programmable calculator or a small computer.

  3. Medical errors--is total quality management for the battlefield desirable?

    PubMed

    Cohen, David J; Lisagor, Philip

    2005-11-01

    There has recently been a great deal of discussion in both the lay and the medical press regarding the incidence of errors that occur during medical practice. There have been many discussions of how quality control measures from industry can be applied to the health care system. Indeed, both civilian and "brick and mortar" military medical treatment facilities are adapting these techniques. It is important that we understand the principles behind Total Quality Management (TQM) as well as its techniques and limitations. TQM is based on limiting deviation from an accepted standard of practice. These principles may be as applicable to our military health care facilities in a field environment as they are to our fixed facilities, although the standards used for measurement may have to be modified to adapt to different constraints of environment and resources. TQM techniques can nonetheless be applied in virtually any facility to ensure the best possible care and outcomes for our soldiers.

  4. Market Acceptance of Smart Growth

    EPA Pesticide Factsheets

    This report finds that smart growth developments enjoy market acceptance because of stability in prices over time. Housing resales in smart growth developments often have greater appreciation than their conventional suburban counterparts.

  5. L-286 Acceptance Test Record

    SciTech Connect

    HARMON, B.C.

    2000-01-14

    This document provides a detailed account of how the acceptance testing was conducted for Project L-286, ''200E Area Sanitary Water Plant Effluent Stream Reduction''. The testing of the L-286 instrumentation system was conducted under the direct supervision

  6. Bayesian Error Estimation Functionals

    NASA Astrophysics Data System (ADS)

    Jacobsen, Karsten W.

    The challenge of approximating the exchange-correlation functional in Density Functional Theory (DFT) has led to the development of numerous different approximations of varying accuracy on different calculated properties. There is therefore a need for reliable estimation of prediction errors within the different approximation schemes to DFT. The Bayesian Error Estimation Functionals (BEEF) have been developed with this in mind. The functionals are constructed by fitting to experimental and high-quality computational databases for molecules and solids, including chemisorption and van der Waals systems. This leads to reasonably accurate general-purpose functionals with particular focus on surface science. The fitting procedure involves considerations on how to combine different types of data, and applies Tikhonov regularization and bootstrap cross validation. The methodology has been applied to construct GGA and metaGGA functionals with and without inclusion of long-ranged van der Waals contributions. The error estimation is made possible by generating not a single functional but a probability distribution of functionals, represented by a functional ensemble. The use of the functional ensemble is illustrated on compound heats of formation and by investigations of the reliability of calculated catalytic ammonia synthesis rates.

  7. Error analysis of optical correlators

    NASA Technical Reports Server (NTRS)

    Ma, Paul W.; Reid, Max B.; Downie, John D.

    1992-01-01

    With the growing interest in using binary phase only filters (BPOF) in optical correlators that are implemented on magnetooptic spatial light modulators, an understanding of the effect of errors in system alignment and optical components is critical in obtaining optimal system performance. We present simulations of optical correlator performance degradation in the presence of eight errors. We break these eight errors into three groups: 1) alignment errors, 2) errors due to a combination of component imperfections and alignment errors, and 3) errors which result solely from non-ideal components. Under the first group, we simulate errors in the distance from the object to the first principle plane of the transform lens, the distance from the second principle plane of the transform lens to the filter plane, and rotational misalignment of the input mask with the filter mask. Next we consider errors which result from a combination of alignment and component imperfections. These include errors in the transform lens, the phase compensation lens, and the inverse Fourier transform lens. Lastly we have the component errors resulting from the choice of spatial light modulator. These include contrast error and phase errors caused by the non-uniform flatness of the masks. The effects of each individual error are discussed, and the result of combining all eight errors under assumptions of reasonable tolerances and system parameters is also presented. Conclusions are drawn as to which tolerances are most critical for optimal system performance.

  8. We need to talk about error: causes and types of error in veterinary practice.

    PubMed

    Oxtoby, C; Ferguson, E; White, K; Mossop, L

    2015-10-31

    Patient safety research in human medicine has identified the causes and common types of medical error and subsequently informed the development of interventions which mitigate harm, such as the WHO's safe surgery checklist. There is no such evidence available to the veterinary profession. This study therefore aims to identify the causes and types of errors in veterinary practice, and presents an evidence-based system for their classification. Causes of error were identified from a retrospective record review of 678 claims to the profession's leading indemnity insurer and from nine focus groups (average N per group = 8) with vets, nurses and support staff, conducted using the critical incident technique. Reason's (2000) Swiss cheese model of error was used to inform the interpretation of the data. Types of error were extracted from 2978 claims records reported between the years 2009 and 2013. The major classes of error causation were identified, with mistakes involving surgery the most common type of error. The results were triangulated with findings from the medical literature and highlight the importance of cognitive limitations, deficiencies in non-technical skills and a systems approach to veterinary error.

  9. GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS

    EPA Science Inventory

    Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...

  10. The Location of Error: Reflections on a Research Project

    ERIC Educational Resources Information Center

    Cook, Devan

    2010-01-01

    Andrea Lunsford and Karen Lunsford conclude "Mistakes Are a Fact of Life: A National Comparative Study," a discussion of their research project exploring patterns of formal grammar and usage error in first-year writing, with an invitation to "conduct a local version of this study." The author was eager to accept their invitation; learning and…

  12. Professional liability insurance and medical error disclosure.

    PubMed

    McLennan, Stuart; Shaw, David; Leu, Agnes; Elger, Bernice

    2015-01-01

    To examine medicolegal stakeholders' views about the impact of professional liability insurance in Switzerland on medical error disclosure. Purposive sample of 23 key medicolegal stakeholders in Switzerland from a range of fields between October 2012 and February 2013. Data were collected via individual, face-to-face interviews using a researcher-developed semi-structured interview guide. Interviews were transcribed and analysed using conventional content analysis. Participants, particularly those with a legal or quality background, reported that concerns relating to professional liability insurance often inhibited communication with patients after a medical error. Healthcare providers were reported to be particularly concerned about losing their liability insurance cover for apologising to harmed patients. It was reported that the attempt to limit the exchange of information and communication could lead to a conflict with patient rights law. Participants reported that hospitals could, and in some cases are, moving towards self-insurance approaches, which could increase flexibility regarding error communication. The reported current practice of at least some liability insurance companies in Switzerland of inhibiting communication with harmed patients after an error is concerning and requires further investigation. With a new ethic of transparency regarding medical errors now prevailing internationally, this approach is increasingly being perceived to be misguided. A move away from hospitals relying solely on liability insurance may allow greater transparency after errors. Legislation preventing the loss of liability insurance coverage for apologising to harmed patients should also be considered.

  13. Measuring nursing error: psychometrics of MISSCARE and practice and professional issues items.

    PubMed

    Castner, Jessica; Dean-Baar, Susan

    2014-01-01

    Health care error causes inpatient morbidity and mortality. This study pooled the items from preexisting nursing error questionnaires and tested the psychometric properties of modified subscales from these item combinations. Items from MISSCARE Part A, Part B, and the Practice and Professional Issues were collected from 556 registered nurses. Principal component analyses were completed for items measuring (a) nursing error and (b) antecedents to error. Acceptable factor loadings and internal consistency reliability (.70-.89) were found for subscales Acute Care Missed Nursing Care, Errors of Commission, Workload, Supplies Problems, and Communication Problems. The findings support the use of 5 subscales to measure nursing error and antecedents to error in various inpatient unit types with acceptable validity and reliability. The Activities of Daily Living (ADL) Omissions subscale is not appropriate for all inpatient unit types.
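
    Internal consistency reliability of the kind quoted above (.70-.89) is commonly reported as Cronbach's alpha. A minimal sketch of that computation, on simulated responses rather than the MISSCARE data, is shown below; the item count, sample size and noise level are assumptions for illustration only.

      import numpy as np

      def cronbach_alpha(items):
          """items: 2-D array, rows = respondents, columns = subscale items."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]                         # number of items
          item_vars = items.var(axis=0, ddof=1)      # variance of each item
          total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
          return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

      # Simulated 5-item subscale answered by 200 respondents.
      rng = np.random.default_rng(1)
      true_score = rng.normal(size=(200, 1))
      responses = true_score + 0.8 * rng.normal(size=(200, 5))
      print(f"alpha = {cronbach_alpha(responses):.2f}")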

  14. Diagnostic errors in pediatric radiology.

    PubMed

    Taylor, George A; Voss, Stephan D; Melvin, Patrice R; Graham, Dionne A

    2011-03-01

    Little is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean = 1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean = 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean = 1.2 errors/case), all of which were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement.

  15. Errors and mistakes in breast ultrasound diagnostics

    PubMed Central

    Jakubowski, Wiesław; Migda, Bartosz

    2012-01-01

    Sonomammography is often the first additional examination performed in the diagnostics of breast diseases. The development of ultrasound imaging techniques, particularly the introduction of high frequency transducers, matrix transducers, harmonic imaging and finally, elastography, influenced the improvement of breast disease diagnostics. Nevertheless, as in each imaging method, there are errors and mistakes resulting from the technical limitations of the method, breast anatomy (fibrous remodeling), insufficient sensitivity and, in particular, specificity. Errors in breast ultrasound diagnostics can be divided into impossible to be avoided and potentially possible to be reduced. In this article the most frequently made errors in ultrasound have been presented, including the ones caused by the presence of artifacts resulting from volumetric averaging in the near and far field, artifacts in cysts or in dilated lactiferous ducts (reverberations, comet tail artifacts, lateral beam artifacts), improper setting of general enhancement or time gain curve or range. Errors dependent on the examiner, resulting in the wrong BIRADS-usg classification, are divided into negative and positive errors. The sources of these errors have been listed. The methods of minimization of the number of errors made have been discussed, including the ones related to the appropriate examination technique, taking into account data from case history and the use of the greatest possible number of additional options such as: harmonic imaging, color and power Doppler and elastography. In the article examples of errors resulting from the technical conditions of the method have been presented, and those dependent on the examiner which are related to the great diversity and variation of ultrasound images of pathological breast lesions. PMID:26675358

  16. Errors and mistakes in breast ultrasound diagnostics.

    PubMed

    Jakubowski, Wiesław; Dobruch-Sobczak, Katarzyna; Migda, Bartosz

    2012-09-01

    Sonomammography is often the first additional examination performed in the diagnostics of breast diseases. The development of ultrasound imaging techniques, particularly the introduction of high frequency transducers, matrix transducers, harmonic imaging and finally, elastography, influenced the improvement of breast disease diagnostics. Nevertheless, as in each imaging method, there are errors and mistakes resulting from the technical limitations of the method, breast anatomy (fibrous remodeling), insufficient sensitivity and, in particular, specificity. Errors in breast ultrasound diagnostics can be divided into impossible to be avoided and potentially possible to be reduced. In this article the most frequently made errors in ultrasound have been presented, including the ones caused by the presence of artifacts resulting from volumetric averaging in the near and far field, artifacts in cysts or in dilated lactiferous ducts (reverberations, comet tail artifacts, lateral beam artifacts), improper setting of general enhancement or time gain curve or range. Errors dependent on the examiner, resulting in the wrong BIRADS-usg classification, are divided into negative and positive errors. The sources of these errors have been listed. The methods of minimization of the number of errors made have been discussed, including the ones related to the appropriate examination technique, taking into account data from case history and the use of the greatest possible number of additional options such as: harmonic imaging, color and power Doppler and elastography. In the article examples of errors resulting from the technical conditions of the method have been presented, and those dependent on the examiner which are related to the great diversity and variation of ultrasound images of pathological breast lesions.

  17. Biasing errors and corrections

    NASA Technical Reports Server (NTRS)

    Meyers, James F.

    1991-01-01

    The dependence of laser velocimeter measurement rate on flow velocity is discussed. Investigations outlining that any dependence is purely statistical, and is nonstationary both spatially and temporally, are described. Main conclusions drawn are that the times between successive particle arrivals should be routinely measured and the calculation of the velocity data rate correlation coefficient should be performed to determine if a dependency exists. If none is found, accept the data ensemble as an independent sample of the flow. If a dependency is found, the data should be modified to obtain an independent sample. Universal correcting procedures should never be applied because their underlying assumptions are not valid.
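
    The recommended check can be sketched directly: measure the times between successive particle arrivals, convert them to an instantaneous data rate, and compute the velocity/data-rate correlation coefficient. The sketch below uses synthetic data and an illustrative acceptance threshold; neither comes from the paper.

      import numpy as np

      # Synthetic laser-velocimeter record: per-particle velocities (m/s) and the
      # measured times between successive particle arrivals (s).
      rng = np.random.default_rng(2)
      velocity = 20.0 + 2.0 * rng.standard_normal(5000)
      inter_arrival = rng.exponential(scale=1e-4, size=5000)

      # Instantaneous data rate is the reciprocal of the inter-arrival time.
      data_rate = 1.0 / inter_arrival

      # A correlation near zero suggests the ensemble can be accepted as an
      # independent sample of the flow; otherwise the data should be modified.
      rho = np.corrcoef(velocity, data_rate)[0, 1]
      print(f"velocity/data-rate correlation: {rho:+.3f}")
      print("accept ensemble" if abs(rho) < 0.05 else "dependence detected: modify ensemble")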

  18. Quantum Error Correction with Biased Noise

    NASA Astrophysics Data System (ADS)

    Brooks, Peter

    Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security. At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level. In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations. In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction. In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates, and a second rate for errors in the distilled

  19. Evaluating the acceptability of recreation rationing policies used on rivers

    NASA Astrophysics Data System (ADS)

    Wikle, Thomas A.

    1991-05-01

    Research shows that users and managers have different perceptions of acceptable policies that ration or limit recreational use on rivers. The acceptability of seven rationing policies was evaluated using Thurstone's method of paired comparisons, which provided a rank ordering of advance reservation, lottery, first-come/first-served, merit, priority for first time users, zoning, and price. Chi-squared tests were used to determine if users and managers have significantly different levels of acceptability for the policies. River users and managers were found to be significantly different according to their evaluation of advance reservation, zoning, and merit. The results also indicated that river users collectively divide the policies into three categories corresponding to high, moderate, and low levels of acceptability, while river managers divide the policies into two levels corresponding to acceptable and unacceptable.
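
    Thurstone's method of paired comparisons (Case V) reduces to a short calculation: convert the matrix of pairwise preference proportions to unit normal deviates and average them to obtain an interval-scale value per policy. The proportions and policy names below are invented for illustration; they are not Wikle's data.

      import numpy as np
      from scipy.stats import norm

      # P[i, j] = proportion of respondents finding policy i more acceptable than
      # policy j (diagonal fixed at 0.5, and P[i, j] + P[j, i] = 1).
      policies = ["reservation", "lottery", "first-come", "price"]
      P = np.array([
          [0.50, 0.71, 0.64, 0.87],
          [0.29, 0.50, 0.45, 0.79],
          [0.36, 0.55, 0.50, 0.81],
          [0.13, 0.21, 0.19, 0.50],
      ])

      # Thurstone Case V scaling: z-transform the proportions and average by row.
      scale = norm.ppf(P).mean(axis=1)
      for name, s in sorted(zip(policies, scale), key=lambda t: -t[1]):
          print(f"{name:12s} {s:+.2f}")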

  20. Detecting Errors in Programs

    DTIC Science & Technology

    1979-02-01

    DETECTING ERRORS IN PROGRAMS. Lloyd D. Fosdick. ...from a finite set of tests [35,36]. Recently Howden [37] presented a result showing that for a particular class of Lindenmayer grammars it was possible... 37. Howden, W.E.: Lindenmayer grammars and symbolic testing. Information Processing Letters 7, 1 (Jan. 1978), 36-39.

  1. Laser Phase Errors in Seeded FELs

    SciTech Connect

    Ratner, D.; Fry, A.; Stupakov, G.; White, W.; /SLAC

    2012-03-28

    Harmonic seeding of free electron lasers has attracted significant attention from the promise of transform-limited pulses in the soft X-ray region. Harmonic multiplication schemes extend seeding to shorter wavelengths, but also amplify the spectral phase errors of the initial seed laser, and may degrade the pulse quality. In this paper we consider the effect of seed laser phase errors in high gain harmonic generation and echo-enabled harmonic generation. We use simulations to confirm analytical results for the case of linearly chirped seed lasers, and extend the results for arbitrary seed laser envelope and phase.

  2. Human error and the search for blame

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1989-01-01

    Human error is a frequent topic in discussions about risks in using computer systems. A rational analysis of human error leads through the consideration of mistakes to standards that designers use to avoid mistakes that lead to known breakdowns. The irrational side, however, is more interesting. It conditions people to think that breakdowns are inherently wrong and that there is ultimately someone who is responsible. This leads to a search for someone to blame which diverts attention from: learning from the mistakes; seeing the limitations of current engineering methodology; and improving the discourse of design.

  3. Speech Errors, Error Correction, and the Construction of Discourse.

    ERIC Educational Resources Information Center

    Linde, Charlotte

    Speech errors have been used in the construction of production models of the phonological and semantic components of language, and for a model of interactional processes. Errors also provide insight into how speakers plan discourse and syntactic structure. Different types of discourse exhibit different types of error. The present data are taken…

  4. Validating new toxicology tests for regulatory acceptance.

    PubMed

    Zeiger, E; Stokes, W S

    1998-02-01

    Before a new or revised toxicology test is considered acceptable for safety evaluation of new substances, the test users and the industrial and regulatory decision makers must feel comfortable with it, and the decisions it supports. Comfort with, and the acceptance of, a new test comes after knowing that it has been validated for its proposed use. The validation process is designed to determine the operational characteristics of a test, that is, its reliability and relevance, in addition to its strengths and limitations. The reliability of a test is measured by its reproducibility. Its relevance is judged by its mechanistic relationship to the health effects of concern, and its ability to predict or identify those effects. The U.S. government has recently formed the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) to work with federal agencies and test developers to coordinate the evaluation and adoption of new test methods. The ICCVAM will provide guidance to agencies and other stakeholders on criteria and processes for development, validation, and acceptance of tests; coordinate technical reviews of proposed new tests of interagency interest; facilitate information sharing among agencies; and serve as an interagency resource and communications link with parties outside of the federal government on matters of test method validation.

  5. Validating new toxicology tests for regulatory acceptance

    PubMed

    Zeiger; Stokes

    1998-02-01

    Before a new or revised toxicology test is considered acceptable for safety evaluation of new substances, the test users and the industrial and regulatory decision makers must feel comfortable with it, and the decisions it supports. Comfort with, and the acceptance of, a new test comes after knowing that it has been validated for its proposed use. The validation process is designed to determine the operational characteristics of a test, that is, its reliability and relevance, in addition to its strengths and limitations. The reliability of a test is measured by its reproducibility. Its relevance is judged by its mechanistic relationship to the health effects of concern, and its ability to predict or identify those effects. The U.S. government has recently formed the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) to work with federal agencies and test developers to coordinate the evaluation and adoption of new test methods. The ICCVAM will provide guidance to agencies and other stakeholders on criteria and processes for development, validation, and acceptance of tests; coordinate technical reviews of proposed new tests of interagency interest; facilitate information sharing among agencies; and serve as an interagency resource and communications link with parties outside of the federal government on matters of test method validation. Copyright 1998 Academic Press.

  6. Assessment of relative error sources in IR DIAL measurement accuracy

    NASA Technical Reports Server (NTRS)

    Menyuk, N.; Killinger, D. K.

    1983-01-01

    An assessment is made of the role the various error sources play in limiting the accuracy of infrared differential absorption lidar measurements used for the remote sensing of atmospheric species. An overview is presented of the relative contribution of each error source including the inadequate knowledge of the absorption coefficient, differential spectral reflectance, and background interference as well as measurement errors arising from signal fluctuations.

  7. Inborn Errors in Immunity

    PubMed Central

    Lionakis, M.S.; Hajishengallis, G.

    2015-01-01

    In recent years, the study of genetic defects arising from inborn errors in immunity has resulted in the discovery of new genes involved in the function of the immune system and in the elucidation of the roles of known genes whose importance was previously unappreciated. With the recent explosion in the field of genomics and the increasing number of genetic defects identified, the study of naturally occurring mutations has become a powerful tool for gaining mechanistic insight into the functions of the human immune system. In this concise perspective, we discuss emerging evidence that inborn errors in immunity constitute real-life models that are indispensable both for the in-depth understanding of human biology and for obtaining critical insights into common diseases, such as those affecting oral health. In the field of oral mucosal immunity, through the study of patients with select gene disruptions, the interleukin-17 (IL-17) pathway has emerged as a critical element in oral immune surveillance and susceptibility to inflammatory disease, with disruptions in the IL-17 axis now strongly linked to mucosal fungal susceptibility, whereas overactivation of the same pathways is linked to inflammatory periodontitis. PMID:25900229

  8. Errors in CT colonography.

    PubMed

    Trilisky, Igor; Ward, Emily; Dachman, Abraham H

    2015-10-01

    CT colonography (CTC) is a colorectal cancer screening modality which is becoming more widely implemented and has shown polyp detection rates comparable to those of optical colonoscopy. CTC has the potential to improve population screening rates due to its minimal invasiveness, no sedation requirement, potential for reduced cathartic examination, faster patient throughput, and cost-effectiveness. Proper implementation of a CTC screening program requires careful attention to numerous factors, including patient preparation prior to the examination, the technical aspects of image acquisition, and post-processing of the acquired data. A CTC workstation with dedicated software is required with integrated CTC-specific display features. Many workstations include computer-aided detection software which is designed to decrease errors of detection by detecting and displaying polyp-candidates to the reader for evaluation. There are several pitfalls which may result in false-negative and false-positive reader interpretation. We present an overview of the potential errors in CTC and a systematic approach to avoid them.

  9. From requirements to acceptance tests

    NASA Technical Reports Server (NTRS)

    Baize, Lionel; Pasquier, Helene

    1993-01-01

    From user requirements definition to accepted software system, the software project management wants to be sure that the system will meet the requirements. For the development of a telecommunications satellite Control Centre, C.N.E.S. has used new rules to make the use of the tracing matrix easier. From Requirements to Acceptance Tests, each item of a document must have an identifier. A unique matrix traces the system and allows the tracking of the consequences of a change in the requirements. A tool has been developed to import documents into a relational database. Each record of the database corresponds to an item of a document; the access key is the item identifier. The tracing matrix is also processed, automatically providing links between the different documents. It enables traced items to be read on the same screen. For example, one can read simultaneously the User Requirements items, the corresponding Software Requirements items and the Acceptance Tests.

  10. To accept, or not to accept, that is the question: citizen reactions to rationing

    PubMed Central

    Broqvist, Mari; Garpenby, Peter

    2014-01-01

    Abstract Background  The publicly financed health service in Sweden has come under increasing pressure, forcing policy makers to consider restrictions. Objective  To describe different perceptions of rationing, in particular, what citizens themselves believe influences their acceptance of having to stand aside for others in a public health service. Design  Qualitative interviews, analysed by phenomenography, describing perceptions by different categories. Setting and participants  Purposeful sample of 14 Swedish citizens, based on demographic criteria and attitudes towards allocation in health care. Results  Participants expressed high awareness of limitations in public resources and the necessity of rationing. Acceptance of rationing could increase or decrease, depending on one’s (i) awareness that healthcare resources are limited, (ii) endorsement of universal health care, (iii) knowledge and acceptance of the principles guiding rationing and (iv) knowledge about alternatives to public health services. Conclusions  This study suggests that decision makers should be more explicit in describing the dilemma of resource limitations in a publicly funded healthcare system. Openness enables citizens to gain the insight to make informed decisions, i.e. to use public services or to ‘opt out’ of the public sector solution if they consider rationing decisions unacceptable. PMID:22032636

  11. Congress abstracts: preparing abstracts for submission and successful acceptance.

    PubMed

    Curzon, M E J; Cleaton-Jones, P E

    2011-12-01

    To provide guidance on writing congress abstracts for submission and how to increase the chance of acceptance. There is increasing competition for submitted abstracts to be accepted by scientific congresses. Because the facilities or size of a congress may be limited a selection process is often used based upon the quality of abstracts submitted. Accordingly, it is crucial for a researcher to prepare an abstract very carefully to ensure the best chance of acceptance. The approaches to preparing an abstract and the techniques for enhancing quality are reviewed. Suggestions and guidance are given to ensure the production of a well structured, informative and scientifically sound abstract.

  12. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
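
    The propagation-of-error step described above amounts to computing the variance of a signed sum of balance terms, including covariance (interaction) contributions. The sketch below uses made-up daily terms and uncertainties purely to illustrate the arithmetic; it is not the Skylab data, although the dominant body-mass term mirrors the abstract's conclusion.

      import numpy as np

      # Hypothetical daily water-balance terms (mL): sign in the balance equation
      # and measurement standard deviation.
      terms = {
          "intake":      (+1,  25.0),
          "metabolic":   (+1,  10.0),
          "urine":       (-1,  20.0),
          "evaporative": (-1,  30.0),
          "mass_change": (-1, 120.0),   # dominant term, as in the Skylab analysis
      }
      signs = np.array([s for s, _ in terms.values()], dtype=float)
      sds = np.array([sd for _, sd in terms.values()])

      # Assume a small covariance between urine and evaporative loss only.
      cov = np.diag(sds ** 2)
      cov[2, 3] = cov[3, 2] = 0.1 * sds[2] * sds[3]

      # Propagation of error for a linear combination: var = s^T C s.
      balance_var = signs @ cov @ signs
      print(f"balance SD = {np.sqrt(balance_var):.1f} mL")
      print(f"share of variance from body-mass term = {sds[-1] ** 2 / balance_var:.0%}")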

  13. Error Analysis in Mathematics Education.

    ERIC Educational Resources Information Center

    Rittner, Max

    1982-01-01

    The article reviews the development of mathematics error analysis as a means of diagnosing students' cognitive reasoning. Errors specific to addition, subtraction, multiplication, and division are described, and suggestions for remediation are provided. (CL)

  14. Prospective issues for error detection.

    PubMed

    Blavier, Adélaïde; Rouy, Emmanuelle; Nyssen, Anne-Sophie; de Keyser, Véronique

    2005-06-10

    From the literature on error detection, the authors select several concepts relating error detection mechanisms and prospective memory features. They emphasize the central role of intention in the classification of the errors into slips/lapses/mistakes, in the error handling process and in the usual distinction between action-based and outcome-based detection. Intention is again a core concept in their investigation of prospective memory theory, where they point out the contribution of intention retrievals, intention persistence and output monitoring in the individual's possibilities for detecting their errors. The involvement of the frontal lobes in prospective memory and in error detection is also analysed. From the chronology of a prospective memory task, the authors finally suggest a model for error detection also accounting for neural mechanisms highlighted by studies on error-related brain activity.

  15. Acceptance of suicide in Moscow.

    PubMed

    Jukkala, Tanya; Mäkinen, Ilkka Henrik

    2011-08-01

    Attitudes concerning the acceptability of suicide have been emphasized as being important for understanding why levels of suicide mortality vary in different societies across the world. While Russian suicide mortality levels are among the highest in the world, not much is known about attitudes to suicide in Russia. This study aims to obtain a greater understanding about the levels and correlates of suicide acceptance in Russia. Data from a survey of 1,190 Muscovites were analysed using logistic regression techniques. Suicide acceptance was examined among respondents in relation to social, economic and demographic factors as well as in relation to attitudes towards other moral questions. The majority of interviewees (80%) expressed condemnatory attitudes towards suicide, although men were slightly less condemning. The young, the higher educated, and the non-religious were more accepting of suicide (OR > 2). However, the two first-mentioned effects disappeared when controlling for tolerance, while a positive effect of lower education on suicide acceptance appeared. When controlling for other independent variables, no significant effects were found on suicide attitudes by gender, one's current family situation, or by health-related or economic problems. The most important determinants of the respondents' attitudes towards suicide were their tolerance regarding other moral questions and their religiosity. More tolerant views, in general, also seemed to explain the more accepting views towards suicide among the young and the higher educated. Differences in suicide attitudes between the sexes seemed to be dependent on differences in other factors rather than on gender per se. Suicide attitudes also seemed to be more affected by one's earlier experiences in terms of upbringing and socialization than by events and processes later in life.

  16. Error Sources in Asteroid Astrometry

    NASA Technical Reports Server (NTRS)

    Owen, William M., Jr.

    2000-01-01

    Asteroid astrometry, like any other scientific measurement process, is subject to both random and systematic errors, not all of which are under the observer's control. To design an astrometric observing program or to improve an existing one requires knowledge of the various sources of error, how different errors affect one's results, and how various errors may be minimized by careful observation or data reduction techniques.

  17. [The error, source of learning].

    PubMed

    Joyeux, Stéphanie; Bohic, Valérie

    2016-05-01

    The error itself is not recognised as a fault. It is the intentionality which differentiates between an error and a fault. An error is unintentional while a fault is a failure to respect known rules. The risk of error is omnipresent in health institutions. Public authorities have therefore set out a series of measures to reduce this risk. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  18. Assessment of errors and misused statistics in dental research.

    PubMed

    Kim, Jee Soo Jay; Kim, Dong-Kie; Hong, Suk Jin

    2011-06-01

      The goal of the study is to assess the level of misused statistics or statistical errors in dental research, and identify the major source of statistical errors prevalent in dental literature.   A total of 418 papers, published between 1995 and 2009, was randomly selected from 10 well established dental journals. Every paper in the sample underwent careful scrutiny for the correct use of statistics. Of these, there were 111 papers for which we were unable to judge whether or not the use of statistics was appropriate, due to insufficient information presented in the paper; leaving 307 papers for this study. A paper with at least one statistical error has been classified as 'Misuse of statistics', and a paper without any statistical errors as 'Acceptable'. Statistical errors also included misinterpretation of statistical analytical results.   Our investigation showed that 149 were acceptable and 158 contained at least one misuse of statistics or a statistical error.   This gave the misuse rate of 51.5%, which is slightly lower than that reported by several studies completed for the medical literature. © 2011 FDI World Dental Federation.

  19. Sputtering performance of the TFCX limiter

    SciTech Connect

    Brooks, J.N.

    1984-09-01

    The sputtering performance of the proposed TFCX pumped limiter was analyzed using the REDEP computer code. Erosion, redeposition, surface shape and heat flux changes with time, and plasma contamination issues were examined. A carbon coated limiter was found to give acceptable sputtering performance over the TFCX lifetime if, and only if, acceptable redeposition properties of carbon are obtained.

  20. Feature Referenced Error Correction Apparatus.

    DTIC Science & Technology

    A feature referenced error correction apparatus utilizing the multiple images of the interstage level image format to compensate for positional...images and by the generation of an error correction signal in response to the sub-frame registration errors. (Author)

  1. Measurement Error. For Good Measure....

    ERIC Educational Resources Information Center

    Johnson, Stephen; Dulaney, Chuck; Banks, Karen

    No test, however well designed, can measure a student's true achievement because numerous factors interfere with the ability to measure achievement. These factors are sources of measurement error, and the goal in creating tests is to have as little measurement error as possible. Error can result from the test design, factors related to individual…

  2. The Rules of Spelling Errors.

    ERIC Educational Resources Information Center

    Yannakoudakis, E. J.; Fawthrop, D.

    1983-01-01

    Results of analysis of 1,377 spelling error forms including three categories of spelling errors (consonantal, vowel, and sequential) demonstrate that majority of spelling errors are highly predictable when set of predefined rules based on phonological and sequential considerations are followed algorithmically. Eleven references and equivalent…

  3. Error Patterns in Problem Solving.

    ERIC Educational Resources Information Center

    Babbitt, Beatrice C.

    Although many common problem-solving errors within the realm of school mathematics have been previously identified, a compilation of such errors is not readily available within learning disabilities textbooks, mathematics education texts, or teacher's manuals for school mathematics texts. Using data on error frequencies drawn from both the Fourth…

  4. Errors in causal inference: an organizational schema for systematic error and random error.

    PubMed

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.
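
    The distinction the schema draws can be made concrete with a small simulation (not from the paper): random error from sampling variability shrinks as the sample grows, while structural error from an unadjusted confounder does not. All numbers below are assumptions chosen for illustration.

      import numpy as np

      rng = np.random.default_rng(3)

      def crude_effect(n):
          """Crude exposed-minus-unexposed contrast with an unadjusted confounder U."""
          u = rng.normal(size=n)                                     # confounder
          x = (rng.random(n) < 1 / (1 + np.exp(-u))).astype(float)   # exposure depends on U
          y = 1.0 * x + 1.0 * u + rng.normal(size=n)                 # true causal effect = 1.0
          return y[x == 1].mean() - y[x == 0].mean()

      for n in (100, 10_000, 1_000_000):
          estimates = [crude_effect(n) for _ in range(20)]
          print(f"n={n:>9,d}  mean={np.mean(estimates):.3f}  SD={np.std(estimates):.3f}")
      # The SD (random error) vanishes asymptotically; the gap between the mean and
      # the true effect of 1.0 (systematic error from confounding) does not.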

  5. Beam lifetime and limitations during low-energy RHIC operation

    SciTech Connect

    Fedotov, A.V.; Bai, M.; Blaskiewicz, M.; Fischer, W.; Kayran, D.; Montag, C.; Satogata, T.; Tepikian, S.; Wang, G.

    2011-03-28

    The low-energy physics program at the Relativistic Heavy Ion Collider (RHIC), motivated by a search for the QCD phase transition critical point, requires operation at low energies. At these energies, large nonlinear magnetic field errors and large beam sizes produce low beam lifetimes. A variety of beam dynamics effects such as Intrabeam Scattering (IBS), space charge and beam-beam forces also contribute. All these effects are important for understanding beam lifetime limitations in RHIC at low energies. During the low-energy RHIC physics run in May-June 2010 at beam γ = 6.1 and γ = 4.1, gold beam lifetimes were measured for various values of space-charge tune shifts, transverse acceptance limitation by collimators, synchrotron tunes and RF voltage. This paper summarizes our observations and initial findings.

  6. A syntax-preserving error resilience tool for JPEG 2000 based on error correcting arithmetic coding.

    PubMed

    Grangetto, Marco; Magli, Enrico; Olmo, Gabriella

    2006-04-01

    JPEG 2000 is the novel ISO standard for image and video coding. Besides its improved coding efficiency, it also provides a few error resilience tools in order to limit the effect of errors in the codestream, which can occur when the compressed image or video data are transmitted over an error-prone channel, as typically occurs in wireless communication scenarios. However, for very harsh channels, these tools often do not provide an adequate degree of error protection. In this paper, we propose a novel error-resilience tool for JPEG 2000, based on the concept of ternary arithmetic coders employing a forbidden symbol. Such coders introduce a controlled degree of redundancy during the encoding process, which can be exploited at the decoder side in order to detect and correct errors. We propose a maximum likelihood and a maximum a posteriori context-based decoder, specifically tailored to the JPEG 2000 arithmetic coder, which are able to carry out both hard and soft decoding of a corrupted code-stream. The proposed decoder extends the JPEG 2000 capabilities in error-prone scenarios, without violating the standard syntax. Extensive simulations on video sequences show that the proposed decoders largely outperform the standard in terms of PSNR and visual quality.

  7. Error-finding and error-correcting methods for the start-up of the SLC

    SciTech Connect

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.; Selig, L.J.

    1987-02-01

    During the commissioning of an accelerator, storage ring, or beam transfer line, one of the important tasks of an accelerator physicist is to check the first-order optics of the beam line and to look for errors in the system. Conceptually, it is important to distinguish between techniques for finding the machine errors that are the cause of the problem and techniques for correcting the beam errors that are the result of the machine errors. In this paper we will limit our presentation to certain applications of these two methods for finding or correcting beam-focus errors and beam-kick errors that affect the profile and trajectory of the beam respectively. Many of these methods have been used successfully in the commissioning of SLC systems. In order not to waste expensive beam time we have developed and used a beam-line simulator to test the ideas that have not been tested experimentally. To save valuable physicists' time we have further automated the beam-kick error-finding procedures by adopting methods from the field of artificial intelligence to develop a prototype expert system. Our experience with this prototype has demonstrated the usefulness of expert systems in solving accelerator control problems. The expert system is able to find the same solutions as an expert physicist but in a more systematic fashion. The methods used in these procedures and some of the recent applications will be described in this paper.

  8. 5 CFR 841.505 - Correction of error.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 2 2011-01-01 2011-01-01 false Correction of error. 841.505 Section 841... Contributions § 841.505 Correction of error. (a) When it is determined that an agency has paid less than the... whatsoever, including but not limited to, coverage decisions, correction of the percentage applicable or...

  9. 5 CFR 841.505 - Correction of error.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Correction of error. 841.505 Section 841... Contributions § 841.505 Correction of error. (a) When it is determined that an agency has paid less than the... whatsoever, including but not limited to, coverage decisions, correction of the percentage applicable or...

  10. Young Children Make Scale Errors when Playing with Dolls

    ERIC Educational Resources Information Center

    Ware, Elizabeth A.; Uttal, David H.; Wetter, Emily K.; DeLoache, Judy S.

    2006-01-01

    Prior research (DeLoache, Uttal & Rosengren, 2004) has documented that 18- to 30-month-olds occasionally make scale errors: they attempt to fit their bodies into or onto miniature objects (e.g. a chair) that are far too small for them. The current study explores whether scale errors are limited to actions that directly involve the child's…

  11. On typographical errors.

    PubMed

    Hamilton, J W

    1993-09-01

    In his overall assessment of parapraxes in 1901, Freud included typographical mistakes but did not elaborate on or study this subject nor did he have anything to say about it in his later writings. This paper lists textual errors from a variety of current literary sources and explores the dynamic importance of their execution and the failure to make necessary corrections during the editorial process. While there has been a deemphasis of the role of unconscious determinants in the genesis of all slips as a result of recent findings in cognitive psychology, the examples offered suggest that, with respect to motivation, lapses in compulsivity contribute to their original commission while thematic compliance and voyeuristic issues are important in their not being discovered prior to publication.

  12. Beta systems error analysis

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was simultaneously employed. The results of these two methods differed by slightly less than an order of magnitude. The measurement uncertainties or other errors in the results of the two methods are examined.

  13. Comparison of 3 options for choosing control limits in biochemistry testing.

    PubMed

    Manzocchi, Simone; Furman, Erika; Freeman, Kathleen

    2017-03-01

    The purpose of statistical quality control (QC) is to provide peace of mind with regard to the production of results that are suitable for analytically sound clinical interpretation and making reliable decisions about patient diagnosis, monitoring, and prognosis. In this study, we compared 3 options for choosing control limits for biochemistry testing. The comparison focuses on the probability of error detection (Ped) and probability of false rejection (Pfr) achievable for a veterinary biochemical analyzer using the following 3 combinations: the quality control material (QCM) manufacturer's acceptable ranges; a standard 12s rule customized for the instrument's observed performance; and candidate rules selected for the instrument's observed performance using a computerized program (EZrules). For assessing customized QC, we used mean, SD, CV, bias, total error, and sigma metrics calculated from 3 months of control measurements on a laboratory biochemical analyzer, for 24 commonly used analytes, on 2 QCM levels. Given the desirable combination of high Ped (> 90%) and low Pfr (≤ 5%), the candidate rules selected by the computerized program (EZrules) provided the best performance combinations. The present work shows that acceptable QC performance can be achieved by basing the QC on acceptable ranges customized to the achievable performance of an individual instrument. The QC performance is maximized by the application of candidate rules based on customized ranges obtained from a computerized QC tool, providing the ability to achieve the highest Ped and acceptably low Pfr values. © 2017 American Society for Veterinary Clinical Pathology.
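
    The customization step can be sketched as two small calculations: a sigma metric built from allowable total error, bias and the instrument's observed CV, and control limits expressed in multiples of the instrument's own SD. The analyte, numbers and the simple 1-2s warning / 1-3s rejection rule below are illustrative assumptions, not values or rules from the study.

      def sigma_metric(tea_pct, bias_pct, cv_pct):
          """Sigma metric = (allowable total error - |bias|) / CV, all in percent."""
          return (tea_pct - abs(bias_pct)) / cv_pct

      # Illustrative observed performance for one analyte on one QCM level.
      mean, sd = 4.20, 0.08                      # QC mean and SD in mmol/L
      cv_pct = sd / mean * 100
      print(f"sigma = {sigma_metric(tea_pct=10.0, bias_pct=1.5, cv_pct=cv_pct):.1f}")

      def evaluate(results, mean, sd):
          """Flag QC results against 1-2s (warning) and 1-3s (reject) limits."""
          for value in results:
              z = (value - mean) / sd
              status = "REJECT (1-3s)" if abs(z) > 3 else "warning (1-2s)" if abs(z) > 2 else "in control"
              print(f"{value:5.2f}  z={z:+.2f}  {status}")

      evaluate([4.18, 4.31, 4.02, 4.49], mean, sd)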

  14. On the main Errors underlying Statistical Physics

    NASA Astrophysics Data System (ADS)

    Kalanov, Temur Z.

    2002-04-01

    _nm ≡ B_mn ≡ P_nm^{1,0} (where P_nm^{q+1,q} is the (m, q) → (n, q+1) transition probability in unit time; the quantum numbers q, q+1 and m, n characterize the energetic states of the photon gas and of the molecule, respectively). Thus, the generally accepted basis of statistical physics includes essential errors that are due to violation of the laws of logic. Correction of these errors opens a way to unitarization of the principles of statistical physics and physical kinetics. (A more detailed consideration is given in the dissertation [T.Z. Kalanov, “The correct quantum-statistical description of the ideal systems within the framework of the master equation”, Tashkent, 1993]).

  15. Measuring Cyclic Error in Laser Heterodyne Interferometers

    NASA Technical Reports Server (NTRS)

    Ryan, Daniel; Abramovici, Alexander; Zhao, Feng; Dekens, Frank; An, Xin; Azizi, Alireza; Chapsky, Jacob; Halverson, Peter

    2010-01-01

    An improved method and apparatus have been devised for measuring cyclic errors in the readouts of laser heterodyne interferometers that are configured and operated as displacement gauges. The cyclic errors arise as a consequence of mixing of spurious optical and electrical signals in beam launchers that are subsystems of such interferometers. The conventional approach to measurement of cyclic error involves phase measurements and yields values precise to within about 10 pm over air optical paths at laser wavelengths in the visible and near infrared. The present approach, which involves amplitude measurements instead of phase measurements, yields values precise to about 0.1 pm, about 100 times the precision of the conventional approach. In a displacement gauge of the type of interest here, the laser heterodyne interferometer is used to measure any change in distance along an optical axis between two corner-cube retroreflectors. One of the corner-cube retroreflectors is mounted on a piezoelectric transducer, which is used to introduce a low-frequency periodic displacement that can be measured by the gauges. The transducer is excited at a frequency of 9 Hz by a triangular waveform to generate a 9-Hz triangular-wave displacement having an amplitude of 25 microns. The displacement gives rise to both amplitude and phase modulation of the heterodyne signals in the gauges. The modulation includes cyclic error components, and the magnitude of the cyclic-error component of the phase modulation is what one needs to measure in order to determine the magnitude of the cyclic displacement error. The precision attainable in the conventional (phase measurement) approach to measuring cyclic error is limited because the phase measurements are af-

  16. Imaginary Companions and Peer Acceptance

    ERIC Educational Resources Information Center

    Gleason, Tracy R.

    2004-01-01

    Early research on imaginary companions suggests that children who create them do so to compensate for poor social relationships. Consequently, the peer acceptance of children with imaginary companions was compared to that of their peers. Sociometrics were conducted on 88 preschool-aged children; 11 had invisible companions, 16 had personified…

  17. Helping Our Children Accept Themselves.

    ERIC Educational Resources Information Center

    Gamble, Mae

    1984-01-01

    Parents of a child with muscular dystrophy recount their reactions to learning of the diagnosis, their gradual acceptance, and their son's resistance, which was gradually lessened when he was provided with more information and treated more normally as a member of the family. (CL)

  19. Safety, risk acceptability, and morality.

    PubMed

    Macpherson, James A E

    2008-09-01

    The primary aim of this article is to develop and defend a conceptual analysis of safety. The article begins by considering two previous analyses of safety in terms of risk acceptability. It is argued that these analyses fail because the notion of risk acceptability is more subjective than safety, as risk acceptability takes into account potential benefits in a way that safety does not. A distinction is then made between two different kinds of safety--safety qua cause and safety qua recipient--and both are defined in terms of the probability of a loss of value, though the relationship between safety and the probability of loss varies in each case. It is then shown that although this analysis is less subjective than the previously considered analyses, subjectivity can still enter into judgments of safety via the notions of probability and value. In the final section of this article, it is argued that the difference between safety and risk acceptability is important because it corresponds in significant ways to the difference between consequentialist and deontological moral viewpoints.

  20. Acceptance and Commitment Therapy: Introduction

    ERIC Educational Resources Information Center

    Twohig, Michael P.

    2012-01-01

    This is the introductory article to a special series in Cognitive and Behavioral Practice on Acceptance and Commitment Therapy (ACT). Instead of each article herein reviewing the basics of ACT, this article contains that review. This article provides a description of where ACT fits within the larger category of cognitive behavior therapy (CBT):…

  1. Euthanasia Acceptance: An Attitudinal Inquiry.

    ERIC Educational Resources Information Center

    Klopfer, Fredrick J.; Price, William F.

    The study presented was conducted to examine potential relationships between attitudes regarding the dying process, including acceptance of euthanasia, and other attitudinal or demographic attributes. The survey data comprised responses given by 331 respondents to a door-to-door interview. Results are discussed in terms of preferred…

  2. Energy justice: Participation promotes acceptance

    NASA Astrophysics Data System (ADS)

    Baxter, Jamie

    2017-08-01

    Wind turbines have been a go-to technology for addressing climate change, but they are increasingly a source of frustration for all stakeholders. While community ownership is often lauded as a panacea for maximizing turbine acceptance, a new study suggests that decision-making involvement — procedural fairness — matters most.

  4. Nitrogen trailer acceptance test report

    SciTech Connect

    Kostelnik, A.J.

    1996-02-12

    This Acceptance Test Report documents compliance with the requirements of specification WHC-S-0249. The equipment was tested according to WHC-SD-WM-ATP-108 Rev.0. The equipment being tested is a portable contained nitrogen supply. The test was conducted at Norco's facility.

  5. LANL measurements verification acceptance criteria

    SciTech Connect

    Chavez, D. M.

    2001-01-01

    The possibility of SNM diversion/theft is a major concern to organizations charged with control of Special Nuclear Material (SNM). Verification measurements are used to aid in the detection of SNM losses. The acceptance/rejection criteria for verification measurements are dependent on the facility-specific processes, the knowledge of the measured item, and the measurement technique applied. This paper will discuss some of the LANL measurement control steps and criteria applied for the acceptance of a verification measurement. The process involves interaction among the facility operations personnel, the subject matter experts of a specific instrument/technique, the process knowledge on the matrix of the measured item, and the measurement-specific precision and accuracy values. By providing an introduction to a site-specific application of measurement verification acceptance criteria, safeguards, material custodians, and SNM measurement professionals are assisted in understanding the acceptance/rejection process for measurements and the contribution of that process to the detection of SNM diversion.

  6. Further Conceptualization of Treatment Acceptability

    ERIC Educational Resources Information Center

    Carter, Stacy L.

    2008-01-01

    A review and extension of previous conceptualizations of treatment acceptability is provided in light of progress within the area of behavior treatment development and implementation. Factors including legislation, advances in research, and service delivery models are examined as to their relationship with a comprehensive conceptualization of…

  7. Error, signal, and the placement of Ctenophora sister to all other animals.

    PubMed

    Whelan, Nathan V; Kocot, Kevin M; Moroz, Leonid L; Halanych, Kenneth M

    2015-05-05

    Elucidating relationships among early animal lineages has been difficult, and recent phylogenomic analyses place Ctenophora sister to all other extant animals, contrary to the traditional view of Porifera as the earliest-branching animal lineage. To date, phylogenetic support for either ctenophores or sponges as sister to other animals has been limited and inconsistent among studies. Lack of agreement among phylogenomic analyses using different data and methods obscures how complex traits, such as epithelia, neurons, and muscles evolved. A consensus view of animal evolution will not be accepted until datasets and methods converge on a single hypothesis of early metazoan relationships and putative sources of systematic error (e.g., long-branch attraction, compositional bias, poor model choice) are assessed. Here, we investigate possible causes of systematic error by expanding taxon sampling with eight novel transcriptomes, strictly enforcing orthology inference criteria, and progressively examining potential causes of systematic error while using both maximum-likelihood with robust data partitioning and Bayesian inference with a site-heterogeneous model. We identified ribosomal protein genes as possessing a conflicting signal compared with other genes, which caused some past studies to infer ctenophores and cnidarians as sister. Importantly, biases resulting from elevated compositional heterogeneity or elevated substitution rates are ruled out. Placement of ctenophores as sister to all other animals, and sponge monophyly, are strongly supported under multiple analyses, herein.

  8. Error, signal, and the placement of Ctenophora sister to all other animals

    PubMed Central

    Whelan, Nathan V.; Kocot, Kevin M.; Moroz, Leonid L.

    2015-01-01

    Elucidating relationships among early animal lineages has been difficult, and recent phylogenomic analyses place Ctenophora sister to all other extant animals, contrary to the traditional view of Porifera as the earliest-branching animal lineage. To date, phylogenetic support for either ctenophores or sponges as sister to other animals has been limited and inconsistent among studies. Lack of agreement among phylogenomic analyses using different data and methods obscures how complex traits, such as epithelia, neurons, and muscles evolved. A consensus view of animal evolution will not be accepted until datasets and methods converge on a single hypothesis of early metazoan relationships and putative sources of systematic error (e.g., long-branch attraction, compositional bias, poor model choice) are assessed. Here, we investigate possible causes of systematic error by expanding taxon sampling with eight novel transcriptomes, strictly enforcing orthology inference criteria, and progressively examining potential causes of systematic error while using both maximum-likelihood with robust data partitioning and Bayesian inference with a site-heterogeneous model. We identified ribosomal protein genes as possessing a conflicting signal compared with other genes, which caused some past studies to infer ctenophores and cnidarians as sister. Importantly, biases resulting from elevated compositional heterogeneity or elevated substitution rates are ruled out. Placement of ctenophores as sister to all other animals, and sponge monophyly, are strongly supported under multiple analyses, herein. PMID:25902535

  9. Position error propagation in the simplex strapdown navigation system

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The results of an analysis of the effects of deterministic error sources on position error in the simplex strapdown navigation system were documented. Improving the long term accuracy of the system was addressed in two phases: understanding and controlling the error within the system, and defining methods of damping the net system error through the use of an external reference velocity or position. Review of the flight and ground data revealed error containing the Schuler frequency as well as non-repeatable trends. The only unbounded terms are those involving gyro bias and azimuth error coupled with velocity. All forms of Schuler-periodic position error were found to be sufficiently large to require update or damping capability unless the source coefficients can be limited to values less than those used in this analysis for misalignment and gyro and accelerometer bias. The first-order effects of the deterministic error sources were determined with a simple error propagator which provided plots of error time functions in response to various source error values.

  10. [Survey of drug dispensing errors in hospital wards].

    PubMed

    Lám, Judit; Rózsa, Erzsébet; Kis Szölgyémi, Mónika; Belicza, Eva

    2011-08-28

    Medication errors occur very frequently. The limited knowledge of contributing factors and risks prevents the development and testing of successful preventive strategies. To investigate the differences between the ordered and dispensed drugs, and to identify the risks during medication. Prospective direct observation at two inpatient hospital wards. The number of observed doses was 775 and the number of ordered doses was 806. Of the 803 error opportunities, 114 errors occurred in dispensed drugs, corresponding to an error rate of 14.1%. Among the different types of errors, the most important were: dispensing inappropriate doses (25.4%), unauthorized tablet halving or crushing (24.6%), omission errors (16.4%), and dispensing an active ingredient different from the one ordered (14.2%). 87% of drug dispensing errors were considered errors with minor consequences, while 13% were potentially serious. Direct observation of the drug dispensing procedure appears to be an appropriate method for observing medication errors in hospital wards. The results of the study and the identified risks are worth reconsidering, and prevention measures should be applied to everyday health care practice to improve patient safety.

  11. Spin glasses and error-correcting codes

    NASA Technical Reports Server (NTRS)

    Belongie, M. L.

    1994-01-01

    In this article, we study a model for error-correcting codes that comes from spin glass theory and leads to both new codes and a new decoding technique. Using the theory of spin glasses, it has been proven that a simple construction yields a family of binary codes whose performance asymptotically approaches the Shannon bound for the Gaussian channel. The limit is approached as the number of information bits per codeword approaches infinity while the rate of the code approaches zero. Thus, the codes rapidly become impractical. We present simulation results that show the performance of a few manageable examples of these codes. In the correspondence that exists between spin glasses and error-correcting codes, the concept of a thermal average leads to a method of decoding that differs from the standard method of finding the most likely information sequence for a given received codeword. Whereas the standard method corresponds to calculating the thermal average at temperature zero, calculating the thermal average at a certain optimum temperature results instead in the sequence of most likely information bits. Since linear block codes and convolutional codes can be viewed as examples of spin glasses, this new decoding method can be used to decode these codes in a way that minimizes the bit error rate instead of the codeword error rate. We present simulation results that show a small improvement in bit error rate by using the thermal average technique.

  12. Entanglement-assisted zero-error codes

    NASA Astrophysics Data System (ADS)

    Matthews, William; Mancinska, Laura; Leung, Debbie; Ozols, Maris; Roy, Aidan

    2011-03-01

    Zero-error information theory studies the transmission of data over noisy communication channels with strictly zero error probability. For classical channels and data, much of the theory can be studied in terms of combinatorial graph properties and is a source of hard open problems in that domain. In recent work, we investigated how entanglement between sender and receiver can be used in this task. We found that entanglement-assisted zero-error codes (which are still naturally studied in terms of graphs) sometimes offer an increased bit rate of zero-error communication even in the large block length limit. The assisted codes that we have constructed are closely related to Kochen-Specker proofs of contextuality as studied in the context of foundational physics, and our results on asymptotic rates of assisted zero-error communication yield contextuality proofs which are particularly 'strong' in a certain quantitative sense. I will also describe formal connections to the multi-prover games known as pseudo-telepathy games.

  13. Experimental repetitive quantum error correction.

    PubMed

    Schindler, Philipp; Barreiro, Julio T; Monz, Thomas; Nebendahl, Volckmar; Nigg, Daniel; Chwalla, Michael; Hennrich, Markus; Blatt, Rainer

    2011-05-27

    The computational potential of a quantum processor can only be unleashed if errors during a quantum computation can be controlled and corrected for. Quantum error correction works if imperfections of quantum gate operations and measurements are below a certain threshold and corrections can be applied repeatedly. We implement multiple quantum error correction cycles for phase-flip errors on qubits encoded with trapped ions. Errors are corrected by a quantum-feedback algorithm using high-fidelity gate operations and a reset technique for the auxiliary qubits. Up to three consecutive correction cycles are realized, and the behavior of the algorithm for different noise environments is analyzed.

  14. Rapid mapping of volumetric errors

    SciTech Connect

    Krulewich, D.; Hale, L.; Yordy, D.

    1995-09-13

    This paper describes a relatively inexpensive, fast, and easy to execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on length measurements throughout the work volume; and (3) optimizing the model to the particular machine.
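
    Step (3), optimizing the model against the length measurements, amounts to a least-squares fit; a minimal sketch assuming a generic linear error model (the sensitivity matrix J and the measured length errors are invented, not taken from the paper).

      # Least-squares fit of an error model to length measurements (toy data).
      # J[i, j] = sensitivity of measured length i to error-model parameter j.
      import numpy as np

      J = np.array([[0.50, 0.00, 0.00],
                    [0.00, 0.50, 0.00],
                    [0.00, 0.00, 0.50],
                    [0.18, 0.32, 0.00],
                    [0.18, 0.00, 0.32]])
      dL = np.array([25e-6, -10e-6, 5e-6, 4e-6, 13e-6])  # measured - nominal (m)

      params, *_ = np.linalg.lstsq(J, dL, rcond=None)
      print("fitted error-model parameters:", params)
      print("residuals:", dL - J @ params)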

  15. Register file soft error recovery

    DOEpatents

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  16. Data Properties Categorization to Improve Scientific Sensor Data Error Detection

    NASA Astrophysics Data System (ADS)

    Gallegos, I.; Gates, A.; Tweedie, C. E.

    2009-12-01

    Recent advancements in scientific sensor data acquisition technologies have increased the amount of data collected in near-real time. Although the need for error detection in such data sets is widely acknowledged, few organizations to date have automated random and systematic error detection. This poster presents the results of a broad survey of the literature on scientific sensor data collected through networks and environmental observatories with the aim of identifying research priorities needed for the development of automated error detection mechanisms. The key finding of this survey is that there appears to be no overarching consensus about error detection criteria in the environmental sciences and that this likely limits the development and implementation of automated error detection in this domain. The literature survey focused on identifying scientific projects from institutions that have incorporated error detection into their systems, the type of analyzed data, and the type of sensor error detection properties as defined by each project. The projects have mechanisms that perform error detection in both the field sites and data centers. The literature survey was intended to capture a representative sample of projects with published error detection criteria. Several scientific projects have included error detection, mostly as part of the system’s source code; however, error detection properties, which are embedded or hard-coded in the source code, are difficult to refine and require a software developer to modify the source code every time a new error detection property or a modification to an existing property is needed. An alternative to hard-coded error detection properties is an error-detection mechanism, independent of the system used to collect the sensor data, which will automatically detect errors in the supported type of data. Such a mechanism would allow scientists to specify and reuse error detection properties and uses the specified properties
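
    As a rough illustration of error-detection properties kept separate from the acquisition system's source code, the sketch below expresses range checks as plain data that can be edited and reused without touching any acquisition code; the variable names and limits are invented.

      # Range-check properties stored as data, not hard-coded logic (illustrative).
      RANGE_PROPERTIES = {
          "air_temperature_C": (-40.0, 55.0),
          "relative_humidity_pct": (0.0, 100.0),
      }

      def range_errors(variable, values):
          lo, hi = RANGE_PROPERTIES[variable]
          return [(i, v) for i, v in enumerate(values) if not (lo <= v <= hi)]

      print(range_errors("air_temperature_C", [21.3, 22.1, 999.0, 20.8]))
      # [(2, 999.0)] -> the out-of-range reading is flagged as a probable error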

  17. Role of memory errors in quantum repeaters

    NASA Astrophysics Data System (ADS)

    Hartmann, L.; Kraus, B.; Briegel, H.-J.; Dür, W.

    2007-03-01

    We investigate the influence of memory errors in the quantum repeater scheme for long-range quantum communication. We show that the communication distance is limited in standard operation mode due to memory errors resulting from unavoidable waiting times for classical signals. We show how to overcome these limitations by (i) improving local memory and (ii) introducing two operational modes of the quantum repeater. In both operational modes, the repeater is run blindly, i.e., without waiting for classical signals to arrive. In the first scheme, entanglement purification protocols based on one-way classical communication are used, allowing communication over arbitrary distances. However, the error thresholds for noise in local control operations are very stringent. The second scheme makes use of entanglement purification protocols with two-way classical communication and inherits the favorable error thresholds of the repeater run in standard mode. One can increase the possible communication distance by an order of magnitude with reasonable overhead in physical resources. We outline the architecture of a quantum repeater that can possibly ensure intercontinental quantum communication.

  18. Prediction of discretization error using the error transport equation

    NASA Astrophysics Data System (ADS)

    Celik, Ismail B.; Parsons, Don Roscoe

    2017-06-01

    This study focuses on an approach to quantify the discretization error associated with numerical solutions of partial differential equations by solving an error transport equation (ETE). The goal is to develop a method that can be used to adequately predict the discretization error using the numerical solution on only one grid/mesh. The primary problem associated with solving the ETE is the formulation of the error source term which is required for accurately predicting the transport of the error. In this study, a novel approach is considered which involves fitting the numerical solution with a series of locally smooth curves and then blending them together with a weighted spline approach. The result is a continuously differentiable analytic expression that can be used to determine the error source term. Once the source term has been developed, the ETE can easily be solved using the same solver that is used to obtain the original numerical solution. The new methodology is applied to the two-dimensional Navier-Stokes equations in the laminar flow regime. A simple unsteady flow case is also considered. The discretization error predictions based on the methodology presented in this study are in good agreement with the 'true error'. While in most cases the error predictions are not quite as accurate as those from Richardson extrapolation, the results are reasonable and only require one numerical grid. The current results indicate that there is much promise going forward with the newly developed error source term evaluation technique and the ETE.

  19. Towards error-free interaction.

    PubMed

    Tsoneva, Tsvetomira; Bieger, Jordi; Garcia-Molina, Gary

    2010-01-01

    Human-machine interaction (HMI) relies on pattern recognition algorithms that are not perfect. To improve the performance and usability of these systems we can utilize the neural mechanisms in the human brain dealing with error awareness. This study aims at designing a practical error detection algorithm using electroencephalogram signals that can be integrated in an HMI system. Thus, real-time operation, customization, and operation convenience are important. We address these requirements in an experimental framework simulating machine errors. Our results confirm the presence of brain potentials related to processing of machine errors. These are used to implement an error detection algorithm emphasizing the differences in error processing on a per subject basis. The proposed algorithm uses the individual best bipolar combination of electrode sites and requires short calibration. The single-trial error detection performance on six subjects, characterized by the area under the ROC curve, ranges from 0.75 to 0.98.
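
    A purely illustrative sketch of summarizing single-trial detection performance with the area under the ROC curve, the per-subject metric reported above; the labels and classifier scores are synthetic placeholders, not the study's data.

      # Synthetic example: area under the ROC curve for single-trial error detection.
      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      labels = rng.integers(0, 2, size=200)                    # 1 = machine-error trial
      scores = labels * 0.8 + rng.normal(0.0, 0.5, size=200)   # detector outputs
      print(f"AUC = {roc_auc_score(labels, scores):.2f}")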

  20. An Introduction to Error Analysis for Quantitative Chemistry

    ERIC Educational Resources Information Center

    Neman, R. L.

    1972-01-01

    Describes two formulas for calculating errors due to instrument limitations which are usually found in gravimetric volumetric analysis and indicates their possible applications to other fields of science. (CC)

  1. Analytical method transfer using equivalence tests with reasonable acceptance criteria and appropriate effort: extension of the ISPE concept.

    PubMed

    Kaminski, L; Schepers, U; Wätzig, H

    2010-12-15

    A method development process is commonly finalized by a method transfer from the developing to the routine laboratory. Statistical tests are performed in order to survey if a transfer succeeded or failed. However, using the classic two-sample t-test can lead to misjudgments and unsatisfying transfer results due to its test characteristics. Therefore the International Society of Pharmaceutical Engineering (ISPE) employed a fixed method transfer design using equivalence tests in their Guide for Technology Transfer. Although it was well received by analytical laboratories worldwide this fixed design can easily bring about high beta-errors (rejection of successful transfers) or high workload (many analysts employed during transfer) if sigma(AN) (error due to different analysts) exceeds 0.6%. Hence this work introduces an extended concept which will help to circumvent this disadvantage by providing guidance to select a personalized and more appropriate experimental design. First of all it demonstrates that former t-test related acceptance criteria can be scaled by a factor of 1.15, which allows for a broader tolerance without a loss of decision certainty. Furthermore a decision guidance to choose the proper number of analysts or series at given percentage acceptance limits (%AL) is presented.
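
    The equivalence-test logic behind such a transfer design can be sketched as a two one-sided tests (TOST) procedure on the mean difference between laboratories; the data and the assumed +/-2% acceptance limit below are invented and do not reproduce the ISPE design.

      # TOST equivalence sketch for a method transfer (invented assay data).
      import numpy as np
      from scipy import stats

      sending = np.array([99.8, 100.2, 99.6, 100.1, 99.9, 100.3])
      receiving = np.array([99.1, 99.5, 99.3, 99.8, 99.0, 99.6])
      limit = 2.0   # assumed acceptance limit on the mean difference (% label claim)

      diff = receiving.mean() - sending.mean()
      se = np.sqrt(sending.var(ddof=1)/len(sending) + receiving.var(ddof=1)/len(receiving))
      df = len(sending) + len(receiving) - 2   # rough df; Welch's would be safer

      p_lower = 1 - stats.t.cdf((diff + limit) / se, df)   # H0: diff <= -limit
      p_upper = stats.t.cdf((diff - limit) / se, df)       # H0: diff >= +limit
      equivalent = max(p_lower, p_upper) < 0.05
      print(f"mean difference = {diff:.2f}; equivalent within +/-{limit}%: {equivalent}")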

  2. Optimal input design for aircraft instrumentation systematic error estimation

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1991-01-01

    A new technique for designing optimal flight test inputs for accurate estimation of instrumentation systematic errors was developed and demonstrated. A simulation model of the F-18 High Angle of Attack Research Vehicle (HARV) aircraft was used to evaluate the effectiveness of the optimal input compared to input recorded during flight test. Instrumentation systematic error parameter estimates and their standard errors were compared. It was found that the optimal input design improved error parameter estimates and their accuracies for a fixed time input design. Pilot acceptability of the optimal input design was demonstrated using a six degree-of-freedom fixed base piloted simulation of the F-18 HARV. The technique described in this work provides a practical, optimal procedure for designing inputs for data compatibility experiments.

  3. Optimal input design for aircraft instrumentation systematic error estimation

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1991-01-01

    A new technique for designing optimal flight test inputs for accurate estimation of instrumentation systematic errors was developed and demonstrated. A simulation model of the F-18 High Angle of Attack Research Vehicle (HARV) aircraft was used to evaluate the effectiveness of the optimal input compared to input recorded during flight test. Instrumentation systematic error parameter estimates and their standard errors were compared. It was found that the optimal input design improved error parameter estimates and their accuracies for a fixed time input design. Pilot acceptability of the optimal input design was demonstrated using a six degree-of-freedom fixed base piloted simulation of the F-18 HARV. The technique described in this work provides a practical, optimal procedure for designing inputs for data compatibility experiments.

  4. Error models for uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Josset, L.; Scheidt, C.; Lunati, I.

    2012-12-01

    In groundwater modeling, uncertainty on the permeability field leads to a stochastic description of the aquifer system, in which the quantities of interests (e.g., groundwater fluxes or contaminant concentrations) are considered as stochastic variables and described by their probability density functions (PDF) or by a finite number of quantiles. Uncertainty quantification is often evaluated using Monte-Carlo simulations, which employ a large number of realizations. As this leads to prohibitive computational costs, techniques have to be developed to keep the problem computationally tractable. The Distance-based Kernel Method (DKM) [1] limits the computational cost of the uncertainty quantification by reducing the stochastic space: first, the realizations are clustered based on the response of a proxy; then, the full model is solved only for a subset of realizations defined by the clustering and the quantiles are estimated from this limited number of realizations. Here, we present a slightly different strategy that employs an approximate model rather than a proxy: we use the Multiscale Finite Volume method (MsFV) [2,3] to compute an approximate solution for each realization, and to obtain a first assessment of the PDF. In this context, DKM is then used to identify a subset of realizations for which the exact model is solved and compared with the solution of the approximate model. This allows highlighting and correcting possible errors introduced by the approximate model, while keeping full statistical information on the ensemble of realizations. Here, we test several strategies to compute the model error, correct the approximate model and achieve an optimal PDF estimation. We present a case study in which we predict the breakthrough curve of an ideal tracer for an ensemble of realizations generated via Multiple Point Direct Sampling [4] with a training image obtained from a 2D section of the Herten permeability field [5]. [1] C. Scheidt and J. Caers, "Representing

  5. Adherence to balance tolerance limits at the Upper Mississippi Science Center, La Crosse, Wisconsin.

    USGS Publications Warehouse

    Myers, C.T.; Kennedy, D.M.

    1998-01-01

    Verification of balance accuracy entails applying a series of standard masses to a balance prior to use and recording the measured values. The recorded values for each standard should have lower and upper weight limits or tolerances that are accepted as verification of balance accuracy under normal operating conditions. Balance logbooks for seven analytical balances at the Upper Mississippi Science Center were checked over a 3.5-year period to determine if the recorded weights were within the established tolerance limits. A total of 9435 measurements were checked. There were 14 instances in which the balance malfunctioned and operators recorded a rationale in the balance logbook. Sixty-three recording errors were found. Twenty-eight operators were responsible for two types of recording errors: Measurements of weights were recorded outside of the tolerance limit but not acknowledged as an error by the operator (n = 40); and measurements were recorded with the wrong number of decimal places (n = 23). The adherence rate for following tolerance limits was 99.3%. To ensure the continued adherence to tolerance limits, the quality-assurance unit revised standard operating procedures to require more frequent review of balance logbooks.
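
    A minimal sketch of the logbook audit described above, checking each recorded check-weight against its tolerance window; the standards and limits are invented.

      # Tolerance-limit audit of balance check weights (invented limits and records).
      TOLERANCES_G = {          # standard mass (g): (lower limit, upper limit)
          1.0:   (0.9995, 1.0005),
          10.0:  (9.998, 10.002),
          100.0: (99.995, 100.005),
      }

      def audit(records):
          """records: iterable of (standard_mass_g, recorded_value_g)."""
          out = [(s, v) for s, v in records
                 if not (TOLERANCES_G[s][0] <= v <= TOLERANCES_G[s][1])]
          adherence = 1 - len(out) / len(records)
          return adherence, out

      records = [(1.0, 1.0002), (10.0, 10.001), (100.0, 100.009), (1.0, 0.9999)]
      print(audit(records))   # one recorded value falls outside its window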

  6. Application of Uniform Measurement Error Distribution

    DTIC Science & Technology

    2016-03-18

    Only report-documentation-form fragments of this abstract are preserved in this record. Recoverable subject terms: Probability of False Accept (PFA), Probability of False Reject (PFR).

  7. Error analysis in laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Gantert, Walter A.; Tendick, Frank; Bhoyrul, Sunil; Tyrrell, Dana; Fujino, Yukio; Rangel, Shawn; Patti, Marco G.; Way, Lawrence W.

    1998-06-01

    Iatrogenic complications in laparoscopic surgery, as in any field, stem from human error. In recent years, cognitive psychologists have developed theories for understanding and analyzing human error, and the application of these principles has decreased error rates in the aviation and nuclear power industries. The purpose of this study was to apply error analysis to laparoscopic surgery and evaluate its potential for preventing complications. Our approach is based on James Reason's framework using a classification of errors according to three performance levels: at the skill-based performance level, slips are caused by attention failures, and lapses result from memory failures. Rule-based mistakes constitute the second level. Knowledge-based mistakes occur at the highest performance level and are caused by shortcomings in conscious processing. These errors committed by the performer 'at the sharp end' occur in typical situations which are often brought about by latent, built-in system failures. We present a series of case studies in laparoscopic surgery in which errors are classified and the influence of intrinsic failures and extrinsic system flaws is evaluated. Most serious technical errors in laparoscopic surgery stem from a rule-based or knowledge-based mistake triggered by cognitive underspecification due to incomplete or illusory visual input information. Error analysis in laparoscopic surgery should be able to improve human performance, and it should detect and help eliminate system flaws. Complication rates in laparoscopic surgery due to technical errors can thus be considerably reduced.

  8. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
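
    A toy illustration of the CEM inputs at a single 5-minute time step: a binarized forecast grid D and observed grid d (1 = onshore wind, 0 = offshore), compared point by point. The arrays are invented and far smaller than the real 1.25-km grids; this is not the CEM algorithm itself.

      # Point-by-point agreement of binarized forecast (D) and observed (d) grids.
      import numpy as np

      D = np.array([[0, 0, 1, 1],
                    [0, 1, 1, 1],
                    [0, 0, 1, 1]])
      d = np.array([[0, 0, 0, 1],
                    [0, 1, 1, 1],
                    [0, 1, 1, 1]])

      print(f"grid-point agreement at this time step: {(D == d).mean():.0%}")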

  9. Comparison of analytical error and sampling error for contaminated soil.

    PubMed

    Gustavsson, Björn; Luthbom, Karin; Lagerkvist, Anders

    2006-11-16

    Investigation of soil from contaminated sites requires several sample handling steps that, most likely, will induce uncertainties in the sample. The theory of sampling describes seven sampling errors that can be calculated, estimated or discussed in order to get an idea of the size of the sampling uncertainties. With the aim of comparing the size of the analytical error to the total sampling error, these seven errors were applied, estimated and discussed, to a case study of a contaminated site. The manageable errors were summarized, showing a range of three orders of magnitudes between the examples. The comparisons show that the quotient between the total sampling error and the analytical error is larger than 20 in most calculation examples. Exceptions were samples taken in hot spots, where some components of the total sampling error get small and the analytical error gets large in comparison. Low concentration of contaminant, small extracted sample size and large particles in the sample contribute to the extent of uncertainty.
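
    The comparison rests on combining independent relative error components in quadrature, which makes the dominance of sampling error easy to see; a small worked sketch with invented relative standard deviations.

      # Quadrature combination of error components (invented values).
      sampling_rsd = [0.60, 0.40, 0.30]   # e.g. heterogeneity, delimitation, preparation
      analytical_rsd = 0.03

      total_sampling = sum(r**2 for r in sampling_rsd) ** 0.5
      total = (total_sampling**2 + analytical_rsd**2) ** 0.5
      print(f"sampling/analytical ratio: {total_sampling / analytical_rsd:.0f}")
      print(f"total relative error: {total:.3f}  (barely changed by the analytical part)")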

  10. Nurses' practice environments, error interception practices, and inpatient medication errors.

    PubMed

    Flynn, Linda; Liang, Yulan; Dickson, Geri L; Xie, Minge; Suh, Dong-Churl

    2012-06-01

    Medication errors remain a threat to patient safety. Therefore, the purpose of this study was to determine the relationships among characteristics of the nursing practice environment, nurse staffing levels, nurses' error interception practices, and rates of nonintercepted medication errors in acute care hospitals. This study, using a nonexperimental design, was conducted in a sample of 82 medical-surgical units recruited from 14 U.S. acute care hospitals. Registered nurses (RNs) on the 82 units were surveyed, producing a sample of 686 staff nurses. Data collected for the 8-month study period included the number of medication errors per 1,000 patient days and the number of RN hours per patient day. Nurse survey data included the Practice Environment Scale of the Nursing Work Index as a measure of environmental characteristics; a metric of nurses' interception practices was developed for the study. All survey measures were aggregated to the unit level prior to analysis with hierarchical linear modeling. A supportive practice environment was positively associated with error interception practices among nurses in the sample of medical-surgical units. Importantly, nurses' interception practices were inversely associated with medication error rates. A supportive practice environment enhances nurses' error interception practices. These interception practices play a role in reducing medication errors. When supported by their practice environments, nurses employ practices that can assist in interrupting medication errors before they reach the patients. © 2012 Sigma Theta Tau International.

  11. Accepting the T3D

    SciTech Connect

    Rich, D.O.; Pope, S.C.; DeLapp, J.G.

    1994-10-01

    In April, a 128 PE Cray T3D was installed at Los Alamos National Laboratory's Advanced Computing Laboratory as part of the DOE's High-Performance Parallel Processor Program (H4P). In conjunction with CRI, the authors implemented a 30-day acceptance test. The test was constructed in part to help them understand the strengths and weaknesses of the T3D. In this paper, they briefly describe the H4P and its goals. They discuss the design and implementation of the T3D acceptance test and detail issues that arose during the test. They conclude with a set of system requirements that must be addressed as the T3D system evolves.

  12. Data Acceptance Criteria for Standardized Human-Associated Fecal Source Identification Quantitative Real-Time PCR Methods

    PubMed Central

    Kelty, Catherine A.; Oshiro, Robin; Haugland, Richard A.; Madi, Tania; Brooks, Lauren; Field, Katharine G.; Sivaganesan, Mano

    2016-01-01

    There is growing interest in the application of human-associated fecal source identification quantitative real-time PCR (qPCR) technologies for water quality management. The transition from a research tool to a standardized protocol requires a high degree of confidence in data quality across laboratories. Data quality is typically determined through a series of specifications that ensure good experimental practice and the absence of bias in the results due to DNA isolation and amplification interferences. However, there is currently a lack of consensus on how best to evaluate and interpret human fecal source identification qPCR experiments. This is, in part, due to the lack of standardized protocols and information on interlaboratory variability under conditions for data acceptance. The aim of this study is to provide users and reviewers with a complete series of conditions for data acceptance derived from a multiple laboratory data set using standardized procedures. To establish these benchmarks, data from HF183/BacR287 and HumM2 human-associated qPCR methods were generated across 14 laboratories. Each laboratory followed a standardized protocol utilizing the same lot of reference DNA materials, DNA isolation kits, amplification reagents, and test samples to generate comparable data. After removal of outliers, a nested analysis of variance (ANOVA) was used to establish proficiency metrics that include lab-to-lab, replicate testing within a lab, and random error for amplification inhibition and sample processing controls. Other data acceptance measurements included extraneous DNA contamination assessments (no-template and extraction blank controls) and calibration model performance (correlation coefficient, amplification efficiency, and lower limit of quantification). To demonstrate the implementation of the proposed standardized protocols and data acceptance criteria, comparable data from two additional laboratories were reviewed. The data acceptance criteria

  13. Data Acceptance Criteria for Standardized Human-Associated Fecal Source Identification Quantitative Real-Time PCR Methods.

    PubMed

    Shanks, Orin C; Kelty, Catherine A; Oshiro, Robin; Haugland, Richard A; Madi, Tania; Brooks, Lauren; Field, Katharine G; Sivaganesan, Mano

    2016-05-01

    There is growing interest in the application of human-associated fecal source identification quantitative real-time PCR (qPCR) technologies for water quality management. The transition from a research tool to a standardized protocol requires a high degree of confidence in data quality across laboratories. Data quality is typically determined through a series of specifications that ensure good experimental practice and the absence of bias in the results due to DNA isolation and amplification interferences. However, there is currently a lack of consensus on how best to evaluate and interpret human fecal source identification qPCR experiments. This is, in part, due to the lack of standardized protocols and information on interlaboratory variability under conditions for data acceptance. The aim of this study is to provide users and reviewers with a complete series of conditions for data acceptance derived from a multiple laboratory data set using standardized procedures. To establish these benchmarks, data from HF183/BacR287 and HumM2 human-associated qPCR methods were generated across 14 laboratories. Each laboratory followed a standardized protocol utilizing the same lot of reference DNA materials, DNA isolation kits, amplification reagents, and test samples to generate comparable data. After removal of outliers, a nested analysis of variance (ANOVA) was used to establish proficiency metrics that include lab-to-lab, replicate testing within a lab, and random error for amplification inhibition and sample processing controls. Other data acceptance measurements included extraneous DNA contamination assessments (no-template and extraction blank controls) and calibration model performance (correlation coefficient, amplification efficiency, and lower limit of quantification). To demonstrate the implementation of the proposed standardized protocols and data acceptance criteria, comparable data from two additional laboratories were reviewed. The data acceptance criteria
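
    Of the calibration-model metrics listed (correlation coefficient, amplification efficiency, lower limit of quantification), the efficiency is commonly derived from the standard-curve slope as E = 10^(-1/slope) - 1; a minimal sketch with an invented Cq standard curve.

      # Calibration-curve metrics from an invented qPCR standard curve.
      import numpy as np

      log10_conc = np.array([5.0, 4.0, 3.0, 2.0, 1.0])   # log10 copies per reaction
      cq = np.array([18.1, 21.5, 24.9, 28.3, 31.8])      # quantification cycles

      slope, intercept = np.polyfit(log10_conc, cq, 1)
      r = np.corrcoef(log10_conc, cq)[0, 1]
      efficiency = 10 ** (-1.0 / slope) - 1              # 1.0 corresponds to 100%
      print(f"slope = {slope:.2f}, R^2 = {r**2:.3f}, efficiency = {efficiency:.1%}")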

  14. Acceptability of reactors in space

    SciTech Connect

    Buden, D.

    1981-01-01

    Reactors are the key to our future expansion into space. However, there has been some confusion in the public as to whether they are a safe and acceptable technology for use in space. The answer to these questions is explored. The US position is that, when reactors are the preferred technical choice, they can be used safely. In fact, it does not appear that reactors add measurably to the risk associated with the Space Transportation System.

  15. Acceptability of reactors in space

    SciTech Connect

    Buden, D.

    1981-04-01

    Reactors are the key to our future expansion into space. However, there has been some confusion in the public as to whether they are a safe and acceptable technology for use in space. The answer to these questions is explored. The US position is that, when reactors are the preferred technical choice, they can be used safely. In fact, it does not appear that reactors add measurably to the risk associated with the Space Transportation System.

  16. [Selection and acceptability of food].

    PubMed

    Jaffé, W G

    1976-12-01

    This paper discusses the following factors that may influence the selection and acceptability of foods: 1. Physiological and psychological aspects: a) genetic factors, b) neurophysiological factors, c) emotional factors, d) perceptive factors. 2. Physical and ecological aspects. 3. Social and cultural aspects: a) habits and traditions, b) religious beliefs, c) taboos, d) nutrition faddism, e) prejudices, aversions and perversions, f) social value of foods, g) industrialized foods. 4. Economic aspects. 5. Educational aspects.

  17. Realtime mitigation of GPS SA errors using Loran-C

    NASA Technical Reports Server (NTRS)

    Braasch, Soo Y.

    1994-01-01

    The hybrid use of Loran-C with the Global Positioning System (GPS) was shown capable of providing a sole means of enroute air radionavigation. By allowing pilots to fly direct to their destinations, use of this system is resulting in significant time savings and therefore fuel savings as well. However, a major error source limiting the accuracy of GPS is the intentional degradation of the GPS signal known as Selective Availability (SA). SA-induced position errors are highly correlated and far exceed all other error sources (horizontal position error: 100 meters, 95 percent). Realtime mitigation of SA errors from the position solution is highly desirable. How that can be achieved is discussed. The stability of Loran-C signals is exploited to reduce SA errors. The theory behind this technique is discussed and results using bench and flight data are given.

  18. Peeling Away Timing Error in NetFlow Data

    NASA Astrophysics Data System (ADS)

    Trammell, Brian; Tellenbach, Bernhard; Schatzmann, Dominik; Burkhart, Martin

    In this paper, we characterize, quantify, and correct timing errors introduced into network flow data by collection and export via Cisco NetFlow version 9. We find that while some of these sources of error (clock skew, export delay) are generally implementation-dependent and known in the literature, there is an additional cyclic error of up to one second that is inherent to the design of the export protocol. We present a method for correcting this cyclic error in the presence of clock skew and export delay. In an evaluation using traffic with known timing collected from a national-scale network, we show that this method can successfully correct the cyclic error. However, there can also be other implementation-specific errors for which insufficient information remains for correction. On the routers we have deployed in our network, this limits the accuracy to about 70 ms, reinforcing the point that implementation matters when conducting research on network measurement data.

  19. The voices acceptance and action scale (VAAS): Pilot data.

    PubMed

    Shawyer, Frances; Ratcliff, Kirk; Mackinnon, Andrew; Farhall, John; Hayes, Steven C; Copolov, David

    2007-06-01

    Acceptance and mindfulness methods that emphasise acceptance rather than control of symptoms are becoming more central to behavioural and cognitive therapies. Acceptance and Commitment Therapy (ACT) is the most developed of these methods; recent applications of ACT to psychosis suggest it to be a promising therapeutic approach. However, investigation of the mechanisms of therapy within this domain is difficult because there are no acceptance-based measures available specifically for psychotic symptoms. This paper describes the preliminary evaluation of a self-report instrument designed to assess acceptance-based attitudes and actions in relation to auditory and command hallucinations. Following initial scale development, a 56-item version of the Voices Acceptance and Action Scale (VAAS) was administered to 43 participants with command hallucinations as part of their baseline assessment in a larger trial. Measures of symptoms, quality of life, and depression were also administered. The scale was examined for reliability using corrected item-total statistics. Based on this method, 31 items were retained. Internal consistency and test-retest reliability for the 31-item VAAS were acceptable. Subsequent examination of construct validity showed the VAAS to correlate significantly in the expected directions with depression, quality of life, and coping with command hallucinations. It also discriminated compliance from non-compliance with harmful command hallucinations. Although these results are preliminary and subject to a number of limitations, the VAAS shows promise as a useful aid in the assessment of the psychological impact of voices.

  20. Reactor tank UT acceptance criteria

    SciTech Connect

    Daugherty, W.L.

    1990-01-30

    The SRS reactor tanks are constructed of type 304 stainless steel, with 0.5-inch-thick walls. An ultrasonic (UT) in-service inspection program has been developed for examination of these tanks, in accordance with the ISI Plan for the Savannah River Production Reactors Process Water System (DPSTM-88-100-1). Prior to initiation of these inspections, criteria for the disposition of any indications that might be found are required. A working group has been formed to review available information on the SRS reactor tanks and develop acceptance criteria. This working group includes nationally recognized experts in the nuclear industry. The working group has met three times and produced three documents describing the proposed acceptance criteria, the technical basis for the criteria and a proposed initial sampling plan. This report transmits these three documents, which were prepared in accordance with the technical task plan and quality assurance plan for this task, task 88-001-A-1. In addition, this report summarizes the acceptance criteria and proposed sampling plan, and provides further interpretation of the intent of these three documents where necessary.

  1. Regulatory perspectives on acceptability testing of dosage forms in children.

    PubMed

    Kozarewicz, Piotr

    2014-08-05

    Current knowledge about the age-appropriateness of different dosage forms is still fragmented or limited. Applicants are asked to demonstrate that the target age group(s) can manage the dosage form or to propose an alternative strategy. However, questions remain about how far the applicant must go and what percentage of patients must find the strategy 'acceptable'. The aim of this overview is to provide an update on current thinking and understanding of the problem, and to discuss issues relating to acceptability testing. This overview should be considered a means to start a wider discussion that will hopefully result in a harmonised, globally acceptable approach to confirming acceptability in the future. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Optical Limiting.

    DTIC Science & Technology

    1992-05-22

    Only report-documentation-form fragments of this abstract are preserved in this record. Recoverable content: because the excited-state absorption is cumulative, the dyes limit well for nanosecond pulses; the dynamic range of limiting devices can be substantially increased by using two elements.

  3. Dopamine reward prediction error coding

    PubMed Central

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377

  4. Medical errors: overcoming the challenges.

    PubMed

    Kalra, Jawahar

    2004-12-01

    The issue of medical errors has received substantial attention in recent years. The Institute of Medicine (IOM) report released in 1999 has several implications for health care systems in all disciplines of medicine. Notwithstanding the plethora of available information on the subject, little, by way of substantive action, is done toward medical error reduction. A principal reason for this may be the stigma associated with medical errors. An educational program with a practical, informed, and longitudinal approach offers realistic solutions toward this end. Effective reporting systems need to be developed as a medium of learning from the errors and modifying behaviors appropriately. The presence of a strong leadership supported by organizational commitment is essential in driving these changes. A national, provincial or territorial quality care council dedicated solely for the purpose of enhancing patient safety and medical error reduction may be formed to oversee these efforts. The bioethical and emotional components associated with medical errors also deserve attention and focus.

  5. Dopamine reward prediction error coding.

    PubMed

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
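
    The signed prediction error described here is the quantity delta = received reward - predicted reward used in simple reward-learning models; a minimal Rescorla-Wagner-style sketch (learning rate and reward sequence are arbitrary).

      # Signed reward prediction errors driving a simple value update.
      alpha = 0.2          # learning rate
      value = 0.0          # current reward prediction
      rewards = [1, 1, 1, 1, 0, 1, 1, 0]

      for r in rewards:
          delta = r - value            # positive, zero, or negative prediction error
          value += alpha * delta       # prediction moves toward the received reward
          print(f"reward={r}  prediction_error={delta:+.2f}  new_value={value:.2f}")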

  6. Optimizing learning of a locomotor task: amplifying errors as needed.

    PubMed

    Marchal-Crespo, Laura; López-Olóriz, Jorge; Jaeger, Lukas; Riener, Robert

    2014-01-01

    Research on motor learning has emphasized that errors drive motor adaptation. Accordingly, several researchers have proposed robotic training strategies that amplify movement errors rather than decrease them. In this study, the effect of different robotic training strategies that amplify errors on learning a complex locomotor task was investigated. The experiment was conducted with a one degree-of-freedom robotic stepper (MARCOS). Subjects were requested to actively coordinate their legs in a desired gait-like pattern in order to track a Lissajous figure presented on a visual display. Learning with three different training strategies was evaluated: (i) No perturbation: the robot follows the subjects' movement without applying any perturbation, (ii) Error amplification: existing errors were amplified with repulsive forces proportional to errors, (iii) Noise disturbance: errors were evoked with a randomly-varying force disturbance. Results showed that training without perturbations was especially suitable for a subset of initially less-skilled subjects, while error amplification seemed to benefit more skilled subjects. Training with error amplification, however, limited transfer of learning. Random disturbing forces benefited learning and promoted transfer in all subjects, probably because they increased attention. These results suggest that learning a locomotor task can be optimized when errors are randomly evoked or amplified based on subjects' initial skill level.

  7. Writing errors by normal subjects.

    PubMed

    Moretti, Rita; Torre, Paola; Antonello, Rodolfo M; Fabbro, Franco; Cazzato, Giuseppe; Bava, Antonio

    2003-08-01

    Writing is a complex process requiring visual memory, attention, phonological and semantic operations, and motor performance. For that reason, it can easily be disturbed by interfering with attention, memory, subvocalization, and so on. With 16 female third-year students (23.4 +/- 0.8 yr.) from the University of Trieste, we investigated the production of errors in three experimental conditions (control, articulatory suppression, and tapping). In the articulatory suppression condition, the participants produced significantly more linguistic impairments (such as agrammatism, unrelated substitutions, sentence omissions, and semantically deviant sentences), which are similar to linguistic impairments found in aphasia. In the tapping condition there were more perseverations, deletions, and substitutions of both letters and words. These data suggest that writing is not an automatic skill. Only after many years of experience and practice of processing information (through cortical to subcortical channels) can writing be considered an automatic skill. Limited experimental conditions can disrupt the writing system of normal subjects, probably interfering with the cortical to subcortical loops, and link normality to pathology.

  8. Processor register error correction management

    DOEpatents

    Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.

    2016-12-27

    Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.

  9. Some mathematical refinements concerning error minimization in the genetic code.

    PubMed

    Buhrman, Harry; van der Gulik, Peter T S; Kelk, Steven M; Koolen, Wouter M; Stougie, Leen

    2011-01-01

    The genetic code is known to have a high level of error robustness and has been shown to be very error robust compared to randomly selected codes, but to be significantly less error robust than a certain code found by a heuristic algorithm. We formulate this optimization problem as a Quadratic Assignment Problem and use this to formally verify that the code found by the heuristic algorithm is the global optimum. We also argue that it is strongly misleading to compare the genetic code only with codes sampled from the fixed block model, because the real code space is orders of magnitude larger. We thus enlarge the space from which random codes can be sampled from approximately 2.433 × 10^18 codes to approximately 5.908 × 10^45 codes. We do this by leaving the fixed block model, and using the wobble rules to formulate the characteristics acceptable for a genetic code. By relaxing more constraints, three larger spaces are also constructed. Using a modified error function, the genetic code is found to be more error robust compared to a background of randomly generated codes with increasing space size. We point out that these results do not necessarily imply that the code was optimized during evolution for error minimization, but that other mechanisms could be the reason for this error robustness.

  10. Critical evidence for the prediction error theory in associative learning.

    PubMed

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-03-10

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning systems: Prediction error theory stems from the finding of a blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and rejecting alternative theories, we constructed a neural model to match the prediction error theory, by modifying our previous model of learning in crickets, and we tested a prediction from the model: the model predicts that pharmacological intervention of octopaminergic transmission during appetitive conditioning impairs learning but not formation of reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an "auto-blocking", which could be accounted for by the prediction error theory but not by other competitive theories to account for blocking. This study unambiguously demonstrates validity of the prediction error theory in associative learning.

  11. Frequency analysis of nonlinear oscillations via the global error minimization

    NASA Astrophysics Data System (ADS)

    Kalami Yazdi, M.; Hosseini Tehrani, P.

    2016-06-01

    The capacity and effectiveness of a modified variational approach, namely global error minimization (GEM), are illustrated in this study. For this purpose, the free oscillations of a rod rocking on a cylindrical surface and the Duffing-harmonic oscillator are treated. In order to validate and exhibit the merit of the method, the obtained results are compared with both the exact frequency and the outcomes of other well-known analytical methods. The comparison reveals that the first-order approximation leads to an acceptable relative error, especially for large initial conditions. The procedure can promisingly be applied to conservative nonlinear problems.
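
    A minimal numerical sketch of the GEM idea, assuming the Duffing-harmonic form u'' + u^3/(1 + u^2) = 0 and a one-term trial solution (the paper's exact formulation and the rocking-rod example are not reproduced here): insert u = A cos(wt) into the equation, integrate the squared residual over one period, and minimize that global error with respect to the frequency w.

        # Global error minimization sketch for u'' + u**3/(1 + u**2) = 0 with a
        # one-term trial solution u = A*cos(w*t): minimize the integral of the
        # squared equation residual over one period with respect to w.
        import numpy as np
        from scipy.optimize import minimize_scalar

        def global_error(w, A):
            t = np.linspace(0.0, 2 * np.pi / w, 2000)
            u = A * np.cos(w * t)
            residual = -A * w**2 * np.cos(w * t) + u**3 / (1 + u**2)   # u'' + f(u)
            # simple rectangle-rule quadrature is enough for locating the minimum
            return float(np.sum(residual**2) * (t[1] - t[0]))

        A = 10.0   # a large initial amplitude, where the abstract says the first-order result does well
        res = minimize_scalar(global_error, args=(A,), bounds=(0.1, 1.5), method="bounded")
        print(f"GEM first-order frequency for A={A}: {res.x:.4f}")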

  12. Heuristic errors in clinical reasoning.

    PubMed

    Rylander, Melanie; Guerrasio, Jeannette

    2016-08-01

    Errors in clinical reasoning contribute to patient morbidity and mortality. The purpose of this study was to determine the types of heuristic errors made by third-year medical students and first-year residents. This study surveyed approximately 150 clinical educators, inquiring about the types of heuristic errors they observed in third-year medical students and first-year residents. Anchoring and premature closure were the two most common errors observed amongst third-year medical students and first-year residents. There was no difference in the types of errors observed in the two groups. Errors in clinical reasoning contribute to patient morbidity and mortality. Clinical educators perceived that both third-year medical students and first-year residents committed similar heuristic errors, implying that additional medical knowledge and clinical experience do not affect the types of heuristic errors made. Further work is needed to help identify methods that can be used to reduce heuristic errors early in a clinician's education. © 2015 John Wiley & Sons Ltd.

  13. Compensating For GPS Ephemeris Error

    NASA Technical Reports Server (NTRS)

    Wu, Jiun-Tsong

    1992-01-01

    Method of computing position of user station receiving signals from Global Positioning System (GPS) of navigational satellites compensates for most of GPS ephemeris error. Present method enables user station to reduce error in its computed position substantially. User station must have access to two or more reference stations at precisely known positions several hundred kilometers apart and must be in neighborhood of reference stations. Based on fact that when GPS data are used to compute baseline between reference station and user station, vector error in computed baseline is proportional to ephemeris error and to length of baseline.
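
    A toy numeric illustration of the stated proportionality, not the paper's algorithm: the baseline error observed between two reference stations at exactly known positions calibrates an error-per-kilometre rate, which is then scaled by the shorter reference-to-user baseline length to correct the user position. All positions and error rates below are invented.

        import numpy as np

        # Known geometry (positions in km; values invented for illustration).
        ref1 = np.array([0.0, 0.0])
        ref2 = np.array([300.0, 0.0])
        user_true = np.array([40.0, 25.0])

        # Pretend GPS processing yields baselines corrupted by an error that grows
        # linearly with baseline length (the proportionality noted in the abstract).
        error_per_km = np.array([2e-5, -1e-5])
        def measured_baseline(a, b):
            true = b - a
            return true + error_per_km * np.linalg.norm(true)

        # The ref1->ref2 baseline is exactly known, so its measured error calibrates the scale.
        ref_err = measured_baseline(ref1, ref2) - (ref2 - ref1)
        err_rate = ref_err / np.linalg.norm(ref2 - ref1)              # estimated error per km

        raw = ref1 + measured_baseline(ref1, user_true)               # uncorrected user position
        corrected = raw - err_rate * np.linalg.norm(raw - ref1)       # scale correction by baseline length
        print("uncorrected error (km):", np.linalg.norm(raw - user_true))
        print("corrected error   (km):", np.linalg.norm(corrected - user_true))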

  14. Retransmission error control with memory

    NASA Technical Reports Server (NTRS)

    Sindhu, P. S.

    1977-01-01

    In this paper, an error control technique that is a basic improvement over automatic-repeat-request (ARQ) is presented. Erroneously received blocks in an ARQ system are used for error control. The technique is termed ARQ-with-memory (MRQ). The general MRQ system is described, and simple upper and lower bounds are derived on the throughput achievable by MRQ. The performance of MRQ with respect to throughput, message delay and probability of error is compared to that of ARQ by simulating both systems using error data from a VHF satellite channel operated in the ALOHA packet broadcasting mode.
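
    The abstract does not spell out how the stored erroneous blocks are combined, so the sketch below uses simple bitwise majority voting across the received copies as one plausible stand-in for the memory mechanism (the actual MRQ combining rule may differ):

        import random

        def transmit(block, ber):
            """Flip each bit independently with probability ber (binary symmetric channel)."""
            return [b ^ (random.random() < ber) for b in block]

        def majority(copies):
            return [int(sum(bits) * 2 > len(copies)) for bits in zip(*copies)]

        random.seed(1)
        block = [random.randint(0, 1) for _ in range(64)]
        copies = []
        attempts = 0
        while True:
            attempts += 1
            rx = transmit(block, ber=0.05)
            copies.append(rx)                                  # keep every erroneous copy (the "memory")
            if rx == block or majority(copies) == block:       # "rx == block" stands in for an error-detecting code
                break
        print(f"delivered after {attempts} transmissions using {len(copies)} stored copies")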

  15. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1980-01-01

    Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents, is investigated. Correction of the sources of human error requires that one attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  16. Measurement Error and Equating Error in Power Analysis

    ERIC Educational Resources Information Center

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  17. Correcting numerical integration errors caused by small aliasing errors

    SciTech Connect

    Smallwood, D.O.

    1997-11-01

    Small sampling errors can have a large effect on numerically integrated waveforms. An example is the integration of acceleration to compute velocity and displacement waveforms. These large integration errors complicate checking the suitability of the acceleration waveform for reproduction on shakers. For waveforms typically used for shaker reproduction, the errors become significant when the frequency content of the waveform spans a large frequency range. It is shown that these errors are essentially independent of the numerical integration method used, and are caused by small aliasing errors from the frequency components near the Nyquist frequency. A method to repair the integrated waveforms is presented. The method involves using a model of the acceleration error, and fitting this model to the acceleration, velocity, and displacement waveforms to force the waveforms to fit the assumed initial and final values. The correction is then subtracted from the acceleration before integration. The method is effective where the errors are isolated to a small section of the time history. It is shown that the common method to repair these errors using a high pass filter is sometimes ineffective for this class of problem.
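
    A simplified sketch of the repair idea under stated assumptions: the paper fits a model of the localized acceleration error, whereas here a two-parameter linear baseline stands in for that model. The correction is chosen so that the integrated velocity and displacement reach the assumed final values (zero for a transient test) and is subtracted from the acceleration before integration.

        import numpy as np

        def corrected_integration(t, accel):
            T = t[-1] - t[0]
            # trapezoidal cumulative integration of the uncorrected acceleration
            vel = np.concatenate(([0.0], np.cumsum(np.diff(t) * (accel[1:] + accel[:-1]) / 2)))
            disp = np.concatenate(([0.0], np.cumsum(np.diff(t) * (vel[1:] + vel[:-1]) / 2)))
            # choose a correction c0 + c1*t so that the final velocity and displacement become zero:
            #   vel[-1]  = c0*T      + c1*T**2/2
            #   disp[-1] = c0*T**2/2 + c1*T**3/6
            A = np.array([[T, T**2 / 2], [T**2 / 2, T**3 / 6]])
            c0, c1 = np.linalg.solve(A, [vel[-1], disp[-1]])
            fixed = accel - (c0 + c1 * (t - t[0]))            # subtract the correction before integrating
            v = np.concatenate(([0.0], np.cumsum(np.diff(t) * (fixed[1:] + fixed[:-1]) / 2)))
            d = np.concatenate(([0.0], np.cumsum(np.diff(t) * (v[1:] + v[:-1]) / 2)))
            return fixed, v, d

        t = np.linspace(0, 1, 4096)
        accel = np.sin(2 * np.pi * 40 * t) * np.exp(-5 * t) + 0.002   # small bias mimics an integration-wrecking error
        _, v, d = corrected_integration(t, accel)
        print("final velocity:", v[-1], "final displacement:", d[-1])  # both ~0 after correction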

  19. Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?

    ERIC Educational Resources Information Center

    Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.

    2007-01-01

    This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…

  20. Error studies for SNS Linac. Part 1: Transverse errors

    SciTech Connect

    Crandall, K.R.

    1998-12-31

    The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL) and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll).

  1. Prescription errors in cancer chemotherapy: Omissions supersede potentially harmful errors

    PubMed Central

    Mathaiyan, Jayanthi; Jain, Tanvi; Dubashi, Biswajit; Reddy, K Satyanarayana; Batmanabane, Gitanjali

    2015-01-01

    Objective: To estimate the frequency and type of prescription errors in patients receiving cancer chemotherapy. Settings and Design: We conducted a cross-sectional study at the day care unit of the Regional Cancer Centre (RCC) of a tertiary care hospital in South India. Materials and Methods: All prescriptions written during July to September 2013 for patients attending the out-patient department of the RCC to be treated at the day care center were included in this study. The prescriptions were analyzed for omission of standard information, usage of brand names, abbreviations and legibility. The errors were further classified into potentially harmful ones and not harmful based on the likelihood of resulting in harm to the patient. Descriptive analysis was performed to estimate the frequency of prescription errors and expressed as total number of errors and percentage. Results: A total of 4253 prescribing errors were found in 1500 prescriptions (283.5%), of which 47.1% were due to omissions like name, age and diagnosis and 22.5% were due to usage of brand names. Abbreviations of pre-medications and anticancer drugs accounted for 29.2% of the errors. Potentially harmful errors that were likely to result in serious consequences to the patient were estimated to be 11.7%. Conclusions: Most of the errors intercepted in our study are due to a high patient load and inattention of the prescribers to omissions in prescription. Redesigning prescription forms and sensitizing prescribers to the importance of writing prescriptions without errors may help in reducing errors to a large extent. PMID:25969654

  2. Passport Officers’ Errors in Face Matching

    PubMed Central

    White, David; Kemp, Richard I.; Jenkins, Rob; Matheson, Michael; Burton, A. Mike

    2014-01-01

    Photo-ID is widely used in security settings, despite research showing that viewers find it very difficult to match unfamiliar faces. Here we test participants with specialist experience and training in the task: passport-issuing officers. First, we ask officers to compare photos to live ID-card bearers, and observe high error rates, including 14% false acceptance of ‘fraudulent’ photos. Second, we compare passport officers with a set of student participants, and find equally poor levels of accuracy in both groups. Finally, we observe that passport officers show no performance advantage over the general population on a standardised face-matching task. Across all tasks, we observe very large individual differences: while average performance of passport staff was poor, some officers performed very accurately – though this was not related to length of experience or training. We propose that improvements in security could be made by emphasising personnel selection. PMID:25133682

  3. Passport officers' errors in face matching.

    PubMed

    White, David; Kemp, Richard I; Jenkins, Rob; Matheson, Michael; Burton, A Mike

    2014-01-01

    Photo-ID is widely used in security settings, despite research showing that viewers find it very difficult to match unfamiliar faces. Here we test participants with specialist experience and training in the task: passport-issuing officers. First, we ask officers to compare photos to live ID-card bearers, and observe high error rates, including 14% false acceptance of 'fraudulent' photos. Second, we compare passport officers with a set of student participants, and find equally poor levels of accuracy in both groups. Finally, we observe that passport officers show no performance advantage over the general population on a standardised face-matching task. Across all tasks, we observe very large individual differences: while average performance of passport staff was poor, some officers performed very accurately--though this was not related to length of experience or training. We propose that improvements in security could be made by emphasising personnel selection.

  4. Error field measurement, correction and heat flux balancing on Wendelstein 7-X

    NASA Astrophysics Data System (ADS)

    Lazerson, Samuel A.; Otte, Matthias; Jakubowski, Marcin; Israeli, Ben; Wurden, Glen A.; Wenzel, Uwe; Andreeva, Tamara; Bozhenkov, Sergey; Biedermann, Christoph; Kocsis, Gábor; Szepesi, Tamás; Geiger, Joachim; Pedersen, Thomas Sunn; Gates, David; The W7-X Team

    2017-04-01

    The measurement and correction of error fields in Wendelstein 7-X (W7-X) is critical to long-pulse high-beta operation, as small error fields may cause overloading of divertor plates in some configurations. Accordingly, as part of a broad collaborative effort, the detection and correction of error fields on the W7-X experiment has been performed using the trim coil system in conjunction with the flux surface mapping diagnostic and a high-resolution infrared camera. In the early commissioning phase of the experiment, the trim coils were used to open an n/m = 1/2 island chain in a specially designed magnetic configuration. The flux surface mapping diagnostic was then able to directly image the magnetic topology of the experiment, allowing the inference of a small ~4 cm intrinsic island chain. The suspected main sources of the error field, slight misalignments and deformations of the superconducting coils, were then confirmed through modeling based on detailed measurements of the coil positions. Observations of the limiter temperatures in module 5 show a clear dependence of the limiter heat flux pattern on the rotation of the perturbing fields. Plasma experiments without applied correcting fields show a significant asymmetry in neutral pressure (centered in module 4) and light emission (visible, H-alpha, CII, and CIII). Such pressure asymmetry is associated with plasma-wall (limiter) interaction asymmetries between the modules. Application of trim coil fields with an n = 1 waveform corrects the imbalance. Confirmation of the error fields allows the assessment of magnetic fields which resonate with the n/m = 5/5 island chain.

  5. Error begat error: design error analysis and prevention in social infrastructure projects.

    PubMed

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research that has addressed error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in congruence to prevent design errors from occurring and so ensure that safety and project performance are ameliorated.

  6. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

    When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2^8) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
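
    As a small, concrete piece of such a simulator, the sketch below implements the innermost code of the concatenated scheme, the CCSDS-recommended (n, n-16) CRC, using the common formulation with generator polynomial 0x1021 and an all-ones preset (consult the CCSDS recommendations for the authoritative definition):

        # CRC-16 with polynomial x^16 + x^12 + x^5 + 1 (0x1021) and initial value 0xFFFF,
        # the usual formulation of the CCSDS (n, n-16) cyclic redundancy check.
        def crc16_ccsds(data: bytes, crc: int = 0xFFFF) -> int:
            for byte in data:
                crc ^= byte << 8
                for _ in range(8):
                    crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
            return crc

        frame = b"CCSDS transfer frame payload"
        tagged = frame + crc16_ccsds(frame).to_bytes(2, "big")    # append the 16 parity bits
        print(hex(crc16_ccsds(frame)))
        print("clean frame ok:   ", crc16_ccsds(tagged) == 0)     # CRC over data+parity is zero when error-free
        corrupted = bytes([tagged[0] ^ 0x01]) + tagged[1:]
        print("corrupted caught: ", crc16_ccsds(corrupted) != 0)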

  7. Reducing errors in emergency surgery.

    PubMed

    Watters, David A K; Truskett, Philip G

    2013-06-01

    Errors are to be expected in health care. Adverse events occur in around 10% of surgical patients and may be even more common in emergency surgery. There is little formal teaching on surgical error in surgical education and training programmes despite their frequency. This paper reviews surgical error and provides a classification system, to facilitate learning. The approach and language used to enable teaching about surgical error was developed through a review of key literature and consensus by the founding faculty of the Management of Surgical Emergencies course, currently delivered by General Surgeons Australia. Errors may be classified as being the result of commission, omission or inition. An error of inition is a failure of effort or will and is a failure of professionalism. The risk of error can be minimized by good situational awareness, matching perception to reality, and, during treatment, reassessing the patient, team and plan. It is important to recognize and acknowledge an error when it occurs and then to respond appropriately. The response will involve rectifying the error where possible but also disclosing, reporting and reviewing at a system level all the root causes. This should be done without shaming or blaming. However, the individual surgeon still needs to reflect on their own contribution and performance. A classification of surgical error has been developed that promotes understanding of how the error was generated, and utilizes a language that encourages reflection, reporting and response by surgeons and their teams. © 2013 The Authors. ANZ Journal of Surgery © 2013 Royal Australasian College of Surgeons.

  8. Clinical determinants of PACS acceptance

    NASA Astrophysics Data System (ADS)

    Saarinen, Allan O.; Youngs, Gayle L.; Haynor, David R.; Loop, John W.

    1990-08-01

    One of the key determinants influencing how successfully a radiology department can convert from a conventional film-based environment to an exclusively digital imaging environment may be how well referring physician members of the hospital staff who are not radiologists endorse this new system. The benefits of Picture Archive and Communication Systems (PACS) to radiologists are becoming widely accepted and documented; however, physicians who interact with the radiology department represent an important user group whose views on PACS are less well understood. The acceptance of PACS by referring physicians (clinicians) may be critical to the overall utility of PACS as well as a major driving force behind why a hospital purchases PACS. The degree to which referring physicians support PACS may be dependent upon many factors. This study identifies, through the administration and analysis of a survey, several aspects which improve PACS acceptance by nonradiology physicians. It appears that the more patients a referring physician sends to the radiology department, the more time a physician spends traveling to and from the film file room retrieving films, and the more interested a referring physician is in computers, the higher his interest is in PACS. If a referring physician believes that PACS will save him or her time, will reduce the incidence of lost films, or will cause performance of radiology exams or generation of reports to be more efficient, the referring physician appears more likely to support PACS and to make the initial time investment necessary to learn how PACS equipment operates. The factors which cause referring physicians to support PACS are principally: (1) the elimination of lost, misplaced, and checked-out films, and (2) the elimination of trips to and from the file room. The major distractions of the technology are: (1) system reliability, and (2) reduced diagnostic capability. While the high cost of PACS is also a distraction, it is not the predominant concern.

  9. Axelrod model: accepting or discussing

    NASA Astrophysics Data System (ADS)

    Dybiec, Bartlomiej; Mitarai, Namiko; Sneppen, Kim

    2012-10-01

    Agents building social systems are characterized by complex states, and interactions among individuals can align their opinions. The Axelrod model describes how local interactions can result in the emergence of cultural domains. We propose two variants of the Axelrod model in which local consensus is reached either by an agent listening to and accepting one of its neighbors' opinions, or by two agents discussing their opinions and reaching an agreement with mixed opinions. We show that the local agreement rule affects the character of the transition between the single-culture and the multiculture regimes.
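
    The abstract describes the two local rules only verbally, so the mixing step below is an assumption rather than the paper's definition. In the sketch, agents carry F integer-valued cultural features and interact with probability equal to their overlap; the "accept" rule copies one differing trait from the neighbour, while the "discuss" rule has both agents settle on one of the two values at random:

        import random

        def interact(a, b, rule):
            overlap = sum(x == y for x, y in zip(a, b)) / len(a)
            if overlap == 1.0 or random.random() >= overlap:
                return                                        # identical or too dissimilar to interact
            i = random.choice([k for k in range(len(a)) if a[k] != b[k]])
            if rule == "accept":
                a[i] = b[i]                                   # listen and adopt the neighbour's trait
            else:                                             # "discuss": both settle on a mixed outcome
                a[i] = b[i] = random.choice([a[i], b[i]])

        random.seed(0)
        F, Q, N = 5, 10, 30                                   # features, traits per feature, agents on a ring
        agents = [[random.randrange(Q) for _ in range(F)] for _ in range(N)]
        for _ in range(20000):
            i = random.randrange(N)
            interact(agents[i], agents[(i + 1) % N], rule="accept")
        print("distinct cultures on the ring:", len({tuple(a) for a in agents}))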

  10. Honeywell Modular Automation System Acceptance Test Procedure

    SciTech Connect

    STUBBS, A.M.

    1999-09-21

    The purpose of this Acceptance Test Procedure (ATP) is to verify the operability of the three new furnaces as controlled by the new Honeywell Modular Automation System (MAS). The Honeywell MAS is being installed in PFP to control the three thermal stabilization furnaces in glovebox HA-211. The ATP provides instructions for testing the configuration of the Honeywell MAS at the Plutonium Finishing Plant (PFP). The test will be a field test of the analog inputs, analog outputs, and software interlocks. The interlock test will check the digital inputs and outputs. Field equipment will not be connected for this test. Simulated signals will be used to test thermocouple, limit switch, and vacuum pump inputs to the PLUMAS.

  11. Authentic tolerance: between forbearance and acceptance.

    PubMed

    Von Bergen, C W; Von Bergen, Beth A; Stubblefield, Claire; Bandow, Diane

    2012-01-01

    Promoting tolerance is seen as a key weapon in battling prejudice in diversity and multicultural training but its meaning has been modified recently. The classical definition of tolerance meant that others are entitled to their opinions and have the right to express them and that even though one may disagree with them, one can live in peace with such differences. In recent years, however, tolerance has come to mean that all ideas and practices must be accepted and affirmed and where appreciation and valuing of differences is the ultimate virtue. Such a neo-classical definition has alienated many who value equality and justice and limits the effectiveness of diversity initiatives that teach the promotion of tolerance. The authors offer authentic tolerance as an alternative, incorporating respect and civility toward others, not necessarily approval of their beliefs and behavior. All persons are equal, but all opinions and conduct are not equal.

  12. Error, contradiction and reversal in science and medicine.

    PubMed

    Coccheri, Sergio

    2017-06-01

    Errors and contradictions are not "per se" detrimental in science and medicine. Going back to the history of philosophy, Sir Francis Bacon stated that "truth emerges more readily from error than from confusion", and more recently Popper introduced the concept of an approximate, temporary truth that constitutes the engine of scientific progress. In biomedical research and in clinical practice we have witnessed, over the last decades, many overturnings or reversals of concepts and practices. This phenomenon may discourage patients from accepting ordinary medical care and may favour the choice of alternative medicine. The media often amplify the disappointment caused by these discrepancies. In this note I recommend conveying to patients the concept of knowledge that is confirmed and dependable at the present time. However, physicians should tolerate uncertainty and accept the idea that medical concepts and applications are subject to continuous progression, change and displacement. Copyright © 2017 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.

  13. Acceptance of Online Degrees by Undergraduate Mexican Students

    ERIC Educational Resources Information Center

    Padilla Rodriguez, Brenda Cecilia; Adams, Jonathan

    2014-01-01

    The quality and acceptance of online degree programs are still controversial issues. In Mexico, where access to technology is limited, there are few studies on the matter. Undergraduate students (n = 104) answered a survey that aimed to evaluate their knowledge of virtual education, their likelihood of enrollment in an online degree program, and…

  14. Acceptance-Enhanced Behavior Therapy for Trichotillomania in Adolescents

    ERIC Educational Resources Information Center

    Fine, Kathi M.; Walther, Michael R.; Joseph, Jessica M.; Robinson, Jordan; Ricketts, Emily J.; Bowe, William E.; Woods, Douglas W.

    2012-01-01

    Although several studies have examined the efficacy of Acceptance Enhanced Behavior Therapy (AEBT) for the treatment of trichotillomania (TTM) in adults, data are limited with respect to the treatment of adolescents. Our case series illustrates the use of AEBT for TTM in the treatment of two adolescents. The AEBT protocol (Woods & Twohig, 2008) is…

  15. Acceptance and Commitment Therapy (ACT) as a Career Counselling Strategy

    ERIC Educational Resources Information Center

    Hoare, P. Nancey; McIlveen, Peter; Hamilton, Nadine

    2012-01-01

    Acceptance and commitment therapy (ACT) has potential to contribute to career counselling. In this paper, the theoretical tenets of ACT and a selection of its counselling techniques are overviewed along with a descriptive case vignette. There is limited empirical research into ACT's application in career counselling. Accordingly, a research agenda…

  19. 14 CFR 189.5 - Limitation of liability.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... liability. The United States is not liable for any omission, error, or delay in transmitting or relaying, or for any failure to transmit or relay, any message accepted for transmission or relayed under this part, even if the omission, error, delay, or failure to transmit or relay is caused by the negligence of...

  20. 14 CFR 189.5 - Limitation of liability.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... liability. The United States is not liable for any omission, error, or delay in transmitting or relaying, or for any failure to transmit or relay, any message accepted for transmission or relayed under this part, even if the omission, error, delay, or failure to transmit or relay is caused by the negligence of...