Science.gov

Sample records for acceptable error limits

  1. Confidence limits and their errors

    SciTech Connect

    Rajendran Raja

    2002-03-22

    Confidence limits are commonplace in physics analysis. Great care must be taken in their calculation and use, especially in cases of limited statistics. We introduce the concept of statistical errors of confidence limits and argue that not only should limits be calculated but also their errors, in order to represent the results of the analysis to the fullest. We show that comparison of two different limits from two different experiments becomes easier when their errors are also quoted. Use of errors of confidence limits will lead to abatement of the debate on which method is best suited to calculate confidence limits.
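
    The paper's core proposal, quoting a statistical error on the limit itself, can be illustrated with a generic bootstrap. The sketch below is not Raja's derivation; the data, confidence level, and replicate counts are arbitrary choices for the example (Python with NumPy assumed):

      import numpy as np

      rng = np.random.default_rng(0)

      def upper_limit(sample, cl=0.90, n_boot=1000):
          # 90% CL upper limit on the mean via a percentile bootstrap
          means = [rng.choice(sample, size=len(sample)).mean() for _ in range(n_boot)]
          return np.quantile(means, cl)

      data = rng.exponential(scale=1.0, size=20)   # deliberately limited statistics

      # Statistical error OF the limit: rerun the whole limit-setting
      # procedure on resampled data and quote limit +/- spread.
      limits = [upper_limit(rng.choice(data, size=len(data))) for _ in range(200)]
      print(f"upper limit = {np.mean(limits):.2f} +/- {np.std(limits):.2f}")

    Two limits from different experiments can then be compared in units of their combined errors, as the abstract suggests.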

  2. What Are Acceptable Limits of Radiation?

    NASA Video Gallery

    Brad Gersey, lead research scientist at the Center for Radiation Engineering and Science for Space Exploration, or CRESSE, at Prairie View A&M University, describes the legal and acceptable limits ...

  3. The Acceptability Limit in Food Shelf Life Studies.

    PubMed

    Manzocco, Lara

    2016-07-26

    Despite its apparently intuitive nature, the acceptability limit is probably the most difficult parameter to define when developing a shelf life test. Although it dramatically affects the final shelf life value, it is surprising that discussion of its nature has been largely neglected in the literature, and only rare indications of possible methodologies for its determination are available. This is due to the fact that the definition of this parameter is a consumer- and market-oriented issue, requiring a rational evaluation of the potential negative consequences of food unacceptability in the actual market scenario. This paper critically analyzes the features of the acceptability limit and the role of the decision maker. The methodologies supporting the choice of the acceptability limit, as well as acceptability limit values proposed in the literature to calculate the shelf life of different foods, are reviewed.

  4. Error analysis of flux limiter schemes at extrema

    NASA Astrophysics Data System (ADS)

    Kriel, A. J.

    2017-01-01

    Total variation diminishing (TVD) schemes have been an invaluable tool for the solution of hyperbolic conservation laws. One of the major shortcomings of commonly used TVD methods is the loss of accuracy near extrema. Although large amounts of anti-diffusion usually benefit the resolution of discontinuities, a balanced limiter such as Van Leer's performs better at extrema. Reliable criteria, however, for the performance of a limiter near extrema are not readily apparent. This work provides theoretical quantitative estimates for the local truncation errors of flux limiter schemes at extrema for a uniform grid. Moreover, the component of the error attributed to the flux limiter was obtained. This component is independent of the problem and grid spacing, and may be considered a property of the limiter that reflects the performance at extrema. Numerical test problems validate the results.
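
    For context, a flux limiter enters the scheme as a function phi(r) of the ratio r of consecutive solution gradients. A minimal sketch (limiter pair and sample values chosen arbitrarily) of why accuracy is lost at extrema:

      import numpy as np

      def van_leer(r):
          # Van Leer's balanced limiter: phi(r) = (r + |r|) / (1 + |r|)
          return (r + np.abs(r)) / (1.0 + np.abs(r))

      def minmod(r):
          # a more diffusive TVD limiter, for comparison
          return np.maximum(0.0, np.minimum(1.0, r))

      # At a smooth extremum consecutive gradients change sign, so r <= 0,
      # every TVD limiter returns 0, and the scheme degrades to first order
      # there -- the local truncation error the paper quantifies.
      for r in (-0.5, 0.0, 0.5, 1.0, 2.0):
          print(f"r = {r:5.2f}   van_leer = {van_leer(r):.3f}   minmod = {minmod(r):.3f}")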

  5. An Error Score Model for Time-Limit Tests

    ERIC Educational Resources Information Center

    Ven, A. H. G. S. van der

    1976-01-01

    A more generalized error model for time-limit tests is developed. Model estimates are derived for right-attempted and wrong-attempted correlations both within the same test and between different tests. A comparison is made between observed correlations and their model counterparts and a fair agreement is found between observed and expected…

  6. Customization of user interfaces to reduce errors and enhance user acceptance.

    PubMed

    Burkolter, Dina; Weyers, Benjamin; Kluge, Annette; Luther, Wolfram

    2014-03-01

    Customization is assumed to reduce error and increase user acceptance in the human-machine relation. Reconfiguration gives the operator the option to customize a user interface according to his or her own preferences. An experimental study with 72 computer science students using a simulated process control task was conducted. The reconfiguration group (RG) interactively reconfigured their user interfaces and used the reconfigured user interface in the subsequent test whereas the control group (CG) used a default user interface. Results showed significantly lower error rates and higher acceptance of the RG compared to the CG while there were no significant differences between the groups regarding situation awareness and mental workload. Reconfiguration seems to be promising and therefore warrants further exploration.

  7. An error limit for the evolution of language.

    PubMed

    Nowak, M A; Krakauer, D C; Dress, A

    1999-10-22

    On the evolutionary trajectory that led to human language there must have been a transition from a fairly limited to an essentially unlimited communication system. The structure of modern human languages reveals at least two steps that are required for such a transition: in all languages (i) a small number of phonemes are used to generate a large number of words; and (ii) a large number of words are used to produce an unlimited number of sentences. The first (and simpler) step is the topic of the current paper. We study the evolution of communication in the presence of errors and show that this limits the number of objects (or concepts) that can be described by a simple communication system. The evolutionary optimum is achieved by using only a small number of signals to describe a few valuable concepts. Adding more signals does not increase the fitness of a language. This represents an error limit for the evolution of communication. We show that this error limit can be overcome by combining signals (phonemes) into words. The transition from an analogue to a digital system was a necessary step toward the evolution of human language.
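
    The saturation result can be reproduced with a toy version of the model: n signals on a perceptual axis, confused with a probability that decays with distance. The Gaussian confusion kernel and the payoff of one unit per correctly decoded signal below are illustrative stand-ins for the paper's signal-similarity matrix:

      import numpy as np

      def fitness(n, sigma=0.05):
          # total communicative payoff of n signals on [0, 1] whose
          # perception is blurred with width sigma
          x = np.linspace(0.0, 1.0, n)
          d = np.abs(x[:, None] - x[None, :])
          confusion = np.exp(-(d / sigma) ** 2)      # P(i decoded as j), unnormalized
          confusion /= confusion.sum(axis=1, keepdims=True)
          return confusion.diagonal().sum()          # expected number decoded correctly

      for n in (2, 5, 10, 20, 40, 80):
          print(n, round(fitness(n), 2))             # payoff saturates: the error limit

    Past the plateau, extra signals crowd the axis and are misheard as often as they help; combining signals into words is what escapes this limit.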

  8. An error limit for the evolution of language.

    PubMed Central

    Nowak, M A; Krakauer, D C; Dress, A

    1999-01-01

    On the evolutionary trajectory that led to human language there must have been a transition from a fairly limited to an essentially unlimited communication system. The structure of modern human languages reveals at least two steps that are required for such a transition: in all languages (i) a small number of phonemes are used to generate a large number of words; and (ii) a large number of words are used to produce an unlimited number of sentences. The first (and simpler) step is the topic of the current paper. We study the evolution of communication in the presence of errors and show that this limits the number of objects (or concepts) that can be described by a simple communication system. The evolutionary optimum is achieved by using only a small number of signals to describe a few valuable concepts. Adding more signals does not increase the fitness of a language. This represents an error limit for the evolution of communication. We show that this error limit can be overcome by combining signals (phonemes) into words. The transition from an analogue to a digital system was a necessary step toward the evolution of human language. PMID:10902547

  9. Peak-counts blood flow model-errors and limitations

    SciTech Connect

    Mullani, N.A.; Marani, S.K.; Ekas, R.D.; Gould, K.L.

    1984-01-01

    The peak-counts model has several advantages, but its use may be limited due to the condition that the venous egress may not be negligible at the time of peak counts. Consequently, blood flow measurements by the peak-counts model will depend on the bolus size, bolus duration, and the minimum transit time of the bolus through the region of interest. The effect of bolus size on the measurement of extraction fraction and blood flow was evaluated by injecting 1 to 30 ml of rubidium chloride in the femoral vein of a dog and measuring the myocardial activity with a beta probe over the heart. Regional blood flow measurements were not found to vary with bolus sizes up to 30 ml. The effect of bolus duration was studied by injecting a 10 cc bolus of tracer at different speeds in the femoral vein of a dog. All intravenous injections undergo a broadening of the bolus duration due to the transit time of the tracer through the lungs and the heart. This transit time was found to range from 4-6 seconds FWHM and dominates the duration of the bolus to the myocardium for injections of up to 3 seconds. A computer simulation has been carried out in which the different parameters of delay time, extraction fraction, and bolus duration can be changed to assess the errors in the peak-counts model. The results of the simulations show that the error will be greatest for short transit time delays and for low extraction fractions.

  10. WTO accepts rules limiting medicine exports to poor countries.

    PubMed

    James, John S

    2003-09-12

    In a controversial decision on August 30, 2003, the World Trade Organization agreed to complex rules limiting the export of medications to developing countries. Reaction to the decision so far has shown a complete disconnect between trade delegates and the WTO, both of which praise the new rules as a humanitarian advance, and those working in treatment access in poor countries, who believe that they will effectively block treatment from reaching many who need it. We have prepared a background paper that analyzes this decision and its implications and offers the opinions of key figures on both sides of the debate. It is clear that the rules were largely written for and probably by the proprietary pharmaceutical industry, and imposed on the countries in the WTO mainly by the United States. The basic conflict is that this industry does not want the development of international trade in low-cost generic copies of its patented medicines--not even for poor countries, where little or no market exists. Yet millions of people die each year without medication for treatable conditions such as AIDS, and drug pricing remains one of several major obstacles to controlling global epidemics.

  11. DWPF COAL CARBON WASTE ACCEPTANCE CRITERIA LIMIT EVALUATION

    SciTech Connect

    Lambert, D.; Choi, A.

    2010-06-21

    A paper study was completed to assess the impact on the Defense Waste Processing Facility (DWPF) Chemical Processing Cell (CPC) acid addition and melter off-gas flammability control strategy of processing Sludge Batch 10 (SB10) to SB13 with an added Fluidized Bed Steam Reformer (FBSR) stream and two Salt Waste Processing Facility (SWPF) products (Strip Effluent and Actinide Removal Stream). In all of the cases that were modeled, an acid mix using formic acid and nitric acid could be achieved that would produce a predicted Reducing/Oxidizing (REDOX) ratio of 0.20 Fe²⁺/ΣFe. There was sufficient formic acid in these combinations to reduce both the manganese and mercury present. Reduction of both manganese and mercury is necessary during Sludge Receipt and Adjustment Tank (SRAT) processing; however, other reducing agents such as coal and oxalate are not effective in this reduction. The next phase in this study will be experimental testing with SB10, FBSR, and both SWPF simulants to validate the assumptions in this paper study and determine whether there are any issues in processing these streams simultaneously. The paper study also evaluated a series of abnormal processing conditions to determine whether potential abnormal conditions in FBSR, SWPF or DWPF would produce melter feed that was too oxidizing or too reducing. In most of the cases that were modeled with one parameter at its extreme, an acid mix using formic acid and nitric acid could be achieved that would produce a predicted REDOX of 0.09-0.30 (target 0.20). However, when a run was completed with both high coal and oxalate, with minimum formic acid to reduce mercury and manganese, the final REDOX was predicted to be 0.49 with sludge and FBSR product, and 0.47 with sludge, FBSR product and both SWPF products, which exceeds the upper REDOX limit.

  12. 10 CFR 2.643 - Acceptance and docketing of application for limited work authorization.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... acceptable for processing, the Director of New Reactors or the Director of Nuclear Reactor Regulation will... 10 Energy 1 2013-01-01 2013-01-01 false Acceptance and docketing of application for limited work authorization. 2.643 Section 2.643 Energy NUCLEAR REGULATORY COMMISSION AGENCY RULES OF PRACTICE AND...

  13. Deconstructing the "reign of error": interpersonal warmth explains the self-fulfilling prophecy of anticipated acceptance.

    PubMed

    Stinson, Danu Anthony; Cameron, Jessica J; Wood, Joanne V; Gaucher, Danielle; Holmes, John G

    2009-09-01

    People's expectations of acceptance often come to create the acceptance or rejection they anticipate. The authors tested the hypothesis that interpersonal warmth is the behavioral key to this acceptance prophecy: If people expect acceptance, they will behave warmly, which in turn will lead other people to accept them; if they expect rejection, they will behave coldly, which will lead to less acceptance. A correlational study and an experiment supported this model. Study 1 confirmed that participants' warm and friendly behavior was a robust mediator of the acceptance prophecy compared to four plausible alternative explanations. Study 2 demonstrated that situational cues that reduced the risk of rejection also increased socially pessimistic participants' warmth and thus improved their social outcomes.

  14. Pain related catastrophizing on physical limitation in rheumatoid arthritis patients. Is acceptance important?

    PubMed

    Costa, Joana; Pinto-Gouveia, José; Marôco, João

    2014-01-01

    The experience of Rheumatoid Arthritis (RA) includes significant suffering and life disruption. This cross-sectional study examined the associations between pain, catastrophizing, acceptance and physical limitation in 55 individuals (11 males and 44 females; mean age = 54.37; SD = 18.346) from the Portuguese population with RA 2 years after diagnosis; it also explored the role of acceptance as a mediator process between pain, catastrophizing and physical limitation. Results showed a positive correlation between pain and catastrophizing (r = .544; p ≤ .001), and also between pain and 2-year physical limitation (r = .531; p ≤ .001). Results also showed that acceptance was negatively correlated with physical limitation 2 years after diagnosis (r = -.476; p ≤ .001). Path analysis was performed to explore the direct effect of pain (β = -.393; SD = .044; Z = 3.180; p ≤ .001) and catastrophizing (n.sig.) on physical limitation, and to explore the buffering effect of acceptance in this relationship (indirect effect β = -.080). Results showed that physical limitation is not necessarily a direct product of pain and catastrophizing; acceptance was also involved. Pain and catastrophizing are associated, but the influence of catastrophizing on physical limitation is promoted by low levels of acceptance. Results emphasize the relevance of acceptance as the emotional regulation process by which pain and catastrophizing influence physical functioning, and establish the basic mechanism by which pain and catastrophizing operate in a contextual-based perspective. The study results also offer a novel approach that may help behavioral health and medical providers prevent and treat these conditions.

  15. 75 FR 6371 - Jordan Hydroelectric Limited Partnership; Notice of Application Accepted for Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-09

    ... Application: Major Original License b. Project No.: P-12740-003 c. Date filed: July 13, 2009 d. Applicant: Jordan Hydroelectric Limited Partnership e. Name of Project: Flannagan Hydroelectric Project f. Location... Energy Regulatory Commission Jordan Hydroelectric Limited Partnership; Notice of Application Accepted...

  16. Legitimization of regulatory norms: Waterfowl hunter acceptance of changing duck bag limits

    USGS Publications Warehouse

    Schroeder, Susan A.; Fulton, David C.; Lawrence, Jeffrey S.; Cordts, Steven D.

    2014-01-01

    Few studies have examined response to regulatory change over time, or addressed hunter attitudes about changes in hunting bag limits. This article explores Minnesota waterfowl hunters’ attitudes about duck bag limits, examining attitudes about two state duck bag limits that were initially more restrictive than the maximum set by the U.S. Fish and Wildlife Service (USFWS), but then increased to match federal limits. Results are from four mail surveys that examined attitudes about bag limits over time. Following two bag limit increases, a greater proportion of hunters rated the new bag limit “too high” and a smaller proportion rated it “too low.” Several years following the first bag limit increase, the proportion of hunters who indicated that the limit was “too high” had declined, suggesting hunter acceptance of the new regulation. Results suggest that waterfowl bag limits may represent legal norms that influence hunter attitudes and gain legitimacy over time.

  17. Statistical analysis of the limitation of half integer resonances on the available momentum acceptance of the High Energy Photon Source

    NASA Astrophysics Data System (ADS)

    Jiao, Yi; Duan, Zhe

    2017-01-01

    In a diffraction-limited storage ring, half integer resonances can have strong effects on the beam dynamics, associated with the large detuning terms from the strong focusing and strong sextupoles as required for an ultralow emittance. In this study, the limitation of half integer resonances on the available momentum acceptance (MA) was statistically analyzed based on one design of the High Energy Photon Source (HEPS). It was found that the probability of MA reduction due to crossing of half integer resonances is closely correlated with the level of beta beats at the nominal tunes, but independent of the error sources. The analysis indicated that for the presented HEPS lattice design, the rms amplitude of beta beats should be kept below 1.5% horizontally and 2.5% vertically to reach a small MA reduction probability of about 1%.

  18. Acceptance Control Charts with Stipulated Error Probabilities Based on Poisson Count Data

    DTIC Science & Technology

    1973-01-01

    Mhatre, Suresh; Scheaffer, Richard L.; Leavenworth, Richard S. Department of Industrial and Systems Engineering, University of Florida, Gainesville (Contract N00014-75-C-0783). ABSTRACT (truncated): An acceptance control charting…
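
    A generic acceptance control chart of this kind picks the smallest count limit whose false-alarm rate at the acceptable process level (APL) meets a stipulated producer's risk, then reports the consumer's risk realized at the rejectable process level (RPL). A minimal sketch (the levels and risks are invented for the example, and the report's exact formulation may differ; SciPy assumed):

      from scipy.stats import poisson

      def acceptance_limit(lam_apl, lam_rpl, alpha=0.05):
          # smallest c with P(count > c | APL) <= alpha, plus the
          # consumer's risk P(count <= c | RPL) that c then implies
          c = int(lam_apl)
          while 1.0 - poisson.cdf(c, lam_apl) > alpha:
              c += 1
          return c, poisson.cdf(c, lam_rpl)

      c, beta = acceptance_limit(lam_apl=4.0, lam_rpl=12.0)
      print(f"signal when count > {c}; consumer's risk at RPL = {beta:.3f}")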

  19. A SECOND MOMENT EXPONENTIAL ERROR BOUND FOR PEAK LIMITED BINARY SYMMETRIC COHERENT CHANNELS AT LOW SNR.

    DTIC Science & Technology

    An exponential-type bound on error rate, Pe, for peak-limited binary coherent channels operated at low SNR is presented. The bound depends exponentially only on the first and second moments of the channel output and serves to justify, in part, the use of SNR calculations for error-rate performance…

  20. Small Inertial Measurement Units - Sources of Error and Limitations on Accuracy

    NASA Technical Reports Server (NTRS)

    Hoenk, M. E.

    1994-01-01

    Limits on the precision of small accelerometers for inertial measurement units are enumerated and discussed. Scaling laws and errors which affect the precision are discussed in terms of tradeoffs between size, sensitivity, and cost.

  1. Application of Zoning and ``Limits of Acceptable Change'' to Manage Snorkelling Tourism

    NASA Astrophysics Data System (ADS)

    Roman, George S. J.; Dearden, Philip; Rollins, Rick

    2007-06-01

    Zoning and applying Limits of Acceptable Change (LAC) are two promising strategies for managing tourism in Marine Protected Areas (MPAs). Typically, these management strategies require the collection and integration of ecological and socioeconomic data. This problem is illustrated by a case study of Koh Chang National Marine Park, Thailand. Biophysical surveys assessed coral communities in the MPA to derive indices of reef diversity and vulnerability. Social surveys assessed visitor perceptions and satisfaction with conditions encountered on snorkelling tours. Notably, increased coral mortality caused a significant decrease in visitor satisfaction. The two studies were integrated to prescribe zoning and “Limits of Acceptable Change” (LAC). As a biophysical indicator, the data suggest a LAC value of 0.35 for the coral mortality index. As a social indicator, the data suggest that a significant fraction of visitors would find a LAC value of under 30 snorkellers per site as acceptable. The draft zoning plan prescribed four different types of zones: (I) a Conservation Zone with no access apart from monitoring or research; (II) Tourism Zones with high tourism intensities at less vulnerable reefs; (III) Ecotourism zones with a social LAC standard of <30 snorkellers per site, and (IV) General Use Zones to meet local artisanal fishery needs. This study illustrates how ecological and socioeconomic field studies in MPAs can be integrated to craft zoning plans addressing multiple objectives.

  2. Application of zoning and "limits of acceptable change" to manage snorkelling tourism.

    PubMed

    Roman, George S J; Dearden, Philip; Rollins, Rick

    2007-06-01

    Zoning and applying Limits of Acceptable Change (LAC) are two promising strategies for managing tourism in Marine Protected Areas (MPAs). Typically, these management strategies require the collection and integration of ecological and socioeconomic data. This problem is illustrated by a case study of Koh Chang National Marine Park, Thailand. Biophysical surveys assessed coral communities in the MPA to derive indices of reef diversity and vulnerability. Social surveys assessed visitor perceptions and satisfaction with conditions encountered on snorkelling tours. Notably, increased coral mortality caused a significant decrease in visitor satisfaction. The two studies were integrated to prescribe zoning and "Limits of Acceptable Change" (LAC). As a biophysical indicator, the data suggest a LAC value of 0.35 for the coral mortality index. As a social indicator, the data suggest that a significant fraction of visitors would find a LAC value of under 30 snorkellers per site as acceptable. The draft zoning plan prescribed four different types of zones: (I) a Conservation Zone with no access apart from monitoring or research; (II) Tourism Zones with high tourism intensities at less vulnerable reefs; (III) Ecotourism zones with a social LAC standard of <30 snorkellers per site, and (IV) General Use Zones to meet local artisanal fishery needs. This study illustrates how ecological and socioeconomic field studies in MPAs can be integrated to craft zoning plans addressing multiple objectives.

  3. Error Pattern Analysis of Elementary School-Aged Students with Limited English Proficiency

    ERIC Educational Resources Information Center

    Yang, Chin Wen; Sherman, Helene; Murdick, Nikki

    2011-01-01

    The purpose of this research study was to investigate and classify particular categories of mathematical errors made by students with Limited English Proficiency. Participants included 15 general education teachers, two English as Second Language teachers, and 91 Limited English Proficiency students. General education teachers provided mathematics…

  4. An efficient approach for limited-data chemical species tomography and its error bounds

    PubMed Central

    Polydorides, N.; Tsekenis, S.-A.; McCann, H.; Prat, V.-D. A.; Wright, P.

    2016-01-01

    We present a computationally efficient reconstruction method for the limited-data chemical species tomography problem that incorporates projection of the unknown gas concentration function onto a low-dimensional subspace, and regularization using prior information obtained from a simple flow model. In this context, the contribution of this work is on the analysis of the projection-induced data errors and the calculation of bounds for the overall image error incorporating the impact of projection and regularization errors as well as measurement noise. As an extension to this methodology, we present a variant algorithm that preserves the positivity of the concentration image. PMID:27118923
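
    A minimal numerical sketch of the two ingredients named in the abstract, subspace projection plus regularization, using a random toy geometry in place of real beam paths and a plain Tikhonov prior in place of the flow-model prior:

      import numpy as np

      rng = np.random.default_rng(1)

      m, n, k = 32, 400, 10              # measurements << pixels; k-dim subspace
      A = rng.random((m, n))             # toy path-integral geometry matrix
      U = rng.random((n, k))             # subspace basis (e.g. smooth flow modes)
      x_true = U @ rng.random(k)
      y = A @ x_true + 0.01 * rng.standard_normal(m)

      lam = 1e-2                         # regularization weight
      B = A @ U                          # forward model projected onto the subspace
      c = np.linalg.solve(B.T @ B + lam * np.eye(k), B.T @ y)
      x_hat = np.clip(U @ c, 0.0, None)  # positivity, as in the paper's variant
      print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))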

  5. Quantum Error-Correction-Enhanced Magnetometer Overcoming the Limit Imposed by Relaxation.

    PubMed

    Herrera-Martí, David A; Gefen, Tuvia; Aharonov, Dorit; Katz, Nadav; Retzker, Alex

    2015-11-13

    When incorporated in quantum sensing protocols, quantum error correction can be used to correct for high-frequency noise, as the correction procedure does not depend on the actual shape of the noise spectrum. As such, it provides a powerful way to complement usual refocusing techniques. Relaxation imposes a fundamental limit on the sensitivity of state-of-the-art quantum sensors which cannot be overcome by dynamical decoupling; the only way to overcome it is to utilize quantum error-correcting codes. We present a superconducting magnetometry design that incorporates approximate quantum error correction, in which the signal is generated by a two-qubit Hamiltonian term. This two-qubit term is provided by the dynamics of a tunable coupler between two transmon qubits. For fast enough correction, it is possible to lengthen the coherence time of the device beyond the relaxation limit.

  6. 20 CFR 410.671 - Revision for error or other reason; time limitation generally.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Revision for error or other reason; time limitation generally. 410.671 Section 410.671 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL COAL..., Other Determinations, Administrative Review, Finality of Decisions, and Representation of Parties §...

  7. Natural Conception May Be an Acceptable Option in HIV-Serodiscordant Couples in Resource Limited Settings.

    PubMed

    Sun, Lijun; Wang, Fang; Liu, An; Xin, Ruolei; Zhu, Yunxia; Li, Jianwei; Shao, Ying; Ye, Jiangzhu; Chen, Danqing; Li, Zaicun

    2015-01-01

    Many HIV serodiscordant couples have a strong desire to have their own biological children. Natural conception may be the only choice in some resource limited settings but data about natural conception is limited. Here, we reported our findings of natural conception in HIV serodiscordant couples. Between January 2008 and June 2014, we retrospectively collected data on 91 HIV serodiscordant couples presenting to Beijing Youan Hospital with childbearing desires. HIV counseling, effective ART on HIV infected partners, pre-exposure prophylaxis (PrEP) and post-exposure prophylaxis (PEP) in negative female partners and timed intercourse were used to maximally reduce the risk of HIV transmission. Of the 91 HIV serodiscordant couples, 43 were positive in male partners and 48 were positive in female partners. There were 196 unprotected vaginal intercourses, 100 natural conception and 97 newborns. There were no cases of HIV seroconversion in uninfected sexual partners. Natural conception may be an acceptable option in HIV-serodiscordant couples in resource limited settings if HIV-positive individuals have undetectable viremia on HAART, combined with HIV counseling, PrEP, PEP and timed intercourse.

  8. Fetal tolerance in human pregnancy--a crucial balance between acceptance and limitation of trophoblast invasion.

    PubMed

    von Rango, Ulrike

    2008-01-15

    During human pregnancy the semi-allogeneic/allogeneic fetal graft is normally accepted by the mother's immune system. Initially the contact between maternal and fetal cells is restricted to the decidua, but during the 2nd trimester it is extended to the entire body. Two contrary requirements influence the extent of invasion of extravillous fetal trophoblast cells (EVT) in the maternal decidua: anchorage of the placenta to ensure fetal nutrition, and protection of the uterine wall against over-invasion. To establish the crucial balance between tolerance of the EVT and its limitation, recognition of the semi-allogeneic/allogeneic fetal cells by maternal leukocytes is a prerequisite. A key mechanism to limit EVT invasion is induction of EVT apoptosis. Apoptotic bodies are phagocytosed by antigen-presenting cells (APC). Peptides from apoptotic cells are presented by APCs and induce an antigen-specific tolerance against the foreign antigens on EVT cells. These pathways, including up-regulation of the expression of IDO, IFNgamma and CTLA-4 as well as the induction of T(regulatory) cells, are general immunological mechanisms which have developed to maintain peripheral tolerance to self-antigens. Together these data suggest that the mother extends her "definition of self" for 9 months to the foreign antigens of the fetus.

  9. Multipoint Lods Provide Reliable Linkage Evidence Despite Unknown Limiting Distribution: Type I Error Probabilities Decrease with Sample Size for Multipoint Lods and Mods

    PubMed Central

    Hodge, Susan E.; Rodriguez-Murillo, Laura; Strug, Lisa J.; Greenberg, David A.

    2009-01-01

    We investigate the behavior of type I error rates in model-based multipoint (MP) linkage analysis, as a function of sample size (N). We consider both MP lods (i.e., MP linkage analysis that uses the correct genetic model) and MP mods (maximizing MP lods over 18 dominant and recessive models). Following Xing & Elston [2006], we first consider MP linkage analysis limited to a single position; then we enlarge the scope and maximize the lods and mods over a span of positions. In all situations we examined, type I error rates decrease with increasing sample size, apparently approaching zero. We show: (a) For MP lods analyzed only at a single position, well-known statistical theory predicts that type I error rates approach zero. (b) For MP lods and mods maximized over position, this result has a different explanation, related to the fact that one maximizes the scores over only a finite portion of the parameter range. The implications of these findings may be far-reaching: Although it is widely accepted that fixed nominal critical values for MP lods and mods are not known, this study shows that whatever the nominal error rates are, the actual error rates appear to decrease with increasing sample size. Moreover, the actual (observed) type I error rate may be quite small for any given study. We conclude that multipoint lod and mod scores provide reliable linkage evidence for complex diseases, despite the unknown limiting distributions of these multipoint scores. PMID:18613118

  10. Acceptable symbiont cell size differs among cnidarian species and may limit symbiont diversity.

    PubMed

    Biquand, Elise; Okubo, Nami; Aihara, Yusuke; Rolland, Vivien; Hayward, David C; Hatta, Masayuki; Minagawa, Jun; Maruyama, Tadashi; Takahashi, Shunichi

    2017-03-21

    Reef-building corals form symbiotic relationships with dinoflagellates of the genus Symbiodinium. Symbiodinium are genetically and physiologically diverse, and corals may be able to adapt to different environments by altering their dominant Symbiodinium phylotype. Notably, each coral species associates only with specific Symbiodinium phylotypes, and consequently the diversity of symbionts available to the host is limited by the species specificity. Currently, it is widely presumed that species specificity is determined by the combination of cell-surface molecules on the host and symbiont. Here we show experimental evidence supporting a new model to explain at least part of the specificity in coral-Symbiodinium symbiosis. Using the laboratory model Aiptasia-Symbiodinium system, we found that symbiont infectivity is related to cell size; larger Symbiodinium phylotypes are less likely to establish a symbiotic relationship with the host Aiptasia. This size dependency is further supported by experiments where symbionts were replaced by artificial fluorescent microspheres. Finally, experiments using two different coral species demonstrate that our size-dependent-infection model can be expanded to coral-Symbiodinium symbiosis, with the acceptability of large-sized Symbiodinium phylotypes differing between two coral species. Thus the selectivity of the host for symbiont cell size can affect the diversity of symbionts in corals. The ISME Journal advance online publication, 21 March 2017; doi:10.1038/ismej.2017.17.

  11. Validation of analytical methods involved in dissolution assays: acceptance limits and decision methodologies.

    PubMed

    Rozet, E; Ziemons, E; Marini, R D; Boulanger, B; Hubert, Ph

    2012-11-02

    Dissolution tests are key elements to ensure continuing product quality and performance. The ultimate goal of these tests is to assure consistent product quality within a defined set of specification criteria. Validation of an analytical method aimed at assessing the dissolution profile of products or at verifying compliance with pharmacopoeias should demonstrate that the method is able to correctly declare two dissolution profiles as similar, or drug products as compliant with their specifications. It is essential to ensure that these analytical methods are fit for their purpose, and method validation is aimed at providing this guarantee. However, even the ICH Q2 guideline gives no information explaining how to decide whether the method under validation is valid for its final purpose or not. Are all of the validation criteria needed to ensure that a Quality Control (QC) analytical method for dissolution testing is valid? What acceptance limits should be set on these criteria? How should one decide on a method's validity? These are the questions that this work aims at answering. Focus is placed on complying with the current implementation of the Quality by Design (QbD) principles in the pharmaceutical industry, in order to correctly define the Analytical Target Profile (ATP) of analytical methods involved in dissolution tests. Analytical method validation is then the natural demonstration that the developed methods are fit for their intended purpose, rather than the perfunctory checklist exercise still generally performed merely to complete the filing required to obtain product marketing authorization.

  12. 12 CFR 250.163 - Inapplicability of amount limitations to “ineligible acceptances.”

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... the acceptances are not the type described in section 13 of the Federal Reserve Act. (c) A review of... section 13 (of the Federal Reserve Act), inasmuch as the laws of many States confer broader acceptance... section 13 of the Federal Reserve Act. Yet, this appears to be a development that Congress did...

  13. 12 CFR 250.163 - Inapplicability of amount limitations to “ineligible acceptances.”

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... the acceptances are not the type described in section 13 of the Federal Reserve Act. (c) A review of... section 13 (of the Federal Reserve Act), inasmuch as the laws of many States confer broader acceptance... section 13 of the Federal Reserve Act. Yet, this appears to be a development that Congress did...

  14. Guidance on the establishment of acceptable daily exposure limits (ADE) to support Risk-Based Manufacture of Pharmaceutical Products.

    PubMed

    Sargent, Edward V; Faria, Ellen; Pfister, Thomas; Sussman, Robert G

    2013-03-01

    Health-based limits for active pharmaceutical ingredients (API), referred to as acceptable daily exposures (ADEs), are necessary to the pharmaceutical industry and are used to derive acceptance limits for cleaning validation and to evaluate cross-carryover. An ADE represents a dose of an API unlikely to cause adverse effects if an individual is exposed, by any route, at or below this dose every day over a lifetime. Derivations of ADEs need to be consistent with ICH Q9 as well as other scientific approaches for the derivation of health-based limits that help to manage risks to both product quality and operator safety during the manufacture of pharmaceutical products. Previous methods for establishing acceptance limits in cleaning validation programs are considered arbitrary and have largely ignored the clinical and toxicological data available for a drug substance. Since the ADE utilizes all available pharmaceutical data and applies scientifically acceptable risk assessment methodology, it is more holistic and consistent with other quantitative risk assessment applications, such as the derivation of occupational exposure limits. Processes for hazard identification, dose-response assessment, uncertainty factor analysis and documentation are reviewed.
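
    For orientation, health-based limits of this kind are typically derived by scaling a point of departure by a composite of uncertainty factors. The formulation and the numbers below are a generic illustration, not values from the guidance:

      \mathrm{ADE}\ (\mathrm{mg/day}) \;=\; \frac{\mathrm{NOAEL}\ (\mathrm{mg/kg/day}) \times BW\ (\mathrm{kg})}{\mathrm{UF_{composite}} \times \mathrm{MF}},
      \qquad \text{e.g.}\quad \frac{5 \times 50}{100 \times 2} \;=\; 1.25\ \mathrm{mg/day}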

  15. Improving transient performance of adaptive control architectures using frequency-limited system error dynamics

    NASA Astrophysics Data System (ADS)

    Yucelen, Tansel; De La Torre, Gerardo; Johnson, Eric N.

    2014-11-01

    Although adaptive control theory offers mathematical tools to achieve system performance without excessive reliance on dynamical system models, its applications to safety-critical systems can be limited due to poor transient performance and robustness. In this paper, we develop an adaptive control architecture to achieve stabilisation and command following of uncertain dynamical systems with improved transient performance. Our framework consists of a new reference system and an adaptive controller. The proposed reference system captures a desired closed-loop dynamical system behaviour modified by a mismatch term representing the high-frequency content between the uncertain dynamical system and this reference system, i.e., the system error. In particular, this mismatch term allows the frequency content of the system error dynamics to be limited, which is used to drive the adaptive controller. It is shown that this key feature of our framework yields fast adaptation without incurring high-frequency oscillations in the transient performance. We further show the effects of design parameters on the system performance, analyse closeness of the uncertain dynamical system to the unmodified (ideal) reference system, discuss robustness of the proposed approach with respect to time-varying uncertainties and disturbances, and make connections to gradient minimisation and classical control theory. A numerical example is provided to demonstrate the efficacy of the proposed architecture.

  16. Ptychographic overlap constraint errors and the limits of their numerical recovery using conjugate gradient descent methods.

    PubMed

    Tripathi, Ashish; McNulty, Ian; Shpyrko, Oleg G

    2014-01-27

    Ptychographic coherent x-ray diffractive imaging is a form of scanning microscopy that does not require optics to image a sample. A series of scanned coherent diffraction patterns recorded from multiple overlapping illuminated regions on the sample are inverted numerically to retrieve its image. The technique recovers the phase lost by detecting the diffraction patterns by using experimentally known constraints, in this case the measured diffraction intensities and the assumed scan positions on the sample. The spatial resolution of the recovered image of the sample is limited by the angular extent over which the diffraction patterns are recorded and how well these constraints are known. Here, we explore how reconstruction quality degrades with uncertainties in the scan positions. We show experimentally that large errors in the assumed scan positions on the sample can be numerically determined and corrected using conjugate gradient descent methods. We also explore in simulations the limits, based on the signal to noise of the diffraction patterns and amount of overlap between adjacent scan positions, of just how large these errors can be and still be rendered tractable by this method.

  17. A hybrid variational-ensemble data assimilation scheme with systematic error correction for limited-area ocean models

    NASA Astrophysics Data System (ADS)

    Oddo, Paolo; Storto, Andrea; Dobricic, Srdjan; Russo, Aniello; Lewis, Craig; Onken, Reiner; Coelho, Emanuel

    2016-10-01

    A hybrid variational-ensemble data assimilation scheme to estimate the vertical and horizontal parts of the background error covariance matrix for an ocean variational data assimilation system is presented and tested in a limited-area ocean model implemented in the western Mediterranean Sea. An extensive data set collected during the Recognized Environmental Picture Experiments conducted in June 2014 by the Centre for Maritime Research and Experimentation has been used for assimilation and validation. The hybrid scheme is used both to correct the systematic error introduced in the system by the external forcing (initialisation, lateral and surface open boundary conditions) and model parameterisation, and to improve the representation of small-scale errors in the background error covariance matrix. An ensemble system, generated through perturbation of assimilated observations, is run offline for further use in the hybrid scheme. Results of four different experiments have been compared. The reference experiment uses the classical stationary formulation of the background error covariance matrix and has no systematic error correction. The other three experiments include the systematic error correction, a hybrid background error covariance matrix combining the static and the ensemble-derived errors of the day, or both. Results show that the hybrid scheme, when used in conjunction with the systematic error correction, reduces the mean absolute error of temperature and salinity misfits by 55% and 42% respectively, versus statistics arising from standard climatological covariances without systematic error correction.
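
    The hybrid background error covariance referred to here is conventionally written as a weighted blend of the static matrix and an ensemble-derived "errors of the day" matrix; the notation below follows the common hybrid-variational convention rather than the paper's exact construction:

      B \;=\; \beta_s^2\, B_{\mathrm{static}} \;+\; \beta_e^2\, B_{\mathrm{ens}},
      \qquad \beta_s^2 + \beta_e^2 = 1,
      \qquad B_{\mathrm{ens}} \approx \frac{1}{K-1} \sum_{k=1}^{K} \left(x_k - \bar{x}\right)\left(x_k - \bar{x}\right)^{\top}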

  18. 12 CFR 250.163 - Inapplicability of amount limitations to “ineligible acceptances.”

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Reserve Act. (c) A review of the legislative history surrounding the enactment of the acceptance... the provisions of section 13 (of the Federal Reserve Act), inasmuch as the laws of many States confer... the type described in section 13 of the Federal Reserve Act. Yet, this appears to be a...

  19. 12 CFR 250.163 - Inapplicability of amount limitations to “ineligible acceptances.”

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Reserve Act. (c) A review of the legislative history surrounding the enactment of the acceptance... the provisions of section 13 (of the Federal Reserve Act), inasmuch as the laws of many States confer... the type described in section 13 of the Federal Reserve Act. Yet, this appears to be a...

  20. 12 CFR 250.163 - Inapplicability of amount limitations to “ineligible acceptances.”

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Reserve Act. (c) A review of the legislative history surrounding the enactment of the acceptance... the provisions of section 13 (of the Federal Reserve Act), inasmuch as the laws of many States confer... the type described in section 13 of the Federal Reserve Act. Yet, this appears to be a...

  1. La composition academique: les limites de l'acceptabilite (Composition for Academic Purposes: Criteria for Acceptability).

    ERIC Educational Resources Information Center

    Grenall, G. M.

    1981-01-01

    Examines the pedagogical approaches and problems attendant to the development of English writing programs for foreign students. Discusses the skills necessary to handle course work, such as essay tests, term papers and reports, theses and dissertations, and focuses particularly on diagnostic problems and acceptability criteria. Societe Nouvelle…

  2. Analysis of operator splitting errors for near-limit flame simulations

    NASA Astrophysics Data System (ADS)

    Lu, Zhen; Zhou, Hua; Li, Shan; Ren, Zhuyin; Lu, Tianfeng; Law, Chung K.

    2017-04-01

    High-fidelity simulations of ignition, extinction and oscillatory combustion processes are of practical interest in a broad range of combustion applications. Splitting schemes, widely employed in reactive flow simulations, could fail for stiff reaction-diffusion systems exhibiting near-limit flame phenomena. The present work first employs a model perfectly stirred reactor (PSR) problem with an Arrhenius reaction term and a linear mixing term to study the effects of splitting errors on the near-limit combustion phenomena. Analysis shows that the errors induced by decoupling of the fractional steps may result in unphysical extinction or ignition. The analysis is then extended to the prediction of ignition, extinction and oscillatory combustion in unsteady PSRs of various fuel/air mixtures with a 9-species detailed mechanism for hydrogen oxidation and an 88-species skeletal mechanism for n-heptane oxidation, together with a Jacobian-based analysis for the time scales. The tested schemes include the Strang splitting, the balanced splitting, and a newly developed semi-implicit midpoint method. Results show that the semi-implicit midpoint method can accurately reproduce the dynamics of the near-limit flame phenomena and it is second-order accurate over a wide range of time step size. For the extinction and ignition processes, both the balanced splitting and midpoint method can yield accurate predictions, whereas the Strang splitting can lead to significant shifts on the ignition/extinction processes or even unphysical results. With an enriched H radical source in the inflow stream, a delay of the ignition process and the deviation on the equilibrium temperature are observed for the Strang splitting. On the contrary, the midpoint method that solves reaction and diffusion together matches the fully implicit accurate solution. The balanced splitting predicts the temperature rise correctly but with an over-predicted peak. For the sustainable and decaying oscillatory
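
    The model PSR in the abstract reduces to a single stiff ODE with a reaction source and a linear mixing term. The sketch below (all rates invented for illustration) shows a large Strang-split step settling far from the unsplit balance; shrinking dt closes the gap:

      import numpy as np

      T_in, T_b, tau = 300.0, 2000.0, 1.0e-3   # inflow temp, burnt temp, residence time
      def reaction(T): return 5.0e6 * np.exp(-1.0e4 / T) * (T_b - T)
      def mixing(T):   return (T_in - T) / tau

      def step_unsplit(T, dt, n=500):          # finely substepped reference solve
          for _ in range(n):
              T += (dt / n) * (reaction(T) + mixing(T))
          return T

      def step_strang(T, dt, n=100):           # mixing half-steps solved exactly
          T = T_in + (T - T_in) * np.exp(-0.5 * dt / tau)
          for _ in range(n):                   # full reaction step in between
              T += (dt / n) * reaction(T)
          T = T_in + (T - T_in) * np.exp(-0.5 * dt / tau)
          return T

      dt, T1, T2 = 5.0e-4, 1250.0, 1250.0      # dt comparable to tau: stiff regime
      for _ in range(40):
          T1, T2 = step_unsplit(T1, dt), step_strang(T2, dt)
      print(f"unsplit {T1:.0f} K   Strang {T2:.0f} K   gap {abs(T1 - T2):.0f} K")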

  3. First Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Data Processing Methods and Systematic Error Limits

    NASA Technical Reports Server (NTRS)

    Hinshaw, G.; Barnes, C.; Bennett, C. L.; Greason, M. R.; Halpern, M.; Hill, R. S.; Jarosik, N.; Kogut, A.; Limon, M.; Meyer, S. S.

    2003-01-01

    We describe the calibration and data processing methods used to generate full-sky maps of the cosmic microwave background (CMB) from the first year of Wilkinson Microwave Anisotropy Probe (WMAP) observations. Detailed limits on residual systematic errors are assigned based largely on analyses of the flight data supplemented, where necessary, with results from ground tests. The data are calibrated in flight using the dipole modulation of the CMB due to the observatory's motion around the Sun. This constitutes a full-beam calibration source. An iterative algorithm simultaneously fits the time-ordered data to obtain calibration parameters and pixelized sky map temperatures. The noise properties are determined by analyzing the time-ordered data with this sky signal estimate subtracted. Based on this, we apply a pre-whitening filter to the time-ordered data to remove a low level of 1/f noise. We infer and correct for a small (approx. 1%) transmission imbalance between the two sky inputs to each differential radiometer, and we subtract a small sidelobe correction from the 23 GHz (K band) map prior to further analysis. No other systematic error corrections are applied to the data. Calibration and baseline artifacts, including the response to environmental perturbations, are negligible. Systematic uncertainties are comparable to statistical uncertainties in the characterization of the beam response. Both are accounted for in the covariance matrix of the window function and are propagated to uncertainties in the final power spectrum. We characterize the combined upper limits to residual systematic uncertainties through the pixel covariance matrix.

  4. Post-manufacturing, 17-times acceptable raw bit error rate enhancement, dynamic codeword transition ECC scheme for highly reliable solid-state drives, SSDs

    NASA Astrophysics Data System (ADS)

    Tanakamaru, Shuhei; Fukuda, Mayumi; Higuchi, Kazuhide; Esumi, Atsushi; Ito, Mitsuyoshi; Li, Kai; Takeuchi, Ken

    2011-04-01

    A dynamic codeword transition ECC scheme is proposed for highly reliable solid-state drives, SSDs. By monitoring the error number or the write/erase cycles, the ECC codeword dynamically increases from 512 Byte (+parity) to 1 KByte, 2 KByte, 4 KByte…32 KByte. The proposed ECC with a larger codeword decreases the failure rate after ECC. As a result, the acceptable raw bit error rate, BER, before ECC is enhanced. Assuming a NAND Flash memory which requires 8-bit correction in 512 Byte codeword ECC, a 17-times higher acceptable raw BER than the conventional fixed 512 Byte codeword ECC is realized for the mobile phone application without interleaving. For the MP3 player, digital-still camera and high-speed memory card applications with dual channel interleaving, a 15-times higher acceptable raw BER is achieved. Finally, for the SSD application with 8 channel interleaving, a 13-times higher acceptable raw BER is realized. Because the ratio of the user data to the parity bits is the same in each ECC codeword, no additional memory area is required. Note that the reliability of the SSD is improved after manufacturing, without cost penalty. Compared with the conventional ECC with the fixed large 32 KByte codeword, the proposed scheme achieves lower power consumption by introducing a "best-effort" type of operation. In the proposed scheme, during most of the lifetime of the SSD, a weak ECC with a shorter codeword such as 512 Byte (+parity), 1 KByte or 2 KByte is used, and 98% lower power consumption is realized. At the life-end of the SSD, a strong ECC with a 32 KByte codeword is used and highly reliable operation is achieved. The random read performance, estimated by the latency, is also discussed. The latency is below 1.5 ms for ECC codewords up to 32 KByte, below the 2 ms average latency of a 15,000 rpm HDD.
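
    The codeword-length effect can be reproduced from a simple binomial failure model: a codeword of n bits with t-bit correction fails when more than t bits flip. A sketch assuming independent bit errors (the target failure rate and bisection bounds are arbitrary choices):

      from math import comb

      def p_fail(n_bits, t, p):
          # P(more than t errors in n_bits at raw BER p)
          ok = sum(comb(n_bits, i) * p**i * (1 - p)**(n_bits - i) for i in range(t + 1))
          return 1.0 - ok

      def acceptable_raw_ber(n_bits, t, target=1e-15):
          # largest raw BER meeting the post-ECC target, by bisection
          lo, hi = 0.0, 0.5
          for _ in range(100):
              mid = 0.5 * (lo + hi)
              lo, hi = (mid, hi) if p_fail(n_bits, t, mid) < target else (lo, mid)
          return lo

      # parity ratio held constant, as in the paper: 4x the codeword, 4x the t
      for n_bytes, t in ((512, 8), (2048, 32)):
          print(f"{n_bytes:4d} B codeword: acceptable raw BER ~ "
                f"{acceptable_raw_ber(8 * n_bytes, t):.1e}")

    With these toy numbers the longer codeword tolerates roughly an order of magnitude higher raw BER, the same direction as the paper's 13- to 17-times figures.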

  5. Wireless smart meters and public acceptance: the environment, limited choices, and precautionary politics.

    PubMed

    Hess, David J; Coley, Jonathan S

    2014-08-01

    Wireless smart meters (WSMs) promise numerous environmental benefits, but they have been installed without full consideration of public acceptance issues. Although societal-implications research and regulatory policy have focused on privacy, security, and accuracy issues, our research indicates that health concerns have played an important role in the public policy debates that have emerged in California. Regulatory bodies do not recognize non-thermal health effects for non-ionizing electromagnetic radiation, but both homeowners and counter-experts have contested the official assurances that WSMs pose no health risks. Similarities and differences with the existing social science literature on mobile phone masts are discussed, as are the broader political implications of framing an alternative policy based on an opt-out choice. The research suggests conditions under which health-oriented precautionary politics can be particularly effective, namely, if there is a mandatory technology, a network of counter-experts, and a broader context of democratic contestation.

  6. Basis set limit and systematic errors in local-orbital based all-electron DFT

    NASA Astrophysics Data System (ADS)

    Blum, Volker; Behler, Jörg; Gehrke, Ralf; Reuter, Karsten; Scheffler, Matthias

    2006-03-01

    With the advent of efficient integration schemes,^1,2 numeric atom-centered orbitals (NAO's) are an attractive basis choice in practical density functional theory (DFT) calculations of nanostructured systems (surfaces, clusters, molecules). Though all-electron, the efficiency of practical implementations promises to be on par with the best plane-wave pseudopotential codes, while having a noticeably higher accuracy if required: Minimal-sized effective tight-binding like calculations and chemically accurate all-electron calculations are both possible within the same framework; non-periodic and periodic systems can be treated on equal footing; and the localized nature of the basis allows in principle for O(N)-like scaling. However, converging an observable with respect to the basis set is less straightforward than with competing systematic basis choices (e.g., plane waves). We here investigate the basis set limit of optimized NAO basis sets in all-electron calculations, using as examples small molecules and clusters (N2, Cu2, Cu4, Cu10). meV-level total energy convergence is possible using <=50 basis functions per atom in all cases. We also find a clear correlation between the errors which arise from underconverged basis sets, and the system geometry (interatomic distance). ^1 B. Delley, J. Chem. Phys. 92, 508 (1990), ^2 J.M. Soler et al., J. Phys.: Condens. Matter 14, 2745 (2002).

  7. Error Performance of Differentially Coherent Detection of Binary DPSK Data Transmission on the Hard-Limiting Satellite Channel.

    DTIC Science & Technology

    1979-08-01

    unequal power levels and noise correlations between the two adjacent time slot pulses. In practice, the power imbalance, or equivalently SNR imbalance… is a practical assumption since the noise is necessarily band limited in the system. Error probabilities are given as a function of uplink SNR with different levels of SNR imbalance and different downlink SNR as parameters. It is discovered that, while SNR imbalance affects error performance, the…

  8. 241-SY-101 DACS High hydrogen abort limit reduction (SCR 473) acceptance test report

    SciTech Connect

    ERMI, A.M.

    1999-09-09

    The capability of the 241-SY-101 Data Acquisition and Control System (DACS) computer system to provide proper control and monitoring of the 241-SY-101 underground storage tank hydrogen monitoring system utilizing the reduced hydrogen abort limit of 0.69% was systematically evaluated by the performance of ATP HNF-4927. This document reports the results of the ATP.

  9. Effect and acceptance of bluegill length limits in Nebraska natural lakes

    USGS Publications Warehouse

    Paukert, C.P.; Willis, D.W.; Gabelhouse, D.W.

    2002-01-01

    Bluegill Lepomis macrochirus populations in 18 Nebraska Sandhill lakes were evaluated to determine if a 200-mm minimum length limit would increase population size structure. Bluegills were trap-netted in May and June 1998 and 1999, and a creel survey was conducted during winter 1998-2001 on one or two lakes where bluegills had been tagged to determine angler exploitation. Thirty-three percent of anglers on one creeled lake were trophy anglers (i.e., fishing for large [≥250 mm] bluegills), whereas 67% were there to harvest fish to eat. Exploitation was always less than 10% and the total annual mortality averaged 40% across all 18 lakes. The time to reach 200 mm ranged from 4.3 to 8.3 years. The relative stock density of preferred-length fish increased an average of 2.2 units in all 18 lakes with a 10% exploitation rate. However, yield declined 39% and the number harvested declined 62%. Bluegills would need to reach 200 mm in 4.2 years to ensure no reduction in yield at 10% exploitation. Both yield and size structure were higher with a 200-mm minimum length limit (relative to having no length limit) only in populations with the lowest natural mortality and at exploitation of 30% or more. Although 100% (N = 39) of anglers surveyed said they would favor a 200-mm minimum length limit to improve bluegill size structure, anglers would have to sacrifice harvest to achieve this goal. While a 200-mm minimum length limit did minimally increase size structure at current levels of exploitation across all 18 bluegill populations, the populations with the lowest natural mortality and fastest growth provided the highest increase in size structure with the lowest reduction in yield and number harvested.

  10. Sampling hazelnuts for aflatoxin: effect of sample size and accept/reject limit on reducing the risk of misclassifying lots.

    PubMed

    Ozay, Guner; Seyhan, Ferda; Yilmaz, Aysun; Whitaker, Thomas B; Slate, Andrew B; Giesbrecht, Francis G

    2007-01-01

    About 100 countries have established regulatory limits for aflatoxin in food and feeds. Because these limits vary widely among regulating countries, the Codex Committee on Food Additives and Contaminants began work in 2004 to harmonize aflatoxin limits and sampling plans for aflatoxin in almonds, pistachios, hazelnuts, and Brazil nuts. Studies were developed to measure the uncertainty and distribution among replicated sample aflatoxin test results taken from aflatoxin-contaminated treenut lots. The uncertainty and distribution information is used to develop a model that can evaluate the performance (risk of misclassifying lots) of aflatoxin sampling plan designs for treenuts. Once the performance of aflatoxin sampling plans can be predicted, they can be designed to reduce the risks of misclassifying lots traded in either the domestic or export markets. A method was developed to evaluate the performance of sampling plans designed to detect aflatoxin in hazelnuts lots. Twenty hazelnut lots with varying levels of contamination were sampled according to an experimental protocol where 16 test samples were taken from each lot. The observed aflatoxin distribution among the 16 aflatoxin sample test results was compared to lognormal, compound gamma, and negative binomial distributions. The negative binomial distribution was selected to model aflatoxin distribution among sample test results because it gave acceptable fits to observed distributions among sample test results taken from a wide range of lot concentrations. Using the negative binomial distribution, computer models were developed to calculate operating characteristic curves for specific aflatoxin sampling plan designs. The effect of sample size and accept/reject limits on the chances of rejecting good lots (sellers' risk) and accepting bad lots (buyers' risk) was demonstrated for various sampling plan designs.
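
    The heart of such an evaluation is the operating characteristic (OC) curve: the probability of accepting a lot as a function of its true concentration under the fitted distribution. A compact sketch (the dispersion, accept/reject limit, and concentrations are placeholders, not the study's fitted values; SciPy assumed):

      from scipy.stats import nbinom

      def p_accept(lot_mean, accept_limit, shape=0.5):
          # P(sample test result <= accept limit) when test results follow
          # a negative binomial with the given mean and dispersion, mapped
          # to scipy's (n, p) parameterization
          p = shape / (shape + lot_mean)
          return nbinom.cdf(accept_limit, shape, p)

      for m in (1, 2, 4, 8, 16, 32):        # true lot concentration (ng/g)
          print(f"lot mean {m:2d} ng/g  ->  P(accept) = {p_accept(m, 4):.3f}")

    Buyer's and seller's risks read directly off such a curve: bad lots accepted with nonzero probability are the buyer's risk, good lots rejected the seller's.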

  11. Predicting tool operator capacity to react against torque within acceptable handle deflection limits in automotive assembly.

    PubMed

    Radwin, Robert G; Chourasia, Amrish; Fronczak, Frank J; Subedi, Yashpal; Howery, Robert; Yen, Thomas Y; Sesto, Mary E; Irwin, Curtis B

    2016-05-01

The proportion of tool operators capable of maintaining published psychophysically derived threaded-fastener tool handle deflection limits was predicted using a biodynamic tool operator model, interacting with the tool, task and workstation. Tool parameters, including geometry, speed and torque, were obtained from the specifications for 35 tools used in an auto assembly plant. Tool mass moments of inertia were measured for these tools using a novel device that engages the tool in a rotating system of known inertia. Task parameters, including fastener target torque and joint properties (soft, medium or hard), were ascertained from the vehicle design specifications. Workstation parameters, including vertical and horizontal distances from the operator, were measured using a laser rangefinder for 69 tool installations in the plant. These parameters were entered into the model and tool handle deflection was predicted for each job. While handle deflection for most jobs did not exceed the capacity of 75% of females and 99% of males, six jobs exceeded the deflection criterion. Those tool installations were examined, and modifications in tool speed and operator position brought those jobs within the deflection limits, as predicted by the model. We conclude that biodynamic tool operator models may be useful for identifying stressful tool installations and interventions that bring them within the capacity of most operators.

  12. Pluribus - Exploring the Limits of Error Correction Using a Suffix Tree.

    PubMed

    Savel, Daniel; LaFramboise, Thomas; Grama, Ananth; Koyuturk, Mehmet

    2016-06-29

Next generation sequencing technologies enable efficient and cost-effective genome sequencing. However, sequencing errors increase the complexity of the de novo assembly process, and reduce the quality of the assembled sequences. Many error correction techniques utilizing substring frequencies have been developed to mitigate this effect. In this paper, we present a novel and effective method called PLURIBUS for correcting sequencing errors using a generalized suffix trie. PLURIBUS utilizes multiple manifestations of an error in the trie to accurately identify errors and suggest corrections. We show that PLURIBUS produces the fewest false positives across a diverse set of real sequencing datasets when compared to other methods. Furthermore, PLURIBUS can be used in conjunction with other contemporary error correction methods to achieve higher levels of accuracy than either tool alone. These increases in error correction accuracy are also realized in the quality of the contigs that are generated during assembly. We explore the behavior of PLURIBUS in depth to explain the observed improvement in accuracy and assembly performance. PLURIBUS is freely available at http://compbio.
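
    The substring-frequency idea that such correctors build on can be sketched in a few lines. The Python sketch below flags read positions covered only by rare k-mers; it is a bare-bones stand-in, not PLURIBUS's algorithm, which works on a generalized suffix trie with variable-length contexts and also proposes corrections.

    ```python
    from collections import Counter

    def rare_kmer_positions(reads, k=5, min_count=2):
        """Flag (read, offset) pairs whose k-mer occurs fewer than
        min_count times across all reads: candidate sequencing errors."""
        counts = Counter(read[i:i + k] for read in reads
                         for i in range(len(read) - k + 1))
        return [(r, i) for r, read in enumerate(reads)
                for i in range(len(read) - k + 1)
                if counts[read[i:i + k]] < min_count]

    reads = ["ACGTACGTAC", "ACGTACGTAC", "ACGTACGAAC"]  # last read carries an error
    print(rare_kmer_positions(reads))  # flags the k-mers spanning the mismatch
    ```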

  13. Estimation of measurement error in plasma HIV-1 RNA assays near their limit of quantification

    PubMed Central

    Wang, Lu; Brumme, Chanson; Wu, Lang; Montaner, Julio S. G.; Harrigan, P. Richard

    2017-01-01

Background Plasma HIV-1 RNA levels (pVLs), routinely used for clinical management, are influenced by measurement error (ME) due to physiologic and assay variation. Objective To assess the ME of the COBAS HIV-1 Ampliprep AMPLICOR MONITOR ultrasensitive assay version 1.5 and the COBAS Ampliprep Taqman HIV-1 assay versions 1.0 and 2.0 close to their lower limit of detection. Second, to examine whether there was any evidence that pVL measurements closest to the lower limit of quantification, where clinical decisions are made, were susceptible to a higher degree of random noise than the remaining range. Methods We analysed longitudinal pVL of treatment-naïve patients from British Columbia, Canada, during their first six months on treatment, for time periods when each assay was uniquely available: Period 1 (Amplicor): 08/03/2000–01/02/2008; Period 2 (Taqman v1.0): 07/01/2010–07/03/2012; Period 3 (Taqman v2.0): 08/03/2012–30/06/2014. ME was estimated via generalized additive mixed effects models, adjusting for several clinical and demographic variables and follow-up time. Results The ME associated with each assay was approximately 0.5 log10 copies/mL. The number of pVL measurements, at a given pVL value, was not randomly distributed; values ≤250 copies/mL were strongly systematically overrepresented in all assays, with the prevalence decreasing monotonically as the pVL increased. Model residuals for pVL ≤250 copies/mL were approximately three times higher than those for the higher range, and pVL measurements in this range could not be modelled effectively due to considerable random noise in the data. Conclusions Although the ME was stable across assays, there is a substantial increase in random noise in measuring pVL close to the lower level of detection. These findings have important clinical significance, especially in the range where key clinical decisions are made. Thus, pVL values ≤250 copies/mL should not be taken as the “truth” and repeat p

  14. A Complementary Note to 'A Lag-1 Smoother Approach to System-Error Estimation': The Intrinsic Limitations of Residual Diagnostics

    NASA Technical Reports Server (NTRS)

    Todling, Ricardo

    2015-01-01

Recently, this author studied an approach to the estimation of system error based on combining observation residuals derived from a sequential filter and a fixed lag-1 smoother. While extending the methodology to a variational formulation, experimenting with simple models and making sure consistency was found between the sequential and variational formulations, the limitations of the residual-based approach came clearly to the surface. This note uses the sequential assimilation application to simple nonlinear dynamics to highlight the issue. Only when some of the underlying error statistics are assumed known is it possible to estimate the unknown component. In general, when considerable uncertainties exist in the underlying statistics as a whole, attempts to obtain separate estimates of the various error covariances are bound to lead to misrepresentation of errors. The conclusions are particularly relevant to present-day attempts to estimate observation-error correlations from observation residual statistics. A brief illustration of the issue is also provided by comparing estimates of error correlations derived from a quasi-operational assimilation system and a corresponding Observing System Simulation Experiments framework.

  15. 5 CFR 1605.16 - Claims for correction of employing agency errors; time limitations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... months before it was discovered, the agency may exercise sound discretion in deciding whether to correct... a claim to correct any such error after that time, the agency may do so at its sound discretion. (c... employing agency provides the participant with good cause for requiring a longer period to decide the...

  16. 5 CFR 1605.16 - Claims for correction of employing agency errors; time limitations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... months before it was discovered, the agency may exercise sound discretion in deciding whether to correct... a claim to correct any such error after that time, the agency may do so at its sound discretion. (c... employing agency provides the participant with good cause for requiring a longer period to decide the...

  17. 5 CFR 1605.16 - Claims for correction of employing agency errors; time limitations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... months before it was discovered, the agency may exercise sound discretion in deciding whether to correct... a claim to correct any such error after that time, the agency may do so at its sound discretion. (c... employing agency provides the participant with good cause for requiring a longer period to decide the...

  18. Simultaneous inference for longitudinal data with detection limits and covariates measured with errors, with application to AIDS studies.

    PubMed

    Wu, Lang

    2004-06-15

    In AIDS studies such as HIV viral dynamics, statistical inference is often complicated because the viral load measurements may be subject to left censoring due to a detection limit and time-varying covariates such as CD4 counts may be measured with substantial errors. Mixed-effects models are often used to model the response and the covariate processes in these studies. We propose a unified approach which addresses the censoring and measurement errors simultaneously. We estimate the model parameters by a Monte-Carlo EM algorithm via the Gibbs sampler. A simulation study is conducted to compare the proposed method with the usual two-step method and a naive method. We find that the proposed method produces approximately unbiased estimates with more reliable standard errors. A real data set from an AIDS study is analysed using the proposed method.
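
    The censoring ingredient of such models can be illustrated with a Tobit-style likelihood: observed values contribute a density term, while values below the detection limit contribute a cumulative probability. The Python sketch below handles only that one ingredient; the paper's model additionally includes random effects and a covariate measurement-error component, fitted by Monte-Carlo EM with Gibbs sampling.

    ```python
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize

    def censored_negloglik(params, y, censored, limit):
        """Negative log-likelihood for normal data left-censored at `limit`."""
        mu, log_sigma = params
        sigma = np.exp(log_sigma)                     # keep sigma positive
        ll = norm.logpdf(y[~censored], mu, sigma).sum()
        ll += censored.sum() * norm.logcdf(limit, mu, sigma)
        return -ll

    rng = np.random.default_rng(0)
    true = rng.normal(2.5, 0.8, 200)                  # e.g. log10 viral loads
    limit = 1.7                                       # detection limit (log10 of 50)
    censored = true < limit
    y = np.where(censored, limit, true)
    fit = minimize(censored_negloglik, x0=[2.0, 0.0], args=(y, censored, limit))
    print(fit.x[0], np.exp(fit.x[1]))                 # near the true mean and SD
    ```

    By contrast, naively substituting the detection limit (or half of it) for censored values typically biases both the mean and the variance, which is what motivates likelihood-based handling of the censoring.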

  19. Double error shrinkage method for identifying protein binding sites observed by tiling arrays with limited replication

    PubMed Central

    Kim, Youngchul; Bekiranov, Stefan; Lee, Jae K.; Park, Taesung

    2009-01-01

Motivation: ChIP–chip has been widely used for various genome-wide biological investigations. Given the small number of replicates (typically two to three) per biological sample, methods of analysis that control the variance are desirable but in short supply. We propose a double error shrinkage (DES) method using moving average statistics based on local-pooled error estimates, which effectively controls both the heterogeneous error variances and the correlation structures of an extremely large number of individual probes on tiling arrays. Results: Applying DES to a ChIP–chip tiling array study for discovering genome-wide protein-binding sites, we identified 8400 target regions that include highly likely TFIID binding sites. About 33% of these matched known transcription start sites in the DBTSS library, while many of the other newly identified sites are likely to be real binding sites given the high positive predictive value of DES. We also showed the superior performance of DES compared with other commonly used methods for detecting actual protein binding sites. Contact: tspark@snu.ac.kr; jaeklee@virginia.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19667080
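
    A minimal version of the local pooling behind DES can be sketched as a moving-average shrinkage of per-probe variances. The window width and shrinkage weight below are placeholders; DES derives its moving-average statistics and error estimates differently, so this conveys only the flavor of the approach.

    ```python
    import numpy as np

    def shrunken_variances(probe_reps, window=99, weight=0.5):
        """Shrink each probe's replicate variance (noisy with 2-3 reps)
        toward a locally pooled estimate from neighboring probes along
        the tiling array."""
        raw = probe_reps.var(axis=1, ddof=1)            # per-probe variance
        kernel = np.ones(window) / window
        pooled = np.convolve(raw, kernel, mode="same")  # local pooling
        return weight * raw + (1 - weight) * pooled

    rng = np.random.default_rng(4)
    probe_reps = rng.normal(0, 1, (10000, 3))           # 10k probes, 3 replicates
    print(shrunken_variances(probe_reps)[:5])
    ```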

  20. Evaluation and Acceptability of a Simplified Test of Visual Function at Birth in a Limited-Resource Setting

    PubMed Central

    Carrara, Verena I.; Darakomon, Mue Chae; Thin, Nant War War; Paw, Naw Ta Kaw; Wah, Naw; Wah, Hser Gay; Helen, Naw; Keereecharoen, Suporn; Paw, Naw Ta Mlar; Jittamala, Podjanee; Nosten, François H.; Ricci, Daniela; McGready, Rose

    2016-01-01

Neurological examination, including visual fixation and tracking of a target, is routinely performed in the Shoklo Malaria Research Unit postnatal care units on the Thailand-Myanmar border. We aimed to evaluate a simple visual newborn test developed in Italy and performed by non-specialized personnel working in neonatal care units. An intensive training of local health staff in Thailand was conducted prior to performing assessments at 24, 48 and 72 hours of life in healthy, low-risk term singletons. The 48- and 72-hour results were then compared with values obtained in Italy. Parents and staff administering the test reported on acceptability. One hundred and seventy-nine newborns participated in the study between June 2011 and October 2012. The test was rapidly completed if the infant remained in an optimal behavioral state (7 ± 2 minutes), but the test duration increased significantly (12 ± 4 minutes, p < 0.001) if the infant's behavior changed. Infants were able to fix a target and to discriminate a colored face at 24 hours of life. Horizontal tracking of a target was achieved by 96% (152/159) of the infants at 48 hours. Circular tracking, stripe discrimination and attention to distance significantly improved between each 24-hour test period. The test was easily performed by non-specialized local staff and well accepted by the parents. Healthy term singletons in this limited-resource setting have a visual response similar to that of gestational-age-matched newborns in Italy. It is possible to use these results as a reference set of values for the visual assessment of Karen and Burmese infants in the first 72 hours of life. The utility of the 24-hour test should be pursued.

  1. X-ray optics metrology limited by random noise, instrumental drifts, and systematic errors

    SciTech Connect

    Yashchuk, Valeriy V.; Anderson, Erik H.; Barber, Samuel K.; Cambie, Rossana; Celestre, Richard; Conley, Raymond; Goldberg, Kenneth A.; McKinney, Wayne R.; Morrison, Gregory; Takacs, Peter Z.; Voronov, Dmitriy L.; Yuan, Sheng; Padmore, Howard A.

    2010-07-09

Continuous, large-scale efforts to improve and develop third- and fourth-generation synchrotron radiation light sources for unprecedented high-brightness, low emittance, and coherent x-ray beams demand diffracting and reflecting x-ray optics suitable for micro- and nano-focusing, brightness preservation, and super-high resolution. One of the major impediments to the development of x-ray optics with the required beamline performance comes from the inadequate present level of optical and at-wavelength metrology and insufficient integration of the metrology into the fabrication process and into beamlines. Based on our experience at the ALS Optical Metrology Laboratory, we review the experimental methods and techniques that allow us to mitigate significant optical metrology problems related to random, systematic, and drift errors with super-high-quality x-ray optics. Measurement errors below 0.2 µrad have become routine. We present recent results from the ALS of temperature-stabilized nano-focusing optics and dedicated at-wavelength metrology. The international effort to develop a next-generation Optical Slope Measuring System (OSMS) to address these problems is also discussed. Finally, we analyze the remaining obstacles to further improvement of beamline x-ray optics and dedicated metrology, and highlight the ways we see to overcome the problems.

  2. Technique Errors and Limiting Factors in Laser Ranging to Geodetic Satellites

    NASA Astrophysics Data System (ADS)

    Appleby, G. M.; Luceri, V.; Mueller, H.; Noll, C. E.; Otsubo, T.; Wilkinson, M.

    2012-12-01

    The tracking stations of the International Laser Ranging Service (ILRS) global network provide to the Data Centres a steady stream of very precise laser range normal points to the primary geodetic spherical satellites LAGEOS (-1 and -2) and Etalon (-1 and -2). Analysis of these observations to determine instantaneous site coordinates and Earth orientation parameters provides a major contribution to ongoing international efforts to define a precise terrestrial reference frame, which itself supports research into geophysical processes at the few mm level of precision. For example, the latest realization of the reference frame, ITRF2008, used weekly laser range solutions from 1983 to 2009, the origin of the Frame being determined solely by the SLR technique. However, in the ITRF2008 publication, Altamimi et al (2011, Journal of Geodesy) point out that further improvement in the ITRF is partly dependent upon improving an understanding of sources of technique error. In this study we look at SLR station hardware configuration that has been subject to major improvements over the last four decades, at models that strive to provide accurate translations of the laser range observations to the centres of mass of the small geodetic satellites and at the considerable body of work that has been carried out via orbital analyses to determine range corrections for some of the tracking stations. Through this study, with specific examples, we start to put together an inventory of system-dependent technique errors that will be important information for SLR re-analysis towards the next realization of the ITRF.

  3. Technical Errors May Affect Accuracy of Torque Limiter in Locking Plate Osteosynthesis.

    PubMed

    Savin, David D; Lee, Simon; Bohnenkamp, Frank C; Pastor, Andrew; Garapati, Rajeev; Goldberg, Benjamin A

    2016-01-01

    In locking plate osteosynthesis, proper surgical technique is crucial in reducing potential pitfalls, and use of a torque limiter makes it possible to control insertion torque. We conducted a study of the ways in which different techniques can alter the accuracy of torque limiters. We tested 22 torque limiters (1.5 Nm) for accuracy using hand and power tools under different rotational scenarios: hand power at low and high velocity and drill power at low and high velocity. We recorded the maximum torque reached after each torque-limiting event. Use of torque limiters under hand power at low velocity and high velocity resulted in significantly (P < .0001) different mean (SD) measurements: 1.49 (0.15) Nm and 3.73 (0.79) Nm. Use under drill power at controlled low velocity and at high velocity also resulted in significantly (P < .0001) different mean (SD) measurements: 1.47 (0.14) Nm and 5.37 (0.90) Nm. Maximum single measurement obtained was 9.0 Nm using drill power at high velocity. Locking screw insertion with improper technique may result in higher than expected torque and subsequent complications. For torque limiters, the most reliable technique involves hand power at slow velocity or drill power with careful control of insertion speed until 1 torque-limiting event occurs.

  4. Application of thresholds of potential concern and limits of acceptable change in the condition assessment of a significant wetland.

    PubMed

    Rogers, Kerrylee; Saintilan, Neil; Colloff, Matthew J; Wen, Li

    2013-10-01

We propose a framework in which thresholds of potential concern (TPCs) and limits of acceptable change (LACs) are used in concert in the assessment of wetland condition and vulnerability and apply the framework in a case study. The lower Murrumbidgee River floodplain (the 'Lowbidgee') is one of the most ecologically important wetlands in Australia and the focus of intense management intervention by State and Federal government agencies. We used a targeted management stakeholder workshop to identify key values that contribute to the ecological significance of the Lowbidgee floodplain, and identified LACs that, if crossed, would signify the loss of significance. We then used conceptual models linking the condition of these values (wetland vegetation communities, waterbirds, fish species and the endangered southern bell frog) to measurable threat indicators, for which we defined a management goal and a TPC. We applied this framework to data collected across 70 wetland 'storages', or eco-hydrological units, at the peak of a prolonged drought (2008) and following extensive re-flooding (2010). At the suggestion of water and wetland managers, we neither aggregated nor integrated the indices but reported them separately in a series of choropleth maps. The resulting assessment clearly identified the effect of rewetting in restoring indicators within TPC in most cases, for most storages. The scale of assessment was useful in informing the targeted and timely management intervention and provided a context for retaining and utilising monitoring information in an adaptive management context.

  5. Bit Error Rate Performance Limitations Due to Raman Amplifier Induced Crosstalk in a WDM Transmission System

    NASA Astrophysics Data System (ADS)

    Tithi, F. H.; Majumder, S. P.

    2017-03-01

Analysis is carried out for a single-span wavelength division multiplexing (WDM) transmission system with distributed Raman amplification to find the effect of amplifier-induced crosstalk on the bit error rate (BER) for different system parameters. The results are evaluated in terms of the crosstalk power induced in a WDM channel due to Raman amplification, the optical signal to crosstalk ratio (OSCR), and the BER at any distance for different pump powers and numbers of WDM channels. The results show that the WDM system suffers a crosstalk-induced power penalty that grows with pump power, channel separation, and the number of WDM channels. At a BER of 10^-9, a pump power of 20 mW, and a span length of 180 km, the power penalty is 8.7 dB for N = 32 WDM channels and 10.5 dB for N = 64, and it increases further at higher pump powers. Analytical results are validated by simulation.
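
    As a rough companion to these numbers, the standard Gaussian-noise approximation links a signal-to-crosstalk ratio to a BER via the Q-factor. The snippet below assumes Q scales as the square root of the linear OSCR, a textbook simplification for illustration rather than the paper's analytical model.

    ```python
    from math import erfc, sqrt

    def ber_from_q(q):
        # Standard Gaussian-noise result: BER = 0.5 * erfc(Q / sqrt(2))
        return 0.5 * erfc(q / sqrt(2))

    for oscr_db in [12, 15, 18, 21]:
        oscr_linear = 10 ** (oscr_db / 10)
        q = sqrt(oscr_linear)           # simplifying assumption, not exact
        print(f"OSCR {oscr_db} dB -> BER {ber_from_q(q):.1e}")
    ```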

  6. Water-balance uncertainty in Honduras: a limits-of-acceptability approach to model evaluation using a time-variant rating curve

    NASA Astrophysics Data System (ADS)

    Westerberg, I.; Guerrero, J.-L.; Beven, K.; Seibert, J.; Halldin, S.; Lundin, L.-C.; Xu, C.-Y.

    2009-04-01

The climate of Central America is highly variable both spatially and temporally; extreme events like floods and droughts are recurrent phenomena posing great challenges to regional water-resources management. Scarce and low-quality hydro-meteorological data complicate hydrological modelling, and few previous studies have addressed the water balance in Honduras. In the alluvial Choluteca River, the river bed changes over time as fill and scour occur in the channel, leading to a fast-changing relation between stage and discharge and difficulties in deriving consistent rating curves. In this application of a four-parameter water-balance model, a limits-of-acceptability approach to model evaluation was used within the Generalized Likelihood Uncertainty Estimation (GLUE) framework. The limits of acceptability were determined for discharge alone for each time step, and ideally a simulated result should always be contained within the limits. A moving-window weighted fuzzy regression of the ratings, based on estimated uncertainties in the rating-curve data, was used to derive the limits. This provided an objective way to determine the limits of acceptability and handle the non-stationarity of the rating curves. The model was then applied within GLUE and evaluated using the derived limits. Preliminary results show that the best simulations are within the limits 75-80% of the time, indicating that precipitation data and other uncertainties like model structure also have a significant effect on predictability.
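
    The acceptance check at the heart of this approach reduces to a simple per-time-step test. A minimal sketch, with synthetic data standing in for the fuzzy-regression limits derived in the study:

    ```python
    import numpy as np

    def within_limits_score(sim, lower, upper):
        """Fraction of time steps where simulated discharge lies inside the
        observation-derived limits of acceptability; in GLUE, parameter sets
        falling short of a chosen threshold are rejected as non-behavioural."""
        return ((sim >= lower) & (sim <= upper)).mean()

    rng = np.random.default_rng(1)
    obs = 10 + 5 * np.sin(np.linspace(0, 6, 365))     # synthetic daily discharge
    lower, upper = 0.8 * obs, 1.2 * obs               # stand-in acceptability limits
    sim = obs * rng.lognormal(0, 0.15, obs.size)      # one model realization
    print(f"{within_limits_score(sim, lower, upper):.0%} of days inside limits")
    ```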

  7. Analysis and mitigation of systematic errors in spectral shearing interferometry of pulses approaching the single-cycle limit [Invited

    SciTech Connect

    Birge, Jonathan R.; Kaertner, Franz X.

    2008-06-15

    We derive an analytical approximation for the measured pulse width error in spectral shearing methods, such as spectral phase interferometry for direct electric-field reconstruction (SPIDER), caused by an anomalous delay between the two sheared pulse components. This analysis suggests that, as pulses approach the single-cycle limit, the resulting requirements on the calibration and stability of this delay become significant, requiring precision orders of magnitude higher than the scale of a wavelength. This is demonstrated by numerical simulations of SPIDER pulse reconstruction using actual data from a sub-two-cycle laser. We briefly propose methods to minimize the effects of this sensitivity in SPIDER and review variants of spectral shearing that attempt to avoid this difficulty.

  8. 50 CFR 648.53 - Acceptable biological catch (ABC), annual catch limits (ACL), annual catch targets (ACT), DAS...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... divided as sub-ACLs between limited access vessels, limited access vessels that are fishing under a LAGC... adjustment. (i) The limited access fishery sub-ACLs for fishing years 2014 and 2015 are: (A) 2014: 18,885 mt...). (i) The ACLs for fishing years 2014 and 2015 for LAGC IFQ vessels without a limited access...

  9. 50 CFR 648.53 - Acceptable biological catch (ABC), annual catch limits (ACL), annual catch targets (ACT), DAS...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... divided as sub-ACLs between limited access vessels, limited access vessels that are fishing under a LAGC... adjustment. (i) The limited access fishery sub-ACLs for fishing years 2013 and 2014 are: (A) 2013: 19,093 mt... paragraph (a). (i) The ACLs for fishing years 2013 and 2014 for LAGC IFQ vessels without a limited...

  10. 50 CFR 648.53 - Acceptable biological catch (ABC), annual catch limits (ACL), annual catch targets (ACT), DAS...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... catch limits (ACL), annual catch targets (ACT), DAS allocations, and individual fishing quotas (IFQ... limits (ACL), annual catch targets (ACT), DAS allocations, and individual fishing quotas (IFQ). (a... limited access scallop fishery shall be allocated 94.5 percent of the ACL specified in paragraph (a)(1)...

  11. 50 CFR 648.53 - Acceptable biological catch (ABC), annual catch limits (ACL), annual catch targets (ACT), DAS...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... divided as sub-ACLs between limited access vessels, limited access vessels that are fishing under a... limited access fishery sub-ACLs for fishing years 2011 through 2013 are: (A) 2011: 24,954 mt. (B) 2012: 26... catch, observer set-aside, and research set-aside, as specified in this paragraph (a). The LAGC ACLs...

  12. Achieving the Complete-Basis Limit in Large Molecular Clusters: Computationally Efficient Procedures to Eliminate Basis-Set Superposition Error

    NASA Astrophysics Data System (ADS)

    Richard, Ryan M.; Herbert, John M.

    2013-06-01

Previous electronic structure studies that rely on fragmentation have been primarily interested in those methods' ability to replicate the supersystem energy (or a related energy difference), without regard to whether the supersystem results themselves replicate experiment or high-accuracy benchmarks. Here we focus on replicating accurate ab initio benchmarks that are suitable for comparison to experimental data. In doing so, it becomes imperative that we correct our methods for basis-set superposition error (BSSE) in a computationally feasible way. This criterion leads us to develop a new method for BSSE correction, which we term the many-body counterpoise correction, or MBn for short. MBn is truncated at order n, in much the same manner as a normal many-body expansion, leading to a decrease in computational time. Furthermore, its formulation in terms of fragments makes it especially suitable for use with pre-existing fragment codes. A secondary focus of this study is assessing fragment methods' ability to extrapolate to the complete basis set (CBS) limit as well as to compute approximate triples corrections. Ultimately, from analysis of the (H_2O)_6 and (H_2O)_{10}F^- systems, it is concluded that with large enough basis sets (triple- or quadruple-zeta) fragment-based methods can replicate high-level benchmarks in a fraction of the time.
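
    For context, MBn generalizes the standard two-fragment (Boys-Bernardi) counterpoise correction, in which each monomer is recomputed in the full dimer basis so that the superposition error cancels; the n-body truncated form itself is not reproduced here.

    ```latex
    % Boys--Bernardi counterpoise-corrected interaction energy of a dimer AB:
    % superscripts denote the basis set used, subscripts the system computed.
    \[
    E_{\mathrm{int}}^{\mathrm{CP}} = E_{AB}^{AB} - E_{A}^{AB} - E_{B}^{AB}
    \]
    ```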

  13. Effect of model error on precipitation forecasts in the high-resolution limited area ensemble prediction system of the Korea Meteorological Administration

    NASA Astrophysics Data System (ADS)

    Kim, SeHyun; Kim, Hyun Mee

    2015-04-01

In numerical weather prediction using convective-scale model resolution, forecast uncertainties are caused by initial condition error, boundary condition error, and model error. Because convective-scale forecasts are influenced by subgrid-scale processes that cannot be resolved easily, the model error becomes more important than the initial and boundary condition errors. To account for the model error, multi-model and multi-physics methods use several models and physics schemes, while stochastic physics methods use random numbers to create a noise term in the model equations (e.g., Stochastic Perturbed Parameterization Tendency (SPPT), Stochastic Kinetic Energy Backscatter (SKEB), Stochastic Convective Vorticity (SCV), and Random Parameters (RP)). In this study, the RP method was used to account for the model error in the high-resolution limited area ensemble prediction system (EPS) of the Korea Meteorological Administration (KMA). The EPS has 12 ensemble members with 3 km horizontal resolution which generate 48 h forecasts. The initial and boundary conditions were provided by the global EPS of the KMA. The RP method was applied to the microphysics and boundary layer schemes, and the ensemble forecasts using RP were compared with those without RP during July 2013. Both the Root Mean Square Error (RMSE) and the spread of 10-m wind, verified against surface Automatic Weather System (AWS) observations, decreased when using RP. However, for 1-hour accumulated precipitation, the spread increased with RP, and the Equitable Threat Score (ETS) differed between rainfall events.

  14. Personal digital assistants to collect tuberculosis bacteriology data in Peru reduce delays, errors, and workload, and are acceptable to users: cluster randomized controlled trial

    PubMed Central

    Blaya, Joaquín A.; Cohen, Ted; Rodríguez, Pablo; Kim, Jihoon; Fraser, Hamish S.F.

    2009-01-01

    Summary Objectives To evaluate the effectiveness of a personal digital assistant (PDA)-based system for collecting tuberculosis test results and to compare this new system to the previous paper-based system. The PDA- and paper-based systems were evaluated based on processing times, frequency of errors, and number of work-hours expended by data collectors. Methods We conducted a cluster randomized controlled trial in 93 health establishments in Peru. Baseline data were collected for 19 months. Districts (n = 4) were then randomly assigned to intervention (PDA) or control (paper) groups, and further data were collected for 6 months. Comparisons were made between intervention and control districts and within-districts before and after the introduction of the intervention. Results The PDA-based system had a significant effect on processing times (p < 0.001) and errors (p = 0.005). In the between-districts comparison, the median processing time for cultures was reduced from 23 to 8 days and for smears was reduced from 25 to 12 days. In that comparison, the proportion of cultures with delays >90 days was reduced from 9.2% to 0.1% and the number of errors was decreased by 57.1%. The intervention reduced the work-hours necessary to process results by 70% and was preferred by all users. Conclusions A well-designed PDA-based system to collect data from institutions over a large, resource-poor area can significantly reduce delays, errors, and person-hours spent processing data. PMID:19097925

  15. In vivo erythrocyte micronucleus assay III. Validation and regulatory acceptance of automated scoring and the use of rat peripheral blood reticulocytes, with discussion of non-hematopoietic target cells and a single dose-level limit test.

    PubMed

    Hayashi, Makoto; MacGregor, James T; Gatehouse, David G; Blakey, David H; Dertinger, Stephen D; Abramsson-Zetterberg, Lilianne; Krishna, Gopala; Morita, Takeshi; Russo, Antonella; Asano, Norihide; Suzuki, Hiroshi; Ohyama, Wakako; Gibson, Dave

    2007-02-03

    The in vivo micronucleus assay working group of the International Workshop on Genotoxicity Testing (IWGT) discussed new aspects in the in vivo micronucleus (MN) test, including the regulatory acceptance of data derived from automated scoring, especially with regard to the use of flow cytometry, the suitability of rat peripheral blood reticulocytes to serve as the principal cell population for analysis, the establishment of in vivo MN assays in tissues other than bone marrow and blood (for example liver, skin, colon, germ cells), and the biological relevance of the single-dose-level test. Our group members agreed that flow cytometric systems to detect induction of micronucleated immature erythrocytes have advantages based on the presented data, e.g., they give good reproducibility compared to manual scoring, are rapid, and require only small quantities of peripheral blood. Flow cytometric analysis of peripheral blood reticulocytes has the potential to allow monitoring of chromosome damage in rodents and also other species as part of routine toxicology studies. It appears that it will be applicable to humans as well, although in this case the possible confounding effects of splenic activity will need to be considered closely. Also, the consensus of the group was that any system that meets the validation criteria recommended by the IWGT (2000) should be acceptable. A number of different flow cytometric-based micronucleus assays have been developed, but at the present time the validation data are most extensive for the flow cytometric method using anti-CD71 fluorescent staining especially in terms of inter-laboratory collaborative data. Whichever method is chosen, it is desirable that each laboratory should determine the minimum sample size required to ensure that scoring error is maintained below the level of animal-to-animal variation. In the second IWGT, the potential to use rat peripheral blood reticulocytes as target cells for the micronucleus assay was discussed

  16. DWPF COAL-CARBON WASTE ACCEPTANCE CRITERIA LIMIT EVALUATION BASED ON EXPERIMENTAL WORK (TANK 48 IMPACT STUDY)

    SciTech Connect

    Lambert, D.; Choi, A.

    2010-10-15

    This report summarizes the results of both experimental and modeling studies performed using Sludge Batch 10 (SB10) simulants and FBSR product from Tank 48 simulant testing in order to develop higher levels of coal-carbon that can be managed by DWPF. Once the Fluidized Bed Steam Reforming (FBSR) process starts up for treatment of Tank 48 legacy waste, the FBSR product stream will contribute higher levels of coal-carbon in the sludge batch for processing at DWPF. Coal-carbon is added into the FBSR process as a reductant and some of it will be present in the FBSR product as unreacted coal. The FBSR product will be slurried in water, transferred to Tank Farm and will be combined with sludge and washed to produce the sludge batch that DWPF will process. The FBSR product is high in both water soluble sodium carbonate and unreacted coal-carbon. Most of the sodium carbonate is removed during washing but all of the coal-carbon will remain and become part of the DWPF sludge batch. A paper study was performed earlier to assess the impact of FBSR coal-carbon on the DWPF Chemical Processing Cell (CPC) operation and melter off-gas flammability by combining it with SB10-SB13. The results of the paper study are documented in Ref. 7 and the key findings included that SB10 would be the most difficult batch to process with the FBSR coal present and up to 5,000 mg/kg of coal-carbon could be fed to the melter without exceeding the off-gas flammability safety basis limits. In the present study, a bench-scale demonstration of the DWPF CPC processing was performed using SB10 simulants spiked with varying amounts of coal, and the resulting seven CPC products were fed to the DWPF melter cold cap and off-gas dynamics models to determine the maximum coal that can be processed through the melter without exceeding the off-gas flammability safety basis limits. Based on the results of these experimental and modeling studies, the presence of coal-carbon in the sludge feed to DWPF is found to have

  17. Error Analysis

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

Input data as well as the results of elementary operations have to be represented by machine numbers, the subset of real numbers which is used by the arithmetic unit of today's computers. Generally this generates rounding errors. This kind of numerical error can be avoided in principle by using arbitrary-precision arithmetic or symbolic algebra programs. But this is impractical in many cases due to the increase in computing time and memory requirements. Results from more complex operations like square roots or trigonometric functions can have even larger errors since series expansions have to be truncated and iterations accumulate the errors of the individual steps. In addition, the precision of input data from an experiment is limited. In this chapter we study the influence of numerical errors on the uncertainties of the calculated results and the stability of simple algorithms.
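
    A short Python illustration of the rounding and accumulation effects described here (any language with IEEE-754 doubles behaves the same way):

    ```python
    import math

    # Most decimal fractions are not machine numbers, so rounding appears
    # in even the simplest operations:
    print(0.1 + 0.2 == 0.3)            # False
    print(f"{0.1 + 0.2:.17g}")         # 0.30000000000000004

    # Errors accumulate over repeated operations ...
    total = 0.0
    for _ in range(10_000_000):
        total += 0.1
    print(total)                        # drifts visibly from 1000000.0

    # ... which compensated summation largely avoids:
    print(math.fsum([0.1] * 10_000_000))
    ```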

  18. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki

    1997-01-01

A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research of flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  19. 13 CFR 124.504 - What circumstances limit SBA's ability to accept a procurement for award as an 8(a) contract?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... offer and acceptance. The procuring activity competed a requirement among Participants prior to offering... is offered to the 8(a) BD program. (2) In determining whether the acceptance of a requirement would... requests in writing that SBA decline to accept the offer prior to SBA's acceptance of the requirement...

  20. Improvement of synchrotron radiation mirrors below the 0.1-arcsec rms slope error limit with the help of a long trace profiler

    NASA Astrophysics Data System (ADS)

    Lammert, Heiner; Senf, Friedmar; Berger, Marion

    1997-11-01

Traditional optical manufacturing methods employing both conventional and modern interferometric techniques enable one to measure surface deviations to high accuracy, e.g., up to λ/100 for flats (6 nm P-V). In synchrotron radiation applications the slope error is an important criterion for the quality of optical surfaces. In order to predict the performance of a synchrotron radiation mirror the slope errors of the surface must be known. Up to now, the highest achievable accuracy in the production of synchrotron radiation mirrors and in the measuring methods did not fall significantly below the 0.1 arcsec rms limit (spherical and flat surfaces). A long-trace profiler (LTP) is ideally suited for this task since it directly measures slope deviations with high precision. On the other hand, using an LTP becomes very sensitive to random and systematic errors at the limit of 0.1 arcsec. The main influence is the variation of the surrounding temperature in creating temporal and local temperature gradients at the instrument. At BESSY both temperature and vibrations are monitored at the most sensitive points of the LTP. In 1996 BESSY started a collaboration with a neighboring optical workshop combining traditional manufacturing technology with quasi-in-process high-precision LTP measurements. As a result of this mutual polishing and LTP measuring process, flat surfaces have been repeatedly produced with slope errors of 0.05 arcsec rms, e.g., 1 nm rms and 3 nm P-V (approximately λ/200).

  1. Limiter

    DOEpatents

    Cohen, S.A.; Hosea, J.C.; Timberlake, J.R.

    1984-10-19

A limiter with a specially contoured front face is provided. The front face of the limiter (the plasma-side face) is flat with a central indentation. In addition, the limiter shape is cylindrically symmetric so that the limiter can be rotated for greater heat distribution. This limiter shape accommodates the various power scrape-off distances λ_p, which depend on the parallel velocity, V_∥, of the impacting particles.

  2. Limiter

    DOEpatents

    Cohen, Samuel A.; Hosea, Joel C.; Timberlake, John R.

    1986-01-01

A limiter with a specially contoured front face accommodates the various power scrape-off distances λ_p, which depend on the parallel velocity, V_∥, of the impacting particles. The front face of the limiter (the plasma-side face) is flat with a central indentation. In addition, the limiter shape is cylindrically symmetric so that the limiter can be rotated for greater heat distribution.

  3. Pitfalls in Inversion and Interpretation of Continuous Resistivity Profiling Data: Effects of Resolution Limitations and Measurement Error

    NASA Astrophysics Data System (ADS)

    Lane, J. W.; Day-Lewis, F. D.; Loke, M. H.; White, E. A.

    2005-12-01

    Water-borne continuous resistivity profiling (CRP), also called marine or streaming resistivity, increasingly is used to support hydrogeophysical studies in freshwater and saltwater environments. CRP can provide resistivity tomograms for delineation of focused ground-water discharge, identification of sediment types, and mapping the near-shore freshwater/saltwater interface. Data collection, performed with a boat-towed electrode streamer, is commonly fast and relatively straightforward. In contrast, data processing and interpretation are potentially time consuming and subject to pitfalls. Data analysis is difficult due to the underdetermined nature of the tomographic inverse problem and the poorly understood resolution of tomograms, which is a function of the measurement physics, survey geometry, measurement error, and inverse problem parameterization and regularization. CRP data analysis in particular is complicated by noise in the data, sources of which include water leaking into the electrode cable, inefficient data collection geometry, and electrode obstruction by vegetation in the water column. Preliminary modeling has shown that, as in other types of geotomography, inversions of CRP data tend to overpredict the extent of and underpredict the magnitude of resistivity anomalies. Previous work also has shown that the water layer has a strong effect on the measured apparent resistivity values as it commonly has a much lower resistivity than the subsurface. Here we use synthetic examples and inverted field data sets to (1) assess the ability of CRP to resolve hydrogeophysical targets of interest for a range of water depths and salinities; and (2) examine the effects of CRP streamer noise on inverted resistivity sections. Our results show that inversion and interpretation of CRP data should be guided by hydrologic insight, available data for bathymetry and water layer resistivity, and a reliable model of measurement errors.

  4. Action errors, error management, and learning in organizations.

    PubMed

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  5. Quantification and correction of the error due to limited PIV resolution on the accuracy of non-intrusive spatial pressure measurement using a DNS channel flow database

    NASA Astrophysics Data System (ADS)

    Liu, Xiaofeng; Siddle-Mitchell, Seth

    2016-11-01

The effect of the subgrid-scale (SGS) stress due to limited PIV resolution on pressure measurement accuracy is quantified using data from a direct numerical simulation database of turbulent channel flow (JHTDB). A series of 2000 consecutive realizations of sample block data with 512×512×49 grid nodal points were selected and spatially filtered with a coarse 17×17×17 and a fine 5×5×5 box averaging, respectively, giving rise to corresponding PIV resolutions of roughly 62.6 and 18.4 times the viscous length scale. Comparison of the reconstructed pressure at different levels of pressure gradient approximation with the filtered pressure shows that the neglect of the viscous term leads to a small but noticeable change in the reconstructed pressure, especially in regions near the channel walls. By contrast, the neglect of the SGS stress results in a more significant increase in both the bias and the random errors, indicating the SGS term must be accounted for in PIV pressure measurement. Correction using similarity SGS modeling reduces the random error due to the omission of SGS stress from 114.5% of the filtered pressure r.m.s. fluctuation to 89.1% for the coarse PIV resolution, and from 66.5% to 35.9% for the fine PIV resolution, respectively, confirming the benefit of the error compensation method and the positive influence of increased PIV resolution on pressure measurement accuracy.
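
    The quantity at stake has a compact definition: the missing subgrid-scale stress is the difference between the filtered product of velocities and the product of filtered velocities. A sketch of that definition with a box filter, using synthetic data in place of the JHTDB channel-flow fields:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def sgs_stress_uv(u, v, box):
        """tau_uv = <u v> - <u><v>, with <.> a box filter of width `box`
        grid points (17 and 5 in the study's coarse and fine cases)."""
        return (uniform_filter(u * v, size=box)
                - uniform_filter(u, size=box) * uniform_filter(v, size=box))

    rng = np.random.default_rng(2)
    u, v = rng.standard_normal((2, 128, 128))   # stand-in velocity plane
    print(sgs_stress_uv(u, v, box=17).std())    # coarse "PIV" resolution
    print(sgs_stress_uv(u, v, box=5).std())     # fine "PIV" resolution
    ```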

  6. Reverse-polynomial dilution calibration methodology extends lower limit of quantification and reduces relative residual error in targeted peptide measurements in blood plasma.

    PubMed

    Yau, Yunki Y; Duo, Xizi; Leong, Rupert W L; Wasinger, Valerie C

    2015-02-01

Reverse-polynomial dilution techniques extend the Lower Limit of Quantification and reduce error (p = 0.005) in low-concentration plasma peptide assays and are broadly applicable for verification-phase Tier 2 multiplexed multiple reaction monitoring assay development within the FDA-National Cancer Institute (NCI) biomarker development pipeline.

  7. Dose error analysis for a scanned proton beam delivery system

    NASA Astrophysics Data System (ADS)

    Coutrakon, G.; Wang, N.; Miller, D. W.; Yang, Y.

    2010-12-01

All particle beam scanning systems are subject to dose delivery errors due to errors in position, energy and intensity of the delivered beam. In addition, finite scan speeds, beam spill non-uniformities, and delays in detector, detector electronics and magnet responses will all contribute errors in delivery. In this paper, we present dose errors for an 8 × 10 × 8 cm3 target of uniform water equivalent density with an 8 cm spread-out Bragg peak and a prescribed dose of 2 Gy. Lower doses are also analyzed and presented later in the paper. Beam energy errors and errors due to limitations of scanning system hardware have been included in the analysis. By using Gaussian-shaped pencil beams derived from measurements in the research room of the James M Slater Proton Treatment and Research Center at Loma Linda, CA and executing treatment simulations multiple times, statistical dose errors have been calculated in each 2.5 mm cubic voxel in the target. These errors were calculated by delivering multiple treatments to the same volume and calculating the rms variation in delivered dose at each voxel in the target. The variations in dose were the result of random beam delivery errors such as proton energy, spot position and intensity fluctuations. The results show that with reasonable assumptions of random beam delivery errors, the spot scanning technique yielded an rms dose error in each voxel less than 2% or 3% of the 2 Gy prescribed dose. These calculated errors are within acceptable clinical limits for radiation therapy.
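
    The voxel-wise error statistic described here is straightforward to compute once repeated deliveries are simulated. A sketch with synthetic perturbations standing in for the spot position, energy, and intensity errors modeled in the paper:

    ```python
    import numpy as np

    def per_voxel_rms_error(deliveries, prescribed=2.0):
        """RMS variation of delivered dose at each voxel across repeated
        simulated treatments, as a fraction of the prescribed dose."""
        return deliveries.std(axis=0, ddof=1) / prescribed

    rng = np.random.default_rng(3)
    # 50 simulated deliveries to a (32, 40, 32)-voxel target, nominally
    # 2 Gy everywhere; 3% Gaussian perturbations are a placeholder for
    # the physically derived delivery errors.
    deliveries = 2.0 + rng.normal(0.0, 0.06, size=(50, 32, 40, 32))
    frac = per_voxel_rms_error(deliveries)
    print(f"max voxel rms error: {frac.max():.1%} of prescription")
    ```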

  8. Programming Errors in APL.

    ERIC Educational Resources Information Center

    Kearsley, Greg P.

    This paper discusses and provides some preliminary data on errors in APL programming. Data were obtained by analyzing listings of 148 complete and partial APL sessions collected from student terminal rooms at the University of Alberta. Frequencies of errors for the various error messages are tabulated. The data, however, are limited because they…

  9. 13 CFR 124.504 - What circumstances limit SBA's ability to accept a procurement for award as an 8(a) contract?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... impact on an individual small business, SBA will consider all relevant factors. (i) In connection with a... impact on other small business programs, SBA will consider all relevant factors, including but not...) procedures. (c) Adverse impact. SBA has made a written determination that acceptance of the procurement for...

  10. Randomized Trial of a Computerized Touch Screen Decision Aid to Increase Acceptance of Colonoscopy Screening in an African American Population with Limited Literacy.

    PubMed

    Ruzek, Sheryl B; Bass, Sarah Bauerle; Greener, Judith; Wolak, Caitlin; Gordon, Thomas F

    2016-10-01

The goal of this study was to assess the effectiveness of a touch screen decision aid to increase acceptance of colonoscopy screening among African American patients with low literacy, developed and tailored using perceptual mapping methods grounded in Illness Self-Regulation and Information-Communication Theories. The pilot randomized controlled trial investigated the effects of a theory-based intervention on patients' acceptance of screening, including their perceptions of educational value, feelings about colonoscopy, likelihood to undergo screening, and decisional conflict about colonoscopy screening. Sixty-one African American patients with low literacy, aged 50-70 years, with no history of colonoscopy, were randomly assigned to receive a computerized touch screen decision aid (CDA; n = 33) or a literacy-appropriate print tool (PT; n = 28) immediately before a primary care appointment in an urban, university-affiliated general internal medicine clinic. Patients rated the CDA significantly higher than the PT on all indicators of acceptance, including the helpfulness of the information for making a screening decision, and reported positive feelings about colonoscopy, greater likelihood to be screened, and lower decisional conflict. Results showed that a touch screen decision tool is acceptable to African American patients with low literacy and, by increasing intent to screen, may increase rates of colonoscopy screening.

  11. Reduction of Maintenance Error Through Focused Interventions

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Rosekind, Mark R. (Technical Monitor)

    1997-01-01

    It is well known that a significant proportion of aviation accidents and incidents are tied to human error. In flight operations, research of operational errors has shown that so-called "pilot error" often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: to develop human factors interventions which are directly supported by reliable human error data, and to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  12. Estimating errors in cloud amount and cloud optical thickness due to limited spatial sampling using a satellite imager as a proxy for nadir-view sensors

    NASA Astrophysics Data System (ADS)

    Liu, Yinghui

    2015-07-01

Cloud climatologies from space-based active sensors have been used in climate and other studies without their uncertainties specified. This study quantifies the errors in monthly mean cloud amount and optical thickness due to the limited spatial sampling of space-based active sensors. Nadir-view observations from a satellite imager, the Moderate Resolution Imaging Spectroradiometer (MODIS), serve as a proxy for those active sensors, and observations within 10° of the sensor's nadir view serve as truth, for data from 2003 to 2013 in the Arctic. June-July monthly mean cloud amount and liquid water and ice cloud optical thickness from MODIS for both observations are calculated and compared. Results show that errors increase with decreasing sample number for monthly means in cloud amount and cloud optical thickness. The root-mean-square error of monthly mean cloud amount from nadir-view observations increases toward lower latitudes, with 0.7% (1.4%) at 80°N and 4.2% (11.2%) at 60°N using data from 2003 to 2013 (from 2012). For a 100 km resolution Equal-Area Scalable Earth Grid (EASE-Grid) cell with 1000 samples, the absolute differences between these two monthly mean cloud amounts are less than 6.5% (9.0%, 11.5%) with an 80% (90%, 95%) chance; the differences decrease to 4.0% (5.0%, 6.5%) with 5000 samples. For a 100 km resolution EASE-Grid cell with 1000 samples, the absolute differences between these two monthly mean cloud optical thicknesses are less than 2.7 (3.8) with a 90% chance for liquid water cloud (ice cloud); the differences decrease to 1.3 (1.0) with 5000 samples. The uncertainties in monthly mean cloud amount and optical thickness estimated in this study may provide useful information for applying cloud climatologies from active sensors in climate studies and suggest the need for future spaceborne active sensors with a wide swath.
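
    As a back-of-envelope check on the scale of these numbers, the standard error of a cloud fraction estimated from n independent cloudy/clear samples follows the binomial formula; the study's estimates from actual MODIS subsampling run larger because neighboring samples are spatially correlated.

    ```python
    import numpy as np

    def cloud_amount_se(p, n):
        """Binomial standard error, in percentage points, of a monthly mean
        cloud amount from n independent samples with true cloud fraction p."""
        return 100.0 * np.sqrt(p * (1.0 - p) / n)

    for n in (1000, 5000):
        print(n, f"{cloud_amount_se(0.7, n):.2f} percentage points")
    ```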

  13. On the validity of the basis set superposition error and complete basis set limit extrapolations for the binding energy of the formic acid dimer

    SciTech Connect

    Miliordos, Evangelos; Xantheas, Sotiris S.

    2015-03-07

We report the variation of the binding energy of the Formic Acid Dimer with the size of the basis set at the Coupled Cluster with iterative Singles, Doubles and perturbatively connected Triple replacements [CCSD(T)] level of theory, estimate the Complete Basis Set (CBS) limit, and examine the validity of the Basis Set Superposition Error (BSSE) correction for this quantity that was previously challenged by Kalescky, Kraka, and Cremer (KKC) [J. Chem. Phys. 140, 084315 (2014)]. Our results indicate that the BSSE correction, including terms that account for the substantial geometry change of the monomers due to the formation of two strong hydrogen bonds in the dimer, is indeed valid for obtaining accurate estimates for the binding energy of this system as it exhibits the expected decrease with increasing basis set size. We attribute the discrepancy between our current results and those of KKC to their use of a valence basis set in conjunction with the correlation of all electrons (i.e., including the 1s of C and O). We further show that the use of a core-valence set in conjunction with all electron correlation converges faster to the CBS limit, as the BSSE correction is less than half that of the valence electron/valence basis set case. The uncorrected and BSSE-corrected binding energies were found to produce the same (within 0.1 kcal/mol) CBS limits. We obtain CCSD(T)/CBS best estimates for D_e = −16.1 ± 0.1 kcal/mol and for D_0 = −14.3 ± 0.1 kcal/mol, the latter in excellent agreement with the experimental value of −14.22 ± 0.12 kcal/mol.

  14. Uncertainty quantification and error analysis

    SciTech Connect

    Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  15. UGV acceptance testing

    NASA Astrophysics Data System (ADS)

    Kramer, Jeffrey A.; Murphy, Robin R.

    2006-05-01

    With over 100 models of unmanned vehicles now available for military and civilian safety, security, or rescue applications, it is important for agencies to establish acceptance testing. However, there appear to be no general guidelines for what constitutes a reasonable acceptance test. This paper describes i) a preliminary method for acceptance testing by a customer of the mechanical and electrical components of an unmanned ground vehicle system, ii) how it has been applied to a man-packable micro-robot, and iii) the value of testing both to ensure that the customer has a workable system and to improve design. The test method automated the operation of the robot to repeatedly exercise all aspects and combinations of components on the robot for 6 hours. The acceptance testing process uncovered many failures consistent with those shown to occur in the field, showing that testing by the user does predict failures. The process also demonstrated that testing by the manufacturer can provide important design data that can be used to identify, diagnose, and prevent long-term problems. The structured testing environment also showed that sensor systems can be used to predict errors and changes in performance, as well as to uncover unmodeled behavior in subsystems.

  16. The Werner syndrome protein limits the error-prone 8-oxo-dG lesion bypass activity of human DNA polymerase kappa

    PubMed Central

    Maddukuri, Leena; Ketkar, Amit; Eddy, Sarah; Zafar, Maroof K.; Eoff, Robert L.

    2014-01-01

    Human DNA polymerase kappa (hpol κ) is the only Y-family member to preferentially insert dAMP opposite 7,8-dihydro-8-oxo-2′-deoxyguanosine (8-oxo-dG) during translesion DNA synthesis. We have studied the mechanism of action by which hpol κ activity is modulated by the Werner syndrome protein (WRN), a RecQ helicase known to influence repair of 8-oxo-dG. Here we show that WRN stimulates the 8-oxo-dG bypass activity of hpol κ in vitro by enhancing the correct base insertion opposite the lesion, as well as extension from dC:8-oxo-dG base pairs. Steady-state kinetic analysis reveals that WRN improves hpol κ-catalyzed dCMP insertion opposite 8-oxo-dG ∼10-fold and extension from dC:8-oxo-dG by 2.4-fold. Stimulation is primarily due to an increase in the rate constant for polymerization (kpol), as assessed by pre-steady-state kinetics, and it requires the RecQ C-terminal (RQC) domain. In support of the functional data, recombinant WRN and hpol κ were found to physically interact through the exo and RQC domains of WRN, and co-localization of WRN and hpol κ was observed in human cells treated with hydrogen peroxide. Thus, WRN limits the error-prone bypass of 8-oxo-dG by hpol κ, which could influence the sensitivity to oxidative damage that has previously been observed for Werner's syndrome cells. PMID:25294835

  17. The Werner syndrome protein limits the error-prone 8-oxo-dG lesion bypass activity of human DNA polymerase kappa.

    PubMed

    Maddukuri, Leena; Ketkar, Amit; Eddy, Sarah; Zafar, Maroof K; Eoff, Robert L

    2014-10-29

    Human DNA polymerase kappa (hpol κ) is the only Y-family member to preferentially insert dAMP opposite 7,8-dihydro-8-oxo-2'-deoxyguanosine (8-oxo-dG) during translesion DNA synthesis. We have studied the mechanism of action by which hpol κ activity is modulated by the Werner syndrome protein (WRN), a RecQ helicase known to influence repair of 8-oxo-dG. Here we show that WRN stimulates the 8-oxo-dG bypass activity of hpol κ in vitro by enhancing the correct base insertion opposite the lesion, as well as extension from dC:8-oxo-dG base pairs. Steady-state kinetic analysis reveals that WRN improves hpol κ-catalyzed dCMP insertion opposite 8-oxo-dG ∼10-fold and extension from dC:8-oxo-dG by 2.4-fold. Stimulation is primarily due to an increase in the rate constant for polymerization (kpol), as assessed by pre-steady-state kinetics, and it requires the RecQ C-terminal (RQC) domain. In support of the functional data, recombinant WRN and hpol κ were found to physically interact through the exo and RQC domains of WRN, and co-localization of WRN and hpol κ was observed in human cells treated with hydrogen peroxide. Thus, WRN limits the error-prone bypass of 8-oxo-dG by hpol κ, which could influence the sensitivity to oxidative damage that has previously been observed for Werner's syndrome cells.

  18. Limits of Expertise: Rethinking Pilot Error and the Causes of Airline Accidents. CRM/HF Conference, Held in Denver, Colorado on April 16-17, 2006

    NASA Technical Reports Server (NTRS)

    Dismukes, Key; Berman, Ben; Loukopoulos, Loukisa

    2007-01-01

    We reviewed NTSB reports of the 19 U.S. airline accidents between 1991 and 2000 attributed primarily to crew error, and asked: why might any airline crew in the situation of the accident crew, knowing only what they knew, be vulnerable? We can never know with certainty why an accident crew made specific errors, but we can determine why the population of pilots is vulnerable. The analysis considers the variability of expert performance as a function of the interplay of multiple factors.

  19. ALTIMETER ERRORS,

    DTIC Science & Technology

    CIVIL AVIATION, *ALTIMETERS, FLIGHT INSTRUMENTS, RELIABILITY, ERRORS, PERFORMANCE(ENGINEERING), BAROMETERS, BAROMETRIC PRESSURE, ATMOSPHERIC TEMPERATURE, ALTITUDE, CORRECTIONS, AVIATION SAFETY, USSR.

  20. Acceptance speech.

    PubMed

    Yusuf, C K

    1994-01-01

    I am proud and honored to accept this award on behalf of the Government of Bangladesh, and the millions of Bangladeshi children saved by oral rehydration solution. The Government of Bangladesh is grateful for this recognition of its commitment to international health and population research and cost-effective health care for all. The Government of Bangladesh has already made remarkable strides forward in the health and population sector, and this was recognized in UNICEF's 1993 "State of the World's Children". The national contraceptive prevalence rate, at 40%, is higher than that of many developed countries. It is appropriate that Bangladesh, where ORS was discovered, has the largest ORS production capacity in the world. It was remarkable that after the devastating cyclone in 1991, the country was able to produce enough ORS to meet the needs and remain self-sufficient. Similarly, Bangladesh has one of the most effective, flexible, and efficient diarrheal disease control and epidemic response programs in the world. Throughout the country, doctors have been trained in diarrheal disease management, and stores of ORS are maintained ready for any outbreak. Despite grim predictions after the 1991 cyclone and the 1993 floods, relatively few people died from diarrheal disease. This is indicative of the strength of the national program. I want to take this opportunity to acknowledge the contribution of ICDDR, B and the important role it plays in supporting the Government's efforts in the health and population sector. The partnership between the Government of Bangladesh and ICDDR, B has already borne great fruit, and I hope and believe that it will continue to do so for many years in the future. Thank you.

  1. Freeform solar concentrator with a highly asymmetric acceptance cone

    NASA Astrophysics Data System (ADS)

    Wheelwright, Brian; Angel, J. Roger P.; Coughenour, Blake; Hammer, Kimberly

    2014-10-01

    A solar concentrator with a highly asymmetric acceptance cone is investigated. Concentrating photovoltaic systems require dual-axis sun tracking to maintain nominal concentration throughout the day. In addition to collecting direct rays from the solar disk, which subtends ~0.53 degrees, concentrating optics must allow for in-field tracking errors due to mechanical misalignment of the module, wind loading, and control loop biases. The angular range over which the concentrator maintains >90% of on-axis throughput is defined as the optical acceptance angle. Concentrators with substantial rotational symmetry likewise exhibit rotationally symmetric acceptance angles. In the field, this is sometimes a poor match with azimuth-elevation trackers, which have inherently asymmetric tracking performance. Pedestal-mounted trackers with low torsional stiffness about the vertical axis have better elevation tracking than azimuthal tracking. Conversely, trackers which rotate on large-footprint circular tracks are often limited by elevation tracking performance. We show that a line-focus concentrator, composed of a parabolic trough primary reflector and freeform refractive secondary, can be tailored to have a highly asymmetric acceptance angle. The design is suitable for a tracker with excellent tracking accuracy in the elevation direction, and poor accuracy in the azimuthal direction. In the 1000X design given, when trough optical errors (2 mrad rms slope deviation) are accounted for, the azimuthal acceptance angle is +/- 1.65°, while the elevation acceptance angle is only +/- 0.29°. This acceptance angle does not include the angular width of the sun, which consumes nearly all of the elevation tolerance at this concentration level. By decreasing the average concentration, the elevation acceptance angle can be increased. This is well-suited for a pedestal alt-azimuth tracker with a low cost slew bearing (without anti-backlash features).

  2. Errors in general practice: development of an error classification and pilot study of a method for detecting errors

    PubMed Central

    Rubin, G; George, A; Chinn, D; Richardson, C

    2003-01-01

    Objective: To describe a classification of errors and to assess the feasibility and acceptability of a method for recording staff reported errors in general practice. Design: An iterative process in a pilot practice was used to develop a classification of errors. This was incorporated in an anonymous self-report form which was then used to collect information on errors during June 2002. The acceptability of the reporting process was assessed using a self-completion questionnaire. Setting: UK general practice. Participants: Ten general practices in the North East of England. Main outcome measures: Classification of errors, frequency of errors, error rates per 1000 appointments, acceptability of the process to participants. Results: 101 events were used to create an initial error classification. This contained six categories: prescriptions, communication, appointments, equipment, clinical care, and "other" errors. Subsequently, 940 errors were recorded in a single 2 week period from 10 practices, providing additional information. 42% (397/940) were related to prescriptions, although only 6% (22/397) of these were medication errors. Communication errors accounted for 30% (282/940) of errors and clinical errors 3% (24/940). The overall error rate was 75.6/1000 appointments (95% CI 71 to 80). The method of error reporting was found to be acceptable by 68% (36/53) of respondents with only 8% (4/53) finding the process threatening. Conclusion: We have developed a classification of errors and described a practical and acceptable method for reporting them that can be used as part of the process of risk management. Errors are common and, although all have the potential to lead to an adverse event, most are administrative. PMID:14645760
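
    The headline rate and interval in this abstract can be reproduced with a simple Poisson approximation. The appointment total below is inferred from the reported rate (940 errors at 75.6 per 1000 implies roughly 12,400 appointments) and is an assumption, as is the normal-approximation interval:

      import math

      errors = 940
      appointments = 12_434  # inferred from 75.6 errors per 1000 appointments

      rate = 1000 * errors / appointments
      se = 1000 * math.sqrt(errors) / appointments  # Poisson SE of the count
      lo, hi = rate - 1.96 * se, rate + 1.96 * se
      print(f"{rate:.1f} per 1000 appointments (95% CI {lo:.0f} to {hi:.0f})")

    This prints 75.6 per 1000 appointments (95% CI 71 to 80), matching the reported figures.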

  3. Acceptance of tinnitus: validation of the tinnitus acceptance questionnaire.

    PubMed

    Weise, Cornelia; Kleinstäuber, Maria; Hesser, Hugo; Westin, Vendela Zetterqvist; Andersson, Gerhard

    2013-01-01

    The concept of acceptance has recently received growing attention within tinnitus research due to the fact that tinnitus acceptance is one of the major targets of psychotherapeutic treatments. Accordingly, acceptance-based treatments will most likely be increasingly offered to tinnitus patients and assessments of acceptance-related behaviours will thus be needed. The current study investigated the factorial structure of the Tinnitus Acceptance Questionnaire (TAQ) and the role of tinnitus acceptance as a mediating link between sound perception (i.e. subjective loudness of tinnitus) and tinnitus distress. In total, 424 patients with chronic tinnitus completed the TAQ and validated measures of tinnitus distress, anxiety, and depression online. Confirmatory factor analysis provided support for a good fit of the data to the hypothesised bifactor model (root-mean-square error of approximation = .065; Comparative Fit Index = .974; Tucker-Lewis Index = .958; standardised root mean square residual = .032). In addition, mediation analysis, using a non-parametric joint coefficient approach, revealed that tinnitus-specific acceptance partially mediated the relation between subjective tinnitus loudness and tinnitus distress (path ab = 5.96; 95% CI: 4.49, 7.69). In a multiple mediator model, tinnitus acceptance had a significantly stronger indirect effect than anxiety. The results confirm the factorial structure of the TAQ and suggest the importance of a general acceptance factor that contributes substantial unique variance beyond that of the first-order factors, activity engagement and tinnitus suppression. Tinnitus acceptance as measured with the TAQ is proposed to be a key construct in tinnitus research and should be further implemented into treatment concepts to reduce tinnitus distress.

  4. Medication Errors

    MedlinePlus


  5. Development of an iterative reconstruction method to overcome 2D detector low resolution limitations in MLC leaf position error detection for 3D dose verification in IMRT.

    PubMed

    Visser, R; Godart, J; Wauben, D J L; Langendijk, J A; Van't Veld, A A; Korevaar, E W

    2016-05-21

    The objective of this study was to introduce a new iterative method to reconstruct multileaf collimator (MLC) positions based on low resolution ionization detector array measurements and to evaluate its error detection performance. The iterative reconstruction method consists of a fluence model, a detector model and an optimizer. Expected detector response was calculated using a radiotherapy treatment plan in combination with the fluence model and detector model. MLC leaf positions were reconstructed by minimizing differences between expected and measured detector response. The iterative reconstruction method was evaluated for an Elekta SLi with 10.0 mm MLC leaves in combination with the COMPASS system and the MatriXX Evolution (IBA Dosimetry) detector with a spacing of 7.62 mm. The detector was positioned in such a way that each leaf pair of the MLC was aligned with one row of ionization chambers. Known leaf displacements were introduced in various field geometries, ranging from -10.0 mm to 10.0 mm. Error detection performance was tested for MLC leaf position dependency relative to the detector position, gantry angle dependency, monitor unit dependency, and for ten clinical intensity modulated radiotherapy (IMRT) treatment beams. For one clinical head and neck IMRT treatment beam, the influence of the iterative reconstruction method on existing 3D dose reconstruction artifacts was evaluated. The described iterative reconstruction method was capable of individual MLC leaf position reconstruction with millimeter accuracy, independent of the relative detector position, within the range of clinically applied MUs for IMRT. Dose reconstruction artifacts in a clinical IMRT treatment beam were considerably reduced compared to the current dose verification procedure. The iterative reconstruction method allows high accuracy 3D dose verification by including actual MLC leaf positions reconstructed from low resolution 2D measurements.
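
    A toy one-dimensional illustration of the reconstruction idea: model the expected chamber signals as a function of a single leaf-edge position and recover that position by least squares, despite chamber spacing much coarser than the sought accuracy. The erf edge model, the synthetic geometry, and all names are illustrative assumptions, not the COMPASS implementation:

      import numpy as np
      from scipy.optimize import least_squares
      from scipy.special import erf

      chamber_x = np.arange(-49.53, 50.0, 7.62)  # chamber centres (mm), 7.62 mm pitch

      def expected_response(leaf_edge, sigma=3.0):
          """Expected signal per chamber for a field edge at leaf_edge (mm)."""
          return 0.5 * (1.0 + erf((chamber_x - leaf_edge) / (sigma * np.sqrt(2.0))))

      measured = expected_response(2.4)  # synthetic measurement: true edge at +2.4 mm
      fit = least_squares(lambda p: expected_response(p[0]) - measured, x0=[0.0])
      print(f"reconstructed edge: {fit.x[0]:.2f} mm")  # ~2.40 despite coarse sampling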

  6. Development of an iterative reconstruction method to overcome 2D detector low resolution limitations in MLC leaf position error detection for 3D dose verification in IMRT

    NASA Astrophysics Data System (ADS)

    Visser, R.; Godart, J.; Wauben, D. J. L.; Langendijk, J. A.; van't Veld, A. A.; Korevaar, E. W.

    2016-05-01

    The objective of this study was to introduce a new iterative method to reconstruct multileaf collimator (MLC) positions based on low resolution ionization detector array measurements and to evaluate its error detection performance. The iterative reconstruction method consists of a fluence model, a detector model and an optimizer. Expected detector response was calculated using a radiotherapy treatment plan in combination with the fluence model and detector model. MLC leaf positions were reconstructed by minimizing differences between expected and measured detector response. The iterative reconstruction method was evaluated for an Elekta SLi with 10.0 mm MLC leaves in combination with the COMPASS system and the MatriXX Evolution (IBA Dosimetry) detector with a spacing of 7.62 mm. The detector was positioned in such a way that each leaf pair of the MLC was aligned with one row of ionization chambers. Known leaf displacements were introduced in various field geometries, ranging from -10.0 mm to 10.0 mm. Error detection performance was tested for MLC leaf position dependency relative to the detector position, gantry angle dependency, monitor unit dependency, and for ten clinical intensity modulated radiotherapy (IMRT) treatment beams. For one clinical head and neck IMRT treatment beam, the influence of the iterative reconstruction method on existing 3D dose reconstruction artifacts was evaluated. The described iterative reconstruction method was capable of individual MLC leaf position reconstruction with millimeter accuracy, independent of the relative detector position, within the range of clinically applied MUs for IMRT. Dose reconstruction artifacts in a clinical IMRT treatment beam were considerably reduced compared to the current dose verification procedure. The iterative reconstruction method allows high accuracy 3D dose verification by including actual MLC leaf positions reconstructed from low resolution 2D measurements.

  7. Thermodynamics of Error Correction

    NASA Astrophysics Data System (ADS)

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  8. Ultimate limits to error probabilities for ionospheric models based on solar geophysical indices and how these compare with the state of the art

    NASA Technical Reports Server (NTRS)

    Nisbet, J. S.; Stehle, C. G.

    1981-01-01

    An ideal model based on a given set of geophysical indices is defined as a model that provides a least squares fit to the data set as a function of the indices considered. Satellite measurements of electron content for three stations at different magnetic latitudes were used to provide such data sets which were each fitted to the geophysical indices. The magnitude of the difference between the measured value and the derived equation for the data set was used to estimate the probability of making an error greater than a given magnitude for such an ideal model. Atmospheric Explorer C data is used to examine the causes of the fluctuations and suggestions are made about how real improvements can be made in ionospheric forecasting ability. Joule heating inputs in the auroral electrojets are related to the AL and AU magnetic indices. Magnetic indices based on the time integral of the energy deposited in the electrojets are proposed for modeling processes affected by auroral zone heating.
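
    A least-squares "ideal model" of the sort described can be sketched as follows: regress electron-content measurements on a set of geophysical indices and use the residuals to estimate the probability of exceeding a given error magnitude. The synthetic data, the index stand-ins, and the threshold are illustrative assumptions:

      import numpy as np

      rng = np.random.default_rng(1)
      n = 500
      # Columns: intercept, solar-flux-like index, magnetic-activity-like index
      X = np.column_stack([np.ones(n), rng.normal(150, 30, n), rng.gamma(2, 5, n)])
      tec = X @ np.array([5.0, 0.08, 0.3]) + rng.normal(0, 2.0, n)  # synthetic data

      coef, *_ = np.linalg.lstsq(X, tec, rcond=None)  # the "ideal model" fit
      residuals = tec - X @ coef
      threshold = 4.0  # error magnitude of interest (illustrative units)
      print("P(|error| > threshold) ~", np.mean(np.abs(residuals) > threshold))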

  9. Uncorrected refractive errors.

    PubMed

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  10. Grazing function g and collimation angular acceptance

    SciTech Connect

    Peggs, S.G.; Previtali, V.

    2009-11-02

    The grazing function g is introduced - a synchrobetatron optical quantity that is analogous (and closely connected) to the Twiss and dispersion functions β, α, η, and η'. It parametrizes the rate of change of total angle with respect to synchrotron amplitude for grazing particles, which just touch the surface of an aperture when their synchrotron and betatron oscillations are simultaneously (in time) at their extreme displacements. The grazing function can be important at collimators with limited acceptance angles. For example, it is important in both modes of crystal collimation operation - in channeling and in volume reflection. The grazing function is independent of the collimator type - crystal or amorphous - but can depend strongly on its azimuthal location. The rigorous synchrobetatron condition g = 0 is solved, by invoking the close connection between the grazing function and the slope of the normalized dispersion. Propagation of the grazing function is described, through drifts, dipoles, and quadrupoles. Analytic expressions are developed for g in perfectly matched periodic FODO cells, and in the presence of β or η error waves. These analytic approximations are shown to be, in general, in good agreement with realistic numerical examples. The grazing function is shown to scale linearly with FODO cell bend angle, but to be independent of FODO cell length. The ideal value is g = 0 at the collimator, but finite nonzero values are acceptable. Practically achievable grazing functions are described and evaluated, for both amorphous and crystal primary collimators, at RHIC, the SPS (UA9), the Tevatron (T-980), and the LHC.

  11. Smoothing error pitfalls

    NASA Astrophysics Data System (ADS)

    von Clarmann, T.

    2014-04-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by the so-called smoothing error. In this paper it is shown that the concept of the smoothing error is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state. The idea of a sufficiently fine sampling of this reference atmospheric state is untenable because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully talk about temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the involved a priori covariance matrix has been evaluated on the comparison grid rather than resulting from interpolation. This is because the undefined component of the smoothing error, namely the effect of smoothing implied by the finite grid on which the measurements are compared, cancels out when the difference is calculated.
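
    In the usual optimal-estimation notation, the quantity under discussion is written, with averaging kernel matrix A and a priori covariance S_a (this standard form is supplied for orientation and is assumed, not quoted from the paper):

      \mathbf{S}_{s} = (\mathbf{A} - \mathbf{I})\, \mathbf{S}_{a}\, (\mathbf{A} - \mathbf{I})^{T}

    The paper's argument is that this propagation is only meaningful if S_a refers to a defensible reference state and grid.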

  12. Acceptance criteria for urban dispersion model evaluation

    NASA Astrophysics Data System (ADS)

    Hanna, Steven; Chang, Joseph

    2012-05-01

    The authors suggested acceptance criteria for rural dispersion models' performance measures in this journal in 2004. The current paper suggests modified values of acceptance criteria for urban applications and tests them with tracer data from four urban field experiments. For the arc-maximum concentrations, the fractional bias should have a magnitude <0.67 (i.e., the relative mean bias is less than a factor of 2); the normalized mean-square error should be <6 (i.e., the random scatter is less than about 2.4 times the mean); and the fraction of predictions that are within a factor of two of the observations (FAC2) should be >0.3. For all data paired in space, for which a threshold concentration must always be defined, the normalized absolute difference should be <0.50, when the threshold is three times the instrument's limit of quantification (LOQ). An overall criterion is then applied that the total set of acceptance criteria should be satisfied in at least half of the field experiments. These acceptance criteria are applied to evaluations of the US Department of Defense's Joint Effects Model (JEM) with tracer data from US urban field experiments in Salt Lake City (U2000), Oklahoma City (JU2003), and Manhattan (MSG05 and MID05). JEM includes the SCIPUFF dispersion model with the urban canopy option and the urban dispersion model (UDM) option. In each set of evaluations, three or four likely options are tested for meteorological inputs (e.g., a local building top wind speed, the closest National Weather Service airport observations, or outputs from numerical weather prediction models). It is found that, due to large natural variability in the urban data, there is not a large difference between the performance measures for the two model options and the three or four meteorological input options. The more detailed UDM and the state-of-the-art numerical weather models do provide a slight improvement over the other options. The proposed urban dispersion model acceptance
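
    The cited performance measures have standard definitions (the Chang and Hanna formulations are assumed here); a compact sketch for paired arrays of observed (co) and predicted (cp) concentrations:

      import numpy as np

      def fb(co, cp):    # fractional bias; urban acceptance: |FB| < 0.67
          return (co.mean() - cp.mean()) / (0.5 * (co.mean() + cp.mean()))

      def nmse(co, cp):  # normalized mean-square error; acceptance: NMSE < 6
          return np.mean((co - cp) ** 2) / (co.mean() * cp.mean())

      def fac2(co, cp):  # fraction within a factor of two; acceptance: FAC2 > 0.3
          return np.mean((cp >= 0.5 * co) & (cp <= 2.0 * co))

      def nad(co, cp):   # normalized absolute difference; acceptance: NAD < 0.50
          return np.mean(np.abs(co - cp)) / (co.mean() + cp.mean())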

  13. Correction of subtle refractive error in aviators.

    PubMed

    Rabin, J

    1996-02-01

    Optimal visual acuity is a requirement for piloting aircraft in military and civilian settings. While acuity can be corrected with glasses, spectacle wear can limit or even prohibit use of certain devices such as night vision goggles, helmet mounted displays, and/or chemical protective masks. Although current Army policy is directed toward selection of pilots who do not require spectacle correction for acceptable vision, refractive error can become manifest over time, making optical correction necessary. In such cases, contact lenses have been used quite successfully. Another approach is to neglect small amounts of refractive error, provided that vision is at least 20/20 without correction. This report describes visual findings in an aviator who was fitted with a contact lens to correct moderate astigmatism in one eye, while the other eye, with lesser refractive error, was left uncorrected. Advanced methods of testing visual resolution, including high and low contrast visual acuity and small letter contrast sensitivity, were used to compare vision achieved with full spectacle correction to that attained with the habitual, contact lens correction. Although the patient was pleased with his habitual correction, vision was significantly better with full spectacle correction, particularly on the small letter contrast test. Implications of these findings are considered.

  14. Offer/Acceptance Ratio.

    ERIC Educational Resources Information Center

    Collins, Mimi

    1997-01-01

    Explores how human resource professionals, with above average offer/acceptance ratios, streamline their recruitment efforts. Profiles company strategies with internships, internal promotion, cooperative education programs, and how to get candidates to accept offers. Also discusses how to use the offer/acceptance ratio as a measure of program…

  15. Definition of the limit of quantification in the presence of instrumental and non-instrumental errors. Comparison among various definitions applied to the calibration of zinc by inductively coupled plasma-mass spectrometry

    NASA Astrophysics Data System (ADS)

    Badocco, Denis; Lavagnini, Irma; Mondin, Andrea; Favaro, Gabriella; Pastore, Paolo

    2015-12-01

    The limit of quantification (LOQ) in the presence of instrumental and non-instrumental errors was proposed. It was defined theoretically by combining the two-component variance regression and LOQ schemas already present in the literature, and it was applied to the calibration of zinc by the ICP-MS technique. At low concentration levels, the two-component variance LOQ definition should always be used, above all when a clean room is not available. Three LOQ definitions were considered: one in the concentration domain and two in the signal domain. The LOQ computed in the concentration domain, proposed by Currie, was completed by adding the third-order terms in the Taylor expansion, because they are of the same order of magnitude as the second-order ones and therefore cannot be neglected. In this context, the error propagation was simplified by eliminating the correlation contributions through the use of independent random variables. Among the signal-domain definitions, particular attention was devoted to the recently proposed approach based on at least one significant digit in the measurement; the corresponding LOQ values proved very large, preventing quantitative analysis. It was found that the Currie schemas in the signal and concentration domains gave similar LOQ values, but the former formulation is to be preferred as it is more easily computable.
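
    For orientation, the two ingredients the authors combine can be written generically (these are standard textbook forms, assumed here rather than copied from the paper): a two-component variance model for the signal and a Currie-type quantification limit with calibration slope b and multiplier k_Q = 10,

      s^{2}(c) = s_{0}^{2} + (\eta\, c)^{2}
      \mathrm{LOQ} = \frac{k_{Q}\, s_{0}}{b} = \frac{10\, s_{0}}{b}

    where s_0 is the standard deviation of the signal at blank level and η is the relative contribution of the non-instrumental error.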

  16. Improved Error Thresholds for Measurement-Free Error Correction

    NASA Astrophysics Data System (ADS)

    Crow, Daniel; Joynt, Robert; Saffman, M.

    2016-09-01

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^-3 to 10^-4, comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.

  17. Improved Error Thresholds for Measurement-Free Error Correction.

    PubMed

    Crow, Daniel; Joynt, Robert; Saffman, M

    2016-09-23

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^{-3} to 10^{-4}, comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.

  18. Defining acceptable conditions in wilderness

    NASA Astrophysics Data System (ADS)

    Roggenbuck, J. W.; Williams, D. R.; Watson, A. E.

    1993-03-01

    The limits of acceptable change (LAC) planning framework recognizes that forest managers must decide what indicators of wilderness conditions best represent resource naturalness and high-quality visitor experiences and how much change from the pristine is acceptable for each indicator. Visitor opinions on the aspects of the wilderness that have great impact on their experience can provide valuable input to selection of indicators. Cohutta, Georgia; Caney Creek, Arkansas; Upland Island, Texas; and Rattlesnake, Montana, wilderness visitors have high shared agreement that littering and damage to trees in campsites, noise, and seeing wildlife are very important influences on wilderness experiences. Camping within sight or sound of other people influences experience quality more than do encounters on the trails. Visitors’ standards of acceptable conditions within wilderness vary considerably, suggesting a potential need to manage different zones within wilderness for different clientele groups and experiences. Standards across wildernesses, however, are remarkably similar.

  19. Error and its meaning in forensic science.

    PubMed

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes.

  20. Smoothing error pitfalls

    NASA Astrophysics Data System (ADS)

    von Clarmann, T.

    2014-09-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the

  1. LIMS user acceptance testing.

    PubMed

    Klein, Corbett S

    2003-01-01

    Laboratory Information Management Systems (LIMS) play a key role in the pharmaceutical industry. Thorough and accurate validation of such systems is critical and is a regulatory requirement. LIMS user acceptance testing is one aspect of this testing and enables the user to make a decision to accept or reject implementation of the system. This paper discusses key elements in facilitating the development and execution of a LIMS User Acceptance Test Plan (UATP).

  2. The diffraction limit of an optical spectrum analyzer

    NASA Astrophysics Data System (ADS)

    Kolobrodov, V. G.; Tymchik, G. S.; Kolobrodov, M. S.

    2015-11-01

    This article examines a systematic error that occurs in optical spectrum analyzers and is caused by the Fresnel approximation. The aim of the article is to determine acceptable errors of spatial frequency measurement in a signal spectrum. The systematic error of spatial frequency measurement has been investigated on the basis of a physical and mathematical model of a coherent spectrum analyzer; it arises in the transition from light propagation in free space to Fresnel diffraction. Equations for the absolute and relative measurement errors as functions of the diffraction angle have been obtained, which allows the limits of the spectral range to be determined for a given relative error of spatial frequency measurement.
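
    One plausible way to express the error in question (an illustrative derivation under the stated geometry, not the article's own equations): the exact mapping assigns spatial frequency ν = sin θ / λ to diffraction angle θ, while the Fresnel (paraxial) treatment uses the focal-plane coordinate, giving ν_F = tan θ / λ, so the relative error grows with angle as

      \frac{\nu_{F} - \nu}{\nu} = \frac{1}{\cos\theta} - 1 \approx \frac{\theta^{2}}{2}

    Setting this equal to the acceptable relative error then bounds the usable diffraction angle, and with it the spectral range.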

  3. On Maximum FODO Acceptance

    SciTech Connect

    Batygin, Yuri Konstantinovich

    2014-12-24

    This note illustrates the maximum acceptance of a FODO quadrupole focusing channel. Acceptance is the largest Floquet ellipse of a matched beam: A = a²/β_max, where a is the aperture of the channel and β_max is the largest value of the beta-function in the channel. If the aperture of the channel is restricted by a circle of radius a, the s-s acceptance is available for particles oscillating at the median plane, y = 0. Particles outside the median plane will occupy a smaller phase space area. In the x-y plane, the cross section of the accepted beam has the shape of an ellipse with truncated boundaries.
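
    A worked example with assumed numbers: for an aperture a = 10 mm and β_max = 4 m, the acceptance is A = a²/β_max = (10 mm)²/4 m = 25 mm·mrad. Halving β_max (a more tightly focusing lattice) doubles the acceptance for the same aperture.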

  4. Impulsive stabilization of a class of nonlinear system with bounded gain error

    NASA Astrophysics Data System (ADS)

    Ma, Tie-Dong; Zhao, Fei-Ya

    2014-12-01

    Considering the mechanical limitations or device restrictions encountered in practical applications, this paper investigates the impulsive stabilization of nonlinear systems with impulsive gain error. Compared with existing impulsive analytical approaches, the proposed impulsive control method is more practically applicable, as it includes a control gain error within an acceptable bound. A sufficient criterion for global exponential stability of an impulsive control system is derived, which efficiently relaxes the requirement of a precise impulsive gain. The effectiveness of the proposed method is confirmed by theoretical analysis and numerical simulation based on Chua's circuit.

  5. Discretization errors in particle tracking

    NASA Astrophysics Data System (ADS)

    Carmon, G.; Mamman, N.; Feingold, M.

    2007-03-01

    High precision video tracking of microscopic particles is limited by systematic and random errors. Systematic errors are partly due to the discretization process both in position and in intensity. We study the behavior of such errors in a simple tracking algorithm designed for the case of symmetric particles. This symmetry algorithm uses interpolation to estimate the value of the intensity at arbitrary points in the image plane. We show that the discretization error is composed of two parts: (1) the error due to the discretization of the intensity, b_D, and (2) that due to interpolation, b_I. While b_D behaves asymptotically like N^-1, where N is the number of intensity gray levels, b_I is small when using cubic spline interpolation.

  6. Calculation of magnetic error fields in hybrid insertion devices

    NASA Astrophysics Data System (ADS)

    Savoy, R.; Halbach, K.; Hassenzahl, W.; Hoyer, E.; Humphries, D.; Kincaid, B.

    1990-05-01

    The Advanced Light Source (ALS) at the Lawrence Berkeley Laboratory requires insertion devices with fields sufficiently accurate to take advantage of the small emittance of the ALS electron beam. To maintain the spectral performance of the synchrotron radiation and to limit steering effects on the electron beam, these errors must be smaller than 0.25%. This paper develops a procedure for calculating the steering error due to misalignment of the easy axis of the permanent-magnet material. The procedure is based on a three-dimensional theory of the design of hybrid insertion devices developed by one of us. The acceptable tolerance for easy-axis misalignment is found for a 5-cm-period undulator proposed for the ALS.

  7. Wavefront error sensing

    NASA Technical Reports Server (NTRS)

    Tubbs, Eldred F.

    1986-01-01

    A two-step approach to wavefront sensing for the Large Deployable Reflector (LDR) was examined as part of an effort to define wavefront-sensing requirements and to determine particular areas for more detailed study. A Hartmann test for coarse alignment, particularly segment tilt, seems feasible if LDR can operate at 5 microns or less. The direct measurement of the point spread function in the diffraction limited region may be a way to determine piston error, but this can only be answered by a detailed software model of the optical system. The question of suitable astronomical sources for either test must also be addressed.

  8. Newbery Medal Acceptance.

    ERIC Educational Resources Information Center

    Freedman, Russell

    1988-01-01

    Presents the Newbery Medal acceptance speech of Russell Freedman, writer of children's nonfiction. Discusses the place of nonfiction in the world of children's literature, the evolution of children's biographies, and the author's work on "Lincoln." (ARH)

  9. Measurement accuracies in band-limited extrapolation

    NASA Technical Reports Server (NTRS)

    Kritikos, H. N.

    1982-01-01

    The problem of numerical instability associated with extrapolation algorithms is addressed. An attempt is made to estimate the bounds for the acceptable errors and to place a ceiling on the measurement accuracy and computational accuracy needed for the extrapolation. It is shown that in band-limited (or visible-angle-limited) extrapolation, the larger effective aperture L' that can be realized from a finite aperture L by oversampling is a function of the accuracy of the measurements. It is shown that for sampling in the interval L/b ≤ |x| ≤ L, b > 1, the signal must be known within an error e_N given by e_N² ≈ (1/4)(2kL')³ (eL/(8bL'))^(2kL'), where L is the physical aperture, L' is the extrapolated aperture, and k = 2π/λ.

  10. The Cline of Errors in the Writing of Japanese University Students

    ERIC Educational Resources Information Center

    French, Gary

    2005-01-01

    In this study, errors in the English writing of students in the College of World Englishes at Chukyo University, Japan are examined to determine if there is a level of acceptance among teachers. If there is, are these errors becoming part of an accepted, standardized Japanese English? Results show there is little acceptance of third person…

  11. Accepting space radiation risks.

    PubMed

    Schimmerling, Walter

    2010-08-01

    The human exploration of space inevitably involves exposure to radiation. Associated with this exposure are multiple risks, i.e., probabilities that certain aspects of an astronaut's health or performance will be degraded. The management of these risks requires that such probabilities be accurately predicted, that the actual exposures be verified, and that comprehensive records be maintained. Implicit in these actions is the fact that, at some point, a decision has been made to accept a certain level of risk. This paper examines ethical and practical considerations involved in arriving at a determination that risks are acceptable, roles that the parties involved may play, and obligations arising out of reliance on the informed consent paradigm seen as the basis for ethical radiation risk acceptance in space.

  12. Current limiters

    SciTech Connect

    Loescher, D.H.; Noren, K.

    1996-09-01

    The current that flows between the electrical test equipment and the nuclear explosive must be limited to safe levels during electrical tests conducted on nuclear explosives at the DOE Pantex facility. The safest way to limit the current is to use batteries that can provide only acceptably low current into a short circuit; unfortunately this is not always possible. When it is not possible, current limiters, along with other design features, are used to limit the current. Three types of current limiters, the fuse blower, the resistor limiter, and the MOSFET-pass-transistor limiters, are used extensively in Pantex test equipment. Detailed failure mode and effects analyses were conducted on these limiters. Two other types of limiters were also analyzed. It was found that there is no best type of limiter that should be used in all applications. The fuse blower has advantages when many circuits must be monitored, a low insertion voltage drop is important, and size and weight must be kept low. However, this limiter has many failure modes that can lead to the loss of over current protection. The resistor limiter is simple and inexpensive, but is normally usable only on circuits for which the nominal current is less than a few tens of milliamperes. The MOSFET limiter can be used on high current circuits, but it has a number of single point failure modes that can lead to a loss of protective action. Because bad component placement or poor wire routing can defeat any limiter, placement and routing must be designed carefully and documented thoroughly.

  13. Robust characterization of leakage errors

    NASA Astrophysics Data System (ADS)

    Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph

    2016-04-01

    Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational ‘qubit’, the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor to achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.

  14. A Fourier analysis on the maximum acceptable grid size for discrete proton beam dose calculation.

    PubMed

    Li, Haisen S; Romeijn, H Edwin; Dempsey, James F

    2006-09-01

    We developed an analytical method for determining the maximum acceptable grid size for discrete dose calculation in proton therapy treatment plan optimization, so that the accuracy of the optimized dose distribution is guaranteed in the dose sampling phase and superfluous computational work is avoided. The accuracy of dose sampling was judged by the criterion that the continuous dose distribution could be reconstructed from the discrete dose within a 2% error limit. To keep the error caused by discrete dose sampling under this 2% limit, the dose grid size cannot exceed a maximum acceptable value. The method was based on Fourier analysis and the Shannon-Nyquist sampling theorem, as an extension of our previous analysis for photon beam intensity modulated radiation therapy [J. F. Dempsey, H. E. Romeijn, J. G. Li, D. A. Low, and J. R. Palta, Med. Phys. 32, 380-388 (2005)]. The proton beam model used for the analysis was a nearly monoenergetic (of width about 1% of the incident energy) and monodirectional infinitesimal (nonintegrated) pencil beam in a water medium. By monodirectional, we mean that the proton particles travel in the same direction before entering the water medium and that scattering prior to entering the water is not taken into account. In intensity modulated proton therapy, the elementary intensity modulation entity is either an infinitesimal or a finite sized beamlet. Since a finite sized beamlet is a superposition of infinitesimal pencil beams, the maximum acceptable grid size obtained with the infinitesimal pencil beam also applies to finite sized beamlets. The analytic Bragg curve function proposed by Bortfeld [T. Bortfeld, Med. Phys. 24, 2024-2033 (1997)] was employed. The lateral profile was approximated by a depth dependent Gaussian distribution. The model included the spreads of the Bragg peak and the lateral profiles due to multiple Coulomb scattering. The dependence of the maximum acceptable dose grid size on the
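
    The underlying criterion is the Shannon-Nyquist one. Written out generically (a statement of the theorem as applied here, not the paper's final result), a dose distribution whose spectrum is negligible above a spatial frequency ν_max can be reconstructed within the stated tolerance only if the grid spacing satisfies

      \Delta x \le \frac{1}{2\,\nu_{\max}}

    so the maximum acceptable grid size is set by the highest spatial frequency that survives in the Bragg peak and lateral penumbra.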

  15. Estimation of flood warning runoff thresholds in ungauged basins with asymmetric error functions

    NASA Astrophysics Data System (ADS)

    Toth, Elena

    2016-06-01

    In many real-world flood forecasting systems, the runoff thresholds for activating warnings or mitigation measures correspond to the flow peaks with a given return period (often 2 years, which may be associated with the bankfull discharge). At locations where the historical streamflow records are absent or very limited, the threshold can be estimated with regionally derived empirical relationships between catchment descriptors and the desired flood quantile. Whatever the function form, such models are generally parameterised by minimising the mean square error, which assigns equal importance to overprediction or underprediction errors. Considering that the consequences of an overestimated warning threshold (leading to the risk of missing alarms) generally have a much lower level of acceptance than those of an underestimated threshold (leading to the issuance of false alarms), the present work proposes to parameterise the regression model through an asymmetric error function, which penalises the overpredictions more. The estimates by models (feedforward neural networks) with increasing degree of asymmetry are compared with those of a traditional, symmetrically trained network, in a rigorous cross-validation experiment referred to a database of catchments covering the country of Italy. The analysis shows that the use of the asymmetric error function can substantially reduce the number and extent of overestimation errors, if compared to the use of the traditional square errors. Of course such reduction is at the expense of increasing underestimation errors, but the overall accuracy is still acceptable and the results illustrate the potential value of choosing an asymmetric error function when the consequences of missed alarms are more severe than those of false alarms.
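
    A minimal sketch of the asymmetric parameterisation idea: a squared-error loss whose weight differs for over- and underprediction, usable as the training criterion of a feedforward network. The weighting form and the factor of 4 are illustrative assumptions, not the paper's exact error function:

      import numpy as np

      def asymmetric_mse(y_true, y_pred, w_over=4.0):
          """Mean squared error that penalises overprediction w_over times more
          than underprediction (overestimated thresholds cause missed alarms)."""
          err = y_pred - y_true
          weights = np.where(err > 0.0, w_over, 1.0)
          return float(np.mean(weights * err ** 2))

      # Identical |error|, different penalty:
      y = np.array([100.0]); over = np.array([120.0]); under = np.array([80.0])
      print(asymmetric_mse(y, over), asymmetric_mse(y, under))  # 1600.0 vs 400.0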

  16. Estimation of flood warning runoff thresholds in ungauged basins with asymmetric error functions

    NASA Astrophysics Data System (ADS)

    Toth, E.

    2015-06-01

    In many real-world flood forecasting systems, the runoff thresholds for activating warnings or mitigation measures correspond to the flow peaks with a given return period (often the 2-year one, which may be associated with the bankfull discharge). At locations where the historical streamflow records are absent or very limited, the threshold can be estimated with regionally derived empirical relationships between catchment descriptors and the desired flood quantile. Whatever the function form, such models are generally parameterised by minimising the mean square error, which assigns equal importance to overprediction or underprediction errors. Considering that the consequences of an overestimated warning threshold (leading to the risk of missing alarms) generally have a much lower level of acceptance than those of an underestimated threshold (leading to the issuance of false alarms), the present work proposes to parameterise the regression model through an asymmetric error function, which penalises overpredictions more. The estimates by models (feedforward neural networks) with increasing degree of asymmetry are compared with those of a traditional, symmetrically trained network, in a rigorous cross-validation experiment referred to a database of catchments covering the country of Italy. The analysis shows that the use of the asymmetric error function can substantially reduce the number and extent of overestimation errors, if compared to the use of the traditional square errors. Of course such reduction is at the expense of increasing underestimation errors, but the overall accuracy is still acceptable and the results illustrate the potential value of choosing an asymmetric error function when the consequences of missed alarms are more severe than those of false alarms.

  17. Why was Relativity Accepted?

    NASA Astrophysics Data System (ADS)

    Brush, S. G.

    Historians of science have published many studies of the reception of Einstein's special and general theories of relativity. Based on a review of these studies, and my own research on the role of the light-bending prediction in the reception of general relativity, I discuss the role of three kinds of reasons for accepting relativity: (1) empirical predictions and explanations; (2) social-psychological factors; and (3) aesthetic-mathematical factors. According to the historical studies, acceptance was a three-stage process. First, a few leading scientists adopted the special theory for aesthetic-mathematical reasons. In the second stage, their enthusiastic advocacy persuaded other scientists to work on the theory and apply it to problems currently of interest in atomic physics. The special theory was accepted by many German physicists by 1910 and had begun to attract some interest in other countries. In the third stage, the confirmation of Einstein's light-bending prediction attracted much public attention and forced all physicists to take the general theory of relativity seriously. In addition to light-bending, the explanation of the advance of Mercury's perihelion was considered strong evidence by theoretical physicists. The American astronomers who conducted successful tests of general relativity became defenders of the theory. There is little evidence that relativity was 'socially constructed', but its initial acceptance was facilitated by the prestige and resources of its advocates.

  18. Approaches to acceptable risk

    SciTech Connect

    Whipple, C.

    1997-04-30

    Several alternative approaches to address the question "How safe is safe enough?" are reviewed and an attempt is made to apply the reasoning behind these approaches to the issue of acceptability of radiation exposures received in space. The approaches to the issue of the acceptability of technological risk described here are primarily analytical, and are drawn from examples in the management of environmental health risks. These include risk-based approaches, in which specific quantitative risk targets determine the acceptability of an activity, and cost-benefit and decision analysis, which generally focus on the estimation and evaluation of risks, benefits and costs, in a framework that balances these factors against each other. These analytical methods tend by their quantitative nature to emphasize the magnitude of risks, costs and alternatives, and to downplay other factors, especially those that are not easily expressed in quantitative terms, that affect acceptance or rejection of risk. Such other factors include the issues of risk perceptions and how and by whom risk decisions are made.

  19. Antenna motion errors in bistatic SAR imagery

    NASA Astrophysics Data System (ADS)

    Wang, Ling; Yazıcı, Birsen; Cagri Yanik, H.

    2015-06-01

    Antenna trajectory or motion errors are pervasive in synthetic aperture radar (SAR) imaging. Motion errors typically result in smearing and positioning errors in SAR images. Understanding the relationship between the trajectory errors and position errors in reconstructed images is essential in forming focused SAR images. Existing studies on the effect of antenna motion errors are limited to certain geometries, trajectory error models or monostatic SAR configuration. In this paper, we present an analysis of position errors in bistatic SAR imagery due to antenna motion errors. Bistatic SAR imagery is becoming increasingly important in the context of passive imaging and multi-sensor imaging. Our analysis provides an explicit quantitative relationship between the trajectory errors and the positioning errors in bistatic SAR images. The analysis is applicable to arbitrary trajectory errors and arbitrary imaging geometries including wide apertures and large scenes. We present extensive numerical simulations to validate the analysis and to illustrate the results in commonly used bistatic configurations and certain trajectory error models.

  20. Acceptability of human risk.

    PubMed

    Kasperson, R E

    1983-10-01

    This paper has three objectives: to explore the nature of the problem implicit in the term "risk acceptability," to examine the possible contributions of scientific information to risk standard-setting, and to argue that societal response is best guided by considerations of process rather than formal methods of analysis. Most technological risks are not accepted but are imposed. There is also little reason to expect consensus among individuals on their tolerance of risk. Moreover, debates about risk levels are often at base debates over the adequacy of the institutions which manage the risks. Scientific information can contribute three broad types of analyses to risk-setting deliberations: contextual analysis, equity assessment, and public preference analysis. More effective risk-setting decisions will involve attention to the process used, particularly in regard to the requirements of procedural justice and democratic responsibility.

  1. Acceptability of human risk.

    PubMed Central

    Kasperson, R E

    1983-01-01

    This paper has three objectives: to explore the nature of the problem implicit in the term "risk acceptability," to examine the possible contributions of scientific information to risk standard-setting, and to argue that societal response is best guided by considerations of process rather than formal methods of analysis. Most technological risks are not accepted but are imposed. There is also little reason to expect consensus among individuals on their tolerance of risk. Moreover, debates about risk levels are often at base debates over the adequacy of the institutions which manage the risks. Scientific information can contribute three broad types of analyses to risk-setting deliberations: contextual analysis, equity assessment, and public preference analysis. More effective risk-setting decisions will involve attention to the process used, particularly in regard to the requirements of procedural justice and democratic responsibility. PMID:6418541

  2. Acceptance Test Plan.

    DTIC Science & Technology

    2014-09-26

    Acceptance Test Plan for Special Reliability Tests for Broadband Microwave Amplifier Panel; David C. Kraus, Reliability Engineer, Westinghouse Defense and Electronics Center, Development and Operations Division, Baltimore MD, 28 Jun (year illegible in scan); monitoring organization: Naval Research Laboratory. (The remainder of the scanned report documentation page is not legible.)

  3. Age and Acceptance of Euthanasia.

    ERIC Educational Resources Information Center

    Ward, Russell A.

    1980-01-01

    Study explores relationship between age (and sex and race) and acceptance of euthanasia. Women and non-Whites were less accepting because of religiosity. Among older people less acceptance was attributable to their lesser education and greater religiosity. Results suggest that quality of life in old age affects acceptability of euthanasia. (Author)

  4. SU-D-BRD-07: Evaluation of the Effectiveness of Statistical Process Control Methods to Detect Systematic Errors For Routine Electron Energy Verification

    SciTech Connect

    Parker, S

    2015-06-15

    Purpose: To evaluate the ability of statistical process control methods to detect systematic errors when using a two-dimensional (2D) detector array for routine electron beam energy verification. Methods: Electron beam energy constancy was measured using an aluminum wedge and a 2D diode array on four linear accelerators. Process control limits were established. Measurements were recorded in control charts and compared with both calculated process control limits and TG-142 recommended specification limits. The data were tested for normality, process capability and process acceptability. Additional measurements were recorded while systematic errors were intentionally introduced. Systematic errors included shifts in the alignment of the wedge, incorrect orientation of the wedge, and incorrect array calibration. Results: Control limits calculated for each beam were smaller than the recommended specification limits. Process capability and process acceptability ratios were greater than one in all cases. All data were normally distributed. Shifts in the alignment of the wedge were most apparent for low energies. The smallest shift (0.5 mm) was detectable using process control limits in some cases, while the largest shift (2 mm) was detectable using specification limits in only one case. The wedge orientation tested did not affect the measurements, as it did not affect the thickness of aluminum over the detectors of interest. Array calibration dependence varied with energy and selected array calibration; 6 MeV was the least sensitive to array calibration selection while 16 MeV was the most sensitive. Conclusion: Statistical process control methods demonstrated that the data distribution was normal, the process was capable of meeting specifications, and the process was centered within the specification limits. Though not all systematic errors were distinguishable from random errors, process control limits increased the ability to detect systematic errors.
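
    A hedged sketch of the generic SPC machinery this record refers to: individuals (I-mR) control limits and capability/acceptability ratios. The readings and the ±2% specification below are invented stand-ins, not the study's data.

    ```python
    import numpy as np

    def imr_limits(x):
        """Individuals control chart limits from the mean moving range
        (standard I-mR constant: sigma_hat = mR_bar / 1.128)."""
        x = np.asarray(x, dtype=float)
        sigma_hat = np.mean(np.abs(np.diff(x))) / 1.128
        center = x.mean()
        return center - 3 * sigma_hat, center, center + 3 * sigma_hat

    def capability(x, lsl, usl):
        """Process capability Cp and acceptability Cpk against spec limits."""
        x = np.asarray(x, dtype=float)
        sigma = np.std(x, ddof=1)
        cp = (usl - lsl) / (6 * sigma)
        cpk = min(usl - x.mean(), x.mean() - lsl) / (3 * sigma)
        return cp, cpk

    # Illustrative daily energy-constancy readings (% deviation from baseline)
    readings = np.array([0.1, -0.2, 0.0, 0.3, -0.1, 0.2, 0.0, -0.3, 0.1, 0.0])
    lcl, cl, ucl = imr_limits(readings)
    cp, cpk = capability(readings, lsl=-2.0, usl=2.0)  # assumed +/-2% spec limits
    print(f"control limits: [{lcl:.2f}, {ucl:.2f}]  Cp={cp:.2f}  Cpk={cpk:.2f}")
    ```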

  5. Field error lottery

    SciTech Connect

    Elliott, C.J.; McVey, B. ); Quimby, D.C. )

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  6. Inborn errors of metabolism

    MedlinePlus

    Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine. 25th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 205. Rezvani I, Rezvani G. An ...

  7. High acceptance recoil polarimeter

    SciTech Connect

    The HARP Collaboration

    1992-12-05

    In order to detect neutrons and protons in the 50 to 600 MeV energy range and measure their polarization, an efficient, low-noise, self-calibrating device is being designed. This detector, known as the High Acceptance Recoil Polarimeter (HARP), is based on the recoil principle of proton detection from np→n′p′ or pp→p′p′ scattering (in the original notation the detected particles are underlined), which intrinsically yields polarization information on the incoming particle. HARP will be commissioned to carry out experiments in 1994.

  8. Baby-Crying Acceptance

    NASA Astrophysics Data System (ADS)

    Martins, Tiago; de Magalhães, Sérgio Tenreiro

    A baby's crying is its most important means of communication. The crying monitoring performed by existing devices does not, by itself, ensure the complete safety of the child: these technological resources must be coupled with means of communicating the results to the caregiver, which involves digital processing of the information available in the crying. The survey carried out made it possible to assess the level of adoption, in continental Portugal, of a technology able to perform such digital processing. The Technology Acceptance Model (TAM) was used as the theoretical framework. The statistical analysis showed a good probability of acceptance of such a system.

  9. Improved modeling of multivariate measurement errors based on the Wishart distribution.

    PubMed

    Wentzell, Peter D; Cleary, Cody S; Kompany-Zareh, M

    2017-03-22

    The error covariance matrix (ECM) is an important tool for characterizing the errors from multivariate measurements, representing both the variance and covariance in the errors across multiple channels. Such information is useful in understanding and minimizing sources of experimental error and in the selection of optimal data analysis procedures. Experimental ECMs, normally obtained through replication, are inherently noisy, inconvenient to obtain, and offer limited interpretability. Significant advantages can be realized by building a model for the ECM based on established error types. Such models are less noisy, reduce the need for replication, mitigate mathematical complications such as matrix singularity, and provide greater insights. While the fitting of ECM models using least squares has been previously proposed, the present work establishes that fitting based on the Wishart distribution offers a much better approach. Simulation studies show that the Wishart method results in parameter estimates with a smaller variance and also facilitates the statistical testing of alternative models using a parameterized bootstrap method. The new approach is applied to fluorescence emission data to establish the acceptability of various models containing error terms related to offset, multiplicative offset, shot noise and uniform independent noise. The implications of the number of replicates, as well as single vs. multiple replicate sets are also described.
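
    A hedged sketch of the central calculation: scoring candidate error covariance matrix (ECM) models by the Wishart log-likelihood of the replicate-based scatter matrix. The two candidate models (uniform independent vs. channel-dependent noise), dimensions, and data are invented for illustration; `scipy.stats.wishart` is assumed available.

    ```python
    import numpy as np
    from scipy.stats import wishart

    rng = np.random.default_rng(1)
    n_rep, n_chan = 20, 5

    # Simulated replicate measurements with channel-dependent noise
    true_cov = np.diag(0.5 + 0.1 * np.arange(n_chan)) + 0.05 * np.eye(n_chan)
    data = rng.multivariate_normal(np.zeros(n_chan), true_cov, size=n_rep)

    S = np.cov(data, rowvar=False)   # experimental (noisy) ECM from replicates
    df = n_rep - 1                   # (n-1)*S ~ Wishart(n-1, Sigma) for Gaussian errors

    def wishart_loglik(model_cov):
        """Log-likelihood of the scatter matrix df*S under a model ECM."""
        return wishart.logpdf(df * S, df=df, scale=model_cov)

    # Compare two candidate error models by likelihood
    model_iid = np.trace(S) / n_chan * np.eye(n_chan)   # uniform independent noise
    model_diag = np.diag(np.diag(S))                    # channel-dependent noise
    print("iid model :", wishart_loglik(model_iid))
    print("diag model:", wishart_loglik(model_diag))
    ```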

  10. Acceptability of Emission Offsets

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  11. Acceptance threshold theory can explain occurrence of homosexual behaviour

    PubMed Central

    Engel, Katharina C.; Männer, Lisa; Ayasse, Manfred; Steiger, Sandra

    2015-01-01

    Same-sex sexual behaviour (SSB) has been documented in a wide range of animals, but its evolutionary causes are not well understood. Here, we investigated SSB in the light of Reeve's acceptance threshold theory. When recognition is not error-proof, the acceptance threshold used by males to recognize potential mating partners should be flexibly adjusted to maximize the fitness pay-off between the costs of erroneously accepting males and the benefits of accepting females. By manipulating male burying beetles' search time for females and their reproductive potential, we influenced their perceived costs of making an acceptance or rejection error. As predicted, when the costs of rejecting females increased, males exhibited more permissive discrimination decisions and showed high levels of SSB; when the costs of accepting males increased, males were more restrictive and showed low levels of SSB. Our results support the idea that in animal species, in which the recognition cues of females and males overlap to a certain degree, SSB is a consequence of an adaptive discrimination strategy to avoid the costs of making rejection errors. PMID:25631226

  12. Acceptance threshold theory can explain occurrence of homosexual behaviour.

    PubMed

    Engel, Katharina C; Männer, Lisa; Ayasse, Manfred; Steiger, Sandra

    2015-01-01

    Same-sex sexual behaviour (SSB) has been documented in a wide range of animals, but its evolutionary causes are not well understood. Here, we investigated SSB in the light of Reeve's acceptance threshold theory. When recognition is not error-proof, the acceptance threshold used by males to recognize potential mating partners should be flexibly adjusted to maximize the fitness pay-off between the costs of erroneously accepting males and the benefits of accepting females. By manipulating male burying beetles' search time for females and their reproductive potential, we influenced their perceived costs of making an acceptance or rejection error. As predicted, when the costs of rejecting females increased, males exhibited more permissive discrimination decisions and showed high levels of SSB; when the costs of accepting males increased, males were more restrictive and showed low levels of SSB. Our results support the idea that in animal species, in which the recognition cues of females and males overlap to a certain degree, SSB is a consequence of an adaptive discrimination strategy to avoid the costs of making rejection errors.
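
    A hedged signal-detection sketch of the threshold logic summarized in the two records above: with overlapping male/female recognition-cue distributions, the ratio of error costs moves the optimal acceptance threshold. The Gaussian cue distributions and cost values are invented, not the authors' model.

    ```python
    import numpy as np
    from scipy.stats import norm

    def optimal_threshold(cost_reject_female, cost_accept_male,
                          mu_f=1.0, mu_m=-1.0, sigma=1.0):
        """Acceptance threshold on a recognition cue that minimizes expected
        cost when female and male cue distributions overlap."""
        t = np.linspace(-5, 5, 2001)
        miss = norm.cdf(t, mu_f, sigma)          # P(reject a female) at threshold t
        false_accept = norm.sf(t, mu_m, sigma)   # P(accept a male) at threshold t
        cost = cost_reject_female * miss + cost_accept_male * false_accept
        return t[np.argmin(cost)]

    # Costly rejection errors -> lower (more permissive) threshold, hence more SSB;
    # costly acceptance errors -> higher (more restrictive) threshold.
    print(optimal_threshold(cost_reject_female=5.0, cost_accept_male=1.0))
    print(optimal_threshold(cost_reject_female=1.0, cost_accept_male=5.0))
    ```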

  13. Empirical Tests of Acceptance Sampling Plans

    NASA Technical Reports Server (NTRS)

    White, K. Preston, Jr.; Johnson, Kenneth L.

    2012-01-01

    Acceptance sampling is a quality control procedure applied as an alternative to 100% inspection. A random sample of items is drawn from a lot to determine the fraction of items which have a required quality characteristic. Both the number of items to be inspected and the criterion for determining conformance of the lot to the requirement are given by an appropriate sampling plan with specified risks of Type I and Type II sampling errors. In this paper, we present the results of empirical tests of the accuracy of selected sampling plans reported in the literature. These plans are for measurable quality characteristics known to have binomial, exponential, normal, gamma, Weibull, inverse Gaussian, or Poisson distributions. In the main, the results support the accepted wisdom that variables acceptance plans are superior to attributes (binomial) acceptance plans, in the sense that they provide comparable protection against risks at reduced sampling cost. For the Gaussian and Weibull plans, however, there are ranges of the shape parameters for which the required sample sizes are in fact larger than those of the corresponding attributes plans, dramatically so for instances of large skew. Tests further confirm that the published inverse-Gaussian (IG) plan is flawed, as reported by White and Johnson (2011).
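
    A minimal sketch of the Type I / Type II risk calculation for an attributes (binomial) single-sampling plan, the baseline the record compares against. The plan parameters (n = 125, c = 3) and the quality levels are illustrative assumptions.

    ```python
    from scipy.stats import binom

    def prob_accept(n, c, p):
        """Operating-characteristic value: probability of accepting a lot with
        defect fraction p under a plan that accepts if <= c defects in n items."""
        return binom.cdf(c, n, p)

    n, c = 125, 3           # illustrative plan: sample 125 items, accept if <= 3 defective
    aql, ltpd = 0.01, 0.05  # assumed producer's and consumer's quality levels

    alpha = 1 - prob_accept(n, c, aql)  # Type I risk: rejecting a good lot
    beta = prob_accept(n, c, ltpd)      # Type II risk: accepting a bad lot
    print(f"producer's risk alpha = {alpha:.3f}, consumer's risk beta = {beta:.3f}")
    ```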

  14. Empathy and error processing.

    PubMed

    Larson, Michael J; Fair, Joseph E; Good, Daniel A; Baldwin, Scott A

    2010-05-01

    Recent research suggests a relationship between empathy and error processing. Error processing is an evaluative control function that can be measured using post-error response time slowing and the error-related negativity (ERN) and post-error positivity (Pe) components of the event-related potential (ERP). Thirty healthy participants completed two measures of empathy, the Interpersonal Reactivity Index (IRI) and the Empathy Quotient (EQ), and a modified Stroop task. Post-error slowing was associated with increased empathic personal distress on the IRI. ERN amplitude was related to overall empathy score on the EQ and the fantasy subscale of the IRI. The Pe and measures of empathy were not related. Results remained consistent when negative affect was controlled via partial correlation, with an additional relationship between ERN amplitude and empathic concern on the IRI. Findings support a connection between empathy and error processing mechanisms.

  15. Burst error correction extensions for large Reed Solomon codes

    NASA Technical Reports Server (NTRS)

    Owsley, P.

    1990-01-01

    Reed Solomon codes are powerful error correcting codes that include some of the best random and burst correcting codes currently known. It is well known that an (n,k) Reed Solomon code can correct up to (n - k)/2 errors. Many applications utilizing Reed Solomon codes require corrections of errors consisting primarily of bursts. In this paper, it is shown that the burst correcting ability of Reed Solomon codes can be increased beyond (n - k)/2 with an acceptable probability of miscorrection.
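
    A hedged demonstration of the classical (n - k)/2 baseline that the record's extension goes beyond, using the third-party reedsolo package (an assumption of this sketch; it is not the paper's method).

    ```python
    # pip install reedsolo  (third-party package; assumed available)
    from reedsolo import RSCodec, ReedSolomonError

    nsym = 10                  # n - k = 10 parity symbols -> guarantees (n-k)/2 = 5 corrections
    rsc = RSCodec(nsym)
    codeword = bytearray(rsc.encode(b"burst correction demo"))

    for i in range(2, 7):      # 5-byte burst: within the classical guarantee
        codeword[i] ^= 0xFF

    decoded = rsc.decode(bytes(codeword))[0]  # recent reedsolo returns (msg, msg+ecc, errata)
    print(decoded)                            # b'burst correction demo'

    codeword[7] ^= 0xFF        # a 6th error exceeds (n-k)/2; decoding may now fail
    try:
        rsc.decode(bytes(codeword))
        print("decoded without an exception -- possibly a silent miscorrection")
    except ReedSolomonError:
        print("burst beyond (n-k)/2: decoding failure detected")
    ```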

  16. Aircraft system modeling error and control error

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.

  17. ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES

    SciTech Connect

    Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.; Xu Jin; Connors, Alanna; Freeman, Peter E.; Zezas, Andreas E-mail: asiemiginowska@cfa.harvard.edu E-mail: jinx@ics.uci.edu E-mail: pfreeman@cmu.edu

    2010-08-10

    A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits.
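
    A minimal Poisson sketch of the two-step recipe the abstract describes: fix a detection threshold from the Type I error, then find the source intensity whose detection probability reaches the required power. The background level, alpha, beta, and the grid step are assumptions.

    ```python
    from scipy.stats import poisson

    def detection_threshold(b, alpha):
        """Smallest count n* with P(N >= n* | background b) <= alpha (Type I)."""
        n = 0
        while poisson.sf(n - 1, b) > alpha:  # sf(n-1, mu) = P(N >= n)
            n += 1
        return n

    def upper_limit(b, alpha=0.05, beta=0.5, ds=0.01):
        """Smallest source intensity s detected at threshold n* with
        probability >= 1 - beta (i.e., Type II error <= beta)."""
        n_star = detection_threshold(b, alpha)
        s = 0.0
        while poisson.sf(n_star - 1, b + s) < 1 - beta:
            s += ds
        return n_star, s

    n_star, s_min = upper_limit(b=3.0)
    print(f"detection threshold: {n_star} counts, upper limit: {s_min:.2f}")
    ```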

  18. [CIRRNET® - learning from errors, a success story].

    PubMed

    Frank, O; Hochreutener, M; Wiederkehr, P; Staender, S

    2012-06-01

    CIRRNET® is the network of local error-reporting systems of the Swiss Patient Safety Foundation. The network has been running since 2006 together with the Swiss Society for Anaesthesiology and Resuscitation (SGAR), and network participants currently include 39 healthcare institutions from all four different language regions of Switzerland. Further institutions can join at any time. Local error reports in CIRRNET® are bundled at a supraregional level, categorised in accordance with the WHO classification, and analysed by medical experts. The CIRRNET® database offers a solid pool of data with error reports from a wide range of medical specialist's areas and provides the basis for identifying relevant problem areas in patient safety. These problem areas are then processed in cooperation with specialists with extremely varied areas of expertise, and recommendations for avoiding these errors are developed by changing care processes (Quick-Alerts®). Having been approved by medical associations and professional medical societies, Quick-Alerts® are widely supported and well accepted in professional circles. The CIRRNET® database also enables any affiliated CIRRNET® participant to access all error reports in the 'closed user area' of the CIRRNET® homepage and to use these error reports for in-house training. A healthcare institution does not have to make every mistake itself - it can learn from the errors of others, compare notes with other healthcare institutions, and use existing knowledge to advance its own patient safety.

  19. Error detection method

    DOEpatents

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).

  20. Software error detection

    NASA Technical Reports Server (NTRS)

    Buechler, W.; Tucker, A. G.

    1981-01-01

    Several methods were employed to detect both the occurrence and the source of errors in the operational software of the AN/SLQ-32, a large embedded real-time electronic warfare command and control system running on the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field size errors. Finally, data are saved to provide information about the status of the system when an error is detected. This information includes I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging problems, and segment simulation on a non-target computer are discussed. These error detection techniques were a major factor in finding the primary cause of error in 98% of over 500 system dumps.

  1. Medication errors: definitions and classification.

    PubMed

    Aronson, Jeffrey K

    2009-06-01

    1. To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. 2. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey-Lewis method (based on an understanding of theory and practice). 3. A medication error is 'a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient'. 4. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is 'a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient'. The converse of this, 'balanced prescribing' is 'the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm'. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. 5. A prescription error is 'a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription'. The 'normal features' include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. 6. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies.

  2. A survey of physicians' acceptance of telemedicine.

    PubMed

    Sheng, O R; Hu, P J; Chau, P Y; Hjelm, N M; Tam, K Y; Wei, C P; Tse, J

    1998-01-01

    Physicians' acceptance of telemedicine is an important managerial issue facing health-care organizations that have adopted, or are about to adopt, telemedicine. Most previous investigations of the acceptance of telemedicine have lacked theoretical foundation and been of limited scope. We examined technology acceptance and usage among physicians and specialists from 49 clinical departments at eight public tertiary hospitals in Hong Kong. Out of the 1021 questionnaires distributed, 310 were completed and returned, a 30% response rate. The preliminary findings suggested that use of telemedicine among clinicians in Hong Kong was moderate. While 18% of the respondents were using some form of telemedicine for patient care and management, it accounted for only 6.3% of the services provided. The intensity of their technology usage was also low, accounting for only 6.8% of a typical telemedicine-assisted service. These preliminary findings have managerial implications.

  3. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

    An error budget is a commonly used tool in design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high-performance space systems.

  4. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements, such as radio frequency interference (RFI), which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a desired bit error rate. The use of concatenated coding, e.g. an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
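
    To illustrate the 16-bit CRC error-detection idea mentioned at the end of the abstract, here is a generic bitwise CRC-16-CCITT sketch; the exact CCSDS polynomial and parameters are not taken from the record, so treat this as an assumption-laden illustration rather than the standard itself.

    ```python
    def crc16_ccitt(data: bytes, poly: int = 0x1021, crc: int = 0xFFFF) -> int:
        """Bitwise CRC-16 (CCITT polynomial): detects, among others, all
        burst errors up to 16 bits long."""
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                if crc & 0x8000:
                    crc = ((crc << 1) ^ poly) & 0xFFFF
                else:
                    crc = (crc << 1) & 0xFFFF
        return crc

    frame = b"telemetry frame payload"
    tag = crc16_ccitt(frame)
    assert crc16_ccitt(frame) == tag               # intact frame passes the check
    corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
    assert crc16_ccitt(corrupted) != tag           # single-bit error is detected
    print(f"CRC-16 = 0x{tag:04X}")
    ```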

  5. Sonic boom acceptability studies

    NASA Technical Reports Server (NTRS)

    Shepherd, Kevin P.; Sullivan, Brenda M.; Leatherwood, Jack D.; Mccurdy, David A.

    1992-01-01

    The determination of the magnitude of sonic boom exposure which would be acceptable to the general population requires, as a starting point, a method to assess and compare individual sonic booms. There is no consensus within the scientific and regulatory communities regarding an appropriate sonic boom assessment metric. Loudness, being a fundamental and well-understood attribute of human hearing was chosen as a means of comparing sonic booms of differing shapes and amplitudes. The figure illustrates the basic steps which yield a calculated value of loudness. Based upon the aircraft configuration and its operating conditions, the sonic boom pressure signature which reaches the ground is calculated. This pressure-time history is transformed to the frequency domain and converted into a one-third octave band spectrum. The essence of the loudness method is to account for the frequency response and integration characteristics of the auditory system. The result of the calculation procedure is a numerical description (perceived level, dB) which represents the loudness of the sonic boom waveform.
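
    As a rough illustration of the first transform steps described above (pressure signature to spectrum to one-third-octave bands), here is a hedged numpy sketch. The N-wave parameters, sampling rate, and the relative (uncalibrated) band levels are assumptions, and the final loudness-weighting step that yields perceived level in dB is not implemented.

    ```python
    import numpy as np

    def third_octave_spectrum(p, fs):
        """Relative one-third-octave band levels (dB re strongest band) of a
        pressure-time history; no auditory loudness weighting applied."""
        n = len(p)
        power = np.abs(np.fft.rfft(p * np.hanning(n))) ** 2
        freqs = np.fft.rfftfreq(n, 1.0 / fs)
        centers = 1000.0 * 2.0 ** (np.arange(-24, 4) / 3.0)  # ~3.9 Hz .. 2 kHz
        levels = []
        for fc in centers:
            band = power[(freqs >= fc / 2 ** (1 / 6)) & (freqs < fc * 2 ** (1 / 6))]
            levels.append(10 * np.log10(band.sum()) if band.size else -np.inf)
        levels = np.array(levels)
        return centers, levels - levels.max()

    # Idealized N-wave boom signature: 100 Pa peak, 300 ms duration (assumed)
    fs = 8000
    t = np.arange(0, 1.0, 1.0 / fs)
    onwave = (t > 0.2) & (t < 0.5)
    p = np.where(onwave, 100.0 * (1.0 - 2.0 * (t - 0.2) / 0.3), 0.0)

    fc, lv = third_octave_spectrum(p, fs)
    print(f"strongest one-third-octave band near {fc[np.argmax(lv)]:.1f} Hz")
    ```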

  6. Spaceborne scanner imaging system errors

    NASA Technical Reports Server (NTRS)

    Prakash, A.

    1982-01-01

    The individual sensor system design elements, which are the a priori components in the registration and rectification process, and the potential impact of error budgets on multitemporal registration and side-lap registration are analyzed. The properties of scanner, MLA, and SAR imaging systems are reviewed. Each sensor displays internal distortion properties which, to varying degrees, make it difficult to generate an orthophoto projection of the data acceptable for multiple-pass registration or meeting national map accuracy standards; each is also affected to varying degrees by relief displacement in moderate to hilly terrain. Non-sensor-related distortions, associated with the accuracy of ephemeris determination and platform stability, have a major impact on local geometric distortions. Platform stability improvements expected from the new multi-mission spacecraft series, and improved ephemeris and ground control point determination from the NAVSTAR global positioning satellite system, are reviewed.

  7. Twenty Questions about Student Errors.

    ERIC Educational Resources Information Center

    Fisher, Kathleen M.; Lipson, Joseph Isaac

    1986-01-01

    Discusses the value of studying errors made by students in the process of learning science. Addresses 20 research questions dealing with student learning errors. Attempts to characterize errors made by students and clarify some terms used in error research. (TW)

  8. Refractive error blindness.

    PubMed Central

    Dandona, R.; Dandona, L.

    2001-01-01

    Recent data suggest that a large number of people are blind in different parts of the world due to high refractive error because they are not using appropriate refractive correction. Refractive error as a cause of blindness has been recognized only recently with the increasing use of presenting visual acuity for defining blindness. In addition to blindness due to naturally occurring high refractive error, inadequate refractive correction of aphakia after cataract surgery is also a significant cause of blindness in developing countries. Blindness due to refractive error in any population suggests that eye care services in general in that population are inadequate since treatment of refractive error is perhaps the simplest and most effective form of eye care. Strategies such as vision screening programmes need to be implemented on a large scale to detect individuals suffering from refractive error blindness. Sufficient numbers of personnel to perform reasonable quality refraction need to be trained in developing countries. Also adequate infrastructure has to be developed in underserved areas of the world to facilitate the logistics of providing affordable reasonable-quality spectacles to individuals suffering from refractive error blindness. Long-term success in reducing refractive error blindness worldwide will require attention to these issues within the context of comprehensive approaches to reduce all causes of avoidable blindness. PMID:11285669

  9. Teacher-Induced Errors.

    ERIC Educational Resources Information Center

    Richmond, Kent C.

    Students of English as a second language (ESL) often come to the classroom with little or no experience in writing in any language and with inaccurate assumptions about writing. Rather than correct these assumptions, teachers often seem to unwittingly reinforce them, actually inducing errors into their students' work. Teacher-induced errors occur…

  10. Evaluating mixed samples as a source of error in non-invasive genetic studies using microsatellites

    USGS Publications Warehouse

    Roon, David A.; Thomas, M.E.; Kendall, K.C.; Waits, L.P.

    2005-01-01

    The use of noninvasive genetic sampling (NGS) for surveying wild populations is increasing rapidly. Currently, only a limited number of studies have evaluated potential biases associated with NGS. This paper evaluates the potential errors associated with analysing mixed samples drawn from multiple animals. Most NGS studies assume that mixed samples will be identified and removed during the genotyping process. We evaluated this assumption by creating 128 mixed samples of extracted DNA from brown bear (Ursus arctos) hair samples. These mixed samples were genotyped and screened for errors at six microsatellite loci according to protocols consistent with those used in other NGS studies. Five mixed samples produced acceptable genotypes after the first screening. However, all mixed samples produced multiple alleles at one or more loci, amplified as only one of the source samples, or yielded inconsistent electropherograms by the final stage of the error-checking process. These processes could potentially reduce the number of individuals observed in NGS studies, but errors should be conservative within demographic estimates. Researchers should be aware of the potential for mixed samples and carefully design gel analysis criteria and error checking protocols to detect mixed samples.

  11. Treatment Acceptability of Healthcare Services for Children with Cerebral Palsy

    ERIC Educational Resources Information Center

    Dahl, Norm; Tervo, Raymond; Symons, Frank J.

    2007-01-01

    Background: Although treatment acceptability scales in intellectual and developmental disabilities research have been used in large- and small-scale applications, large-scale application has been limited to analogue (i.e. contrived) investigations. This study extended the application of treatment acceptability by assessing a large sample of care…

  12. Increasing Our Acceptance as Parents of Children with Special Needs

    ERIC Educational Resources Information Center

    Loewenstein, David

    2007-01-01

    Accepting the limitations of a child whose life was supposed to be imbued with endless possibilities requires parents to come to terms with expectations of themselves and the world around them. In this article, the author offers some helpful strategies for fostering acceptance and strengthening family relationships: (1) Remember that parenting is…

  13. Cognitive illusions of authorship reveal hierarchical error detection in skilled typists.

    PubMed

    Logan, Gordon D; Crump, Matthew J C

    2010-10-29

    The ability to detect errors is an essential component of cognitive control. Studies of error detection in humans typically use simple tasks and propose single-process theories of detection. We examined error detection by skilled typists and found illusions of authorship that provide evidence for two error-detection processes. We corrected errors that typists made and inserted errors in correct responses. When asked to report errors, typists took credit for corrected errors and accepted blame for inserted errors, claiming authorship for the appearance of the screen. However, their typing rate showed no evidence of these illusions, slowing down after corrected errors but not after inserted errors. This dissociation suggests two error-detection processes: one sensitive to the appearance of the screen and the other sensitive to keystrokes.

  14. Understanding and Confronting Our Mistakes: The Epidemiology of Error in Radiology and Strategies for Error Reduction.

    PubMed

    Bruno, Michael A; Walker, Eric A; Abujudeh, Hani H

    2015-10-01

    Arriving at a medical diagnosis is a highly complex process that is extremely error prone. Missed or delayed diagnoses often lead to patient harm and missed opportunities for treatment. Since medical imaging is a major contributor to the overall diagnostic process, it is also a major potential source of diagnostic error. Although some diagnoses may be missed because of the technical or physical limitations of the imaging modality, including image resolution, intrinsic or extrinsic contrast, and signal-to-noise ratio, most missed radiologic diagnoses are attributable to image interpretation errors by radiologists. Radiologic interpretation cannot be mechanized or automated; it is a human enterprise based on complex psychophysiologic and cognitive processes and is itself subject to a wide variety of error types, including perceptual errors (those in which an important abnormality is simply not seen on the images) and cognitive errors (those in which the abnormality is visually detected but the meaning or importance of the finding is not correctly understood or appreciated). The overall prevalence of radiologists' errors in practice does not appear to have changed since it was first estimated in the 1960s. The authors review the epidemiology of errors in diagnostic radiology, including a recently proposed taxonomy of radiologists' errors, as well as research findings, in an attempt to elucidate possible underlying causes of these errors. The authors also propose strategies for error reduction in radiology. On the basis of current understanding, specific suggestions are offered as to how radiologists can improve their performance in practice.

  15. Identifying subset errors in multiple sequence alignments.

    PubMed

    Roy, Aparna; Taddese, Bruck; Vohra, Shabana; Thimmaraju, Phani K; Illingworth, Christopher J R; Simpson, Lisa M; Mukherjee, Keya; Reynolds, Christopher A; Chintapalli, Sree V

    2014-01-01

    Multiple sequence alignment (MSA) accuracy is important, but there is no widely accepted method of judging the accuracy that different alignment algorithms give. We present a simple approach to detecting two types of error, namely block shifts and the misplacement of residues within a gap. Given a MSA, subsets of very similar sequences are generated through the use of a redundancy filter, typically using a 70-90% sequence identity cut-off. Subsets thus produced are typically small and degenerate, and errors can be easily detected even by manual examination. The errors, albeit minor, are inevitably associated with gaps in the alignment, and so the procedure is particularly relevant to homology modelling of protein loop regions. The usefulness of the approach is illustrated in the context of the universal but little known [K/R]KLH motif that occurs in intracellular loop 1 of G protein coupled receptors (GPCR); other issues relevant to GPCR modelling are also discussed.

  16. Human Factors Process Task Analysis: Liquid Oxygen Pump Acceptance Test Procedure at the Advanced Technology Development Center

    NASA Technical Reports Server (NTRS)

    Diorio, Kimberly A.; Voska, Ned (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides information on Human Factors Process Failure Modes and Effects Analysis (HF PFMEA). HF PFMEA includes the following 10 steps: describe the mission; define the system; identify human-machine interfaces; list human actions; identify potential errors; identify factors that affect error; determine the likelihood of error; determine the potential effects of errors; evaluate risk; and generate solutions (manage error). The presentation also describes how this analysis was applied to a liquid oxygen pump acceptance test.

  17. Error Prevention Aid

    NASA Technical Reports Server (NTRS)

    1987-01-01

    In a complex computer environment there is ample opportunity for error, a mistake by a programmer, or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect the customer service or its profitability.

  18. Quantum error correction via less noisy qubits.

    PubMed

    Fujiwara, Yuichiro

    2013-04-26

    Known quantum error correction schemes are typically able to take advantage of only a limited class of classical error-correcting codes. Entanglement-assisted quantum error correction is a partial solution which made it possible to exploit any classical linear codes over the binary or quaternary finite field. However, the known entanglement-assisted scheme requires noiseless qubits that help correct quantum errors on noisy qubits, which can be too severe an assumption. We prove that a more relaxed and realistic assumption is sufficient by presenting encoding and decoding operations assisted by qubits on which quantum errors of one particular kind may occur. As in entanglement assistance, our scheme can import any binary or quaternary linear codes. If the auxiliary qubits are noiseless, our codes become entanglement-assisted codes, and saturate the quantum Singleton bound when the underlying classical codes are maximum distance separable.

  19. Increasing sensing resolution with error correction.

    PubMed

    Arrad, G; Vinkler, Y; Aharonov, D; Retzker, A

    2014-04-18

    The signal-to-noise ratio of quantum sensing protocols scales with the square root of the coherence time. Thus, increasing this time is a key goal in the field. By utilizing quantum error correction, we present a novel way of prolonging such coherence times beyond the fundamental limits of current techniques. We develop an implementable sensing protocol that incorporates error correction, and discuss the characteristics of these protocols in different noise and measurement scenarios. We examine the use of entangled versus unentangled states, and whether error correction can reach the Heisenberg limit. The effects of error correction on coherence times are calculated, and we show that measurement precision can be enhanced for both one-directional and general noise.

  20. Scanner qualification with IntenCD based reticle error correction

    NASA Astrophysics Data System (ADS)

    Elblinger, Yair; Finders, Jo; Demarteau, Marcel; Wismans, Onno; Minnaert Janssen, Ingrid; Duray, Frank; Ben Yishai, Michael; Mangan, Shmoolik; Cohen, Yaron; Parizat, Ziv; Attal, Shay; Polonsky, Netanel; Englard, Ilan

    2010-03-01

    Scanner introduction into the fab production environment is a challenging task. An efficient evaluation of scanner performance metrics during the factory acceptance test (FAT) and later during the site acceptance test (SAT) is crucial for minimizing the cycle time of pre- and post-production-start activities. If done effectively, the baseline performance metrics established during the SAT are used as a reference for scanner performance and fleet-matching monitoring and maintenance in the fab environment. Key elements which can influence the cycle time of the SAT, FAT and maintenance cycles are the imaging, process and mask characterizations involved in those cycles. Discrete mask measurement techniques are currently used to create across-mask CDU maps. By subtracting these maps from their final wafer-measurement CDU map counterparts, it is possible to assess the real scanner-induced printed errors, within certain limitations. The current discrete measurement methods are time consuming, and some techniques also overlook mask-based effects other than line width variations, such as transmission and phase variations, all of which influence the final printed CD variability. The Applied Materials Aera2™ mask inspection tool with IntenCD™ technology can scan the mask at high speed, offering full mask coverage and accurate assessment of all mask-induced sources of error simultaneously, making it beneficial for scanner qualification and performance monitoring. In this paper we report on a study that was done to improve a scanner introduction and qualification process using the IntenCD application to map the mask-induced CD non-uniformity. We present the results of six scanners in production and discuss the benefits of the new method.
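
    A synthetic numpy stand-in for the subtraction described above — removing the mask-induced CD contribution from the wafer CDU map to approximate the scanner-induced residual. The arrays and magnitudes are invented; this is not the Aera2/IntenCD data pipeline.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    shape = (11, 13)  # assumed across-mask sampling grid

    mask_cdu = 0.4 * rng.standard_normal(shape)                   # nm, mask-induced CD errors
    scanner_sig = 0.25 * np.sin(np.linspace(0, np.pi, shape[1]))  # nm, slit-dependent scanner signature
    wafer_cdu = mask_cdu + scanner_sig + 0.1 * rng.standard_normal(shape)  # measured on wafer

    scanner_est = wafer_cdu - mask_cdu  # subtract the mask map from the wafer map
    print(f"wafer CDU 3-sigma:        {3 * wafer_cdu.std():.2f} nm")
    print(f"scanner-only CDU 3-sigma: {3 * scanner_est.std():.2f} nm")
    ```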

  1. Cone penetrometer acceptance test report

    SciTech Connect

    Boechler, G.N.

    1996-09-19

    This Acceptance Test Report (ATR) documents the results of acceptance test procedure WHC-SD-WM-ATR-151. Included in this report are a summary of the tests, the results and issues, the signature and sign-off ATP pages, and a summary table mapping each specification to the ATP section that satisfied it.

  2. Estimating Bias Error Distributions

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Finley, Tom D.

    2001-01-01

    This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are virtually applicable to any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.
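
    The paper's scaling/translation functional-equation machinery is not reproduced here; the following only illustrates the basic setting — representing the bias error distribution as a low-order polynomial over the domain and estimating it from reference points. All numbers are invented, and the fit uses more standards than the paper's minimal two.

    ```python
    import numpy as np

    # Reference standards and the device's (less accurate) readings at those points
    standards = np.array([0.0, 2.5, 5.0, 7.5, 10.0])
    readings = np.array([0.12, 2.71, 5.33, 7.86, 10.52])

    # Represent the bias error distribution as a low-order polynomial in the domain
    bias_coeffs = np.polyfit(standards, readings - standards, deg=2)
    bias = np.poly1d(bias_coeffs)

    raw = 6.47                    # a new raw reading from the device
    corrected = raw - bias(raw)   # bias at the reading approximates bias at the
                                  # true value when the bias is small
    print(f"bias polynomial coefficients: {bias_coeffs}")
    print(f"raw {raw} -> corrected {corrected:.3f}")
    ```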

  3. The surveillance error grid.

    PubMed

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  4. Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Brun, Todd A.

    2013-09-01

    Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and

  5. Enteral feeding pumps: efficacy, safety, and patient acceptability

    PubMed Central

    White, Helen; King, Linsey

    2014-01-01

    Enteral feeding is a long established practice across pediatric and adult populations, to enhance nutritional intake and prevent malnutrition. Despite recognition of the importance of nutrition within the modern health agenda, evaluation of the efficacy of how such feeds are delivered is more limited. The accuracy, safety, and consistency with which enteral feed pump systems dispense nutritional formulae are important determinants of their use and acceptability. Enteral feed pump safety has received increased interest in recent years as enteral pumps are used across hospital and home settings. Four areas of enteral feed pump safety have emerged: the consistent and accurate delivery of formula; the minimization of errors associated with tube misconnection; the impact of continuous feed delivery itself (via an enteral feed pump); and the chemical composition of the casing used in enteral feed pump manufacture. The daily use of pumps in delivery of enteral feeds in a home setting predominantly falls to the hands of parents and caregivers. Their understanding of the use and function of their pump is necessary to ensure appropriate, safe, and accurate delivery of enteral nutrition; their experience with this is important in informing clinicians and manufacturers of the emerging needs and requirements of this diverse patient population. The review highlights current practice and areas of concern and establishes our current knowledge in this field. PMID:25170284

  7. Phasing piston error in segmented telescopes.

    PubMed

    Jiang, Junlun; Zhao, Weirui

    2016-08-22

    To achieve diffraction-limited imaging, the piston errors between the segments of a segmented-primary-mirror telescope must be reduced to λ/40 RMS. We propose a method to detect the piston error by analyzing the intensity distribution on the image plane according to the Fourier optics principle; the method can capture piston errors as large as the coherence length of the input light and reduce them to 0.026λ RMS (λ = 633 nm). This method is adaptable to any segmented and deployable primary mirror telescope. Experiments have been carried out to validate its feasibility.

  8. Error reduction in EMG signal decomposition.

    PubMed

    Kline, Joshua C; De Luca, Carlo J

    2014-12-01

    Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061 motor-unit action-potential trains (MUAPTs), obtained by decomposing surface EMG (sEMG) signals recorded during human voluntary contractions. Decomposition errors were classified into two general categories: location errors representing variability in the temporal localization of each motor-unit firing instance and identification errors consisting of falsely detected or missed firing instances. To mitigate these errors, we developed an error-reduction algorithm that combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances with fewer errors. The performance of the algorithm is governed by a trade-off between the yield of MUAPTs obtained above a given accuracy level and the time required to perform the decomposition. When applied to a set of sEMG signals synthesized from real MUAPTs, the identification error was reduced by an average of 1.78%, improving the accuracy to 97.0%, and the location error was reduced by an average of 1.66 ms. The error-reduction algorithm in this study is not limited to any specific decomposition strategy. Rather, we propose it be used for other decomposition methods, especially when analyzing precise motor-unit firing instances, as occurs when measuring synchronization.
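
    A minimal sketch of the consensus idea behind such an error-reduction step, assuming several independent decomposition passes over the same sEMG record (the tolerance, majority rule, and all values here are illustrative assumptions, not the authors' algorithm):

      # Keep a firing instance only if a majority of decomposition estimates
      # report a firing within +/- tol seconds of it.
      import numpy as np

      def consensus_firings(estimates, tol=0.002):
          """estimates: list of 1-D arrays of firing times (s) for one motor unit."""
          pooled = np.sort(np.concatenate(estimates))
          kept, i = [], 0
          while i < len(pooled):
              j = i                      # group firings within tol of the first
              while j + 1 < len(pooled) and pooled[j + 1] - pooled[i] <= tol:
                  j += 1
              group = pooled[i:j + 1]
              if len(group) > len(estimates) // 2:    # majority vote
                  kept.append(group.mean())           # consensus firing time
              i = j + 1
          return np.array(kept)

      est1 = np.array([0.100, 0.202, 0.305])
      est2 = np.array([0.101, 0.203, 0.290, 0.306])
      est3 = np.array([0.099, 0.201, 0.304])
      print(consensus_firings([est1, est2, est3]))    # ~[0.100, 0.202, 0.305]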

  9. The Michelson Stellar Interferometer Error Budget for Triple Triple-Satellite Configuration

    NASA Technical Reports Server (NTRS)

    Marathay, Arvind S.; Shiefman, Joe

    1996-01-01

    This report presents the results of a study of the instrumentation tolerances for a conventional-style Michelson stellar interferometer (MSI). The method used to determine the tolerances was to determine the change, due to the instrument errors, in the measured fringe visibility and phase relative to the ideal values. The ideal values are those values of fringe visibility and phase that would be measured by a perfect MSI and are attributable solely to the object being detected. Once the functional relationship for changes in visibility and phase as a function of various instrument errors is understood, it is possible to set limits on the instrument errors in order to ensure that the measured visibility and phase differ from the ideal values by no more than some specified amount. This was done as part of this study. The limits we obtained are based on a visibility error of no more than 1% and a phase error of no more than 0.063 radians (1% of 2π radians). The choice of these 1% limits is supported in the literature. The approach employed in the study involved the use of ASAP (Advanced System Analysis Program) software provided by Breault Research Organization, Inc., in conjunction with parallel analytical calculations. The interferometer accepts object radiation into two separate arms, each consisting of an outer mirror, an inner mirror, a delay line (made up of two movable mirrors and two static mirrors), and a 10:1 afocal reduction telescope. The radiation coming out of both arms is incident on a slit plane, which is opaque with two openings (slits). One of the two slits is centered directly under one of the two arms of the interferometer, and the other slit is centered directly under the other arm. The slit plane is followed immediately by an ideal combining lens which images the radiation in the fringe plane (also referred to subsequently as the detector plane).
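
    As a consistency check, the phase limit follows directly from the visibility criterion quoted above: one percent of a full fringe cycle is 0.01 × 2π ≈ 0.0628 rad, which rounds to the 0.063 rad phase-error limit used in the study.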

  10. Variation transmission model for setting acceptance criteria in a multi-staged pharmaceutical manufacturing process.

    PubMed

    Montes, Richard O

    2012-03-01

    Pharmaceutical manufacturing processes consist of a series of stages (e.g., reaction, workup, isolation) that generate the active pharmaceutical ingredient (API). Outputs at intermediate stages (in-process controls) and the API itself need to be controlled within acceptance criteria to assure final drug product quality. In this paper, two methods based on tolerance intervals for deriving such acceptance criteria are evaluated. The first method is serial worst case (SWC), an industry risk-minimization strategy, wherein input materials and process parameters of a stage are fixed at their worst-case settings to calculate the maximum level expected from the stage. This maximum output then becomes the input to the next stage, wherein process parameters are again fixed at their worst-case settings. The procedure is repeated serially through the final stage. The limits calculated using SWC can be artificially high and may not reflect actual process performance. The second method is variation transmission (VT) using an autoregressive model, wherein the variation transmitted up to a stage is estimated by accounting for the recursive structure of the errors at each stage. Computer simulations at varying extents of variation transmission and process-stage variability are performed. For the scenarios tested, the VT method is demonstrated to better maintain the simulated confidence level and to estimate the true proportion parameter more precisely than SWC. Real data examples are also presented that corroborate the findings from the simulation. Overall, VT is recommended for setting acceptance criteria in a multi-staged pharmaceutical manufacturing process.
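
    A hedged simulation sketch of the autoregressive variation-transmission idea (the stage model, noise levels, and coverage factor below are assumptions for illustration, not the paper's code):

      # Each stage passes a fraction rho of the previous stage's variation
      # forward and adds its own noise; limits are set from the simulated
      # final-stage distribution rather than from serial worst cases.
      import numpy as np

      rng = np.random.default_rng(0)
      n_lots, rho = 10_000, 0.6
      stage_sd = [1.0, 0.8, 0.5]                      # per-stage noise SDs

      x = rng.normal(0.0, stage_sd[0], n_lots)        # stage 1 output
      for sd in stage_sd[1:]:
          x = rho * x + rng.normal(0.0, sd, n_lots)   # transmitted variation

      # Simple +/- 2.576-sigma limits covering ~99% of lots; a real acceptance
      # criterion would use a tolerance-interval multiplier with a stated
      # confidence level instead of a plain normal quantile.
      mean, sd = x.mean(), x.std(ddof=1)
      print(f"final-stage limits: {mean - 2.576 * sd:.2f} to {mean + 2.576 * sd:.2f}")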

  11. Technical note: The effect of midshaft location on the error ranges of femoral and tibial cross-sectional parameters.

    PubMed

    Sládek, Vladimír; Berner, Margit; Galeta, Patrik; Friedl, Lukáš; Kudrnová, Šárka

    2010-02-01

    In comparing long-bone cross-sectional geometric properties between individuals, percentages of bone length are often used to identify equivalent locations along the diaphysis. In fragmentary specimens where bone lengths cannot be measured, however, these locations must be estimated more indirectly. In this study, we examine the effect of inaccurately located femoral and tibial midshafts on the estimation of geometric properties. The error ranges were compared on 30 femora and tibiae from the Eneolithic and Bronze Age. Cross-sections were obtained at each 1% interval from 60 to 40% of length using CT scans. A deviation of five percent from midshaft properties was used as the maximum acceptable error. Reliability was expressed by mean percentage differences, standard deviation of percentage differences, mean percentage absolute differences, limits of agreement, and mean accuracy range (MAR), the range within which mean deviation from true midshaft values was less than 5%. On average, tibial cortical area and femoral second moments of area are the least sensitive to positioning error, with mean accuracy ranges wide enough for practical application in fragmentary specimens (MAR = 40-130 mm). In contrast, tibial second moments of area are the most sensitive to error in midshaft location (MAR = 14-20 mm). Individuals present significant variation in morphology and thus in error ranges for different properties. For highly damaged fossil femora and tibiae, we recommend carrying out additional tests to better establish the specific errors associated with uncertain length estimates.

  12. Error monitoring in musicians

    PubMed Central

    Maidhof, Clemens

    2013-01-01

    To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e., the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. Electroencephalography (EEG) studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e., attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions in the processing of auditory information. Furthermore, recent methodological advances, such as the combination of 3D motion capture techniques with EEG, will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types, such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domains during error monitoring. Finally, outstanding questions and future directions in this context will be discussed. PMID:23898255

  13. Errata: Papers in Error Analysis.

    ERIC Educational Resources Information Center

    Svartvik, Jan, Ed.

    Papers presented at the symposium of error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…

  14. Extending the Technology Acceptance Model: Policy Acceptance Model (PAM)

    NASA Astrophysics Data System (ADS)

    Pierce, Tamra

    There has been extensive research on how new ideas and technologies are accepted in society, resulting in many models used to discover and assess the contributing factors. The Technology Acceptance Model (TAM) is one widely accepted model. It examines people's acceptance of new technologies based on variables that directly correlate with how the end user views the product. This paper introduces the Policy Acceptance Model (PAM), an expansion of TAM designed for the analysis and evaluation of acceptance of new policy implementation. PAM includes the traditional constructs of TAM and adds the variables of age, ethnicity, and family. The model is demonstrated using a survey of 72 respondents' attitudes toward the upcoming healthcare reform in the United States (US). The aim is for the theory behind this model to serve as a framework applicable to studies of the introduction of any new or modified policy.

  15. The influence of the IMRT QA set-up error on the 2D and 3D gamma evaluation method as obtained by using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Kim, Kyeong-Hyeon; Kim, Dong-Su; Kim, Tae-Ho; Kang, Seong-Hee; Cho, Min-Seok; Suh, Tae Suk

    2015-11-01

    The phantom-alignment error is one of the factors affecting delivery quality assurance (QA) accuracy in intensity-modulated radiation therapy (IMRT). Accordingly, spatial information may be used inadequately in gamma evaluation for patient-specific IMRT QA. The influence of the phantom-alignment error on gamma evaluation can be demonstrated experimentally through the gamma passing rate and the gamma value. However, such experimental methods cannot intrinsically verify the influence of the phantom set-up error, because measuring the phantom-alignment error with sufficient accuracy is impossible. To overcome this limitation, we aimed to verify the effect of the phantom set-up error within the gamma evaluation formula by using a Monte Carlo simulation. Artificial phantom set-up errors were simulated, and the concept of the true point (TP) was used to represent the actual coordinates of the measurement point for the mathematical modeling of these effects on the gamma. Using dose distributions acquired from the Monte Carlo simulation, we performed gamma evaluations in 2D and 3D. The results of the gamma evaluations and the dose difference at the TP were classified to verify the degree to which the dose at the TP was reflected. The 2D and 3D gamma errors were defined by comparing gamma values between the case of the imposed phantom set-up error and the TP, in order to investigate the effect of the set-up error on the gamma value. According to the results for gamma errors, the 3D gamma evaluation reflected the dose at the TP better than the 2D one. Moreover, the gamma passing rates were higher for 3D than for 2D, as is widely known. Thus, the 3D gamma evaluation can increase the precision of patient-specific IMRT QA by applying stringent acceptance criteria and setting a reasonable action level for the 3D gamma passing rate.
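
    The gamma evaluation itself can be illustrated in one dimension using the standard Low et al. formulation (the profiles and the distance-to-agreement and dose-difference criteria below are invented; this is not the authors' Monte Carlo code):

      # Per-point gamma: minimum over the evaluated profile of the combined,
      # normalized distance and dose-difference metric.
      import numpy as np

      def gamma_1d(x_ref, d_ref, x_eval, d_eval, dta=3.0, dd=0.03):
          gammas = []
          for xr, dr in zip(x_ref, d_ref):
              g = np.sqrt(((x_eval - xr) / dta) ** 2 + ((d_eval - dr) / dd) ** 2)
              gammas.append(g.min())
          return np.array(gammas)

      x = np.linspace(0.0, 100.0, 201)                # positions in mm
      ref = np.exp(-(((x - 50.0) / 20.0) ** 2))       # reference dose profile
      ev = np.exp(-(((x - 51.0) / 20.0) ** 2))        # evaluated, 1 mm shift
      g = gamma_1d(x, ref, x, ev)
      print(f"gamma passing rate: {100.0 * (g <= 1.0).mean():.1f}%")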

  16. Staff Acceptance of Tele-ICU Coverage

    PubMed Central

    Chan, Paul S.; Cram, Peter

    2011-01-01

    Background: Remote coverage of ICUs is increasing, but staff acceptance of this new technology is incompletely characterized. We conducted a systematic review to summarize existing research on acceptance of tele-ICU coverage among ICU staff. Methods: We searched for published articles pertaining to critical care telemedicine systems (aka, tele-ICU) between January 1950 and March 2010 using PubMed, Cumulative Index to Nursing and Allied Health Literature, Global Health, Web of Science, and the Cochrane Library and abstracts and presentations delivered at national conferences. Studies were included if they provided original qualitative or quantitative data on staff perceptions of tele-ICU coverage. Studies were imported into content analysis software and coded by tele-ICU configuration, methodology, participants, and findings (eg, positive and negative staff evaluations). Results: Review of 3,086 citations yielded 23 eligible studies. Findings were grouped into four categories of staff evaluation: overall acceptance level of tele-ICU coverage (measured in 70% of studies), impact on patient care (measured in 96%), impact on staff (measured in 100%), and organizational impact (measured in 48%). Overall acceptance was high, despite initial ambivalence. Favorable impact on patient care was perceived by > 82% of participants. Staff impact referenced enhanced collaboration, autonomy, and training, although scrutiny, malfunctions, and contradictory advice were cited as potential barriers. Staff perceived the organizational impact to vary. An important limitation of available studies was a lack of rigorous methodology and validated survey instruments in many studies. Conclusions: Initial reports suggest high levels of staff acceptance of tele-ICU coverage, but more rigorous methodologic study is required. PMID:21051386

  17. Market Acceptance of Smart Growth

    EPA Pesticide Factsheets

    This report finds that smart growth developments enjoy market acceptance because of stability in prices over time. Housing resales in smart growth developments often have greater appreciation than their conventional suburban counterparts.

  18. L-286 Acceptance Test Record

    SciTech Connect

    HARMON, B.C.

    2000-01-14

    This document provides a detailed account of how the acceptance testing was conducted for Project L-286, ''200E Area Sanitary Water Plant Effluent Stream Reduction''. The testing of the L-286 instrumentation system was conducted under the direct supervision

  19. NLO error propagation exercise: statistical results

    SciTech Connect

    Pack, D.J.; Downing, D.J.

    1985-09-01

    Error propagation is the extrapolation and cumulation of uncertainty (variance) about total amounts of special nuclear material, for example, uranium or ²³⁵U, that are present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, ²³⁵U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio, from April 1 to July 1, 1983, in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor series expansion; variance cumulation by uncorrelated primary error sources as suggested by Jaech; random-effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and ²³⁵U inventory differences. Further, error propagation provides guidance for reducing inventory differences and LEIDs in future time periods.
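
    For a simple product of independent measurements, the Taylor-series step reduces to adding relative variances; a sketch with invented values (not the exercise's data):

      # 235U mass = weight * uranium concentration * 235U enrichment.
      # For a product of independent factors, relative variances add to
      # first order, which is the Taylor-series approximation named above.
      import numpy as np

      w, c, e = 1000.0, 0.85, 0.93      # weight (kg), U fraction, 235U fraction
      sw, sc, se = 2.0, 0.004, 0.002    # one-sigma measurement uncertainties

      m = w * c * e
      rel_var = (sw / w) ** 2 + (sc / c) ** 2 + (se / e) ** 2
      sigma_m = m * np.sqrt(rel_var)
      print(f"m = {m:.1f} kg, sigma = {sigma_m:.2f} kg")   # feeds into an LEID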

  20. Error Sensitivity Model.

    DTIC Science & Technology

    1980-04-01

    Philosophy: The Positioning/Error Model has been defined in three distinct phases: I - Error Sensitivity Model; II - Operational Positioning Model; III…

  1. Error Free Software

    NASA Technical Reports Server (NTRS)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  2. Prevention of medication errors: detection and audit.

    PubMed

    Montesi, Germana; Lechi, Alessandro

    2009-06-01

    1. Medication errors have important implications for patient safety, and their identification is a main target in improving clinical practice, in order to prevent adverse events. 2. Error detection is the first crucial step. Approaches to this are likely to differ between research and routine care, and the most suitable must be chosen according to the setting. 3. The major methods for detecting medication errors and associated adverse drug-related events are chart review, computerized monitoring, administrative databases, and claims data, together with direct observation, incident reporting, and patient monitoring. All of these methods have both advantages and limitations. 4. Reporting discloses medication errors, can trigger warnings, and encourages the diffusion of a culture of safe practice. Combining and comparing data from various sources increases the reliability of the system. 5. Error prevention can be planned by means of retroactive and proactive tools, such as audit and Failure Mode, Effect, and Criticality Analysis (FMECA). Audit is also an educational activity, which promotes high-quality care; it should be carried out regularly. In an audit cycle we can compare what is actually done against reference standards and put in place corrective actions to improve the performance of individuals and systems. 6. Patient safety must be the first aim in every setting, in order to build safer systems, learning from errors and reducing the human and fiscal costs.

  3. A statistical model for point-based target registration error with anisotropic fiducial localizer error.

    PubMed

    Wiles, Andrew D; Likholyot, Alexander; Frantz, Donald D; Peters, Terry M

    2008-03-01

    Error models associated with point-based medical image registration problems were first introduced in the late 1990s. The concepts of fiducial localizer error, fiducial registration error, and target registration error are commonly used in the literature. The model for estimating the target registration error at a position r in a coordinate frame defined by a set of fiducial markers rigidly fixed relative to one another is ubiquitous in the medical imaging literature. The model has also been extended to simulate the target registration error at the point of interest in optically tracked tools. However, the model is limited to describing the error in situations where the fiducial localizer error is assumed to have an isotropic normal distribution in ℝ³. In this work, the model is generalized to include a fiducial localizer error that has an anisotropic normal distribution. Similar to the previous models, the root mean square statistic rms(TRE) is provided, along with an extension that provides the covariance Σ(TRE). The new model is verified using a Monte Carlo simulation and a set of statistical hypothesis tests. Finally, the differences between the two assumptions, isotropic and anisotropic, are discussed within the context of their use in (1) optical tool tracking simulation and (2) image registration.
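
    The Monte Carlo verification can be sketched as follows (the fiducial layout, target position, and noise levels are invented, and the rigid registration uses the standard SVD-based least-squares solution rather than the authors' code):

      import numpy as np

      rng = np.random.default_rng(1)
      fids = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 100]], float)
      target = np.array([50.0, 50.0, 50.0])
      fle_sd = np.array([0.1, 0.1, 0.4])   # anisotropic FLE: worse along z

      def register(a, b):                  # least-squares rigid fit a -> b
          ca, cb = a.mean(0), b.mean(0)
          u, _, vt = np.linalg.svd((a - ca).T @ (b - cb))
          d = np.sign(np.linalg.det(vt.T @ u.T))
          r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
          return r, cb - r @ ca

      tre_sq = []
      for _ in range(5000):
          noisy = fids + rng.normal(0.0, fle_sd, fids.shape)
          r, t = register(fids, noisy)
          tre_sq.append(np.sum((r @ target + t - target) ** 2))
      print(f"rms(TRE): {np.sqrt(np.mean(tre_sq)):.3f} mm")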

  4. Orwell's Instructive Errors

    ERIC Educational Resources Information Center

    Julian, Liam

    2009-01-01

    In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…

  5. (Errors in statistical tests)3.

    PubMed

    Phillips, Carl V; MacLehose, Richard F; Kaufman, Jay S

    2008-07-14

    In 2004, Garcia-Berthou and Alcaraz published "Incongruence between test statistics and P values in medical papers," a critique of statistical errors that received a tremendous amount of attention. One of their observations was that the final reported digit of p-values in articles published in the journal Nature departed substantially from the uniform distribution that they suggested should be expected. In 2006, Jeng critiqued that critique, observing that the statistical analysis of those terminal digits had been based on comparing the actual distribution to a uniform continuous distribution, when digits obviously are discretely distributed. Jeng corrected the calculation and reported statistics that did not so clearly support the claim of a digit preference. However delightful it may be to read a critique of statistical errors in a critique of statistical errors, we nevertheless found several aspects of the whole exchange to be quite troubling, prompting our own meta-critique of the analysis. The previous discussion emphasized statistical significance testing. But there are various reasons to expect departure from the uniform distribution in terminal digits of p-values, so that simply rejecting the null hypothesis is not terribly informative. Much more importantly, Jeng found that the original p-value of 0.043 should have been 0.086, and suggested this represented an important difference because it was on the other side of 0.05. Among the most widely reiterated (though often ignored) tenets of modern quantitative research methods is that we should not treat statistical significance as a bright line test of whether we have observed a phenomenon. Moreover, it sends the wrong message about the role of statistics to suggest that a result should be dismissed because of limited statistical precision when it is so easy to gather more data. In response to these limitations, we gathered more data to improve the statistical precision, and analyzed the actual pattern of the
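
    The terminal-digit analysis is, at bottom, a goodness-of-fit test of reported last digits against a discrete uniform distribution; a small sketch with made-up p-values (not the study's data):

      # Tally the final reported digit of each p-value and test uniformity.
      from collections import Counter
      from scipy.stats import chisquare

      p_values = ["0.043", "0.021", "0.007", "0.050", "0.013",
                  "0.049", "0.032", "0.008", "0.041", "0.025"]
      digits = [int(p[-1]) for p in p_values]
      counts = [Counter(digits).get(d, 0) for d in range(10)]
      stat, p = chisquare(counts)          # H0: all ten digits equally likely
      print(f"chi2 = {stat:.2f}, p = {p:.3f}")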

  6. Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments

    SciTech Connect

    Pevey, Ronald E.

    2005-09-15

    Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes--the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.
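
    A toy version of the trade-off described above, under strong simplifying assumptions (normally distributed errors; the bias, USL, and sigma values are invented, and these are not the paper's expressions):

      # Probability that a configuration with true k_eff = 1.0 passes the
      # acceptance test k_calc + n*sigma <= USL, as sigma varies.
      import numpy as np
      from scipy.stats import norm

      usl, bias, bias_sd, n, true_k = 0.95, 0.0, 0.005, 2.0, 1.0
      for sigma in [0.0005, 0.001, 0.002, 0.005, 0.01]:
          total_sd = np.sqrt(sigma ** 2 + bias_sd ** 2)
          p_accept = norm.cdf((usl - n * sigma - (true_k - bias)) / total_sd)
          print(f"sigma = {sigma:.4f}: P(false acceptance) = {p_accept:.2e}")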

  7. Medical error and human factors engineering: where are we now?

    PubMed

    Gawron, Valerie J; Drury, Colin G; Fairbanks, Rollin J; Berger, Roseanne C

    2006-01-01

    The goal of human factors engineering is to optimize the relationship between humans and systems by studying human behavior, abilities, and limitations and using this knowledge to design systems for safe and effective human use. With the assumption that the human component of any system will inevitably produce errors, human factors engineers design systems and human/machine interfaces that are robust enough to reduce error rates and the effect of the inevitable error within the system. In this article, we review the extent and nature of medical error and then discuss human factors engineering tools that have potential applicability. These tools include taxonomies of human and system error and error data collection and analysis methods. Finally, we describe studies that have examined medical error, and on the basis of these studies, present conclusions about how human factors engineering can significantly reduce medical errors and their effects.

  8. Report of the Subpanel on Error Characterization and Error Budgets

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The state of knowledge of both user positioning requirements and error models of current and proposed satellite systems is reviewed. In particular the error analysis models for LANDSAT D are described. Recommendations are given concerning the geometric error model for the thematic mapper; interactive user involvement in system error budgeting and modeling and verification on real data sets; and the identification of a strawman mission for modeling key error sources.

  9. The physiological basis for spacecraft environmental limits

    NASA Technical Reports Server (NTRS)

    Waligora, J. M. (Compiler)

    1979-01-01

    Limits for operational environments are discussed in terms of acceptable physiological changes. The environmental factors considered are pressure, contaminants, temperature, acceleration, noise, rf radiation, and weightlessness.

  10. A review of major factors contributing to errors in human hair association by microscopy.

    PubMed

    Smith, S L; Linch, C A

    1999-09-01

    Forensic hair examiners using traditional microscopic comparison techniques cannot state with certainty, except in extremely rare cases, that a found hair originated from a particular individual. They also cannot provide a statistical likelihood that a hair came from a certain individual and not another. There are no data available regarding the frequency of a specific microscopic hair characteristic (i.e., microtype) or trait in a particular population. Microtype is a term we use to describe certain internal characteristics and features expressed when observing hairs with unpolarized transmitted light. Courts seem to be sympathetic to lawyers' concerns that there are no accepted probability standards for human hair identification. Under Daubert, microscopic hair analysis testimony (or other scientific testimony) is allowed if the technique can be shown to have testability, peer review, general acceptance, and a known error rate. As with other forensic disciplines, laboratory error rate determination for a specific hair comparison case is not possible. Polymerase chain reaction (PCR)-based typing of hair roots offers hair examiners an opportunity to begin cataloging data with regard to microscopic hair association error rates. This is certainly a realistic manner in which to ascertain which hair microtypes and case circumstances repeatedly cause difficulty in association. Two cases are presented in which PCR typing revealed an incorrect inclusion in one and an incorrect exclusion in the other. This paper does not suggest that such limited observations define a rate of occurrence. These cases illustrate evidentiary conditions or case circumstances that may potentially contribute to microscopic hair association errors. Issues discussed in this review paper address the potential questions an expert witness may expect in a Daubert hair analysis admissibility hearing.

  11. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
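
    The core idea fits in a few lines; the article itself works with INTLAB under MATLAB, so the Python fragment below is only an illustration of the principle:

      # Intervals propagate worst-case bounds through a formula automatically.
      class Interval:
          def __init__(self, lo, hi):
              self.lo, self.hi = lo, hi
          def __add__(self, o):
              return Interval(self.lo + o.lo, self.hi + o.hi)
          def __mul__(self, o):
              p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
              return Interval(min(p), max(p))
          def __repr__(self):
              return f"[{self.lo:.6g}, {self.hi:.6g}]"

      # Measurements 10.0 +/- 0.1 and 2.0 +/- 0.05 pushed through x*y + x:
      x, y = Interval(9.9, 10.1), Interval(1.95, 2.05)
      print(x * y + x)   # encloses every value the expression can take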

  12. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  13. Imagery of Errors in Typing

    ERIC Educational Resources Information Center

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  14. Speech Errors across the Lifespan

    ERIC Educational Resources Information Center

    Vousden, Janet I.; Maylor, Elizabeth A.

    2006-01-01

    Dell, Burger, and Svec (1997) proposed that the proportion of speech errors classified as anticipations (e.g., "moot and mouth") can be predicted solely from the overall error rate, such that the greater the error rate, the lower the anticipatory proportion (AP) of errors. We report a study examining whether this effect applies to changes in error…

  15. Type I Error Control for Tree Classification

    PubMed Central

    Jung, Sin-Ho; Chen, Yong; Ahn, Hongshik

    2014-01-01

    Binary tree classification is useful for classifying a whole population based on the levels of an outcome variable associated with chosen predictors. Often we start a classification with a large number of candidate predictors, and each predictor takes a number of different cutoff values. Because of these types of multiplicity, the binary tree classification method is subject to a severe type I error probability. Nonetheless, there have not been many publications addressing this issue. In this paper, we propose a binary tree classification method that controls the probability of accepting a predictor at below a certain level, say 5%. PMID:25452689

  16. Roundoff error effects on spatial lattice algorithm

    NASA Technical Reports Server (NTRS)

    An, S. H.; Yao, K.

    1986-01-01

    The floating-point roundoff error effect under finite word length limitations is analyzed for the time updates of reflection coefficients in the spatial lattice algorithm. It is shown that recursive computation is superior to direct computation under finite word length limitations. Moreover, the forgetting factor, which is conventionally used to smooth the time variations of the inputs, is also a crucial parameter in the consideration of the system stability and adaptability under finite word length constraints.

  17. From requirements to acceptance tests

    NASA Technical Reports Server (NTRS)

    Baize, Lionel; Pasquier, Helene

    1993-01-01

    From user requirements definition to the accepted software system, software project management wants to be sure that the system will meet the requirements. For the development of a telecommunications satellite Control Centre, C.N.E.S. has used new rules to make the use of tracing matrices easier. From requirements to acceptance tests, each item of a document must have an identifier. A unique matrix traces the system and allows tracking of the consequences of a change in the requirements. A tool has been developed to import documents into a relational database. Each record of the database corresponds to an item of a document; the access key is the item identifier. The tracing matrix is also processed, automatically providing links between the different documents and enabling traced items to be read on the same screen. For example, one can read simultaneously the User Requirements items, the corresponding Software Requirements items, and the Acceptance Tests.

  18. The Relative Frequency of Spanish Pronunciation Errors.

    ERIC Educational Resources Information Center

    Hammerly, Hector

    Types of hierarchies of pronunciation difficulty are discussed, and a hierarchy based on contrastive analysis plus informal observation is proposed. This hierarchy is less one of initial difficulty than of error persistence. One feature of this hierarchy is that, because of lesser learner awareness and very limited functional load, errors…

  19. To accept, or not to accept, that is the question: citizen reactions to rationing

    PubMed Central

    Broqvist, Mari; Garpenby, Peter

    2014-01-01

    Abstract Background  The publicly financed health service in Sweden has come under increasing pressure, forcing policy makers to consider restrictions. Objective  To describe different perceptions of rationing, in particular, what citizens themselves believe influences their acceptance of having to stand aside for others in a public health service. Design  Qualitative interviews, analysed by phenomenography, describing perceptions by different categories. Setting and participants  Purposeful sample of 14 Swedish citizens, based on demographic criteria and attitudes towards allocation in health care. Results  Participants expressed high awareness of limitations in public resources and the necessity of rationing. Acceptance of rationing could increase or decrease, depending on one’s (i) awareness that healthcare resources are limited, (ii) endorsement of universal health care, (iii) knowledge and acceptance of the principles guiding rationing and (iv) knowledge about alternatives to public health services. Conclusions  This study suggests that decision makers should be more explicit in describing the dilemma of resource limitations in a publicly funded healthcare system. Openness enables citizens to gain the insight to make informed decisions, i.e. to use public services or to ‘opt out’ of the public sector solution if they consider rationing decisions unacceptable. PMID:22032636

  20. Hyponatremia: management errors.

    PubMed

    Seo, Jang Won; Park, Tae Jin

    2006-11-01

    Rapid correction of hyponatremia is frequently associated with increased morbidity and mortality. Therefore, it is important to estimate the proper volume and type of infusate required to increase the serum sodium concentration predictably. The major common management errors during the treatment of hyponatremia are inadequate investigation, treatment with fluid restriction for diuretic-induced hyponatremia and treatment with fluid restriction plus intravenous isotonic saline simultaneously. We present two cases of management errors. One is about the problem of rapid correction of hyponatremia in a patient with sepsis and acute renal failure during continuous renal replacement therapy in the intensive care unit. The other is the case of hypothyroidism in which hyponatremia was aggravated by intravenous infusion of dextrose water and isotonic saline infusion was erroneously used to increase serum sodium concentration.

  1. Error-Free Software

    NASA Technical Reports Server (NTRS)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  2. Modular error embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
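
    A rough sketch of the general idea (an illustration only, not the patented method): each host value is nudged to the nearest value whose residue modulo 2^k equals the k auxiliary bits, which on average perturbs the host less than outright replacement of the low-order bits:

      import numpy as np

      def embed(host, symbols, k=2):
          m = 2 ** k
          out = host.copy()
          for i, s in enumerate(symbols):
              delta = (s - host[i]) % m    # residue adjustment, 0 .. m-1
              if delta > m // 2:
                  delta -= m               # step down when that is closer
              out[i] = host[i] + delta
          return out

      host = np.array([118, 54, 201, 90], dtype=np.int64)
      aux = [3, 0, 2, 1]                   # 2-bit symbols to hide
      stego = embed(host, aux)
      print(stego, [int(v) % 4 for v in stego])   # residues recover the symbols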

  3. Error-correction coding

    NASA Technical Reports Server (NTRS)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  4. Surface temperature measurement errors

    SciTech Connect

    Keltner, N.R.; Beck, J.V.

    1983-05-01

    Mathematical models are developed for the response of surface mounted thermocouples on a thick wall. These models account for the significant causes of errors in both the transient and steady-state response to changes in the wall temperature. In many cases, closed form analytical expressions are given for the response. The cases for which analytical expressions are not obtained can be easily evaluated on a programmable calculator or a small computer.

  5. Bayesian Error Estimation Functionals

    NASA Astrophysics Data System (ADS)

    Jacobsen, Karsten W.

    The challenge of approximating the exchange-correlation functional in Density Functional Theory (DFT) has led to the development of numerous approximations of varying accuracy on different calculated properties. There is therefore a need for reliable estimation of prediction errors within the different approximation schemes to DFT. The Bayesian Error Estimation Functionals (BEEF) have been developed with this in mind. The functionals are constructed by fitting to experimental and high-quality computational databases for molecules and solids, including chemisorption and van der Waals systems. This leads to reasonably accurate general-purpose functionals with a particular focus on surface science. The fitting procedure involves considerations on how to combine different types of data, and applies Tikhonov regularization and bootstrap cross-validation. The methodology has been applied to construct GGA and metaGGA functionals with and without inclusion of long-ranged van der Waals contributions. The error estimation is made possible by constructing not a single functional but a probability distribution of functionals, represented by a functional ensemble. The use of the functional ensemble is illustrated on compound heats of formation and by investigations of the reliability of calculated catalytic ammonia synthesis rates.

  6. The Location of Error: Reflections on a Research Project

    ERIC Educational Resources Information Center

    Cook, Devan

    2010-01-01

    Andrea Lunsford and Karen Lunsford conclude "Mistakes Are a Fact of Life: A National Comparative Study," a discussion of their research project exploring patterns of formal grammar and usage error in first-year writing, with an invitation to "conduct a local version of this study." The author was eager to accept their invitation; learning and…

  7. GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS

    EPA Science Inventory

    Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...

  8. We need to talk about error: causes and types of error in veterinary practice.

    PubMed

    Oxtoby, C; Ferguson, E; White, K; Mossop, L

    2015-10-31

    Patient safety research in human medicine has identified the causes and common types of medical error and subsequently informed the development of interventions that mitigate harm, such as the WHO's safe surgery checklist. No such evidence is available to the veterinary profession. This study therefore aims to identify the causes and types of errors in veterinary practice, and presents an evidence-based system for their classification. Causes of error were identified from a retrospective record review of 678 claims to the profession's leading indemnity insurer, and nine focus groups (average N per group = 8) with vets, nurses, and support staff were conducted using the critical incident technique. Reason's (2000) Swiss cheese model of error was used to inform the interpretation of the data. Types of error were extracted from 2,978 claims records reported between 2009 and 2013. The major classes of error causation were identified, with mistakes involving surgery the most common type of error. The results were triangulated with findings from the medical literature and highlight the importance of cognitive limitations, deficiencies in non-technical skills, and a systems approach to veterinary error.

  9. Further Conceptualization of Treatment Acceptability

    ERIC Educational Resources Information Center

    Carter, Stacy L.

    2008-01-01

    A review and extension of previous conceptualizations of treatment acceptability is provided in light of progress within the area of behavior treatment development and implementation. Factors including legislation, advances in research, and service delivery models are examined as to their relationship with a comprehensive conceptualization of…

  10. Acceptance and Commitment Therapy: Introduction

    ERIC Educational Resources Information Center

    Twohig, Michael P.

    2012-01-01

    This is the introductory article to a special series in Cognitive and Behavioral Practice on Acceptance and Commitment Therapy (ACT). Instead of each article herein reviewing the basics of ACT, this article contains that review. This article provides a description of where ACT fits within the larger category of cognitive behavior therapy (CBT):…

  11. Nitrogen trailer acceptance test report

    SciTech Connect

    Kostelnik, A.J.

    1996-02-12

    This Acceptance Test Report documents compliance with the requirements of specification WHC-S-0249. The equipment was tested according to WHC-SD-WM-ATP-108 Rev. 0. The equipment being tested is a portable contained nitrogen supply. The test was conducted at Norco's facility.

  12. Imaginary Companions and Peer Acceptance

    ERIC Educational Resources Information Center

    Gleason, Tracy R.

    2004-01-01

    Early research on imaginary companions suggests that children who create them do so to compensate for poor social relationships. Consequently, the peer acceptance of children with imaginary companions was compared to that of their peers. Sociometrics were conducted on 88 preschool-aged children; 11 had invisible companions, 16 had personified…

  13. Euthanasia Acceptance: An Attitudinal Inquiry.

    ERIC Educational Resources Information Center

    Klopfer, Fredrick J.; Price, William F.

    The study presented was conducted to examine potential relationships between attitudes regarding the dying process, including acceptance of euthanasia, and other attitudinal or demographic attributes. The survey data comprised responses from 331 respondents to a door-to-door interview. Results are discussed in terms of preferred…

  14. Helping Our Children Accept Themselves.

    ERIC Educational Resources Information Center

    Gamble, Mae

    1984-01-01

    Parents of a child with muscular dystrophy recount their reactions to learning of the diagnosis, their gradual acceptance, and their son's resistance, which was gradually lessened when he was provided with more information and treated more normally as a member of the family. (CL)

  15. Biasing errors and corrections

    NASA Technical Reports Server (NTRS)

    Meyers, James F.

    1991-01-01

    The dependence of laser velocimeter measurement rate on flow velocity is discussed. Investigations are described showing that any dependence is purely statistical and is nonstationary both spatially and temporally. The main conclusions are that the times between successive particle arrivals should be routinely measured, and that the velocity/data-rate correlation coefficient should be calculated to determine whether a dependency exists. If none is found, the data ensemble can be accepted as an independent sample of the flow. If a dependency is found, the data should be modified to obtain an independent sample. Universal correcting procedures should never be applied, because their underlying assumptions are not valid.

  16. Quantum Error Correction with Biased Noise

    NASA Astrophysics Data System (ADS)

    Brooks, Peter

    Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security. At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level. In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations. In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction. In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates, and a second rate for errors in the distilled

  17. Human error and the search for blame

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1989-01-01

    Human error is a frequent topic in discussions about risks in using computer systems. A rational analysis of human error leads through the consideration of mistakes to standards that designers use to avoid mistakes that lead to known breakdowns. The irrational side, however, is more interesting. It conditions people to think that breakdowns are inherently wrong and that there is ultimately someone who is responsible. This leads to a search for someone to blame which diverts attention from: learning from the mistakes; seeing the limitations of current engineering methodology; and improving the discourse of design.

  18. Laser Phase Errors in Seeded FELs

    SciTech Connect

    Ratner, D.; Fry, A.; Stupakov, G.; White, W.; /SLAC

    2012-03-28

    Harmonic seeding of free electron lasers has attracted significant attention from the promise of transform-limited pulses in the soft X-ray region. Harmonic multiplication schemes extend seeding to shorter wavelengths, but also amplify the spectral phase errors of the initial seed laser, and may degrade the pulse quality. In this paper we consider the effect of seed laser phase errors in high gain harmonic generation and echo-enabled harmonic generation. We use simulations to confirm analytical results for the case of linearly chirped seed lasers, and extend the results for arbitrary seed laser envelope and phase.

  19. Detecting Errors in Programs

    DTIC Science & Technology

    1979-02-01

    Fosdick, Lloyd D. …from a finite set of tests [35,36]. Recently Howden [37] presented a result showing that for a particular class of Lindenmayer grammars it was possible… Howden, W.E.: Lindenmayer grammars and symbolic testing. Information Processing Letters 7,1 (Jan. 1978), 36-39.

  20. Speech Errors, Error Correction, and the Construction of Discourse.

    ERIC Educational Resources Information Center

    Linde, Charlotte

    Speech errors have been used in the construction of production models of the phonological and semantic components of language, and for a model of interactional processes. Errors also provide insight into how speakers plan discourse and syntactic structure,. Different types of discourse exhibit different types of error. The present data are taken…

  1. Errors in CT colonography.

    PubMed

    Trilisky, Igor; Ward, Emily; Dachman, Abraham H

    2015-10-01

    CT colonography (CTC) is a colorectal cancer screening modality which is becoming more widely implemented and has shown polyp detection rates comparable to those of optical colonoscopy. CTC has the potential to improve population screening rates due to its minimal invasiveness, no sedation requirement, potential for reduced cathartic examination, faster patient throughput, and cost-effectiveness. Proper implementation of a CTC screening program requires careful attention to numerous factors, including patient preparation prior to the examination, the technical aspects of image acquisition, and post-processing of the acquired data. A CTC workstation with dedicated software is required with integrated CTC-specific display features. Many workstations include computer-aided detection software which is designed to decrease errors of detection by detecting and displaying polyp-candidates to the reader for evaluation. There are several pitfalls which may result in false-negative and false-positive reader interpretation. We present an overview of the potential errors in CTC and a systematic approach to avoid them.

  2. Inborn Errors in Immunity

    PubMed Central

    Lionakis, M.S.; Hajishengallis, G.

    2015-01-01

    In recent years, the study of genetic defects arising from inborn errors in immunity has resulted in the discovery of new genes involved in the function of the immune system and in the elucidation of the roles of known genes whose importance was previously unappreciated. With the recent explosion in the field of genomics and the increasing number of genetic defects identified, the study of naturally occurring mutations has become a powerful tool for gaining mechanistic insight into the functions of the human immune system. In this concise perspective, we discuss emerging evidence that inborn errors in immunity constitute real-life models that are indispensable both for the in-depth understanding of human biology and for obtaining critical insights into common diseases, such as those affecting oral health. In the field of oral mucosal immunity, through the study of patients with select gene disruptions, the interleukin-17 (IL-17) pathway has emerged as a critical element in oral immune surveillance and susceptibility to inflammatory disease, with disruptions in the IL-17 axis now strongly linked to mucosal fungal susceptibility, whereas overactivation of the same pathways is linked to inflammatory periodontitis. PMID:25900229

  3. Analytical method transfer using equivalence tests with reasonable acceptance criteria and appropriate effort: extension of the ISPE concept.

    PubMed

    Kaminski, L; Schepers, U; Wätzig, H

    2010-12-15

    A method development process is commonly finalized by a method transfer from the developing to the routine laboratory. Statistical tests are performed in order to survey if a transfer succeeded or failed. However, using the classic two-sample t-test can lead to misjudgments and unsatisfying transfer results due to its test characteristics. Therefore the International Society of Pharmaceutical Engineering (ISPE) employed a fixed method transfer design using equivalence tests in their Guide for Technology Transfer. Although it was well received by analytical laboratories worldwide this fixed design can easily bring about high beta-errors (rejection of successful transfers) or high workload (many analysts employed during transfer) if sigma(AN) (error due to different analysts) exceeds 0.6%. Hence this work introduces an extended concept which will help to circumvent this disadvantage by providing guidance to select a personalized and more appropriate experimental design. First of all it demonstrates that former t-test related acceptance criteria can be scaled by a factor of 1.15, which allows for a broader tolerance without a loss of decision certainty. Furthermore a decision guidance to choose the proper number of analysts or series at given percentage acceptance limits (%AL) is presented.
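
    A minimal sketch of the kind of equivalence check discussed above, assuming a simple two-sample transfer design; the ±%AL value, the 1.15 scaling factor applied to it, and all names are illustrative rather than taken verbatim from the ISPE guide.

        from statistics import mean, stdev
        from scipy.stats import t

        def transfer_ok(sending, receiving, acceptance_limit_pct, scale=1.15):
            """Accept the transfer if the 90% CI of the mean difference
            (in % of label claim) lies inside the scaled acceptance limits."""
            n1, n2 = len(sending), len(receiving)
            diff = mean(receiving) - mean(sending)
            # pooled standard deviation of the two laboratories' series
            sp = (((n1 - 1) * stdev(sending) ** 2 +
                   (n2 - 1) * stdev(receiving) ** 2) / (n1 + n2 - 2)) ** 0.5
            se = sp * (1.0 / n1 + 1.0 / n2) ** 0.5
            half_width = t.ppf(0.95, n1 + n2 - 2) * se  # two one-sided 5% tests
            limit = scale * acceptance_limit_pct        # scaled %AL per abstract
            return -limit <= diff - half_width and diff + half_width <= limit

        # e.g. transfer_ok([99.8, 100.1, 99.9], [100.4, 100.2, 100.6], 2.0)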

  4. Acceptability of reactors in space

    SciTech Connect

    Buden, D.

    1981-01-01

    Reactors are the key to our future expansion into space. However, there has been some confusion in the public as to whether they are a safe and acceptable technology for use in space. The answers to these questions are explored. The US position is that when reactors are the preferred technical choice, they can be used safely. In fact, it does not appear that reactors add measurably to the risk associated with the Space Transportation System.

  5. Acceptability of reactors in space

    SciTech Connect

    Buden, D.

    1981-04-01

    Reactors are the key to our future expansion into space. However, there has been some confusion in the public as to whether they are a safe and acceptable technology for use in space. The answers to these questions are explored. The US position is that when reactors are the preferred technical choice, they can be used safely. In fact, it does not appear that reactors add measurably to the risk associated with the Space Transportation System.

  6. Error Analysis in Mathematics Education.

    ERIC Educational Resources Information Center

    Rittner, Max

    1982-01-01

    The article reviews the development of mathematics error analysis as a means of diagnosing students' cognitive reasoning. Errors specific to addition, subtraction, multiplication, and division are described, and suggestions for remediation are provided. (CL)

  7. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
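
    The covariance bookkeeping behind such a propagation-of-error analysis is compact to write down; the following hedged sketch uses made-up numbers, not the Skylab data, to show how the covariance terms enter the total balance error.

        import numpy as np

        rng = np.random.default_rng(0)
        # columns: intake, output, body-mass change (illustrative units)
        X = rng.normal(loc=[2.5, 2.3, 0.0], scale=[0.3, 0.2, 0.5], size=(100, 3))
        signs = np.array([1.0, -1.0, 1.0])   # balance B = intake - output + mass
        B = X @ signs
        cov = np.cov(X, rowvar=False)
        var_B = signs @ cov @ signs          # variances plus twice the covariances
        assert np.isclose(var_B, B.var(ddof=1))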

  8. Prospective issues for error detection.

    PubMed

    Blavier, Adélaïde; Rouy, Emmanuelle; Nyssen, Anne-Sophie; de Keyser, Véronique

    2005-06-10

    From the literature on error detection, the authors select several concepts relating error detection mechanisms to prospective memory features. They emphasize the central role of intention in the classification of errors into slips/lapses/mistakes, in the error handling process, and in the usual distinction between action-based and outcome-based detection. Intention is again a core concept in their investigation of prospective memory theory, where they point out the contribution of intention retrieval, intention persistence, and output monitoring to an individual's ability to detect errors. The involvement of the frontal lobes in prospective memory and in error detection is also analysed. From the chronology of a prospective memory task, the authors finally suggest a model of error detection that also accounts for neural mechanisms highlighted by studies on error-related brain activity.

  9. The voices acceptance and action scale (VAAS): Pilot data.

    PubMed

    Shawyer, Frances; Ratcliff, Kirk; Mackinnon, Andrew; Farhall, John; Hayes, Steven C; Copolov, David

    2007-06-01

    Acceptance and mindfulness methods that emphasise the acceptance rather than control of symptoms are becoming more central to behavioural and cognitive therapies. Acceptance and Commitment Therapy (ACT) is the most developed of these methods; recent applications of ACT to psychosis suggest it to be a promising therapeutic approach. However, investigation of the mechanisms of therapy within this domain is difficult because there are no acceptance-based measures available specifically for psychotic symptoms. This paper describes the preliminary evaluation of a self-report instrument designed to assess acceptance-based attitudes and actions in relation to auditory and command hallucinations. Following initial scale development, a 56-item version of the Voices Acceptance and Action Scale (VAAS) was administered to 43 participants with command hallucinations as part of their baseline assessment in a larger trial. Measures of symptoms, quality of life, and depression were also administered. The scale was examined for reliability using corrected item total statistics. Based on this method, 31 items were retained. Internal consistency and test-retest reliability for the 31-item VAAS were acceptable. Subsequent examination of construct validity showed the VAAS to correlate significantly in the expected directions with depression, quality of life, and coping with command hallucinations. It also discriminated compliance from non-compliance with harmful command hallucinations. Although these results are preliminary and subject to a number of limitations, the VAAS shows promise as a useful aid in the assessment of the psychological impact of voices.

  10. Error Sources in Asteroid Astrometry

    NASA Technical Reports Server (NTRS)

    Owen, William M., Jr.

    2000-01-01

    Asteroid astrometry, like any other scientific measurement process, is subject to both random and systematic errors, not all of which are under the observer's control. To design an astrometric observing program or to improve an existing one requires knowledge of the various sources of error, how different errors affect one's results, and how various errors may be minimized by careful observation or data reduction techniques.

  11. Regulatory perspectives on acceptability testing of dosage forms in children.

    PubMed

    Kozarewicz, Piotr

    2014-08-05

    Current knowledge about the age-appropriateness of different dosage forms is still fragmented or limited. Applicants are asked to demonstrate that the target age group(s) can manage the dosage form or propose an alternative strategy. However, questions remain about how far the applicant must go and what percentage of patients must find the strategy 'acceptable'. The aim of this overview is to provide an update on current thinking and understanding of the problem, and discuss issues relating to acceptability testing. This overview should be considered as a means to start a wider discussion which hopefully will result in a harmonised, globally acceptable approach for confirmation of acceptability in the future.

  12. Data Acceptance Criteria for Standardized Human-Associated Fecal Source Identification Quantitative Real-Time PCR Methods

    PubMed Central

    Kelty, Catherine A.; Oshiro, Robin; Haugland, Richard A.; Madi, Tania; Brooks, Lauren; Field, Katharine G.; Sivaganesan, Mano

    2016-01-01

    There is growing interest in the application of human-associated fecal source identification quantitative real-time PCR (qPCR) technologies for water quality management. The transition from a research tool to a standardized protocol requires a high degree of confidence in data quality across laboratories. Data quality is typically determined through a series of specifications that ensure good experimental practice and the absence of bias in the results due to DNA isolation and amplification interferences. However, there is currently a lack of consensus on how best to evaluate and interpret human fecal source identification qPCR experiments. This is, in part, due to the lack of standardized protocols and information on interlaboratory variability under conditions for data acceptance. The aim of this study is to provide users and reviewers with a complete series of conditions for data acceptance derived from a multiple laboratory data set using standardized procedures. To establish these benchmarks, data from HF183/BacR287 and HumM2 human-associated qPCR methods were generated across 14 laboratories. Each laboratory followed a standardized protocol utilizing the same lot of reference DNA materials, DNA isolation kits, amplification reagents, and test samples to generate comparable data. After removal of outliers, a nested analysis of variance (ANOVA) was used to establish proficiency metrics that include lab-to-lab, replicate testing within a lab, and random error for amplification inhibition and sample processing controls. Other data acceptance measurements included extraneous DNA contamination assessments (no-template and extraction blank controls) and calibration model performance (correlation coefficient, amplification efficiency, and lower limit of quantification). To demonstrate the implementation of the proposed standardized protocols and data acceptance criteria, comparable data from two additional laboratories were reviewed. The data acceptance criteria

  13. Data Acceptance Criteria for Standardized Human-Associated Fecal Source Identification Quantitative Real-Time PCR Methods.

    PubMed

    Shanks, Orin C; Kelty, Catherine A; Oshiro, Robin; Haugland, Richard A; Madi, Tania; Brooks, Lauren; Field, Katharine G; Sivaganesan, Mano

    2016-05-01

    There is growing interest in the application of human-associated fecal source identification quantitative real-time PCR (qPCR) technologies for water quality management. The transition from a research tool to a standardized protocol requires a high degree of confidence in data quality across laboratories. Data quality is typically determined through a series of specifications that ensure good experimental practice and the absence of bias in the results due to DNA isolation and amplification interferences. However, there is currently a lack of consensus on how best to evaluate and interpret human fecal source identification qPCR experiments. This is, in part, due to the lack of standardized protocols and information on interlaboratory variability under conditions for data acceptance. The aim of this study is to provide users and reviewers with a complete series of conditions for data acceptance derived from a multiple laboratory data set using standardized procedures. To establish these benchmarks, data from HF183/BacR287 and HumM2 human-associated qPCR methods were generated across 14 laboratories. Each laboratory followed a standardized protocol utilizing the same lot of reference DNA materials, DNA isolation kits, amplification reagents, and test samples to generate comparable data. After removal of outliers, a nested analysis of variance (ANOVA) was used to establish proficiency metrics that include lab-to-lab, replicate testing within a lab, and random error for amplification inhibition and sample processing controls. Other data acceptance measurements included extraneous DNA contamination assessments (no-template and extraction blank controls) and calibration model performance (correlation coefficient, amplification efficiency, and lower limit of quantification). To demonstrate the implementation of the proposed standardized protocols and data acceptance criteria, comparable data from two additional laboratories were reviewed. The data acceptance criteria
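
    A minimal sketch of two of the calibration-model checks named above, fitting a standard curve and deriving amplification efficiency from its slope; the Cq values and acceptance thresholds are illustrative assumptions, not the study's criteria.

        import numpy as np

        log10_copies = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # standard dilutions
        cq = np.array([36.1, 32.8, 29.4, 26.1, 22.7])       # made-up Cq values

        slope, intercept = np.polyfit(log10_copies, cq, 1)
        r = np.corrcoef(log10_copies, cq)[0, 1]
        efficiency = 10.0 ** (-1.0 / slope) - 1.0           # textbook qPCR formula

        # illustrative thresholds; a real protocol fixes these in advance
        acceptable = abs(r) >= 0.98 and 0.90 <= efficiency <= 1.10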

  14. Error Patterns in Problem Solving.

    ERIC Educational Resources Information Center

    Babbitt, Beatrice C.

    Although many common problem-solving errors within the realm of school mathematics have been previously identified, a compilation of such errors is not readily available within learning disabilities textbooks, mathematics education texts, or teacher's manuals for school mathematics texts. Using data on error frequencies drawn from both the Fourth…

  15. Measurement Error. For Good Measure....

    ERIC Educational Resources Information Center

    Johnson, Stephen; Dulaney, Chuck; Banks, Karen

    No test, however well designed, can measure a student's true achievement because numerous factors interfere with the ability to measure achievement. These factors are sources of measurement error, and the goal in creating tests is to have as little measurement error as possible. Error can result from the test design, factors related to individual…

  16. Feature Referenced Error Correction Apparatus.

    DTIC Science & Technology

    A feature referenced error correction apparatus utilizing the multiple images of the interstage level image format to compensate for positional...images and by the generation of an error correction signal in response to the sub-frame registration errors. (Author)

  17. 5 CFR 841.505 - Correction of error.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 2 2011-01-01 2011-01-01 false Correction of error. 841.505 Section 841... Contributions § 841.505 Correction of error. (a) When it is determined that an agency has paid less than the... whatsoever, including but not limited to, coverage decisions, correction of the percentage applicable or...

  18. 5 CFR 841.505 - Correction of error.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Correction of error. 841.505 Section 841... Contributions § 841.505 Correction of error. (a) When it is determined that an agency has paid less than the... whatsoever, including but not limited to, coverage decisions, correction of the percentage applicable or...

  19. Beam lifetime and limitations during low-energy RHIC operation

    SciTech Connect

    Fedotov, A.V.; Bai, M.; Blaskiewicz, M.; Fischer, W.; Kayran, D.; Montag, C.; Satogata, T.; Tepikian, S.; Wang, G.

    2011-03-28

    The low-energy physics program at the Relativistic Heavy Ion Collider (RHIC), motivated by a search for the QCD phase transition critical point, requires operation at low energies. At these energies, large nonlinear magnetic field errors and large beam sizes produce low beam lifetimes. A variety of beam dynamics effects such as Intrabeam Scattering (IBS), space charge and beam-beam forces also contribute. All these effects are important to understand beam lifetime limitations in RHIC at low energies. During the low-energy RHIC physics run in May-June 2010 at beam γ = 6.1 and γ = 4.1, gold beam lifetimes were measured for various values of space-charge tune shifts, transverse acceptance limitation by collimators, synchrotron tunes and RF voltage. This paper summarizes our observations and initial findings.

  20. Error, signal, and the placement of Ctenophora sister to all other animals.

    PubMed

    Whelan, Nathan V; Kocot, Kevin M; Moroz, Leonid L; Halanych, Kenneth M

    2015-05-05

    Elucidating relationships among early animal lineages has been difficult, and recent phylogenomic analyses place Ctenophora sister to all other extant animals, contrary to the traditional view of Porifera as the earliest-branching animal lineage. To date, phylogenetic support for either ctenophores or sponges as sister to other animals has been limited and inconsistent among studies. Lack of agreement among phylogenomic analyses using different data and methods obscures how complex traits, such as epithelia, neurons, and muscles evolved. A consensus view of animal evolution will not be accepted until datasets and methods converge on a single hypothesis of early metazoan relationships and putative sources of systematic error (e.g., long-branch attraction, compositional bias, poor model choice) are assessed. Here, we investigate possible causes of systematic error by expanding taxon sampling with eight novel transcriptomes, strictly enforcing orthology inference criteria, and progressively examining potential causes of systematic error while using both maximum-likelihood with robust data partitioning and Bayesian inference with a site-heterogeneous model. We identified ribosomal protein genes as possessing a conflicting signal compared with other genes, which caused some past studies to infer ctenophores and cnidarians as sister. Importantly, biases resulting from elevated compositional heterogeneity or elevated substitution rates are ruled out. Placement of ctenophores as sister to all other animals, and sponge monophyly, are strongly supported under multiple analyses, herein.

  1. Error-finding and error-correcting methods for the start-up of the SLC

    SciTech Connect

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.; Selig, L.J.

    1987-02-01

    During the commissioning of an accelerator, storage ring, or beam transfer line, one of the important tasks of an accelerator physicist is to check the first-order optics of the beam line and to look for errors in the system. Conceptually, it is important to distinguish between techniques for finding the machine errors that are the cause of the problem and techniques for correcting the beam errors that are the result of the machine errors. In this paper we will limit our presentation to certain applications of these two methods for finding or correcting beam-focus errors and beam-kick errors that affect the profile and trajectory of the beam respectively. Many of these methods have been used successfully in the commissioning of SLC systems. In order not to waste expensive beam time we have developed and used a beam-line simulator to test the ideas that have not been tested experimentally. To save valuable physicists' time we have further automated the beam-kick error-finding procedures by adopting methods from the field of artificial intelligence to develop a prototype expert system. Our experience with this prototype has demonstrated the usefulness of expert systems in solving accelerator control problems. The expert system is able to find the same solutions as an expert physicist but in a more systematic fashion. The methods used in these procedures and some of the recent applications will be described in this paper.

  2. On typographical errors.

    PubMed

    Hamilton, J W

    1993-09-01

    In his overall assessment of parapraxes in 1901, Freud included typographical mistakes but did not elaborate on or study this subject nor did he have anything to say about it in his later writings. This paper lists textual errors from a variety of current literary sources and explores the dynamic importance of their execution and the failure to make necessary corrections during the editorial process. While there has been a deemphasis of the role of unconscious determinants in the genesis of all slips as a result of recent findings in cognitive psychology, the examples offered suggest that, with respect to motivation, lapses in compulsivity contribute to their original commission while thematic compliance and voyeuristic issues are important in their not being discovered prior to publication.

  3. Measuring Cyclic Error in Laser Heterodyne Interferometers

    NASA Technical Reports Server (NTRS)

    Ryan, Daniel; Abramovici, Alexander; Zhao, Feng; Dekens, Frank; An, Xin; Azizi, Alireza; Chapsky, Jacob; Halverson, Peter

    2010-01-01

    An improved method and apparatus have been devised for measuring cyclic errors in the readouts of laser heterodyne interferometers that are configured and operated as displacement gauges. The cyclic errors arise as a consequence of mixing of spurious optical and electrical signals in beam launchers that are subsystems of such interferometers. The conventional approach to measurement of cyclic error involves phase measurements and yields values precise to within about 10 pm over air optical paths at laser wavelengths in the visible and near infrared. The present approach, which involves amplitude measurements instead of phase measurements, yields values precise to about 0.1 pm, about 100 times the precision of the conventional approach. In a displacement gauge of the type of interest here, the laser heterodyne interferometer is used to measure any change in distance along an optical axis between two corner-cube retroreflectors. One of the corner-cube retroreflectors is mounted on a piezoelectric transducer, which is used to introduce a low-frequency periodic displacement that can be measured by the gauges. The transducer is excited at a frequency of 9 Hz by a triangular waveform to generate a 9-Hz triangular-wave displacement having an amplitude of 25 microns. The displacement gives rise to both amplitude and phase modulation of the heterodyne signals in the gauges. The modulation includes cyclic error components, and the magnitude of the cyclic-error component of the phase modulation is what one needs to measure in order to determine the magnitude of the cyclic displacement error. The precision attainable in the conventional (phase measurement) approach to measuring cyclic error is limited because the phase measurements are af-

  4. Spin glasses and error-correcting codes

    NASA Technical Reports Server (NTRS)

    Belongie, M. L.

    1994-01-01

    In this article, we study a model for error-correcting codes that comes from spin glass theory and leads to both new codes and a new decoding technique. Using the theory of spin glasses, it has been proven that a simple construction yields a family of binary codes whose performance asymptotically approaches the Shannon bound for the Gaussian channel. The limit is approached as the number of information bits per codeword approaches infinity while the rate of the code approaches zero. Thus, the codes rapidly become impractical. We present simulation results that show the performance of a few manageable examples of these codes. In the correspondence that exists between spin glasses and error-correcting codes, the concept of a thermal average leads to a method of decoding that differs from the standard method of finding the most likely information sequence for a given received codeword. Whereas the standard method corresponds to calculating the thermal average at temperature zero, calculating the thermal average at a certain optimum temperature results instead in the sequence of most likely information bits. Since linear block codes and convolutional codes can be viewed as examples of spin glasses, this new decoding method can be used to decode these codes in a way that minimizes the bit error rate instead of the codeword error rate. We present simulation results that show a small improvement in bit error rate by using the thermal average technique.

  5. Entanglement-assisted zero-error codes

    NASA Astrophysics Data System (ADS)

    Matthews, William; Mancinska, Laura; Leung, Debbie; Ozols, Maris; Roy, Aidan

    2011-03-01

    Zero-error information theory studies the transmission of data over noisy communication channels with strictly zero error probability. For classical channels and data, much of the theory can be studied in terms of combinatorial graph properties and is a source of hard open problems in that domain. In recent work, we investigated how entanglement between sender and receiver can be used in this task. We found that entanglement-assisted zero-error codes (which are still naturally studied in terms of graphs) sometimes offer an increased bit rate of zero-error communication even in the large block length limit. The assisted codes that we have constructed are closely related to Kochen-Specker proofs of non-contextuality as studied in the context of foundational physics, and our results on asymptotic rates of assisted zero-error communication yield non-contextuality proofs which are particularly `strong' in a certain quantitative sense. I will also describe formal connections to the multi-prover games known as pseudo-telepathy games.
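
    For a classical channel, the number of symbols that can be sent once with zero error is the independence number of its confusability graph, which is the combinatorial framing the abstract refers to. A brute-force sketch, using the 5-cycle from Shannon's pentagon example, is:

        from itertools import combinations

        def independence_number(n, edges):
            """Largest set of pairwise non-confusable symbols (exponential time)."""
            edge_set = {frozenset(e) for e in edges}
            for size in range(n, 0, -1):
                for nodes in combinations(range(n), size):
                    if all(frozenset(p) not in edge_set
                           for p in combinations(nodes, 2)):
                        return size
            return 0

        pentagon = [(i, (i + 1) % 5) for i in range(5)]
        print(independence_number(5, pentagon))   # 2 symbols usable with zero error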

  6. How perioperative nurses define, attribute causes of, and react to intraoperative nursing errors.

    PubMed

    Chard, Robin

    2010-01-01

    Errors in nursing practice pose a continuing threat to patient safety. A descriptive, correlational study was conducted to examine the definitions, circumstances, and perceived causes of intraoperative nursing errors; reactions of perioperative nurses to intraoperative nursing errors; and the relationships among coping with intraoperative nursing errors, emotional distress, and changes in practice made as a result of error. The results indicate that strategies of accepting responsibility and using self-control are significant predictors of emotional distress. Seeking social support and planful problem solving emerged as significant predictors of constructive changes in practice. Most predictive of defensive changes was the strategy of escape/avoidance.

  7. Rapid mapping of volumetric errors

    SciTech Connect

    Krulewich, D.; Hale, L.; Yordy, D.

    1995-09-13

    This paper describes a relatively inexpensive, fast, and easy to execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on length measurements throughout the work volume; and (3) optimizing the model to the particular machine.
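
    A hedged miniature of the three steps: assume a purely linear per-axis scale-error model, simulate length-style measurements through the work volume, and recover the model by least squares. The model form and all numbers are illustrative, not the paper's.

        import numpy as np

        rng = np.random.default_rng(1)
        nominal = rng.uniform(0.0, 500.0, size=(40, 3))      # mm, commanded positions
        true_scale = np.array([20e-6, -10e-6, 30e-6])        # per-axis scale errors
        measured = nominal * (1.0 + true_scale)              # idealized machine
        measured += rng.normal(scale=1e-4, size=measured.shape)  # measurement noise

        errors = measured - nominal                          # volumetric error data
        # least-squares slope through the origin, one scale factor per axis
        scale_hat = (nominal * errors).sum(axis=0) / (nominal ** 2).sum(axis=0)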

  8. Register file soft error recovery

    DOEpatents

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
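
    A toy sketch of the recovery flow described in the abstract, with a parity bit standing in for the error detection circuitry and a plain copy for the mirrored file; the structures and names are illustrative, not the patented design.

        def parity(word):
            return bin(word).count("1") & 1

        class MirroredRegisterFile:
            def __init__(self, n):
                self.primary = [(0, 0)] * n   # (value, parity) pairs
                self.mirror = [0] * n

            def write(self, i, value):
                self.primary[i] = (value, parity(value))
                self.mirror[i] = value

            def read(self, i):
                value, p = self.primary[i]
                if parity(value) != p:        # corrupted data detected
                    value = self.mirror[i]    # "error recovery instruction"
                    self.primary[i] = (value, parity(value))
                return value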

  9. Role of memory errors in quantum repeaters

    NASA Astrophysics Data System (ADS)

    Hartmann, L.; Kraus, B.; Briegel, H.-J.; Dür, W.

    2007-03-01

    We investigate the influence of memory errors in the quantum repeater scheme for long-range quantum communication. We show that the communication distance is limited in standard operation mode due to memory errors resulting from unavoidable waiting times for classical signals. We show how to overcome these limitations by (i) improving local memory and (ii) introducing two operational modes of the quantum repeater. In both operational modes, the repeater is run blindly, i.e., without waiting for classical signals to arrive. In the first scheme, entanglement purification protocols based on one-way classical communication are used allowing to communicate over arbitrary distances. However, the error thresholds for noise in local control operations are very stringent. The second scheme makes use of entanglement purification protocols with two-way classical communication and inherits the favorable error thresholds of the repeater run in standard mode. One can increase the possible communication distance by an order of magnitude with reasonable overhead in physical resources. We outline the architecture of a quantum repeater that can possibly ensure intercontinental quantum communication.

  10. Towards error-free interaction.

    PubMed

    Tsoneva, Tsvetomira; Bieger, Jordi; Garcia-Molina, Gary

    2010-01-01

    Human-machine interaction (HMI) relies on pattern recognition algorithms that are not perfect. To improve the performance and usability of these systems we can utilize the neural mechanisms in the human brain dealing with error awareness. This study aims at designing a practical error detection algorithm using electroencephalogram signals that can be integrated in an HMI system. Thus, real-time operation, customization, and operation convenience are important. We address these requirements in an experimental framework simulating machine errors. Our results confirm the presence of brain potentials related to processing of machine errors. These are used to implement an error detection algorithm emphasizing the differences in error processing on a per subject basis. The proposed algorithm uses the individual best bipolar combination of electrode sites and requires short calibration. The single-trial error detection performance on six subjects, characterized by the area under the ROC curve, ranges from 0.75 to 0.98.

  11. Optimal input design for aircraft instrumentation systematic error estimation

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1991-01-01

    A new technique for designing optimal flight test inputs for accurate estimation of instrumentation systematic errors was developed and demonstrated. A simulation model of the F-18 High Angle of Attack Research Vehicle (HARV) aircraft was used to evaluate the effectiveness of the optimal input compared to input recorded during flight test. Instrumentation systematic error parameter estimates and their standard errors were compared. It was found that the optimal input design improved error parameter estimates and their accuracies for a fixed time input design. Pilot acceptability of the optimal input design was demonstrated using a six degree-of-freedom fixed base piloted simulation of the F-18 HARV. The technique described in this work provides a practical, optimal procedure for designing inputs for data compatibility experiments.

  12. Axelrod model: accepting or discussing

    NASA Astrophysics Data System (ADS)

    Dybiec, Bartlomiej; Mitarai, Namiko; Sneppen, Kim

    2012-10-01

    Agents building social systems are characterized by complex states, and interactions among individuals can align their opinions. The Axelrod model describes how local interactions can result in emergence of cultural domains. We propose two variants of the Axelrod model where local consensus is reached either by listening and accepting one of neighbors' opinion or two agents discuss their opinion and achieve an agreement with mixed opinions. We show that the local agreement rule affects the character of the transition between the single culture and the multiculture regimes.
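
    A minimal sketch of the two local agreement rules described above, using scalar opinions rather than full Axelrod cultural vectors; the ring topology and update scheme are illustrative simplifications.

        import random

        def local_step(opinions, neighbors, rule):
            """One interaction: agent i aligns with a random neighbor j."""
            i = random.randrange(len(opinions))
            j = random.choice(neighbors[i])
            if rule == "accept":              # listen and accept j's opinion
                opinions[i] = opinions[j]
            else:                             # discuss: agree on a mixed opinion
                opinions[i] = opinions[j] = (opinions[i] + opinions[j]) / 2.0

        ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
        state = [random.random() for _ in range(10)]
        for _ in range(1000):
            local_step(state, ring, rule="discuss")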

  13. Modeling error analysis of stationary linear discrete-time filters

    NASA Technical Reports Server (NTRS)

    Patel, R.; Toda, M.

    1977-01-01

    The performance of Kalman-type, linear, discrete-time filters in the presence of modeling errors is considered. The discussion is limited to stationary performance, and bounds are obtained for the performance index, the mean-squared error of estimates for suboptimal and optimal (Kalman) filters. The computation of these bounds requires information on only the model matrices and the range of errors for these matrices. Consequently, a designer can easily compare the performance of a suboptimal filter with that of the optimal filter, when only the range of errors in the elements of the model matrices is available.

  14. Adherence to balance tolerance limits at the Upper Mississippi Science Center, La Crosse, Wisconsin.

    USGS Publications Warehouse

    Myers, C.T.; Kennedy, D.M.

    1998-01-01

    Verification of balance accuracy entails applying a series of standard masses to a balance prior to use and recording the measured values. The recorded values for each standard should have lower and upper weight limits or tolerances that are accepted as verification of balance accuracy under normal operating conditions. Balance logbooks for seven analytical balances at the Upper Mississippi Science Center were checked over a 3.5-year period to determine if the recorded weights were within the established tolerance limits. A total of 9435 measurements were checked. There were 14 instances in which the balance malfunctioned and operators recorded a rationale in the balance logbook. Sixty-three recording errors were found. Twenty-eight operators were responsible for two types of recording errors: Measurements of weights were recorded outside of the tolerance limit but not acknowledged as an error by the operator (n = 40); and measurements were recorded with the wrong number of decimal places (n = 23). The adherence rate for following tolerance limits was 99.3%. To ensure the continued adherence to tolerance limits, the quality-assurance unit revised standard operating procedures to require more frequent review of balance logbooks.
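
    A small sketch of the two recording-error checks the study describes, flagging out-of-tolerance values and wrong decimal places; the 100 g standard and its ±1 mg limits are illustrative, not the Center's actual tolerances.

        def check_entry(recorded_text, low, high, places=4):
            """Flag the two recording-error types found in the logbooks."""
            value = float(recorded_text)
            decimals = len(recorded_text.split(".")[1]) if "." in recorded_text else 0
            return {
                "out_of_tolerance": not (low <= value <= high),
                "wrong_decimal_places": decimals != places,
            }

        # a 100 g standard with illustrative +/-1 mg tolerance limits
        print(check_entry("100.0013", low=99.9990, high=100.0010))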

  15. Acceptance and Commitment Therapy (ACT) as a Career Counselling Strategy

    ERIC Educational Resources Information Center

    Hoare, P. Nancey; McIlveen, Peter; Hamilton, Nadine

    2012-01-01

    Acceptance and commitment therapy (ACT) has potential to contribute to career counselling. In this paper, the theoretical tenets of ACT and a selection of its counselling techniques are overviewed along with a descriptive case vignette. There is limited empirical research into ACT's application in career counselling. Accordingly, a research agenda…

  16. Acceptance of Online Degrees by Undergraduate Mexican Students

    ERIC Educational Resources Information Center

    Padilla Rodriguez, Brenda Cecilia; Adams, Jonathan

    2014-01-01

    The quality and acceptance of online degree programs are still controversial issues. In Mexico, where access to technology is limited, there are few studies on the matter. Undergraduate students (n = 104) answered a survey that aimed to evaluate their knowledge of virtual education, their likelihood of enrollment in an online degree program, and…

  17. Acceptance-Enhanced Behavior Therapy for Trichotillomania in Adolescents

    ERIC Educational Resources Information Center

    Fine, Kathi M.; Walther, Michael R.; Joseph, Jessica M.; Robinson, Jordan; Ricketts, Emily J.; Bowe, William E.; Woods, Douglas W.

    2012-01-01

    Although several studies have examined the efficacy of Acceptance Enhanced Behavior Therapy (AEBT) for the treatment of trichotillomania (TTM) in adults, data are limited with respect to the treatment of adolescents. Our case series illustrates the use of AEBT for TTM in the treatment of two adolescents. The AEBT protocol (Woods & Twohig, 2008) is…

  18. Application of Uniform Measurement Error Distribution

    DTIC Science & Technology

    2016-03-18

    Application of Uniform Measurement Error Distribution. Ghazarians, Alan; Jackson, Dennis. ...(PFA), Probability of False Reject (PFR).

  19. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
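
    A hedged sketch of the binarization that produces d(i,j;n), assuming an east-facing coastline so that winds arriving from the eastern half-plane count as onshore; the orientation is an illustrative assumption.

        import numpy as np

        def binarize_wind(direction_deg):
            """Map meteorological wind direction (degrees the wind blows FROM)
            to 1 = onshore, 0 = offshore on the (i, j) grid at one time step."""
            d = np.asarray(direction_deg) % 360.0
            onshore = (d > 0.0) & (d < 180.0)   # easterly component -> onshore
            return onshore.astype(np.uint8)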

  20. Comparison of analytical error and sampling error for contaminated soil.

    PubMed

    Gustavsson, Björn; Luthbom, Karin; Lagerkvist, Anders

    2006-11-16

    Investigation of soil from contaminated sites requires several sample handling steps that, most likely, will induce uncertainties in the sample. The theory of sampling describes seven sampling errors that can be calculated, estimated or discussed in order to get an idea of the size of the sampling uncertainties. With the aim of comparing the size of the analytical error to the total sampling error, these seven errors were applied, estimated and discussed in a case study of a contaminated site. The manageable errors were summarized, showing a range of three orders of magnitude between the examples. The comparisons show that the quotient between the total sampling error and the analytical error is larger than 20 in most calculation examples. Exceptions were samples taken in hot spots, where some components of the total sampling error get small and the analytical error gets large in comparison. Low concentration of contaminant, small extracted sample size and large particles in the sample contribute to the extent of uncertainty.

  1. Peeling Away Timing Error in NetFlow Data

    NASA Astrophysics Data System (ADS)

    Trammell, Brian; Tellenbach, Bernhard; Schatzmann, Dominik; Burkhart, Martin

    In this paper, we characterize, quantify, and correct timing errors introduced into network flow data by collection and export via Cisco NetFlow version 9. We find that while some of these sources of error (clock skew, export delay) are generally implementation-dependent and known in the literature, there is an additional cyclic error of up to one second that is inherent to the design of the export protocol. We present a method for correcting this cyclic error in the presence of clock skew and export delay. In an evaluation using traffic with known timing collected from a national-scale network, we show that this method can successfully correct the cyclic error. However, there can also be other implementation-specific errors for which insufficient information remains for correction. On the routers we have deployed in our network, this limits the accuracy to about 70 ms, reinforcing the point that implementation matters when conducting research on network measurement data.

  2. Realtime mitigation of GPS SA errors using Loran-C

    NASA Technical Reports Server (NTRS)

    Braasch, Soo Y.

    1994-01-01

    The hybrid use of Loran-C with the Global Positioning System (GPS) was shown capable of providing a sole means of enroute air radionavigation. By allowing pilots to fly direct to their destinations, use of this system is resulting in significant time savings and therefore fuel savings as well. However, a major error source limiting the accuracy of GPS is the intentional degradation of the GPS signal known as Selective Availability (SA). SA-induced position errors are highly correlated and far exceed all other error sources (horizontal position error: 100 meters, 95 percent). Realtime mitigation of SA errors from the position solution is highly desirable. How that can be achieved is discussed. The stability of Loran-C signals is exploited to reduce SA errors. The theory behind this technique is discussed and results using bench and flight data are given.

  3. Influence of Gender and Computer Teaching Efficacy on Computer Acceptance among Malaysian Student Teachers: An Extended Technology Acceptance Model

    ERIC Educational Resources Information Center

    Wong, Kung-Teck; Teo, Timothy; Russo, Sharon

    2012-01-01

    The purpose of this study is to validate the technology acceptance model (TAM) in an educational context and explore the role of gender and computer teaching efficacy as external variables. From the literature, it appeared that only limited studies had developed models to explain statistically the chain of influence of computer teaching efficacy…

  4. Some mathematical refinements concerning error minimization in the genetic code.

    PubMed

    Buhrman, Harry; van der Gulik, Peter T S; Kelk, Steven M; Koolen, Wouter M; Stougie, Leen

    2011-01-01

    The genetic code is known to have a high level of error robustness and has been shown to be very error robust compared to randomly selected codes, but to be significantly less error robust than a certain code found by a heuristic algorithm. We formulate this optimization problem as a Quadratic Assignment Problem and use this to formally verify that the code found by the heuristic algorithm is the global optimum. We also argue that it is strongly misleading to compare the genetic code only with codes sampled from the fixed block model, because the real code space is orders of magnitude larger. We thus enlarge the space from which random codes can be sampled from approximately 2.433 × 10^18 codes to approximately 5.908 × 10^45 codes. We do this by leaving the fixed block model, and using the wobble rules to formulate the characteristics acceptable for a genetic code. By relaxing more constraints, three larger spaces are also constructed. Using a modified error function, the genetic code is found to be more error robust compared to a background of randomly generated codes with increasing space size. We point out that these results do not necessarily imply that the code was optimized during evolution for error minimization, but that other mechanisms could be the reason for this error robustness.
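
    The fixed-block figure quoted above is the number of ways to assign the 20 amino acids to the 20 synonymous codon blocks of the standard code, which a one-line check confirms:

        import math
        print(math.factorial(20))   # 2432902008176640000, i.e. ~2.433e18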

  5. Critical evidence for the prediction error theory in associative learning.

    PubMed

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-03-10

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning systems: Prediction error theory stems from the finding of a blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and rejecting alternative theories, we constructed a neural model to match the prediction error theory, by modifying our previous model of learning in crickets, and we tested a prediction from the model: the model predicts that pharmacological intervention of octopaminergic transmission during appetitive conditioning impairs learning but not formation of reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an "auto-blocking", which could be accounted for by the prediction error theory but not by other competitive theories to account for blocking. This study unambiguously demonstrates validity of the prediction error theory in associative learning.

  6. Dopamine reward prediction error coding.

    PubMed

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
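
    The three response patterns map onto the sign of a textbook prediction error δ = r − V; a minimal Rescorla-Wagner-style sketch (an illustrative learning rule, not a model of dopamine firing itself) is:

        def rw_update(V, reward, alpha=0.1):
            delta = reward - V            # reward prediction error
            return V + alpha * delta, delta

        V = 0.0
        for _ in range(100):
            V, delta = rw_update(V, reward=1.0)
        # fully predicted reward: delta ~ 0; omitting it now gives delta ~ -1
        print(round(delta, 3), round(rw_update(V, reward=0.0)[1], 3))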

  7. Dopamine reward prediction error coding

    PubMed Central

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards—an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377

  8. Processor register error correction management

    SciTech Connect

    Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.

    2016-12-27

    Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.
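
    A toy sketch of the claimed flow: mark a sensitive logical register, record it in an error-correction table, and keep a duplicate in sync on every write; all names here are illustrative, not the patent's interfaces.

        def build_ec_table(sensitive_regs):
            """The 'compiler' side: table of registers needing protection."""
            return {r: f"shadow_{r}" for r in sensitive_regs}

        def execute_write(regfile, ec_table, reg, value):
            """The 'processor' side: duplicate any register named in the table."""
            regfile[reg] = value
            if reg in ec_table:
                regfile[ec_table[reg]] = value   # duplicate kept in sync

        regs, table = {}, build_ec_table({"r3"})
        execute_write(regs, table, "r3", 42)   # {'r3': 42, 'shadow_r3': 42}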

  9. Studying Student Teachers' Acceptance of Role Responsibility.

    ERIC Educational Resources Information Center

    Davis, Michael D.; Davis, Concetta M.

    1980-01-01

    There is variance in the way in which student teachers accept responsibility for the teaching act. This study explains why some variables may affect student teachers' acceptance of role responsibilities. (CM)

  10. [Subjective well-being and self acceptance].

    PubMed

    Makino, Y; Tagami, F

    1998-06-01

    The purpose of the present study was to examine the relationship between subjective well-being and self acceptance, and to design a happiness self-writing program to increase self acceptance and subjective well-being of adolescents. In study 1, we examined the relationship between social interaction and self acceptance. In study 2, we created a happiness self-writing program based on a cognitive behavioral approach, and examined whether the program promoted self acceptance and subjective well-being. Results indicated that acceptance of self-openness, an aspect of self acceptance, was related to subjective well-being. The happiness self-writing program increased subjective well-being, but it was not found to have increased self acceptance. Why the program could promote subjective well-being but not self acceptance is discussed.

  11. Compensating For GPS Ephemeris Error

    NASA Technical Reports Server (NTRS)

    Wu, Jiun-Tsong

    1992-01-01

    A method of computing the position of a user station receiving signals from the Global Positioning System (GPS) of navigational satellites compensates for most of the GPS ephemeris error, enabling the user station to reduce the error in its computed position substantially. The user station must have access to two or more reference stations at precisely known positions several hundred kilometers apart and must be in the neighborhood of the reference stations. The method is based on the fact that when GPS data are used to compute the baseline between a reference station and the user station, the vector error in the computed baseline is proportional to the ephemeris error and to the length of the baseline.

  12. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1980-01-01

    Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents is investigated. Correction of the sources of human error requires that one attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  13. Measurement Error and Equating Error in Power Analysis

    ERIC Educational Resources Information Center

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  14. Correcting numerical integration errors caused by small aliasing errors

    SciTech Connect

    Smallwood, D.O.

    1997-11-01

    Small sampling errors can have a large effect on numerically integrated waveforms. An example is the integration of acceleration to compute velocity and displacement waveforms. These large integration errors complicate checking the suitability of the acceleration waveform for reproduction on shakers. For waveforms typically used for shaker reproduction, the errors become significant when the frequency content of the waveform spans a large frequency range. It is shown that these errors are essentially independent of the numerical integration method used, and are caused by small aliasing errors from the frequency components near the Nyquist frequency. A method to repair the integrated waveforms is presented. The method involves using a model of the acceleration error, and fitting this model to the acceleration, velocity, and displacement waveforms to force the waveforms to fit the assumed initial and final values. The correction is then subtracted from the acceleration before integration. The method is effective where the errors are isolated to a small section of the time history. It is shown that the common method to repair these errors using a high pass filter is sometimes ineffective for this class of problem.
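
    A hedged sketch of the repair described above, assuming a linear model for the acceleration error and known (here zero) final velocity and displacement; the rectangular integration is deliberately crude and all parameters are illustrative.

        import numpy as np

        def correct_acceleration(a, dt, v_end=0.0, d_end=0.0):
            n = len(a)
            T = (n - 1) * dt
            t = np.arange(n) * dt
            v = np.cumsum(a) * dt             # crude running integrals
            d = np.cumsum(v) * dt
            # error model e(t) = c0 + c1*t; its integrals over [0, T] must
            # absorb the spurious end values:
            #   c0*T      + c1*T**2/2 = v[-1] - v_end
            #   c0*T**2/2 + c1*T**3/6 = d[-1] - d_end
            M = np.array([[T, T ** 2 / 2.0], [T ** 2 / 2.0, T ** 3 / 6.0]])
            c0, c1 = np.linalg.solve(M, [v[-1] - v_end, d[-1] - d_end])
            return a - (c0 + c1 * t)          # subtract, then re-integrate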

  15. Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?

    ERIC Educational Resources Information Center

    Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.

    2007-01-01

    This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…

  16. Error field measurement, correction and heat flux balancing on Wendelstein 7-X

    NASA Astrophysics Data System (ADS)

    Lazerson, Samuel A.; Otte, Matthias; Jakubowski, Marcin; Israeli, Ben; Wurden, Glen A.; Wenzel, Uwe; Andreeva, Tamara; Bozhenkov, Sergey; Biedermann, Christoph; Kocsis, Gábor; Szepesi, Tamás; Geiger, Joachim; Pedersen, Thomas Sunn; Gates, David; The W7-X Team

    2017-04-01

    The measurement and correction of error fields in Wendelstein 7-X (W7-X) is critical to long pulse high beta operation, as small error fields may cause overloading of divertor plates in some configurations. Accordingly, as part of a broad collaborative effort, the detection and correction of error fields on the W7-X experiment has been performed using the trim coil system in conjunction with the flux surface mapping diagnostic and high resolution infrared camera. In the early commissioning phase of the experiment, the trim coils were used to open an n/m = 1/2 island chain in a specially designed magnetic configuration. The flux surface mapping diagnostic was then able to directly image the magnetic topology of the experiment, allowing the inference of a small ∼4 cm intrinsic island chain. The suspected main sources of the error field, slight misalignment and deformations of the superconducting coils, are then confirmed through experimental modeling using the detailed measurements of the coil positions. Observations of the limiter temperatures in module 5 show a clear dependence of the limiter heat flux pattern as the perturbing fields are rotated. Plasma experiments without applied correcting fields show a significant asymmetry in neutral pressure (centered in module 4) and light emission (visible, H-alpha, CII, and CIII). Such pressure asymmetry is associated with plasma-wall (limiter) interaction asymmetries between the modules. Application of trim coil fields with an n = 1 waveform corrects the imbalance. Confirmation of the error fields allows the assessment of magnetic fields which resonate with the n/m = 5/5 island chain.

  17. Error studies for SNS Linac. Part 1: Transverse errors

    SciTech Connect

    Crandall, K.R.

    1998-12-31

    The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL) and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll).

  18. 48 CFR 2911.103 - Market acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 7 2011-10-01 2011-10-01 false Market acceptance. 2911... DESCRIBING AGENCY NEEDS Selecting And Developing Requirements Documents 2911.103 Market acceptance. The... offered have either achieved commercial market acceptance or been satisfactorily supplied to an...

  19. 48 CFR 11.103 - Market acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Market acceptance. 11.103... DESCRIBING AGENCY NEEDS Selecting and Developing Requirements Documents 11.103 Market acceptance. (a) Section... either— (i) Achieved commercial market acceptance; or (ii) Been satisfactorily supplied to an...

  20. Older Adults' Acceptance of Information Technology

    ERIC Educational Resources Information Center

    Wang, Lin; Rau, Pei-Luen Patrick; Salvendy, Gavriel

    2011-01-01

    This study investigated variables contributing to older adults' information technology acceptance through a survey, which was used to find factors explaining and predicting older adults' information technology acceptance behaviors. Four factors, including needs satisfaction, perceived usability, support availability, and public acceptance, were…

  1. 46 CFR 28.73 - Accepted organizations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 1 2011-10-01 2011-10-01 false Accepted organizations. 28.73 Section 28.73 Shipping... INDUSTRY VESSELS General Provisions § 28.73 Accepted organizations. An organization desiring to be designated by the Commandant as an accepted organization must request such designation in writing. As...

  2. 46 CFR 28.73 - Accepted organizations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 1 2014-10-01 2014-10-01 false Accepted organizations. 28.73 Section 28.73 Shipping... INDUSTRY VESSELS General Provisions § 28.73 Accepted organizations. An organization desiring to be designated by the Commandant as an accepted organization must request such designation in writing. As...

  3. 46 CFR 28.73 - Accepted organizations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 1 2013-10-01 2013-10-01 false Accepted organizations. 28.73 Section 28.73 Shipping... INDUSTRY VESSELS General Provisions § 28.73 Accepted organizations. An organization desiring to be designated by the Commandant as an accepted organization must request such designation in writing. As...

  4. 46 CFR 28.73 - Accepted organizations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 1 2012-10-01 2012-10-01 false Accepted organizations. 28.73 Section 28.73 Shipping... INDUSTRY VESSELS General Provisions § 28.73 Accepted organizations. An organization desiring to be designated by the Commandant as an accepted organization must request such designation in writing. As...

  5. 46 CFR 28.73 - Accepted organizations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false Accepted organizations. 28.73 Section 28.73 Shipping... INDUSTRY VESSELS General Provisions § 28.73 Accepted organizations. An organization desiring to be designated by the Commandant as an accepted organization must request such designation in writing. As...

  6. 48 CFR 2911.103 - Market acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... offered have either achieved commercial market acceptance or been satisfactorily supplied to an agency... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Market acceptance. 2911... DESCRIBING AGENCY NEEDS Selecting And Developing Requirements Documents 2911.103 Market acceptance....

  7. 21 CFR 820.86 - Acceptance status.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Acceptance status. 820.86 Section 820.86 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES QUALITY SYSTEM REGULATION Acceptance Activities § 820.86 Acceptance status. Each manufacturer...

  8. 48 CFR 11.103 - Market acceptance.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 1 2013-10-01 2013-10-01 false Market acceptance. 11.103... DESCRIBING AGENCY NEEDS Selecting and Developing Requirements Documents 11.103 Market acceptance. (a) Section... either— (i) Achieved commercial market acceptance; or (ii) Been satisfactorily supplied to an...

  9. 48 CFR 11.103 - Market acceptance.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 1 2014-10-01 2014-10-01 false Market acceptance. 11.103... DESCRIBING AGENCY NEEDS Selecting and Developing Requirements Documents 11.103 Market acceptance. (a) 41 U.S...) Achieved commercial market acceptance; or (ii) Been satisfactorily supplied to an agency under current...

  10. 48 CFR 2911.103 - Market acceptance.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 7 2013-10-01 2012-10-01 true Market acceptance. 2911.103... DESCRIBING AGENCY NEEDS Selecting And Developing Requirements Documents 2911.103 Market acceptance. The... offered have either achieved commercial market acceptance or been satisfactorily supplied to an...

  11. 48 CFR 11.103 - Market acceptance.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 1 2012-10-01 2012-10-01 false Market acceptance. 11.103... DESCRIBING AGENCY NEEDS Selecting and Developing Requirements Documents 11.103 Market acceptance. (a) Section... either— (i) Achieved commercial market acceptance; or (ii) Been satisfactorily supplied to an...

  12. 48 CFR 2911.103 - Market acceptance.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 7 2012-10-01 2012-10-01 false Market acceptance. 2911... DESCRIBING AGENCY NEEDS Selecting And Developing Requirements Documents 2911.103 Market acceptance. The... offered have either achieved commercial market acceptance or been satisfactorily supplied to an...

  13. 48 CFR 2911.103 - Market acceptance.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 7 2014-10-01 2014-10-01 false Market acceptance. 2911... DESCRIBING AGENCY NEEDS Selecting And Developing Requirements Documents 2911.103 Market acceptance. The... offered have either achieved commercial market acceptance or been satisfactorily supplied to an...

  14. The relation between remembered parental acceptance in childhood and self-acceptance among young Turkish adults.

    PubMed

    Kuyumcu, Behire; Rohner, Ronald P

    2016-05-11

    This study examined the relation between young adults' age and remembrances of parental acceptance in childhood, and their current self-acceptance. The study was based on a sample of 236 young adults in Turkey (139 women and 97 men). The adult version of the Parental Acceptance-Rejection/Control Questionnaire for mothers and fathers, the Self-Acceptance subscale of the Psychological Well-Being Scale, and the Personal Information Form were used as measures. Results showed that both men and women tended to remember having been accepted in childhood by both their mothers and fathers. Women, however, reported more maternal and paternal acceptance in childhood than did men. Similarly, the level of self-acceptance was high among both men and women. However, women's self-acceptance was higher than men's. Correlational analyses showed that self-acceptance was positively related to remembrances of maternal and paternal acceptance among both women and men. Results indicated that age and remembered paternal acceptance significantly predicted women's self-acceptance. Age and remembered maternal acceptance made significant and independent contributions to men's self-acceptance. Men's remembrances of paternal acceptance in childhood did not make a significant contribution to their self-acceptance. Finally, the relation between women's age and self-acceptance was significantly moderated by remembrances of paternal acceptance in childhood.

  15. Error begat error: design error analysis and prevention in social infrastructure projects.

    PubMed

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in concert to prevent design errors from occurring and so ensure that safety and project performance are improved.

  16. Children's Scale Errors with Tools

    ERIC Educational Resources Information Center

    Casler, Krista; Eshleman, Angelica; Greene, Kimberly; Terziyan, Treysi

    2011-01-01

    Children sometimes make "scale errors," attempting to interact with tiny object replicas as though they were full size. Here, we demonstrate that instrumental tools provide special insight into the origins of scale errors and, moreover, into the broader nature of children's purpose-guided reasoning and behavior with objects. In Study 1, 1.5- to…

  17. Dual Processing and Diagnostic Errors

    ERIC Educational Resources Information Center

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  18. Explaining Errors in Children's Questions

    ERIC Educational Resources Information Center

    Rowland, Caroline F.

    2007-01-01

    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that,…

  19. Error Estimates for Mixed Methods.

    DTIC Science & Technology

    1979-03-01

    This paper presents abstract error estimates for mixed methods for the approximate solution of elliptic boundary value problems. These estimates are then applied to obtain quasi-optimal error estimates in the usual Sobolev norms for four examples: three mixed methods for the biharmonic problem and a mixed method for 2nd order elliptic problems. (Author)

  20. Error Correction, Revision, and Learning

    ERIC Educational Resources Information Center

    Truscott, John; Hsu, Angela Yi-ping

    2008-01-01

    Previous research has shown that corrective feedback on an assignment helps learners reduce their errors on that assignment during the revision process. Does this finding constitute evidence that learning resulted from the feedback? Differing answers play an important role in the ongoing debate over the effectiveness of error correction,…

  1. Human Error: A Concept Analysis

    NASA Technical Reports Server (NTRS)

    Hansen, Frederick D.

    2007-01-01

    Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition of human error or how to prevent it. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed and a definition of human error is offered.

  2. Twenty questions about student errors

    NASA Astrophysics Data System (ADS)

    Fisher, Kathleen M.; Lipson, Joseph Isaac

    Errors in science learning (errors in expression of organized, purposeful thought within the domain of science) provide a window through which glimpses of mental functioning can be obtained. Errors are valuable and normal occurrences in the process of learning science. A student can use his/her errors to develop a deeper understanding of a concept as long as the error can be recognized and appropriate, informative feedback can be obtained. A safe, non-threatening, and nonpunitive environment which encourages dialogue helps students to express their conceptions and to risk making errors. Pedagogical methods that systematically address common student errors produce significant gains in student learning. Just as the nature-nurture interaction is integral to the development of living things, so the individual-environment interaction is basic to thought processes. At a minimum, four systems interact: (1) the individual problem solver (who has a worldview, relatively stable cognitive characteristics, relatively malleable mental states and conditions, and aims or intentions), (2) the task to be performed (including the relative importance and nature of the task), (3) the knowledge domain in which the task is contained, and (4) the environment (including orienting conditions and the social and physical context). Several basic assumptions underlie research on errors and alternative conceptions. Among these are: knowledge and thought involve active, constructive processes; there are many ways to acquire, organize, store, retrieve, and think about a given concept or event; and understanding is achieved by successive approximations. Application of these ideas will require a fundamental change in how science is taught.

  3. 14 CFR 189.5 - Limitation of liability.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... liability. The United States is not liable for any omission, error, or delay in transmitting or relaying, or for any failure to transmit or relay, any message accepted for transmission or relayed under this part, even if the omission, error, delay, or failure to transmit or relay is caused by the negligence of...

  4. 14 CFR 189.5 - Limitation of liability.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... liability. The United States is not liable for any omission, error, or delay in transmitting or relaying, or for any failure to transmit or relay, any message accepted for transmission or relayed under this part, even if the omission, error, delay, or failure to transmit or relay is caused by the negligence of...

  5. Onorbit IMU alignment error budget

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

    The Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU) form a complex navigation system with a multitude of error sources. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.
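
    Combining independent error sources "in a rational way" usually means a root-sum-square (RSS) of the 1-sigma contributions. The sketch below shows only the arithmetic; the entries are invented stand-ins, not the actual STS-1 budget terms.

    ```python
    import math

    # Hypothetical per-axis 1-sigma contributions, in arc-seconds (illustrative)
    contributions = {
        "star tracker noise": 40.0,
        "star tracker mounting": 30.0,
        "navigation base flexure": 35.0,
        "IMU resolver error": 28.0,
    }

    # Independent errors combine as the root-sum-square of the contributions.
    rss = math.sqrt(sum(v ** 2 for v in contributions.values()))
    print(f"combined 1-sigma alignment error: {rss:.0f} arc-seconds per axis")
    ```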

  6. Angle interferometer cross axis errors

    SciTech Connect

    Bryan, J.B.; Carter, D.L.; Thompson, S.L.

    1994-01-01

    Angle interferometers are commonly used to measure surface plate flatness. An error can exist when the centerline of the double corner cube mirror assembly is not square to the surface plate and the guide bar for the mirror sled is curved. Typical errors can be one to two microns per meter. A similar error can exist in the calibration of rotary tables when the centerline of the double corner cube mirror assembly is not square to the axes of rotation of the angle calibrator and the calibrator axis is not parallel to the rotary table axis. Commercial double corner cube assemblies typically have non-parallelism errors of ten milli-radians between their centerlines and their sides and similar values for non-squareness between their centerlines and end surfaces. The authors have developed a simple method for measuring these errors and correcting them by remachining the reference surfaces.

  7. Angle interferometer cross axis errors

    NASA Astrophysics Data System (ADS)

    Bryan, J. B.; Carter, D. L.; Thompson, S. L.

    1994-01-01

    Angle interferometers are commonly used to measure surface plate flatness. An error can exist when the centerline of the double corner cube mirror assembly is not square to the surface plate and the guide bar for the mirror sled is curved. Typical errors can be one to two microns per meter. A similar error can exist in the calibration of rotary tables when the centerline of the double corner cube mirror assembly is not square to the axes of rotation of the angle calibrator and the calibrator axis is not parallel to the rotary table axis. Commercial double corner cube assemblies typically have non-parallelism errors of ten milli-radians between their centerlines and their sides and similar values for non-squareness between their centerlines and end surfaces. The authors have developed a simple method for measuring these errors and correcting them.

  8. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  9. Medical error reduction and tort reform through private, contractually-based quality medicine societies.

    PubMed

    MacCourt, Duncan; Bernstein, Joseph

    2009-01-01

    The current medical malpractice system is broken. Many patients injured by malpractice are not compensated, whereas some patients who recover in tort have not suffered medical negligence; furthermore, the system's failures demoralize patients and physicians. But most importantly, the system perpetuates medical error because the adversarial nature of litigation induces a so-called "Culture of Silence" in physicians eager to shield themselves from liability. This silence leads to the pointless repetition of error, as the open discussion and analysis of the root causes of medical mistakes does not take place as fully as it should. In 1993, President Clinton's Task Force on National Health Care Reform considered a solution characterized by Enterprise Medical Liability (EML), Alternative Dispute Resolution (ADR), some limits on recovery for non-pecuniary damages (Caps), and offsets for collateral source recovery. Yet this list of ingredients did not include a strategy to surmount the difficulties associated with each element. Specifically, EML might be efficient, but none of the enterprises contemplated to assume responsibility, i.e., hospitals and payers, control physician behavior enough so that it would be fair to foist liability on them. Likewise, although ADR might be efficient, it will be resisted by individual litigants who perceive themselves as harmed by it. Finally, while limitations on collateral source recovery and damages might effectively reduce costs, patients and trial lawyers likely would not accept them without recompense. The task force also did not place error reduction at the center of malpractice tort reform, a logical and strategic error in our view. In response, we propose a new system that employs the ingredients suggested by the task force but also addresses the problems with each. We also explicitly consider steps to rebuff the Culture of Silence and promote error reduction. We assert that patients would be better off with a system where

  10. Acceptability of GM foods among Pakistani consumers.

    PubMed

    Ali, Akhter; Rahut, Dil Bahadur; Imtiaz, Muhammad

    2016-04-02

    In Pakistan, the majority of consumers do not have information about genetically modified (GM) foods. In developing countries, and particularly in Pakistan, few studies have focused on consumers' acceptability of GM foods. Using a comprehensive primary dataset collected from 320 consumers in Pakistan in 2013, this study analyzes the determinants of consumers' acceptability of GM foods. The data were analyzed by employing the bivariate probit model and censored least absolute deviation (CLAD) models. The empirical results indicated that urban consumers are more aware of GM foods than rural consumers. Acceptance of GM foods was higher among female consumers than among male consumers. In addition, older consumers were more willing to accept GM foods than young consumers. The acceptability of GM foods was also higher among wealthier households. Low price is the key factor leading to the acceptability of GM foods. The acceptability of GM foods also reduces the risks among Pakistani consumers.

  11. Designing to Control Flight Crew Errors

    NASA Technical Reports Server (NTRS)

    Schutte, Paul C.; Willshire, Kelli F.

    1997-01-01

    It is widely accepted that human error is a major contributing factor in aircraft accidents. There has been a significant amount of research into why these errors occur, and many reports state that the design of the flight deck can actually predispose humans to err. This research has led to the call for changes in design according to human factors and human-centered principles. The National Aeronautics and Space Administration's (NASA) Langley Research Center has initiated an effort to design a human-centered flight deck from a clean slate (i.e., without the constraints of existing designs). The effort will be based on recent research in human-centered design philosophy and mission management categories. This design will match the human's model of the mission and function of the aircraft to reduce unnatural or non-intuitive interfaces. The product of this effort will be a flight deck design description, including training and procedures, a cross reference or paper trail back to design hypotheses, and an evaluation of the design. The present paper will discuss the philosophy, process, and status of this design effort.

  12. Probabilistic simulation for flaw acceptance by dye-penetrant inspection

    NASA Technical Reports Server (NTRS)

    Russell, D. A.; Keremes, J. J.

    1990-01-01

    This paper examines the problems encountered in assessing the reliability of dye-penetrant nondestructive inspection (NDI) techniques in preventing failures due to undetected surface flaws, as well as from flaw acceptance (Fitness-For-Purpose). A Monte Carlo simulation procedure which includes the major variables of the problem is presented as a means of quantifying reliability. Some issues associated with distribution selection are examined. A methodology for selecting the penetrant type and flaw acceptance size for the specific components analyzed using the simulation is proposed. Current methodology limitations are discussed along with possible future effort. Penetrant selection and acceptable sizes of detected flaws are based on a probabilistic assessment of the effect of component and dye-penetrant system variables on structural reliability.

  13. Consumer Acceptability of Intramuscular Fat

    PubMed Central

    Frank, Damian; Joo, Seon-Tea

    2016-01-01

    Fat in meat greatly improves eating quality, yet many consumers avoid visible fat, mainly because of health concerns. Generations of consumers, especially in the English-speaking world, have been convinced by health authorities that animal fat, particularly saturated or solid fat, should be reduced or avoided to maintain a healthy diet. Decades of negative messages regarding animal fats have resulted in general avoidance of fatty cuts of meat. Paradoxically, low fat or lean meat tends to have poor eating quality and flavor and low consumer acceptability. The failure of low-fat high-carbohydrate diets to curb “globesity” has prompted many experts to re-evaluate the place of fat in human diets, including animal fat. Attitudes towards fat vary dramatically between and within cultures. Previous generations of humans sought out fatty cuts of meat for their superior sensory properties. Many consumers in East and Southeast Asia have traditionally valued more fatty meat cuts. As nutritional messages around dietary fat change, there is evidence that attitudes towards animal fat are changing and many consumers are rediscovering and embracing fattier cuts of meat, including marbled beef. The present work provides a short overview of the unique sensory characteristics of marbled beef and changing consumer preferences for fat in meat in general. PMID:28115880

  14. Exploring Discretization Error in Simulation-Based Aerodynamic Databases

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Nemec, Marian

    2010-01-01

    This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals, and we use adaptive mesh refinement to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database computed for a NACA 0012 airfoil consisting of 120 cases. We compare the cost and accuracy of two approaches for aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs in both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or the stiffness in the governing equations near the incompressible limit are shown to dramatically increase discretization error, requiring additional mesh resolution to control. Results show that such pathologies lead to error levels that vary by over a factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes which span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error-estimation in database quality.
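
    The paper controls discretization error with adjoint-weighted residual estimates; as a simpler classical stand-in, Richardson extrapolation conveys the flavor of estimating output error from two mesh levels. The numbers below are invented.

    ```python
    def richardson_error(f_coarse: float, f_fine: float, r: float = 2.0, p: int = 2) -> float:
        """Classical Richardson estimate of the discretization error remaining
        in the fine-mesh output.

        f_coarse, f_fine: the same output (e.g., a drag coefficient) computed on
        two meshes; r: mesh refinement ratio; p: formal order of accuracy.
        """
        return (f_fine - f_coarse) / (r ** p - 1.0)

    # Made-up drag values on a coarse and a fine mesh: the fine-mesh answer is
    # estimated to be within about 5e-4 of the mesh-converged value.
    print(richardson_error(0.0285, 0.0270))
    ```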

  15. Educational agenda for diagnostic error reduction.

    PubMed

    Trowbridge, Robert L; Dhaliwal, Gurpreet; Cosby, Karen S

    2013-10-01

    Diagnostic errors are a major patient safety concern. Although the majority of diagnostic errors are partially attributable to cognitive mistakes, the most effective means of improving clinician cognition in order to achieve gains in diagnostic reliability are unclear. We propose a tripartite educational agenda for improving diagnostic performance among students, residents and practising physicians. This agenda includes strengthening the metacognitive abilities of clinicians, fostering intuitive reasoning and increasing awareness of the role of systems in the diagnostic process. The evidence supporting initiatives in each of these realms is reviewed and a course of future implementation and study is proposed. The barriers to designing and implementing this agenda are substantial and include limited evidence supporting these initiatives and the challenges of changing the practice patterns of practising physicians. Implementation will need to be accompanied by rigorous evaluation.

  16. Force Limited Vibration Testing

    NASA Technical Reports Server (NTRS)

    Scharton, Terry; Chang, Kurng Y.

    2005-01-01

    This slide presentation reviews the concept and applications of force limited vibration testing. The goal of vibration testing of aerospace hardware is to identify problems that would result in flight failures. The commonly used aerospace vibration tests use artificially high shaker forces and responses at the resonance frequencies of the test item. It has become common to limit the acceleration responses in the test to those predicted for flight. This requires an analysis of the acceleration response and requires placing accelerometers on the test item. With the advent of piezoelectric gages it has become possible to improve vibration testing. The basic equations are reviewed. Force limits are analogous and complementary to the acceleration specifications used in conventional vibration testing. Just as the acceleration specification is the frequency spectrum envelope of the in-flight acceleration at the interface between the test item and flight mounting structure, the force limit is the envelope of the in-flight force at the interface. In force limited vibration tests, both the acceleration and force specifications are needed, and the force specification is generally based on and proportional to the acceleration specification. Therefore, force limiting does not compensate for errors in the development of the acceleration specification, e.g., too much conservatism or the lack thereof; these errors will carry over into the force specification. Since in-flight vibratory force data are scarce, force limits are often derived from coupled system analyses and impedance information obtained from measurements or finite element models (FEM). Fortunately, data on the interface forces between systems and components are now available from system acoustic and vibration tests of development test models and from a few flight experiments. Semi-empirical methods of predicting force limits are currently being developed on the basis of the limited flight and system test

  17. Reduced error signalling in medication-naive children with ADHD: associations with behavioural variability and post-error adaptations

    PubMed Central

    Plessen, Kerstin J.; Allen, Elena A.; Eichele, Heike; van Wageningen, Heidi; Høvik, Marie Farstad; Sørensen, Lin; Worren, Marius Kalsås; Hugdahl, Kenneth; Eichele, Tom

    2016-01-01

    Background We examined the blood-oxygen level–dependent (BOLD) activation in brain regions that signal errors and their association with intraindividual behavioural variability and adaptation to errors in children with attention-deficit/hyperactivity disorder (ADHD). Methods We acquired functional MRI data during a Flanker task in medication-naive children with ADHD and healthy controls aged 8–12 years and analyzed the data using independent component analysis. For components corresponding to performance monitoring networks, we compared activations across groups and conditions and correlated them with reaction times (RT). Additionally, we analyzed post-error adaptations in behaviour and motor component activations. Results We included 25 children with ADHD and 29 controls in our analysis. Children with ADHD displayed reduced activation to errors in cingulo-opercular regions and higher RT variability, but no differences of interference control. Larger BOLD amplitude to error trials significantly predicted reduced RT variability across all participants. Neither group showed evidence of post-error response slowing; however, post-error adaptation in motor networks was significantly reduced in children with ADHD. This adaptation was inversely related to activation of the right-lateralized ventral attention network (VAN) on error trials and to task-driven connectivity between the cingulo-opercular system and the VAN. Limitations Our study was limited by the modest sample size and imperfect matching across groups. Conclusion Our findings show a deficit in cingulo-opercular activation in children with ADHD that could relate to reduced signalling for errors. Moreover, the reduced orienting of the VAN signal may mediate deficient post-error motor adaptions. Pinpointing general performance monitoring problems to specific brain regions and operations in error processing may help to guide the targets of future treatments for ADHD. PMID:26441332

  18. Error detection for genetic data, using likelihood methods

    SciTech Connect

    Ehm, M.G.; Kimmel, M.; Cottingham, R.W. Jr.

    1996-01-01

    As genetic maps become denser, the effect of laboratory typing errors becomes more serious. We review a general method for detecting errors in pedigree genotyping data that is a variant of the likelihood-ratio test statistic. It pinpoints individuals and loci with relatively unlikely genotypes. Power and significance studies using Monte Carlo methods are shown by using simulated data with pedigree structures similar to the CEPH pedigrees and a larger experimental pedigree used in the study of idiopathic dilated cardiomyopathy (DCM). The studies show the index detects errors for small values of θ with high power and an acceptable false positive rate. The method was also used to check for errors in DCM laboratory pedigree data and to estimate the error rate in CEPH chromosome 6 data. The errors flagged by our method in the DCM pedigree were confirmed by the laboratory. The results are consistent with estimated false-positive and false-negative rates obtained using simulation. 21 refs., 5 figs., 2 tabs.

  19. Consequences of leaf calibration errors on IMRT delivery

    NASA Astrophysics Data System (ADS)

    Sastre-Padro, M.; Welleweerd, J.; Malinen, E.; Eilertsen, K.; Olsen, D. R.; van der Heide, U. A.

    2007-02-01

    IMRT treatments using multi-leaf collimators may involve a large number of segments in order to spare the organs at risk. When a large proportion of these segments are small, leaf positioning errors may become relevant and have therapeutic consequences. The performance of four head and neck IMRT treatments under eight different cases of leaf positioning errors has been studied. Systematic leaf pair offset errors in the range of ±2.0 mm were introduced, thus modifying the segment sizes of the original IMRT plans. Thirty-six films were irradiated with the original and modified segments. The dose difference and the gamma index (with 2%/2 mm criteria) were used for evaluating the discrepancies between the irradiated films. The median dose differences were linearly related to the simulated leaf pair errors. In the worst case, a 2.0 mm error generated a median dose difference of 1.5%. Following the gamma analysis, two out of the 32 modified plans were not acceptable. In conclusion, small systematic leaf bank positioning errors have a measurable impact on the delivered dose and may have consequences for the therapeutic outcome of IMRT.
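
    The gamma index used for the film comparison above combines a dose-difference criterion with a distance-to-agreement criterion. The following is a minimal 1-D sketch with the same 2%/2 mm criteria; real evaluations are 2-D or 3-D and interpolate the evaluated dose, so this shows only the core computation.

    ```python
    import numpy as np

    def gamma_1d(dose_ref, dose_eval, x, dose_tol=0.02, dist_tol=2.0):
        """Simplified global 1-D gamma index (2%/2 mm). A point passes if gamma <= 1."""
        gamma = np.empty_like(dose_ref)
        d_norm = dose_tol * dose_ref.max()       # global dose criterion
        for i, xi in enumerate(x):
            dd = (dose_eval - dose_ref[i]) / d_norm
            dx = (x - xi) / dist_tol
            gamma[i] = np.sqrt(dx ** 2 + dd ** 2).min()
        return gamma

    x = np.linspace(-30.0, 30.0, 121)             # position, mm
    ref = np.exp(-((x / 15.0) ** 2))              # toy reference profile
    shifted = np.exp(-(((x - 1.0) / 15.0) ** 2))  # surrogate for a 1 mm leaf offset
    print(f"pass rate: {np.mean(gamma_1d(ref, shifted, x) <= 1.0):.1%}")
    ```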

  20. Altimeter error sources at the 10-cm performance level

    NASA Technical Reports Server (NTRS)

    Martin, C. F.

    1977-01-01

    Error sources affecting the calibration and operational use of a 10 cm altimeter are examined to determine the magnitudes of current errors and the investigations necessary to reduce them to acceptable bounds. Errors considered include those affecting operational data pre-processing and those affecting altitude bias determination, with error budgets developed for both. The most significant error sources affecting pre-processing are bias calibration, propagation corrections for the ionosphere, and measurement noise. No ionospheric models are currently validated at the required 10-25% accuracy level. The optimum smoothing to reduce the effects of measurement noise is investigated and found to be on the order of one second, based on the TASC model of geoid undulations. The 10 cm calibrations are found to be feasible only through the use of altimeter passes at very high elevation relative to a tracking station that tracks very close to the time of the altimeter pass, such as a high elevation pass across the island of Bermuda. By far the largest error source, based on the current state of the art, is the location of the island tracking station relative to mean sea level in the surrounding ocean areas.

  1. Acceptance in Romantic Relationships: The Frequency and Acceptability of Partner Behavior Inventory

    ERIC Educational Resources Information Center

    Doss, Brian D.; Christensen, Andrew

    2006-01-01

    Despite the recent emphasis on acceptance in romantic relationships, no validated measure of relationship acceptance presently exists. To fill this gap, the 20-item Frequency and Acceptability of Partner Behavior Inventory (FAPBI; A. Christensen & N. S. Jacobson, 1997) was created to assess separately the acceptability and frequency of both…

  2. 24 CFR 203.202 - Plan acceptability and acceptance renewal criteria-general.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... HUD acceptance of such change or modification, except that changes mandated by other applicable laws... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Plan acceptability and acceptance... Underwriting Procedures Insured Ten-Year Protection Plans (plan) § 203.202 Plan acceptability and...

  3. Controlling type-1 error rates in whole effluent toxicity testing

    SciTech Connect

    Smith, R.; Johnson, S.C.

    1995-12-31

    A form of variability, called the dose x test interaction, has been found to affect the variability of the mean differences from control in the statistical tests used to evaluate Whole Effluent Toxicity Tests for compliance purposes. Since the dose x test interaction is not included in these statistical tests, the assumed type-1 and type-2 error rates can be incorrect. The accepted type-1 error rate for these tests is 5%. Analysis of over 100 Ceriodaphnia, fathead minnow and sea urchin fertilization tests showed that when the test x dose interaction term was not included in the calculations the type-1 error rate was inflated to as high as 20%. In a compliance setting, this problem may lead to incorrect regulatory decisions. Statistical tests are proposed that properly incorporate the dose x test interaction variance.
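
    The inflation mechanism is easy to reproduce: give every replicate in the dose group one shared random shift (the dose x test interaction) and apply the usual t-test, which assumes no such term exists. A simulation sketch with invented effect sizes:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def one_effluent_test(sigma_interaction=0.5, n_reps=10):
        """One simulated test with no true dose effect; the interaction term
        shifts all dose-group replicates together but is ignored by the t-test."""
        shift = rng.normal(0.0, sigma_interaction)
        control = rng.normal(0.0, 1.0, n_reps)
        dose = rng.normal(shift, 1.0, n_reps)
        return stats.ttest_ind(control, dose).pvalue

    pvals = np.array([one_effluent_test() for _ in range(5000)])
    print(f"empirical type-1 error rate: {(pvals < 0.05).mean():.3f} (nominal 0.05)")
    ```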

  4. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    PubMed

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests.

  5. Mars gravitational field estimation error

    NASA Technical Reports Server (NTRS)

    Compton, H. R.; Daniels, E. F.

    1972-01-01

    The error covariance matrices associated with a weighted least-squares differential correction process have been analyzed for accuracy in determining the gravitational coefficients through degree and order five in the Mars gravitational potential function. The results are presented in terms of standard deviations for the assumed estimated parameters. The covariance matrices were calculated by assuming Doppler tracking data from a Mars orbiter, a priori statistics for the estimated parameters, and model error uncertainties for tracking-station locations, the Mars ephemeris, the astronomical unit, the Mars gravitational constant (G sub M), and the gravitational coefficients of degrees six and seven. Model errors were treated by using the concept of consider parameters.

  6. Stochastic Models of Human Errors

    NASA Technical Reports Server (NTRS)

    Elshamy, Maged; Elliott, Dawn M. (Technical Monitor)

    2002-01-01

    Humans play an important role in the overall reliability of engineering systems. More often than not, accidents and system failures are traced to human errors. Therefore, in order to have a meaningful system risk analysis, the reliability of the human element must be taken into consideration. Describing the human error process by mathematical models is key to analyzing contributing factors. The objective of this research effort is therefore to establish stochastic models, substantiated by a sound theoretical foundation, to address the occurrence of human errors in the processing of the space shuttle.

  7. Error bounds in cascading regressions

    USGS Publications Warehouse

    Karlinger, M.R.; Troutman, B.M.

    1985-01-01

    Cascading regressions is a technique for predicting a value of a dependent variable when no paired measurements exist to perform a standard regression analysis. Biases in the coefficients of a cascaded-regression line, as well as the error variance of points about the line, are functions of the correlation coefficient between the dependent and independent variables. Although this correlation cannot be computed because of the lack of paired data, bounds can be placed on the errors through the required properties of the correlation coefficient. The potential mean-squared error of a cascaded-regression prediction can be large, as illustrated through an example using geomorphologic data. © 1985 Plenum Publishing Corporation.
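
    The bound comes from positive semidefiniteness of the 3 x 3 correlation matrix: given the observable correlations r_xz and r_zy, the unobservable r_xy must lie in r_xz*r_zy +/- sqrt((1 - r_xz^2)(1 - r_zy^2)). A small sketch of the resulting prediction-error bounds (an illustrative interface, not the authors' code):

    ```python
    import numpy as np

    def cascaded_regression_bounds(r_xz, r_zy, var_y):
        """Bound the unknown X-Y correlation, and hence the residual variance
        of a direct Y-on-X regression, from the two observable correlations."""
        s = np.sqrt((1.0 - r_xz ** 2) * (1.0 - r_zy ** 2))
        r_lo, r_hi = r_xz * r_zy - s, r_xz * r_zy + s
        r_abs_max = max(abs(r_lo), abs(r_hi))
        r_abs_min = 0.0 if r_lo <= 0.0 <= r_hi else min(abs(r_lo), abs(r_hi))
        var_best = var_y * (1.0 - r_abs_max ** 2)    # smallest residual variance
        var_worst = var_y * (1.0 - r_abs_min ** 2)   # largest residual variance
        return (r_lo, r_hi), (var_best, var_worst)

    print(cascaded_regression_bounds(r_xz=0.9, r_zy=0.8, var_y=1.0))
    ```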

  8. Error Analysis and Propagation in Metabolomics Data Analysis.

    PubMed

    Moseley, Hunter N B

    2013-01-01

    Error analysis plays a fundamental role in describing the uncertainty in experimental results. It has several fundamental uses in metabolomics including experimental design, quality control of experiments, the selection of appropriate statistical methods, and the determination of uncertainty in results. Furthermore, the importance of error analysis has grown with the increasing number, complexity, and heterogeneity of measurements characteristic of 'omics research. The increase in data complexity is particularly problematic for metabolomics, which has more heterogeneity than other omics technologies due to the much wider range of molecular entities detected and measured. This review introduces the fundamental concepts of error analysis as they apply to a wide range of metabolomics experimental designs and it discusses current methodologies for determining the propagation of uncertainty in appropriate metabolomics data analysis. These methodologies include analytical derivation and approximation techniques, Monte Carlo error analysis, and error analysis in metabolic inverse problems. Current limitations of each methodology with respect to metabolomics data analysis are also discussed.
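
    Monte Carlo error analysis, one of the methodologies reviewed here, propagates measurement uncertainty by resampling. A minimal sketch for a derived quantity (the ratio of two metabolite intensities, with invented means and standard deviations), checked against the first-order delta-method approximation:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 200_000

    a = rng.normal(100.0, 5.0, n)   # metabolite A intensity (illustrative)
    b = rng.normal(40.0, 4.0, n)    # metabolite B intensity (illustrative)
    ratio = a / b

    # First-order propagation for comparison: sd ~= (a/b)*sqrt((sa/a)^2 + (sb/b)^2)
    delta_sd = (100.0 / 40.0) * np.sqrt((5.0 / 100.0) ** 2 + (4.0 / 40.0) ** 2)

    print(f"Monte Carlo:  {ratio.mean():.3f} +/- {ratio.std():.3f}")
    print(f"delta method: sd = {delta_sd:.3f}")
    ```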

  9. Medication error prevention in the school setting: a closer look.

    PubMed

    Richmond, Sandra L

    2011-09-01

    Empirical evidence has identified that medication errors occur in the school setting; however, there is little research that identifies medication error prevention strategies specific to the school environment. This article reviews common medication errors that occur in the school setting and presents potential medication prevention strategies, such as developing medication error reporting systems, using technology, reviewing systems and processes that support current medication administration practices, and limiting distractions. The Standards of Professional Performance developed by the National Association of School Nurses identifies the need for school nurses to enhance the quality and effectiveness of their practice. Improving the safety of medication administration and preventing medication errors are examples of how nurses can demonstrate meeting this standard.

  10. Modelling non-Gaussianity of background and observational errors by the Maximum Entropy method

    NASA Astrophysics Data System (ADS)

    Pires, Carlos; Talagrand, Olivier; Bocquet, Marc

    2010-05-01

    The Best Linear Unbiased Estimator (BLUE) has widely been used in atmospheric-oceanic data assimilation. However, when data errors have non-Gaussian pdfs, the BLUE differs from the absolute Minimum Variance Unbiased Estimator (MVUE), minimizing the mean square analysis error. The non-Gaussianity of errors can be due to the statistical skewness and positiveness of some physical observables (e.g. moisture, chemical species) or due to the nonlinearity of the data assimilation models and observation operators acting on Gaussian errors. Non-Gaussianity of assimilated data errors can be justified from a priori hypotheses or inferred from statistical diagnostics of innovations (observation minus background). Following this rationale, we compute measures of innovation non-Gaussianity, namely its skewness and kurtosis, relating it to: a) the non-Gaussianity of the individual error themselves, b) the correlation between nonlinear functions of errors, and c) the heteroscedasticity of errors within diagnostic samples. Those relationships impose bounds for skewness and kurtosis of errors which are critically dependent on the error variances, thus leading to a necessary tuning of error variances in order to accomplish consistency with innovations. We evaluate the sub-optimality of the BLUE as compared to the MVUE, in terms of excess of error variance, under the presence of non-Gaussian errors. The error pdfs are obtained by the maximum entropy method constrained by error moments up to fourth order, from which the Bayesian probability density function and the MVUE are computed. The impact is higher for skewed extreme innovations and grows in average with the skewness of data errors, especially if those skewnesses have the same sign. Application has been performed to the quality-accepted ECMWF innovations of brightness temperatures of a set of High Resolution Infrared Sounder channels. In this context, the MVUE has led in some extreme cases to a potential reduction of 20-60% error
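
    The first diagnostic step implied here is simply estimating the skewness and excess kurtosis of an innovation sample. A hedged sketch, with a synthetic skewed sample standing in for real innovations:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Surrogate innovations (observation minus background); a shifted gamma
    # mimics the skewed errors of a positive observable such as moisture.
    innov = rng.gamma(2.0, 1.0, 10_000) - 2.0

    print(f"skewness = {stats.skew(innov):.2f}")
    print(f"excess kurtosis = {stats.kurtosis(innov):.2f}")  # 0 for a Gaussian
    # Clearly non-zero values signal that the BLUE is suboptimal relative to
    # the minimum variance unbiased estimator.
    ```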

  11. Synchrotron radiation measurement of multiphase fluid saturations in porous media: Experimental technique and error analysis

    NASA Astrophysics Data System (ADS)

    Tuck, David M.; Bierck, Barnes R.; Jaffé, Peter R.

    1998-06-01

    Multiphase flow in porous media is an important research topic. In situ, nondestructive experimental methods for studying multiphase flow are important for improving our understanding and the theory. Rapid changes in fluid saturation, characteristic of immiscible displacement, are difficult to measure accurately using gamma rays due to practical restrictions on source strength. Our objective is to describe a synchrotron radiation technique for rapid, nondestructive saturation measurements of multiple fluids in porous media, and to present a precision and accuracy analysis of the technique. Synchrotron radiation provides a high intensity, inherently collimated photon beam of tunable energy which can yield accurate measurements of fluid saturation in just one second. Measurements were obtained with a precision of ±0.01 or better for tetrachloroethylene (PCE) in a 2.5 cm thick glass-bead porous medium using a counting time of 1 s. The normal distribution was shown to provide acceptable confidence limits for PCE saturation changes. Sources of error include heat load on the monochromator, periodic movement of the source beam, and errors in the stepping-motor positioning system. Hypodermic needles pushed into the medium to inject PCE changed the porosity in a region within approximately ±1 mm of the injection point. Improved mass balance between the known and measured PCE injection volumes was obtained when appropriate corrections were applied to calibration values near the injection point.
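
    Attenuation-based saturation measurement rests on Beer-Lambert: the measured ln(I0/I) at each photon energy is a linear combination of the fluid saturations. With two well-chosen energies, the two saturations follow from a 2 x 2 linear solve. All coefficients below are invented for illustration:

    ```python
    import numpy as np

    # Linear attenuation coefficients (1/cm) of the two fluids at two energies;
    # the second energy is chosen where the chlorinated PCE attenuates strongly.
    mu = np.array([[0.20, 0.55],    # [water, PCE] at energy E1
                   [0.18, 0.90]])   # [water, PCE] at energy E2

    path = 0.35 * 2.5               # porosity times sample thickness (cm)

    # Measured ln(I0/I) at each energy, dry-medium attenuation already removed
    log_atten = np.array([0.25, 0.33])

    s_water, s_pce = np.linalg.solve(mu * path, log_atten)
    print(f"S_water = {s_water:.2f}, S_PCE = {s_pce:.2f}")
    ```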

  12. Aging transition by random errors

    PubMed Central

    Sun, Zhongkui; Ma, Ning; Xu, Wei

    2017-01-01

    In this paper, the effects of random errors on oscillating behavior have been studied theoretically and numerically in a prototypical coupled nonlinear oscillator. Two kinds of noise are employed to represent the measurement errors associated with the parameter specifying the distance from a Hopf bifurcation in the Stuart-Landau model. It is demonstrated that when the random errors are uniform random noise, increasing the noise intensity can effectively increase the robustness of the system, while for normal random noise, increasing the variance can also enhance the robustness provided that the probability of an aging transition reaches a certain threshold; the opposite conclusion holds when the probability is below the threshold. These findings provide an alternative candidate for controlling the critical value of the aging transition in coupled oscillator systems composed of active and inactive oscillators in practice. PMID:28198430
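
    The underlying model is a population of globally coupled Stuart-Landau oscillators in which a fraction are inactive and the bifurcation parameter carries random errors. A minimal Euler-integration sketch (the coupling, noise amplitude, and population size are guesses, not the paper's values):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    N, K, omega = 100, 1.0, 3.0
    dt, steps = 0.01, 20_000
    p = 0.5                                  # fraction of inactive oscillators

    # Bifurcation parameter: -1 (inactive) or +1 (active), plus uniform random
    # errors as in the paper's first scenario (error magnitude is a guess).
    alpha = np.where(np.arange(N) < p * N, -1.0, 1.0)
    alpha = alpha + rng.uniform(-0.2, 0.2, N)

    z = rng.normal(0.0, 0.1, N) + 1j * rng.normal(0.0, 0.1, N)
    for _ in range(steps):
        zbar = z.mean()
        z += dt * ((alpha + 1j * omega - np.abs(z) ** 2) * z + K * (zbar - z))

    # |<z>| near zero indicates the aged (quenched) state; order unity means
    # the coupled system is still oscillating.
    print(f"order parameter |<z>| = {abs(z.mean()):.3f}")
    ```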

  13. Static Detection of Disassembly Errors

    SciTech Connect

    Krishnamoorthy, Nithya; Debray, Saumya; Fligg, Alan K

    2009-10-13

    Static disassembly is a crucial first step in reverse engineering executable files, and there is a considerable body of work in reverse-engineering of binaries, as well as areas such as semantics-based security analysis, that assumes that the input executable has been correctly disassembled. However, disassembly errors, e.g., arising from binary obfuscations, can render this assumption invalid. This work describes a machine-learning-based approach, using decision trees, for statically identifying possible errors in a static disassembly; such potential errors may then be examined more closely, e.g., using dynamic analyses. Experimental results using a variety of input executables indicate that our approach performs well, correctly identifying most disassembly errors with relatively few false positives.

  14. Prospective errors determine motor learning

    PubMed Central

    Takiyama, Ken; Hirashima, Masaya; Nozaki, Daichi

    2015-01-01

    Diverse features of motor learning have been reported by numerous studies, but no single theoretical framework concurrently accounts for these features. Here, we propose a model for motor learning to explain these features in a unified way by extending a motor primitive framework. The model assumes that the recruitment pattern of motor primitives is determined by the predicted movement error of an upcoming movement (prospective error). To validate this idea, we perform a behavioural experiment to examine the model’s novel prediction: after experiencing an environment in which the movement error is more easily predictable, subsequent motor learning should become faster. The experimental results support our prediction, suggesting that the prospective error might be encoded in the motor primitives. Furthermore, we demonstrate that this model has a strong explanatory power to reproduce a wide variety of motor-learning-related phenomena that have been separately explained by different computational models. PMID:25635628

  15. Aging transition by random errors

    NASA Astrophysics Data System (ADS)

    Sun, Zhongkui; Ma, Ning; Xu, Wei

    2017-02-01

    In this paper, the effects of random errors on oscillating behavior have been studied theoretically and numerically in a prototypical coupled nonlinear oscillator. Two kinds of noise are employed to represent the measurement errors associated with the parameter specifying the distance from a Hopf bifurcation in the Stuart-Landau model. It is demonstrated that when the random errors are uniform random noise, increasing the noise intensity can effectively increase the robustness of the system, while for normal random noise, increasing the variance can also enhance the robustness provided that the probability of an aging transition reaches a certain threshold; the opposite conclusion holds when the probability is below the threshold. These findings provide an alternative candidate for controlling the critical value of the aging transition in coupled oscillator systems composed of active and inactive oscillators in practice.

  16. Measurement Accuracy Limitation Analysis on Synchrophasors

    SciTech Connect

    Zhao, Jiecheng; Zhan, Lingwei; Liu, Yilu; Qi, Hairong; Gracia, Jose R; Ewing, Paul D

    2015-01-01

    This paper analyzes the theoretical accuracy limitations of synchrophasor measurements of the phase angle and frequency of the power grid. Factors that cause measurement error are analyzed, including error sources in the instruments and in the power grid signal. Different scenarios for these factors are evaluated according to the normal operating conditions of power grid measurement. Based on the evaluation and simulation, the errors in phase angle and frequency caused by each factor are calculated and discussed.
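
    Synchrophasor accuracy is conventionally scored with the Total Vector Error (TVE), which folds amplitude and phase errors into a single number. A small sketch showing how little phase error exhausts a 1% TVE budget:

    ```python
    import numpy as np

    def total_vector_error(x_true: complex, x_est: complex) -> float:
        """Total Vector Error between a true and an estimated phasor."""
        return abs(x_est - x_true) / abs(x_true)

    x = 1.0 + 0.0j                           # reference phasor, unit magnitude
    for phase_err_deg in (0.1, 0.5, 0.57):
        est = complex(np.exp(1j * np.deg2rad(phase_err_deg)))
        print(f"{phase_err_deg} deg phase error -> TVE = {total_vector_error(x, est):.2%}")
    # About 0.57 degrees of phase error alone consumes the 1% TVE limit of
    # IEEE C37.118, before any amplitude or timing error is added.
    ```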

  17. Conflict in fiduciary duty involving health care error reporting.

    PubMed

    Stewart, Della Wyatt

    2002-08-01

    Fiduciary duty is the responsibility to act in the best interest of a person or organization. Health care professionals, as well as managers in other industries, struggle continuously with the dilemma of whether or not to admit potentially harmful mistakes to unsuspecting customers and patients. Limited public disclosure of medical errors will benefit health care staff, organizational executives, and patients if specific policies are enacted to improve error prevention.

  18. Interpolation Errors in Spectrum Analyzers

    NASA Technical Reports Server (NTRS)

    Martin, J. L.

    1996-01-01

    To obtain the proper measurement amplitude with a spectrum analyzer, the correct frequency-dependent transducer factor must be added to the voltage measured by the transducer. This report examines how entering transducer factors into a spectrum analyzer can cause significant errors in field amplitude due to the misunderstanding of the analyzer's interpolation methods. It also discusses how to reduce these errors to obtain a more accurate field amplitude reading.
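
    One common mismatch of this kind is linear-in-frequency versus linear-in-log-frequency interpolation of a tabulated transducer factor. The Python sketch below quantifies the discrepancy for an illustrative (not measured) factor table; which method a given analyzer actually uses must be checked against its documentation.

        import numpy as np

        # Minimal sketch: the same transducer-factor table interpolated two ways.
        # Some instruments interpolate linearly in frequency, while factors for
        # wideband transducers are often tabulated assuming a log-frequency axis.
        freq = np.array([30e6, 100e6, 300e6, 1e9])      # calibration points (Hz)
        factor = np.array([18.0, 12.0, 15.0, 22.0])     # transducer factor (dB)

        f_query = 550e6                                 # point the analyzer needs
        lin = np.interp(f_query, freq, factor)          # linear in f
        log = np.interp(np.log10(f_query), np.log10(freq), factor)  # linear in log f

        print(f"linear-f interpolation: {lin:.2f} dB")
        print(f"log-f interpolation:    {log:.2f} dB")
        print(f"error if the wrong method is assumed: {abs(lin - log):.2f} dB")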

  19. Estimation of rod scale errors in geodetic leveling

    USGS Publications Warehouse

    Craymer, Michael R.; Vaníček, Petr; Castle, Robert O.

    1995-01-01

    Comparisons among repeated geodetic levelings have often been used for detecting and estimating residual rod scale errors in leveled heights. Individual rod-pair scale errors are estimated by a two-step procedure using a model based on either differences in heights, differences in section height differences, or differences in section tilts. It is shown that the estimated rod-pair scale errors derived from each model are identical only when the data are correctly weighted, and the mathematical correlations are accounted for in the model based on heights. Analyses based on simple regressions of changes in height versus height can easily lead to incorrect conclusions. We also show that the statistically estimated scale errors are not a simple function of height, height difference, or tilt. The models are valid only when terrain slope is constant over adjacent pairs of setups (i.e., smoothly varying terrain). In order to discriminate between rod scale errors and vertical displacements due to crustal motion, the individual rod-pairs should be used in more than one leveling, preferably in areas of contrasting tectonic activity. From an analysis of 37 separately calibrated rod-pairs used in 55 levelings in southern California, we found eight statistically significant coefficients that could be reasonably attributed to rod scale errors, only one of which was larger than the expected random error in the applied calibration-based scale correction. However, significant differences with other independent checks indicate that caution should be exercised before accepting these results as evidence of scale error. Further refinements of the technique are clearly needed if the results are to be routinely applied in practice.

  20. Students' Formalising Process of the Limit Concept

    ERIC Educational Resources Information Center

    Kabael, Tangul

    2014-01-01

    The concept of limit is the foundation for many concepts such as the derivative and the integral in advanced mathematics. The limit concept has been a research topic in mathematics education for years and in the literature it is a broadly accepted fact that the limit is a difficult notion for most students. The study presented in this article is a…

  1. Quantum error correction for beginners.

    PubMed

    Devitt, Simon J; Munro, William J; Nemoto, Kae

    2013-07-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation now constitute a much larger field, and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future.
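
    In the detailed-example spirit the review advocates, the following Python sketch simulates the simplest code, the 3-qubit bit-flip repetition code, classically: with independent flip probability p per qubit, majority-vote correction reduces the logical error rate from p to 3p^2 - 2p^3. This is only the most basic ingredient of QEC; real codes must also handle phase errors and faulty syndrome extraction.

        import numpy as np

        # Minimal sketch: 3-qubit bit-flip repetition code, Monte Carlo.
        # Each physical qubit flips independently with probability p; the
        # majority vote fails only when two or more qubits flip.
        rng = np.random.default_rng(2)
        trials = 200_000
        for p in (0.01, 0.05, 0.10):
            flips = rng.random((trials, 3)) < p      # independent X errors
            logical_error = flips.sum(axis=1) >= 2   # majority vote fails
            predicted = 3 * p**2 - 2 * p**3
            print(f"p={p:.2f}: simulated {logical_error.mean():.5f}, "
                  f"predicted {predicted:.5f}")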

  2. Error image aware content restoration

    NASA Astrophysics Data System (ADS)

    Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee

    2015-12-01

    As TV resolution has increased significantly, content consumers have become increasingly sensitive to even the subtlest defects in TV content. This rising quality standard has posed a new challenge as the tape-based process has transitioned to the file-based process: the transition necessitated digitizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or entirely missing pixels. Unsurprisingly, detecting and fixing such errors requires a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality-check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing system), a familiar tool for quality-control agents.
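
    The core idea, filling damaged regions from a temporally adjacent frame while leaving intact pixels untouched, can be sketched in a few lines of Python. Real restoration would add motion compensation and automatic error detection; the frames and error mask below are synthetic.

        import numpy as np

        # Minimal sketch of adjacent-frame restoration: pixels flagged as
        # damaged in the current frame are replaced from the previous frame.
        rng = np.random.default_rng(11)
        prev_frame = rng.integers(0, 256, (64, 64), dtype=np.uint8)
        frame = prev_frame.copy()                # assume a static scene for brevity

        mask = np.zeros_like(frame, dtype=bool)  # "disordered pixel block" error
        mask[20:28, 40:48] = True
        frame[mask] = 0                          # damage the block

        restored = np.where(mask, prev_frame, frame)
        print("damaged pixels restored:",
              int((restored[mask] == prev_frame[mask]).sum()))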

  3. Dominant modes via model error

    NASA Technical Reports Server (NTRS)

    Yousuff, A.; Breida, M.

    1992-01-01

    Obtaining a reduced model of a stable mechanical system with proportional damping is considered. Such systems can be conveniently represented in modal coordinates. Two popular schemes, the modal cost analysis and the balancing method, offer simple means of identifying dominant modes for retention in the reduced model. The dominance is measured via the modal costs in the case of modal cost analysis and via the singular values of the Gramian-product in the case of balancing. Though these measures do not exactly reflect the more appropriate model error, which is the H2 norm of the output-error between the full and the reduced models, they do lead to simple computations. Normally, the model error is computed after the reduced model is obtained, since it is believed that, in general, the model error cannot be easily computed a priori. The authors point out that the model error can also be calculated a priori, just as easily as the above measures. Hence, the model error itself can be used to determine the dominant modes. Moreover, the simplicity of the computations does not presume any special properties of the system, such as small damping, orthogonal symmetry, etc.
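
    For a decoupled modal realization the transfer function is the sum of per-mode subsystems, so the output error incurred by deleting a mode is exactly that mode's subsystem, and its H2 norm can be computed a priori, as the authors note. A minimal Python sketch follows, with illustrative random mode data rather than the authors' examples.

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov

        def h2_norm(A, B, C):
            # ||G||_2^2 = trace(C P C^T), where A P + P A^T + B B^T = 0 (A stable).
            P = solve_continuous_lyapunov(A, -B @ B.T)
            return float(np.sqrt(np.trace(C @ P @ C.T)))

        rng = np.random.default_rng(3)
        modes = []
        for k in range(4):
            wk, zk = 1.0 + k, 0.05 * (k + 1)     # modal frequency, damping ratio
            Ak = np.array([[0.0, 1.0], [-wk**2, -2 * zk * wk]])
            modes.append((Ak, rng.standard_normal((2, 1)), rng.standard_normal((1, 2))))

        # Rank modes by the a priori H2 model error of deleting each one;
        # the modes with the largest values are the dominant modes to retain.
        for k, (Ak, Bk, Ck) in enumerate(modes):
            print(f"mode {k}: H2 model error if deleted = {h2_norm(Ak, Bk, Ck):.3f}")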

  4. 7 CFR 1207.323 - Acceptance.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POTATO RESEARCH AND PROMOTION PLAN Potato Research and Promotion Plan National Potato Promotion Board § 1207.323 Acceptance. Each...

  5. 7 CFR 1207.323 - Acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POTATO RESEARCH AND PROMOTION PLAN Potato Research and Promotion Plan National Potato Promotion Board § 1207.323 Acceptance. Each...

  6. 7 CFR 1207.323 - Acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POTATO RESEARCH AND PROMOTION PLAN Potato Research and Promotion Plan National Potato Promotion Board § 1207.323 Acceptance. Each...

  7. 7 CFR 1207.323 - Acceptance.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POTATO RESEARCH AND PROMOTION PLAN Potato Research and Promotion Plan National Potato Promotion Board § 1207.323 Acceptance. Each...

  8. 7 CFR 1207.323 - Acceptance.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POTATO RESEARCH AND PROMOTION PLAN Potato Research and Promotion Plan National Potato Promotion Board § 1207.323 Acceptance. Each...

  9. 7 CFR 932.32 - Acceptance.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Olive Administrative Committee § 932.32 Acceptance. Any person selected by the...

  10. 7 CFR 932.32 - Acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Olive Administrative Committee § 932.32 Acceptance. Any person selected by the...

  11. 7 CFR 932.32 - Acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Olive Administrative Committee § 932.32 Acceptance. Any person selected by the...

  12. 7 CFR 932.32 - Acceptance.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Olive Administrative Committee § 932.32 Acceptance. Any person selected by the...

  13. 7 CFR 932.32 - Acceptance.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Olive Administrative Committee § 932.32 Acceptance. Any person selected by the...

  14. Acceptance Criteria for Aerospace Structural Adhesives.

    DTIC Science & Technology

    ADHESIVES, *AIRFRAMES, PRIMERS, STRUCTURAL ENGINEERING, CHEMICAL COMPOSITION, MECHANICAL PROPERTIES, INDUSTRIAL PRODUCTION, DATA ACQUISITION, PARTICLE SIZE, ACCEPTANCE TESTS, ELASTOMERS, BONDING, QUALITY CONTROL.

  15. 2013 SYR Accepted Poster Abstracts.

    PubMed

    2013-01-01

    SYR 2013 Accepted Poster abstracts: 1. Benefits of Yoga as a Wellness Practice in a Veterans Affairs (VA) Health Care Setting: If You Build It, Will They Come? 2. Yoga-based Psychotherapy Group With Urban Youth Exposed to Trauma. 3. Embodied Health: The Effects of a Mind-Body Course for Medical Students. 4. Interoceptive Awareness and Vegetable Intake After a Yoga and Stress Management Intervention. 5. Yoga Reduces Performance Anxiety in Adolescent Musicians. 6. Designing and Implementing a Therapeutic Yoga Program for Older Women With Knee Osteoarthritis. 7. Yoga and Life Skills Eating Disorder Prevention Among 5th Grade Females: A Controlled Trial. 8. A Randomized, Controlled Trial Comparing the Impact of Yoga and Physical Education on the Emotional and Behavioral Functioning of Middle School Children. 9. Feasibility of a Multisite, Community based Randomized Study of Yoga and Wellness Education for Women With Breast Cancer Undergoing Chemotherapy. 10. A Delphi Study for the Development of Protocol Guidelines for Yoga Interventions in Mental Health. 11. Impact Investigation of Breathwalk Daily Practice: Canada-India Collaborative Study. 12. Yoga Improves Distress, Fatigue, and Insomnia in Older Veteran Cancer Survivors: Results of a Pilot Study. 13. Assessment of Kundalini Mantra and Meditation as an Adjunctive Treatment With Mental Health Consumers. 14. Kundalini Yoga Therapy Versus Cognitive Behavior Therapy for Generalized Anxiety Disorder and Co-Occurring Mood Disorder. 15. Baseline Differences in Women Versus Men Initiating Yoga Programs to Aid Smoking Cessation: Quitting in Balance Versus QuitStrong. 16. Pranayam Practice: Impact on Focus and Everyday Life of Work and Relationships. 17. Participation in a Tailored Yoga Program is Associated With Improved Physical Health in Persons With Arthritis. 18. Effects of Yoga on Blood Pressure: Systematic Review and Meta-analysis. 19. A Quasi-experimental Trial of a Yoga based Intervention to Reduce Stress and

  16. Error analysis in nuclear density functional theory

    NASA Astrophysics Data System (ADS)

    Schunck, Nicolas; McDonnell, Jordan D.; Sarich, Jason; Wild, Stefan M.; Higdon, Dave

    2015-03-01

    Nuclear density functional theory (DFT) is the only microscopic, global approach to the structure of atomic nuclei. It is used in numerous applications, from determining the limits of stability to gaining a deep understanding of the formation of elements in the Universe or the mechanisms that power stars and reactors. The predictive power of the theory depends on the amount of physics embedded in the energy density functional as well as on efficient ways to determine a small number of free parameters and solve the DFT equations. In this article, we discuss the various sources of uncertainties and errors encountered in DFT and possible methods to quantify these uncertainties in a rigorous manner.

  17. In acceptance we trust? Conceptualising acceptance as a viable approach to NGO security management.

    PubMed

    Fast, Larissa A; Freeman, C Faith; O'Neill, Michael; Rowley, Elizabeth

    2013-04-01

    This paper documents current understanding of acceptance as a security management approach and explores issues and challenges non-governmental organisations (NGOs) confront when implementing an acceptance approach to security management. It argues that the failure of organisations to systematise and clearly articulate acceptance as a distinct security management approach and a lack of organisational policies and procedures concerning acceptance hinder its efficacy as a security management approach. The paper identifies key and cross-cutting components of acceptance that are critical to its effective implementation in order to advance a comprehensive and systematic concept of acceptance. The key components of acceptance illustrate how organisational and staff functions affect positively or negatively an organisation's acceptance, and include: an organisation's principles and mission, communications, negotiation, programming, relationships and networks, stakeholder and context analysis, staffing, and image. The paper contends that acceptance is linked not only to good programming, but also to overall organisational management and structures.

  18. Surface errors in the course of machining precision optics

    NASA Astrophysics Data System (ADS)

    Biskup, H.; Haberl, A.; Rascher, R.

    2015-08-01

    Precision optical components are usually machined by grinding and polishing in several steps of increasing accuracy. Spherical surfaces are finished in a last step with large tools that smooth the surface. The required surface accuracy of non-spherical surfaces can only be achieved with tools in point contact with the surface, and so-called mid-spatial-frequency errors (MSFE) can accumulate in such zonal processes. This work examines the formation of surface errors from grinding to polishing by analysing the surfaces at each machining step with non-contact interferometric methods. Surface errors can be classified as described in DIN 4760, whereby errors of 2nd to 3rd order are the MSFE. By appropriate filtering of the measured data, error frequencies can be suppressed so that only defined spatial frequencies appear in the surface plot. Some frequencies may already be formed in early machining steps such as grinding and main polishing. It is also known that MSFE can be produced by the process itself and by other side effects. Besides a description of surface errors based on the limits of measurement technology, different formation mechanisms for selected spatial frequencies are presented. A correction may only be possible with tools whose lateral size is below the wavelength of the error structure. These considerations may be used to develop proposals for handling surface errors.
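
    The band-limited view described above can be sketched with an FFT band-pass over a synthetic 1-D surface profile. The band edges and error amplitudes below are illustrative choices, not the DIN 4760 class boundaries.

        import numpy as np

        # Minimal sketch: isolate a mid-spatial-frequency band of a 1-D
        # surface profile so only that band appears in the "surface plot".
        rng = np.random.default_rng(4)
        n, dx = 4096, 0.1e-3                     # samples, spacing (m)
        x = np.arange(n) * dx
        profile = (50e-9 * np.sin(2 * np.pi * x / 0.2)     # low-frequency form error
                   + 5e-9 * np.sin(2 * np.pi * x / 0.004)  # mid-frequency ripple
                   + 1e-9 * rng.standard_normal(n))        # high-frequency roughness

        spec = np.fft.rfft(profile)
        f = np.fft.rfftfreq(n, dx)               # spatial frequency (1/m)
        band = (f >= 100.0) & (f <= 1000.0)      # keep 1-10 mm structure
        msfe = np.fft.irfft(spec * band, n)

        print(f"full profile rms: {profile.std() * 1e9:.1f} nm")
        print(f"MSFE band rms:    {msfe.std() * 1e9:.2f} nm")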

  19. 75 FR 60865 - Surety Companies Acceptable on Federal Bonds: Amendment-Allegheny Casualty Company

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-01

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF THE TREASURY Fiscal Service Surety Companies Acceptable on Federal Bonds: Amendment-- Allegheny Casualty Company... INFORMATION: The underwriting limitation for Allegheny Casualty Company (NAIC 13285), which was listed in...

  20. Influence of Errors in Tactile Sensors on Some High Level Parameters Used for Manipulation with Robotic Hands

    PubMed Central

    Sánchez-Durán, José A.; Hidalgo-López, José A.; Castellanos-Ramos, Julián; Oballe-Peinado, Óscar; Vidal-Verdú, Fernando

    2015-01-01

    Tactile sensors suffer from many types of interference and error, such as crosstalk, non-linearity, drift, and hysteresis, so calibration should be carried out to compensate for these deviations. However, this procedure is difficult in sensors mounted on artificial hands for robots or prosthetics, for instance, where the sensor usually bends to cover a curved surface. Moreover, the calibration procedure should be repeated often because the correction parameters are easily altered by time and surrounding conditions. This intensive and complex calibration could, however, matter less, or at least be simplified. This is because manipulation algorithms do not commonly use the whole data set from the tactile image, but only a few parameters such as the moments of the tactile image. These parameters may be less affected by common errors and interference, or at least their variations may be of the same order as those caused by accepted limitations, such as reduced spatial resolution. This paper shows results from experiments that support this idea. The experiments are carried out with a high-performance commercial sensor as well as with a low-cost, error-prone sensor built with a procedure common in robotics. PMID:26295393
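
    A minimal Python sketch of the idea: the zeroth and first moments of a tactile image (total force and contact centroid) stay stable over many noisy readings. Array size, contact shape, and noise level are illustrative.

        import numpy as np

        # Minimal sketch: robustness of tactile-image moments to taxel noise.
        rng = np.random.default_rng(5)
        h = w = 16
        yy, xx = np.mgrid[0:h, 0:w]
        contact = np.exp(-((xx - 9.3) ** 2 + (yy - 6.7) ** 2) / 8.0)  # pressure blob

        def centroid(img):
            m00 = img.sum()                      # zeroth moment: total "force"
            return (img * xx).sum() / m00, (img * yy).sum() / m00

        noisy = contact + 0.05 * rng.standard_normal((1000, h, w))
        cents = np.array([centroid(im) for im in noisy])
        print("nominal centroid: (9.30, 6.70)")
        print(f"noisy centroid mean: ({cents[:, 0].mean():.2f}, {cents[:, 1].mean():.2f}), "
              f"std: ({cents[:, 0].std():.3f}, {cents[:, 1].std():.3f})")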

  1. Error propagation in energetic carrying capacity models

    USGS Publications Warehouse

    Pearse, Aaron T.; Stafford, Joshua D.

    2014-01-01

    Conservation objectives derived from carrying capacity models have been used to inform management of landscapes for wildlife populations. Energetic carrying capacity models are particularly useful in conservation planning for wildlife; these models use estimates of food abundance and energetic requirements of wildlife to target conservation actions. We provide a general method for incorporating a foraging threshold (i.e., the density of food at which foraging becomes unprofitable) when estimating food availability with energetic carrying capacity models. We use a hypothetical example to describe how past methods for adjusting foraging thresholds biased the results of energetic carrying capacity models in certain instances. Adjusting foraging thresholds at the patch level for the species of interest yields results consistent with ecological foraging theory. Two case studies suggest variation in bias that, in certain instances, created large errors in conservation objectives and may have led to inefficient allocation of limited resources. Our results also illustrate how small errors or biases in input parameters, when extrapolated to large spatial extents, propagate errors in conservation planning and can have negative implications for target populations.
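
    The difference between patch-level and aggregate application of a foraging threshold can be seen in a few lines of Python. Patch densities and the threshold below are illustrative, not values from the case studies.

        import numpy as np

        # Minimal sketch: food availability under a foraging threshold,
        # applied per patch versus applied once to the landscape mean.
        rng = np.random.default_rng(6)
        patches = rng.exponential(50.0, 1000)    # food density per patch (kg/ha)
        threshold = 40.0                         # density where foraging stops paying

        # Patch level: food below the threshold is unavailable in *each* patch.
        available_patch = np.maximum(patches - threshold, 0.0).sum()

        # Aggregate shortcut: subtract the threshold once from the mean density.
        available_aggregate = max(patches.mean() - threshold, 0.0) * patches.size

        print(f"patch-level availability: {available_patch:,.0f}")
        print(f"aggregate adjustment:     {available_aggregate:,.0f}")
        # The aggregate shortcut misstates availability, biasing carrying
        # capacity estimates and the objectives derived from them.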

  2. North error estimation based on solar elevation errors in the third step of sky-polarimetric Viking navigation.

    PubMed

    Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Egri, Ádám; Horváth, Gábor

    2016-07-01

    The theory of sky-polarimetric Viking navigation has been widely accepted for decades without any information about the accuracy of this method. Previously, we have measured the accuracy of the first and second steps of this navigation method in psychophysical laboratory and planetarium experiments. Now, we have tested the accuracy of the third step in a planetarium experiment, assuming that the first and second steps are errorless. Using the fists of their outstretched arms, 10 test persons had to estimate the elevation angles (measured in numbers of fists and fingers) of black dots (representing the position of the occluded Sun) projected onto the planetarium dome. The test persons performed 2400 elevation estimations, 48% of which were more accurate than ±1°. We selected three test persons with the (i) largest and (ii) smallest elevation errors and (iii) highest standard deviation of the elevation error. From the errors of these three persons, we calculated their error function, from which the North errors (the angles with which they deviated from the geographical North) were determined for summer solstice and spring equinox, two specific dates of the Viking sailing period. The range of possible North errors ΔωN was the lowest and highest at low and high solar elevations, respectively. At high elevations, the maximal ΔωN was 35.6° and 73.7° at summer solstice and 23.8° and 43.9° at spring equinox for the best and worst test person (navigator), respectively. Thus, the best navigator was twice as good as the worst one. At solstice and equinox, high elevations occur the most frequently during the day, thus high North errors could occur more frequently than expected before. According to our findings, the ideal periods for sky-polarimetric Viking navigation are immediately after sunrise and before sunset, because the North errors are the lowest at low solar elevations.

  3. North error estimation based on solar elevation errors in the third step of sky-polarimetric Viking navigation

    NASA Astrophysics Data System (ADS)

    Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Egri, Ádám; Horváth, Gábor

    2016-07-01

    The theory of sky-polarimetric Viking navigation has been widely accepted for decades without any information about the accuracy of this method. Previously, we have measured the accuracy of the first and second steps of this navigation method in psychophysical laboratory and planetarium experiments. Now, we have tested the accuracy of the third step in a planetarium experiment, assuming that the first and second steps are errorless. Using the fists of their outstretched arms, 10 test persons had to estimate the elevation angles (measured in numbers of fists and fingers) of black dots (representing the position of the occluded Sun) projected onto the planetarium dome. The test persons performed 2400 elevation estimations, 48% of which were more accurate than ±1°. We selected three test persons with the (i) largest and (ii) smallest elevation errors and (iii) highest standard deviation of the elevation error. From the errors of these three persons, we calculated their error function, from which the North errors (the angles with which they deviated from the geographical North) were determined for summer solstice and spring equinox, two specific dates of the Viking sailing period. The range of possible North errors ΔωN was the lowest and highest at low and high solar elevations, respectively. At high elevations, the maximal ΔωN was 35.6° and 73.7° at summer solstice and 23.8° and 43.9° at spring equinox for the best and worst test person (navigator), respectively. Thus, the best navigator was twice as good as the worst one. At solstice and equinox, high elevations occur the most frequently during the day, thus high North errors could occur more frequently than expected before. According to our findings, the ideal periods for sky-polarimetric Viking navigation are immediately after sunrise and before sunset, because the North errors are the lowest at low solar elevations.

  4. 75 FR 82130 - Generalized System of Preferences (GSP): Notice Regarding the Acceptance of Petitions To Grant a...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-29

    ... (GSP): Notice Regarding the Acceptance of Petitions To Grant a Competitive Need Limitation (CNL) Waiver... competitive need limitations (CNLs) on imports of certain products that are eligible for duty-free...

  5. Error-associated behaviors and error rates for robotic geology

    NASA Technical Reports Server (NTRS)

    Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin

    2004-01-01

    This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill-based decisions require the least cognitive effort and knowledge-based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.

  6. Heavy Metal, Religiosity, and Suicide Acceptability.

    ERIC Educational Resources Information Center

    Stack, Steven

    1998-01-01

    Reports on data taken from the General Social Survey that found a link between "heavy metal" rock fanship and suicide acceptability. Finds that relationship becomes nonsignificant once level of religiosity is controlled. Heavy metal fans are low in religiosity, which contributes to greater suicide acceptability. (Author/JDM)

  7. Hanford Site liquid waste acceptance criteria

    SciTech Connect

    LUECK, K.J.

    1999-09-11

    This document provides the waste acceptance criteria for liquid waste managed by Waste Management Federal Services of Hanford, Inc. (WMH). These waste acceptance criteria address the various requirements to operate a facility in compliance with applicable environmental, safety, and operational requirements. This document also addresses the sitewide miscellaneous streams program.

  8. 48 CFR 411.103 - Market acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Market acceptance. 411.103... ACQUISITION PLANNING DESCRIBING AGENCY NEEDS Selecting and Developing Requirements Documents 411.103 Market... accordance with FAR 11.103(a), the market acceptability of their items to be offered. (b) The...

  9. 48 CFR 3011.103 - Market acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 7 2011-10-01 2011-10-01 false Market acceptance. 3011.103 Section 3011.103 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND SECURITY, HOMELAND... Developing Requirements Documents 3011.103 Market acceptance. (a) Contracting officers may act on behalf...

  10. 48 CFR 411.103 - Market acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 4 2011-10-01 2011-10-01 false Market acceptance. 411.103... ACQUISITION PLANNING DESCRIBING AGENCY NEEDS Selecting and Developing Requirements Documents 411.103 Market... accordance with FAR 11.103(a), the market acceptability of their items to be offered. (b) The...

  11. 48 CFR 3011.103 - Market acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Market acceptance. 3011.103 Section 3011.103 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND SECURITY, HOMELAND... Developing Requirements Documents 3011.103 Market acceptance. (a) Contracting officers may act on behalf...

  12. Nevada Test Site Waste Acceptance Criteria (NTSWAC)

    SciTech Connect

    NNSA /NSO Waste Management Project

    2008-06-01

    This document establishes the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office, Nevada Test Site Waste Acceptance Criteria (NTSWAC). The NTSWAC provides the requirements, terms, and conditions under which the Nevada Test Site will accept low-level radioactive waste (LLW) and LLW Mixed Waste (MW) for disposal.

  13. Consumer acceptance of ginseng food products.

    PubMed

    Chung, Hee Sook; Lee, Young-Chul; Rhee, Young Kyung; Lee, Soo-Yeun

    2011-01-01

    Ginseng has been utilized less in food products than in dietary supplements in the United States. Sensory acceptance of ginseng food products by U.S. consumers has not been reported. The objectives of this study were to: (1) determine the sensory acceptance of commercial ginseng food products and (2) assess the influence of the addition of sweeteners to ginseng tea, and of ginseng extract to chocolate, on consumer acceptance. A total of 126 consumers participated in 3 sessions for (1) 7 commercial red ginseng food products, (2) 10 ginseng teas varying in levels of sugar or honey, and (3) 10 ginseng milk or dark chocolates varying in levels of ginseng extract. Ginseng candy with vitamin C and ginseng crunchy white chocolate were the most highly accepted, while the sliced ginseng root product was the least accepted among the seven commercial products. Sensory acceptance increased in proportion to the content of sugar and honey in ginseng tea, whereas acceptance decreased with increasing content of ginseng extract in milk and dark chocolates. Findings demonstrate that ginseng food product types with which consumers are already familiar, such as candy and chocolate, have potential for success in the U.S. market. Chocolate could be suggested as a food matrix into which ginseng can be incorporated, as it contains more bioactive compounds than ginseng tea at a similar acceptance level. Future research may include a descriptive analysis with ginseng-based products to identify the key drivers of liking and disliking for successful new product development.

  14. Genres Across Cultures: Types of Acceptability Variation

    ERIC Educational Resources Information Center

    Shaw, Philip; Gillaerts, Paul; Jacobs, Everett; Palermo, Ofelia; Shinohara, Midori; Verckens, J. Piet

    2004-01-01

    One can ask four questions about genre validity across cultures. Does a certain form or configuration occur in the culture in question? Is it acceptable? If acceptable, is it in practice preferred? Is it recommended by prescriptive authorities? This paper reports the results of an attempt to answer these questions empirically by testing the…

  15. 48 CFR 11.103 - Market acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Market acceptance. 11.103... DESCRIBING AGENCY NEEDS Selecting and Developing Requirements Documents 11.103 Market acceptance. (a) Section... may, under appropriate circumstances, require offerors to demonstrate that the items offered— (1)...

  16. 48 CFR 2811.103 - Market acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Market acceptance. 2811.103... Planning DESCRIBING AGENCY NEEDS Selecting and Developing Requirements Documents 2811.103 Market acceptance... offerors to demonstrate that the items offered meet the criteria set forth in FAR 11.103(a)....

  17. 5 CFR 1655.11 - Loan acceptance.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 3 2013-01-01 2013-01-01 false Loan acceptance. 1655.11 Section 1655.11 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD LOAN PROGRAM § 1655.11 Loan acceptance. The TSP record keeper will reject a loan application if: (a) The participant is not qualified to apply...

  18. 5 CFR 1655.11 - Loan acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 3 2011-01-01 2011-01-01 false Loan acceptance. 1655.11 Section 1655.11 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD LOAN PROGRAM § 1655.11 Loan acceptance. The TSP record keeper will reject a loan application if: (a) The participant is not qualified to apply...

  19. 5 CFR 1655.11 - Loan acceptance.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 5 Administrative Personnel 3 2012-01-01 2012-01-01 false Loan acceptance. 1655.11 Section 1655.11 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD LOAN PROGRAM § 1655.11 Loan acceptance. The TSP record keeper will reject a loan application if: (a) The participant is not qualified to apply...

  20. 48 CFR 3011.103 - Market acceptance.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 7 2014-10-01 2014-10-01 false Market acceptance. 3011.103 Section 3011.103 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND SECURITY, HOMELAND... Developing Requirements Documents 3011.103 Market acceptance. (a) Contracting officers may act on behalf...

  1. 48 CFR 411.103 - Market acceptance.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 4 2012-10-01 2012-10-01 false Market acceptance. 411.103... ACQUISITION PLANNING DESCRIBING AGENCY NEEDS Selecting and Developing Requirements Documents 411.103 Market... accordance with FAR 11.103(a), the market acceptability of their items to be offered. (b) The...

  2. 48 CFR 411.103 - Market acceptance.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 4 2014-10-01 2014-10-01 false Market acceptance. 411.103... ACQUISITION PLANNING DESCRIBING AGENCY NEEDS Selecting and Developing Requirements Documents 411.103 Market... accordance with FAR 11.103(a), the market acceptability of their items to be offered. (b) The...

  3. 48 CFR 3011.103 - Market acceptance.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 7 2013-10-01 2012-10-01 true Market acceptance. 3011.103 Section 3011.103 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND SECURITY, HOMELAND SECURITY... Requirements Documents 3011.103 Market acceptance. (a) Contracting officers may act on behalf of the head...

  4. 48 CFR 3011.103 - Market acceptance.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 7 2012-10-01 2012-10-01 false Market acceptance. 3011.103 Section 3011.103 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND SECURITY, HOMELAND... Developing Requirements Documents 3011.103 Market acceptance. (a) Contracting officers may act on behalf...

  5. 48 CFR 411.103 - Market acceptance.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 4 2013-10-01 2013-10-01 false Market acceptance. 411.103... ACQUISITION PLANNING DESCRIBING AGENCY NEEDS Selecting and Developing Requirements Documents 411.103 Market... accordance with FAR 11.103(a), the market acceptability of their items to be offered. (b) The...

  6. How psychotherapists handle treatment errors – an ethical analysis

    PubMed Central

    2013-01-01

    Background Dealing with errors in psychotherapy is challenging, both ethically and practically. There is almost no empirical research on this topic. We aimed (1) to explore psychotherapists’ self-reported ways of dealing with an error made by themselves or by colleagues, and (2) to reconstruct their reasoning according to the two principle-based ethical approaches that are dominant in the ethics discourse of psychotherapy, Beauchamp & Childress (B&C) and Lindsay et al. (L). Methods We conducted 30 semi-structured interviews with 30 psychotherapists (physicians and non-physicians) and analysed the transcripts using qualitative content analysis. Answers were deductively categorized according to the two principle-based ethical approaches. Results Most psychotherapists reported that they preferred to disclose an error to the patient. They justified this by spontaneous intuitions and common values in psychotherapy, rarely using explicit ethical reasoning. The answers were attributed to the following categories with descending frequency: 1. Respect for patient autonomy (B&C; L), 2. Non-maleficence (B&C) and Responsibility (L), 3. Integrity (L), 4. Competence (L) and Beneficence (B&C). Conclusions Psychotherapists need specific ethical and communication training to complement and articulate their moral intuitions as a support when disclosing their errors to the patients. Principle-based ethical approaches seem to be useful for clarifying the reasons for disclosure. Further research should help to identify the most effective and acceptable ways of error disclosure in psychotherapy. PMID:24321503

  7. Geolocation error tracking of ZY-3 three line cameras

    NASA Astrophysics Data System (ADS)

    Pan, Hongbo

    2017-01-01

    The high-accuracy geolocation of high-resolution satellite images (HRSIs) is a key issue for mapping and integrating multi-temporal, multi-sensor images. In this manuscript, we propose a new geometric frame for analysing the geometric error of a stereo HRSI, in which the geolocation error can be divided into three parts: the epipolar direction, cross base direction, and height direction. With this frame, we proved that the height error of three line cameras (TLCs) is independent of nadir images, and that the terrain effect has a limited impact on the geolocation errors. For ZY-3 error sources, the drift error in both the pitch and roll angle and its influence on the geolocation accuracy are analysed. Epipolar and common tie-point constraints are proposed to study the bundle adjustment of HRSIs. Epipolar constraints explain that the relative orientation can reduce the number of compensation parameters in the cross base direction and has a limited impact on the height accuracy. The common tie points adjust the pitch-angle errors to be consistent with each other for TLCs. Therefore, free-net bundle adjustment of a single strip cannot significantly improve the geolocation accuracy. Furthermore, the epipolar and common tie-point constraints cause the error to propagate into the adjacent strip when multiple strips are involved in the bundle adjustment, which results in the same attitude uncertainty throughout the whole block. Two adjacent strips (Orbit 305 and Orbit 381, covering 7 and 12 standard scenes, respectively) and 308 ground control points (GCPs) were used for the experiments. The experiments validate the aforementioned theory. The planimetric and height root mean square errors were 2.09 and 1.28 m, respectively, when two GCPs were placed at the beginning and end of the block.

  8. Understanding diversity: the importance of social acceptance.

    PubMed

    Chen, Jacqueline M; Hamilton, David L

    2015-04-01

    Two studies investigated how people define and perceive diversity in the historically majority-group dominated contexts of business and academia. We hypothesized that individuals construe diversity as both the numeric representation of racial minorities and the social acceptance of racial minorities within a group. In Study 1, undergraduates' (especially minorities') perceptions of campus diversity were predicted by perceived social acceptance on a college campus, above and beyond perceived minority representation. Study 2 showed that increases in a company's representation and social acceptance independently led to increases in perceived diversity of the company among Whites. Among non-Whites, representation and social acceptance only increased perceived diversity of the company when both qualities were high. Together these findings demonstrate the importance of both representation and social acceptance to the achievement of diversity in groups and that perceiver race influences the relative importance of these two components of diversity.

  9. Heavy metal, religiosity, and suicide acceptability.

    PubMed

    Stack, S

    1998-01-01

    There has been little work at the national level on the subject of musical subcultures and suicide acceptability. The present work explores the link between "heavy metal" rock fanship and suicide acceptability. Metal fanship is thought to elevate suicide acceptability through such means as exposure to a culture of personal and societal chaos marked by hopelessness, and through its associations with demographic risk factors such as gender, socioeconomic status, and education. Data are taken from the General Social Survey. A link between heavy metal fanship and suicide acceptability is found. However, this relationship becomes nonsignificant once level of religiosity is controlled. Metal fans are low in religiosity, which contributes, in turn, to greater suicide acceptability.

  10. Monte Carlo determination of Phoswich Array acceptance

    SciTech Connect

    Costales, J.B.; E859 Collaboration

    1992-07-01

    The purpose of this memo is to describe the means by which the acceptance of the E859 Phoswich Array is determined. By acceptance, two things are meant: first, the geometrical acceptance (the angular size of the modules); second, the detection acceptance (the probability that a particle of a given 4-momentum initially in the detector line-of-sight is detected as such). In particular, this memo will concentrate on those particles for which the energy of the particle can be sufficiently measured; that is to say, protons, deuterons and tritons. In principle, the phoswich array can measure the low end of the pion energy spectrum, but with a poor resolution. The detection acceptance of pions and baryon clusters heavier than tritons will be neglected in this memo.
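
    The geometrical part of such an acceptance is a standard Monte Carlo exercise. The sketch below samples isotropic directions from the target and counts the fraction falling within an illustrative circular module; it does not reproduce the actual E859 geometry.

        import numpy as np

        # Minimal sketch: geometrical acceptance of a detector module by
        # Monte Carlo, for a module subtending a 5-degree half-angle cone.
        rng = np.random.default_rng(7)
        n = 2_000_000
        cos_t = rng.uniform(-1.0, 1.0, n)        # isotropic in cos(theta)
        half_angle = np.radians(5.0)             # module half-opening angle

        hits = cos_t > np.cos(half_angle)        # direction falls in the cone
        mc = hits.mean()
        analytic = (1 - np.cos(half_angle)) / 2  # exact solid-angle fraction
        print(f"Monte Carlo acceptance: {mc:.5f}, analytic: {analytic:.5f}")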

  11. Inherent error in interferometric surface plasmon microscopy

    NASA Astrophysics Data System (ADS)

    Zhang, Bei; Yan, Peng; Gao, Feng; Liu, Yu; Zhang, Qiancheng; Wang, Le

    2016-11-01

    Surface plasmon microscopy (SPRM) usually employs a high-refractive-index prism or a high numerical aperture (NA) objective as the coupling device to excite surface plasmons. Here we apply a high-NA oil-immersion objective, considering the k-vector conditions and localization of surface plasmons, which provides better lateral resolution and less cross-talk between adjacent areas. However, the performance of an objective-based SPRM is often limited by the finite aperture of a physical objective, which corresponds to a sudden transition and limited bandwidth. We give a simplified model of the SPRM and numerically calculate how the sudden transition at the clear aperture edge causes an inherent error. A notch filtering algorithm is designed to suppress the noisy ripples. Compared with pupil-function engineering, this technique makes both the sacrifice of NA and the use of a spatial light modulator unnecessary, and it provides a more compact system setup without decreasing resolution or contrast.

  12. Beam-limiting and radiation-limiting interlocks

    SciTech Connect

    Macek, R.J.

    1996-04-01

    This paper reviews several aspects of beam-limiting and radiation-limiting interlocks used for personnel protection at high-intensity accelerators. It is based heavily on the experience at the Los Alamos Neutron Science Center (LANSCE) where instrumentation-based protection is used extensively. Topics include the need for "active" protection systems, system requirements, design criteria, and means of achieving and assessing acceptable reliability. The experience with several specific devices (ion chamber-based beam loss interlock, beam current limiter interlock, and neutron radiation interlock) designed and/or deployed to these requirements and criteria is evaluated.

  13. On Limits

    NASA Technical Reports Server (NTRS)

    Holzmann, Gerard J.

    2008-01-01

    In the last 3 decades or so, the size of systems we have been able to verify formally with automated tools has increased dramatically. At each point in this development, we encountered a different set of limits -- many of which we were eventually able to overcome. Today, we may have reached some limits that may be much harder to conquer. The problem I will discuss is the following: given a hypothetical machine with infinite memory that is seamlessly shared among infinitely many CPUs (or CPU cores), what is the largest problem size that we could solve?

  14. POSITION ERROR IN STATION-KEEPING SATELLITE

    DTIC Science & Technology

    An error in satellite orientation, combined with the sun lying in a plane other than the equatorial plane, may result in errors in position determination. The nature of the errors involved is described and their magnitudes estimated.

  15. Orbit IMU alignment: Error analysis

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

    A comprehensive accuracy analysis of orbit inertial measurement unit (IMU) alignments using the shuttle star trackers was completed and the results are presented. Monte Carlo techniques were used in a computer simulation of the IMU alignment hardware and software systems to: (1) determine the expected Space Transportation System 1 Flight (STS-1) manual mode IMU alignment accuracy; (2) investigate the accuracy of alignments in later shuttle flights when the automatic mode of star acquisition may be used; and (3) verify that an analytical model previously used for estimating the alignment error is a valid model. The analysis results do not differ significantly from expectations. The standard deviation in the IMU alignment error for STS-1 alignments was determined to be 68 arc seconds per axis. This corresponds to a 99.7% probability that the magnitude of the total alignment error is less than 258 arc seconds.
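
    The two quoted figures are mutually consistent: the magnitude of a three-axis error with a 68 arc-second per-axis standard deviation follows a chi distribution with 3 degrees of freedom, whose 99.7th percentile is about 256 arc seconds. A quick Monte Carlo check in Python:

        import numpy as np

        # Minimal sketch: 99.7% bound on the magnitude of a three-axis
        # alignment error with sigma = 68 arcsec per axis.
        rng = np.random.default_rng(8)
        sigma = 68.0                             # arcsec per axis
        err = rng.standard_normal((1_000_000, 3)) * sigma
        magnitude = np.linalg.norm(err, axis=1)
        print(f"99.7th percentile of |error|: "
              f"{np.percentile(magnitude, 99.7):.0f} arcsec")
        # ~256 arcsec, matching the ~258 arcsec figure quoted above.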

  16. Error analysis using organizational simulation.

    PubMed Central

    Fridsma, D. B.

    2000-01-01

    Organizational simulations have been used by project organizations in civil and aerospace industries to identify work processes and organizational structures that are likely to fail under certain conditions. Using a simulation system based on Galbraith's information-processing theory and Simon's notion of bounded rationality, we retrospectively modeled a chemotherapy administration error that occurred in a hospital setting. Our simulation suggested that when there is a high rate of unexpected events, the oncology fellow was differentially backlogged with work when compared with other organizational members. Alternative scenarios suggested that providing more knowledge resources to the oncology fellow improved her performance more effectively than adding additional staff to the organization. Although it is not possible to know whether this might have prevented the error, organizational simulation may be an effective tool to prospectively evaluate organizational "weak links", and explore alternative scenarios to correct potential organizational problems before they generate errors. PMID:11079885

  17. Sensation seeking and error processing.

    PubMed

    Zheng, Ya; Sheng, Wenbin; Xu, Jing; Zhang, Yuanyuan

    2014-09-01

    Sensation seeking is defined by a strong need for varied, novel, complex, and intense stimulation, and a willingness to take risks for such experience. Several theories propose that the insensitivity to negative consequences incurred by risks is one of the hallmarks of sensation-seeking behaviors. In this study, we investigated the time course of error processing in sensation seeking by recording event-related potentials (ERPs) while high and low sensation seekers performed an Eriksen flanker task. Whereas there were no group differences in ERPs to correct trials, sensation seeking was associated with a blunted error-related negativity (ERN), which was female-specific. Further, different subdimensions of sensation seeking were related to ERN amplitude differently. These findings indicate that the relationship between sensation seeking and error processing is sex-specific.

  18. Error Field Correction in ITER

    SciTech Connect

    Park, Jong-kyu; Boozer, Allen H.; Menard, Jonathan E.; Schaffer, Michael J.

    2008-05-22

    A new method for correcting magnetic field errors in the ITER tokamak is developed using the Ideal Perturbed Equilibrium Code (IPEC). The dominant external magnetic field for driving islands is shown to be localized to the outboard midplane for three ITER equilibria that represent the projected range of operational scenarios. The coupling matrices between the poloidal harmonics of the external magnetic perturbations and the resonant fields on the rational surfaces that drive islands are combined for different equilibria and used to determine an ordered list of the dominant errors in the external magnetic field. It is found that efficient and robust error field correction is possible with a fixed setting of the correction currents relative to the currents in the main coils across the range of ITER operating scenarios that was considered.
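
    The ordering step can be sketched as a singular value decomposition of the stacked coupling matrices: the leading right singular vectors are the external-field distributions that most strongly drive resonant fields across all equilibria considered. The matrices below are random stand-ins for IPEC output, used only to show the mechanics.

        import numpy as np

        # Minimal sketch: rank external-field harmonics by island drive.
        rng = np.random.default_rng(9)
        n_harmonics, n_surfaces = 30, 8
        couplings = [rng.standard_normal((n_surfaces, n_harmonics))
                     for _ in range(3)]          # one matrix per equilibrium

        stacked = np.vstack(couplings)           # combine the equilibria
        U, s, Vt = np.linalg.svd(stacked, full_matrices=False)

        # Rows of Vt are external-field distributions, ordered by singular
        # value: the first few dominate island drive across the set.
        print("singular values:", np.round(s[:5], 2))
        print("dominant external-field mode (first 6 harmonics):",
              np.round(Vt[0, :6], 3))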

  19. Constraint checking during error recovery

    NASA Technical Reports Server (NTRS)

    Lutz, Robyn R.; Wong, Johnny S. K.

    1993-01-01

    The system-level software onboard a spacecraft is responsible for recovery from communication, power, thermal, and computer-health anomalies that may occur. The recovery must occur without disrupting any critical scientific or engineering activity that is executing at the time of the error. Thus, the error-recovery software may have to execute concurrently with the ongoing acquisition of scientific data or with spacecraft maneuvers. This work provides a technique by which the rules that constrain the concurrent execution of these processes can be modeled in a graph. An algorithm is described that uses this model to validate that the constraints hold for all concurrent executions of the error-recovery software with the software that controls the science and engineering activities of the spacecraft. The results are applicable to a variety of control systems with critical constraints on the timing and ordering of the events they control.
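
    One simple reading of the validation step is that the combined ordering constraints admit a consistent concurrent execution exactly when the constraint graph is acyclic. The Python sketch below checks this with a depth-first search; the event names and constraints are hypothetical, not taken from the paper.

        from collections import defaultdict

        # Minimal sketch: ordering constraints between error-recovery steps
        # and an ongoing engineering activity, modeled as a directed graph.
        edges = [
            ("safe_mode_entry", "antenna_repoint"),   # recovery ordering
            ("antenna_repoint", "downlink_resume"),
            ("maneuver_start", "maneuver_end"),       # activity ordering
            ("safe_mode_entry", "maneuver_end"),      # cross-constraint
        ]

        graph = defaultdict(list)
        for a, b in edges:
            graph[a].append(b)

        def has_cycle(graph):
            WHITE, GRAY, BLACK = 0, 1, 2              # unvisited / active / done
            color = defaultdict(int)
            def dfs(u):
                color[u] = GRAY
                for v in graph[u]:
                    if color[v] == GRAY:              # back edge: conflict
                        return True
                    if color[v] == WHITE and dfs(v):
                        return True
                color[u] = BLACK
                return False
            return any(color[u] == WHITE and dfs(u) for u in list(graph))

        print("constraints consistent:", not has_cycle(graph))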

  20. Meditation, mindfulness and executive control: the importance of emotional acceptance and brain-based performance monitoring.

    PubMed

    Teper, Rimma; Inzlicht, Michael

    2013-01-01

    Previous studies have documented the positive effects of mindfulness meditation on executive control. What has been lacking, however, is an understanding of the mechanism underlying this effect. Some theorists have described mindfulness as embodying two facets: present-moment awareness and emotional acceptance. Here, we examine how the effect of meditation practice on executive control manifests in the brain, suggesting that emotional acceptance and performance monitoring play important roles. We investigated the effect of meditation practice on executive control and measured the neural correlates of performance monitoring, specifically, the error-related negativity (ERN), a neurophysiological response that occurs within 100 ms of error commission. Meditators and controls completed a Stroop task, during which we recorded ERN amplitudes with electroencephalography. Meditators showed greater executive control (i.e. fewer errors), a higher ERN and more emotional acceptance than controls. Finally, mediation pathway models further revealed that meditation practice relates to greater executive control and that this effect can be accounted for by heightened emotional acceptance, and to a lesser extent, increased brain-based performance monitoring.

  1. Automatic-repeat-request error control schemes

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.; Miller, M. J.

    1983-01-01

    Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
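
    A minimal Python sketch of the simplest such scheme, stop-and-wait ARQ with CRC-32 error detection over a bit-flipping channel, follows. Frame size, bit-error rate, and retry limit are illustrative choices.

        import random
        import zlib

        # Minimal sketch: stop-and-wait ARQ with CRC-32 error detection.
        random.seed(0)
        BIT_ERROR_RATE = 1e-4

        def send_through_channel(frame: bytes) -> bytes:
            out = bytearray(frame)
            for i in range(len(out)):            # flip each bit independently
                for b in range(8):
                    if random.random() < BIT_ERROR_RATE:
                        out[i] ^= 1 << b
            return bytes(out)

        def arq_send(payload: bytes, max_tries: int = 10) -> int:
            frame = payload + zlib.crc32(payload).to_bytes(4, "big")
            for attempt in range(1, max_tries + 1):
                received = send_through_channel(frame)
                data, crc = received[:-4], received[-4:]
                if zlib.crc32(data).to_bytes(4, "big") == crc:
                    return attempt               # receiver ACKs: done
                # CRC mismatch: receiver NAKs and the sender retransmits
            raise RuntimeError("retry limit exceeded")

        tries = [arq_send(b"x" * 125) for _ in range(1000)]
        print(f"mean transmissions per frame: {sum(tries) / len(tries):.2f}")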

  2. A Positive View of Peer Acceptance in Aggressive Youth: Risk for Future Peer Acceptance.

    ERIC Educational Resources Information Center

    Hughes, Jan N.; Cavell, Timothy A.; Prasad-Gaur, Archna

    2001-01-01

    Uses longitudinal data to determine whether a positive view of perceived peer acceptance is a risk factor for continued aggression and social rejection for aggressive children. Results indicate that perceived peer acceptance did not predict aggression. However, children who reported higher levels of perceived peer acceptance received lower actual…

  3. Analysis of Measurement Error and Estimator Shape in Three-Point Hydraulic Gradient Estimators

    NASA Astrophysics Data System (ADS)

    McKenna, S. A.; Wahi, A. K.

    2003-12-01

    Three spatially separated measurements of head provide a means of estimating the magnitude and orientation of the hydraulic gradient. Previous work with three-point estimators has focused on the effect of the size (area) of the three-point estimator and measurement error on the final estimates of the gradient magnitude and orientation in laboratory and field studies (Mizell, 1980; Silliman and Frost, 1995; Silliman and Mantz, 2000; Ruskauff and Rumbaugh, 1996). However, a systematic analysis of the combined effects of measurement error, estimator shape and estimator orientation relative to the gradient orientation has not previously been conducted. Monte Carlo simulation with an underlying assumption of a homogeneous transmissivity field is used to examine the effects of uncorrelated measurement error on a series of eleven different three-point estimators having the same size but different shapes as a function of the orientation of the true gradient. Results show that the variance in the estimate of both the magnitude and the orientation increase linearly with the increase in measurement error in agreement with the results of stochastic theory for estimators that are small relative to the correlation length of transmissivity (Mizell, 1980). Three-point estimator shapes with base to height ratios between 0.5 and 5.0 provide accurate estimates of magnitude and orientation across all orientations of the true gradient. As an example, these results are applied to data collected from a monitoring network of 25 wells at the WIPP site during two different time periods. The simulation results are used to reduce the set of all possible combinations of three wells to those combinations with acceptable measurement errors relative to the amount of head drop across the estimator and base to height ratios between 0.5 and 5.0. These limitations reduce the set of all possible well combinations by 98 percent and show that size alone as defined by triangle area is not a valid
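
    The estimator itself is compact: fitting the plane h(x, y) = a + b*x + c*y to three head measurements yields the gradient estimate. The Python sketch below, with illustrative well positions, heads, and error levels, shows how head measurement error feeds into the magnitude and orientation estimates.

        import numpy as np

        # Minimal sketch: a three-point hydraulic gradient estimate. The
        # hydraulic gradient is (-b, -c), pointing from high to low head.
        rng = np.random.default_rng(10)
        xy = np.array([[0.0, 0.0], [500.0, 60.0], [230.0, 420.0]])  # wells (m)
        true_grad = np.array([-0.002, -0.001])                      # head slope
        heads = 100.0 + xy @ true_grad

        A = np.column_stack([np.ones(3), xy])    # [1, x, y] design matrix
        for sigma in (0.0, 0.01, 0.05):          # head measurement error (m)
            h = heads + rng.standard_normal(3) * sigma
            a, b, c = np.linalg.solve(A, h)      # exact fit through 3 points
            grad = -np.array([b, c])
            print(f"sigma={sigma:.2f} m: |grad|={np.linalg.norm(grad):.5f}, "
                  f"direction={np.degrees(np.arctan2(grad[1], grad[0])):.1f} deg")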

  4. Management of human error by design

    NASA Technical Reports Server (NTRS)

    Wiener, Earl

    1988-01-01

    Design-induced errors and error prevention as well as the concept of lines of defense against human error are discussed. The concept of human error prevention, whose main focus has been on hardware, is extended to other features of the human-machine interface vulnerable to design-induced errors. In particular, it is pointed out that human factors and human error prevention should be part of the process of transport certification. Also, the concept of error tolerant systems is considered as a last line of defense against error.

  5. Consumer Acceptance of Dry Dog Food Variations

    PubMed Central

    Donfrancesco, Brizio Di; Koppel, Kadri; Swaney-Stueve, Marianne; Chambers, Edgar

    2014-01-01

    Simple Summary The objectives of this study were to compare the acceptance of different dry dog food products by consumers, determine consumer clusters for acceptance, and identify the characteristics of dog food that drive consumer acceptance. Pet owners evaluated dry dog food samples available in the US market. The results indicated that appearance of the sample, especially the color, influenced pet owner’s overall liking more than the aroma of the product. Abstract The objectives of this study were to compare the acceptance of different dry dog food products by consumers, determine consumer clusters for acceptance, and identify the characteristics of dog food that drive consumer acceptance. Eight dry dog food samples available in the US market were evaluated by pet owners. In this study, consumers evaluated overall liking, aroma, and appearance liking of the products. Consumers were also asked to predict their purchase intent, their dog’s liking, and cost of the samples. The results indicated that appearance of the sample, especially the color, influenced pet owner’s overall liking more than the aroma of the product. Overall liking clusters were not related to income, age, gender, or education, indicating that general consumer demographics do not appear to play a main role in individual consumer acceptance of dog food products. PMID:26480043

  6. Resolving the Physics of Error Field Correction Through Error Field Proxy Experiments in DIII-D

    NASA Astrophysics Data System (ADS)

    Buttery, R. J.; Ferraro, N. M.; La Haye, R. J.; Schaffer, M. J.; Strait, E. J.; Hanson, J. M.; Park, J.-K.; Reimerdes, H.

    2012-10-01

    Recent studies have determined the scale and likely origins of limitations to error field correction by using DIII-D's multiple coil arrays to apply known large-amplitude proxy error fields and attempting correction with additional coils of different structure. It was found that even with pure n=1 proxy fields and carefully optimized correction fields, the benefits of correction were substantially limited, at the ~50% level in terms of low density access. This indicates coupling of residual fields either through higher-order resonances and/or through non-resonant braking of the plasma. The interpretation is confirmed by modeling with the IPEC code, which shows that the correction process reduces resonant components but increases non-resonant NTV damping, thus decreasing rotation and easing penetration of residual resonant fields. The result is significant, suggesting multiple field components must be compensated to achieve good correction, and that the best approach may be to minimize the total field in the plasma by cancelling error fields close to their source or close to the plasma.

  7. FRamework Assessing Notorious Contributing Influences for Error (FRANCIE): Perspective on Taxonomy Development to Support Error Reporting and Analysis

    SciTech Connect

    Lon N. Haney; David I. Gertman

    2003-04-01

    Beginning in the 1980s a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables often was lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid-1990s, Boeing, America West Airlines, NASA Ames Research Center, and INEEL partnered in a NASA-sponsored Advanced Concepts grant to: assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included the need for a method to identify and prioritize task and contextual characteristics affecting human reliability. Other needs identified included developing comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant, FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling, are offered as a means to help direct useful data collection strategies.

  8. Approaches to acceptable risk: a critical guide

    SciTech Connect

    Fischhoff, B.; Lichtenstein, S.; Slovic, P.; Keeney, R.; Derby, S.

    1980-12-01

    Acceptable-risk decisions are an essential step in the management of technological hazards. In many situations, they constitute the weak (or missing) link in the management process. The absence of an adequate decision-making methodology often produces indecision, inconsistency, and dissatisfaction. The result is neither good for hazard management nor good for society. This report offers a critical analysis of the viability of various approaches as guides to acceptable-risk decisions. This report seeks to define acceptable-risk decisions and to examine some frequently proposed, but inappropriate, solutions. 255 refs., 22 figs., 25 tabs.

  9. Hanford Site Solid Waste Acceptance Criteria

    SciTech Connect

    Not Available

    1993-11-17

    This manual defines the Hanford Site radioactive, hazardous, and sanitary solid waste acceptance criteria. Criteria in the manual represent a guide for meeting state and federal regulations; DOE Orders; Hanford Site requirements; and other rules, regulations, guidelines, and standards as they apply to acceptance of radioactive and hazardous solid waste at the Hanford Site. It is not the intent of this manual to be all inclusive of the regulations; rather, it is intended that the manual provide the waste generator with only the requirements that waste must meet in order to be accepted at Hanford Site TSD facilities.

  10. Multiple-Particle Interference and Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Steane, Andrew

    1996-11-01

    The concept of multiple-particle interference is discussed, using insights provided by the classical theory of error-correcting codes. This leads to a discussion of error correction in a quantum communication channel or a quantum computer. Methods of error correction in the quantum regime are presented, and their limitations assessed. A quantum channel can recover from arbitrary decoherence of x qubits if K bits of quantum information are encoded using n quantum bits, where K/n can be greater than 1 - 2H(2x/n) but must be less than 1 - 2H(x/n). This implies exponential reduction of decoherence with only a polynomial increase in the computing resources required. Therefore quantum computation can be made free of errors in the presence of physically realistic levels of decoherence. The methods also allow isolation of quantum communication from noise and eavesdropping (quantum privacy amplification).
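
    The rate bounds quoted in the abstract are easy to evaluate numerically. The short sketch below is illustrative only (the values of n and x are arbitrary examples); it computes 1 - 2H(2x/n) and 1 - 2H(x/n), with H the binary entropy function:

        from math import log2

        def H(p):
            """Binary entropy in bits; H(0) = H(1) = 0 by convention."""
            if p <= 0.0 or p >= 1.0:
                return 0.0
            return -p * log2(p) - (1 - p) * log2(1 - p)

        n, x = 1000, 10  # n physical qubits, recovery from decoherence of x qubits
        lower = 1 - 2 * H(2 * x / n)  # K/n can be greater than this
        upper = 1 - 2 * H(x / n)      # ...but must be less than this
        print(f"{lower:.4f} < K/n < {upper:.4f}")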

  11. Error analysis of two methods for range-images registration

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoli; Yin, Yongkai; Li, Ameng; He, Dong; Peng, Xiang

    2010-08-01

    With the improvements in range image registration techniques, this paper focuses on error analysis of two registration methods widely applied in industrial metrology, covering algorithm comparison, matching error, computational complexity, and application areas. One method is iterative closest points (ICP), which can achieve accurate matching results with small error; however, some limitations restrict its application in automatic and fast metrology. The other method is based on landmarks. We also present an algorithm for registering multiple range images with non-coding landmarks, including automatic identification and sub-pixel location of the landmarks, 3D rigid motion, point pattern matching, global iterative optimization techniques, etc. The registration results of the two methods are illustrated and a thorough error analysis is performed.
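
    For readers unfamiliar with the first method, the following is a minimal point-to-point ICP sketch (an assumed textbook formulation, not the paper's implementation): closest points are matched with a k-d tree, and the best rigid transform is recovered with the SVD-based Kabsch solution.

        import numpy as np
        from scipy.spatial import cKDTree

        def best_rigid_transform(P, Q):
            """Least-squares rotation R and translation t mapping P onto Q."""
            cp, cq = P.mean(axis=0), Q.mean(axis=0)
            U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:   # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, cq - R @ cp

        def icp(source, target, iters=50, tol=1e-8):
            tree = cKDTree(target)
            src, prev_err = source.copy(), np.inf
            for _ in range(iters):
                dists, idx = tree.query(src)   # closest-point matching
                R, t = best_rigid_transform(src, target[idx])
                src = src @ R.T + t
                err = dists.mean()
                if abs(prev_err - err) < tol:  # converged
                    break
                prev_err = err
            return src, err

    The ICP limitations the abstract alludes to are visible even in this sketch: a poor starting alignment or partial overlap can send the closest-point matching into a local minimum, which is why landmark-based methods suit automatic metrology better.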

  12. Transfer Error and Correction Approach in Mobile Network

    NASA Astrophysics Data System (ADS)

    Xiao-kai, Wu; Yong-jin, Shi; Da-jin, Chen; Bing-he, Ma; Qi-li, Zhou

    With the development of information technology and social progress, the demand for information has become increasingly diverse: people want to communicate easily, quickly, and flexibly, wherever and whenever, via voice, data, images, video, and other means. Because visual information is direct and vivid, image and video transmission has received widespread attention. Although third-generation mobile communication systems and IP networks have emerged and developed rapidly, making video communication a major wireless service, real wireless and IP channels introduce errors, such as those produced by multipath fading on wireless channels and by packet loss on IP networks. Because of channel bandwidth limitations, video data must be heavily compressed before transmission, and compressed data is very sensitive to channel errors, which can cause a serious decline in image quality.

  13. Sub-nanometer periodic nonlinearity error in absolute distance interferometers.

    PubMed

    Yang, Hongxing; Huang, Kaiqi; Hu, Pengcheng; Zhu, Pengfei; Tan, Jiubin; Fan, Zhigang

    2015-05-01

    Periodic nonlinearity, which can produce errors at the nanometer scale, has become a main problem limiting absolute distance measurement accuracy. In order to eliminate this error, a new integrated interferometer with a non-polarizing beam splitter is developed, which eliminates frequency and/or polarization mixing. Furthermore, the strict requirement on the laser source polarization is greatly relaxed. By combining a retro-reflector and an angle prism, the reference and measuring beams can be spatially separated so that their optical paths do not overlap. Thus the main causes of the periodic nonlinearity error, i.e., frequency and/or polarization mixing and beam leakage, are eliminated. Experimental results indicate that the periodic phase error is kept within 0.0018°.

  14. Three-Dimensional Turbulent RANS Adjoint-Based Error Correction

    NASA Technical Reports Server (NTRS)

    Park, Michael A.

    2003-01-01

    Engineering problems commonly require functional outputs of computational fluid dynamics (CFD) simulations with specified accuracy. These simulations are performed with limited computational resources. Computable error estimates offer the possibility of quantifying accuracy on a given mesh and predicting a fine grid functional on a coarser mesh. Such an estimate can be computed by solving the flow equations and the associated adjoint problem for the functional of interest. An adjoint-based error correction procedure is demonstrated for transonic inviscid and subsonic laminar and turbulent flow. A mesh adaptation procedure is formulated to target uncertainty in the corrected functional and terminate when the error remaining in the calculation is less than a user-specified error tolerance. This adaptation scheme is shown to yield anisotropic meshes with corrected functionals that are more accurate for a given number of grid points than isotropic adapted and uniformly refined grids.
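
    The correction mechanism can be shown on a generic linear system (an assumed toy setup, not the paper's CFD code). For A u = f with output J(u) = g'u, an approximate solution u_H is corrected by weighting its residual with the adjoint solution psi, where A' psi = g. For a linear problem the correction is exact; in nonlinear CFD it is approximate:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 50
        A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
        f, g = rng.standard_normal(n), rng.standard_normal(n)

        u = np.linalg.solve(A, f)                # "fine" reference solution
        u_H = u + 1e-3 * rng.standard_normal(n)  # stand-in for a coarse solve

        psi = np.linalg.solve(A.T, g)            # adjoint solution
        J_H = g @ u_H
        J_corr = J_H + psi @ (f - A @ u_H)       # adjoint-weighted residual

        print("error before correction:", abs(g @ u - J_H))
        print("error after correction: ", abs(g @ u - J_corr))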

  15. Multichannel error correction code decoder

    NASA Technical Reports Server (NTRS)

    Wagner, Paul K.; Ivancic, William D.

    1993-01-01

    A brief overview of a processing satellite for a mesh very-small-aperture (VSAT) communications network is provided. The multichannel error correction code (ECC) decoder system, the uplink signal generation and link simulation equipment, and the time-shared decoder are described. The testing is discussed. Applications of the time-shared decoder are recommended.

  16. Typical errors of ESP users

    NASA Astrophysics Data System (ADS)

    Eremina, Svetlana V.; Korneva, Anna A.

    2004-07-01

    The paper presents an analysis of the errors made by ESP (English for specific purposes) users that are considered typical. They result from misuse of the resources of English grammar and tend to resist correction. Their origin and places of occurrence are also discussed.

  17. Error Analysis and Remedial Teaching.

    ERIC Educational Resources Information Center

    Corder, S. Pit

    The purpose of this paper is to analyze the role of error analysis in specifying and planning remedial treatment in second language learning. Part 1 discusses situations that demand remedial action. This is a quantitative assessment that requires measurement of the varying degrees of disparity between the learner's knowledge and the demands of the…

  18. Sampling Errors of Variance Components.

    ERIC Educational Resources Information Center

    Sanders, Piet F.

    A study on sampling errors of variance components was conducted within the framework of generalizability theory by P. L. Smith (1978). The study used an intuitive approach for solving the problem of how to allocate the number of conditions to different facets in order to produce the most stable estimate of the universe score variance. Optimization…

  19. The error of our ways

    NASA Astrophysics Data System (ADS)

    Swartz, Clifford E.

    1999-10-01

    In Victorian literature it was usually some poor female who came to see the error of her ways. How prescient of her! How I wish that all writers of manuscripts for The Physics Teacher would come to similar recognition of this centerpiece of measurement. For, Brothers and Sisters, we all err.

  20. Amplify Errors to Minimize Them

    ERIC Educational Resources Information Center

    Stewart, Maria Shine

    2009-01-01

    In this article, the author offers her experience of modeling mistakes and writing spontaneously in the computer classroom to get students' attention and elicit their editorial response. She describes how she taught her class about major sentence errors--comma splices, run-ons, and fragments--through her Sentence Meditation exercise, a rendition…

  1. Having Fun with Error Analysis

    ERIC Educational Resources Information Center

    Siegel, Peter

    2007-01-01

    We present a fun activity that can be used to introduce students to error analysis: the M&M game. Students are told to estimate the number of individual candies plus uncertainty in a bag of M&M's. The winner is the group whose estimate brackets the actual number with the smallest uncertainty. The exercise produces enthusiastic discussions and…

  2. RM2: rms error comparisons

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1976-01-01

    The root-mean-square error performance measure is used to compare the relative performance of several widely known source coding algorithms with the RM2 image data compression system. The results demonstrate that RM2 has a uniformly significant performance advantage.

  3. The Zero Product Principle Error.

    ERIC Educational Resources Information Center

    Padula, Janice

    1996-01-01

    Argues that the challenge for teachers of algebra in Australia is to find ways of making the structural aspects of algebra accessible to a greater percentage of students. Uses the zero product principle to provide an example of a common student error grounded in the difficulty of understanding the structure of algebra. (DDR)

  4. Competing Criteria for Error Gravity.

    ERIC Educational Resources Information Center

    Hughes, Arthur; Lascaratou, Chryssoula

    1982-01-01

    Presents study in which native-speaker teachers of English, Greek teachers of English, and English native-speakers who were not teachers judged seriousness of errors made by Greek-speaking students of English in their last year of high school. Results show native English speakers were more lenient than Greek teachers, and three groups differed in…

  5. What Is a Reading Error?

    ERIC Educational Resources Information Center

    Labov, William; Baker, Bettina

    2010-01-01

    Early efforts to apply knowledge of dialect differences to reading stressed the importance of the distinction between differences in pronunciation and mistakes in reading. This study develops a method of estimating the probability that a given oral reading that deviates from the text is a true reading error by observing the semantic impact of the…

  6. Error Processing in Huntington's Disease

    PubMed Central

    Andrich, Jürgen; Gold, Ralf; Falkenstein, Michael

    2006-01-01

    Background Huntington's disease (HD) is a genetic disorder expressed by a degeneration of the basal ganglia, inter alia accompanied by dopaminergic alterations. These dopaminergic alterations are related to genetic factors, i.e., CAG-repeat expansion. The error (related) negativity (Ne/ERN), a cognitive event-related potential related to performance monitoring, is generated in the anterior cingulate cortex (ACC) and is supposed to depend on the dopaminergic system. The Ne is reduced in Parkinson's Disease (PD). Due to a dopaminergic deficit in HD, a reduction of the Ne is also likely. Furthermore, it is assumed that movement dysfunction emerges as a consequence of dysfunctional error-feedback processing. Since dopaminergic alterations are related to the CAG repeat, a Ne reduction may furthermore also be related to the genetic disease load. Methodology/Principal Findings We assessed the error negativity (Ne) in a speeded reaction task under consideration of the underlying genetic abnormalities. HD patients showed a specific reduction in the Ne, which suggests impaired error processing in these patients. Furthermore, the Ne was closely related to CAG-repeat expansion. Conclusions/Significance The reduction of the Ne is likely to be an effect of the dopaminergic pathology. The result resembles findings in Parkinson's Disease. As such the Ne might be a measure for the integrity of striatal dopaminergic output function. The relation to the CAG-repeat expansion indicates that the Ne could serve as a gene-associated “cognitive” biomarker in HD. PMID:17183717

  7. ISMP Medication Error Report Analysis.

    PubMed

    2013-10-01

    These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.

  8. ISMP Medication Error Report Analysis.

    PubMed

    2014-01-01

    These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.

  9. ISMP Medication Error Report Analysis.

    PubMed

    2013-05-01

    These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.

  10. ISMP Medication Error Report Analysis.

    PubMed

    2013-12-01

    These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.

  11. ISMP Medication Error Report Analysis.

    PubMed

    2013-11-01

    These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.

  12. ISMP Medication error report analysis.

    PubMed

    2013-04-01

    These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.

  13. ISMP Medication Error Report Analysis.

    PubMed

    2013-06-01

    These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.

  14. ISMP Medication Error Report Analysis.

    PubMed

    2013-01-01

    These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.

  15. ISMP Medication Error Report Analysis.

    PubMed

    2013-02-01

    These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.

  16. ISMP Medication Error Report Analysis.

    PubMed

    2013-03-01

    These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.

  17. ISMP Medication Error Report Analysis.

    PubMed

    2013-09-01

    These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.

  18. ISMP Medication Error Report Analysis.

    PubMed

    2013-07-01

    These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.

  19. Reduced discretization error in HZETRN

    SciTech Connect

    Slaba, Tony C.; Blattnig, Steve R.; Tweed, John

    2013-02-01

    The deterministic particle transport code HZETRN is an efficient analysis tool for studying the effects of space radiation on humans, electronics, and shielding materials. In a previous work, numerical methods in the code were reviewed, and new methods were developed that further improved efficiency and reduced overall discretization error. It was also shown that the remaining discretization error could be attributed to low energy light ions (A < 4) with residual ranges smaller than the physical step-size taken by the code. Accurately resolving the spectrum of low energy light particles is important in assessing risk associated with astronaut radiation exposure. In this work, modifications to the light particle transport formalism are presented that accurately resolve the spectrum of low energy light ion target fragments. The modified formalism is shown to significantly reduce overall discretization error and allows a physical approximation to be removed. For typical step-sizes and energy grids used in HZETRN, discretization errors for the revised light particle transport algorithms are shown to be less than 4% for aluminum and water shielding thicknesses as large as 100 g/cm² exposed to both solar particle event and galactic cosmic ray environments.

  20. The Errors of Our Ways

    ERIC Educational Resources Information Center

    Kane, Michael

    2011-01-01

    Errors don't exist in our data, but they serve a vital function. Reality is complicated, but our models need to be simple in order to be manageable. We assume that attributes are invariant over some conditions of observation, and once we do that we need some way of accounting for the variability in observed scores over these conditions of…

  1. THE SIGNIFICANCE OF LEARNER'S ERRORS.

    ERIC Educational Resources Information Center

    CORDER, S.P.

    Errors (not mistakes) made in both second language learning and child language acquisition provide evidence that a learner uses a definite system of language at every point in his development. This system, or "built-in syllabus," may yield a more efficient sequence than the instructor-generated sequence because it is more meaningful to the…

  2. Theory of Test Translation Error

    ERIC Educational Resources Information Center

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel

    2009-01-01

    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  3. The Impact of Medical Interpretation Method on Time and Errors

    PubMed Central

    Kapelusznik, Luciano; Prakash, Kavitha; Gonzalez, Javier; Orta, Lurmag Y.; Tseng, Chi-Hong; Changrani, Jyotsna

    2007-01-01

    Background Twenty-two million Americans have limited English proficiency. Interpreting for limited English proficient patients is intended to enhance communication and delivery of quality medical care. Objective Little is known about the impact of various interpreting methods on interpreting speed and errors. This investigation addresses this important gap. Design Four scripted clinical encounters were used to enable the comparison of equivalent clinical content. These scripts were run across four interpreting methods, including remote simultaneous, remote consecutive, proximate consecutive, and proximate ad hoc interpreting. The first 3 methods utilized professional, trained interpreters, whereas the ad hoc method utilized untrained staff. Measurements Audiotaped transcripts of the encounters were coded, using a prespecified algorithm to determine medical error and linguistic error, by coders blinded to the interpreting method. Encounters were also timed. Results Remote simultaneous medical interpreting (RSMI) encounters averaged 12.72 vs 18.24 minutes for the next fastest mode (proximate ad hoc) (p = 0.002). There were 12 times more medical errors of moderate or greater clinical significance among utterances in non-RSMI encounters compared to RSMI encounters (p = 0.0002). Conclusions Although limited by the small number of interpreters involved, our study found that RSMI resulted in fewer medical errors and was faster than non-RSMI methods of interpreting. PMID:17957418

  4. Chinese Nurses' Acceptance of PDA: A Cross-Sectional Survey Using a Technology Acceptance Model.

    PubMed

    Wang, Yanling; Xiao, Qian; Sun, Liu; Wu, Ying

    2016-01-01

    This study explores Chinese nurses' acceptance of PDAs, using a questionnaire based on the framework of the Technology Acceptance Model (TAM). 357 nurses were involved in the study. The results show mean acceptance scores of 3.18 to 3.36 across the four TAM dimensions. Younger age, higher professional title, longer previous usage time, and more experience with PDAs were all associated with greater acceptance. Hospital administrators may therefore adjust strategies to enhance nurses' acceptance of PDAs and promote their wide application.

  5. Toward a cognitive taxonomy of medical errors.

    PubMed

    Zhang, Jiajie; Patel, Vimla L; Johnson, Todd R; Shortliffe, Edward H

    2002-01-01

    One critical step in addressing and resolving the problems associated with human errors is the development of a cognitive taxonomy of such errors. In the case of errors, such a taxonomy may be developed (1) to categorize all types of errors along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to explain why, and even predict when and where, a specific error will occur, and (4) to generate intervention strategies for each type of error. Based on Reason's (1992) definition of human errors and Norman's (1986) cognitive theory of human action, we have developed a preliminary action-based cognitive taxonomy of errors that largely satisfies these four criteria in the domain of medicine. We discuss initial steps for applying this taxonomy to develop an online medical error reporting system that not only categorizes errors but also identifies problems and generates solutions.
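
    As a toy illustration of criteria (1) and (2) above, an action-based taxonomy can be modeled as a mapping from error reports to cognitive stages. The sketch below is hypothetical: the stage names follow Norman's seven-stage action cycle, but the example report, fields, and intervention are illustrative placeholders, not the authors' system.

        from enum import Enum

        class Stage(Enum):
            GOAL = "establishing the goal"
            INTENTION = "forming the intention"
            SPECIFICATION = "specifying the action"
            EXECUTION = "executing the action"
            PERCEPTION = "perceiving the system state"
            INTERPRETATION = "interpreting the state"
            EVALUATION = "evaluating the outcome"

        # Each reported error is tagged with the stage where it arose, which
        # in turn suggests an intervention strategy (criterion 4 above).
        report = {"description": "wrong drug dose entered",
                  "stage": Stage.EXECUTION,
                  "intervention": "forcing function / dose-range check"}
        print(report["stage"].value)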

  6. Problem of data quality and the limitations of the infrastructure approach

    NASA Astrophysics Data System (ADS)

    Behlen, Fred M.; Sayre, Richard E.; Rackus, Edward; Ye, Dingzhong

    1998-07-01

    The 'Infrastructure Approach' is a PACS implementation methodology wherein the archive, network and information systems interfaces are acquired first, and workstations are installed later. The approach allows building a history of archived image data, so that most prior examinations are available in digital form when workstations are deployed. A limitation of the Infrastructure Approach is that the deferred use of digital image data defeats many data quality management functions that are provided automatically by human mechanisms when data is immediately used for the completion of clinical tasks. If the digital data is used solely for archiving while reports are interpreted from film, the radiologist serves only as a check against lost films, and another person must be designated as responsible for the quality of the digital data. Data from the Radiology Information System and the PACS were analyzed to assess the nature and frequency of system and data quality errors. The error level was found to be acceptable if supported by auditing and error resolution procedures requiring additional staff time, and in any case was better than the loss rate of a hardcopy film archive. It is concluded that the problem of data quality compromises but does not negate the value of the Infrastructure Approach. The Infrastructure Approach should best be employed only to a limited extent, and that any phased PACS implementation should have a substantial complement of workstations dedicated to softcopy interpretation for at least some applications, and with full deployment following not long thereafter.

  7. On Logical Error Underlying Classical Mechanics

    NASA Astrophysics Data System (ADS)

    Kalanov, Temur Z.

    2012-03-01

    The logical analysis of the generally accepted description of mechanical motion of a material point M in classical mechanics is proposed. The key idea of the analysis is as follows. Let point M move in the positive direction of the axis Ox. Motion is characterized by a change of the coordinate x(t), a continuous function of time t (because motion is a change in general). If Δt → 0, then Δx → 0, i.e., according to practice and formal logic, the value of the coordinate does not change and, hence, motion does not exist. But, contrary to practice and formal logic, differential calculus and classical mechanics contain the assertion that the velocity v = lim_{Δt→0} Δx/Δt exists without motion. Then the velocity lim_{Δt→0} Δx/Δt is not a real (i.e., physical) quantity, but a fictitious quantity. Therefore, use of a non-physical (unreal) quantity (i.e., the first and second derivatives of a function) in classical mechanics is a logical error.
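
    For context, the limit at issue can be checked numerically. For x(t) = t² at t = 1, standard calculus gives dx/dt = 2, and the difference quotient settles to that finite value even as Δx and Δt both shrink (illustrative snippet, not from the abstract):

        x = lambda t: t ** 2
        t0 = 1.0
        for dt in [1e-1, 1e-3, 1e-5, 1e-7]:
            dx = x(t0 + dt) - x(t0)
            print(f"dt={dt:.0e}  dx={dx:.3e}  dx/dt={dx / dt:.7f}")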

  8. WRAP low level waste (LLW) glovebox acceptance test report

    SciTech Connect

    Leist, K.J.

    1998-02-17

    On June 28, 1997, the Low Level Waste (LLW) glovebox was tested using glovebox acceptance test procedure 13031A-85. The primary focus of the glovebox acceptance test was to examine control system interlocks, display menus, alarms, and operator messages. Limited mechanical testing involving the drum ports, hoists, drum lifter, compacted drum lifter, drum tipper, transfer car, conveyors, lidder/delidder device and the supercompactor was also conducted. As of November 24, 1997, 2 of the 131 test exceptions that affect the LLW glovebox remain open. These items will be tracked and closed via the WRAP Master Test Exception Database. As part of Test Exception resolution/closure, the responsible individual closing the Test Exception performs a retest of the affected item(s) to ensure the identified deficiency is corrected, and/or to test items not previously available to support testing. Test Exceptions are provided as appendices to this report.

  9. Acceptance Test Plan for the Sludge Pickup Adaptor

    SciTech Connect

    PITNER, A.L.

    2000-03-29

    This test plan documents the acceptance testing of the sludge pickup adapter for potential use during PSI Phases 3 and 4 fuel cleanliness inspection activities. The adapter is attached to the strainer tip of the vacuum wand and used to suction up residual sludge captured in a sludge collection tray. The material is vacuumed into a chamber of known volume in the sludge pickup adapter. The device serves as an aid in helping to determine whether the observed quantity of sludge is within allowable limits (1.4 cm³ per fuel assembly). This functionality test involves underwater testing in the 305 Building Cold Test Facility to verify that sludge can be successfully vacuumed from a collection tray. Ancillary activities in this acceptance test include demonstration that the sludge pickup adapter can be successfully attached to and detached from the vacuum wand underwater.

  10. Behavioral genetics: scientific and social acceptance.

    PubMed

    Lorenz, David R

    2003-01-01

    Human behavioral genetics can be broadly defined as the attempt to characterize and define the genetic or hereditary basis for human behavior. Examination of the history of these scientific enterprises reveals episodes of controversy, and an apparent distinction between scientific and social acceptance of the genetic nature of such complex behaviors. This essay will review the history and methodology of behavioral genetics research, including a more detailed look at case histories involving behavioral genetic research for aggressive behavior and alcoholism. It includes a discussion of the scientific versus social qualities of the acceptance of behavioral genetics research, as well as the development of a general model for scientific acceptance involving the researchers, the scientific literature, the scientific peer group, the mainstream media, and the public at large. From this model follows a discussion of the means and complications by which behavioral genetics research may be accepted by society, and an analysis of how future studies might be conducted.

  11. 7 CFR 1205.326 - Acceptance.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE COTTON RESEARCH AND PROMOTION Cotton Research and Promotion Order Cotton Board § 1205.326 Acceptance. Any person selected by the Secretary as...

  12. 7 CFR 1205.326 - Acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE COTTON RESEARCH AND PROMOTION Cotton Research and Promotion Order Cotton Board § 1205.326 Acceptance. Any person selected by the Secretary as...

  13. 7 CFR 1205.326 - Acceptance.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE COTTON RESEARCH AND PROMOTION Cotton Research and Promotion Order Cotton Board § 1205.326 Acceptance. Any person selected by the Secretary as...

  14. Integrated Model for E-Learning Acceptance

    NASA Astrophysics Data System (ADS)

    Ramadiani; Rodziah, A.; Hasan, S. M.; Rusli, A.; Noraini, C.

    2016-01-01

    E-learning will not work if the system is not used in accordance with user needs. The user interface is very important for encouraging use of the application. Many theories discuss user interface usability evaluation and technology acceptance separately; correlating interface usability evaluation with user acceptance could enhance the e-learning process. Therefore, an evaluation model for e-learning interface acceptance is considered important to investigate. The aim of this study is to propose an integrated e-learning user interface acceptance evaluation model. This model combines several theories of e-learning interface measurement, such as user learning style, usability evaluation, and user benefit. We formulated these constructs into questionnaires administered to 125 English Language School (ELS) students. The statistical analysis used structural equation modeling with LISREL v8.80 and MANOVA.

  15. 49 CFR 193.2303 - Construction acceptance.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...: FEDERAL SAFETY STANDARDS Construction § 193.2303 Construction acceptance. No person may place in service any component until it passes all applicable inspections and tests prescribed by this subpart and...

  16. 49 CFR 193.2303 - Construction acceptance.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...: FEDERAL SAFETY STANDARDS Construction § 193.2303 Construction acceptance. No person may place in service any component until it passes all applicable inspections and tests prescribed by this subpart and...

  17. 49 CFR 193.2303 - Construction acceptance.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...: FEDERAL SAFETY STANDARDS Construction § 193.2303 Construction acceptance. No person may place in service any component until it passes all applicable inspections and tests prescribed by this subpart and...

  18. 49 CFR 193.2303 - Construction acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...: FEDERAL SAFETY STANDARDS Construction § 193.2303 Construction acceptance. No person may place in service any component until it passes all applicable inspections and tests prescribed by this subpart and...

  19. Gas characterization system software acceptance test report

    SciTech Connect

    Vo, C.V.

    1996-03-28

    This document details the results of software acceptance testing of gas characterization systems. The gas characterization systems will be used to monitor the vapor spaces of waste tanks known to contain measurable concentrations of flammable gases.

  20. Nevada Test Site Waste Acceptance Criteria

    SciTech Connect

    U.S. Department of Energy, Nevada Operations Office, Waste Acceptance Criteria

    1999-05-01

    This document provides the requirements, terms, and conditions under which the Nevada Test Site will accept low-level radioactive and mixed waste for disposal; and transuranic and transuranic mixed waste for interim storage at the Nevada Test Site.

  1. Latent human error analysis and efficient improvement strategies by fuzzy TOPSIS in aviation maintenance tasks.

    PubMed

    Chiu, Ming-Chuan; Hsieh, Min-Chih

    2016-05-01

    The purposes of this study were to develop a latent human error analysis process, to explore the factors of latent human error in aviation maintenance tasks, and to provide an efficient improvement strategy for addressing those errors. First, we used HFACS and RCA to define the error factors related to aviation maintenance tasks. Fuzzy TOPSIS with four criteria was applied to evaluate the error factors. Results show that 1) adverse physiological states, 2) physical/mental limitations, and 3) coordination, communication, and planning are the factors related to airline maintenance tasks that could be addressed easily and efficiently. This research establishes a new analytic process for investigating latent human error and provides a strategy for analyzing human error using fuzzy TOPSIS. Our analysis process complements shortages in existing methodologies by incorporating improvement efficiency, and it enhances the depth and broadness of human error analysis methodology.
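
    For orientation, classical (crisp) TOPSIS ranks alternatives by closeness to an ideal solution; the study itself uses a fuzzy variant, so the sketch below is only a simplified illustration, and the matrix values, weights, and row labels are hypothetical placeholders:

        import numpy as np

        def topsis(X, weights, benefit):
            """X: (alternatives x criteria); benefit[j] True if higher is better."""
            Z = X / np.linalg.norm(X, axis=0)  # vector-normalize each criterion
            V = Z * weights                    # weighted normalized matrix
            ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
            anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
            d_pos = np.linalg.norm(V - ideal, axis=1)
            d_neg = np.linalg.norm(V - anti, axis=1)
            return d_neg / (d_pos + d_neg)     # closeness: higher ranks first

        X = np.array([[7.0, 9.0, 9.0, 8.0],   # e.g., adverse physiological states
                      [8.0, 7.0, 8.0, 7.0],   # e.g., physical/mental limitations
                      [9.0, 6.0, 8.0, 9.0]])  # e.g., coordination/communication
        w = np.array([0.3, 0.2, 0.25, 0.25])
        print(topsis(X, w, benefit=np.array([True, True, True, True])))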

  2. Report on errors in pretransfusion testing from a tertiary care center: A step toward transfusion safety

    PubMed Central

    Sidhu, Meena; Meenia, Renu; Akhter, Naveen; Sawhney, Vijay; Irm, Yasmeen

    2016-01-01

    Introduction: Errors in the process of pretransfusion testing for blood transfusion can occur at any stage from collection of the sample to administration of the blood component. The present study was conducted to analyze the errors that threaten patients’ transfusion safety and the actual harm/serious adverse events that occurred to patients due to these errors. Materials and Methods: The prospective study was conducted in the Department of Transfusion Medicine, Shri Maharaja Gulab Singh Hospital, Government Medical College, Jammu, India from January 2014 to December 2014 for a period of 1 year. Errors were defined as any deviation from established policies and standard operating procedures. A near-miss event was defined as an error that did not reach the patient. Location and time of occurrence of the events/errors were also noted. Results: A total of 32,672 requisitions for the transfusion of blood and blood components were received for typing and cross-matching. Out of these, 26,683 products were issued to the various clinical departments. A total of 2,229 errors were detected over a period of 1 year. Near-miss events constituted 53% of the errors, and actual harmful events due to errors occurred in 0.26% of the patients. Sample labeling errors (2.4% of all requisitions received), inappropriate requests for blood components (2%), and information on requisition forms not matching that on the sample (1.5%) were the most frequent errors in clinical services. In transfusion services, the most common event was accepting a sample in error, with a frequency of 0.5% of all requisitions. ABO-incompatible hemolytic reactions were the most frequent harmful event, with a frequency of 2.2 per 10,000 transfusions. Conclusion: Sample labeling, inappropriate request, and sample received in error were the most frequent high-risk errors. PMID:27011670

  3. Acceptance Test Plan for ANSYS Software

    SciTech Connect

    CREA, B.A.

    2000-10-25

    This plan governs the acceptance testing of the ANSYS software (Full Mechanical Release 5.5) for use on Project Hanford Management Contract (PHMC) computer systems (either UNIX or Microsoft Windows/NT). There are two phases to the acceptance testing covered by this test plan: program execution in accordance with the guidance provided in installation manuals, and ensuring results of the execution are consistent with the expected physical behavior of the system being modeled.

  4. Reducing collective quantum state rotation errors with reversible dephasing

    SciTech Connect

    Cox, Kevin C.; Norcia, Matthew A.; Weiner, Joshua M.; Bohnet, Justin G.; Thompson, James K.

    2014-12-29

    We demonstrate that reversible dephasing via inhomogeneous broadening can greatly reduce collective quantum state rotation errors, and observe the suppression of rotation errors by more than 21 dB in the context of collective population measurements of the spin states of an ensemble of 2.1×10⁵ laser cooled and trapped ⁸⁷Rb atoms. The large reduction in rotation noise enables direct resolution of spin state populations 13(1) dB below the fundamental quantum projection noise limit. Further, the spin state measurement projects the system into an entangled state with 9.5(5) dB of directly observed spectroscopic enhancement (squeezing) relative to the standard quantum limit, whereas no enhancement would have been obtained without the suppression of rotation errors.

  5. Development of a Drosophila cell-based error correction assay.

    PubMed

    Salemi, Jeffrey D; McGilvray, Philip T; Maresca, Thomas J

    2013-01-01

    Accurate transmission of the genome through cell division requires microtubules from opposing spindle poles to interact with protein super-structures called kinetochores that assemble on each sister chromatid. Most kinetochores establish erroneous attachments that are destabilized through a process called error correction. Failure to correct improper kinetochore-microtubule (kt-MT) interactions before anaphase onset results in chromosomal instability (CIN), which has been implicated in tumorigenesis and tumor adaptation. Thus, it is important to characterize the molecular basis of error correction to better comprehend how CIN occurs and how it can be modulated. An error correction assay has been previously developed in cultured mammalian cells in which incorrect kt-MT attachments are created through the induction of monopolar spindle assembly via chemical inhibition of kinesin-5. Error correction is then monitored following inhibitor wash out. Implementing the error correction assay in Drosophila melanogaster S2 cells would be valuable because kt-MT attachments are easily visualized and the cells are highly amenable to RNAi and high-throughput screening. However, Drosophila kinesin-5 (Klp61F) is unaffected by available small molecule inhibitors. To overcome this limitation, we have rendered S2 cells susceptible to kinesin-5 inhibitors by functionally replacing Klp61F with human kinesin-5 (Eg5). Eg5 expression rescued the assembly of monopolar spindles typically caused by Klp61F depletion. Eg5-mediated bipoles collapsed into monopoles due, in part, to kinesin-14 (Ncd) activity when treated with the kinesin-5 inhibitor S-trityl-L-cysteine (STLC). Furthermore, bipolar spindles reassembled and error correction was observed after STLC wash out. Importantly, error correction in Eg5-expressing S2 cells was dependent on the well-established error correction kinase Aurora B. This system provides a powerful new cell-based platform for studying error correction and CIN.

  6. Relationship between behavioural coping strategies and acceptance in patients with fibromyalgia syndrome: Elucidating targets of interventions

    PubMed Central

    2011-01-01

    Background Previous research has found that acceptance of pain is more successful than cognitive coping variables for predicting adjustment to pain. This research has a limitation because measures of cognitive coping rely on observations and reports of thoughts, or attempts to change thoughts, rather than on overt behaviours. The purpose of the present study, therefore, is to compare the influence of acceptance measures and the influence of different behavioural coping strategies on the adjustment to chronic pain. Methods A sample of 167 individuals diagnosed with fibromyalgia syndrome completed the Chronic Pain Coping Inventory (CPCI) and the Chronic Pain Acceptance Questionnaire (CPAQ). Results Correlational analyses indicated that the acceptance variables were more related to distress and functioning than were behavioural coping variables. The average magnitudes of the coefficients for activity engagement and pain willingness (both subscales of pain acceptance) across the measures of distress and functioning were r = 0.42 and 0.25, respectively, whereas the average magnitude of the correlation between coping and functioning was r = 0.17. Regression analyses examined the independent, relative contributions of coping and acceptance to adjustment indicators and demonstrated that acceptance accounted for more variance than did coping variables. The variance contributed by acceptance scores ranged from 4.0 to 40%. The variance contributed by the coping variables ranged from 0 to 9%. Conclusions This study extends the findings of previous work in enhancing the adoption of acceptance-based interventions for maintaining functioning in fibromyalgia patients. PMID:21714918

  7. The role of pain acceptance on function in individuals with disabilities: a longitudinal study.

    PubMed

    Jensen, Mark P; Smith, Amanda E; Alschuler, Kevin N; Gillanders, David T; Amtmann, Dagmar; Molton, Ivan R

    2016-01-01

    Having higher levels of pain acceptance has been shown to be associated positively with quality of life in patients with chronic pain, but its role in adjustment to chronic pain among individuals with physical disabilities living in the community is not known. Moreover, issues related to item overlap between measures of pain acceptance and measures of patient function have limited the conclusions that can be drawn from previous research in this area. To better understand the role that pain acceptance plays in patient function, we administered measures of pain acceptance, pain intensity, depressive symptoms, and function to 392 individuals with physical disabilities, and the pain, symptom, and function measures were readministered 3.5 years later. Analyses evaluated the main and interaction effects of initial pain acceptance on subsequent changes in pain and function. Having higher levels of pain acceptance-in particular as reflected by a willingness to engage in activities despite pain-resulted in less increase in pain intensity and more improvements in pain interference, physical function, depressive symptoms, and sleep quality. The findings indicate that previous research supporting the importance of pain acceptance to function in patients from health care settings extends to individuals with chronic pain living in the community. Moreover, they indicate that pain acceptance may have long-lasting (up to 3.5 years) beneficial effects on subsequent pain and function and on the association between change in pain and depression. Research to examine the potential benefits of community-based treatments that increase pain acceptance is warranted.

  8. Reducing Error in Mail Surveys. ERIC Digest.

    ERIC Educational Resources Information Center

    Cui, Weiwei

    This Digest describes four types of errors in mail surveys and summarizes the ways they can be reduced. Any one of these sources of error can make survey results unacceptable. Sampling error is examined through inferential statistics applied to sample survey results. In general, increasing sample size will decrease sampling error when simple…
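
    The snippet breaks off, but the relationship it gestures at is the standard one: for a simple random sample, the sampling error of a mean is sigma divided by the square root of n, so it shrinks with the square root of the sample size. A minimal sketch, assuming a hypothetical population standard deviation of 10:

        import math

        sigma = 10.0                       # assumed population standard deviation
        for n in [50, 200, 800, 3200]:     # quadrupling n halves the sampling error
            se = sigma / math.sqrt(n)
            print(f"n = {n:4d}   standard error of the mean = {se:.2f}")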

  9. Error Correction in Oral Classroom English Teaching

    ERIC Educational Resources Information Center

    Jing, Huang; Xiaodong, Hao; Yu, Liu

    2016-01-01

    As is known to all, errors are inevitable in the process of language learning for Chinese students. Should we ignore students' errors in learning English? As with other questions, different people hold different opinions. All teachers agree that errors students make in written English are unacceptable. For the errors students make in oral…

  10. 5 CFR 1601.34 - Error correction.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    Title 5 (Administrative Personnel), vol. 3, 2011-01-01 edition. Contribution Allocations and Interfund Transfer Requests, § 1601.34 Error correction: "Errors in processing... in the wrong investment fund, will be corrected in accordance with the error correction..."

  11. 5 CFR 1601.34 - Error correction.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    Title 5 (Administrative Personnel), vol. 3, 2010-01-01 edition. Contribution Allocations and Interfund Transfer Requests, § 1601.34 Error correction: "Errors in processing... in the wrong investment fund, will be corrected in accordance with the error correction..."

  12. GP-B error modeling and analysis

    NASA Technical Reports Server (NTRS)

    Hung, J. C.

    1982-01-01

    Individual source errors and their effects on the accuracy of the Gravity Probe B (GP-B) experiment were investigated. Emphasis was placed on: (1) the refinement of source error identification and classifications of error according to their physical nature; (2) error analysis for the GP-B data processing; and (3) measurement geometry for the experiment.

  13. Discretization vs. Rounding Error in Euler's Method

    ERIC Educational Resources Information Center

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
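
    The tradeoff is easy to reproduce. The sketch below (an illustration, not the article's own code) applies Euler's method to y' = y, y(0) = 1, using single precision so that rounding error becomes visible at modest step counts: the error at t = 1 typically falls as the stepsize shrinks (discretization error is O(h)) and then rises again as accumulated rounding error, roughly proportional to the number of steps, takes over.

        import numpy as np

        def euler_expm(n, dtype=np.float32):
            """Integrate y' = y from t = 0 to t = 1 with n Euler steps in the given precision."""
            h = dtype(1.0 / n)
            y = dtype(1.0)
            for _ in range(n):
                y = y + h * y      # each step adds O(machine epsilon) rounding error
            return float(y)

        exact = float(np.exp(1.0))
        for n in [10, 100, 1_000, 10_000, 100_000]:
            err = abs(euler_expm(n) - exact)
            print(f"steps = {n:6d}   |error at t = 1| = {err:.2e}")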

  14. The Sources of Error in Spanish Writing.

    ERIC Educational Resources Information Center

    Justicia, Fernando; Defior, Sylvia; Pelegrina, Santiago; Martos, Francisco J.

    1999-01-01

    Determines the pattern of errors in Spanish spelling. Analyzes and proposes a classification system for the errors made by children in the initial stages of the acquisition of spelling skills. Finds that the diverse forms of only 20 Spanish words produce 36% of the spelling errors in Spanish, and that substitution is the most frequent type of error. (RS)

  15. Internal Correction Of Errors In A DRAM

    NASA Technical Reports Server (NTRS)

    Zoutendyk, John A.; Watson, R. Kevin; Schwartz, Harvey R.; Nevill, Leland R.; Hasnain, Zille

    1989-01-01

    Error-correcting Hamming code built into circuit. A 256 K dynamic random-access memory (DRAM) circuit incorporates Hamming error-correcting code in its layout. Feature provides faster detection and correction of errors at less cost in amount of equipment, operating time, and software. On-chip error-correcting feature also makes new DRAM less susceptible to single-event upsets.
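
    As a hedged illustration of the principle (a generic Hamming(7,4) code in Python, not the 256 K DRAM's actual layout), the sketch below encodes four data bits with three parity bits, flips one bit to simulate a single-event upset, and uses the syndrome to locate and correct it:

        def hamming74_encode(d1, d2, d3, d4):
            """Return the 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
            p1 = d1 ^ d2 ^ d4     # parity over bit positions 1, 3, 5, 7
            p2 = d1 ^ d3 ^ d4     # parity over bit positions 2, 3, 6, 7
            p3 = d2 ^ d3 ^ d4     # parity over bit positions 4, 5, 6, 7
            return [p1, p2, d1, p3, d2, d3, d4]

        def hamming74_correct(c):
            """Fix at most one flipped bit in place, then return the 4 data bits."""
            s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
            s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
            s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
            syndrome = s1 + 2 * s2 + 4 * s3   # 0 = clean; else 1-based error position
            if syndrome:
                c[syndrome - 1] ^= 1
            return [c[2], c[4], c[5], c[6]]

        word = hamming74_encode(1, 0, 1, 1)
        word[4] ^= 1                           # simulate a single-event upset
        print(hamming74_correct(word))         # -> [1, 0, 1, 1]: data recovered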

  16. Error-Related Psychophysiology and Negative Affect

    ERIC Educational Resources Information Center

    Hajcak, G.; McDonald, N.; Simons, R.F.

    2004-01-01

    The error-related negativity (ERN/Ne) and error positivity (Pe) have been associated with error detection and response monitoring. More recently, heart rate (HR) and skin conductance (SC) have also been shown to be sensitive to the internal detection of errors. An enhanced ERN has consistently been observed in anxious subjects and there is some…

  17. A Posteriori Correction of Forecast and Observation Error Variances

    NASA Technical Reports Server (NTRS)

    Rukhovets, Leonid

    2005-01-01

    The proposed method of total observation and forecast error variance correction is based on the assumption that the "observed-minus-forecast" residuals (O-F) are normally distributed, where O is an observed value and F is usually a short-term model forecast. This assumption is acceptable for several types of observations (except humidity) that are not grossly in error. The degree of closeness to a normal distribution can be estimated by the skewness (lack of symmetry) $a_3 = \mu_3 / \sigma^3$ and the excess kurtosis $a_4 = \mu_4 / \sigma^4 - 3$, where $\mu_i$ is the $i$-th central moment and $\sigma$ is the standard deviation. It is well known that for a normal distribution $a_3 = a_4 = 0$.
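
    Both statistics are easy to compute from a sample of O-F residuals. A minimal sketch in Python, using synthetic residuals rather than the paper's data:

        import numpy as np

        def skewness_excess_kurtosis(x):
            """Return a3 = mu_3 / sigma^3 and a4 = mu_4 / sigma^4 - 3 for a sample."""
            x = np.asarray(x, dtype=float)
            d = x - x.mean()
            sigma = x.std()
            a3 = np.mean(d**3) / sigma**3
            a4 = np.mean(d**4) / sigma**4 - 3.0
            return a3, a4

        rng = np.random.default_rng(42)
        omf = rng.normal(loc=0.0, scale=1.5, size=10_000)   # synthetic O-F residuals
        a3, a4 = skewness_excess_kurtosis(omf)
        print(f"a3 = {a3:+.3f}, a4 = {a4:+.3f}")   # both near 0 for normal residuals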

  18. An investigation of error correcting techniques for OMV data

    NASA Technical Reports Server (NTRS)

    Ingels, Frank; Fryer, John

    1992-01-01

    Papers on the following topics are presented: considerations of testing the Orbital Maneuvering Vehicle (OMV) system with CLASS; OMV CLASS test results (first go around); equivalent system gain available from R-S encoding versus a desire to lower the power amplifier from 25 watts to 20 watts for OMV; command word acceptance/rejection rates for OMV; a memo concerning energy-to-noise ratio for the Viterbi-BSC Channel and the impact of Manchester coding loss; and an investigation of error correcting techniques for OMV and Advanced X-ray Astrophysics Facility (AXAF).

  19. Writing biomedical manuscripts part II: standard elements and common errors.

    PubMed

    Ohwovoriole, A E

    2011-01-01

    It is incumbent on, satisfying to, and rewarding for researchers to have their work published. Many workers are denied this satisfaction because of their inability to secure acceptance after what they consider good research. Several reasons account for the rejection or delay of manuscripts submitted to biomedical journals. Research that is poorly conceptualised and/or conducted will fail to fly, but poor writing up of the completed work accounts for the greater majority of manuscripts that are rejected. The chances of acceptance can be increased by paying attention to the standard elements and avoiding the common errors that lead to rejection. Cultivating the habit of structuring every section of the manuscript greatly improves the chances of acceptance. The final paper should follow the universally accepted pattern of aim, introduction, methods, results, and discussion. The sequence of putting the paper together differs from the order of the final form: start with the tables and figures for the results section, followed by the final version of the methods section; the title and abstract should be about the last to be written. You need to have the results sorted out early, as the rest of what you write is largely dictated by them. Revise the work several times and get co-authors and third parties to help read it over. To succeed, follow the universal rules of writing and those of the target journal, while avoiding those errors that are easily amenable to correction before you submit your manuscript.

  20. Error Analysis of Composite Shock Interaction Problems.

    SciTech Connect

    Lee, T.; Mu, Y.; Zhao, M.; Glimm, J.; Li, X.; Ye, K.

    2004-07-26

    We propose statistical models of uncertainty and error in numerical solutions. To represent errors efficiently in shock physics simulations we introduce a composition law, which allows us to estimate errors in the solutions of composite problems in terms of the errors from simpler ones, as discussed in a previous paper. In this paper, we conduct a detailed analysis of the errors. One of our goals is to understand the relative magnitude of the input uncertainty versus the errors created within the numerical solution. In more detail, we wish to understand the contribution of each wave interaction to the errors observed at the end of the simulation.
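
    The abstract does not spell out the composition law itself. Purely as a generic sketch of this kind of error book-keeping, and under an independence assumption that is mine rather than the paper's, per-interaction error contributions can be combined in quadrature with the input uncertainty:

        import math

        # Hypothetical error contributions (in some solution norm) attributed to
        # successive wave interactions in a composite shock problem.
        interaction_errors = [0.012, 0.004, 0.020, 0.007]
        input_uncertainty = 0.010   # uncertainty carried in from the initial data

        # Assuming independent contributions, combine them in quadrature.
        total = math.sqrt(input_uncertainty**2 + sum(e**2 for e in interaction_errors))
        print(f"estimated total error = {total:.4f}")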